\section{Introduction} Synchronization is usually studied in the context of diffusive coupling \cite{kim16, panteley17}, \emph{i.e.}, when the interaction between the oscillators is proportional to the difference in their states. Focusing on diffusive coupling has various limitations. First, because diffusive coupling is passive, oscillators must be intrinsic, that is, they must exhibit limit cycle oscillations in the absence of network interactions. Second, the only type of emergent network activity is a practically synchronous one, where, for large enough diffusive coupling strength, the oscillators converge to the same state modulo a synchronization error. Motivated by understanding the emergence of sustained in-phase oscillations in the suprachiasmatic nucleus (SCN), the mammalian master circadian clock \cite{arechiga04, evans16}, we introduce a model of slow-fast oscillators with synaptic-like coupling and study the emergence of in-phase oscillations in it. A fundamental experimental observation, reproduced in our model but impossible to reproduce in diffusively coupled models, is that many SCN clock neurons behave as sustained oscillators only in the presence of network interactions, whereas they behave as damped oscillators when isolated \cite{webb09}. SCN dynamics are therefore emergent, in the sense that the collective behavior (sustained oscillations) relies on network interactions and is distinctly different from the isolated node behavior (damped oscillations) \cite{noguchi17}.\footnote{Note that the notion of emergent dynamics used in~\cite{kim16, panteley17} is different from ours.} The intrinsic dynamics of our oscillators include a saturated fast cellular positive feedback loop and a linear slow negative feedback loop; they are a simplified version of excitable neural dynamics \cite{hh, fhn}. 
The interaction of the two loops leads to relaxation (slow-fast) neural-like oscillations for strong enough positive feedback through a Hopf bifurcation. The network synaptic-like couplings are approximated as saturated inputs to the receiving oscillator of the state of the sending oscillator. They provide network positive feedback, which cooperates with cellular positive feedback to ignite and shape emergent network oscillations. The contributions of our analysis are the following. First, we prove a general lemma on the spectral properties of a class of block-defined matrices sharing the structure of the Jacobian matrix of our model. Second, we show that diffusive coupling cannot induce synchronous oscillations in a network of damped oscillators. Third, we prove that under a strongly connected network topology synaptic coupling can lead to in-phase oscillations even when the uncoupled oscillators are damped. If the coupling is in-regular, in-phase oscillations become synchronous, \emph{i.e.}, all oscillators converge to the same state. In this work, we rely on (local) bifurcation analysis at the model equilibrium and show that the (dominant) Perron-Frobenius eigenvector of the network adjacency matrix fully determines the in-phase oscillation pattern. Our results are in line with existing ones on automata synchronization \cite{gusev16, zhong17} and, together with \cite{lee20b}, they stress the importance of considering non-diffusive coupling in synchronization studies. In future works, we will complement our local results with a global analysis, using, for instance, dominance analysis~\cite{forni2018differential}. Also, we only consider here homogeneous (identical) intrinsic dynamics. In future works we will relax this assumption as well by exploiting the fact that synaptic coupling naturally accommodates non-synchronous in-phase oscillations, such as those expected in heterogeneous populations. 
\section{Notation and definitions} $\mathds{N}$ denotes the set of positive natural numbers, and $\mathds{R}$ the set of real numbers. Throughout, $N\in\mathds{N}$ is a fixed positive integer. As usual, $\mathrm{Re}(z)=x$ denotes the real part of a complex number $z=x+iy\in\mathds{C}$. $\mathds{R}^N$ denotes the set of real $N$-tuples, and $\vo{x}\in\mathds R^N$ denotes an arbitrary $N$-tuple. Because of the specific models used, it will be convenient to denote $\mathds{R}^{2N}=\mathds{R}^N\times\mathds{R}^N$ and its elements as $(\vo{x},\vo{y})\in\mathds{R}^{2N}$. The zero and one vectors $\vo{0}_N\in\mathds{R}^N$ and $\vo{1}_N\in\mathds{R}^N$ denote the tuples whose entries are all equal to zero and one, respectively. Finally, a vector is said to be \emph{positive} if all its entries are strictly positive, denoted by $\vo{x}>0$. A {\it sigmoid} is a bounded, continuously differentiable function $S:\mathds{R}\to\mathds{R}$ such that $S(0)=0$, $S'(x)>0$ for all $x\in\mathds R$, $S'(0)=1$, and ${\rm argmax}_{x\in\mathds R}S'(x)=0$. \par The set $\mathscr{M}_{N\times N}$ contains all real $N\times N$ matrices, represented as $M=(M_{ij})$. $I_N$ denotes the identity matrix in dimension $N$, and $O_N$ denotes the zero matrix in dimension $N$. The determinant of a matrix $A\in\mathscr{M}_{N\times N}$ is denoted by $\abs{A}$, and its characteristic polynomial is denoted by $p(\lambda)=\abs{A-\lambda I_N}$. \begin{defn} A matrix $M\in\mathscr M_{N\times N}$ is called \emph{non-negative} if $M_{ij}\geqslant 0$ for all $i,j$. A matrix $M\in\mathscr M_{N\times N}$ is called \emph{Metzler} if $M_{ij}\geqslant 0$ for all $j\neq i$. A matrix is said to be \emph{simple} if all of its diagonal entries are equal to zero. 
\end{defn} A \emph{weighted digraph} $\mathscr{G}=(V,A)$ is a 2-tuple consisting of a set of vertices or nodes $V=\{ 1,\ldots, N\}$ and an adjacency matrix $A\in\mathscr{M}_{N\times N}$, with the convention that there exists a directed edge from vertex $j$ to vertex $i$ if and only if $A_{ij}\neq0$, in which case $A_{ij}$ is the weight of the edge. We will always assume that $A_{ii}=0$, \emph{i.e.}, there are no self-loops in the digraph, so that every adjacency matrix considered is \emph{simple} as defined above. Given a node $i\in V$ of a weighted digraph $\mathscr{G}$, its \emph{weighted in-degree} is denoted by $\partial_i^-:=\sum_{j}A_{ij}$. The \emph{in-degree matrix} $D^-$ of a weighted digraph $\mathscr{G}$ is the diagonal matrix defined by $D_{ii}^-=\partial^-_i$. The \emph{in-degree Laplacian matrix} of a weighted digraph $\mathscr{G}=(V,A)$, denoted by $L^-$, is defined by $L^-:=D^--A$. {Throughout this paper we do not require the graphs to be undirected, that is, we do not assume that $A$ is symmetric. Therefore, the eigenvalues $\mu_1,\ldots,\mu_N$ of the Laplacian matrix $L^-$ may be complex, and they all satisfy ${\rm Re}(\mu_i)\geq \mu_1=0$.} \begin{defn} A weighted digraph $\mathscr{G}=(V,A)$ is \emph{in-regular} if every node $i\in V$ has the same in-degree $d^-$. Under such a condition, $d^-$ is called the \emph{global in-degree} of the weighted digraph. $\mathscr{G}$ is \emph{strongly connected} if, for any two nodes $i$ and $j$, there exists a directed path which connects $i$ to $j$; in this case, its adjacency matrix is said to be \emph{irreducible}. 
\end{defn} \section{A network of slow-fast oscillators} We present a single, general model that includes diffusive and excitatory synaptic coupling between slow-fast damped or sustained oscillators \begin{equation}\label{eq:gral} \begin{split} \dot x_i=&-x_i-y_i+\!\sum_{j=1}^N A^d_{ij}(x_j\!-\!x_i)\!+\!S\left(\!\!\alpha_i x_i\!+\!\sum_{j=1}^N A^e_{ij}x_j\!\right),\\ \dot y_i=&\varepsilon(x_i-y_i), \end{split} \end{equation} for every $i\in V=\{1,\ldots,N\}$, where $\varepsilon\in(0,1)$ is the time constant of the slow variables $y_i$, $\alpha_i>0$ are cellular positive feedback gains, and $S$ is a sigmoid function modeling intrinsic and synaptic nonlinearities. $A^d$ is the diffusive coupling adjacency matrix, and $A^e$ is the excitatory coupling adjacency matrix. Clearly, $(\vo{x}_0,\vo{y}_0)=(\vo{0}_N,\vo{0}_N)$ constitutes an equilibrium point for system (\ref{eq:gral}). When $A^e=O_N$ the coupling between the oscillators is \emph{purely diffusive}, and when $A^d=O_N$ the coupling is \emph{purely excitatory}. The two matrices $A^d$ and $A^e$ define the diffusive $\mathcal G^d(V,A^d)$ and excitatory $\mathcal G^e(V,A^e)$ digraphs, respectively. \subsection{Transitions from damped to sustained oscillations ruled by a Hopf bifurcation}\label{subsec:two} We now show that the parameter $\alpha$ rules the transition from damped to sustained slow-fast oscillations for uncoupled oscillators. Consider model (\ref{eq:gral}) for $N=1$, which reduces to the single-oscillator model \begin{align*} \dot x&=-x-y+S(\alpha x),\\ \dot y&=\varepsilon(x-y). \end{align*} The Jacobian matrix evaluated at the equilibrium is readily computed as $J(0,0)=\begin{pmatrix} \alpha-1 & -1\\ \varepsilon & -\varepsilon \end{pmatrix}$, which leads to the pair of eigenvalues $\lambda_{1,2}=\tfrac{\alpha -(1+\varepsilon)}{2}\pm\tfrac{\sqrt{(\alpha +\varepsilon-1)^2-4\varepsilon}}{2}$. 
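The sign change of the real part $\tfrac{\alpha-(1+\varepsilon)}{2}$ of this eigenvalue pair at $\alpha=1+\varepsilon$ is easy to check numerically. The following Python/NumPy snippet is an illustrative sketch only (the value of $\varepsilon$ is chosen arbitrarily, and the paper's own simulations were generated with \texttt{Julia}); it evaluates the spectrum of $J(0,0)$ on both sides of the critical value:

```python
import numpy as np

eps = 0.01  # arbitrary slow time constant, eps in (0, 1)

def jacobian(alpha, eps):
    # Jacobian of the single oscillator at the origin, using S'(0) = 1
    return np.array([[alpha - 1.0, -1.0],
                     [eps,         -eps]])

alpha_c = 1.0 + eps  # value where the real part of the pair vanishes
re_below = np.linalg.eigvals(jacobian(alpha_c - 0.1, eps)).real.max()
re_above = np.linalg.eigvals(jacobian(alpha_c + 0.1, eps)).real.max()
print(re_below, re_above)  # approx -0.05 and +0.05
```

Below the critical value the pair sits in the open left half-plane (damped oscillations); above it, in the open right half-plane (sustained oscillations).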
For \begin{equation}\label{eq:alpha hopf} \alpha_H=1+\varepsilon>0 \end{equation} both eigenvalues are purely imaginary. Moreover, continuity of the discriminant $\Delta(\alpha)=(\alpha+\varepsilon-1)^2-4\varepsilon$ guarantees $\lambda_{1,2}\in\mathds{C}\backslash\mathds{R}$ for $\alpha$ sufficiently close to $\alpha_H$. Furthermore, observe that $$\dfrac{\partial\mathrm{Re}(\lambda_{1,2})}{\partial\alpha}(\alpha_H)=\dfrac{1}{2}\not=0.$$ Invoking~\cite[Theorem 3.5.2]{guckenheimer83}, we can conclude the existence of a simple Hopf bifurcation at $\alpha=\alpha_H$, at which the model transitions from damped ($\alpha<\alpha_H$) to sustained ($\alpha>\alpha_H$) oscillations. The global validity of this result can be proved via Lyapunov and Poincaré-Bendixson arguments \cite[Theorem 1.8.1]{guckenheimer83}. We do not include it here due to space limitations. \subsection{In-phase oscillations from network positive feedback between two coupled oscillators} Consider model (\ref{eq:gral}) in the low-dimensional case $N=2$, $\alpha_1=\alpha_2=0$, $A^d=O_2$, and $$A^e=\left( \begin{array}{cc} 0 & \beta_2 \\ \beta_1 & 0 \end{array}\right).$$ It is easy to show that the model undergoes a network Hopf bifurcation along the parametric curve $\sqrt{\beta_1\beta_2}=1+\varepsilon$, at which the oscillators start to oscillate in phase, as shown in Fig. \ref{fig:exci1}. Observe that the uncoupled oscillators are damped in this case. The positive feedback brought by network interactions has the double role of both igniting and synchronizing the emergent oscillations. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{excisub1.pdf} \caption{Model (\ref{eq:gral}) in the particular case $N=2$ with parameters as described in the text. Variables $x_1$ and $x_2$ (in red and blue, respectively) are seen to oscillate in phase when the excitatory parameters are taken near the curve $\beta_1\beta_2=(1+\varepsilon)^2$. 
Parameter values for these plots were $\varepsilon = 0.01$, $\beta_2 = 2.5$, and $\beta_1=\tfrac{(1+\varepsilon)^2}{\beta_2}+0.01$ (top) and $\beta_1=\tfrac{(1+\varepsilon)^2}{\beta_2}-0.01$ (bottom). Image generated using \texttt{Julia 1.5.2}.} \label{fig:exci1} \end{figure} We will show that the behavior observed in this low-dimensional example is impossible in general if the coupling is diffusive, whereas it generalizes to arbitrary strongly connected excitatory coupling topologies. \section{A useful lemma} In our analysis we will encounter several block-wise defined matrices of the form \begin{equation}\label{mat:J} J=\left(\begin{array}{c|c} -aI_N+c M & -I_N \\ \hline \varepsilon I_N+d M & -\varepsilon I_N \end{array}\right), \end{equation} where $M\in\mathscr{M}_{N\times N}$ is any real matrix, $\varepsilon>0$ is a (small) real constant, and $a,c,d\in\mathds R$. The following general lemma will turn out to be very useful. \begin{lem}\label{lem:block} Let $J\in\mathscr{M}_{2N\times 2N}$ be of the form (\ref{mat:J}). Then its characteristic polynomial $p(\lambda)$, for $\lambda\not=-\varepsilon$, is given by \begin{equation}\label{eq:lem1} (\varepsilon+\lambda)^N\abs{\left(a+\lambda+\tfrac{\varepsilon}{\varepsilon+\lambda}\right)I_N-\left(c-\tfrac{d}{\varepsilon+\lambda}\right)M}. \end{equation} Moreover, any eigenvector $(\vo{x},\vo{y})\in\mathds{R}^{2N}$ of $J$, corresponding to an eigenvalue $\lambda\in\mathds{C}\backslash\{-\varepsilon\}$, must satisfy \begin{equation}\label{eq:lem2} \begin{split} \vo{y}&=\tfrac{1}{\varepsilon+\lambda}(\varepsilon I_N+dM)\vo{x},\\ (c-\tfrac{d}{\varepsilon+\lambda})M\vo{x}&=(a+\lambda+\tfrac{\varepsilon}{\varepsilon+\lambda})\vo{x}. 
\end{split} \end{equation} \end{lem} \begin{proof} Obtaining the characteristic polynomial only requires applying the determinant formula for block-wise defined matrices \cite{powell11} to $$\abs{J-\lambda I_{2N}}=\abs{\begin{array}{c|c} -(a+\lambda)I_N+c M & -I_N \\ \hline \varepsilon I_N+d M & -(\varepsilon+\lambda) I_N\end{array}}.$$ This requires $-(\varepsilon+\lambda)I_N$ to be invertible, which imposes $\lambda\not=-\varepsilon$. The formula then calculates $p(\lambda)$ as $$\abs{-(\varepsilon+\lambda)I_N}\abs{-(a+\lambda)I_N+cM-\tfrac{1}{\varepsilon+\lambda}(\varepsilon I_N+dM)},$$ whence (\ref{eq:lem1}) follows. As for the eigenvector condition (\ref{eq:lem2}), let $\lambda\in\mathds{C}\backslash\{-\varepsilon\}$ be a root of $p(\lambda)$ as given above, and suppose $(\vo{x},\vo{y})\in\mathds{R}^{2N}$ satisfies $$\left(\begin{array}{c|c} -(a+\lambda)I_N+c M & -I_N \\ \hline \varepsilon I_N+d M & -(\varepsilon+\lambda) I_N\end{array}\right) \left(\begin{array}{c} \vo{x} \\ \hline \vo{y} \end{array}\right)=\left(\begin{array}{c} \vo{0}_N \\ \hline \vo{0}_N \end{array}\right).$$ This yields the linear system \begin{align*} (-(a+\lambda)I_N+c M)\vo{x}-\vo{y}&=\vo{0}_N,\\ (\varepsilon I_N+d M)\vo{x}-(\varepsilon+\lambda)\vo{y}&=\vo{0}_N. \end{align*} Since $\varepsilon+\lambda\not=0$, we may solve for $\vo{y}$ in the second equation as $\vo{y}=\tfrac{1}{\varepsilon+\lambda}(\varepsilon I_N+dM)\vo{x}$. Substituting this into the first equation gives $$(-(a+\lambda)I_N+c M)\vo{x}-\tfrac{1}{\varepsilon+\lambda}(\varepsilon I_N+dM)\vo{x}=\vo{0}_N,$$ from which the second eigenvector condition follows, concluding the proof. \end{proof} The relevance of this lemma lies in the fact that the matrix $M$ fully characterizes the spectral properties of the higher-dimensional matrix $J$. More precisely, equation (\ref{eq:lem1}) establishes a one-to-two correspondence between the eigenvalues of $M$ and those of $J$. 
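This correspondence also lends itself to a direct numerical sanity check. The following Python sketch is illustrative only: the matrix $M$ and the parameters $a$, $c$, $d$, $\varepsilon$ are chosen arbitrarily. It assembles a matrix of the form (\ref{mat:J}) and verifies that its spectrum coincides with the roots predicted from the eigenvalues of $M$ after clearing denominators in (\ref{eq:lem1}):

```python
import numpy as np

rng = np.random.default_rng(0)
N, eps, a, c, d = 5, 0.05, 0.7, 1.3, 0.2  # arbitrary test values
M = rng.standard_normal((N, N))
I = np.eye(N)

# Block matrix with the structure treated in the lemma
J = np.block([[-a * I + c * M,  -I],
              [eps * I + d * M, -eps * I]])

# Each eigenvalue mu of M contributes the two roots of
#   lam^2 + (a + eps - c*mu)*lam + (a*eps + eps - mu*(c*eps - d)) = 0,
# obtained by multiplying the factored characteristic equation by (eps + lam)
pred = np.concatenate([
    np.roots([1.0, a + eps - c * mu, a * eps + eps - mu * (c * eps - d)])
    for mu in np.linalg.eigvals(M)])
true = np.linalg.eigvals(J)

# Worst-case distance from each predicted root to the spectrum of J
mismatch = max(np.min(np.abs(true - p)) for p in pred)
print(mismatch < 1e-8)
```

The $2N$ predicted roots match the spectrum of $J$ to numerical precision, as the lemma asserts.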
Equation (\ref{eq:lem2}) establishes a similar correspondence between the eigenvectors of these two matrices. \section{Diffusive coupling cannot trigger sustained synchronous oscillations in networks of damped oscillators} In this section we show that global rhythms are not sustainable in networks of damped nodes that are coupled diffusively (sustained synchronous oscillations remain possible under diffusive coupling, but only if the individual feedback is high enough, that is, if every node is an intrinsic oscillator). \begin{theo}\label{thm:diffusive} Consider model (\ref{eq:gral}) with $A^e=O_N$ and $\alpha_i=\alpha<1$ for all $i\in V=\{1,\ldots,N\}$, and let $A^d\in\mathscr{M}_{N\times N}$ be an arbitrary non-negative weighted adjacency matrix. Then, for sufficiently small $\varepsilon>0$ the origin is locally exponentially stable. \end{theo} \begin{proof} Observe that $$\dfrac{\partial\dot{x}_i}{\partial x_i} = \alpha S'(0)-1-\sum_{j\not=i} A_{ij}^d=\alpha S'(0)-1-\partial_i^-.$$ Thus, the model Jacobian computed at the equilibrium is given by \begin{align*} J^d=J(\vo{0},\vo{0})&=\left(\begin{array}{c|c} (\alpha-1)I_N-D^-+A^d & -I_N\\ \hline \varepsilon I_N & -\varepsilon I_N \end{array}\right)\\ &=\left(\begin{array}{c|c} (\alpha -1)I_N-L^- & -I_N\\ \hline \varepsilon I_N & -\varepsilon I_N \end{array}\right), \end{align*} where $L^-$ is the in-degree Laplacian matrix associated to $\mathcal G^d(V,A^d)$. This matrix is exactly of the form treated in Lemma \ref{lem:block}, with $J=J^d$, $M=L^-$, and parameters $a=1-\alpha$, $c=-1$, $d=0$. The associated characteristic polynomial reads $$p(\lambda)=(\varepsilon+\lambda)^N\abs{L^--(\alpha -1-\lambda-\tfrac{\varepsilon}{\varepsilon+\lambda})I_N}.$$ Let $\mu_1,\ldots,\mu_N$ be the eigenvalues of $L^-$ and recall that ${\rm Re}(\mu_i)\geq \mu_1=0$ for all $i\in\{1,\ldots,N\}$. 
Then any eigenvalue $\lambda$ of $J^d$ satisfies $\alpha-1-\lambda-\tfrac{\varepsilon}{\varepsilon+\lambda}=\mu_k,$ which is equivalent to { \begin{equation}\label{eq:lambdamu} \lambda^2+(\mu_k+1+\varepsilon-\alpha )\lambda+\varepsilon(\mu_k+2-\alpha )=0. \end{equation} } Thus, each $L^-$-eigenvalue $\mu_k$ yields two $J^d$-eigenvalues $\lambda_{2k-1}=\lambda^-_k$ and $\lambda_{2k}=\lambda^+_k$, where { \begin{equation}\label{eq:valgral} \lambda^{\pm}_k=\dfrac{\alpha\!-\!1\!-\!\varepsilon\!-\!\mu_k\pm\sqrt{(\mu_k+1-\varepsilon-\alpha)^2-4\varepsilon}}{2} \end{equation} } {for $k\in\{1,\ldots,N\}$. Setting $\varepsilon=0$ in (\ref{eq:lambdamu}) yields $\lambda_k^-=0$ with multiplicity $N$, and $\lambda_{k}^+=\alpha-1-\mu_k$, which satisfies $\mathrm{Re}(\lambda_{k}^+)=\alpha-1-\mathrm{Re}(\mu_k)<0$. By continuity, the real parts of the eigenvalues $\lambda_{k}^+$ remain negative for sufficiently small $\varepsilon>0$. To guarantee a similar result for $\lambda_{k}^-$, we can split (\ref{eq:lambdamu}) into its real and imaginary parts. Letting $\lambda=\sigma+i\tau$ and $\mu=u+iv$, we get \begin{align*} &\sigma^2-\tau^2+\sigma(u+\varepsilon+1-\alpha)-v\tau+\varepsilon(2-\alpha+u)=0,\\ &2\sigma\tau+\sigma v+\tau(u+\varepsilon+1-\alpha)+\varepsilon v=0, \end{align*} which can be interpreted as the zero-level sets of two functions $F$ and $G$, respectively. By the Implicit Function Theorem, the variables $\sigma$ and $\tau$ can be locally expressed as functions of the remaining variables $\varepsilon$, $u$, $v$ whenever $$\dfrac{\partial(F,G)}{\partial(\sigma,\tau)}=(2\sigma+(u+\varepsilon+1-\alpha))^2+(2\tau+v)^2\not=0$$ is satisfied. The derivative $\tfrac{\partial \sigma}{\partial \varepsilon}$ is then readily obtained by implicit differentiation as $$\dfrac{(\alpha-\sigma-2-u)(2\sigma+1+\varepsilon+u-\alpha)-(2\tau+v)(\tau+v)}{(2\sigma+(u+\varepsilon+1-\alpha))^2+(2\tau+v)^2},$$ which is negative for $\varepsilon=\sigma=\tau=0$, given that $u=\mathrm{Re}(\mu)\geqslant0$ and $\alpha<1$. 
Recall that $\lambda^-_{k}=0$ is obtained as a root of (\ref{eq:lambdamu}) when $\varepsilon=0$. Thus, for positive and {sufficiently small} values of $\varepsilon$, every $J^d$-eigenvalue has negative real part, and the equilibrium at the origin is locally exponentially stable.} \end{proof} Theorem \ref{thm:diffusive} shows that diffusive coupling requires intrinsic oscillators in order to lead to synchronous network oscillations. One could provide a global proof by means of Lyapunov functions and convergent systems analysis \cite{pavlov05}. We omit this proof due to space constraints. \section{Network and cellular positive feedback cooperate in triggering synchronous oscillations in networks of slow-fast damped nodes} We now turn to the network positive feedback present in model (\ref{eq:gral}), for $A^d=O_N$ and non-negative $A^e$. Throughout this section we make the standing \emph{homogeneity} assumption $\alpha_i=\alpha$ for every $i\in\{1,\ldots,N\}$, \emph{i.e.}, we assume that the uncoupled oscillators are identical. We will relax this homogeneity assumption in future works. We also let $A^e=\beta A$, where $A$ is a simple matrix. The two parameters $\alpha>0$, $\beta>0$ govern cellular and network positive feedback, respectively. Then, the Jacobian of model (\ref{eq:gral}) evaluated at its equilibrium at the origin reads $$J^e=J(\vo{0}_N,\vo{0}_N)=\left(\begin{array}{c|c} -(1-\alpha)I_N+\beta A & -I_N \\ \hline \varepsilon I_N & -\varepsilon I_N \end{array}\right). $$ So we may apply Lemma \ref{lem:block} to $J=J^e$, with $M=\beta A$, $a=1-\alpha$, $c=1$, $d=0$, to arrive at the following result. \begin{lem}\label{lem:feed} Let $A\in\mathscr{M}_{N\times N}$ be a simple, non-negative matrix, and consider model (\ref{eq:gral}) with $A^d=O_N$, $A^e=\beta A$ and $\alpha_i=\alpha$, where $\alpha\geqslant0$ and $\beta>0$. Let $J^e$ be the Jacobian matrix of this system evaluated at the equilibrium $(\vo{0}_N,\vo{0}_N)$. 
Then any eigenvector $(\vo{x},\vo{y})$ of $J^e$, associated to an eigenvalue $\lambda$, must satisfy the conditions \begin{equation}\label{eq:vect} \vo{y}=\tfrac{\varepsilon}{\varepsilon+\lambda}\vo{x},\ \ \beta A\vo{x}=(1-\alpha+\lambda+\tfrac{\varepsilon}{\varepsilon+\lambda})\vo{x}. \end{equation} Moreover, letting $\mu_1,\ldots,\mu_N$ be the eigenvalues of the matrix $\beta A$, each $\mu_k$ yields two $J^e$-eigenvalues $\lambda_k^{\pm}$, where \begin{equation}\label{eq:val} \lambda^{\pm}_k\!=\!\dfrac{\mu_k\!+\!\alpha\!-\!1\!-\!\varepsilon\!\pm\sqrt{(\mu_k\!+\!\alpha\!-\!1\!-\!\varepsilon)^2\!-\!4\varepsilon(2\!-\!\alpha\!-\!\mu_k)}}{2}. \end{equation} \end{lem} \begin{proof} Apply Lemma \ref{lem:block} with $J=J^e$, $M=\beta A$, $a=1-\alpha$, $c=1$, $d=0$. Then equation (\ref{eq:vect}) holds, which implies that the eigenvalues of $\beta A$ and $J^e$ are linked by the expression $1-\alpha+\lambda+\tfrac{\varepsilon}{\varepsilon+\lambda}=\mu_k,$ which is equivalent to $\lambda^2+(1+\varepsilon-\alpha-\mu_k)\lambda+(2-\alpha-\mu_k)\varepsilon=0$. From here we obtain the two $J^e$-eigenvalues $\lambda^{\pm}_{k}$ given by expression (\ref{eq:val}), concluding the proof. \end{proof} The correspondence established at the end of Lemma \ref{lem:block} is now explicit: equation (\ref{eq:val}) determines the $J^e$-eigenvalues as functions of the $\beta A$-eigenvalues (and, therefore, of the $A$-eigenvalues). Thus, the spectral analysis of the matrix $J^e$, \emph{i.e.}, the local analysis of the purely excitatory system (\ref{eq:gral}) under the homogeneity hypothesis, reduces to the spectral analysis of the adjacency matrix $A$. \subsection{In-regular homogeneous network} We start by showing that if the coupling topology is in-regular, then model (\ref{eq:gral}) undergoes a Hopf bifurcation for strong enough cellular and network positive feedback. 
Furthermore, because $\vo{1}_N$ is the dominant eigenvector of the adjacency matrix, the Hopf bifurcation happens along the synchronization space where each oscillator has the same state. That is, the network Hopf bifurcation leads to synchronous network oscillations. \begin{theo}\label{theo:inreg} Let $A\in\mathscr{M}_{N\times N}$ be an irreducible, simple, non-negative matrix associated to a strongly connected in-regular digraph of global in-degree $d^->0$, and consider model (\ref{eq:gral}) with $A^d=O_N$, $A^e=\beta A$ and $\alpha_i=\alpha$, where $\alpha\in[0,1)$ and $\beta>0$. Then, for sufficiently small $\varepsilon>0$ the system undergoes a Hopf bifurcation along the parametric curve $\beta=\tfrac{1+\varepsilon-\alpha}{d^-}$. Moreover, the center manifold associated to the bifurcation is tangent to the synchronization subspace $$E = \{(r\vo{1}_N,(\varepsilon r+\sqrt{\varepsilon(1-\varepsilon)}s)\vo{1}_N)\in\mathds{R}^{2N}:\,(r,s)\in\mathds{R}^2 \}$$ and is locally exponentially stable. \end{theo} \begin{proof} Given $A^e=\beta A$, where $A$ is in-regular, we conclude that $\mu_1=\beta d^->0$ is an eigenvalue with corresponding eigenvector $\vo{x}_1=\vo{1}_N$. Applying formula (\ref{eq:val}) to this eigenvalue yields the two $J^e$-eigenvalues $\lambda^{\pm}_{1}$ given by $$ \dfrac{\beta d^-+\alpha-1-\varepsilon\pm\sqrt{(\beta d^-+\alpha-1-\varepsilon)^2-4\varepsilon(2-\alpha-\beta d^-)}}{2}.$$ Thus, for $\beta=\tfrac{1+\varepsilon-\alpha}{d^-}>0$, the eigenvalues $\lambda^{\pm}_{1}$ are purely imaginary complex conjugates while all other eigenvalues have negative real part. Indeed, the irreducibility of the matrix $A$ makes it possible to apply the Perron-Frobenius Theorem, which guarantees that the dominant eigenvalue $d^->0$ has algebraic and geometric multiplicity one. Transversality is also easily verified (we omit the details due to space constraints), which yields the Hopf bifurcation. 
Conditions (\ref{eq:vect}) give us the eigenvectors $\vo{z}_{1,2}=(\vo{1}_N,(\varepsilon\mp i\sqrt{\varepsilon(1-\varepsilon)})\vo{1}_N)$ associated to the eigenvalues $\lambda^{\pm}_{1}=\pm i\lambda$, $\lambda = \sqrt{\varepsilon(1-\varepsilon)}$. Now, identifying the real and imaginary parts $\vo{u}=(\vo{1}_N,\varepsilon\vo{1}_N)$, $\vo{v}=(\vo{0}_N,\sqrt{\varepsilon(1-\varepsilon)}\vo{1}_N)$ of the spanning vectors, it follows that $J^e\vo{u}\mp iJ^e\vo{v}=J^e(\vo{u}\mp i\vo{v})=\pm i\lambda(\vo{u}\mp i\vo{v})=\lambda\vo{v}\pm i\lambda\vo{u}.$ Since $J^e$, $\vo{u}$, $\vo{v}$ and $\lambda$ are real, this last equation implies $J^e\vo{u}=\lambda\vo{v}$ and $J^e\vo{v}=-\lambda\vo{u}$. Therefore the center manifold is tangent to the span of the vectors $\vo{u}$, $\vo{v}$, whence the form of the subspace $E$ is obtained. Because all other eigenvalues have negative real part, the associated center manifold is locally exponentially attractive. \end{proof} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{excifru1.pdf} \caption{Contrast of two different configurations for model (\ref{eq:gral}) under the in-regularity and homogeneity conditions described in Theorem \ref{theo:inreg}. Matrix $A$ is a weighted modification of an adjacency matrix corresponding to the Frucht graph \cite{frucht39}, $N=12$. Specific parameters are $\alpha=0.5$, $\varepsilon=0.01$ and $d^-=5.0$. Image generated using \texttt{Julia 1.5.2}.} \label{fig:excifru1} \end{figure} Theorem \ref{theo:inreg} shows that when $\beta>\frac{1+\varepsilon-\alpha}{d^-}$ the model exhibits synchronous oscillations. This condition can be fulfilled either by increasing the cellular positive feedback $\alpha$ for fixed network positive feedback $\beta$ or vice versa. Figure \ref{fig:excifru1} numerically illustrates the predictions of our theorem. \subsection{Strongly-connected homogeneous network} In-regular networks are too restrictive to accurately model biological networks like the SCN. 
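Before relaxing in-regularity, the prediction of Theorem~\ref{theo:inreg} can itself be illustrated with a minimal numerical sketch (parameters arbitrary; a directed cycle with unit weights serves as a small in-regular, strongly connected digraph with $d^-=1$, and Python/NumPy is used in place of the paper's \texttt{Julia} setup). At the critical coupling $\beta=\tfrac{1+\varepsilon-\alpha}{d^-}$, the matrix $J^e$ should exhibit exactly one purely imaginary eigenvalue pair $\pm i\sqrt{\varepsilon(1-\varepsilon)}$, with every other eigenvalue in the open left half-plane:

```python
import numpy as np

N, eps, alpha = 6, 0.01, 0.5  # arbitrary test values
# Directed cycle with unit weights: in-regular with global in-degree 1
A = np.roll(np.eye(N), 1, axis=0)
beta = 1.0 + eps - alpha  # critical coupling for d^- = 1

I = np.eye(N)
Je = np.block([[-(1.0 - alpha) * I + beta * A, -I],
               [eps * I,                       -eps * I]])
lam = np.linalg.eigvals(Je)

omega = np.sqrt(eps * (1.0 - eps))  # predicted bifurcation frequency
on_axis = lam[np.abs(lam.real) < 1e-9]  # the near-imaginary eigenvalues
print(len(on_axis), np.sort(on_axis.imag))  # 2 eigenvalues, imag parts +/- omega
```

Exactly two eigenvalues sit on the imaginary axis, at the predicted frequency, while the rest of the spectrum remains strictly stable.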
In this section we relax the in-regularity assumption. The irreducibility of the adjacency matrix of a strongly connected coupling topology implies the existence of a unique (up to scaling) Perron-Frobenius eigenvector, which fully determines the pattern of in-phase oscillations emerging at the network Hopf bifurcation. \begin{theo}\label{theo:str} Let $A\in\mathscr{M}_{N\times N}$ be an irreducible, simple, non-negative matrix associated to a strongly connected digraph, and consider model (\ref{eq:gral}) with $A^d=O_N$, $A^e=\beta A$ and $\alpha_i=\alpha$, where $\alpha\in[0,1)$ and $\beta>0$. Let $\rho>0$ be the leading eigenvalue of $A$, and $\vo{x}_0>0$ the Perron eigenvector associated to $\rho$. Then, for sufficiently small $\varepsilon>0$, the system undergoes a Hopf bifurcation along the parametric curve $\beta=\tfrac{1+\varepsilon-\alpha}{\rho}$. Moreover, the center manifold associated to the bifurcation is tangent to the real subspace $$E=\{(r\vo{x}_0,(r\varepsilon+s\sqrt{\varepsilon(1-\varepsilon)})\vo{x}_0)\in\mathds{R}^{2N}:\,(r,s)\in\mathds{R}^2\},$$ and is locally exponentially stable. \end{theo} \begin{proof} Given $A^e = \beta A$, where $A$ is irreducible and non-negative, we conclude that $\mu_1=\beta\rho>0$ is its leading real eigenvalue with corresponding eigenvector $\vo{x}_0>0$. Applying formula (\ref{eq:val}) to this eigenvalue yields the two $J^e$-eigenvalues $\lambda^{\pm}_{1}$, where $$\lambda^{\pm}_1=\dfrac{\beta\rho\!+\!\alpha\!-\!1\!-\!\varepsilon\!\pm\!\sqrt{(\beta\rho\!+\!\alpha\!-\!1\!-\!\varepsilon)^2\!-\!4\varepsilon(2\!-\!\alpha\!-\!\beta\rho)}}{2}.$$ Thus, for $\beta=\tfrac{1+\varepsilon-\alpha}{\rho}>0$, the eigenvalues $\lambda^{\pm}_{1}$ are purely imaginary complex conjugates while all other eigenvalues have negative real part. Indeed, the irreducibility of the matrix $A$ makes it possible to apply the Perron-Frobenius Theorem, which guarantees that the leading eigenvalue $\rho>0$ has algebraic and geometric multiplicity one. 
Transversality is once again easily verified, which yields the Hopf bifurcation. Setting $\lambda = \sqrt{\varepsilon(1-\varepsilon)}$, conditions (\ref{eq:vect}) give us the $J^e$-eigenvectors $\vo{z}_{1,2}=(\vo{x}_0,(\varepsilon\mp i\sqrt{\varepsilon(1-\varepsilon)})\vo{x}_0)$ associated to $\lambda_{1}^{\pm}=\pm i\lambda$ at the bifurcation. As in the previous theorem, writing $\vo{z}_{1,2}=\vo{u}_0\mp i\vo{v}_0$ in terms of real and imaginary parts, one again sees that $\vo{u}_0$ and $\vo{v}_0$ span the real tangent subspace to the center manifold, whence we conclude the form of the subspace $E$. Because all other eigenvalues have negative real part at the bifurcation, the associated center manifold is locally exponentially stable. \end{proof} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{excistr_perron1.pdf} \caption{Contrast of two different configurations for model (\ref{eq:gral}) under the homogeneity conditions described in Theorem \ref{theo:str}. Matrix $A$ corresponds to a weighted directed cycle, $N=25$, with random positive weights $(d_1,\ldots,d_N)$. The leading eigenvalue $\rho>0$ is the positive solution of $r^N=\prod d_i$. Specific parameters are $\alpha=0.5$ and $\varepsilon=0.01$. {In the upper plot, the dashed black line shows the evolution of $l(t)=\frac{\abs{\langle\vo{x}_0,\vo{x}(t)\rangle}}{\|\vo{x}_0\|\|\vo{x}(t)\|}$, where $\langle\cdot,\cdot\rangle$ and $\|\cdot\|$ denote the standard scalar product and 2-norm, respectively, and $\vo{x}_0$ is the Perron-Frobenius eigenvector of the adjacency matrix as defined in Theorem~\ref{theo:str}. Observe that $l(t)\to 1$ at almost all time points, \emph{i.e.}, excluding time points where $\vo{x}(t)=0$}. 
In other words, along the in-phase network oscillations, the $\vo{x}$-component of the state vector is parallel to the Perron eigenvector $\vo{x}_0$, as predicted by Theorem \ref{theo:str}.} \label{fig:excistr1} \end{figure} Theorem \ref{theo:str} shows that when $\beta > \tfrac{1+\varepsilon-\alpha}{\rho}$ the model exhibits in-phase oscillations. This condition can be fulfilled either by increasing the cellular positive feedback $\alpha$ for fixed network positive feedback $\beta$ or vice versa. Furthermore, it shows that the Perron-Frobenius eigenvector of the adjacency matrix fully controls the oscillation pattern, at least close to the Hopf bifurcation. Figure \ref{fig:excistr1} numerically illustrates the predictions of our theorem. \section{Discussion and future directions} \subsection{Model extension} As is the case for many physiological networks, the outputs from one node sometimes affect the receiving node on multiple timescales. To incorporate such an effect in our model one could consider an \emph{extended case} of the excitatory version, adjusting equations (\ref{eq:gral}) to account for the effect of the $\vo{x}$-variables on the $\vo{y}$-variables. Under homogeneity conditions, the computations for this extended case would be very similar to those made before, as seen by applying Lemma \ref{lem:block} with $d\not=0$. \subsection{Extension to global results} The analysis made in this paper relies solely on local properties of the dynamical systems near equilibrium. Therefore, a more complete and formal approach should also incorporate global tools before and after the bifurcation to guarantee convergence to either a stable steady state or a stable limit cycle. For example, in the diffusive case one could propose a Lyapunov function \cite{zhou11} or use the theory of convergent systems \cite{pavlov05}. 
Alternatively, one can use dominance analysis~\cite{forni2018differential}, through which it might be possible to show the existence of a globally attractive and invariant 2-dimensional manifold corresponding to the center manifold of the Hopf bifurcation, which would effectively make our local bifurcation analysis global. \subsection{Heterogeneous populations} Real-life networks will not, in general, satisfy the homogeneity hypotheses used extensively in the proofs for the excitatory case. Heterogeneous networks, much more likely to be found in real-life phenomena, will require a more delicate analysis through higher-dimensional bifurcation theory \cite{guckenheimer83}. \subsection{Application to circadian rhythmogenesis} The model in this work was originally motivated by the synchronization phenomena observed in the suprachiasmatic nucleus (SCN) of the mammalian hypothalamus. One may identify different subpopulations and connections inside the SCN (e.g., spatial \cite{welsh10}, GABAergic \cite{dewoskin15}, neuropeptidergic \cite{evans16, shan20}). Neuromodulation here not only affects the electrophysiological rhythms, but also gives input to the molecular clock through a much slower loop \cite{diekman13}. Other works have pointed out that neural appositions and the density of connections vary from one neuropeptidergic subpopulation to another \cite{mieda19, varadarajan18}. Therefore, a \emph{multilayer digraph} model may prove useful in capturing the dynamic properties of this circadian phenomenon. Additional topologies $A_j^e$ could be incorporated to account for the release of different neuropeptides (VIP, AVP, and GRP being the main ones). \bibliographystyle{ieeetr}
\section{Evaluation of the Charlier series} The first few complete Bell polynomials are: \begin{equation} \begin{aligned} & B_2(0,\omega_2) = \omega_2,\\ & B_3(0,\omega_2, \omega_3) = \omega_3,\\ & B_4(0,\omega_2,\omega_3, \omega_4) = 3 \omega_2^2 +\omega_4,\\ & B_5(0,\omega_2,\omega_3, \omega_4,\omega_5) = 10 \omega_3 \omega_2 + \omega_5,\\ & B_6(0,\omega_2,\omega_3, \omega_4,\omega_5, \omega_6) = 15 \omega_2^3 + 10 \omega_3^2 + 15 \omega_2 \omega_4 + \omega_6\\ \end{aligned} \end{equation} so that (Eq. \ref{Charlier}) \begin{equation}\label{fx} \begin{aligned} f(x) = &h(x) + \frac{\omega_2 h^{(2)}(x) }{2!} - \frac{\omega_3 h^{(3)}(x) }{3!}\\* & + \frac{(3 \omega_2^2 +\omega_4 )h^{(4)}(x) }{4!} + \frac{(10 \omega_3 \omega_2 + \omega_5)h^{(5)}(x) }{5!}\\ & +\frac{(15 \omega_2^3 + 10 \omega_3^2 + 15 \omega_2 \omega_4 + \omega_6)h^{(6)}(x) }{6!} + \dots, \end{aligned} \end{equation} where $h^{(n)}$ is the $n$th derivative of $h(x)$, or, in terms of numbers: \begin{equation}\label{fxnum} \begin{aligned} f(x) = &h(x) -0.138231 h^{(2)}(x) + 0.0248857 h^{(3)}(x)\\ &+ 0.00357113 h^{(4)}(x) + 0.00184635 h^{(5)}(x)\\ &+ 0.000250579 h^{(6)}(x) +\dots. \end{aligned} \end{equation} \section{Failure of the truncated Charlier series} In the derivation of Eq. \ref{exact}, $\beta(r)$ was expanded while $\zeta(r)$ was left intact. A different approach is to leave $\beta(r)$ intact and represent the zeta function by its series: $\zeta(r) = \sum_{k = 1}^\infty k^{-r}$. Summation of the cumulant series then gives: \begin{equation}\label{B1} \Phi(t) = \prod_{k = 1}^\infty{ \frac{\Gamma(3/4)^2 \Gamma(1/4- i t/4k)^2}{\Gamma(1/4)^2\Gamma(3/4- i t/4k)^2}} \end{equation} where a phase factor has been suppressed as it does not affect the final PDF. Using a summation theorem \cite{WW}, this quotient can be expressed as the double product: \begin{equation} \Phi(t) =\prod_{m= 1}^\infty \prod_{k = 1}^\infty \frac{(m + 1/4)^2 (m+ 3/4-i t/4k)^2}{(m+ 3/4)^2(m + 1/4- i t/4k)^2}.
\end{equation} For small $m, k$ this series can be Fourier transformed exactly to give, after rescaling the variable, approximations to $\phi(z)$; however, for any finite truncation there is a singular point on the PDF where it hits zero on the `steep' side. This may be traced back to the fact that the original `gamma one half' variables that compose the distribution are strictly positive~\cite{PRE}. Taking the thermodynamic limit removes this singularity and allows the compound variables $x$ or $z$ to take unrestricted positive and negative values, with the resulting PDFs analytic everywhere on the real line. The singularity is genuine in a finite system, but it is removed by the fast Fourier transform (Fig. 1) through a (rather arbitrary) discretisation. Without proving it in detail, it seems clear that the gamma functions (e.g. in Eqs. \ref{exact} and \ref{B1}) represent thermodynamic-limit summations, but any finite expansion of them restores the singularity. Hence the Charlier method, which expands all but the first term of Eq. \ref{exact}, fails when it meets the singularity. \newpage
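The complete Bell polynomials listed at the start of this section can be checked numerically via the standard recurrence $B_{n+1}(x_1,\dots,x_{n+1}) = \sum_{k=0}^{n}\binom{n}{k}\,B_{n-k}\,x_{k+1}$ with $B_0 = 1$. The short Python sketch below (with arbitrary illustrative numeric values for the $\omega_n$, not the fitted values of Eq.~\ref{fxnum}) reproduces the quoted closed forms for $B_4$, $B_5$, and $B_6$ at $x_1 = 0$:

```python
from math import comb

def complete_bell(x):
    """Complete Bell polynomials B_0..B_n evaluated at x = [x1, ..., xn],
    via the recurrence B_{n+1} = sum_{k=0}^{n} C(n,k) * B_{n-k} * x_{k+1}."""
    B = [1.0]  # B_0 = 1
    for n in range(len(x)):
        B.append(sum(comb(n, k) * B[n - k] * x[k] for k in range(n + 1)))
    return B

# Illustrative values omega_2..omega_6 = 2, 3, 5, 7, 11 with x1 = 0:
B = complete_bell([0.0, 2.0, 3.0, 5.0, 7.0, 11.0])
# B[4] equals 3*omega_2^2 + omega_4, B[5] equals 10*omega_3*omega_2 + omega_5, etc.
```

Evaluating the recurrence with symbolic $\omega_n$ gives back exactly the five expressions displayed above.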
\section{Introduction} In the late 1920s, C. V. Raman discovered that when a material is exposed to light, its molecules scatter a small fraction of the incident photons inelastically. This inelastic scattering results in lower-energy (Stokes) and higher-energy (anti-Stokes) photons~\cite{raman1928}. Shortly after, Pringsheim postulated that anti-Stokes fluorescence might be used to decrease the temperature of a material~\cite{pringsheim1929zwei}. It was not until the end of the 20\textsuperscript{th} century that optical cooling of solids was realized experimentally by Epstein and coworkers in ytterbium-doped fluoride glass~\cite{epstein1995observation}. Since this milestone achievement, systematic investigations have resulted in the observation of laser cooling in several families of rare-earth-doped crystals and glasses~\cite{seletskiy2010laser, seletskiy2016laser,nemova2010laser, hoyt2000observation}. To date, the coldest temperature achieved by solid-state optical refrigeration is 91\,K, in crystalline Yb:YLiF$_4$~\cite{melgaard2016solid}. For the first 24 years of laser cooling research activity, observations of optically cooling glasses were confined to non-silicates~\cite{seletskiy2016laser}. The paradigm has shifted recently with the success of cooling Yb-doped silica fibers and fiber preforms~\cite{mobini2019laser,mobini2020laser,Knall:2020,10.1117/12.2510889,PhysRevApplied.11.014066,8426483,10.1117/12.2545233,10.1117/12.2548506,Knall:20,Knall_20_comp, Peysokhan:2021}. The high degree of polymerization and strong Si--O bonds make vitreous silica superior to fluoride systems, such as the ZBLAN family, with respect to mechanical and chemical durability. These attributes make silicates a more desirable material for fiber laser applications.
In high-power fiber lasers, heat mitigation is required to maintain the integrity of the material and the beam profile~\cite{richardson2010review, brown2001,zenteno1993,ward2012,dawson2008, peysokhan2020characterization,peysokhan2019measuring}. Anti-Stokes fluorescence has been suggested as a viable method for heat mitigation in lasers~\cite{bowman1999, bowman2010, bowman2016}. Such a radiation-balanced fiber laser (RBL) experiences no increase in temperature, effectively radiating out the waste heat generated during operation. Although silica-based radiation-balanced devices have been reported this year in pioneering work~\cite{knall2021radiationbalanced, knallRBL}, those devices operate at orders of magnitude below the threshold of interest for adoption by industry. Our work here demonstrates that it is possible for silica optical fibers to reach a steady state of net cooling when exposed to pump powers of genuine interest to fiber laser practitioners. This suggests we are rapidly approaching the realization of a technologically desirable RBL. Here, we present, to the best of our knowledge, a new record in the cooling of Yb-doped silica in vacuum: more than 18\,K below ambient temperature. Further, we observe record cooling in air by more than 6\,K from ambient, which is two orders of magnitude greater than previously published cooling results for optical fibers in air. We achieve this by using pump powers in the range of 1\,W to 185\,W at 1035\,nm wavelength. The high-purity fibers (Table \ref{tab:material}) were drawn from preforms fabricated with the modified chemical vapor deposition technique. Cation (Yb, Al) doping of the core was carried out by the gas-phase doping technique~\cite{kuhn2019}, using Yb(thd)$_3$ and AlCl$_3$ as precursors. Relative to previously successful laser cooling compositions doped with Al and F~\cite{mobini2020laser}, the molar concentration of Yb$_2$O$_3$ was increased by 25\% for fiber A.
These glasses were developed for single-mode, high-power fiber laser applications, thus a controlled core-cladding refractive index step is essential. To achieve this, codoping with fluorine was used to decrease the refractive index of the material and assure single-mode operation in the drawn large-mode-area double-clad fiber geometry (e.g. 20/400 geometry). \begin{table} \centering \caption{\bf Material properties of Yb doped fibers} \begin{tabular}{ccc} \hline \hline Fiber & A & B \\ \hline Codopants & Al, F & Al, F \\ Yb$_2$O$_3$ (mol\%) & 0.15 & 0.12 \\ Yb$^{3+}$ density (10$^{25}$ atoms/m$^{3}$) & 6.56 & 5.26 \\ Al:Yb ratio & 6:1 & 8.3:1\\ NA$_{\rm core}$ & 0.06 & 0.05 \\ D$_{\rm core}$/D$_{\rm cladding}$ ($\mu$m/$\mu$m) & 900/1000 & 900/1000 \\ \hline \hline \end{tabular} \label{tab:material} \end{table} Fiber lasers using these types of glasses have been used to achieve continuous wave (CW) output powers of more than 4 kW from a single fiber, while maintaining good beam quality ~\cite{beier2017, beier2018}. Output powers like these can only be accomplished with, among other things, high-purity core materials with low background absorption. The background losses of these glasses were below 10\,dB\,km$^{-1}$ measured at a wavelength of 1200\,nm, which has been found to be acceptable for laser cooling silica ~\cite{mobini2020laser, Peysokhan:2021}. The fabricated preforms consisted of a >3\,mm diameter doped core and >14\,mm undoped cladding. Partial removal of the cladding has enabled greater cooling in the study of a previous preform ~\cite{Peysokhan:2021}. Here, in an effort to increase the cooling effect in these fibers, the undoped cladding was reduced from the preform and only a thin layer of passive cladding was left surrounding the active core material. Afterward, the preforms were drawn to fibers with an outer diameter of 1000\,$\mu$m and a doped core diameter of 900\,$\mu$m.
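The Yb$^{3+}$ number densities in Table \ref{tab:material} are consistent with the quoted Yb$_2$O$_3$ mole fractions under a simple estimate. The Python sketch below is illustrative only: it treats the host as pure SiO$_2$ with the fused-silica density (both simplifications, since the real glass also contains Al and F), and counts two Yb$^{3+}$ ions per Yb$_2$O$_3$ formula unit.

```python
N_A = 6.022e23      # Avogadro constant, mol^-1
RHO_SIO2 = 2.2      # g/cm^3, density of vitreous silica (simplifying assumption)
M_SIO2 = 60.08      # g/mol, molar mass of SiO2 (host treated as pure silica)

def yb_density(mol_frac_yb2o3):
    """Estimated Yb3+ number density (atoms/m^3) for a given Yb2O3 mole
    fraction; each Yb2O3 unit contributes two Yb3+ ions."""
    per_cm3 = 2.0 * mol_frac_yb2o3 * (RHO_SIO2 / M_SIO2) * N_A
    return per_cm3 * 1e6  # convert cm^-3 -> m^-3

# 0.15 mol% (fiber A) and 0.12 mol% (fiber B)
n_A, n_B = yb_density(0.0015), yb_density(0.0012)
```

Both estimates land within a few percent of the tabulated values (6.56 and 5.26 $\times 10^{25}$ atoms/m$^3$).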
For fiber A, the details of the cooling experiment are akin to those used in Ref.~\cite{Peysokhan:2021} and will only be briefly summarized here. Approximately 45\,mW of 1035\,nm light from a CW\,Ti:Sapphire laser is coupled through free space to a single mode fiber with an objective lens. A custom-built fiber amplifier increases the signal power, providing an output adjustable between 1\,W and 20\,W. The amplified signal serves as the input power to the Yb-doped silica fiber subject to cooling. The doped fiber is supported by thin fused silica fibers to minimize conductive heating. To minimize convective heating, the doped fiber and sample holder are placed in an evacuated chamber where, for these experiments, a pressure of about 5\,$\times$\,10$^{-5}$\,torr was achieved. The input is coupled to the enclosed fiber by a 10\,cm focal length lens. Perpendicular to the axis of the fiber are two windows for real-time observation. One window is thermally transparent KCl for recording the temporal behavior of the fiber with a thermal imaging camera (TIC). The other window is fused silica through which discrete measurements of the fluorescence are taken by collecting the emitted light with a multimode fiber connected to a silicon CCD line spectrometer (Ocean Optics). See Fig.~\ref{fig:exp} for a visualization. For fiber B, an independent set of measurements was obtained using an amplifier capable of reaching 185\,W output of 1033\,nm light. The measurements on fiber B were made in air, and data acquisition was carried out with a FLIR T540 thermal camera.
\begin{figure}[htbp] \centering \includegraphics[width=3.25 in]{setup-fig.png} \caption{Schematic of experimental set up with 1035 nm source (Ti:Sapphire), 20x objective lens (O), single mode fiber (SMF), custom-built fiber amplifier (Fiber Amp), two long pass filters (LP), focusing lens (FL), vacuum chamber (Vac Cube) containing the sample and thermally transparent windows for observation, thermal imaging camera (TIC), multimode fiber (MMF) connected to a spectrometer (SPEC), beam block (B), and computer for data collection (CPU). } \label{fig:exp} \end{figure} First, we discuss the cooling of fiber A. The thermal imaging camera was used to track the evolution of the temperature as was done in Refs.\,\cite{mobini2020laser}\,{\&}\,\cite{Peysokhan:2021}. The temperature difference is defined as $\Delta T=T-T_0$ where $T_0$ is taken to be the ambient temperature of 296\,K. To record cooling beyond $\Delta T=10\,{\rm K}$, differential luminescence thermometry (DLT) was employed~\cite{imangholi2006,Peysokhan:2021}. DLT exploits the temperature dependence of the luminescence spectral form (see Fig.\,\ref{fig:spec}), which is dictated by the density of states. \begin{figure}[htbp] \centering \includegraphics[width=3.25 in]{dlt-fluor.png} \caption{ Yb:Silica emission spectra changing with temperature for fiber A. The inset covers the range from 880\,nm to 960\,nm to highlight the domain relevant to determining the temperature of this fiber, indicated by the vertical dashed lines. The spectra S($\lambda$,T) are individually normalized to their intensity at $\lambda=978\,{\rm nm}$ and compared to the normalized spectrum at the start ($t=0$) of the experiment S($\lambda$,T$_0$) after exciting with 20\,W of coherent 1035\,nm light.} \label{fig:spec} \end{figure} In the DLT analysis, each spectrum is normalized to its maximum (at $\lambda=978\,{\rm nm}$) to eliminate the influence of input power fluctuations.
Each spectrum at time $t$ is then differenced with respect to the spectrum taken at the onset of the experiment, where the cooling is assumed to be negligible and thus the spectral density is representative of $S(\lambda,T_0=296\,{\rm K})$. The normalized difference spectrum is defined as \begin{align} \label{Eq:spectraldensity-1} \Delta S(\lambda,T,T_0) = \frac{S(\lambda,T)}{S_{max}(T)} - \frac{S(\lambda,T_0)}{S_{max}(T_0)}. \end{align} The change in temperature has been found to be linearly proportional to the integrated difference in spectral density given by \begin{align} \label{Eq:spectralintegral-2} S_{DLT}(T,T_0) = \int_{\lambda_1}^{\lambda_2} \lvert \Delta S(\lambda,T,T_0) \rvert d\lambda , \end{align} such that $\Delta {\rm T}= \alpha\, S_{DLT}$ with $\alpha=34.5\pm 0.4$\,K for fiber A. The temperature difference measured by the TIC is compared to the temperature difference by DLT in Fig.~\ref{fig:cool} for a 20\,W pump power incident on the fiber held under vacuum. In Fig.~\ref{fig:cool}, we see that, in the absence of convective heating contributions, both DLT (black line) and TIC (blue circles) measurements record nearly the same rate of cooling up to ${\sim}25$\,seconds of elapsed time. At this point, the thermal imaging camera becomes saturated. When the experiment was performed in air (red line), the maximum temperature difference achieved was less than 4\,K, and so the TIC was able to reliably track the cooling of the fiber with 20\,W of input power under atmospheric pressure. Despite the maximum $\Delta T$ being reduced by about a factor of 5 for the in-air measurement, the onset of cooling for the in-air and in-vacuum trials coincides for the initial ${\sim}$5\,seconds, $\Delta T {\approx} 1.3\,{\rm K}$, before the air measurement deviates. The dashed green line in Fig.~\ref{fig:cool} represents the application of the following exponential form to the data, \begin{align} \label{Eq:tempform-3} &\Delta T(t) = \Delta T_{max}(e^{-t/\tau_c}-1).
\end{align} \begin{figure}[t] \centering \includegraphics[width=3.25 in]{20W-air-and-vac.png} \caption{Comparison of temperature measurements of fiber A pumped with 20\,W of 1035\,nm light under both vacuum (TIC-blue and DLT-black) and atmospheric (TIC-red) pressure conditions. The green line corresponds to the application of Eq.~(\ref{Eq:tempform-3}) to the in-vacuum DLT data.} \label{fig:cool} \end{figure} $\Delta T_{max}$ is obtained from the experimental data over a continuous time interval of at least two minutes in duration during which steady-state behavior is exhibited. The time constant in the exponential of Eq.\,(\ref{Eq:tempform-3}) may be defined by \begin{align} \label{Eq:terms-4} \tau_c=\frac{\rho V c_v}{4\epsilon \sigma T_0^3 A }, \end{align} where $A$ is the area of the sample, $c_v=741\,{\rm J\cdot kg^{-1}\cdot K}^{-1}$ is the specific heat of fused silica, $\epsilon=0.85$ is the emissivity of the doped glass, $\rho=2.2\times 10^3\,{\rm kg\cdot m^{-3}}$ is the density of fused silica, $\sigma = 5.67\times 10^{-8}\,{\rm W\cdot m^{-2}\cdot K^{-4}}$ is the Stefan-Boltzmann constant, $T_0$ is the ambient temperature, and $V$ is the volume of the fiber. For the given fiber geometry, evaluation of Eq.\,(\ref{Eq:terms-4}) gives $\tau_c=81$\,s. This agrees well with the average experimental value $\tau_c = 84 {\pm} 3$\,s. Taking $\tau_c = 84 {\pm} 3$\,s and determining $\Delta T_{max}$ from the TIC or DLT data, Eq.\,(\ref{Eq:tempform-3}) was found to model the experimental data quite well. We next determine the absorbed power, $P_{abs}$, with the Beer-Lambert law \begin{align} \label{Eq:beerlambert-5} P_{abs} = P_{in} \textsf{T$_{tot}$} (1-e^{-\alpha_r l}). \end{align} $P_{in}$ is the pump power measured before the focusing lens, \textsf{T$_{tot}$} is the total transmission coefficient, $l$ is the length of the fiber, and $\alpha_r$ is the resonant absorption coefficient.
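The quoted value $\tau_c = 81$\,s follows directly from Eq.\,(\ref{Eq:terms-4}) once the fiber is treated as a long cylinder, for which $V/A = r/2$ when the end faces are neglected. The Python sketch below is an illustrative check using the constants stated in the text (the cylinder approximation is our assumption):

```python
import math

# Constants quoted in the text (SI units)
RHO = 2.2e3        # kg/m^3, density of fused silica
C_V = 741.0        # J/(kg K), specific heat of fused silica
EPS = 0.85         # emissivity of the doped glass
SIGMA = 5.67e-8    # W/(m^2 K^4), Stefan-Boltzmann constant
T0 = 296.0         # K, ambient temperature

def tau_c_cylinder(diameter):
    """Radiative time constant tau_c = rho*V*c_v / (4*eps*sigma*T0^3*A)
    for a long cylinder, where V/A = d/4 (end faces neglected)."""
    v_over_a = diameter / 4.0
    return RHO * C_V * v_over_a / (4.0 * EPS * SIGMA * T0**3)

def delta_T(t, dT_max, tau):
    """Exponential cooling model: Delta T(t) = Delta T_max (exp(-t/tau_c) - 1)."""
    return dT_max * (math.exp(-t / tau) - 1.0)

tau = tau_c_cylinder(1000e-6)  # 1000 um outer diameter -> roughly 81-82 s
```

Note that $\Delta T(0)=0$ and $\Delta T(t)\to-\Delta T_{max}$ at long times, matching the asymptotic plateau in Fig.~\ref{fig:cool}.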
\textsf{T$_{tot}$} is the product of the transmission of the focusing lens (\textsf{T$_{\rm l}$} = 0.998), the transmission of the chamber window (\textsf{T$_{\rm cw}$}=0.92), and the transmission into the glass fiber after accounting for Fresnel losses at the surface (\textsf{T$_{\rm g}$} = 0.96) such that \textsf{T$_{tot}$} = \textsf{T$_{\rm l}$}\textsf{T$_{\rm cw}$}\textsf{T$_{\rm g}$}. The resonant absorption coefficient was found to be $\alpha_r$($\lambda$=1035\,nm) = 1.92 $\pm$ 0.04 m$^{-1}$. The magnitude of the cooling of fiber A in vacuum was found to increase with increasing absorbed power. With the absorbed power now known for each trial, we next inspect the slope of the $\Delta T(t)$ curves at $t=0$ to find the cooling efficiency, $\eta_c$, of fiber A at 1035\,nm wavelength via \begin{align} \label{Eq:coolingeff} &\eta_c = \frac{- \rho V c_v}{P_{abs}} \partial_t \Delta T \rvert_{t=0}. \end{align} The TIC data were used to calculate $\eta_c$ for each trial, as $\Delta T(t)$ for small $t$ was below the saturation limit. We find the cooling efficiency of fiber A to be $\eta_c = 1.2 \pm 0.1 \%$ (Fig. \ref{fig:etac}). \begin{figure}[t] \centering \includegraphics[width=3.25 in]{etac-new.png} \caption{Calculated cooling efficiency of fiber A as a function of absorbed power, alongside the mean (solid blue line) and the standard deviation (blue dotted line).} \label{fig:etac} \end{figure} For fiber B, experiments were conducted under ambient pressure conditions. Cooling by 6.3\,K from room temperature was observed for 185\,W of a 1033\,nm pump (Fig.\,\ref{fig:fiberB}). In Fig.~\ref{fig:alldata}, we plot the cooling as a function of pump power from all experimental trials. This illustrates well that we may reasonably expect cooling by more than 20\,K below room temperature in vacuum with larger pump powers.
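The Beer-Lambert absorbed power and the cooling efficiency extraction can be sketched as follows. The transmissions and $\alpha_r$ come from the text; the fiber length and the initial slope used in the example call are hypothetical placeholders, since neither is quoted here:

```python
import math

# Transmission factors from the text: lens, chamber window, Fresnel at the glass
T_L, T_CW, T_G = 0.998, 0.92, 0.96
ALPHA_R = 1.92          # m^-1, resonant absorption coefficient at 1035 nm

def absorbed_power(p_in, length):
    """Beer-Lambert law: P_abs = P_in * T_tot * (1 - exp(-alpha_r * l))."""
    t_tot = T_L * T_CW * T_G
    return p_in * t_tot * (1.0 - math.exp(-ALPHA_R * length))

def cooling_efficiency(p_abs, dT_dt0, volume, rho=2.2e3, c_v=741.0):
    """eta_c = -(rho * V * c_v / P_abs) * d(DeltaT)/dt at t = 0;
    a negative initial slope (cooling) yields a positive efficiency."""
    return -rho * volume * c_v * dT_dt0 / p_abs

# Hypothetical 20 W pump on a 0.2 m fiber (length is NOT from the text):
p_abs = absorbed_power(20.0, 0.2)
eta = cooling_efficiency(p_abs, dT_dt0=-1.0e-3, volume=1.0e-7)
```

The sign convention is worth noting: since the fiber cools, $\partial_t \Delta T|_{t=0} < 0$ and Eq.\,(\ref{Eq:coolingeff}) returns a positive $\eta_c$.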
The measurements on fiber A and fiber B were made at separate locations, and so it was not possible to use the vacuum cube configuration and the 185\,W amplifier simultaneously. This gap will be bridged in our future work. Considering Figs.~\ref{fig:cool} and~\ref{fig:alldata}, we anticipate that in-vacuum cooling of the current fibers with higher pump powers will reach about 30\,K below room temperature. In summary, for the first time to the best of our knowledge, silica has been optically cooled in vacuum by more than 18\,K below room temperature. Compared to our previous work, increasing the Yb$^{3+}$ concentration and significantly reducing the thermal load of the passive cladding increased the cooling achieved by a factor of three. These results suggest that these fibers may serve as a platform for a desirable radiation-balanced laser. \begin{figure}[t] \centering \includegraphics[width=3.25 in]{jena-180w.png} \caption{Temporal cooling behavior of fiber B illuminated with 185\,W of 1033\,nm light under ambient pressure conditions. The dotted horizontal line is positioned at -6.3\,K to aid the eye.} \label{fig:fiberB} \end{figure} \begin{figure}[t] \centering \includegraphics[width=3.25 in]{ALLfive.png} \caption{Cooling for different pump powers for fiber A in-vacuum (black squares), fiber A in-air (open black star), and fiber B in-air (blue circles).} \label{fig:alldata} \end{figure} \smallskip \smallskip This material is based upon work supported by the Air Force Office of Scientific Research under award number FA9550-16-1-0362 titled Multidisciplinary Approaches to radiation-balanced Lasers (MARBLE).\\
\section{Introduction} \label{sec:intro} Redbacks are eclipsing binary systems composed of a neutron star (NS) and a non-degenerate, low-mass companion ($0.1 < M_{2}/M_{\odot}< 0.7$) with an orbital period between 0.1 and 1 day. According to the standard scenario (see, for example, \citealt{2002ApJ...565.1107P}), these systems are a result of the evolution of interacting binaries in which the stars exchange mass. In systems that may be redback progenitors, the donor is a solar-like star that evolves as if it were isolated until it eventually fills its Roche Lobe, allowing matter and angular momentum to flow towards the NS. The standard model predicts a long and stable episode of mass transfer and, sometimes, a small number of Roche Lobe Overflows (RLOFs) due to thermonuclear flashes of the donor. Adding irradiation feedback to this model is relevant when studying systems with short orbital periods, such as redbacks \citep{2011Sci...333.1717B}. Irradiation feedback occurs when the donor star transfers mass onto the NS. The transferred matter produces X-ray radiation that illuminates the donor star; if this star has an outer convective zone, the irradiated surface is partially inhibited from releasing the energy emerging from its deep interior. In some cases, the star's structure is unable to sustain the RLOF and the donor detaches. Then, nuclear evolution may lead the donor star to experience a RLOF again, and the X-ray radiation from the matter that falls onto the NS reappears. As a consequence of irradiation feedback, systems suffer several pulses of mass transfer (see, for example, \citealt{1993A&A...277...81H}; \citealt{2004A&A...423..281B}; \citealt{2012ApJ...753L..33B}). The number of mass transfer pulses depends on a free parameter of the model, called $\alpha_{irrad}$, which represents the fraction of the incident flux that effectively irradiates the donor star. Larger values of this parameter result in a smaller number of RLOFs.
Irradiated systems can be detected as X-ray sources during RLOF states, or as binary millisecond pulsars once the mass transfer ceases. Moreover, in the standard scenario, when studying systems that descend from interacting binaries, it is usually assumed that the systems are tidally locked: they rotate synchronously, have a circular orbit, and the rotational axes of both stars are aligned with the orbital rotation axis. This implies that the orbital period of the binary equals the rotational period of the donor star. These assumptions may be useful for studying the general properties of the systems, but they may prove inadequate for investigating changes in the orbital period. In the frame of the model presented above, \citet{2015ApJ...798...44B} found a plausible progenitor to the redback system PSR~J1723-2837 that accounts for its main characteristics (orbital period, mass, mass ratio, and temperature of the donor star), given by \citet{2004MNRAS.355..147F} and \citet{2013ApJ...776...20C}. This progenitor is in a low-irradiation regime with $\alpha_{irrad}=~0.01$. Systems with greater values of $\alpha_{irrad}$ are not plausible progenitors since, at the time they reach the observed mass ratio ($M_{NS} / M_2 = 3.3\pm0.5$), they are transferring mass, so they would be observed as low-mass X-ray binaries (LMXBs); see Sec.~4 in \cite{2015ApJ...798...44B}. Remarkably, the temporal derivative of the orbital period predicted by the model is three orders of magnitude smaller than the observed value, $\dot{P}_{orb}=~-3.50(12) \times 10^{-9} ss^{-1}$ \citep{2013ApJ...776...20C}. This observed value is large in comparison with the values of $\dot{P}_{orb}$ measured in other close binary systems (CBS), as can be seen in Table~\ref{tab:Ppunto}. We consider that such a large disparity between observations and theoretical predictions may indicate the need to improve the models.
It is the aim of this paper to explore the physical processes that can lead the PSR~J1723-2837 system to have such a large negative value of the temporal derivative of the orbital period $\dot{P}_{orb}$. The disparity between observations and theory for this redback motivated us to study tidal interactions in CBSs. In systems like redbacks, the compact object introduces a differential force that raises tides on the surface of the donor. If there were no dissipation of kinetic energy into heat, the star would elongate exactly along the line joining the centres of mass of the two stars. Instead, that dissipation induces a phase lag in the tidal bulge, so the misaligned bulge exerts a torque on the star. This torque leads to an exchange of angular momentum between the stellar spin and the orbit, while conserving the total angular momentum and diminishing the orbital and rotational energy. As a consequence, the orbital parameters change: stellar rotation tends to synchronise with the orbital motion, the orbit tends to circularise, and the equatorial plane approaches the orbital plane (\citealt{1981A&A....99..126H,2008EAS....29...67Z, 2014MNRAS.444..542R}). During the synchronisation process, the orbital period changes, allowing the occurrence of large values of $\dot{P}_{orb}$. Here we study the effect of tidal forces acting on the donor star in the redback system PSR~J1723-2837. We consider the models presented in \citet{2015ApJ...798...44B}, in which a progenitor for this redback was found. In that paper, it was assumed that the system is always synchronised. Here we shall relax this hypothesis. As the derived observed radius of the donor star indicates that it is close to filling its Roche Lobe, we explore the effects of tides between two consecutive mass transfer episodes, i.e., when the system is in a quasi-RLOF state. We work under the weak friction approximation in the equilibrium tide.
Equilibrium means that the star is assumed to be in hydrostatic equilibrium and that, if there were no dissipation mechanisms, it would instantaneously adjust to the perturbing force exerted by its companion, i.e., the NS. The weak friction model supposes that the tidal lag angle, produced by the phase lag in the tidal bulge, is proportional to the difference between the orbital angular velocity and the rotational velocity of the star \citep{2008EAS....29...67Z}. Recently, \citet{2016ApJ...833L..12V} reported observations of PSR~J1723-2837 highly relevant to this study. They stated that this system is not synchronised, and inferred a ratio $P_{spin} / P_{orb} = 0.9974(7)$, where $P_{spin}$ is the rotation period of the companion and $P_{orb}$ is the orbital period of the binary. At present there are freely available observations of this object, performed by the spacecraft Kepler second mission K2 \citep{2014PASP..126..398H}. Below, we shall present our analysis, searching for the periods present in these data. \begin{table} \begin{threeparttable}[b] \caption{Change in the orbital period of different pulsars. Numbers in parentheses are uncertainties in the last digits quoted.} \label{tab:Ppunto} \begin{tabular}{lr} \hline \hline Pulsar Name & $\dot{P}_{orb}$ [$ss^{-1}$] \\ \hline $PSR~J1723-2837$\tnote{1} & $-3.50(12) \times 10^{-9}$\\ $2A~1822-371$\tnote{2} & $1.51(7)\times 10^{-10}$\\ $SAX~J17448.9-2021$\tnote{3} & $1.1(3)\times 10^{-10}$ \\ $PSR~1957+20$\tnote{4} & $-3.9(9)\times 10^{-11}$ \\ $PSR~J1023+0038$ \tnote{5} & $-7.32(6)\times10^{-11}$\\ $PSR J0740+6620$ \tnote{6} & $1.2(2)\times10^{-12}$\\ \hline \hline \end{tabular} \begin{tablenotes} \item[1] \citet{2013ApJ...776...20C} \item[2] \citet{2019A&A...625L..12M} \item[3] \citet{2016MNRAS.459.1340S} \item[4] \citet{1991ApJ...380..557R} \item[5] \citet{2013arXiv1311.5161A} \item[6] \citet{2021arXiv210400880F} \end{tablenotes} \end{threeparttable} \end{table} The remainder of this paper is organised as follows.
In Sec.~\ref{Sec:EqsOrbEvol} we describe the tidal equations we have considered. In Sec.~\ref{sec:Results} we present a detailed analysis of our tidal model and show our results. In Sec.~\ref{sec:Asyncronew} we analyse the relevant available observations of PSR~J1723-2837 and confront them with our theoretical results. Finally, in Sec.~\ref{sec:conclu}, we summarise the main findings of this paper and elaborate on our conclusions. \section{The Equations of Orbital Evolution} \label{Sec:EqsOrbEvol} The previous calculations presented in \citet{2015ApJ...798...44B} show that the donor star in the redback system PSR~J1723-2837 is a cool object with a deep outer convective envelope. For stars with these characteristics, turbulent convection is the dominant dissipation source, and it acts on the equilibrium tide \citep{2008EAS....29...67Z}. Therefore, the description of the tides we shall consider is that given by \citet{2014MNRAS.444..542R}, which is a generalisation of that given by \citet{1981A&A....99..126H}. In addition, we add the terms corresponding to gravitational wave radiation given in \citet{2002MNRAS.329..897H}.
Tides are described by a system of four ordinary, non-linear differential equations describing the evolution of the mean angular velocity $\Omega$ and eccentricity $e$ of the orbit together with the angular rotation of the companion $\omega$, and the inclination of its axis $i$ with respect to the orbital plane: \begin{eqnarray} \frac{d\Omega}{dt}= 9 \bigg(\frac{K}{T}\bigg) q(1+q) \bigg(\frac{R_{2}}{a}\bigg)^8 \frac{\Omega}{(1-e^2)^{15/2}} \nonumber \\ \bigg[f_{1}(e^{2}) - (1-e^2)^{3/2} f_{2}(e^{2}) \frac{\omega}{\Omega} \cos{i} \bigg] + \bigg(\frac{d\Omega}{dt}\bigg)_{GWR}, \label{eq:Omega_orbita} \end{eqnarray} \begin{eqnarray} \frac{de}{dt}= -27 \bigg( \frac{K}{T} \bigg) q(1+q) \bigg( \frac{R_{2}}{a} \bigg)^8 \frac{e}{(1-e^2)^{13/2}} \nonumber \\ \bigg[f_{3}(e^{2}) - \frac{11}{18} (1-e^2)^{3/2} f_{4}(e^{2}) \frac{\omega}{\Omega} \cos{i} \bigg] + \bigg(\frac{de}{dt}\bigg)_{GWR}, \label{eq:excentricidad} \end{eqnarray} \begin{eqnarray} \frac{d\omega}{dt}= 3 \bigg( \frac{K}{T} \bigg) \frac{q^2}{k^2} \bigg( \frac{R_{2}}{a} \bigg)^6 \frac{\Omega}{(1-e^2)^{6}} \bigg[f_{2}(e^{2}) \cos{i} - \nonumber \\ \frac{1}{4} \frac{\omega}{\Omega} (1-e^2)^{3/2} (3+\cos{2i}) f_{5}(e^{2}) \bigg] + \bigg(\frac{d\omega}{dt}\bigg)_{MB}, \label{eq:rotacion1} \end{eqnarray} \begin{eqnarray} \frac{di}{dt}= -3 \bigg( \frac{K}{T} \bigg) \frac{q^2}{k^2} \bigg( \frac{R_{2}}{a} \bigg)^6 \frac{\Omega}{\omega} \frac{\sin{i}}{(1-e^2)^{6}} \bigg[f_{2}(e^{2}) - \frac{f_{5}(e^{2})}{2} \nonumber \\ \bigg(\frac{\omega}{\Omega} (1-e^2)^{3/2} \cos{i} + \frac{R_{2}^{2}}{G M_{2}} a \omega^{2} k^{2} (1-e^2) \bigg) \bigg]. 
\label{eq:inclinacion1} \end{eqnarray} There, the gravitational wave contributions \citep{2002MNRAS.329..897H} are \begin{eqnarray} \frac{1}{\Omega} \bigg(\frac{d\Omega}{dt}\bigg)_{GWR}= -8.315 \times 10^{-10} \; \frac{M_{1} M_{2} M_{b} }{a^{4}} \frac{1+\frac{7}{8} e^{2}}{(1-e^{2})^{5/2}} \; yr^{-1}, \label{eq:Omega_orbitaGWR} \end{eqnarray} \begin{eqnarray} \frac{1}{e} \bigg(\frac{de}{dt}\bigg)_{GWR}= - 8.315 \times 10^{-10} \; \frac{M_{1} M_{2} M_{b} }{a^{4}} \frac{1+\frac{121}{96} e^{2}}{(1-e^{2})^{5/2}} \; yr^{-1}, \label{eq:excentricidadGWR} \end{eqnarray} \noindent where $M_{1}$ and $M_{2}$ are the masses of the pulsar and the companion respectively; $M_{b}= M_{1}+M_{2}$, $q= M_{1}/M_{2}$, $k$ is the radius of gyration of the companion (the donor), which describes its moment of inertia $I$ as $I= k^{2} M_{2} R_{2}^{2}$. On the other hand, $\big(K/T\big)$ is the tidal timescale, which strongly depends on the structure of the star. The functions $f_{i}(e^{2}), i= 1, \cdots, 5$ are polynomials of the square of the eccentricity, given in \citet{1981A&A....99..126H}; for a circular orbit $f_{i}(e^2=0)= 1$. For the magnetic braking we consider \citep{2014MNRAS.444..542R} \begin{eqnarray} \bigg(\frac{d\omega}{dt}\bigg)_{MB}= -\gamma_{MB}\ R^{2}_{2}\ \omega^{3}, \label{eq:rotacionMB} \end{eqnarray} \noindent where $\gamma_{MB}= 5 \times 10^{-29} s\; cm^{-2}$. As stated above, the companion of PSR~J1723-2837 is a low mass star with a very extended outer convective zone. In this case \citep{2002MNRAS.329..897H}, \begin{equation} \bigg( \frac{K}{T} \bigg)= \frac{2}{21} \frac{F_{conv}}{\tau_{conv}} \frac{M_{env}}{M_{2}} yr^{-1}, \end{equation} \noindent where $M_{env}$ is the mass in the convective envelope, $F_{conv}$ is the fraction of the convective cells which contribute to the damping, and $\tau_{conv}$ is the eddy turnover time-scale. The rate of change of the orbital period is computed as $\dot{P}_{orb}= - \big(2\pi/\Omega^2\big)\; d\Omega/dt$. 
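In the circular, coplanar limit ($e=0$, $i=0$, so every $f_i = 1$ and the gravitational-wave and magnetic-braking terms are set aside), Eqs.~(\ref{eq:Omega_orbita}) and (\ref{eq:rotacion1}) collapse to rates proportional to the desynchronization $\Omega - \omega$. The Python sketch below is an illustrative reduction, not the full system we integrate:

```python
import math

def tidal_rates_circular(K_T, q, k2, r_over_a, omega, Omega):
    """Circular, coplanar limit of the tidal equations (all f_i = 1,
    no GWR/MB terms): both rates are proportional to (Omega - omega).
    k2 is the square of the radius of gyration."""
    dOmega = 9.0 * K_T * q * (1.0 + q) * r_over_a**8 * (Omega - omega)
    domega = 3.0 * K_T * (q**2 / k2) * r_over_a**6 * (Omega - omega)
    return dOmega, domega

def p_orb_dot(Omega, dOmega_dt):
    """Rate of change of the orbital period: P_dot = -(2 pi / Omega^2) dOmega/dt."""
    return -2.0 * math.pi / Omega**2 * dOmega_dt

# Synchronism (omega = Omega) is a fixed point of this reduced system.
d1_sync, d2_sync = tidal_rates_circular(0.1, 3.0, 0.1, 0.05, 1.0, 1.0)
# A sub-synchronous donor (omega < Omega) spins up while the orbit tightens:
d1_sub, d2_sub = tidal_rates_circular(0.1, 3.0, 0.1, 0.05, 0.9, 1.0)
```

The sign structure is the physically relevant point: a sub-synchronously rotating donor drains angular momentum from the orbit, so $\Omega$ grows and $\dot{P}_{orb} < 0$, the sign observed for PSR~J1723-2837.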
There are different proposals for expressing the coefficient $F_{conv}$, which represents the main uncertainty in the theory of tides applied to stars with convective envelopes. Here we shall analyse three different expressions for this law: \begin{equation} F_{conv}= {\rm min} \bigg[ 1, \bigg( \frac{P_{tid}}{2\tau_{conv}} \bigg) \bigg], \label{f_lineal} \end{equation} \begin{equation} F_{conv}= {\rm min} \bigg[ 1, \bigg( \frac{P_{tid}}{2\tau_{conv}} \bigg)^2 \bigg], \label{f_cuad} \end{equation} \begin{equation} F_{conv}= 50 \, {\rm min} \bigg[ 1, \bigg( \frac{P_{tid}}{2\tau_{conv}} \bigg)^2 \bigg], \label{f_cuad50} \end{equation} \noindent where $P_{tid}$ is the tidal forcing period \begin{equation} \frac{1}{P_{tid}}= \bigg| \frac{1}{P_{orb}} - \frac{1}{P_{spin}} \bigg|. \label{p_tid} \end{equation} \noindent Eq.~(\ref{f_lineal}) was suggested by \citet{1966AnAp...29..489Z}, whereas Eq.~(\ref{f_cuad}) is proposed for high tidal forcing frequencies (i.e., when $P_{tid} \ll \tau_{conv}$) \citep{1977Icar...30..301G}. Both have been studied in numerical simulations (\citealt{2007ApJ...655.1166P}, \citealt{2012MNRAS.422.1975O}, \citealt{2020ApJ...888L..31V}). On the other hand, Eq.~(\ref{f_cuad50}) results from calibrations against the cutoff period for circularization of binaries in the open cluster M67 and from the orbital decay of the high-mass X-ray binary LMC X-4 \citep{2008ApJS..174..223B}. In the tidal equations presented above it has been assumed that the donor star rotates as a rigid body. Here, we shall relax this assumption by describing the donor star as a two-layer object whose layers may have different rotational velocities. It is natural to assume that these are the outer convective and inner radiative parts of the star. Magnetic braking will be coupled to the outer layers of the star, which are the ones that synchronise with the orbital period. The central part of the star acts on the outer layer, contributing to modify its rotation rate.
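The three $F_{conv}$ prescriptions of Eqs.~(\ref{f_lineal})--(\ref{f_cuad50}), together with the tidal forcing period of Eq.~(\ref{p_tid}), can be sketched as follows. This is an illustrative implementation (not the code used in this work); all periods and $\tau_{conv}$ are assumed to be in the same, arbitrary, time unit:

```python
# Illustrative sketch of the three F_conv prescriptions above (not the
# paper's code). All periods and tau_conv must share the same time unit.

def p_tid(p_orb, p_spin):
    # Tidal forcing period; it diverges as the system approaches synchronism.
    return 1.0 / abs(1.0/p_orb - 1.0/p_spin)

def f_conv(p_orb, p_spin, tau_conv, law="linear"):
    x = p_tid(p_orb, p_spin) / (2.0 * tau_conv)
    if law == "linear":          # Zahn-type linear prescription
        return min(1.0, x)
    if law == "quadratic":       # high tidal forcing frequency prescription
        return min(1.0, x**2)
    if law == "quadratic50":     # calibrated (x50) quadratic prescription
        return 50.0 * min(1.0, x**2)
    raise ValueError("unknown law: " + law)
```

In the unsaturated regime ($P_{tid} < 2\tau_{conv}$) the quadratic law damps tides more strongly than the linear one, which is why the initial asynchronism required to reproduce a given $\dot{P}_{orb}$ depends on the chosen prescription.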
How these outer layers react will depend on the initial relative difference of the rotation velocities of the two portions of the star, $\varepsilon$, \begin{equation} \omega_{in}=(1+\varepsilon)\; \omega_{out} , \end{equation} \noindent where $\omega_{in}$ and $\omega_{out}$ are the rotational angular velocities in the inner and outer parts of the star, respectively. Assuming that the inner and outer parts synchronise their rotation over long enough times, we write for simplicity a linear coupling described by the equation \begin{equation} \frac{d\omega_{in}}{dt}=\frac{I_{out}}{I \tau} (\omega_{out}-\omega_{in}) , \label{Eq:rotCentral} \end{equation} \noindent where $\tau$ is the coupling timescale between the two stellar layers, and $I$ is the total moment of inertia, i.e., the sum of the moments of inertia of the internal ($I_{in}$) and external ($I_{out}$) parts of the star, both computed from the stellar models. Accordingly, Eq.~(\ref{eq:rotacion1}) is replaced by: \begin{equation} \frac{d\omega_{out}}{dt}= \frac{I_{in}}{I \tau} (\omega_{in}-\omega_{out})+\frac{I}{I_{out}}\left(-\frac{\dot{h}}{I}\right) , \label{Eq:rotAfuera} \end{equation} \noindent where $-\dot{h}/I$ is the right-hand side of Eq.~(\ref{eq:rotacion1}), $h$ being the orbital angular momentum (see \citealt{1981A&A....99..126H}). Summarising, the differential equations that describe our treatment of tides are (\ref{eq:Omega_orbita}), (\ref{eq:excentricidad}), (\ref{eq:inclinacion1}), (\ref{Eq:rotCentral}), and (\ref{Eq:rotAfuera}). In Eqs.~(\ref{eq:Omega_orbita}), (\ref{eq:excentricidad}), (\ref{eq:inclinacion1}), and in $\dot{h}$ (Eq.~\ref{Eq:rotAfuera}), we have assigned $\omega=\omega_{out}$, since we assume that tides interact with the outer stellar layers. We have solved these equations with a fully implicit, finite-differences algorithm.
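To illustrate the internal coupling of Eqs.~(\ref{Eq:rotCentral}) and (\ref{Eq:rotAfuera}), the following toy integration (our own sketch, with the tidal torque $\dot{h}$ switched off to isolate the coupling, and an explicit Euler step instead of the fully implicit scheme used in this work) shows the two layers relaxing to a common rotation rate while conserving the total spin angular momentum:

```python
# Toy sketch of the two-layer spin coupling above, with the tidal torque
# switched off (h_dot = 0) to isolate the internal coupling. Explicit Euler
# is used for illustration only; the paper uses a fully implicit scheme.

def relax_two_layers(w_in, w_out, i_in, i_out, tau, dt, n_steps):
    i_tot = i_in + i_out
    for _ in range(n_steps):
        dw_in = i_out / (i_tot * tau) * (w_out - w_in)
        dw_out = i_in / (i_tot * tau) * (w_in - w_out)
        w_in += dw_in * dt
        w_out += dw_out * dt
    return w_in, w_out

# epsilon = 0.5: the inner region starts rotating 50% faster than the surface.
# The two layers converge to the angular-momentum-weighted mean rotation rate,
# and I_in*w_in + I_out*w_out is conserved by construction.
w_in, w_out = relax_two_layers(1.5, 1.0, i_in=0.7, i_out=0.3,
                               tau=1.0, dt=0.01, n_steps=2000)
```

The difference $\omega_{in}-\omega_{out}$ decays exponentially on the timescale $\tau$, which is why, once the tidal torque is restored, the surface layer can lag or lead the orbit only transiently.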
\section{Results of the model } \label{sec:Results} \begin{figure} \hspace*{-0.6in} \centering \includegraphics[width=0.7\textwidth]{F-HR-widows.pdf} \caption{Evolutionary track for the donor star of a system composed of a $1.25~M_{\odot}$ solar-composition donor star, evolving in a CBS together with a $1.4~M_{\odot}$ NS on an initial 0.75~day orbit. The calculation corresponds to an irradiation feedback regime with $\alpha_{irrad}=0.01$. The grey area depicts the range of effective temperatures compatible with the observations of the PSR~J1723-2837 system.} \label{HR} \end{figure} \begin{figure} \hspace*{-0.6in} \centering \includegraphics[width=0.7\textwidth]{F-porb-m2.pdf} \caption{Orbital period as a function of the mass of the donor star for the evolutionary track presented in Fig.~\ref{HR}. The dashed line represents the orbital period observed for PSR~J1723-2837. } \label{PorbvsM} \end{figure} Using our binary evolutionary code with the inclusion of irradiation feedback (\citealt{2003MNRAS.342...50B, 2014ApJ...786L...7B, 2015MNRAS.449.4184B}), we compute the evolution of a binary that achieves the state observed for the PSR J1723-2837 system. The initial parameters were taken from \citet{2015ApJ...798...44B}: a 1.25~$M_{\odot}$ solar-composition donor star, evolving in a CBS together with a 1.4~$M_{\odot}$ NS on a 0.75~day orbit, with an irradiation feedback regime of $\alpha_{irrad}$= 0.01. Fig.~\ref{HR} shows the evolutionary track in the Hertzsprung-Russell diagram for the donor star, which undergoes a large number of RLOFs separated by detached stages. The grey area denotes the range of the effective temperature of the donor star observed by \citet{2013ApJ...776...20C}. Fig.~\ref{PorbvsM} shows the orbital period as a function of the mass of the donor star. As can be seen, the system achieves the observed orbital period $P_{orb}= 0.615436473(\pm8)$~d, denoted with a horizontal dashed line, during several pulses of mass transfer.
In order to study the effects of tides, we select one of these pulses, i.e., a section between two successive RLOFs of the donor star, in concordance with the observed orbital period and the range of temperature and mass of the donor star. It is worth noticing that any other mass transfer pulse with the same characteristics leads to similar results. Although in this particular case the progenitor of the PSR~J1723-2837 system has $\alpha_{irrad}= 0.01$, it is worth describing how a variation of this parameter would affect the tides. When $\alpha_{irrad}$ is larger, fewer pulses occur, the time between pulses increases, and the mass transfer rate is greater (see Fig.~1 in \citealt{2015ApJ...798...44B}), in such a way that the mass of the donor is almost unaffected. If we compare three pulses with, for example, $\alpha_{irrad}$=~1, 0.1 and 0.01 at the same moment of the evolution, the luminosity, mass, orbital period and radius of the star will be very similar. The main difference will be in the change of the radius $R_{2}$ during the mass transfer pulses, because the greater the $\alpha_{irrad}$, the greater the change in $R_{2}$. Even so, the general behaviour would be almost the same, so qualitatively tidal interactions would bring very similar results. As stated above, for larger values of $\alpha_{irrad}$ the star remains detached longer from the Roche Lobe in between two consecutive pulses, so tides would have more time to act. In any case, we shall show that even for $\alpha_{irrad}$=~0.01, the time that the star needs to get synchronised is much smaller than the time it remains detached. \begin{figure*} \centering \includegraphics[angle=-90,width=1.0\textwidth]{desincroB.pdf} \caption{Change in the orbital period as a function of time for different initial asynchronisms. The percentages in the lower right corner of panel (c) give the percentage of departure from synchronism (e.g.
an initial asynchronism of 2\% represents a donor in which the rotational velocity at the surface is 2\% greater than the orbital velocity). The panels correspond to each of the different expressions considered for $F_{conv}$. The horizontal grey line represents the orbital period derivative observed for PSR~J1723-2837.} \label{asincronismos} \end{figure*} We investigate the effects of tides on the orbital parameters in this portion of the evolution of the system, i.e., in the pulse mentioned above. For this purpose we have taken the stellar models calculated with our stellar evolutionary code and solved the full set of tidal equations presented in Sec.~\ref{Sec:EqsOrbEvol}. The results are presented in Figs.~\ref{asincronismos}, \ref{CR-epsilon-tau}, and \ref{pp}, where the initial value of time is set after the detachment of the donor from its Roche Lobe, when tidal effects are computed. We consider that the system has initial values $e=0$ and $i=0$. The model of tides acting on a two-layer star has three parameters: $\varepsilon$, $\tau$ (defined in Sec.~\ref{Sec:EqsOrbEvol}), and the initial asynchronism (the difference between the orbital rotational velocity and the rotational velocity at the surface of the donor star). $\varepsilon$ represents the ratio between the initial rotation velocities of the two portions of the star, and $\tau$ represents the characteristic coupling time between the rotation of the central region and the surface of the donor star. Typical values of $\tau$ were obtained from an independent numerical code currently under development, which solves the equations of stellar structure for differentially (shellular) rotating stars. According to our calculations, the time it takes for a rotating layer to come into balance with a non-rotating one is of the order of a million years.
Additionally, we explored three different laws for the scaling of the turbulent convective viscosity with the tidal forcing frequency, $F_{conv}$ (see Eqs.~\ref{f_lineal}-\ref{f_cuad50}). We study the evolution of the orbital period for each prescription of $F_{conv}$ and different initial asynchronisms, fixing $\varepsilon = 0$ and $\tau = 1~$Myr (see Fig.~\ref{asincronismos}). Each panel corresponds to a different expression for $F_{conv}$, given by Eqs.~(\ref{f_lineal}-\ref{f_cuad50}), labelled as Linear, Quadratic, and Quadratic*50, respectively. By comparing these three prescriptions, it can be seen that the initial asynchronism needed to reach the observed value of $\dot{P}_{orb}$ depends strongly on the prescription of $F_{conv}$. In the lower right corner of panel (c) we give the percentage of departure from synchronism. For example, an initial asynchronism of 2\% represents a donor in which the rotational velocity at the surface is 2\% greater than the orbital velocity. As can be seen from this figure, perfectly synchronous systems (those with 0\% asynchronism) never reach the observed orbital period derivative of $-3.50(12) \times 10^{-9}~{\rm s\,s^{-1}}$. Furthermore, if the rotational velocity is greater than the orbital velocity, i.e., for positive percentages of asynchronism, the change in the orbital period is positive during synchronisation. Negative values of the orbital period derivative are reached only by systems where the rotational velocity at the surface of the donor is initially smaller than the orbital velocity. This conclusion extends to the three laws of $F_{conv}$. Thus, we find that, when considering the standard equilibrium theory of tidal interactions, it is possible to account for the observed value of $\dot{P}_{orb}$ if the donor star rotates slightly slower than the orbit immediately after mass transfer.
From now on, we continue our analysis using the linear law of $F_{conv}$, since in this case the tidal forcing frequency is not high but $P_{tid}/\tau_{conv}\approx 0.2$ (see Sec.~\ref{Sec:EqsOrbEvol}). Nevertheless, it is worth remarking that the other expressions of $F_{conv}$ yield qualitatively similar results. \begin{figure*} \centering \includegraphics[angle=-90,width=1.0\textwidth]{CR-tau-epsilon.pdf} \caption{Change in the orbital period as a function of time for a system with the linear $F_{conv}$ law and an initial asynchronism of $-4\%$. Panel (a) corresponds to a star modelled as a rigid body. Panel (b) shows different values of $\varepsilon$ and $\tau$= 1~Myr. Panel (c) shows different values of $\tau$ and $\varepsilon$= 0. The horizontal grey line represents the orbital period derivative observed for PSR~J1723-2837.} \label{CR-epsilon-tau} \end{figure*} To explore the dependence of our models on the parameters $\varepsilon$ and $\tau$, we set the initial asynchronism at $-4\%$ and then vary these parameters. The results obtained are shown in Fig.~\ref{CR-epsilon-tau}. Panel (a) represents $\dot{P}_{orb}$ as a function of time for a star modelled as a single layer, i.e., a rigid body. Panel (b) shows the evolution of the orbital period for $\tau$ between $0.1$ and $10$~Myr and fixed $\varepsilon = 0$. As can be seen, varying the coupling time does not considerably affect $\dot{P}_{orb}$. Panel (c) shows a system that initially rotates as a rigid body ($\varepsilon = 0$), a system in which the outer layers of the donor star initially rotate 50\% more rapidly than the central region (positive $\varepsilon$), and a system in which the outer layers of the donor star initially rotate 50\% more slowly than the central region (negative $\varepsilon$). Here, we set $\tau = 1$~Myr. The three curves are identical for $\log{\rm(Age/yr)} \leq 3$, in the region where the observed $\dot{P}_{orb}$ is reached.
After that, the curves show similar behaviour. Thus, modelling the star as a two-layer object affects the value of $\dot{P}_{orb}$ when its value is far smaller (in modulus) than the value observed in PSR~J1723-2837. \begin{figure} \hspace*{-0.6in} \centering \includegraphics[angle=-90,width=0.65\textwidth]{periodos-sincroB.pdf} \caption{Orbital period and rotational period at the surface of the donor as a function of time for a system with an initial asynchronism of $-4\%$, a linear $F_{conv}$, $\varepsilon = 0$ and $\tau = 1$~Myr. The red dotted line corresponds to the case of a rigid body without tides. The grey line represents the orbital period observed for PSR~J1723-2837.} \label{pp} \end{figure} Lastly, as can be seen in Fig.~\ref{pp}, the system synchronises in $\approx3000$~years, although the rotational period of the star always remains slightly larger than the orbital period, due to magnetic braking. The pulse of mass transfer without tides, i.e., the output of the evolution code, was chosen so that it crosses the observed orbital period. Remarkably, ${P}_{orb}$ decreases more when tides act, due to the loss of energy from the system caused by this effect. \section{Photometric analysis of PSR J1723-2837 system } \label{sec:Asyncronew} Recently, \citet{2016ApJ...833L..12V} reported observations of PSR J1723--2837, claiming that the donor star is not tidally locked, with a ratio $P_{spin} / P_{orb} = 0.9974(7)$. This result was based on the detection of a series of small photometric dips whose orbital phase seemed to decrease continuously \citep{2016MNSSA..75....9V}. These dips were interpreted as due to cold spots on the companion surface, implying that the star should be rotating with a period slightly shorter than the orbital one.
In Section~\ref{sec:Results} we have shown that systems that are not synchronised can account for the observed orbital period derivative, but only if $P_{orb}$ is shorter than $P_{spin}$, in contrast to the result of \citet{2016ApJ...833L..12V}. Motivated by this remarkable discrepancy, we decided to perform our own analysis using the photometric data obtained during the Kepler spacecraft's second mission, K2 \citep{2014PASP..126..398H}. The K2 data provide a precise and densely sampled light curve, well suited for an accurate Fourier analysis. PSR J1723-2837 was observed by K2 (with ID 236020326) during campaign 11, Investigation ID GO11901. High Level Science Products are publicly available for this object, in particular corrected light curves with short and long cadences (one photometric point every 60 and 1800 seconds, respectively) in the wide optical spectral range characteristic of Kepler's mission \citep[see][]{2009IAUS..253..121R}. The observations are separated into two datasets whose time spans are shown in Table \ref{tab:photom}. A small portion of the unfolded light curve, illustrating several orbital cycles, can be seen in Fig.~\ref{fig:lig_cur}. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{light_cur_unfol_c.pdf} \caption{Portion of the unfolded light curve from long-cadence K2 photometric data (black dots). Two minima are observed in each cycle.
The model fitted using a sine series is superimposed (solid line).} \label{fig:lig_cur} \end{figure} \begin{table*} \caption{K2 photometric datasets.} \label{tab:photom} \begin{tabular}{ccclcl} \hline\hline Dataset & Cadence & \multicolumn{2}{c}{Start time} & \multicolumn{2}{c}{End time} \\ & & Actual & BJD - 2454833 & Actual & BJD - 2454833 \\ \hline 1 & long & 2016-09-24 19:27:13 & 2823.3153628 & 2016-10-18 02:31:01 & 2846.6077437 \\ 1 & short & 2016-09-24 19:12:59 & 2823.3054872 & 2016-10-18 02:45:14 & 2846.6176190 \\ 2 & long & 2016-10-21 06:31:48 & 2849.7746600 & 2016-12-07 23:37:45 & 2897.4825979 \\ 2 & short & 2016-10-21 06:17:34 & 2849.7647845 & 2016-12-07 23:51:58 & 2897.4924732 \\ \hline\hline \end{tabular} \end{table*} Aiming at identifying the periods present in the photometric oscillations, we performed a Fourier analysis of the K2 observations using the Period04 v. 1.2.0 code \citep{2004IAUS..224..786L,2005CoAst.146...53L}. The long-cadence flux datasets of both campaigns were first separately normalised by a constant, and then a first Fourier spectrum of the observed fluxes was computed (Fig. \ref{fig:spec_all}). A step in frequency of $10^{-5}$~d$^{-1}$ was employed for the calculations, but we verified that the position of the spectral peaks does not change for frequency steps one order of magnitude greater or smaller. The spectrum was calculated for frequencies between $10^{-4}$ and $24.5$~d$^{-1}$, which is approximately the Nyquist frequency for these data. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{fou_fit3_all.pdf} \caption{Amplitude spectrum of the K2 photometric data. The main peaks, corresponding to $P_1=0.307701$~d and $P_2=0.6129$~d, are labelled. The horizontal dashed line indicates the noise level.} \label{fig:spec_all} \end{figure} Several prominent peaks were identified in the Fourier spectrum, the most intense corresponding to a period of $\sim 0.31$~d.
In order to determine its value, a series of sine functions was fitted to the data, written as \begin{equation} Z + \sum A_i \sin [ 2\pi (\nu_i t + \phi_i)] \label{eq:sinser} \end{equation} where $Z \approx 1$ is a zero-point constant, $A_i$ is the amplitude, $\nu_i$ the frequency (initially set at a value corresponding to the maximum of the main peak) and $\phi_i$ the phase. For the main peak $i=1$ and the series has just one term. Then, this main sine term was subtracted from the observed data, obtaining the residuals $(O-C)$. A new Fourier spectrum was then calculated from these residuals and its most intense peak ($\sim \nu_2$) identified. The original data were fitted again, this time using a two-term series, thus obtaining an improved value of $\nu_1$, a first approximation of $\nu_2$, and new residuals. This process, usually known as prewhitening \citep[see Sec. 5.1.2 in][and references therein]{2010aste.book.....A}, was repeated adding one term to the series each time, until the amplitude of the most intense peak in the residuals spectrum was clearly below the noise level of the first spectrum. The parameters fitted for the first ten terms of the series are shown in Table \ref{tab:period}. The noise level estimated by Period04 for the first spectrum was $0.0015$. The amplitude of the 11th term was 0.0013, and the amplitudes of the following terms, calculated up to $i=17$, stabilise around $0.0012$.
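The prewhitening loop just described can be sketched schematically as follows. This toy version (pure NumPy, standing in for Period04) runs on synthetic data whose frequencies and amplitudes are merely illustrative, not the K2 values:

```python
# Schematic prewhitening loop on synthetic data (a toy stand-in for
# Period04): locate the strongest peak of the residual amplitude spectrum,
# fit a sine term at that frequency by linear least squares, subtract,
# and repeat. Frequencies/amplitudes below are illustrative only.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 23.0, 2000)                  # ~23 d of mock photometry
flux = (1.0 + 0.056*np.sin(2*np.pi*(3.2499*t + 0.19))
            + 0.023*np.sin(2*np.pi*(1.6320*t + 0.21))
            + 0.0015*rng.standard_normal(t.size))

def strongest_peak(resid, t, freqs):
    """Frequency of the highest peak of a discrete Fourier amplitude spectrum."""
    power = np.array([np.abs(np.sum(resid*np.exp(-2j*np.pi*f*t))) for f in freqs])
    return freqs[int(np.argmax(power))]

freqs = np.linspace(0.05, 24.5, 12000)            # grid up to ~Nyquist
resid = flux - flux.mean()
found = []                                        # (frequency, amplitude) pairs
for _ in range(2):                                # two prewhitening passes
    nu = strongest_peak(resid, t, freqs)
    # Linear least-squares fit of A*sin(2*pi*nu*t) + B*cos(2*pi*nu*t):
    design = np.column_stack([np.sin(2*np.pi*nu*t), np.cos(2*np.pi*nu*t)])
    coef, *_ = np.linalg.lstsq(design, resid, rcond=None)
    found.append((nu, float(np.hypot(*coef))))
    resid = resid - design @ coef                 # subtract the fitted term
```

Each pass removes the dominant sine term; Period04 additionally refits all frequencies, amplitudes, and phases simultaneously at every step, which is why the procedure converges to the values in Table \ref{tab:period}.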
\begin{table} \caption{Periods in K2 photometric data.} \label{tab:period} \begin{tabular}{lllll} \hline\hline & Frequency & Period & Amplitude & Phase \\ $i$ & $\nu_i$ [d$^{-1}$] & $P_i$ [d] & & \\ \hline 1 & 3.24990(2) & 0.307701(2) & 0.0557(1) & 0.1883(4) \\ 2 & 1.631(1) & 0.6129(5) & 0.023(3) & 0.21(1) \\ 3 & 1.623(1) & 0.6162(5) & 0.016(2) & 0.54(5) \\ 4 & 4.8749(3) & 0.20513(1) & 0.0034(2) & 0.465(7) \\ 5 & 1.607(3) & 0.622(1) & 0.0031(8) & 0.86(1) \\ 6 & 1.647(2) & 0.6073(7) & 0.003(2) & 0.45(1) \\ 7 & 1.660(1) & 0.6025(4) & 0.0019(2) & 0.22(2) \\ 8 & 0.0613(7) & 16.3(2) & 0.0018(2) & 0.30(1) \\ 9 & 0.1964(7) & 5.09(2) & 0.0015(1) & 0.79(2) \\ 10 & 0.0337(8) & 29.7(7) & 0.0016(2) & 0.50(1) \\ \hline\hline \end{tabular} \end{table} The detection of the main peak at $P_1 \approx P_2/2$ is due to the presence of two minima in each orbital cycle (see Fig. \ref{fig:lig_cur}). The second period detected, $P_2=0.6129(5)$~d, is similar to the orbital period $P_b=0.615436473(8)$~d found by \citet{2013ApJ...776...20C}, but clearly shorter, and comparable with the period proposed by \citet{2016ApJ...833L..12V} as a spin period ($P_s = 0.9974 P_b \approx 0.6138$~d). However, it is worth noting that the Fourier spectrum of the residuals of the original data, after subtraction of the first two terms of the series (\ref{eq:sinser}), indicates the presence of at least three additional periodic oscillations with periods slightly longer and shorter than $P_2$ and $P_b$, i.e. $P_3 \approx 0.6162$~d, $P_5 \approx 0.622$~d and $P_6 \approx 0.6073$~d (see Fig. \ref{fig:spec_res}). We verified that these peaks are not aliases of the main oscillation. \begin{figure} \centering \includegraphics[width=0.49\textwidth]{res2P2at_ori.pdf} \caption{Fourier spectrum of the residuals of the original data after subtracting the sine terms with periods $P_1$ and $P_2$.
The position of the $P_2$ peak before subtraction (red dashed line) and that of the orbital period $P_b$ according to \citet{2013ApJ...776...20C} (blue) are indicated. Other peaks are labelled as in Table \ref{tab:period}. Horizontal dashed line: noise level.} \label{fig:spec_res} \end{figure} As recalled above, \citet{2016ApJ...833L..12V} had already detected a photometric oscillation with a period $P_s$ shorter than the orbital one. They proposed that this oscillation is due to the presence of various spots on the stellar surface, rotating with the whole star. The new periods we found ($P_3$, $P_5$ and $P_6$) could be interpreted in the same way, i.e., as several spots on the surface of the star moving with different rotational velocities. In this case, it could be hypothesised that the star rotates with different velocities at different latitudes, something that has already been observed in other stars (e.g., the Sun). In any case, the detection of this structure around $P_2$ in the Fourier spectrum admits another interpretation: the detected frequencies so close to the rotation frequency may also be interpreted as due to the {\it beating} of non-radial pulsations perturbed by rotation. For a general description of non-radial stellar oscillations see, e.g., \citet{1989nos..book.....U,2010aste.book.....A}. In Appendix~\ref{appendix: apendA} we show a more detailed development of this idea. \section{Discussion and Conclusions} \label{sec:conclu} In a previous paper, \citet{2015ApJ...798...44B} found a progenitor of the redback system PSR~J1723-2837 that accounts for the orbital period, mass, mass ratio and temperature of the donor star, but fails to explain the orbital period derivative $\dot{P}_{orb}$. In order to continue the study of this object, we analysed the effect of tides on its orbital evolution in between two successive RLOFs of the donor star. We based our analysis on Hut's equations for equilibrium tidal evolution in the weak-friction approximation.
We generalised the description of the donor star by treating it as a central sphere surrounded by a spherical layer. Accordingly, we added an extra equation that accounts for the change in the rotational velocity of the centre, and added a term in the equation for the rotational velocity at the surface, which is the part that suffers the magnetic braking effect and eventually synchronises with the orbital motion. We found two main results. One is that considering tidal interactions on a rigid or on a two-layer rotating star leads the system to similar early tidal evolution, when the observed $\dot{P}_{orb}$ value is reached. Secondly, we have shown that large negative values of $\dot{P}_{orb}$ are possible if the donor star detaches from its Roche Lobe with a rotation rate slightly lower than that of the orbit. This result is contrary to the effect caused by the contraction of the donor after Roche Lobe detachment, which should lead the donor to spin up due to angular momentum conservation, and thus to faster-than-orbital rotation (if synchronisation was maintained during RLOF). However, this spin-up will be small because the donor star remains in a quasi-RLOF state, as inferred from observations \citep{2013ApJ...776...20C}. In fact, according to our calculations, the filling factor $R_{2}/R_{LOBE}$ (where $R_{LOBE}$ is the radius of the Roche Lobe) is close to 0.99. Besides, only the outermost layers that are affected by irradiation are involved in the contraction, and these layers contain very little mass, so they have a minor effect on the total angular momentum of the star. On the other hand, mass transfer will cause angular momentum loss from the donor (see, for example, \citealt{2000ApJ...528..368H} for the case of isolated stars), which must be compensated by tides to maintain synchronisation during RLOF. If this is not completely effective, the donor could detach from the Roche Lobe spinning somewhat slower than the orbital rate.
In summary, the net effect of the variation of the angular momentum of the donor could make it possible that, upon detachment, the donor star rotates somewhat slower than the whole system, as required by the tidal theory for the occurrence of $\dot{P}_{orb}<0$ values. In brief, the theory of tidal evolution considered here indicates that systems in which the rotational velocity of the donor is slower than the orbital rotation lead to negative values of $\dot{P}_{orb}$. Remarkably, however, this does not agree with the observations presented by \citet{2016ApJ...833L..12V}. This induced us to analyse photometric data obtained during the second mission, K2, of the Kepler spacecraft. We performed a Fourier analysis of the data, obtaining several periods of the photometric oscillations. Two prominent peaks can be seen in the amplitude spectrum, together with several other less intense ones (see Table~\ref{tab:period}). We found several periodic oscillations in the Fourier spectrum of the residuals to the main oscillation (Fig.~\ref{fig:spec_res}), which may be associated with spots on the surface of the donor star. Furthermore, we found another possible interpretation related to non-radial oscillations of the donor star modified by its rotation. In any case, the results presented by \citet{2016ApJ...833L..12V} and those given in our Figure~\ref{fig:spec_res} indicate the presence of a few frequencies that are not easy to reconcile with the theory of tidal interactions as proposed above. This difficulty may indicate the necessity of considering tides in a more general way and that, eventually, other phenomena overlooked here may also be operating and affecting the value of $\dot{P}_{orb}$. We judge this, as well as the origin of the several frequencies close to the rotational one shown in Fig.~\ref{fig:spec_res}, to be relevant findings that warrant further study.
\section*{Acknowledgements} We warmly thank our anonymous referee for their report, which helped us to greatly improve the original version of this paper. \section*{Data Availability} The data generated by our numerical code are available from the corresponding author on request. All remaining data underlying this article are available in the article and references therein. \bibliographystyle{mn2e} \small
\section{Hartree-Fock analysis}\label{sec:HF} In order to obtain the complete phase diagram of our model, we perform a Hartree-Fock (HF) treatment of the Hubbard interaction. We expect HF to give qualitatively reasonable results for insulating phases and, more importantly, to allow for an analytical understanding of the key physics through the quasiparticle band structure. Our main findings from the HF study are fully supported by density matrix renormalization group (DMRG) calculations to be presented below. The HF approximation amounts to the replacement of the Hubbard term by \begin{equation} H_{\rm{Hub}}^{\rm{HF}} = \frac{U}{2} \sum_{i} \left(n_{i} \langle n_{i} \rangle - \vec{s}_{i}\cdot \langle\vec{s}_{i}\rangle -\frac{1}{2}\langle n_{i}\rangle^2 + \frac{1}{2}|\langle\vec{s}_{i}\rangle|^2\right) \end{equation} where $n_{i}=\sum_{\sigma}n_{i \sigma}$, $s^k_{i} = c^\dagger_{i\sigma} \bm{\pi}^k_{\sigma\sigma^\prime} c_{i\sigma^\prime}$ (with summation over repeated spin indices), and $\bm{\pi}^k$, $k=x,y,z$, are the Pauli matrices. The HF Hamiltonian must be solved self-consistently at filling $n=1$ to obtain the HF ground state (when multiple self-consistent solutions are found, the lowest-energy one is chosen). In order to characterize the magnetic order, we define the following order parameter, a matrix in spin space: \begin{eqnarray} S_{\sigma^\prime \sigma}^{\alpha\zeta} \equiv \frac{1}{N} \sum_{\bm{k}}\langle c^\dagger_{\alpha\sigma^\prime(\bm{k}+\zeta\bm{K})} c_{\alpha\sigma \bm{k}}\rangle \end{eqnarray} where $\zeta=0$ describes ferromagnetic (FM) states, and $\zeta=\pm 1$ describes antiferromagnetic (AFM) states with wavevector $\pm \bm{K}=\pm \frac{4\pi}{3a}(1,0)$. In the presence of spin-orbit interaction, $xy$ AFM states with $\zeta=+$ and $-$ are distinct states displaying spin configurations of opposite chirality and are {\it not} degenerate~\cite{Zang2021}.
For $\phi_A=0$ and $\phi_B\approx-\frac{2\pi}{3}$, the AFM Mott insulator at large $\Delta$ has $\zeta=-1$. The order parameters are shown in Fig.~\ref{fig:HFFig}a as a function of $\Delta$ near the band inversion point for $\phi_B=-\frac{2\pi}{3}$, $t_A=t_B=\frac{1}{2}t_{AB} = t$, and $U=50t$. We have defined the combinations $\mathrm{Z}_{\alpha}\equiv \frac{1}{2}(S^{\alpha 0}_{\uparrow\uparrow} - S^{\alpha 0}_{\downarrow\downarrow})$ and $\mathrm{XY}_{\alpha} \equiv |S^{\alpha(\zeta=-1)}_{\uparrow\downarrow}|$, which capture the observed non-zero $z$ FM and $xy$ AFM orders, respectively, on the $\alpha$ sublattice. Figure~\ref{fig:HFFig}b also shows the charge gap and Chern number of the HF ground state. We observe two distinct insulating phases: an $xy$ AFM (with $\zeta=-$) on the $A$ sublattice at large $\Delta$ transitions into a canted $xy$ AFM as $\Delta$ is decreased. This canted phase, in particular, has non-trivial Chern number $|\mathcal{C}|=1$, and is therefore a QAH phase. This QAH phase with non-coplanar magnetism, appearing at reduced charge transfer energy $\Delta$, is the highlight of this work. \begin{figure}[t] \includegraphics[width=\columnwidth]{HFFig_v6.pdf} \caption{ Order parameters (a) and charge gap (b) obtained from self-consistent HF as a function of $\Delta$, for parameters $t_A=t_B=\frac{1}{2}t_{AB}=t$, $\phi_A=0$, $\phi_B=-\frac{2\pi}{3}$, and $U=50t$. There is a transition from the $xy$ AFM phase to the non-coplanar QAH phase with Chern number $|\mathcal{C}|=1$ as $\Delta$ is reduced. The gap is also shown for $U=30t$. (c) The quasiparticle band structure, with shift $\zeta=-$, obtained from HF at $\Delta=7t$. The Berry curvature of the filled topological band is shown in (d). }\label{fig:HFFig} \end{figure} To gain insight into the origin of the QAH phase, we examine the evolution of the quasiparticle band structure as a function of $\Delta$.
As a first step, it is useful to derive the noninteracting band structure at $U=0$. By Fourier transform, the single-particle Hamiltonian $\mathcal{H}_0=H_A+H_B+ H_{AB}$ takes the form $\mathcal{H}_0 = \sum_{\bm{k}\sigma} \vec{c}^\dagger_{\sigma\bm{k}} H_{\sigma}(\bm{k})\vec{c}_{\sigma\bm{k}}$, where $\vec{c}_{\sigma\bm{k}}^\dagger = (c_{A\sigma \bm{k}}^\dagger,c_{B\sigma \bm{k}}^\dagger)^T$ in $\bm k$ space, and the Bloch Hamiltonian is \begin{equation} H_{\sigma}(\bm{k}) = \begin{pmatrix} \mathcal{E}_{A\sigma}(\bm{k}) & T_{\sigma}(\bm{k})\\ T_{\sigma}^\dagger(\bm{k}) & \mathcal{E}_{B \sigma}(\bm{k}) \end{pmatrix} \end{equation} where \begin{eqnarray} \mathcal{E}_{\alpha\sigma}(\bm{k}) =& -2t_\alpha\sum_{n}\cos(\bm{k}\cdot \bm{a}_n + s_\sigma \tau_\alpha \phi_\alpha) - \frac{1}{2}\tau_\alpha\Delta \label{eq:singleparticleband}\\ T_{\sigma}(\bm{k}) =& -t_{AB}(e^{-i \bm{k}\cdot\bm{b}_1} + e^{-i \bm{k}\cdot\bm{b}_2} + e^{-i\bm{k}\cdot\bm{b}_3}), \end{eqnarray} where $\bm{a}_n = a\left[\cos\frac{2\pi n}{3},\sin\frac{2\pi n}{3}\right]$ and $\bm{b}_n = \frac{a}{\sqrt{3}} \left[ \sin\frac{2\pi n}{3}, -\cos\frac{2\pi n}{3}\right]$. At large $U$, the quasiparticle band structure of the $xy$ or canted AFM insulator is completely different from the noninteracting case. While the AFM order results in a $\sqrt{3}\times \sqrt{3}$ enlarged unit cell, this state is invariant under a combination of the unit translation and a spin rotation around the $z$ axis. Thanks to this symmetry property, the description of quasiparticle band structures can be simplified by performing a spin-dependent momentum boost with a unitary transformation $ U_\zeta : c^\dagger_{\uparrow \bm{k}} \rightarrow c^\dagger_{\uparrow (\bm{k}+\zeta \bm{K})}, c^\dagger_{\downarrow \bm{k}} \rightarrow c^\dagger_{\downarrow (\bm{k} -\zeta \bm{K})}$. This transformation preserves the $z$ FM order and maps the $xy$ AFM order into $xy$ FM order, which is translationally invariant.
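For concreteness, the noninteracting Bloch Hamiltonian above can be built and diagonalized numerically. The following sketch is our own illustration (not code from this work): it sets $t=1$ as the energy unit, takes the sublattice label $\tau_\alpha=\pm 1$ for $A$/$B$, and assumes the neighbor index $n$ runs over $1,2,3$:

```python
# Illustrative sketch (not the paper's code): build and diagonalize the 2x2
# Bloch Hamiltonian H_sigma(k) of the noninteracting model at U = 0.
# Assumptions: t = 1 energy unit, tau = +1 (A) / -1 (B), and the neighbor
# index n running over 1, 2, 3 in the lattice vectors a_n and b_n.
import numpy as np

a = 1.0
t_A = t_B = 1.0
t_AB = 2.0                           # t_A = t_B = t_AB/2 = t, as in the text
phi = {+1: 0.0, -1: -2*np.pi/3}      # phi_A = 0, phi_B = -2*pi/3
a_vecs = [a*np.array([np.cos(2*np.pi*n/3), np.sin(2*np.pi*n/3)])
          for n in (1, 2, 3)]
b_vecs = [a/np.sqrt(3)*np.array([np.sin(2*np.pi*n/3), -np.cos(2*np.pi*n/3)])
          for n in (1, 2, 3)]

def bloch_h(k, s_sigma, Delta):
    """2x2 Hamiltonian in the (A, B) sublattice basis for spin s_sigma = +/-1."""
    e = {}
    for tau, t_hop in ((+1, t_A), (-1, t_B)):
        e[tau] = (-2*t_hop*sum(np.cos(np.dot(k, an) + s_sigma*tau*phi[tau])
                               for an in a_vecs) - 0.5*tau*Delta)
    T = -t_AB*sum(np.exp(-1j*np.dot(k, bn)) for bn in b_vecs)
    return np.array([[e[+1], T], [T.conjugate(), e[-1]]])

k = np.array([0.1, 0.2])
evals = np.linalg.eigvalsh(bloch_h(k, +1, Delta=7.0))
```

Such a check confirms, e.g., that at large $\Delta$ the lower band has dominantly $A$-sublattice character, the starting point of the Mott analysis that follows.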
After this transformation, the HF Hamiltonian, which includes the effect of magnetic order, is a $4 \times 4$ matrix (involving sublattice and spin) given by \begin{equation} H^{\rm{HF}}_\zeta(\bm{k}) = \begin{pmatrix} H_{\uparrow}(\bm{k}-\zeta\bm{K}) + \frac{U}{2}\mathrm{S}^{0}_{\downarrow\downarrow} & -\frac{1}{2} U \rm{S}^\zeta_{\uparrow\downarrow} \\ -\frac{1}{2} U (\rm{S}^{\zeta}_{\uparrow\downarrow})^* & H_{\downarrow}(\bm{k}+\zeta\bm{K}) + \frac{U}{2}\rm{S}^{0}_{\uparrow\uparrow} \end{pmatrix} \end{equation} where $\mathrm{S}^{\zeta}_{\sigma^\prime\sigma} = \mathrm{diag}(S^{A\zeta}_{\sigma^\prime\sigma},S^{B\zeta}_{\sigma^\prime\sigma})$. In the limit $\Delta \rightarrow \infty$, the two sublattices are decoupled and only the $A$ sublattice is occupied at filling $n=1$, thus realizing the triangular lattice Hubbard model. In the $xy$ AFM insulator, the half-filled band splits into lower and upper Hubbard bands $E_\pm^{\zeta}(\bm{k})$, separated by the Mott gap $U$. In the large-$U$ limit, the lower Hubbard band associated with hole excitations has the energy dispersion \begin{eqnarray} E_{-}^{\zeta}(\bm{k}) = \frac{1}{2}[\mathcal{E}_{A\uparrow}(\bm{k}-\zeta\bm{K}) + \mathcal{E}_{A\downarrow}(\bm{k}+\zeta\bm{K})]. \label{hole} \end{eqnarray} Since the hopping amplitude of holes between adjacent sites is effectively reduced by the {\it noncollinear} spin configuration, the bandwidth of holes is smaller than that of the noninteracting band, but remains finite even as $U\rightarrow \infty$. This hole dispersion $E_-^{\zeta}(\bm{k})$ has a single maximum at $\Gamma$, which should be contrasted with the noninteracting band, $\mathcal{E}_{A\sigma}(\bm{k})$, which has two maxima. As the charge transfer energy $\Delta$ decreases below $U$, the $B$ sublattice band lies below the upper Hubbard band on the $A$ sublattice. This leads to a charge transfer insulator, in which low-energy hole and electron states reside primarily on the $A$ and $B$ sublattices, respectively.
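The contrast between the hole band and the noninteracting band can be verified directly from Eqs.~(\ref{eq:singleparticleband}) and (\ref{hole}). The sketch below is a standalone check (with $t_A=1$, $a=1$, $\phi_A=0$, the constant $-\Delta/2$ dropped, and $\bm K=(\tfrac{4\pi}{3},0)$ taken as a Brillouin-zone corner): scanning one reciprocal unit cell, the boosted hole dispersion attains its maximum only at $\Gamma$, while the noninteracting band has two degenerate maxima (at $K$ and $K'$).

```python
import math

# Illustrative check: t_A = 1, a = 1, phi_A = 0; constant -Delta/2 shift dropped
K = (4 * math.pi / 3, 0.0)                      # Brillouin-zone corner
G1 = (2 * math.pi, 2 * math.pi / math.sqrt(3))  # reciprocal primitive vectors
G2 = (0.0, 4 * math.pi / math.sqrt(3))
avec = [(math.cos(2 * math.pi * n / 3), math.sin(2 * math.pi * n / 3)) for n in (1, 2, 3)]

def eps(kx, ky):
    """Noninteracting triangular-lattice band: -2 t_A sum_n cos(k.a_n)."""
    return -2.0 * sum(math.cos(kx * ax + ky * ay) for ax, ay in avec)

def hole(kx, ky):
    """Boosted lower-Hubbard-band dispersion (1/2)[eps(k - K) + eps(k + K)]."""
    return 0.5 * (eps(kx - K[0], ky - K[1]) + eps(kx + K[0], ky + K[1]))

def count_maxima(f, N=30):
    """Count grid points of one reciprocal unit cell attaining the maximum of f."""
    vals = []
    for i in range(N):
        for j in range(N):
            kx = i / N * G1[0] + j / N * G2[0]
            ky = i / N * G1[1] + j / N * G2[1]
            vals.append(f(kx, ky))
    top = max(vals)
    return sum(1 for v in vals if v > top - 1e-9)

n_hole, n_free = count_maxima(hole), count_maxima(eps)  # 1 vs 2 maxima
```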
While the hole band has a unique maximum at $\Gamma$ (after performing the transformation $U_\zeta$), the location of the electron band minimum depends on the spin-orbit coupling parameter $\phi_B$. For $\frac{\pi}{3}<\zeta\phi_B<\pi$, there exist two degenerate minima: a $\sigma=\uparrow$ state at $\zeta K$ and $\downarrow$ at $-\zeta K$, both of which are shifted by the transformation $U_\zeta$ to $\Gamma$, coinciding with the hole band maximum. In this case, the charge transfer insulator has a direct gap. We then ask the question: what happens if $\Delta$ is decreased further so as to invert the charge transfer gap? To address this question, we develop a low-energy theory of hole and electron bands around $\Gamma$ near the gap inversion. Prior to the gap inversion, the $B$ sublattice is largely unoccupied, hence the electron band is spin-degenerate. In contrast, due to the $xy$ AFM order, the lower Hubbard band associated with holes on the $A$ sublattice is spin-nondegenerate and composed of a superposition of $\sigma=\uparrow,\downarrow$ states. The two bands are coupled by the hybridization term $H_{AB}$, which takes a $p$-wave form near the gap. Taking $\zeta=-1$ and $\phi_B=-\frac{2\pi}{3}$ as in Fig~\ref{fig:HFFig}, we have \begin{equation} \begin{split} T_{\sigma}(\bm{k} + s_\sigma\bm{K}) &\approx \frac{\sqrt{3}}{2}t_{AB} a s_\sigma (k_x - i s_\sigma k_y) \\ &\equiv \sqrt{2} s_\sigma \lambda k_{s_\sigma}, \end{split} \end{equation} where $k_\pm\equiv k_x\pm i k_y$.
By projecting the HF Hamiltonian into this low-energy subspace, we obtain a $k \cdot p$ theory of quasiparticle band structure in the $xy$ AFM state prior to gap inversion: \begin{equation} H^{\rm{eff}}(\bm{k}) = \begin{pmatrix} -\frac{\bm{k}^2}{2 m_A} & \lambda k_- & -\lambda e^{i\theta} k_{+} \\ \lambda k_{+} & \frac{\bm{k}^2}{2m_B} + \delta & 0 \\ -\lambda e^{-i\theta} k_{-} & 0 & \frac{\bm{k}^2}{2 m_B} + \delta \end{pmatrix} \label{eq:Heff} \end{equation} where $m_A = \frac{2}{3 t_A a^2}$ in the large-$U$ limit, $m_B = \frac{1}{3 t_B a^2}$, $e^{i\theta}$ reflects the direction of in-plane order on the $A$ sublattice, and $\delta$ defines the charge transfer gap. As the charge transfer gap $\delta$ is decreased and eventually becomes negative (while the charge transfer energy $\Delta$ remains positive), the occupation of the $B$ sublattice increases, hence the effect of Hubbard repulsion $U_B$ between electrons becomes important. The low-energy theory of our charge transfer insulator, including the one-particle term and two-body interaction, is: \begin{eqnarray} \mathcal{H}^{\rm{eff}} = \sum_{\bm k} f^\dagger_{{\bm k} i } H_{ij}^{\rm{eff}}(\bm{k}) f_{{\bm k} j} + g \int d{\bm r} \; n_{B\uparrow}({\bm r}) n_{B\downarrow}({\bm r}) \label{eq:effective} \end{eqnarray} where $f = (f_A, f_{B\uparrow}, f_{B\downarrow})$ denotes fermion quasiparticles; $n_{B\sigma} = f^\dagger_{B\sigma} f_{B\sigma}$, and the contact interaction $g$ is proportional to $U_B$. An additional interaction term $n_A n_B$ appears in the effective Hamiltonian $\mathcal{H}^{\rm{eff}}$ when we further include nearest-neighbor interaction between $A$ and $B$ sites. Our {\it interacting} Hamiltonian $\mathcal{H}^{\rm{eff}}$ captures the {\it universal} aspects of ``Hubbard band inversion'' in charge transfer insulators, in the same spirit as the Dirac Hamiltonian encapsulates band inversion in narrow gap semiconductors. However, there are fundamental differences between the two theories.
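The spectrum of $H^{\rm eff}(\bm k)$ follows in closed form: the $A$ state couples to only one combination of the two $B$ states, with strength $\sqrt{2}\lambda|\bm k|$, while the orthogonal combination remains a decoupled band at $\frac{\bm k^2}{2m_B}+\delta$. The sketch below (illustrative parameter values) uses this reduction; at $\bm k=0$ the eigenvalues are $\{0,\delta,\delta\}$, and for $-4\lambda^2 m_B<\delta<0$ the splitting of the two lowest bands grows as $k^2$, the quadratic band touching discussed below.

```python
import math

def heff_bands(kx, ky, mA=1.0, mB=1.0, lam=1.0, delta=-0.5):
    """Three eigenvalues of H_eff(k) via the exact (2x2 + 1) block reduction."""
    k2 = kx * kx + ky * ky
    eA = -k2 / (2 * mA)          # hole (lower Hubbard) band
    eB = k2 / (2 * mB) + delta   # charge-transfer band
    v2 = 2 * lam * lam * k2      # |coupling|^2 of A to one B combination
    mean, half = 0.5 * (eA + eB), 0.5 * (eA - eB)
    r = math.sqrt(half * half + v2)
    return sorted([mean - r, eB, mean + r])  # decoupled B band stays at eB

e0 = heff_bands(0.0, 0.0)   # -> [delta, delta, 0] for delta < 0
s1 = heff_bands(0.01, 0.0)
s2 = heff_bands(0.02, 0.0)
gap1 = s1[1] - s1[0]        # splitting of the two lowest bands at k
gap2 = s2[1] - s2[0]        # ~4x gap1 at 2k: quadratic band touching
```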
A charge transfer insulator has an inherent particle-hole asymmetry: holes associated with the lower Hubbard band are spin-nondegenerate, while electrons associated with the charge transfer band are spin-degenerate prior to the inversion. As a result, new physics arises after inverting the charge transfer gap, as shown below. We first analyze the quasiparticle energy spectrum at $g=0$, given by $H^{\rm{eff}}(\bm k)$. At ${\bm k}=0$ where the hybridization term vanishes, the spectrum consists of the spin-non-degenerate Hubbard band from the $A$ sublattice and the spin-degenerate band from the $B$ sublattice. Importantly, the two-fold degeneracy of the latter is protected by two symmetries of the $xy$ AFM state: (1) three-fold rotation of the lattice and electron spin around a hexagon center ($C_3$); (2) time-reversal transformation combined with a $\pi$ rotation of spin around the $z$ axis ($i s_z \Theta$). Note that $(i s_z) \Theta$ is an anti-unitary symmetry operator that squares to identity, effectively acting as a time reversal operator in spinless systems. Thus, the $B$ band at ${\bm k}=0$ furnishes a {\it real two-dimensional} representation of $C_3$. Prior to gap inversion ($\delta>0$), the $B$ band lies above the $A$ band, and the Fermi level is inside the gap (Fig~\ref{fig:FieldTheoryFig}a). We remark that at precisely $\delta=0$, the spectrum of $H^{\mathrm{eff}}$ consists of a linearly dispersing Dirac cone and a parabolic electron band (Fig~\ref{fig:FieldTheoryFig}b). This critical point has no divergent susceptibility and we therefore expect it to be perturbatively stable to interactions. When $\delta$ is tuned to become negative, the $B$ band dips below the $A$ band around ${\bm k}=0$.
Due to the two-fold degeneracy of the $B$ band at ${\bm k}=0$, the spectrum of $H^{\rm eff}$ immediately after band inversion, in the parameter range $-4 \lambda^2 m_B <\delta<0$, shows a quadratic band touching at the Fermi level (dashed line in Fig~\ref{fig:FieldTheoryFig}c), resulting in a finite density of states for both electrons and holes. However, as shown by Sun, Yao, Fradkin and Kivelson \cite{Sun2009}, this kind of zero-gap state is unstable towards exciton condensation in the presence of even {\it arbitrarily weak} repulsive interaction. The interaction $g\propto U_B$ on the anions thus plays an essential role after the charge transfer gap is inverted. The leading instability of such a quadratic band touching is towards the opening of a \emph{topological} gap (solid lines in Fig~\ref{fig:FieldTheoryFig}c), resulting in a QAH state with spontaneous $Z_B\neq0$. This analysis, based on the effective field theory Eq~\ref{eq:effective}, is controlled in the limit of small $U_B$. \begin{figure}[t] \includegraphics[width=\columnwidth]{fieldtheory.pdf} \caption{ The band structure of the effective theory (Eq~\ref{eq:effective}) near inversion. The band colors indicate the sublattice content, blue for $A$ and red for $B$ bands. $(a)$ and $(b)$ show the bands before and at inversion. After inversion, $(c)$, the $g=0$ bands feature a quadratic band touching (dashed lines). A perturbative instability then opens a topological gap for $g>0$, as illustrated. For $\delta<-4\lambda^2m_B$, not shown, Fermi surfaces form and the system is metallic at $g=0$. }\label{fig:FieldTheoryFig} \end{figure} Our HF calculation confirms the field theory analysis even beyond the small $U_B$ limit. Additionally, due to the $A-B$ hybridization, a finite occupation of the $B$ sublattice is already present at $\delta>0$.
This causes an upward shift in the energy of the charge transfer band by $\frac{U_B}{2}\langle n_B\rangle$, which has the effect of delaying the transition to the inverted phase from $\delta=0$ to $\delta_c<0$. More importantly, in the presence of the Hubbard interaction $U_B$, a spontaneous spin polarization in the $\pm z$ direction is found at $\delta<\delta_c$, resulting in a non-coplanar spin structure with canted AFM on the $A$ sublattice and $z$ FM on the $B$ sublattice, as shown in Fig.~1. In the noncoplanar phase, the $z$ FM order parameter component breaks the effective time-reversal symmetry $i s_z \Theta$, and produces spin splitting of the $B$ band. One of the spin-split bands is pushed to higher energy, while the other one takes part in the band inversion with the $A$ Hubbard band. Shown in Fig~\ref{fig:HFFig}d is the $\bm k$-space Berry curvature of the noncoplanar phase, obtained from the self-consistent HF Hamiltonian that includes both $xy$ AFM and $z$ FM orders. Now, the inversion around $\Gamma$ between $A$ and $B$ Hubbard bands---with {\it removed spin degeneracy} and $p$-wave hybridization---gives rise to a QAH insulator with Chern number $\mathcal{C}=\pm 1$ as computed directly from the Berry curvature integration. It is important to note that the appearance of the QAH phase requires that the cation and anion Hubbard bands are dispersive, so that they can be inverted in a {\it part} of momentum space near the gap edge {\it before} $\Delta$ decreases to zero. This is satisfied in our model since magnetic frustration of the cations leads to dispersive quasiparticle bands even for large $U$ (Eq~\ref{hole}). As such, the QAH phase is a consequence of the balance and synergy between electron localization and itinerancy. \begin{figure}[t] \includegraphics[width=\columnwidth]{DMRGFig_v5.pdf} \caption{ (a) Order parameters obtained from DMRG as a function of $\Delta$, showing qualitatively similar results to self-consistent HF.
The small $XY_B\neq0$ is likely due to the cylindrical geometry~\cite{supp}. (b) The response to an applied Zeeman field $h$. There is a discontinuity at $h=0$ due to broken symmetry. While $Z_B$ is quickly saturated, $Z_A$ continues to increase with $h$ while $XY_A$ decreases, indicating a smooth variation in the canting. The Hall conductivity $\sigma_{xy}$ changes sign discontinuously at $h=0$. Hysteresis is absent as we obtain the ground state independently for each value of $h$. }\label{fig:DMRGFig} \end{figure} Our finding of the QAH phase with negative charge transfer gap and non-coplanar magnetism is further confirmed by DMRG calculations~\cite{White1992,Ostlund1995}. Using the infinite DMRG algorithm, we study the ground state of the Hamiltonian on an infinite cylinder $L_x=\infty$ of circumference $L_y=6$ unit cells. The unit cell in $x$ is chosen to be commensurate with the $\sqrt{3}\times\sqrt{3}$ AFM order. More details on the numerical simulations and convergence of DMRG, performed using the TenPy code~\cite{tenpy}, are provided in the supplemental information. Figure~\ref{fig:DMRGFig}a shows the order parameters as a function of $\Delta$, for the same set of parameters as before. For each $\Delta$, we perform calculations for both periodic and anti-periodic boundary conditions in the circumferential direction. The difference in calculated observables, represented by the error bars, serves as an indication of finite-size effects~\cite{supp}. For a range around $\Delta\approx5t$, both $XY^\alpha$ and $Z^\alpha$ are clearly non-zero, showing a canted $120^\circ$ order on the $A$ sublattice and $z$-polarization on the $B$ sublattice. Moreover, we establish the existence of the QAH effect directly from the evolution of the entanglement spectrum as an $h/e$ flux quantum is threaded adiabatically through the cylinder~\cite{Zaletel2014,supp}.
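The Chern numbers quoted above are obtained by integrating the Berry curvature of the occupied quasiparticle band. As an illustration of the numerical method only (not of our HF or DMRG calculations), the sketch below applies the standard lattice field-strength algorithm of Fukui, Hatsugai, and Suzuki to a generic two-band model with a known band inversion, the Qi--Wu--Zhang model $h(\bm k)=\sin k_x\,\sigma_x+\sin k_y\,\sigma_y+(m+\cos k_x+\cos k_y)\,\sigma_z$:

```python
import math, cmath

def lower_state(kx, ky, m):
    """Normalized lower-band eigenvector of the Qi-Wu-Zhang Hamiltonian."""
    dx, dy, dz = math.sin(kx), math.sin(ky), m + math.cos(kx) + math.cos(ky)
    E = math.sqrt(dx * dx + dy * dy + dz * dz)
    u = (dz - E, dx + 1j * dy)  # H u = -E u (valid away from u = 0)
    n = math.sqrt(abs(u[0]) ** 2 + abs(u[1]) ** 2)
    return (u[0] / n, u[1] / n)

def chern(m, N=24):
    """Fukui-Hatsugai-Suzuki lattice Chern number of the lower band."""
    # Half-step grid offset avoids the points where the gauge above is singular
    ks = [2 * math.pi * (i + 0.5) / N for i in range(N)]
    def link(u, v):  # overlap <u|v>
        return u[0].conjugate() * v[0] + u[1].conjugate() * v[1]
    total = 0.0
    for i in range(N):
        for j in range(N):
            u00 = lower_state(ks[i], ks[j], m)
            u10 = lower_state(ks[(i + 1) % N], ks[j], m)
            u11 = lower_state(ks[(i + 1) % N], ks[(j + 1) % N], m)
            u01 = lower_state(ks[i], ks[(j + 1) % N], m)
            # Plaquette field strength = phase of the product of link variables
            total += cmath.phase(link(u00, u10) * link(u10, u11)
                                 * link(u11, u01) * link(u01, u00))
    return round(total / (2 * math.pi))

c_topo, c_triv = chern(1.0), chern(3.0)  # band-inverted vs trivial regime
```

The method returns an exactly quantized integer on any grid fine enough to resolve the gap, which makes it convenient for self-consistent HF bands as well.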
In Fig~\ref{fig:DMRGFig}b, we show the response of the QAH phase to a magnetic Zeeman field, $H_z = -\frac{h}{2}\sum_{i} (n_{i\uparrow} - n_{i\downarrow})$. The Hall conductivity $\sigma_{xy}$ changes sign abruptly at $h=0$. There is a discontinuity in $Z$ at $h = 0$ due to broken symmetry, after which total $|Z|$ increases smoothly with $h$. This is possible via an increase in canting of the $A$ sublattice, and is a signature of our QAH phase with a partial spin $s_z$ polarization---as opposed to fully saturated---at zero field. We also comment on the stability of the QAH phase against nearest neighbor repulsion, $H_V = V \sum_{\langle i,j\rangle} n_i n_j$. At small $V$, the QAH phase remains present, albeit in a narrower range of $\Delta$~\cite{supp}. When $V$ is sufficiently large, an abrupt transition between $A$- and $B$-sublattice polarized Mott insulators is found around $\Delta=0$, without the intervening QAH phase. \begin{figure}[t] \includegraphics[width=\columnwidth]{TMDFig_v3.pdf} \caption{ (a) HF phase diagram using the realistic model parameters $(t_A,t_B,t_{AB})=(4.5,9,2)$meV~\cite{supp} describing holes in MoTe$_2$/WSe$_2$. Color indicates the charge gap. We find Mott, QAH, and metal phases near band inversion. Inset shows the quasiparticle bands near the Fermi energy in the metal phase at $\Delta=65$meV and $U=100$meV. Note that our tight binding model describes holes in this system, hence these bands are minus the electron bands. (b) Illustration of the moir\'e superlattice in MoTe$_2$/WSe$_2$. Low energy hole states on the MoTe$_2$ layer are localized on the MM (red), and WSe$_2$ on the XX (blue) regions. Together, they form an effective honeycomb lattice. }\label{fig:TMDFig} \end{figure} Let us now apply our theory to TMD bilayers and, in particular, the AB-stacked MoTe$_2$/WSe$_2$ heterobilayer.
Our theory provides a direct explanation for the observed transition from a Mott insulator to a QAH state in MoTe$_2$/WSe$_2$ at $n=1$ filling of holes, driven by the applied displacement field~\cite{Tingxin2021}. Our tight binding model captures the topology and essential features of the topmost valence bands from the two layers after a particle-hole transformation. The role of the displacement field is to decrease the band offset between the two layers, or equivalently, reduce the charge transfer energy $\Delta$. For $\Delta$ below a critical value $\Delta_c>0$, the quasiparticle gap between the MoTe$_2$ and WSe$_2$ Hubbard bands is inverted, leading to a QAH insulator. Figure~\ref{fig:TMDFig}a shows the HF phase diagram calculated using realistic parameters for MoTe$_2$/WSe$_2$~\cite{supp}, as a function of $\Delta$ and $U$ near band inversion. As $\Delta$ is reduced, we find that the Mott insulating phase transitions into the non-coplanar QAH phase, which further transitions into a metal for $U\lesssim 160$meV. In this metallic phase, the bands are deeply inverted beyond the $U_B=0$ quadratic band touching regime ($\delta < -4\lambda^2 m_B$ in our effective theory) and $U$ is not large enough to spin polarize the $B$ band. The resulting quasiparticle band structure, shown in the inset of Fig~\ref{fig:TMDFig}a, features a nearly spin-degenerate hole pocket on the WSe$_2$ layer and a spin-non-degenerate electron pocket on the MoTe$_2$ layer. Thus, this metal phase is a compensated semimetal with $xy$ magnetic order and small quasiparticle Fermi surfaces. Our phase diagram showing Mott insulator, QAH state, and compensated semimetal as a function of displacement field agrees with the experimentally observed phases in MoTe$_2$/WSe$_2$~\cite{Tingxin2021}.
Our theory further predicts that (1) at small displacement field, the Mott insulator on the MoTe$_2$ layer is an intervalley coherent ($xy$ ordered) state; (2) the QAH state displays {\it partial} valley $z$ polarization on both layers, and simultaneously, intervalley coherence on the MoTe$_2$ layer. The $z$ and $xy$ components of the valley order parameter increase and decrease with the displacement field, respectively. The spontaneous valley $z$ polarization predicted in the QAH phase (but not in the Mott insulator) and its increase with displacement field can be detected by magnetic circular dichroism from exciton spin splitting at zero field. The existence of intervalley coherence, predicted for both the Mott and QAH phases, can be established through gapless spin wave transport~\cite{Bi2021}, which can be detected by optical means as demonstrated in other TMD heterobilayers~\cite{Jin2018}. In the lightly inverted regime, our QAH state features a predominantly $xy$ magnetic order, with only a small $z$ component. It differs from the QAH state in magnetically doped topological insulator films~\cite{Chang2013}, where the magnetic moments spontaneously polarize along the $z$ direction. Our case should also be contrasted with a fully valley-polarized QAH state that arises from topological flat bands with valley-contrasting Chern numbers, as widely discussed for magic-angle graphene~\cite{Sharpe2019, Serlin2020, Chen2020, Ming2020, Zhang2019b} and recently proposed for slightly twisted TMD homobilayers~\cite{devakul2021magic}. This scenario was also proposed for MoTe$_2$/WSe$_2$~\cite{xie2021theory}. In the cases discussed above, full valley polarization would be expected throughout the QAH phase. In contrast, we predict that the spontaneous valley polarization is zero prior to inversion, and develops smoothly in the QAH phase after inversion. Our work therefore uncovers a general mechanism by which QAH can emerge in the absence of flat bands.
Our mechanism of QAH from inverted Hubbard bands in charge transfer insulators is robust and does not rely on fine-tuning. The effective theory (\ref{eq:effective}), which only involves low-energy quasiparticles, is universally applicable in the vicinity of gap inversion, provided that prior to inversion: (1) the charge transfer insulator has a direct quasiparticle band gap; and (2) its electron and hole states at the gap edge have different symmetry eigenvalues. Note that these requirements are for the quasiparticle band structure of an interaction-induced insulator, {\it not} the noninteracting band structure. The central idea of this work, creating magnetic topological states by inverting the charge transfer gap, is potentially applicable to a broad range of materials. Besides MoTe$_2$/WSe$_2$, twisted TMD homobilayers under a displacement field also realize a two-band Hubbard model with a tunable charge transfer energy, and therefore may display a similar QAH phase without requiring magic-angle flat bands. Another promising platform is heterostructures between two-dimensional semiconductors and magnetic insulators. We also note the possibility of negative charge transfer gap in transition metal oxides~\cite{Ushakov2011,Choudhury2015} and perovskite nickelates~\cite{Bisogni2016}, which may provide a new avenue for topological physics. \emph{Acknowledgement} --- We are grateful to Yang Zhang, Valentin Crepel, Kin Fai Mak, Jie Shan, Shengwei Jiang, and Tingxin Li for helpful discussions on this work and related collaborations. This work is funded by the Simons Foundation through a Simons Investigator Award. LF is partly supported by the David and Lucile Packard Foundation. \bibliographystyle{unsrt}
\section{Introduction} \label{sec:intro} Observations of high-redshift ($z > 6$) quasars hold the key to understanding the formation and evolution of the earliest supermassive black holes (SMBHs) and galaxies. Recent observations of quasars at $z>6$ have revealed the existence of massive SMBHs with $\sim 10^{8} - 10^{10}$ solar masses in a very young Universe \citep[e.g.,][]{wu15,banados18,matsuoka19b,onoue19,shen19,yang20a,wang21b}, within only 920 million years of the Big Bang. This raises the question of how these SMBHs grow to a few billion solar masses within such a short time. Theoretical models with different seed black hole (BH) masses and/or different modes of accretion offer several potential explanations of the formation and growth of early SMBHs. Detailed observations of a large sample of the highest redshift quasars are needed to test these models and to improve our understanding of SMBH formation and evolution. Such studies rely on both wide-field high-redshift quasar surveys for the discovery and high-quality spectroscopic observations of high-redshift quasars at optical and near-infrared (NIR) wavelengths to measure quasar properties. Recent progress in deep imaging surveys coupled with NIR spectroscopic capabilities on large telescopes has significantly increased the sample size of $z>6$ quasars to $\sim$ 200 and pushed the quasar redshift frontier to $z\gtrsim 7.5$, deep into the epoch of reionization \citep[e.g.,][]{mortlock11,jiang16,mazzucchelli17,banados18,fan19,reed19,matsuoka19a,matsuoka19b,venemans13,venemans15,wang18, wang19,yang19b,yang20a,wang21b}. \citet[][hereafter W19]{wang19} and \citet[][hereafter Y19]{yang19b} have recently carried out a new wide-field survey for reionization-era quasars in a $\sim$20,000 deg$^2$ area by combining a number of publicly available deep optical and infrared photometric datasets. This survey has already discovered more than 35 quasars at $6.3 < z \le 7.64$ \citep[also,][]{fan19,wang18, yang20a, wang21b}.
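For reference, the age of the Universe at these redshifts follows from $t(z)=H_0^{-1}\int_z^\infty dz'/[(1+z')E(z')]$ with $E(z)=\sqrt{\Omega_m(1+z)^3+\Omega_\Lambda}$, evaluated with the flat $\Lambda$CDM parameters adopted in this paper ($\Omega_{\Lambda}=0.7$, $\Omega_{m}=0.3$, $h=0.7$). A minimal numerical sketch:

```python
import math

def age_gyr(z, h=0.7, om=0.3, ol=0.7, zmax=5000.0, n=200000):
    """Cosmic age at redshift z in Gyr for flat LambdaCDM (trapezoid rule)."""
    hubble_time = 9.778 / h  # 1/H0 in Gyr for H0 = 100 h km/s/Mpc
    f = lambda zp: 1.0 / ((1.0 + zp) * math.sqrt(om * (1.0 + zp) ** 3 + ol))
    dz = (zmax - z) / n      # integrate from z out to zmax (~ infinity)
    s = 0.5 * (f(z) + f(zmax)) + sum(f(z + i * dz) for i in range(1, n))
    return hubble_time * s * dz

t6, t764 = age_gyr(6.0), age_gyr(7.64)  # ages at z = 6 and z = 7.64
```

This gives $\sim$0.92 Gyr at $z=6$ and $\sim$0.67 Gyr at $z=7.64$, consistent with the numbers quoted above.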
These successful surveys of high-redshift quasars have significantly expanded the high-redshift quasar sample and provided valuable new targets for the investigations of both reionization history \cite[e.g.,][]{davies18,yang20a,yang20b,wang20} and early SMBHs. The measurements of BH masses in high-redshift ($z > 6$) quasars are mainly based on the quasar \ion{Mg}{2}\ emission line from NIR spectra, since \ion{Mg}{2}\ is the best tracer in the observable wavelength range (i.e., optical and NIR). Combined with the bolometric luminosity derived from the NIR spectra after applying bolometric corrections, the measurement of BH mass allows us to estimate the Eddington ratio of these SMBHs. By fitting the continuum of NIR spectra and the \ion{Mg}{2}\ emission line, the BH mass and Eddington ratio of a number of $z > 6$ quasars have been derived \citep[e.g.,][]{jiang07, kurk07, willott10, derosa14, mazzucchelli17, onoue19, shen19, schindler20}. These measurements have improved our understanding of BH growth and accretion in the early Universe and also raised questions related to early SMBH formation, accretion, and BH-host galaxy co-evolution. At the same time, NIR spectroscopy also allows studies of the rest-frame ultraviolet (UV) properties of these early quasars. The evolution of quasar spectral properties (e.g., broad emission line velocity shifts) gives insight into the physical conditions and emission mechanisms of the quasar broad-line region (BLR) \citep[e.g.,][]{gaskell82, richards11,meyer19}. In particular, the \ion{Fe}{2}/\ion{Mg}{2}\ ratio traces the chemical abundances in the quasar BLR and is an important diagnostic of the iron enrichment and the history of star formation in quasar host galaxies in the early Universe \citep[e.g.,][]{hamann99, jiang07, derosa11, schindler20, onoue20}. 
We conducted a NIR spectroscopic survey of quasars selected from a new survey (W19 and Y19) and other known $z > 6.5$ quasars that did not have published NIR spectra before our observations. In this paper, we present the NIR spectral dataset, including spectra of 37 quasars at $6.3 < z \le 7.64$, and the results obtained from its analysis. We describe the NIR spectral dataset including the quasar sample, observations, and data reduction in Section 2. The spectral analysis is presented in Section 3. We report the measurements of BH mass and Eddington ratio in Section 4 and discuss the quasar rest frame UV spectral properties in Section 5. We then discuss early SMBH growth and broad absorption line (BAL) quasars in this sample in Section 6. A summary of this work is presented in Section 7. All results below refer to a $\Lambda$CDM cosmology with parameters $\Omega_{\Lambda}$ = 0.7, $\Omega_{m}$ = 0.3, and $h$ = 0.7. \section{The NIR Dataset} \subsection{Quasar Sample} Our NIR spectroscopic observations mainly target the new quasars from a series of recent investigations \citep{fan19, wang18,wang19,wang21b,yang19b, yang20a} and also include some previously known $z>6.5$ quasars that did not have published NIR spectra before our observations. The NIR spectral sample presented in this paper is constructed based on (1) quasars from the survey described in W19 and Y19, (2) other known $z >7$ quasars (i.e., J1120+0641 and J1342+0928), (3) quasars observed in our Keck/NIRES NIR spectroscopic programs, and (4) other $z>6.5$ quasars (i.e., J0024+3913 and J2232+2930) that are not in the first three categories but have Gemini/GNIRS data in the archive. The final sample includes 37 quasars at $6.3 < z \le 7.64$. Within this sample, ten quasars have been published with BH mass measurements in the literature \citep{mortlock11,venemans15, banados18,wang18,fan19,tang19,wang20,yang20a,banados21,wang21b}. 
We include them and present our new BH mass measurements of these quasars in this paper to compare all quasar properties consistently. In this sample, there are six previously unpublished quasars. They are newly discovered objects found in an ongoing survey, based on the same selection method as used previously in W19 and Y19. W19 conducted a $z \gtrsim 6.5$ quasar survey based on color-color selection using photometric data from the DESI Legacy Imaging Surveys \citep[DELS,][]{dey18}, Pan-STARRS1 \citep[PS1,][]{chambers16}, and all public NIR imaging surveys, as well as the Wide-field Infrared Survey Explorer \citep[{\it WISE},][]{wright10} mid-infrared survey in the northern sky, while Y19 carried out a similar quasar survey in the southern sky using the data from the Dark Energy Survey \citep[DES,][]{abbott18} DR1 instead of DELS and PS1. In this paper, we report the coordinates and the NIR spectra of these six new quasars. They are J021847.039+000715.175, J052559.675--240622.98, J092358.997+075349.107, J105807.720+293041.703, J200241.594--301321.69, and J233807.032+214358.17. The quasar selection, discovery, and other properties will be presented in detail in a separate paper. Table \ref{tab:sample} lists the full sample of 37 quasars and their redshifts. As shown in Figure \ref{fig:sample}, our new NIR spectral sample comprises the largest NIR spectral dataset of quasars at $z>6.5$ (32 quasars). Thus the derived measurements from these quasars will be representative of the observed NIR properties of luminous quasars in this redshift range, in an absolute magnitude range $M_{\rm 1450} < -25.2$. Moreover, this NIR sample contains a subsample of 32 quasars that meet the uniform selection used in W19 and Y19. These quasars therefore form a complete sample for the measurement of the BH mass function at $z\sim 6.5$ (J. Yang et al. in prep).
\begin{figure} \centering \epsscale{1.15} \plotone{QSO_Mz.pdf} \caption{The redshift and absolute magnitude distribution of quasars in our sample (red squares) and other known quasars at $z>6.3$ (blue squares). This new NIR spectroscopic sample covers 37 quasars from redshift 6.35 to the most distant known one at $z=7.64$, which forms the largest NIR spectral dataset for quasars at $z>6.5$.} \label{fig:sample} \end{figure} \begin{deluxetable*}{ l l l l l l l l l l} \tablecaption{Quasar Information and Observation Information of the 37 Quasars in Our Sample.} \tabletypesize{\scriptsize} \tablewidth{0pt} \tablehead{ \colhead{Name} & \colhead{Instrument} & \colhead{Exp Time (s)} & \colhead{$z$} & \colhead{$z_{\rm err}$} & \colhead{Discovery} & \colhead{[\ion{C}{2}]} & \colhead{$z$\_Ref\tablenotemark{a}} & \colhead{NIR\tablenotemark{b}} & \colhead{$J$ (AB)\tablenotemark{c}} } \startdata J002429.77+391319.0\tablenotemark{d} & GNIRS & 13800 & 6.621 & 0.002 & \cite{tang17} & Y & \cite{mazzucchelli17} & Y & 20.77$\pm$0.15\\ J003836.10$-$152723.6 & GNIRS & 15300 & 7.0340 & 0.0003 & \cite{wang18} & Y & Wang in prep & Y & 19.69$\pm$0.07\\ J004533.57+090156.9\tablenotemark{d} & NIRES & 13680 & 6.4694 & 0.0025 & \cite{mazzucchelli17} & Y & \cite{eilers20} & N & 20.80$\pm$0.13\\ J021847.04+000715.2\tablenotemark{e} & NIRES & 5760 & 6.7700 & 0.0013 & Yang in prep & Y & Wang in prep & N & 21.08$\pm$0.30\\ J024655.90$-$521949.9 & X-Shooter & 24000& 6.8876 & 0.0003 & Y19 & Y & Wang in prep & N & 21.29$\pm$0.19\\ J025216.64$-$050331.8& NIRES/X-Shooter & 18000/28800 & 7.0006 & 0.0009 & Y19 & Y & Wang in prep & Y & 20.19$\pm$0.07\\ J031343.84$-$180636.4 & FIRE/F2/ & 21723/11040/ & 7.6423 & 0.0013 & \cite{wang21b}& Y & Wang in prep & Y & 20.94$\pm$0.13\\ &GNIRS/NIRES & 29100/16200 & & & & & & & \\ J031941.66$-$100846.0 & NIRES & 18720 & 6.8275 & 0.0021 & Y19 & Y & Wang in prep & N & 20.98$\pm$0.24\\ J041128.63$-$090749.8 & NIRES & 5760 & 6.8260 & 0.0007 & W19 & Y & Wang in prep & N &
20.02$\pm$0.14\\ J043947.08+163415.7 & GNIRS & 3600 & 6.5188 & 0.0004 & \cite{fan19} & Y & \cite{yang19a} & Y & 17.46$\pm$0.02\\ J052559.68$-$240623.0\tablenotemark{e} & F2 & 5400 & 6.5397 & 0.0001 & Yang in prep & Y & Wang in prep & N & ---\\ J070626.39+292105.5 & NIRES & 15210 & 6.6037 & 0.0003 & W19 & Y & Wang in prep & N & 19.16$\pm$0.05\\ J080305.42+313834.2 & GNIRS & 3600 & 6.377 & 0.006 & W19 & N & W19 & N & 20.12$\pm$0.12\\ J082931.97+411740.4 & GNIRS & 13500 & 6.768 & 0.006 & W19 & N & W19 & N & 20.28$\pm$0.15\\ J083737.84+492900.4 & GNIRS & 17400 & 6.710 & 0.008 & W19 & N & W19 & N & 20.21$\pm$0.17\\%J0837+4929 J083946.88+390011.5 & GNIRS & 16800 & 6.905 & 0.01 & W19 & N & W19 & N & 20.39$\pm$0.20\\ J091054.53$-$041406.8 & GNIRS/NIRES & 3600/3600 & 6.6363 & 0.0003 & W19 & Y & Wang in prep & N & 20.25$\pm$0.14\\ J091013.63+165629.8 & GNIRS & 13200 & 6.7289 & 0.0005 & W19 & Y & Wang in prep & N & 21.06$\pm$0.13\\ J092120.56+000722.9 & GNIRS & 9600 & 6.5646 & 0.0003 & \cite{matsuoka18} & Y & Wang in prep & N & 21.21$\pm$0.28\\ J092347.12+040254.4 & NIRES & 11880 & 6.6330 & 0.0003 & W19,\cite{matsuoka18} & Y & Wang in prep & N & 20.02$\pm$0.09\\ J092359.00+075349.1\tablenotemark{e} & GNIRS & 7200 & 6.6817 & 0.0005 & Yang in prep & Y & Wang in prep & N & ---\\ J100758.26+211529.2 & GNIRS/NIRES & 21900/7920 & 7.5149 & 0.0004 & \cite{yang20a} & Y & \cite{yang20a} & Y & 20.22$\pm$0.18\\ J105807.72+293041.7\tablenotemark{e} & NIRES & 3600 & 6.5846 & 0.0005 & Yang in prep & Y & Wang in prep & N & ---\\ J110421.59+213428.8 & GNIRS & 7200 & 6.7662 & 0.0009 & W19 & Y & Wang in prep & N & 19.95$\pm$0.12\\ J112001.48+064124.3 & GNIRS & 4800 & 7.0851 & 0.0005 & \cite{mortlock11} & Y & \cite{venemans17a} & Y & 20.35$\pm$0.15\\ J112925.34+184624.2\tablenotemark{d} & NIRES & 12600 & 6.823 & 0.003 & \cite{banados21} & N & \cite{banados21} & Y & 20.90$\pm$0.11\\ J113508.93+501133.0 & GNIRS/NIRES & 7200/4800 & 6.5851 & 0.0008 & W19 & Y & Wang in prep & N & 20.41$\pm$0.16\\ 
J121627.58+451910.7 & GNIRS & 4800 & 6.65 & 0.01 & W19 & N & W19 & N & 21.02$\pm$0.13\\ J131608.14+102832.8 & NIRES & 3000 & 6.35 & 0.04 & W19 & N & W19 & N & 20.75$\pm$0.12\\ J134208.10+092838.6 & GNIRS & 32400 & 7.5413 & 0.0007 & \cite{banados18} & Y & \cite{venemans17b} & Y & 20.30$\pm$0.02\\ J153532.87+194320.1 & NIRES & 2880 & 6.40 & 0.05 & W19 & N & W19 & N & 19.64$\pm$0.11\\ J172408.74+190143.0\tablenotemark{d} & NIRES & 15120 & 6.44 & 0.05 & \cite{mazzucchelli17} & N & \cite{mazzucchelli17} & N & 21.09$\pm$0.18\\ J200241.59$-$301321.7\tablenotemark{e} & GNIRS & 3600 & 6.6876 & 0.0004 & Yang in prep & Y & Wang in prep & N & 19.97$\pm$0.16\\ J210219.22$-$145854.0 & GNIRS/NIRES & 10200/5760 & 6.6645 & 0.0002 & W19 & Y & Wang in prep & N & 21.14$\pm$0.20\\ J221100.60$-$632055.8 & X-Shooter & 31200 & 6.8449 & 0.0003 & Y19 & Y & Wang in prep & N & 21.23$\pm$0.18\\ J223255.15+293032.0\tablenotemark{d} & GNIRS & 4800 & 6.666 & 0.004 & \cite{venemans15} & Y & \cite{mazzucchelli17} & Y & 20.37$\pm$0.14\\ J233807.03+214358.2\tablenotemark{e} & GNIRS/NIRES & 1500/7200 & 6.60 & 0.03 & Yang in prep & N & Yang in prep & N & 20.75$\pm$0.30\\ \enddata \tablenotetext{a}{The reference for the redshifts used in the spectral analysis. If the quasar has a [\ion{C}{2}] detection (column [\ion{C}{2}] = Y), the reference is for the [\ion{C}{2}]-based redshift, and the redshift listed in column $z$ is the [\ion{C}{2}]-based redshift. Most of the [\ion{C}{2}] detections are from a series of ALMA/NOEMA programs that will be reported in detail in F. Wang et al. (in prep).} \tablenotetext{b}{The NIR column reports whether the object has previously published BH mass measurements (Y or N).} \tablenotetext{c}{The $J$-band photometric data used to scale the NIR spectra. 
For the quasars J0525--2406, J0923+0753, and J1058+2930 without $J$ data, we used $Y$ or $K$-band photometry, as described in Section 2.3.} \tablenotetext{d}{These quasars are also known as PSO J006.1240+39.2219, PSO J011.3898+09.0324, PSO J172.3556+18.7734, PSO J261.0364+19.0286, and PSO J338.2298+29.5089, respectively.} \tablenotetext{e}{These quasars were previously unpublished. Details of their selection and identification will be reported separately (J. Yang et al. in prep).} \label{tab:sample} \end{deluxetable*} \subsection{NIR Spectroscopy} We obtained NIR spectroscopy of our quasar sample using the following facilities: Keck/NIRES (Near-Infrared Echellette Spectrometer, \citealt{elias06a,elias06b}), Gemini/GNIRS (Gemini Near-Infrared Spectrograph, \citealt{wilson04}), VLT/X-Shooter \citep{vernet11}, Gemini/F2 (FLAMINGOS-2 near-infrared imaging spectrograph, \citealt{eikenberry04}), and Magellan/FIRE (Folded-port InfraRed Echellette, \citealt{simcoe10}). Table \ref{tab:sample} lists the instrument used to observe each quasar and the exposure times; the observations with each instrument are described below. \begin{itemize} \vspace{-8pt} \setlength{\itemsep}{-0.5em} \item[1)] We observed 18 quasars with Keck/NIRES from 2018 to 2020. Keck/NIRES has a fixed configuration that simultaneously covers 0.94 to 2.45 $\mu$m with a fixed $0\farcs55$ narrow slit, resulting in a resolving power of $R\sim2700$. \item[2)] Spectra of 22 quasars were taken with Gemini North/GNIRS, including 18 quasars observed in our programs from 2018 to 2020 and four from Gemini archival data. We used the short-slit (cross-dispersion) mode (32 l/mm) with simultaneous coverage of 0.85--2.5 $\mu$m. A 0$\farcs$675 slit was used, corresponding to $R\sim700$. For the archival data, three of the quasars were observed with a 0$\farcs$675 slit, and one (J2232+2930) was observed using a 1$\farcs$0 slit ($R\sim500$).
\item[3)] In addition, we observed three quasars with VLT/X-Shooter (ID: 0103.A-0423(A)) in 2019. X-Shooter covers the wavelength range from 3000 to 24800 \AA. We used a 0$\farcs$9 slit for the VIS arm (5595--10240 \AA) and a 0$\farcs$6 slit for the NIR arm (10240--24800 \AA), resulting in resolving powers of 8900 and 8100, respectively. \item[4)] Quasars J0313--1806 and J0525--2406 were observed with Gemini South F2 in 2019. For both, we used a slit width of 0$\farcs$72, which delivers a spectral resolving power of $R\sim400$. With F2, only an $HK$ range spectrum was obtained, covering the wavelength range from 1.45 to 2.5 $\mu$m. \item[5)] In addition to the NIRES, GNIRS, and F2 observations, quasar J0313--1806 was also observed with Magellan/FIRE (0.8--2.5 $\mu$m) in Echelle mode in 2019 November and December with 0$\farcs$75 and 1$\farcs$0 slits, corresponding to resolving powers of $R\sim4800$ and $R\sim3600$, respectively. In total, seven quasars were observed with multiple instruments. \end{itemize} \subsection{Data Reduction} All NIR spectra are reduced with the open-source Python-based spectroscopic data reduction pipeline {\tt PypeIt}\footnote{\url{https://github.com/pypeit/PypeIt}} \citep{prochaska20}. The wavelength solutions are derived from the night-sky OH lines in the vacuum frame. We choose this method in order to use on-sky wavelength calibrations and to reduce observational overheads, given that our science goals do not require very high resolution. The sky subtraction is based on the standard A--B mode, with a b-spline fitting procedure performed to further clean up the sky-line residuals following \cite{bochanski09}. An optimal extraction \citep{horne86} is performed to generate 1D science spectra. The extracted spectra are flux calibrated with sensitivity functions derived from observations of spectroscopic standard stars.
Telluric absorption is corrected by fitting absorption models to the quasar spectra; the absorption models are constructed from a telluric model plus a quasar model. The telluric model grids are produced from the Line-By-Line Radiative Transfer Model \citep[{\tt LBLRTM}\footnote{\url{http://rtweb.aer.com/lblrtm.html}};][]{clough05}. The quasar model is based on a principal component analysis method \citep{davies18}. To avoid correlated noise, the stacking of individual exposures or of spectra from multiple instruments does not employ any interpolation. We determine a common wavelength grid based on the dispersion of each instrument. The wavelength grid is sampled linearly in velocity space for echelle spectrographs and linearly in wavelength for other long-slit spectrographs. For spectra from multiple instruments, the wavelength grid is derived from the lowest resolution spectrum. We then use a histogram technique to divide all native pixels into wavelength bins. The stacked flux in each wavelength bin is then computed as the mean flux density of values from all native pixels in that bin, weighted by the average square of the signal-to-noise ratio (S/N) of the exposure that contains that pixel. The reduced spectrum of each quasar is then scaled using its $J$-band magnitude (or scaled with $K$ or $Y$ if $J$ band is not available). We choose the $J$ band instead of the $K$ band, which includes the quasar \ion{Mg}{2}\ emission line, since only a few objects have $K$-band photometric data. For the quasars (23 objects) included in public $J$-band photometric catalogs from the UKIRT Hemisphere Survey \citep[UHS;][]{dye18}, the UKIRT InfraRed Deep Sky Surveys--Large Area Survey \citep[ULAS;][]{lawrence07}, or the VISTA Hemisphere Survey \citep[VHS;][]{mcmahon13}, we use the public $J$-band data. For quasars not in these catalogs but which have published $J$-band magnitudes (8 objects), we use the corresponding $J$-band data.
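The histogram-based, interpolation-free stacking described above can be sketched as follows; this is a minimal numpy version, in which the common wavelength-grid edges and the per-exposure average S/N are assumed to be supplied by earlier reduction steps (the function and variable names are illustrative, not from {\tt PypeIt}):

```python
import numpy as np

def stack_spectra(waves, fluxes, snrs, grid_edges):
    """Histogram-based stacking without interpolation: each native pixel
    is assigned to a bin of the common wavelength grid, and the stacked
    flux in each bin is the (S/N)^2-weighted mean flux density.

    waves, fluxes : lists of per-exposure native-pixel arrays.
    snrs          : per-exposure average S/N (weights are snr**2).
    grid_edges    : edges of the common wavelength grid."""
    nbins = len(grid_edges) - 1
    num = np.zeros(nbins)  # weighted flux sums per bin
    den = np.zeros(nbins)  # weight sums per bin
    for w, f, snr in zip(waves, fluxes, snrs):
        idx = np.digitize(w, grid_edges) - 1  # bin index of each native pixel
        ok = (idx >= 0) & (idx < nbins)       # drop pixels outside the grid
        np.add.at(num, idx[ok], snr**2 * f[ok])
        np.add.at(den, idx[ok], snr**2)
    with np.errstate(invalid="ignore"):
        return num / den  # weighted mean flux density per bin
```

Because each native pixel contributes to exactly one bin, no covariance between adjacent stacked pixels is introduced, which is the point of avoiding interpolation.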
Then, for the quasars without either of those sources of $J$-band data, if they have been observed in our UKIRT/WFCAM imaging programs \citep{wang19}, we use photometric data from our UKIRT images (2 objects). For those quasars that do not have any of the $J$ data described above but are covered by $J$-band images from public surveys, we perform forced photometry (3\arcsec\ diameter) on the public $J$-band images. The quasars J0923+0753 and J1058+2930 have only $< 3\sigma$ forced photometric magnitudes in the $J$ band. For J0923+0753, we use its forced photometric data in the $Y$ band (21.25$\pm$0.26 in AB), a $>4\sigma$ detection. For the quasar J1058+2930, which does not have public images in any NIR band, we use the photometry from the NIRES acquisition image in the $K_s$ band (20.56$\pm$0.05 in AB). As a check, we also apply the same scaling factor as that for the quasar J0910--0414, which was observed with NIRES just a few hours before J1058+2930 on the same night. Both methods yield consistent scaling factors, with a 5\% difference in flux density. The quasar J0525--2406 has only an F2 spectrum covering the $H$ and $K$ bands, and it does not have public NIR images. For this object we obtained $K_s$-band imaging with MMT/MMIRS and scale its spectrum using this $K_s$-band photometry (20.48$\pm$0.09 in AB). All scaled spectra are then corrected for Galactic extinction based on the dust map of \cite{schlegel98} and the extinction law of \cite{cardelli89}. All photometric data used to scale spectra are listed in Table \ref{tab:sample}. All final spectra are shown in Figure \ref{fig:fitting} and Figure \ref{fig:allspec01}. Since this work focuses on the NIR range, we plot the spectra only at wavelengths redder than 9800 \AA. \section{Spectral Analysis} After obtaining the final spectral dataset, we fit each quasar spectrum to derive spectral properties for further measurements.
In this section, we describe our spectral fitting procedure for the quasar continuum and emission lines. We also discuss a few individual quasars with unusual spectral features. \begin{figure*} \centering \epsscale{1.18} \plotone{J0319m1008_Specfit.png} \caption{An example of spectral fitting for the quasar J0319--1008 at $z = 6.8$. {\bf (a)} The spectrum (black line) is acquired with Keck/NIRES. The grey line shows the spectral uncertainty. The purple dashed line represents the best-fit power-law continuum and the solid red line denotes the total fit. The two inset plots show the fits to the \ion{C}{4}\ and \ion{Mg}{2}\ emission lines. The orange and blue solid lines represent the best-fit emission line and iron components, respectively. The quasar J0319--1008 has a [\ion{C}{2}]-based redshift of 6.8275$\pm$0.0021 (F. Wang et al. in prep). The line fitting yields a \ion{Mg}{2}-based redshift of 6.816$\pm$0.004, and the continuum fitting yields a power-law slope of $\alpha_{\rm \lambda}=-0.45\pm0.3$. The fitting of all other quasar spectra is shown in Figure \ref{fig:allspec01} in Appendix A. {\bf (b)} The residual (data -- model) of the spectral fitting. {\bf (c)} The telluric model used for the telluric correction for this quasar.} \label{fig:fitting} \end{figure*} \subsection{Spectral Fitting} We fit each near-IR spectrum with a model consisting of a continuum plus emission lines. The initial redshift is chosen to be the [\ion{C}{2}]-based redshift or the published redshift listed in Table \ref{tab:sample}. The pseudo-continuum includes a power-law continuum, an \ion{Fe}{2}\ template \citep{vestergaard01,tsuzuki06}, and a Balmer continuum \citep{derosa14}. The \ion{Fe}{2}\ template used here is a combination of the templates from \cite{vestergaard01} and \cite{tsuzuki06}. \citet[][hereafter VW01]{vestergaard01} constructed an empirical ultraviolet iron template covering the wavelength range 1250--3090 \AA\ based on spectra of the narrow-line Seyfert 1 galaxy I Zw 1.
At that time, the iron emission underlying the \ion{Mg}{2}\ line could not be well estimated, so the iron emission in the template was set to zero over this region. \citet[][hereafter T06]{tsuzuki06} derived an \ion{Fe}{2}\ template from the spectrum of I Zw 1 for the regions 2200--3500 and 4200--5600 \AA\ and used synthetic spectra calculated with the CLOUDY photoionization code to separate the underlying \ion{Fe}{2}\ emission from the \ion{Mg}{2}\ emission line. In this work, we combine the \ion{Fe}{2}\ emission from the VW01 template for 1100--2200 \AA\ with the T06 template for 2200--3500 \AA, in order to obtain a template that covers a wide wavelength range and also contains the \ion{Fe}{2}\ emission beneath the \ion{Mg}{2}\ line. When fitting a spectrum, the iron template is broadened by convolving it with a Gaussian kernel derived from the width of the \ion{Mg}{2}\ line. Gaussian fits of the \ion{C}{4}\ and \ion{Mg}{2}\ emission lines are then performed on the continuum-subtracted spectrum. The \ion{Si}{4} and \ion{C}{3}] lines are also fitted if they are visible. However, the \ion{Si}{4} line sometimes lies too close to the edge of the recorded spectrum and thus has lower S/N, and most of the \ion{C}{3}] lines are fully or partly located within the region affected by strong telluric absorption (i.e., $\sim$ 13500--14200 \AA). The fits of these two lines are therefore of lower quality than those of the \ion{C}{4}\ and \ion{Mg}{2}\ lines, and we do not use them in the scientific analysis in this paper. For most objects, a two-component Gaussian profile is used to fit each emission line, while for a few objects only a one-component Gaussian is used. For example, for the quasar J0910--0414, we use only one Gaussian to fit its \ion{Mg}{2}\ line due to multiple strong absorption features around the \ion{Mg}{2}\ line.
When all the absorption features are masked, the wide absorption range leaves a significant gap at the line center, such that a two-Gaussian model fit would result in a double-peaked emission-line model. For a similar reason, a one-component Gaussian model is also used for the \ion{C}{4}\ line of the quasar J1058+2930. Four quasars do not have \ion{C}{4}\ fits: the quasar J0525--2406, which has no spectrum blueward of 14500 \AA, and three quasars with unusual spectral features (i.e., J0910--0414, J1316+1028, and J1535+1943; see details in the next subsection). The redshifts derived from the \ion{C}{4}\ and \ion{Mg}{2}\ emission lines are based on the line centroids \citep{peterson04} rather than the line peaks. Consequently, any strong blueshifted or redshifted component will result in a redshift measurement that differs from one based on the line peak. The uncertainties of all spectral measurements are estimated using a Monte Carlo approach, following \cite{yang20a} \citep[see also][]{shen19, wang20}. For each spectrum, we generate 100 mock spectra by randomly adding Gaussian noise at each pixel with standard deviation equal to the spectral error at that pixel. We apply the same fitting procedure to each mock spectrum to obtain the corresponding measurements. The uncertainties of the spectral measurements are then estimated as the average of the 16\% and 84\% percentile deviations from the median value. The best fits of the continuum and the \ion{C}{4}\ and \ion{Mg}{2}\ lines for each quasar are shown in Figures \ref{fig:fitting} and \ref{fig:allspec01}. The spectral fitting yields a set of spectral properties for these quasars, including the continuum slope, luminosity, emission line FWHM, and line rest-frame equivalent width (EW). The redshifts derived from the UV emission lines and the line velocity shifts will be discussed in detail in Section 5.1.
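The Monte Carlo error estimate just described can be sketched as follows, with the full spectral-fitting procedure stood in for by an arbitrary scalar measurement function (a simplification for illustration; in practice the whole continuum-plus-line model is refit on each mock spectrum):

```python
import numpy as np

def mc_uncertainty(flux, err, measure, n_mock=100, seed=0):
    """Perturb the spectrum n_mock times with Gaussian noise whose
    per-pixel sigma equals the spectral error, re-measure each mock,
    and quote the average of the 16% and 84% percentile deviations
    from the median as the 1-sigma uncertainty."""
    rng = np.random.default_rng(seed)
    vals = np.array([measure(flux + rng.normal(0.0, err))
                     for _ in range(n_mock)])
    med = np.median(vals)
    p16, p84 = np.percentile(vals, [16, 84])
    return med, 0.5 * ((med - p16) + (p84 - med))
```

For a Gaussian-distributed measurement this percentile-based estimate converges to the usual standard deviation, but it remains meaningful when the mock distribution is skewed, e.g., for the FWHM of a weak line.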
The quasars in our sample have power-law continuum slopes in the range of --1.74 to --0.24 ($f_{\rm \lambda}\propto \lambda^{\alpha_{\rm \lambda}}$), with a mean of $\alpha_{\rm \lambda} = -1.2$ and a 1$\sigma$ dispersion of 0.4. The mean is in good agreement with the mean slopes from quasar samples at similar redshifts (e.g., $\alpha_{\rm \lambda}=-1.2$ from the NIR spectral sample in \citealt{mazzucchelli17} and $\alpha_{\rm \lambda}=-1.4$ in \citealt{schindler20}). It is also consistent with the quasar composites generated from low-redshift quasars (e.g., $\alpha_{\rm \lambda}=-1.5$ in \citealt{vandenberk01} and $\alpha_{\rm \lambda}=-1.7$ in \citealt{selsing16}) within the uncertainty, although our result has a slightly redder slope. The absolute rest-frame 1450 \AA\ magnitudes are derived from the best-fit power law continuum directly. We also measure the rest-frame 3000 \AA\ luminosity and convert it to a bolometric luminosity assuming a bolometric correction factor of 5.15 \citep{richards06,shen11}. These quasars are in the luminosity range 0.5 -- 3.4 $\times 10^{47}$ erg s$^{-1}$. The range of FWHM of the \ion{C}{4}\ lines is $\sim$ 1900 -- 12000 km\,s$^{-1}$ with a mean of 5900 km\,s$^{-1}$, and the \ion{Mg}{2}\ lines have FWHMs of $\sim$ 1700 -- 5500 km\,s$^{-1}$ with a mean of 3000 km\,s$^{-1}$. The EWs of \ion{C}{4}\ are in the range of 6 to 70 \AA\ and have a mean of 30 \AA. The EWs of the \ion{Mg}{2}\ line are from 8 to 35 \AA\ with a mean of 20 \AA. All these measurements are summarized in Table \ref{tab:fitting}. \subsection{Notes on Individual Objects} Some quasars have unusual spectral features such as reddened continuum shapes or strong absorption lines. We describe these objects and their spectral fitting separately. {\it BAL quasars -- } In our sample, there are nine quasars with significant BAL features (see details in Section 6.2), which significantly affect the spectral fitting. 
In particular, the quasar J0910--0414 has multiple metal absorption systems within the emission-line profiles in addition to its BAL feature, indicative of both outflow and inflow; these will be discussed in a separate paper. Its strong absorption features mask most of the emission at $< 1570$ \AA\ (rest-frame), so we use only the longer-wavelength range for spectral fitting. For the quasars J0246--5219 and J0038--1527, we also mask the wavelength range shorter than rest-frame 1500 \AA\ because of their strong BAL absorption on the blue side. For the other BAL quasars, we mask the BAL troughs for spectral fitting. {\it Red quasars --} The quasars J0246--5219, J0319--1008, and J1316+1028 have red continua with slopes $\alpha_{\rm \lambda} > -0.5$. They all have red $J$--W1 colors ($J$--W1 $> 2.5$). The spectrum of J1316+1028 does not show a \ion{C}{4}\ line and has only a tentative \ion{C}{3}] line, which may be affected by the low S/N in this wavelength range. This quasar shows a strong BAL feature in its observed optical spectrum \citep{wang19} but much weaker absorption features in the NIR spectrum. {\it Unusual reddened quasar -- } The quasar J1535+1943 has a reddened continuum in $Y$ and $J$ but a relatively blue continuum at redder wavelengths. If this reddening is caused by dust extinction, the extinction curve, relatively flat at wavelengths redward of rest-frame $1700$ \AA\ and steeply rising at shorter wavelengths, is quite similar to that of the quasar SDSS1048+46 at $z=6$ described in \cite{maiolino04}. This kind of dust extinction detected in high-redshift quasar spectra could provide evidence for the origin of dust at early epochs (e.g., a supernova origin for the dust). A detailed discussion of its dust extinction will be presented in a subsequent paper (J. Yang et al. in prep). Given the relatively flat extinction at $> 1700$ \AA, we fit its continuum using only the spectrum at longer wavelengths.
A slope of $\alpha_{\rm \lambda}=-0.92\pm0.02$ is obtained. Due to the uncertain dust extinction, the luminosity measured from the observed spectrum is a lower limit, and thus its BH mass is also a lower limit. \begin{deluxetable*}{l l l l l l l l l l l l l} \rotate \tablecaption{Spectral Fitting and Quasar Properties} \tabletypesize{\footnotesize} \tablewidth{0pt} \tablehead{ \colhead{Name} & \colhead{$z_{\rm [CII]}$} & \colhead{$z_{\rm CIV}$} & \colhead{$z_{\rm MgII}$} & \colhead{$M_{\rm 1450}$} & \colhead{FWHM$_{\rm CIV}$ } & \colhead{FWHM$_{\rm MgII}$ } & \colhead{EW$_{\rm CIV}$ } & \colhead{EW$_{\rm MgII}$ } & \colhead{$L_{\rm Bol}$} & \colhead{$M_{\rm BH}$} & \colhead{$\alpha_{\rm \lambda}$\tablenotemark{a}} & \colhead{$\lambda_{\rm Edd}$} \\ \nocolhead{} & \nocolhead{} & \nocolhead{} & \nocolhead{} & \nocolhead{} & \colhead{(km s$^{-1}$)} & \colhead{(km s$^{-1}$)} & \colhead{(\AA)} & \colhead{(\AA)} & \colhead{($10^{46}$ erg s$^{-1}$)} & \colhead{($10^{9}\,M_{\odot}$)} & \nocolhead{} & \nocolhead{} } \startdata J0024+3913 & 6.621$\pm$0.002 & 6.608$\pm$0.002 & 6.620$\pm$0.004 & --25.65 & 1908$\pm$26 & 1741$\pm$118 & 68.8$\pm$11.1 & 28.8$\pm$6.7 & 7.8$\pm$1.0 & 0.27$\pm$0.02 & --1.11$\pm$0.02 & 2.3$\pm$0.4 \\ J0038--1527\tablenotemark{c} & 7.0340$\pm$0.0003 & 6.929$\pm$0.003 & 6.999$\pm$0.001 & --27.13 & 7800$\pm$349 & 2954$\pm$17 & 12.0$\pm$1.5 & 15.0$\pm$0.6 & 23.7$\pm$0.9 & 1.36$\pm$0.05 & --1.46$\pm$0.02 & 1.4$\pm$0.1 \\ J0045+0901 & 6.4694$\pm$0.0025 & 6.43$\pm$0.02 & 6.441$\pm$0.004 & --25.86 & 6373$\pm$2299 & 2816$\pm$110 & 12.6$\pm$2.8 & 15.5$\pm$3.6 & 6.2$\pm$0.6 & 0.63$\pm$0.02 & --1.69$\pm$0.03 & 0.8$\pm$0.1 \\ J0218+0007 & 6.7700$\pm$0.0013 & 6.725$\pm$0.007 & 6.766$\pm$0.004 & --25.55 & 5406$\pm$983 & 2745$\pm$73 & 18.0$\pm$11.4 & 26.0$\pm$6.4 & 6.4$\pm$1.4 & 0.61$\pm$0.07 & --1.24$\pm$0.13 & 0.8$\pm$0.2 \\ J0246--5219\tablenotemark{c} & 6.8876$\pm$0.0003 & 6.851$\pm$0.002 & 6.86$\pm$0.02 & --25.36 & 4070$\pm$286 & 3211$\pm$523 & 30.6$\pm$1.8 &
30.4$\pm$18.2 & 10.2$\pm$1.0 & 1.05$\pm$0.37 & --0.37$\pm$0.05 & 0.8$\pm$0.3 \\ J0252--0503 & 7.0006$\pm$0.0009 & 6.867$\pm$0.005 & 6.99$\pm$0.02 & --26.63 & 11286$\pm$698 & 3327$\pm$126 & 17.3$\pm$1.0 & 17.6$\pm$2.8 & 13.2$\pm$0.4 & 1.28$\pm$0.09 & --1.62$\pm$0.02 & 0.8$\pm$0.1 \\ J0313--1806\tablenotemark{c} & 7.6423$\pm$0.0013 & 7.523$\pm$0.01 & 7.611$\pm$0.004 & --26.13 & 8740$\pm$1828 & 3670$\pm$405 & 14.2$\pm$0.9 & 9.5$\pm$2.7 & 14.0$\pm$0.7 & 1.61$\pm$0.40 & --0.91$\pm$0.01 & 0.7$\pm$0.2 \\ J0319--1008 & 6.8275$\pm$0.0021 & 6.809$\pm$0.005 & 6.816$\pm$0.004 & --25.36 & 3164$\pm$205 & 2006$\pm$20 & 70.1$\pm$26.2 & 32.8$\pm$4.8 & 9.6$\pm$1.4 & 0.40$\pm$0.03 & --0.45$\pm$0.35 & 1.9$\pm$0.3 \\ J0411--0907 & 6.8260$\pm$0.0007 & 6.790$\pm$0.005 & 6.827$\pm$0.006 & --26.58 & 4046$\pm$1047 & 2729$\pm$96 & 43.2$\pm$4.8 & 19.6$\pm$1.6 & 15.9$\pm$1.0 & 0.95$\pm$0.09 & --1.31$\pm$0.03 & 1.3$\pm$0.2 \\ J0439+1634\tablenotemark{b}\tablenotemark{c} & 6.5188$\pm$0.0004 & 6.492$\pm$0.002 & 6.519$\pm$0.003 & --25.31 & 6067$\pm$277 & 3030$\pm$65 & 41.0$\pm$1.8 & 17.8$\pm$0.6 & 4.6$\pm$0.1 & 0.63$\pm$0.02 & --1.41$\pm$0.03 & 0.6$\pm$0.1 \\ J0525--2406 & 6.5397$\pm$0.0001 & --- & 6.543$\pm$0.002 & --25.47 & --- & 1877$\pm$345 & --- & 14.5$\pm$11.9 & 6.8$\pm$3.5 & 0.29$\pm$0.04 & --1.07$\pm$0.93 & 1.8$\pm$1.0 \\ J0706+2921\tablenotemark{c} & 6.6037$\pm$0.0003 & 6.54$\pm$0.02 & 6.5925$\pm$0.0004 & --27.44 & 8673$\pm$467 & 3372$\pm$106 & 30.5$\pm$4.2 & 20.4$\pm$1.5 & 33.9$\pm$1.5 & 2.11$\pm$0.16 & --1.35$\pm$0.03 & 1.3$\pm$0.1 \\ J0803+3138 & --- & 6.332$\pm$0.005 & 6.384$\pm$0.004 & --26.49 & 8073$\pm$743 & 3460$\pm$173 & 23.2$\pm$2.3 & 18.5$\pm$3.4 & 13.4$\pm$1.1 & 1.40$\pm$0.18 & --1.43$\pm$0.04 & 0.8$\pm$0.1 \\ J0829+4117 & --- & 6.736$\pm$0.001 & 6.773$\pm$0.007 & --26.07 & 4405$\pm$1411 & 2488$\pm$107 & 63.7$\pm$9.2 & 21.7$\pm$3.6 & 12.8$\pm$1.2 & 0.71$\pm$0.02 & --0.95$\pm$0.02 & 1.4$\pm$0.1 \\ J0837+4929 & --- & 6.677$\pm$0.002 & 6.702$\pm$0.001 & --26.33 & 4165$\pm$401 & 
2577$\pm$36 & 37.8$\pm$2.6 & 34.5$\pm$1.9 & 14.5$\pm$0.4 & 0.81$\pm$0.01 & --1.11$\pm$0.03 & 1.4$\pm$0.1 \\ J0839+3900\tablenotemark{c} & --- & 6.86$\pm$0.01 & 6.9046$\pm$0.0003 & --26.36 & 5904$\pm$258 & 2233$\pm$21 & 15.3$\pm$1.4 & 16.6$\pm$1.1 & 17.8$\pm$0.7 & 0.671$\pm$0.003 & --0.87$\pm$0.01 & 2.1$\pm$0.1 \\ J0910--0414\tablenotemark{c} & 6.6363$\pm$0.0003 & --- & 6.610$\pm$0.003 & --26.61 & --- & 5396$\pm$544 & --- & 15.3$\pm$1.2 & 15.0$\pm$1.1 & 3.59$\pm$0.61 & --1.43$\pm$0.01 & 0.3$\pm$0.1 \\ J0910+1656 & 6.7289$\pm$0.0005 & 6.718$\pm$0.003 & 6.719$\pm$0.005 & --25.34 & 2181$\pm$341 & 2358$\pm$28 & 61.1$\pm$11.6 & 30.2$\pm$5.3 & 5.3$\pm$0.6 & 0.41$\pm$0.03 & --1.22$\pm$0.09 & 1.0$\pm$0.1 \\ J0921+0007 & 6.5646$\pm$0.0003 & 6.553$\pm$0.005 & 6.5654$\pm$0.0002 & --25.19 & 2221$\pm$154 & 1813$\pm$14 & 43.5$\pm$7.6 & 20.8$\pm$2.1 & 6.1$\pm$0.6 & 0.26$\pm$0.01 & --0.86$\pm$0.09 & 1.9$\pm$0.2 \\ J0923+0402\tablenotemark{c} & 6.6330$\pm$0.0003 & 6.59$\pm$0.02 & 6.612$\pm$0.002 & --26.68 & 4680$\pm$531 & 3454$\pm$109 & 16.0$\pm$2.5 & 16.4$\pm$3.2 & 21.7$\pm$3.0 & 1.77$\pm$0.02 & --1.00$\pm$0.07 & 1.0$\pm$0.1 \\ J0923+0753 & 6.6817$\pm$0.0005 & 6.652$\pm$0.01 & 6.682$\pm$0.002 & --25.5 & 4099$\pm$526 & 2640$\pm$682 & 40.8$\pm$22.6 & 34.7$\pm$15.6 & 4.9$\pm$2.0 & 0.49$\pm$0.15 & --1.55$\pm$0.06 & 0.8$\pm$0.4 \\ J1007+2115 & 7.5149$\pm$0.0004 & 7.39$\pm$0.04 & 7.48$\pm$0.01 & --26.73 & 7988$\pm$2045 & 3152$\pm$168 & 10.0$\pm$1.6 & 20.1$\pm$6.1 & 20.4$\pm$1.3 & 1.43$\pm$0.22 & --1.14$\pm$0.01 & 1.1$\pm$0.2 \\ J1058+2930 & 6.5846$\pm$0.0005 & 6.523$\pm$0.002 & 6.585$\pm$0.005 & --25.68 & 5709$\pm$128 & 2656$\pm$97 & 32.5$\pm$11.7 & 19.0$\pm$6.8 & 5.8$\pm$1.5 & 0.54$\pm$0.03 & --1.57$\pm$0.07 & 0.8$\pm$0.2 \\ J1104+2134 & 6.7662$\pm$0.0009 & 6.739$\pm$0.002 & 6.766$\pm$0.005 & --26.63 & 6396$\pm$1242 & 3695$\pm$225 & 32.8$\pm$4.6 & 24.8$\pm$3.0 & 15.1$\pm$0.9 & 1.69$\pm$0.15 & --1.44$\pm$0.04 & 0.7$\pm$0.1 \\ J1120+0641 & 7.0851$\pm$0.0005 & 7.016$\pm$0.002 & 
7.070$\pm$0.003 & --26.44 & 8101$\pm$281 & 3402$\pm$73 & 25.9$\pm$2.4 & 20.9$\pm$2.2 & 13.4$\pm$1.0 & 1.35$\pm$0.04 & --1.36$\pm$0.02 & 0.8$\pm$0.1 \\ J1129+1846 & --- & 6.804$\pm$0.008 & 6.824$\pm$0.001 & --25.73 & 3008$\pm$997 & 1774$\pm$36 & 30.1$\pm$17.6 & 17.6$\pm$3.6 & 8.4$\pm$1.9 & 0.29$\pm$0.02 & --1.10$\pm$0.26 & 2.3$\pm$0.5 \\ J1135+5011 & 6.5851$\pm$0.0008 & 6.53$\pm$0.01 & 6.579$\pm$0.001 & --26.16 & 7469$\pm$397 & 3762$\pm$129 & 32.4$\pm$5.9 & 22.0$\pm$2.7 & 10.8$\pm$0.8 & 1.49$\pm$0.05 & --1.30$\pm$0.02 & 0.6$\pm$0.1 \\ J1216+4519 & --- & 6.56$\pm$0.02 & 6.648$\pm$0.003 & --25.57 & 8947$\pm$410 & 2816$\pm$292 & 34.3$\pm$10.2 & 20.0$\pm$3.4 & 5.8$\pm$1.2 & 0.61$\pm$0.20 & --1.40$\pm$0.01 & 0.8$\pm$0.3 \\ J1316+1028\tablenotemark{c} & --- & --- & 6.329$\pm$0.005 & --25.67 & --- & 2866$\pm$763 & --- & 7.9$\pm$3.0 & 14.8$\pm$3.3 & 1.01$\pm$0.37 & --0.24$\pm$0.39 & 1.2$\pm$0.5 \\ J1342+0928 & 7.5413$\pm$0.0007 & 7.37$\pm$0.02 & 7.51$\pm$0.01 & --26.67 & 11989$\pm$1236 & 2640$\pm$215 & 12.9$\pm$1.4 & 13.9$\pm$5.8 & 13.3$\pm$1.1 & 0.81$\pm$0.18 & --1.67$\pm$0.04 & 1.3$\pm$0.3 \\ J1535+1943 & --- & --- & 6.370$\pm$0.001 & --27.09 & --- & 4372$\pm$266 & --- & 13.4$\pm$1.3 & 33.5$\pm$1.7 & 3.53$\pm$0.33 & --0.92$\pm$0.02 & 0.8$\pm$0.1 \\ J1724+1901 & --- & 6.45$\pm$0.03 & 6.480$\pm$0.001 & --25.55 & 3716$\pm$1267 & 2704$\pm$62 & 7.1$\pm$5.5 & 11.3$\pm$3.2 & 8.4$\pm$1.3 & 0.67$\pm$0.08 & --0.88$\pm$0.14 & 1.0$\pm$0.2 \\ J2002--3013 & 6.6876$\pm$0.0004 & 6.64$\pm$0.02 & 6.673$\pm$0.001 & --26.9 & 7298$\pm$1005 & 3598$\pm$351 & 8.4$\pm$3.2 & 17.9$\pm$3.0 & 15.4$\pm$1.9 & 1.62$\pm$0.27 & --1.74$\pm$0.04 & 0.8$\pm$0.2 \\ J2102--1458 & 6.6645$\pm$0.0002 & 6.611$\pm$0.009 & 6.652$\pm$0.003 & --25.53 & 6146$\pm$295 & 3083$\pm$186 & 23.2$\pm$4.4 & 20.1$\pm$3.0 & 6.0$\pm$0.5 & 0.74$\pm$0.11 & --1.31$\pm$0.04 & 0.6$\pm$0.1 \\ J2211--6320 & 6.8449$\pm$0.0003 & 6.73$\pm$0.01 & 6.83$\pm$0.01 & --25.38 & 7985$\pm$394 & 2679$\pm$608 & 26.2$\pm$4.4 & 16.1$\pm$16.5 & 5.9$\pm$0.2 
& 0.55$\pm$0.24 & --1.17$\pm$0.10 & 0.8$\pm$0.4 \\ J2232+2930 & 6.666$\pm$0.004 & 6.671$\pm$0.007 & 6.655$\pm$0.003 & --26.26 & 3876$\pm$178 & 5504$\pm$159 & 38.4$\pm$8.8 & 27.0$\pm$7.2 & 10.0$\pm$1.7 & 3.06$\pm$0.36 & --1.53$\pm$0.04 & 0.3$\pm$0.1 \\ J2338+2143 & --- & 6.49$\pm$0.03 & 6.565$\pm$0.009 & --26.0 & 2296$\pm$3206 & 2516$\pm$113 & 5.9$\pm$4.1 & 14.9$\pm$5.9 & 7.6$\pm$1.3 & 0.56$\pm$0.03 & --1.57$\pm$0.03 & 1.1$\pm$0.2 \\ \enddata \tablenotetext{a}{Continuum slope $\alpha_{\rm \lambda}$ ($f_{\lambda} \propto \lambda^{\alpha_{\rm \lambda}}$).} \tablenotetext{b}{The measurements of the quasar J0439+1634 have been corrected for gravitational lensing using a magnification of 51.3 \citep{fan19}.} \tablenotetext{c}{BAL quasars. See details in Section 6.2.} \label{tab:fitting} \end{deluxetable*} \section{Black Hole Mass and Eddington Ratio} In this section, we report the measurements of BH masses based on the NIR spectra. Together with bolometric luminosities measured from spectral fitting, we then estimate the Eddington ratios of these quasars and compare them with quasar samples at both lower and similar redshift ranges. \subsection{Virial Black Hole Mass} Assuming virial motion for line-emitting gas in the quasar BLR and based on the correlation between the measured BLR size and quasar continuum luminosity (i.e., the $R-L$ relation), quasar BH masses can be estimated from single-epoch spectra by measuring the line width of the UV and optical broad emission lines and continuum luminosity. Emission lines including \ion{C}{4}, \ion{Mg}{2}, H$\alpha$, and H$\beta$ have all been used for virial BH mass estimators \cite[e.g.,][]{mclure02, mclure04, vestergaard02, vestergaard06, greene05, shen11}. Calibration coefficients used in the BH mass estimators are determined using samples with mass measurements based on reverberation mapping (RM) at low redshifts. 
The uncertainty of this method is estimated to be on the order of $\sim$ 0.5 dex, inferred from the residuals in the calibrations against RM-based BH masses \citep[e.g.,][]{mclure02,vestergaard06,shen13}. Both RM-based measurements and comparison of single-epoch mass estimators have demonstrated that H$\beta$ is the most reliable among the emission lines typically used for virial BH mass estimation \citep[e.g.,][]{shen13}. However, at $z>4$, the H$\beta$ line moves outside the NIR spectral coverage. Given that the \ion{C}{4}\ line has been suggested to be associated with outflows, particularly considering the large blueshift of the \ion{C}{4}\ line found in high-redshift quasars \citep[e.g.,][]{meyer19}, the \ion{Mg}{2}\ line is more suitable for use as a BH mass estimator at high redshift. The current \ion{Mg}{2}-based scaling relation is calibrated based on the H$\beta$ relations derived from reverberation mapping \citep[e.g.,][]{vestergaard09}. \begin{figure} \centering \epsscale{1.2} \plotone{BH_Lbol_2108.png} \plotone{Eddratio_2108.pdf} \caption{{\bf Top:} New measurements of the quasar bolometric luminosities and BH masses of our sample (red open squares), compared with measurements of $z\sim6$ quasars \citep{willott03,kurk07,willott10,derosa11,mazzucchelli17,shen19,schindler20} and the SDSS lower redshift sample \citep[][gray dots]{shen11}, estimated based on the same BH mass estimator. The black points are measurements from \cite{schindler20}, and the results from \cite{shen19} are shown in dark blue. Measurements from other works \citep{willott03,kurk07,willott10,derosa11,mazzucchelli17} are shown in light brown. Quasars duplicated between these samples are excluded. The light blue dots represent a luminosity-matched sample selected from the SDSS lower redshift sample. The histogram on the right shows the luminosity distributions of the luminosity-matched low-redshift sample (blue) and our sample (red).
The bottom histogram shows the BH mass distributions of the low-redshift sample (blue) and our sample (red). {\bf Bottom:} The distribution of Eddington ratios ($\lambda = L_{\rm bol}/L_{\rm Edd}$) measured from quasars in our sample (red), compared with the luminosity-matched low-redshift quasar sample described above (blue). The Eddington ratios of quasars in our sample peak at $\lambda \sim 0.8$ and have a mean of 1.09 (red dashed line), while the low-redshift sample has a mean Eddington ratio of 0.65.} \label{fig:bh_l_edd} \end{figure} We estimate the BH masses of our quasars based on the continuum luminosity at 3000 \AA\ (rest-frame) and the FWHM of the \ion{Mg}{2}\ line, by adopting the empirical relation from \citet[][VO09]{vestergaard09}: \begin{equation} \small \frac{M_{\rm BH}}{M_\odot} = 10^{6.86} \left[\frac{\lambda L_{\lambda}\rm{(3000~\AA)}}{\rm{10^{44}~ erg~s^{-1}}}\right]^{0.5} \left[\frac{\rm{FWHM_{(Mg \ II)}}}{\rm{1000~km~s^{-1}}}\right]^2 \end{equation} The systematic uncertainties of this scaling relation could be up to $\sim0.55$ dex (VO09). The BH mass uncertainties reported in this paper are estimated from spectral fitting only and do not include the systematic uncertainties. We derive BH masses of all 37 quasars in this sample and find them to be in the range $2.6 \times10^{8} - 3.6\times10^{9}\,M_{\odot}$. The individual measurements are listed in Table \ref{tab:fitting}. For our sample, if we use the BH mass estimator from \citet[][MD04]{mclure04}, we obtain BH masses that are $\sim$ 0.76 to 0.96 times the \ion{Mg}{2}-based values above (i.e., smaller), depending on the luminosity of the quasar. If we use the estimator from \citet[][S11]{shen11}, the BH masses of these quasars become $\sim$ 1.30 to 1.65 times larger. The \ion{C}{4}\ line has also been used to estimate the BH mass \citep[e.g.,][]{vestergaard06,coatman17}.
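The VO09 relation above, together with the bolometric correction factor of 5.15 adopted earlier, translates directly into code. In this minimal sketch the Eddington-luminosity coefficient $1.26\times10^{38}\,{\rm erg\,s^{-1}}\,M_\odot^{-1}$ is the standard value for solar-composition gas (an assumption not stated in the text):

```python
import numpy as np

def mbh_vo09(lam_L_3000, fwhm_mgii):
    """VO09 Mg II virial mass: lam_L_3000 = lambda*L_lambda(3000 A)
    in erg/s, fwhm_mgii in km/s; returns M_BH in solar masses."""
    return 10**6.86 * (lam_L_3000 / 1e44)**0.5 * (fwhm_mgii / 1e3)**2

def eddington_ratio(lam_L_3000, fwhm_mgii, bol_corr=5.15):
    """lambda_Edd = L_bol / L_Edd, with L_bol = 5.15 * lambda L_lambda(3000 A)
    and L_Edd = 1.26e38 (M_BH / M_sun) erg/s (standard coefficient)."""
    l_bol = bol_corr * lam_L_3000
    return l_bol / (1.26e38 * mbh_vo09(lam_L_3000, fwhm_mgii))
```

Note that because $M_{\rm BH}\propto{\rm FWHM}^2$ while $L_{\rm bol}\propto \lambda L_\lambda$, the Eddington ratio scales as $(\lambda L_\lambda)^{0.5}\,{\rm FWHM}^{-2}$, which is why luminosity, BH mass, and Eddington ratio are correlated in flux-limited samples.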
Since the \ion{C}{4}\ line is thought to include components with non-virial origins, corrections have been suggested for \ion{C}{4}\ single-epoch BH mass estimators \citep[e.g.,][]{park13,coatman16,coatman17,zuo20}. We apply the scaling relation from \cite{vestergaard06} and the empirical correction as a function of \ion{C}{4}\ blueshift derived from low-redshift quasars in \cite{coatman17} to estimate \ion{C}{4}-based BH masses. For the quasars in our sample, the ratios of \ion{C}{4}\ BH masses to \ion{Mg}{2}\ BH masses have a large scatter, from 0.3 to 3.9 (with a mean of 2.0). Note that the redshifts used here for estimating \ion{C}{4}\ blueshifts are [\ion{C}{2}] or \ion{Mg}{2}\ redshifts, while the empirical correction is based on H$\alpha$ line redshifts. In addition, the quasars in our sample show larger \ion{C}{4}\ blueshifts compared with low-redshift quasars (see Section 5.1), which will increase the uncertainty of \ion{C}{4}\ BH masses for these high-redshift quasars. Therefore, we do not directly compare the \ion{C}{4}\ and \ion{Mg}{2}\ BH masses for individual objects, and we adopt only the \ion{Mg}{2}-based BH masses in the subsequent analysis. The BH masses of these quasars are plotted in Figure \ref{fig:bh_l_edd} (top) together with their bolometric luminosities, compared with the measurements for other $z\gtrsim6$ quasars \citep{willott03,kurk07,willott10,derosa11,mazzucchelli17,shen19, schindler20} and the low-redshift SDSS quasar sample \citep{shen11}. All BH masses used here are derived from the same BH mass estimator. Figure \ref{fig:bh_l_edd} shows that the quasars in our sample are located close to the line of Eddington luminosity, indicating that these SMBHs are accreting close to the Eddington limit, similar to the behavior of most other known $z \gtrsim 6$ quasars. A sample of low-redshift quasars is selected for comparison from the SDSS DR7 quasar catalog with \ion{Mg}{2}-based BH masses \citep{shen11}.
We select quasars in the redshift range $0.4 \le z \le 2.1$ to ensure sufficient spectral coverage of the entire \ion{Mg}{2}\ line in the SDSS DR7 spectra. We adopt the BH masses from \citet{shen11}, derived using the same estimator (i.e., VO09). A luminosity-matched control sample is also selected from this catalog to compare the BH masses of the low- and high-redshift samples under the same luminosity distribution. We select quasars from the DR7 sample following the luminosity distribution of our sample, by randomly choosing 10 times more quasars than in our sample in each bolometric luminosity bin and repeating the sampling 1000 times. Figure \ref{fig:bh_l_edd} shows that the low-redshift control sample and our sample are consistent with being drawn from the same luminosity distribution, according to a two-sample K-S test \citep[][$p=0.9$]{kolmogorov1933}, while their BH masses are drawn from different distributions with $p \ll$ 0.01, and the low-redshift quasars have more massive BHs. We can also quantify this difference by comparing their Eddington ratios. \subsection{Eddington Ratio} Based on the bolometric luminosities and BH masses measured above, we derive the Eddington ratios. Note that an Eddington ratio derived from a \ion{Mg}{2}-based BH mass is subject to the same systematic uncertainty as the BH mass. As shown in Figure \ref{fig:bh_l_edd} (bottom), the Eddington ratios of these high-redshift quasars span values from 0.26 to 2.3, with a mean of 1.08 (a median of 0.85) and a peak at $\lambda_{\rm Edd} \sim 0.8$. There are 16 quasars with Eddington ratios higher than one, including three quasars with $\lambda_{\rm Edd} >2$. The radio-loud quasar J1129+1846 and the quasar J0024+3913 have already been reported as super-Eddington quasars in \cite{banados21} and \cite{wang21a}, respectively. The quasar J0839+3900 is the third one, with $\lambda_{\rm Edd} = 2.1$.
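The construction of the luminosity-matched control sample can be sketched as follows. This is a simplified illustration assuming arrays of $\log_{10} L_{\rm bol}$ for the two samples; the function name, bin width, and random seed are our own choices, not from the actual selection code:

```python
import numpy as np

rng = np.random.default_rng(0)

def luminosity_matched_sample(logL_highz, logL_lowz, n_per=10, n_boot=1000,
                              bin_width=0.2):
    """Draw low-z quasars matched to the high-z luminosity distribution.

    For each bolometric-luminosity bin occupied by the high-z sample,
    draw n_per times as many low-z quasars as there are high-z quasars
    in that bin, repeating n_boot times; returns indices into logL_lowz.
    """
    picks = []
    bins = np.arange(logL_highz.min(), logL_highz.max() + bin_width, bin_width)
    counts, _ = np.histogram(logL_highz, bins)
    for _ in range(n_boot):
        for blo, bhi, n in zip(bins[:-1], bins[1:], counts):
            if n == 0:
                continue
            cand = np.flatnonzero((logL_lowz >= blo) & (logL_lowz < bhi))
            if cand.size:
                picks.append(rng.choice(cand, size=n * n_per, replace=True))
    return np.concatenate(picks)
```

The matched sample can then be compared to the high-redshift sample with a two-sample K-S test on luminosity (to confirm the match) and on BH mass (to quantify the difference).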
The Eddington ratios for individual quasars are listed in Table \ref{tab:fitting}. High average Eddington ratios, close to unity, have been reported in a number of previous works on \ion{Mg}{2}-based BH masses of quasars at $z > 5.8$ \citep[e.g.,][]{kurk07, jiang07, willott10, schindler20, yang20a, wang21b}. Whether the high Eddington ratios of these high-redshift quasars are intrinsic or affected by selection effects is still debated. Compared to lower redshift quasar samples, the known high-redshift quasars are located at the relatively luminous end of the distribution, as a result of limited survey depth. The correlations among luminosity, BH mass, and Eddington ratio complicate the determination of the Eddington ratio distribution from flux-limited quasar samples, and may limit our study of high-redshift quasars to a relatively narrow range of BH mass and Eddington ratio. It has been suggested that the high Eddington ratios in some high-redshift quasar samples could be due to the high luminosities of quasars in those samples. If these most luminous quasars ($L_{\rm bol} \sim 10^{47}$ erg s$^{-1}$) accreted at $\lambda_{\rm Edd} \sim$ 0.1, they would require a BH mass of $\sim 10^{10}\,M_{\odot}$. In this case, low-luminosity quasars would help to overcome the selection bias. \cite{willott10} report the observations of nine faint CFHQS quasars at $z\sim6$ and find high Eddington ratios in these quasars, with a median of $\lambda=1.2$. This result indicates that highly accreting BHs exist in both the high- and low-luminosity quasar populations at $z \gtrsim 6$. The six faint quasars from the HSC quasar survey have a wider range of Eddington ratios, from 0.16 to 1.1 \citep{matsuoka19b, onoue19}, while the least luminous quasar in that sample, which also hosts the least massive SMBH known at $z>5.8$, has an Eddington ratio of 1.1.
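The Eddington ratios discussed in this subsection follow directly from the bolometric luminosity and BH mass; a minimal sketch, adopting the standard Eddington luminosity for solar-composition gas:

```python
def eddington_ratio(l_bol_cgs, mbh_msun):
    """lambda_Edd = L_bol / L_Edd, with L_Edd = 1.26e38 (M_BH / M_sun) erg/s."""
    return l_bol_cgs / (1.26e38 * mbh_msun)

# e.g., L_bol = 1e47 erg/s and M_BH = 1e9 Msun give lambda_Edd ~ 0.8,
# close to the peak of our sample's distribution.
```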
The results from the CFHQS and HSC samples suggest a broad Eddington ratio distribution for less luminous $z\gtrsim 6$ quasars. The recent measurements of 50 $z\sim6$ luminous quasars in \cite{shen19}, however, give low Eddington ratios with a median of 0.3. \cite{shen19} check objects overlapping between their sample and earlier works and suggest that the apparent difference in the reported Eddington ratios for the common objects is largely due to the difference in the adopted BH mass estimators. We compare our measurements with the Eddington ratios of low-redshift quasars using the luminosity-matched control sample described above. The low-redshift quasars in the control sample have a mean Eddington ratio of 0.65 (a median of 0.53) and a peak at $\sim 0.3$, significantly lower than the values in our high-redshift sample. We also note that the luminosity-matched low-redshift quasar samples in this work and in \cite{shen19} are both drawn from the SDSS DR7 quasar properties catalog \citep{shen11}, and the high-redshift quasars in both works have similar luminosity ranges, yet the low-redshift luminosity-matched sample used in this work has significantly higher Eddington ratios (median $\sim 0.5$) than that in \cite{shen19} (median $\sim 0.3$). The main reason for this discrepancy is the different BH mass estimators: we use VO09 while \cite{shen19} use S11. The measurements in \cite{shen11} derived from the other two estimators (MD04 and S11) result in lower Eddington ratios for the control sample, with median values of $\sim 0.3-0.4$. As described above, if we use the MD04 BH mass estimator for our high-redshift quasars, we obtain smaller BH masses and thus higher Eddington ratios, with a median value of 1.08. The S11 estimator yields a median Eddington ratio of 0.63.
In addition, considering the possible differences between the spectral fitting procedures in \cite{shen11} and this work (e.g., the use of the iron template, continuum windows, and line fitting method), we also construct a low-redshift control sample using SDSS BOSS spectra and apply our own spectral fitting. In order to obtain rest-frame spectral coverage similar to that of our high-redshift quasars with the same continuum windows, we select quasars in a narrow redshift range ($2.0 \le z \le 2.4$), and we also require the average signal-to-noise ratio in the \ion{C}{4}\ and \ion{Mg}{2}\ regions to be higher than seven to ensure good spectral fitting. This sample is much smaller than the sample from \cite{shen11} (our `primary control sample'), so we treat it only as a secondary control sample. For the selected quasars, we apply the same spectral fitting procedure as that used for our high-redshift quasars and then measure their BH masses and Eddington ratios. We apply the same luminosity matching process as described before. This secondary control sample yields a mean Eddington ratio of 0.74 (median 0.62), consistent with the result from the primary control sample. Therefore, although the Eddington ratios of both the high-redshift quasars and the low-redshift control sample vary with the choice of BH mass estimator and spectral fitting, the high-redshift sample always has a higher mean and median Eddington ratio than the low-redshift control sample. Since this work focuses only on luminous quasars, the difference in Eddington ratios between our high-redshift quasars and the low-redshift luminosity-matched sample could be explained by the limited BH mass growth in the early Universe. Under the same luminosity distribution, the low-redshift quasars have BH masses from $9\times 10^{7}$ to $4\times 10^{10}\,M_{\odot}$ (Figure \ref {fig:bh_l_edd}, top).
At $z \gtrsim 6.5$, however, limited by the available time for BH growth, $10^{10}\,M_{\odot}$ BHs are very rare, and thus the majority of these high-redshift quasars have higher Eddington ratios than the low-redshift sample. \subsection{Iron Templates and Other Potential Uncertainties} The BH masses and Eddington ratios discussed above are derived from our spectral fitting using a combined iron template of VW01 and T06, with T06 used for the \ion{Mg}{2}\ region. The \ion{Mg}{2}-based BH mass estimator VO09 was originally calibrated with the VW01 iron template, in which the iron emission underlying the \ion{Mg}{2}\ line (2770--2820 \AA) is set to zero, although theoretical considerations predict iron emission at these wavelengths. Modified or new iron templates have been generated to model the iron emission below the \ion{Mg}{2}\ line \citep[e.g., T06;][]{kurk07,derosa11,shen11}. In this work, we use the T06 template for spectral fitting in the \ion{Mg}{2}\ region to separate the \ion{Mg}{2}\ from the \ion{Fe}{2}\ emission, which is important for the measurement of spectral properties in the \ion{Mg}{2}\ line region (e.g., \ion{Mg}{2}\ redshift, \ion{Mg}{2}\ FWHM, and \ion{Mg}{2}\ flux) and also for the study of other quasar properties (e.g., \ion{C}{4}\ blueshift and \ion{Fe}{2}/\ion{Mg}{2}\ ratio). The differences between the iron templates VW01 and T06 in the \ion{Mg}{2}\ line fitting and BH mass measurements for high-redshift quasars have also been discussed in previous work \citep[e.g., ][]{schindler20, onoue20}. In order to investigate the impact of iron templates on the BH masses of our quasars, we perform the same spectral fitting for all spectra, but using the VW01 iron template, and measure the corresponding BH masses. The BH masses and Eddington ratios derived from spectral fitting using the VW01 iron template are listed in Table \ref{tab:vw01fitting} in Appendix B.
Compared with the T06-based fitting of the \ion{Mg}{2}\ region, the VW01-based fitting yields, on average, 1.06 times higher $L_{\rm bol}$ and 1.20 times higher BH masses (ranging from 0.89 to 2.37 times). For most objects (33/37), the difference in BH masses is $\sim$0.1 dex, much smaller than the systematic uncertainty of the scaling relation (0.55 dex). The mean Eddington ratio derived from the VW01 fitting is 1.02 (median 0.88), similar to the determinations based on our combined iron template. Therefore, using the spectral measurements based on the T06 or VW01 templates does not lead to significant changes in the BH masses and Eddington ratios. In this paper, we adopt the measurements based on the combined iron template of VW01 and T06 (T06 in the \ion{Mg}{2}\ region) as our primary results. The commonly used \ion{Fe}{2}\ templates for quasar spectral fitting are constructed from the \ion{Fe}{2}\ emission of the low-redshift narrow-line Seyfert 1 galaxy I Zw 1, and are broadened based on the quasar \ion{Mg}{2}\ line width when fitting the spectra. The observed \ion{Fe}{2}\ emission depends on a number of factors, including the continuum emission of the quasar and the physical conditions of the BLR gas, so it is uncertain whether a template derived from an AGN of much lower luminosity provides an accurate model for the \ion{Fe}{2}\ emission from these high-redshift quasars. The broadening of the iron template using the \ion{Mg}{2}\ line width also effectively assumes that the \ion{Fe}{2}\ emission originates from the same portion of the BLR as the \ion{Mg}{2}\ line. The details of the template and its velocity broadening will therefore introduce additional uncertainties into the measurement of quasar spectral properties. However, in most cases, we find that our modeling of the high-redshift quasar spectra using these iron templates provides good overall fits with small residuals, although line and continuum parameters may be degenerate.
Current determinations of single-epoch virial BH masses of high-redshift quasars are mainly based on scaling relations using the \ion{Mg}{2}\ line, which have recently been suggested to carry larger intrinsic uncertainties than previously thought. Recent reverberation mapping observations increasingly indicate that the quasar ``radius-luminosity'' ($R - L$) relationship is not as tight as previously assumed, and the correlations between the deviations from the $R-L$ relation and quasar properties could be significant in some cases \citep[e.g.,][]{fonsecaalvarez20}. A possible trend has also been suggested in which objects with high accretion rates have smaller $R_{\rm BLR}$ \citep{du16,du18}, which means that for quasars hosting highly accreting BHs (such as the high-redshift quasars in this paper), the current BH mass measurements could be overestimated. In addition, the \ion{Mg}{2}-based scaling relations are calibrated using H$\beta$ measurements. However, recent $R-L$ determinations from the \ion{Mg}{2}\ line found an intrinsic scatter of 0.36 dex, significantly larger than that from H$\beta$, implying a broader range of \ion{Mg}{2}\ radii than observed for H$\beta$ \citep{homayouni20}. \section{Rest-frame UV Properties} \subsection{Broad Emission Line Velocity Shifts} The velocity shifts of quasar emission lines, especially the velocity differences between high- and low-ionization lines, have been widely discussed in earlier studies \citep[e.g.,][]{gaskell82, richards02, richards11, derosa14}. The correlations between the line velocity shifts and quasar intrinsic properties (e.g., quasar UV luminosity, line FWHM, or line EW) have also been observed at different redshifts \citep[e.g.,][]{richards02, richards11,schindler20}. Recent observations of high-redshift quasar samples raise the question of whether \ion{C}{4}\ blueshifts increase at $z \gtrsim 6$ \citep[e.g.,][]{mazzucchelli17, banados18, meyer19, shen19, schindler20}.
\cite{meyer19} and \cite{schindler20} report high \ion{C}{4}\ blueshifts in high-redshift quasars, with mean \ion{C}{4}\ blueshifts of $\sim$ \hbox{--1500} km\,s$^{-1}$ and $>$ \hbox{--2500} km\,s$^{-1}$ at $z\sim6$ and 6.5, respectively, while quasars at $z \sim 1-4$ have mean \ion{C}{4}\ blueshifts of about \hbox{--1000} km\,s$^{-1}$. \cite{shen19}, however, obtain similar \ion{C}{4}\ blueshifts ($\sim$ \hbox{--1000} km\,s$^{-1}$) from both a $z\sim6$ sample and a low-redshift control sample. As the largest quasar sample at $z > 6.5$, our sample gives new insight into the UV emission line velocity shifts of quasars in the early Universe. Since the spectral fitting of \ion{Si}{4} and \ion{C}{3}] is limited by the spectral coverage and the low data quality at the edges of the recorded spectra, here we discuss only the \ion{C}{4}\ and \ion{Mg}{2}\ line properties. Below, we consider the \ion{C}{4}\ -- \ion{Mg}{2}\ velocity shift and the shifts of these two lines with respect to the [\ion{C}{2}] redshift. The velocity shift is described in the observer's frame, so a negative value denotes a blueshifted emission line. \begin{figure} \centering \epsscale{1.2} \plotone{CIV.pdf} \plotone{CIV_L.pdf} \caption{{\bf Top:} The \ion{C}{4}\ -- \ion{Mg}{2}\ velocity shifts measured from our sample as a function of the redshift of the \ion{Mg}{2}\ line. The light blue squares are obtained from individual quasars and the large red squares with error bars represent the mean values and standard deviations in two redshift bins, $6.5 \le z < 7$ and $z \ge 7$. We compare our measurements with the results from \cite{meyer19} (mean, green squares), \cite{schindler20} (individuals and mean, grey and orange squares), and \cite{shen19} (mean, blue square). Quasars with $L_{\rm bol} > 10^{47}$ erg/s are marked by black open squares, in both our sample and the sample from \cite{schindler20}.
Our sample shows a weakly increased \ion{C}{4}\ blueshift at $z = 6.5 -7$ and a significantly higher blueshift at $z > 7$, where only four quasars are measured. The difference between the results from different samples at $z > 6$ could be attributed to small sample sizes, different luminosity distributions, and different spectral fitting methods. {\bf Bottom:} The \ion{C}{4}\ -- \ion{Mg}{2}\ velocity shifts vs.\ bolometric luminosity, including our new sample and the measurements from \cite{schindler20}. No correlation between the \ion{C}{4}\ blueshift and quasar luminosity is found in either sample.} \label{fig:civ} \end{figure} We measure the redshifts from the line centroids of \ion{C}{4}\ and \ion{Mg}{2}\ and then calculate the \ion{C}{4}\ velocity shifts with respect to \ion{Mg}{2}, $\Delta v$(\ion{C}{4}\ -- \ion{Mg}{2}). Four quasars (J0525--2406, J0910--0414, J1316+1028, and J1535+1943) do not have this measurement due to the lack of \ion{C}{4}\ fitting. In our sample, the $\Delta v$(\ion{C}{4}\ -- \ion{Mg}{2}) values of all 33 quasars span from +600 to \hbox{--5100} km\,s$^{-1}$, with a mean of \hbox{--1700} km\,s$^{-1}$. One quasar (J2232+2930) yields a redshifted \ion{C}{4}\ line, caused by a red component of its \ion{C}{4}\ line. We also divide our sample into two redshift bins, $6.5 \le z < 7$ and $z \ge 7$, and calculate the mean in each bin. The mean blueshift is \hbox{--1500}$\pm$100 km\,s$^{-1}$ in the $z = 6.5 -7$ bin and \hbox{--3300}$\pm$400 km\,s$^{-1}$ in the higher redshift bin. As a comparison, we also calculate the mean blueshifts using the line redshifts derived from line peaks instead of from line centroids, and find them to be \hbox{--1700}$\pm$100 km\,s$^{-1}$ in the $z = 6.5 -7$ bin and \hbox{--3100}$\pm$200 km\,s$^{-1}$ at $z\ge7$. Therefore, although the measurements from the line centroid and line peak might differ for a single object, the statistical results for the quasar sample are quite similar.
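The velocity shifts quoted above follow directly from the measured line redshifts; as a minimal sketch (the function name is illustrative):

```python
C_KMS = 299792.458  # speed of light [km/s]

def velocity_shift(z_line, z_ref):
    """Velocity shift of a line at z_line relative to a reference redshift
    z_ref; negative values denote a blueshift."""
    return C_KMS * (z_line - z_ref) / (1.0 + z_ref)

# e.g., a C IV centroid at z = 6.56 relative to a Mg II redshift of 6.60
# corresponds to a blueshift of about -1580 km/s.
```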
The velocity shifts used for further discussion are all derived from the line centroids. We then compare our measurements with the results from other investigations at similar redshifts, as shown in Figure \ref{fig:civ}. In general, at $z \sim 6.5-7$ our quasars have \ion{C}{4}\ blueshifts in a similar range to that of the sample from \cite{schindler20}. The four luminous quasars at $z>7$ all lie at the high \ion{C}{4}\ blueshift end (J0038--1527 and J0252--0503 have \ion{Mg}{2}-based redshifts below seven, so they are not included in this bin). The mean blueshift in the $z = 6.5 - 7$ bin, regardless of how the redshifts are measured, is significantly smaller than the results from both \cite{schindler20} (\hbox{--2501} km\,s$^{-1}$ at $z=6.57$) and \cite{meyer19} (\hbox{--2867} km\,s$^{-1}$ at $z=6.72$) at similar redshifts. Note that the results in \cite{schindler20} and \cite{meyer19} are all derived from line peaks, but as mentioned above, for our sample the mean values from the line centroid and line peak are very similar. One reason for our smaller mean \ion{C}{4}\ blueshift could be the contributions from relatively faint quasars. Our sample includes three times more quasars with luminosities $< 10^{47}$ erg s$^{-1}$ than the sample in \cite{schindler20}, and most of these less luminous quasars have smaller blueshifts than the mean (Figure \ref{fig:civ}). In the $z = 6.5 -7$ bin, these less luminous quasars have a mean of \hbox{--1400} km\,s$^{-1}$, while the mean of the other quasars is --1700 km\,s$^{-1}$, although they are consistent within the uncertainties, and we do see a few faint quasars with large \ion{C}{4}\ blueshifts, as shown in Figure \ref{fig:civ}. In addition, small sample statistics and different spectral fitting methods may also bias the measurements based on these high-redshift samples.
Our luminous subsample ($> 10^{47}$ erg s$^{-1}$) also has a smaller mean blueshift than the results from the other two investigations at $z \sim 6.5-7$. \cite{schindler20} use nine quasars at $6.36 < z < 6.85$, and \cite{meyer19} include eleven quasars at $6.4 < z < 7.6$. Our sample has 26 quasars in this redshift range, but it is still a small sample for estimating a representative value for the entire quasar population. Similarly, we also see a discrepancy at $z \sim 6$. \cite{meyer19} find an increased \ion{C}{4}\ blueshift at $z \sim 6$ compared to lower redshift measurements. \cite{schindler20} obtain an even higher blueshift at a similar redshift, while \cite{shen19} obtain a mean \ion{C}{4}\ blueshift similar to that of low-redshift samples and suggest no redshift evolution. In the $z>7$ bin, our quasars have a significantly larger mean \ion{C}{4}\ blueshift, but only four quasars are included. In particular, the only three $z\sim 7.5$ quasars all have large \ion{C}{4}\ blueshifts. The sample size is too small to be representative of quasars at $z>7$. Therefore, based on our results and the comparisons with other samples, we conclude that current observations show a potential increase of \ion{C}{4}\ blueshift toward higher redshift at $z > 6$, but the observed trend of redshift evolution varies among samples. Our sample shows a weaker redshift evolution than the significant evolution suggested by \cite{meyer19} and \cite{schindler20}. An increase of \ion{C}{4}\ blueshift at high redshift is possible, but the exact evolution remains unclear given the small sample sizes, especially at $z \ge 7$, and the differences between spectral fitting methods.
In addition, in this sample we do not find any correlations between the \ion{C}{4}\ blueshift and quasar luminosity or Eddington ratio, probably because we are still probing a narrow $L$ or $\lambda_{\rm Edd}$ range, although the less luminous subsample has a relatively small mean \ion{C}{4}\ blueshift, as described above (see also Figure \ref{fig:civ}). The physical reason for the potential increase of \ion{C}{4}\ blueshift is also unclear. High \ion{C}{4}\ blueshifts have commonly been associated with strong BLR outflows or winds, which may explain the possible redshift evolution of the \ion{C}{4}\ blueshift as the result of stronger outflows in early quasars. On the other hand, such outflows or winds have also been suggested as the cause of strong BAL features. In some samples, it has been found that BAL quasars have somewhat larger \ion{C}{3}] blueshifts \citep{richards11}. In our sample, we do not find a correlation between high \ion{C}{4}\ blueshifts and strong BAL features (see Section 6.2). \begin{figure} \centering \epsscale{1.2} \plotone{CIV_MgII_CII.pdf} \caption{The relation between the \ion{C}{4}\ blueshift and the \ion{Mg}{2}\ blueshift with respect to the [\ion{C}{2}] redshift. We plot our results (red) and the sample from \cite{schindler20} (grey). Both samples show a correlation between the \ion{C}{4}\ and \ion{Mg}{2}\ blueshifts, possibly indicating related physical origins of the velocity shifts of these two lines. There is also a trend for these quasars to separate into two populations, one with much stronger blueshifts of both the \ion{C}{4}\ and \ion{Mg}{2}\ lines than the other.} \label{fig:civmgii} \end{figure} We also calculate the velocity shifts of the \ion{C}{4}\ and \ion{Mg}{2}\ emission lines with respect to the [\ion{C}{2}] line, which represents the systemic redshift of the quasar, and investigate the possible correlation between the velocity shifts of the two lines.
In our sample, 27 quasars have [\ion{C}{2}]-based redshift measurements, and we calculate $\Delta v$(\ion{C}{4}\ -- [\ion{C}{2}]) and $\Delta v$(\ion{Mg}{2}\ -- [\ion{C}{2}]) for each of them, as plotted in Figure \ref{fig:civmgii}. Our sample has a mean $\Delta v$(\ion{C}{4}\ -- [\ion{C}{2}]) of \hbox{--2200} km\,s$^{-1}$ and a mean $\Delta v$(\ion{Mg}{2}\ -- [\ion{C}{2}]) of \hbox{--500} km\,s$^{-1}$. Our results show a correlation between the \ion{C}{4}\ blueshift and the \ion{Mg}{2}\ blueshift relative to the [\ion{C}{2}] line, which is also reported in \cite{schindler20}. This correlation suggests a potential relation between the physical origins of the blueshifts, even though \ion{C}{4}\ is a high-ionization line and \ion{Mg}{2}\ is a low-ionization line, and the two are expected to originate from different locations with different physical conditions in the broad line region. In Figure \ref{fig:civmgii}, there is also a trend for these quasars to separate into two populations, with quasars in one subsample showing much larger velocity shifts. With the limited data available we can only speculate on the nature of these two emerging populations: the separate distributions might indicate different origins of the line blueshifts, might be tied to different geometrical structures of the BLR, or might depend on the orientation of the observer's line of sight. \subsection{Composite Spectrum for $z > 6.5$ Quasars} \begin{figure*} \centering \epsscale{1.15} \plotone{composite.pdf} \caption{Quasar composite spectrum (red solid line) from our sample compared with the low-redshift composite from \cite{vandenberk01} (black line) and the $z\sim6$ quasar composite from \cite{shen19} (blue line). All three composite spectra have been normalized to the continuum flux at 1450 \AA. The inset shows the \ion{C}{4}\ line region, with a more pronounced blueshift of \ion{C}{4}\ in our composite, compared to both the $z \sim 6$ and low-redshift composites.
The low-redshift sample includes a higher fraction of low-luminosity quasars than our sample, while the $z\sim6$ quasar sample has a luminosity range similar to ours.} \label{fig:composite} \end{figure*} In this section, we present a $z > 6.5$ quasar composite based on our quasar sample and compare it with composite spectra of quasars at different redshifts to explore the possible redshift evolution of average quasar UV spectral properties. To enlarge the sample, we also include the NIR spectra of quasars from \cite{schindler20}. We use all 31 $z>6.5$ quasars in our sample (excluding J0525--2406; see below for details) and 7 quasars from \cite{schindler20}, after excluding 2 quasars that overlap between the two samples. We generate the median composite spectrum following \cite{vandenberk01} and compare our result with the composite spectra created for SDSS low-redshift quasars in \cite{vandenberk01} and for a sample of $z\sim6$ quasars in \cite{shen19}. Given that the low-redshift and $z\sim 6$ composite spectra are both based on redshifts derived from UV/optical emission lines, here we use the \ion{Mg}{2}-based redshift instead of the [\ion{C}{2}] redshift; otherwise, we would see a blueshift of the \ion{Mg}{2}\ line relative to the other two composite spectra. The quasar spectra are all normalized to the rest-frame 1600--1610 \AA\ flux, where there are no strong broad lines or iron emission. We use 1600 \AA\ instead of 1450 \AA\ because, for BAL quasars, the region around 1450 \AA\ is affected by strong absorption features. We exclude the quasar J0525--2406 due to the lack of data at rest-frame $< 1900$ \AA. For BAL quasars, we mask all strong absorption troughs. We also mask all strong absorption features visually identified in these spectra, mainly from intervening metal absorbers. We compare composite spectra constructed including and excluding the BAL quasars and do not find significant differences.
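The stacking procedure described above (shift to the rest frame, normalize at 1600--1610 \AA, median-combine on a common grid) can be sketched as follows. This is a simplified illustration with hypothetical input arrays; it omits the masking of absorption features described in the text:

```python
import numpy as np

def median_composite(spectra, z_list, norm_window=(1600.0, 1610.0),
                     grid=np.arange(1150.0, 3000.0, 1.0)):
    """Median composite: shift each spectrum to the rest frame, normalize
    to the mean flux in norm_window, interpolate onto a common 1 A grid,
    and take the pixel-by-pixel median (NaN outside each spectrum's coverage).

    spectra: list of (observed wavelength [A], flux) array pairs.
    z_list: adopted redshift for each spectrum.
    """
    stack = []
    for (wave, flux), z in zip(spectra, z_list):
        rest = wave / (1.0 + z)
        in_win = (rest >= norm_window[0]) & (rest <= norm_window[1])
        norm = np.mean(flux[in_win])
        resampled = np.interp(grid, rest, flux / norm, left=np.nan, right=np.nan)
        stack.append(resampled)
    return grid, np.nanmedian(np.vstack(stack), axis=0)
```

The number of spectra contributing at each pixel (the last column of the composite table) is simply the count of non-NaN values in the stack at that pixel.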
The composite plotted in Figure \ref{fig:composite} is the one including BAL quasars. As shown in Figure \ref{fig:composite}, our composite covers the wavelength range from rest-frame 1150 to 3000 \AA. The data are also provided in Table \ref{tab:composite} and available online\footnote{\url {https://jinyiyang.github.io/composite.html}}. The composite spectrum has relatively low quality at rest-frame 1750--1850 \AA\ and 2300--2500 \AA, in particular at 2400--2450 \AA, because of the strong telluric absorption between the observed-frame wavelengths of $\sim$ 13500--14200 \AA\ and 18000--19500 \AA. Our sample has a relatively narrow redshift range, so only a few spectra could be used to fill these two gaps. The wavelength range of rest-frame 2400--2450 \AA\ is covered by only five spectra. Outside these two regions, our composite is based on 17 to 38 quasar spectra at each pixel. \begin{deluxetable}{l c c} \tablecaption{$z>6.5$ Quasar Composite Spectrum} \setlength{\tabcolsep}{15pt} \tabletypesize{\scriptsize} \tablewidth{0pt} \tablehead{ \colhead{Wavelength (\AA)} & \colhead{$f_{\rm \lambda}$} & \colhead{N$_{\rm QSO}$} } \startdata 1150.5 & --0.019 & 17\\ 1151.5 & 0.095 & 17\\ 1152.5 & 0.029 & 17\\ ... & & \\ 2080.5 & 0.716 & 38\\ 2081.5 & 0.744 & 37\\ ... & & \\ 2997.5 & 0.439 & 26\\ 2998.5 & 0.461 & 25\\ \enddata \label{tab:composite} \tablenotetext{}{{\bf Note.} The median composite spectrum for our quasar sample. Wavelengths are in the rest frame and in units of \AA. Flux density units are arbitrary. The last column gives the number of quasar spectra contributing to the composite spectrum at each pixel. This is only a portion of the full table; the entire table is available online.} \end{deluxetable} The composite spectra from \cite{vandenberk01} and \cite{shen19} are both median composites; the former was created using over 2200 SDSS quasars at $0.044 \le z \le 4.789$, and the latter was based on 50 quasars at $5.71 < z < 6.42$.
The low-redshift quasar sample mostly has luminosities lower than those of our quasars, while the $z \sim 6$ sample has a luminosity range similar to that of our sample. Compared with these two composite spectra, our composite for $z>6.5$ quasars has very similar line strengths in most broad emission lines. Our quasars have a weaker Ly$\alpha$ line, significantly weaker than that of SDSS low-redshift quasars and slightly weaker than that of the $z\sim 6$ sample, because of the increasing absorption from the intergalactic medium toward higher redshift. Both our composite and the $z \sim 6$ composite exhibit an obvious blueshift of the \ion{C}{4}\ line relative to the SDSS low-redshift composite, mainly caused by the difference in luminosity, as suggested by the comparison between composite spectra for the $z \sim 6$ sample and a low-redshift luminosity-matched sample in \cite{shen19}. The \ion{C}{4}\ line in our composite is also blueshifted relative to the $z \sim 6$ composite, consistent with the results discussed in Section 5.1 and indicating a potential redshift evolution of the \ion{C}{4}\ blueshift. All three composite spectra have consistent continuum slopes, indicating no strong evolution of the continuum, although the median composite is better suited to studying the relative fluxes of emission lines \citep{vandenberk01}. \subsection{\ion{Fe}{2}/\ion{Mg}{2}\ up to $z = 7.6$} Observations of $z \gtrsim 6$ quasars have found that these early SMBHs are accompanied by intense star formation and high metallicity in their environments. Photoionization models show that quasar emission-line ratios provide estimates of the metallicity in the broad-line region \citep[e.g.,][]{hamann93,hamann02, nagao06}. At high redshift, UV emission line ratios, such as \ion{N}{5}/\ion{C}{4}, \ion{N}{5}/\ion{He}{2}, and \ion{Fe}{2}/\ion{Mg}{2}, have been used to characterize the BLR metallicity of distant quasars \citep[e.g.,][]{hamann99,jiang07,derosa11,onoue20,schindler20}.
In addition, Fe/$\alpha$ is expected to be a useful probe of the gas chemical enrichment history in these early quasar environments. In the local Universe, Fe is mainly produced by Type Ia supernovae, while the production of $\alpha$-elements like Mg and O is dominated by Type II, Ib, and Ic supernovae. Therefore, appreciable Fe enrichment is expected to lag $\alpha$-element enrichment by $\sim$ 1 Gyr \citep[e.g.,][]{greggio83}. This has led to the expectation of a decrease in \ion{Fe}{2}/\ion{Mg}{2}\ with increasing redshift in quasars at redshifts above 6, corresponding to less than 0.92 Gyr after the Big Bang. However, observations of quasars up to $z \sim 7.5$ have not shown any evidence of such evolution. Using our sample, which includes more $z > 6.5$ quasars, we measure the \ion{Fe}{2}/\ion{Mg}{2}\ flux ratio to test its redshift evolution. The \ion{Fe}{2}\ flux is derived by integrating the best-fit \ion{Fe}{2}\ component over the rest-frame wavelength range 2200 to 3090 \AA. We then compare our results with such measurements from other investigations from $z<1$ to $z=7.5$ \citep{iwamuro02,maiolino03, derosa11,mazzucchelli17, shin19, onoue20,schindler20}, as displayed in Figure \ref{fig:feii_mgii}. The quasars in our sample have \ion{Fe}{2}/\ion{Mg}{2}\ values comparable to those of quasars at similar or lower redshifts, indicating no significant evolution up to redshift 7.6, although the measurements from different investigations may carry different systematic uncertainties. Our sample has a mean \ion{Fe}{2}/\ion{Mg}{2}\ ratio of 4.1 (median 3.8), which agrees with most samples at all redshifts. \begin{figure*} \centering \epsscale{1.2} \plotone{Chemical.png} \caption{The \ion{Fe}{2}/\ion{Mg}{2}\ flux ratio of quasars in our sample, compared with measurements at different redshifts in the literature \citep{iwamuro02, maiolino03, derosa11, mazzucchelli17, shin19, schindler20, onoue20}.
The dashed line represents the mean (4.1) of our sample. These measurements suggest no significant redshift evolution of \ion{Fe}{2}/\ion{Mg}{2}\ up to redshift 7.6, only 0.67 Gyr after the Big Bang, although the comparison can be affected by the uncertainties caused by different spectral fitting procedures (e.g., different iron templates). The results from \cite{shin19}, \cite{schindler20}, and \cite{onoue20} are based on the iron template from T06, while \cite{maiolino03}, \cite{derosa11}, and \cite{mazzucchelli17} apply the iron template from VW01. \citet{iwamuro02} use their own iron template.} \label{fig:feii_mgii} \end{figure*} The absence of strong redshift evolution of \ion{Fe}{2}/\ion{Mg}{2}\ from the current epoch up to redshift 7.6 (0.67 Gyr after the Big Bang) could be explained by scenarios of shorter timescales for SNe Ia or different origins of Fe \citep[e.g.,][]{jiang07, onoue20}. \cite{rodney14} find that the fraction of prompt SNe Ia which explode within 500 Myr could be as high as $\sim$50\%. It has also been suggested that the timescale of maximum chemical enrichment from SNe Ia is a strong function of star formation history in galaxies and could be as short as $\sim 0.3$ Gyr in specific galaxy environments \citep{matteucci01}. If a short timescale for SNe Ia is the correct explanation, star formation in these high-redshift quasar host galaxies needs to occur very early. Different origins of iron enrichment, such as Population III stars, could also explain this lack of evolution. It has been suggested that these massive stars could produce a large amount of Fe within a few Myr \citep{heger02}. We also note that the spectra of two quasars, J0218+0007 and J2338+2143, have almost zero \ion{Fe}{2}\ flux measured from spectral fitting. Both of them have relatively lower S/N in the continuum, so the results could be lower limits. 
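The \ion{Fe}{2}\ flux measurement described above reduces to a numerical quadrature of the best-fit template over 2200--3090 \AA. A minimal sketch, assuming a hypothetical wavelength grid and model flux array (the function name and grids are illustrative, not from our pipeline):

```python
import numpy as np

def component_flux(wave, flux, lo=2200.0, hi=3090.0):
    """Integrate a best-fit model component over rest-frame [lo, hi] Angstroms
    using the trapezoidal rule (wave in Angstroms, flux per unit wavelength)."""
    sel = (wave >= lo) & (wave <= hi)
    w, f = wave[sel], flux[sel]
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(w)))

# Fe II/Mg II is then the integrated Fe II template flux divided by the
# integrated Mg II line flux measured from the same spectral fit.
```

The same helper can be applied to the \ion{Mg}{2}\ line model to form the ratio.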
Quasars at $z > 6$ with very low \ion{Fe}{2}/\ion{Mg}{2}\ have also been found in previous studies \citep[e.g.,][]{mazzucchelli17}, suggesting a large scatter in iron abundance among these high-redshift quasar BLRs and possibly indicating that we are witnessing ongoing iron enrichment in these BLRs. However, these results are affected by a number of factors, including the spectral quality, different spectral fitting methods (e.g., different iron templates, the choice of continuum windows, and the \ion{Mg}{2}\ line fitting procedure, as discussed in detail by \citealt{schindler20} and \citealt{onoue20}), and also the possible Eddington ratio dependence of \ion{Fe}{2}/\ion{Mg}{2}\ \citep[e.g.,][]{sameshima17}. These factors could result in different systematic uncertainties in different samples. \section{Discussion} \subsection{Early SMBH Growth} The recent discoveries of SMBHs with $\gtrsim 10^9\,M_{\odot}$ at $z > 6$, in particular at $z > 7$, have already posed significant challenges to BH formation theories \citep[e.g.,][]{mortlock11, wu15, banados18, yang20a, wang21b}. The existence of these SMBHs requires either very massive initial seeds or super/hyper-Eddington accretion \citep[e.g.,][]{volonteri12, pacucci17, inayoshi19}. As described above (Section 3), our quasars yield a sample of 37 SMBHs at $z = 6.3-7.6$ with masses in the range $2.6 \times10^{8} -3.6\times10^{9}\,M_{\odot}$, and they have a mean Eddington ratio of 1.1, with a peak at $\lambda \sim 0.8$. With these measurements for a large sample of SMBHs at $z \gtrsim 6.5$, we are able to revisit the BH growth scenario and seek new constraints. BH growth can be modeled according to $M_{\rm BH} = M_{\rm seed}\,\exp\!\left[\frac{\lambda_{\rm Edd}(1-\epsilon)}{\epsilon}\,\frac{t}{4.5\times10^{8}\,{\rm yr}}\right]$, where $\epsilon$ is the radiative efficiency and a typical value of 0.1 has been suggested for $z \gtrsim 5.7$ quasars \citep[e.g.,][]{trakhtenbrot17}.
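The growth law above can be evaluated directly. A minimal numerical sketch, assuming illustrative cosmic ages of $\sim$0.10 Gyr at $z=30$ (an approximate value adopted here, not computed from a specific cosmology) and 0.67 Gyr at $z=7.6$:

```python
import math

def bh_mass(m_seed, t_gyr, lam_edd=1.0, eps=0.1):
    """M_BH = M_seed * exp[ lam_edd * (1 - eps)/eps * t / 0.45 Gyr ]."""
    # Salpeter e-folding time: 0.45 Gyr * eps / (lam_edd * (1 - eps))
    e_fold_gyr = 0.45 * eps / (lam_edd * (1.0 - eps))
    return m_seed * math.exp(t_gyr / e_fold_gyr)

# Eddington-limited growth of a 1e4 Msun seed from z = 30 (~0.10 Gyr)
# to z = 7.6 (~0.67 Gyr) reaches of order 1e9 Msun, as required by the
# z ~ 7.5 quasars in the sample.
m_final = bh_mass(1e4, 0.67 - 0.10)
```

Lowering $\lambda_{\rm Edd}$ or raising $\epsilon$ lengthens the e-folding time and thus pushes the required seed mass upward, which is the behavior traced by the growth tracks in Figure \ref{fig:bhgrowth}.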
We first assume that these BHs grew at the Eddington limit across the entire time since BH seeding (that is, assuming an accretion duty cycle of unity) with a radiative efficiency of 0.1. With these assumptions, we obtain BH growth tracks since $z=30$ for seeds with different BH masses, as shown in Figure \ref{fig:bhgrowth}. With Eddington accretion, most of the quasars in our sample require seed BH masses $\gtrsim 10^{3}\,M_{\odot}$ at $z=30$ (or $>10^{4}\,M_{\odot}$ at $z = 15$), and the three $z=7.5$ quasars require $\gtrsim 10^{4}\,M_{\odot}$ seed BHs at $z=30$. Only a few quasars allow less massive seed BHs down to a few hundred solar masses. We also consider a lower Eddington ratio for these quasars, since the Eddington ratio distribution obtained from our sample has a peak at $\lambda_{\rm Edd}\sim0.8$. Assuming an Eddington ratio of 0.8, most of our quasars would require at least $10^{4}\,M_{\odot}$ seeds to grow to their measured masses starting from $z=30$. Note that we assume a start time of BH growth at $z = 30$, while only stellar-mass BHs have been suggested to form at such high redshifts \citep{inayoshi19}. A later starting time for BH growth or a higher radiative efficiency would therefore require more massive seed BHs. \begin{figure} \centering \epsscale{1.15} \plotone{BHgrowth_2105.pdf} \caption{BH mass measurements (red open squares) obtained from our quasar sample compared with the BH growth tracks with different seed BH masses. The three solid curves represent the BH growth tracks with seed BH masses of $10^{2}\,M_{\odot}$ (blue), $10^{3}\,M_{\odot}$ (black), and $10^{4}\,M_{\odot}$ (orange), respectively, assuming Eddington accretion since $z=30$. The three dotted lines are the BH growth tracks with constant Eddington ratio $\lambda_{\rm Edd}=0.8$, which is the peak of the Eddington ratio distribution from our sample. All these tracks are based on the assumption of a radiative efficiency of 0.1.
With Eddington accretion, most of the quasars in our sample require massive seed BHs with masses $\gtrsim 10^{3}\,M_{\odot}$ at $z=30$, and the three $z=7.5$ quasars require $\gtrsim 10^{4}\,M_{\odot}$ seed BHs. A later starting time for BH growth, lower accretion rate, or higher radiative efficiency will result in a requirement of even more massive seed BHs. With $\lambda_{\rm Edd}=0.8$, most of the quasars will need $\gtrsim 10^{4}\,M_{\odot}$ BH seeds at $z=30$.} \label{fig:bhgrowth} \end{figure} Based on the discussion above, in most cases, the SMBHs in these luminous quasars need to grow from BH seeds more massive than a few hundred solar masses (i.e., the masses expected for Pop III stellar remnants). In particular, the results from the three $z=7.5$ quasars are more consistent with massive seed BH models like direct collapse of gas, which, however, are suggested to occur only in rare and special environments \citep[e.g.,][]{haiman13}. It is considered difficult for $100\,M_{\odot}$ seeds to grow to the SMBH masses observed for our sample over such a short duration, although this cannot be ruled out given the uncertain BH accretion history. With a seed BH of $\sim 100\,M_{\odot}$, it would take 0.8 Gyr to grow to a $1\times 10^{9}\,M_{\odot}$ BH assuming Eddington accretion and a radiative efficiency of 0.1, which exceeds the time available for essentially all $z > 6$ quasars. It is also thought to be highly unlikely that BHs can have sustained growth for the full 0.8 Gyr \citep[e.g.,][]{davies19, eilers20}. Super/hyper-Eddington accretion has been proposed to occur under specific conditions \citep[e.g.,][]{inayoshi16} and it has been suggested that such processes may dominate BH growth until $z \sim 10$ \citep[e.g.,][]{pezzulli16}. However, how often such extreme accretion rates occur, and how a high accretion rate can be maintained over long timescales, remain open questions. To date, BH masses have been measured for about 100 $z > 6$ quasars using the \ion{Mg}{2}\ line.
Only one has been discovered with a BH mass exceeding $10^{10}\,M_{\odot}$ \citep{wu15}. The currently known high-redshift quasars are selected from flux-limited surveys, typically down to a luminosity of $5\times10^{46}$ erg s$^{-1}$ for a $z\sim6.5$ quasar. Most quasars are selected based on photometric data. There is no obvious reason why this selection method should result in significant incompleteness at the massive end of the BH mass distribution in unobscured quasars. Therefore, if quasars with $10^{10}\,M_{\odot}$ BHs exist at this redshift, they should mostly have Eddington ratios $< 0.05$ or significant obscuration. \subsection{BAL Fraction} From our sample, we calculate the balnicity index BI \citep{weymann91} by \begin{equation} {\rm BI} = \int_{v_{\rm min}}^{v_{\rm max}}\left( 1 - \frac{f(v)}{0.9} \right) C\, dv, \end{equation} where $f(v)$ is the normalized spectrum, and $C$ is set to 1 only when $f(v)$ is continuously smaller than 0.9 for more than 2000 km s$^{-1}$, otherwise it is set to 0. The value of $v_{\rm min}$ is set to 0. We identify nine quasars with strong broad absorption features, including J0038--1527, J0246--5219, J0313--1806, J0439+1634, J0706+2921, J0839+3900, J0910--0414, J0923+0402, and J1316+1028, as indicated in Table \ref{tab:fitting}. This yields a BAL fraction of 24\% for this sample. The quasar J1316+1028 shows an obvious BAL feature in its optical spectrum \citep[see Figure 2 in][]{wang19}, while its NIR spectrum does not show $>$ 2000 km\,s$^{-1}$ continuous absorption (with normalized flux density $<$ 0.9). The quasars J0803+3138 and J0837+4929 also have absorption troughs, but the widths of their troughs are close to the limit of 2000 km\,s$^{-1}$, and the BI calculations give results that are highly dependent on spectral fitting details. Also, there are no obvious BAL features in their optical spectra. We thus do not include these two quasars when we estimate the BAL fraction.
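The BI integral above can be transcribed directly into code. A minimal sketch following the criterion stated in the text ($C=1$ only for troughs continuously below 0.9 over more than 2000 km\,s$^{-1}$); note that the original \citet{weymann91} definition additionally excludes the first 2000 km\,s$^{-1}$ of each trough, and a uniform velocity grid is assumed here:

```python
import numpy as np

def balnicity_index(v, f, depth=0.9, min_width=2000.0):
    """BI = integral of (1 - f(v)/0.9) C dv, with C = 1 only inside troughs
    continuously below `depth` over more than `min_width` km/s.

    v: velocity grid in km/s (uniform spacing), f: continuum-normalized flux."""
    dv = v[1] - v[0]
    below = f < depth
    bi, i = 0.0, 0
    while i < len(v):
        if below[i]:
            j = i
            while j < len(v) and below[j]:
                j += 1
            if (j - i) * dv > min_width:  # C = 1 only for sufficiently wide troughs
                bi += float(np.sum(1.0 - f[i:j] / depth)) * dv
            i = j
        else:
            i += 1
    return bi
```

A trough marginally wider or narrower than 2000 km\,s$^{-1}$ flips $C$ between 0 and 1, which is why the BI values for J0803+3138 and J0837+4929 are so sensitive to the spectral fitting details.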
J0525--2406 only has NIR spectral coverage at $> 14000$ \AA\, and no strong features are present within this wavelength range or in its optical spectrum ($< 1 \mu$m). Considering the limited S/N and spectral coverage at rest-frame wavelengths 1400--1500 \AA\ in some of the spectra, the BAL fraction reported here could be a lower limit. A BAL fraction of 24\% in this sample is higher than the results from lower redshift samples, 16\% at $z \sim 6$ \citep{shen19} and 15\% from the SDSS DR5 low-redshift sample \citep{gibson09}. The quasar sample in \cite{schindler20} has comparable size to our sample and is in a redshift range ($5.78\le z \le 7.54$) closer to our sample than others. A BAL fraction of 13\% (5/38) has been claimed in their sample through visual classification. If we simply combine the two samples, we would obtain a BAL fraction of 19\% (14/72, after removing 3 overlapping quasars between the two samples). If we take into account only $z>6.5$ quasars, with 32 quasars (8 BAL quasars) from our sample and 7 quasars (1 BAL) from \cite{schindler20}, we derive a fraction of 23\% (9/39) in the combined sample. The high fraction of strong BALs at $z> 6.5$ from our sample and the combined sample could be caused by a higher intrinsic BAL fraction at high redshift or a selection bias. The current high-redshift quasar samples are still small, so they may be subject to biases in the quasar selection. We now compare the spectral properties of these BAL quasars with the non-BAL quasars in our sample. The nine BAL quasars have a median continuum slope of $-1.0$, slightly redder than non-BAL quasars with a median slope of $-1.3$. In addition, these BAL quasars have red $J-W1$ colors with a median value of 2.5, while the median color of the non-BAL quasars is 2.1. So the BAL quasars in our sample do have redder colors than the non-BAL quasars, although this might not represent the intrinsic reddening of the BAL population at this redshift due to the small sample size. 
BAL features are thought to be associated with powerful outflows or disk winds, which are also commonly considered as the origin of high blueshifts of the \ion{C}{4}\ lines. Thus we test the potential correlation between the BAL features and \ion{C}{4}\ blueshifts, but no difference is found between BAL and non-BAL quasars. The mean \ion{C}{4}\ blueshift of BAL quasars is --1700 km\,s$^{-1}$, similar to the mean of non-BAL quasars and of the entire sample. Note that the \ion{C}{4}\ fitting for BAL quasars could be affected by the absorption troughs. Overall, the high BAL fraction (24\%) in this sample, compared to the fractions at lower redshift (16\% at $z\sim6$; 15\% at lower redshift), potentially indicates a high probability of strong outflows or winds, which may also be an explanation of the higher \ion{C}{4}\ blueshift observed in high-redshift quasars. \section{Summary} We report our studies of quasar BH mass and UV spectral properties using a new NIR spectral dataset of 37 luminous quasars at $6.3 < z \le 7.64$, with 32 quasars at redshift above 6.5, forming the largest quasar NIR spectral sample at this redshift to date. The NIR spectroscopy was obtained using the Keck/NIRES, Gemini/GNIRS and F2, VLT/XShooter, and Magellan/FIRE instruments. These data allow us to model quasar rest-frame UV spectra, statistically characterize quasar UV spectral properties, and study quasar BH behavior. We summarize the main results below. \begin{itemize} \item We measure the BH masses of these 37 quasars using uniform spectral fitting procedures and BH mass estimators. These objects have BH masses in the range $(0.3-3.6)\times10^{9}\,M_{\odot}$. Assuming Eddington-limited accretion, they require massive seed black holes with masses $\gtrsim10^{3-4}\,M_{\odot}$ at $z=30$. \item Luminous quasars in this sample are found to be accreting close to the Eddington limit at the observed epoch. 
The Eddington ratio distribution has a mean of 1.08 and a median of 0.85, with a peak at 0.8, significantly higher than the Eddington ratios from a low-redshift luminosity-matched quasar sample. \item The difference between \ion{C}{4}\ and \ion{Mg}{2}\ redshifts suggests a large blueshift of the \ion{C}{4}\ lines, yielding a mean of --1500 km\,s$^{-1}$ at $z = 6.5-7$ and --3300 km\,s$^{-1}$ at $z\ge7$. Compared with $z \sim 6$ samples of similar luminosity, our results show an increase of \ion{C}{4}\ blueshift at $z> 6$, but the evolution is weaker than previously reported. The correlation between \ion{C}{4}\ and \ion{Mg}{2}\ line velocity shifts with respect to the [\ion{C}{2}] line redshift potentially indicates associated origins of velocity shifts of these two lines. \item We create a $z > 6.5$ quasar composite spectrum using 38 $z>6.5$ quasars and compare it with the composite spectra for low-redshift SDSS quasars and $z\sim6$ quasars. No significant redshift evolution is found for either broad UV emission lines or quasar continuum slope, except for the \ion{C}{4}\ line which shows a blueshift relative to both low-redshift and $z\sim6$ composites. \item We measure the \ion{Fe}{2}\ to \ion{Mg}{2}\ flux ratio for quasars in this sample and compare the results with measurements at different redshifts. No redshift evolution of \ion{Fe}{2}/\ion{Mg}{2}\ is found up to redshift 7.6, suggesting that the metal abundances in the BLRs of these quasars are similar to those observed at lower redshifts. \item We identify strong BAL quasars and find a BAL fraction of 24\%, higher than the fractions in lower redshift samples. The high BAL fraction could be due to evolution of intrinsic properties (e.g., stronger outflows or winds) of these quasars or due to selection effects in high-redshift quasar surveys. 
\end{itemize} In subsequent work, we will combine data from this NIR sample with our optical spectral dataset \citep{yang20b} as well as a submm dataset from ALMA, NOEMA, and JCMT to further investigate the quasar proximity zones, star formation in the host galaxies, BH-host co-evolution, and the potential correlations among these properties. \acknowledgments J. Yang, X. Fan, and M. Yue acknowledge support from US NSF grants AST 15-15115, AST 19-08284 and NASA ADAP Grant NNX17AF28G. F. Wang and ACE acknowledge support by NASA through the NASA Hubble Fellowship grant \#HST-HF2-51448.001-A and $\#$HF2-51434 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. Research at UC Irvine was supported by NSF grant AST-1907290. Some of the data presented herein were obtained using the UCI Remote Observing Facility, made possible by a generous gift from John and Ruth Ann Evans. E.P. Farina acknowledges support from the ERC Advanced Grant 740246 (Cosmic Gas). L. J. and X.-B. Wu acknowledge support from the National Key R\&D Program of China (2016YFA0400703) and the National Science Foundation of China (11721303, 11890693). We acknowledge Dale Mudd for his help with the Keck/NIRES observations. Some of the data presented in this paper were obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
This work is based in part on observations made with ESO telescopes at the La Silla Paranal Observatory under program ID 0103.A-0423(A). This paper also uses data based on observations obtained at the Gemini Observatory, which is operated by the Association of Universities for Research in Astronomy, Inc., under a cooperative agreement with the NSF on behalf of the Gemini partnership: the National Science Foundation (United States), the National Research Council (Canada), CONICYT (Chile), Ministerio de Ciencia, Tecnolog\'{i}a e Innovaci\'{o}n Productiva (Argentina), and Minist\'{e}rio da Ci\^{e}ncia, Tecnologia e Inova\c{c}\~{a}o (Brazil). This paper includes data gathered with the 6.5 meter Magellan Telescopes located at Las Campanas Observatory, Chile. We acknowledge the use of the MMT and UKIRT telescopes. We acknowledge the use of the PypeIt data reduction package. \vspace{5mm} \facilities{Keck(NIRES), Gemini(GNIRS,F2), VLT(X-Shooter), Magellan(FIRE), MMT(MMIRS), UKIRT(WFCAM)} \software{PypeIt}
\section{Method} \label{sec:method} \subsection{Problem Definition} \label{sec:problem_definition} The objective of reading comprehension is to model the distribution $p(a \mid q, c)$, where $q, c, a \in {\mathcal{V}}^*$ represent a \ti{question}, supporting \ti{context}, and \ti{answer} respectively and consist of sequences of tokens from a vocabulary ${\mathcal{V}}$. For simplicity, we focus on extractive reading comprehension, where every question can be answered by selecting a span of tokens in the context, but the approach is generic and can be extended to other formats. We make the standard assumption that the probability of context span $c_{i \ldots j}$ being the answer can be decomposed into the product of $p(\text{start} = i \mid q, c)$ and $p(\text{end} = j \mid q, c)$. We consider a collection of source datasets ${\mathcal{S}}$ and target datasets ${\mathcal{T}}$, where each dataset ${\mathcal{D}} \in {\mathcal{S}} \cup {\mathcal{T}}$ consists of supervised examples in the form $(q, c, a)$. The goal is to train a model on ${\mathcal{S}}$ that achieves high in-domain accuracy and transfers well to unseen datasets in ${\mathcal{T}}$, either zero-shot or given a small number of labeled examples. \subsection{Multi-dataset Fine-tuning} The standard approach to multi-dataset reading comprehension is to fit a single model to examples drawn uniformly from the datasets in ${\mathcal{S}}$: \[ \argmin_{\theta, \psi} \mathbb{E}_{{\mathcal{D}}_i \sim {\mathcal{S}}} \left [ \mathbb{E}_{q, c, a \sim {\mathcal{D}}_i}[ - \log p_{\theta, \psi}(a \mid q, c)] \right ], \] where $\theta$ refers to the parameters of an encoder model (usually a pre-trained Transformer like BERT; \citealp{devlin2018bert}), which maps a question and context to a sequence of contextualized token embeddings, and $\psi$ denotes the classifier weights used to predict the start and end tokens. 
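Under the start/end decomposition above, answer extraction reduces to maximizing $p(\text{start}=i)\,p(\text{end}=j)$ over valid spans $j \ge i$. A minimal NumPy sketch (illustrative only; in the actual model the logits come from the classifier heads $\psi$ applied to the encoder outputs):

```python
import numpy as np

def log_softmax(x):
    """Numerically stable log-softmax over a 1-D logit vector."""
    x = x - x.max()
    return x - np.log(np.exp(x).sum())

def best_span(start_logits, end_logits, max_len=30):
    """Argmax span under log p(start=i) + log p(end=j), with j >= i
    and span length capped at max_len tokens."""
    ls, le = log_softmax(start_logits), log_softmax(end_logits)
    scores = ls[:, None] + le[None, :]                  # scores[i, j]
    i_idx, j_idx = np.indices(scores.shape)
    valid = (j_idx >= i_idx) & (j_idx - i_idx < max_len)
    scores = np.where(valid, scores, -np.inf)
    i, j = np.unravel_index(np.argmax(scores), scores.shape)
    return int(i), int(j)
```

The length cap is a common practical heuristic for extractive QA decoding rather than part of the probabilistic model itself.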
The objective is approximated by training on mixed mini-batches with approximately equal numbers of examples from each dataset~\citep{fisch2019mrqa,khashabi2020unifiedqa}, although some researchers have investigated more sophisticated sampling strategies~\citep{xu2019multi}. For example,~\citet{gottumukkala2020dynamic} introduce \ti{dynamic sampling}, sampling from each dataset in inverse proportion to the model's current validation accuracy. \subsection{MADE} Our approach is to explicitly model the fact that our data represent a mixture of datasets. We decompose the model parameters into a shared Transformer, $\theta$, and dataset-specific token classifiers $\boldsymbol{\psi} = \psi_1, \ldots, \psi_{|{\mathcal{S}}|}$ and adapters $\boldsymbol{\phi} = \phi_1, \ldots, \phi_{|{\mathcal{S}}|}$ (Figure~\ref{fig:multiadapter_figure}). We use a two-phase optimization procedure to fit these parameters. In the \ti{joint optimization} phase, we jointly train all of the parameters on the source datasets: \begin{align*} &\argmin_{\theta, \boldsymbol{\phi}, \boldsymbol{\psi}} \mathbb{E}_{{\mathcal{D}}_i \sim {\mathcal{S}}} \left [ \mathbb{E}_{q, c, a \sim {\mathcal{D}}_i}[- \log p_{\theta, \phi_i, \psi_i}(a \mid q, c)] \right ] \end{align*} After validation accuracy (average F1 scores of the source datasets) stops improving, we freeze $\theta$ and continue \ti{adapter tuning}, refining each pair of $(\phi_i, \psi_i)$ separately on each dataset. \paragraph{Zero-shot generalization} We use a simple strategy to extend {MADE} to an unseen dataset: we initialize a new adapter and classifier $(\phi', \psi')$ by averaging the parameters of the pre-trained adapters and classifiers $\phi_1, \ldots, \phi_{|{\mathcal{S}}|}$ and $\psi_1, \ldots, \psi_{|{\mathcal{S}}|}$, and return the answer with the highest probability under $p_{\theta, \phi', \psi'}(a \mid q, c)$. 
We also considered an ensemble approach, averaging the token-level probabilities predicted by each adapter, but found this to perform similarly to parameter averaging at the additional cost of running the model $|{\mathcal{S}}|$ times. \paragraph{Transfer learning} We also consider a transfer learning setting, in which a small number of labeled examples from a target domain (denoted ${\mathcal{D}}_{\text{tgt}}$) are provided. We explore two ways to build a single, more accurate model. The first is to initialize $(\phi', \psi')$ as a weighted average of pre-trained adapters, $\phi' = \sum_{i=1}^{|{\mathcal{S}}|} \alpha_i \phi_i$, and classifiers $\psi' = \sum_{i=1}^{|{\mathcal{S}}|} \alpha_i \psi_i$, using ${\mathcal{D}}_{\text{tgt}}$ to estimate the mixture weights. For each $i$, we set the mixture weight $\alpha_i$ (normalized to sum to one) to be proportional to the exponential of the negative zero-shot loss on the training data: \[ \alpha_i \propto \mathrm{exp} \left ( \mathbb{E}_{q, c, a \in {\mathcal{D}}_{\text{tgt}}}[\log p_{\theta, \phi_i, \psi_i}(a \mid q, c)] \right ), \] and then tune $\theta$ and $(\phi', \psi')$ on the target dataset. The second approach is to first jointly tune $\theta$, $\boldsymbol{\phi}$, and $\boldsymbol{\psi}$ on ${\mathcal{D}}_{\text{tgt}}$, maximizing the marginal likelihood: \[ \mathbb{E}_{q, c, a \sim {\mathcal{D}}_{\text{tgt}}} \left [\log \frac{1}{|{\mathcal{S}}|} \sum_{i=1}^{|{\mathcal{S}}|} p_{\theta, \phi_i, \psi_i}(a \mid q, c) \right ], \] and then take the weighted average of the parameters, calculating the mixture weights $\alpha_i$ as above but using the loss of the fine-tuned adapters on a small number of held-out examples from ${\mathcal{D}}_{\text{tgt}}$. Pre-averaging is faster to train, because it only involves training one model rather than all $|{\mathcal{S}}|$ adapters.
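The mixture-weight computation and parameter averaging can be sketched as follows (a simplified illustration in which each adapter's parameters are flattened to a single vector; the per-adapter losses are assumed given):

```python
import numpy as np

def mixture_weights(losses):
    """alpha_i proportional to exp(-loss_i), normalized to sum to one
    (a softmax over negative per-adapter losses)."""
    a = np.exp(-(np.asarray(losses) - np.min(losses)))  # shift for numerical stability
    return a / a.sum()

def average_params(param_sets, weights):
    """Weighted average of adapter parameter vectors (one row per adapter)."""
    return np.average(np.asarray(param_sets), axis=0, weights=weights)
```

Adapters with lower loss on the target examples thus dominate the initialization, while poorly matched adapters are smoothly down-weighted rather than discarded.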
After training, both approaches result in a single model that only requires running one forward pass through $(\theta, \phi', \psi')$ to make a prediction. \section{Related Work} Prior work has addressed multi-dataset reading comprehension by fine-tuning a pre-trained Transformer language model~\citep{devlin2018bert} simultaneously on examples from multiple datasets~\citep{talmor2019multiqa,fisch2019mrqa}. Several works explore different multi-task sampling schedules, as a way of mitigating training set imbalances~\citep{xu2019multi,gottumukkala2020dynamic}. Another line of work focuses on training models to answer a wider variety of question types, including UnifiedQA~\citep{khashabi2020unifiedqa}, a T5 model~\cite{raffel2020exploring} trained on datasets with different answer formats, such as yes/no and multiple-choice, using a unified text-to-text format. Adapters~\citep{houlsby2019parameter,rebuffi2018efficient} are task-specific modules interleaved between the layers of a shared Transformer. \citet{stickland2019bert} trained task adapters and the Transformer parameters jointly for the GLUE benchmark~\citep{wang2019glue} but achieved mixed results, improving on small datasets but degrading on larger ones. Subsequent work has used a frozen, pre-trained Transformer and trained task adapters separately. Researchers have explored different methods for achieving transfer learning in this setting, such as learning to interpolate the activations of pre-trained adapters~\citep{pfeiffer2021adapterfusion}. \section{Conclusion} {MADE} combines the benefits of single- and multi-dataset training, resulting in better in-domain accuracy, generalization, and transfer performance than either multi-dataset models or single-dataset models, especially in low resource settings. For future work we plan to explore explicit mixture-modeling approaches for better zero-shot prediction and transfer learning.
\newpage \section*{Acknowledgements} We thank the members of the Princeton NLP group and the anonymous reviewers for their valuable comments. This work is supported by a Graduate Fellowship at Princeton University. \section{Introduction} The goal of reading comprehension is to create computer programs that can answer questions based on a single passage of text. Many reading comprehension datasets have been introduced over the years, and prior work has explored ways of training one network on multiple datasets to get a model that generalizes better to new distributions~\citep{talmor2019multiqa,fisch2019mrqa,khashabi2020unifiedqa}. Our goal is to build a multi-dataset model that performs well on the training distributions and can also serve as a strong starting point for transfer learning to new datasets. Multi-dataset training provides a way to model the regularities between datasets, but it has the following shortcomings. First, multi-task models are liable to over- or under-fit different tasks~\citep{gottumukkala2020dynamic}, which can result in worse in-distribution accuracy. Second, given a particular target dataset, multi-dataset models might achieve worse transfer performance compared to a specialized model trained on a more similar source dataset. Our idea is to combine the benefits of single- and multi-dataset training by training a collection of single-dataset experts that share an underlying Transformer model~(Figure~\ref{fig:multiadapter_figure}). This system is based on adapters~\citep{houlsby2019parameter}, lightweight task-specific modules interleaved between the layers of a pre-trained Transformer (e.g., BERT;~\citealp{devlin2018bert}). The standard use of adapters is as a parameter-efficient alternative to fine-tuning: task-specific adapters are trained separately on top of a frozen Transformer, which means the adapters cannot directly learn cross-task regularities.
We instead first train a shared Transformer in a multi-adapter setup before refining adapters for individual datasets, which we call \tf{M}ulti-\tf{A}dapter \tf{D}ataset \tf{E}xperts ({MADE}). Our intuition is that the shared parameters encode regularities across different reading comprehension tasks while the adapters model the sub-distributions, resulting in more accurate and robust specialized models that transfer better to a variety of target datasets. We apply this approach to a range of extractive question answering datasets from the MRQA 2019 shared task~\citep{fisch2019mrqa}, training {MADE} on six in-domain datasets and evaluating generalization and few-shot transfer learning to six out-of-domain datasets. The resulting system outperforms single- and multi-dataset models in terms of in-domain accuracy, and we find that a simple approach to transfer learning works well: averaging the parameters of the {MADE} adapters results in a single model that gets better zero-shot generalization and few-shot transfer performance compared to both baselines as well as a state-of-the-art multi-dataset QA model, UnifiedQA~\citep{khashabi2020unifiedqa}. Our experiments illustrate the benefits of modeling both cross-dataset regularities and dataset-specific attributes, and the trained models offer a strong and versatile starting point for new question-answering models. 
\section{Experiments} \label{sec:experiments} \subsection{Setup} We use the datasets from the MRQA 2019 shared task~\citep{fisch2019mrqa}, which are split into six large in-domain datasets,\footnote{ SQuAD 1.1~\citep{rajpurkar2016squad}, HotpotQA~\citep{yang2018hotpotqa}, TriviaQA~\citep{joshi2017triviaqa}, NewsQA~\citep{trischler2017newsqa}, SearchQA~\citep{dunn2017searchqa}, and Natural Questions~\citep{kwiatkowski2019natural} } and six small out-of-domain datasets.\footnote{ BioASQ~\citep{tsatsaronis2015overview}, DROP~\citep{dua2019drop}, DuoRC~\citep{saha2018duorc}, RACE~\citep{lai2017race}, RelationExtraction~\citep{levy2017zero}, and TextbookQA~\citep{kembhavi2017you}. } Dataset statistics are in Appendix~\ref{appendix:dataset_details}. We use the RoBERTa-base model~\citep{liu2019roberta} with the default adapter configuration from~\citet{houlsby2019parameter}, which adds approximately 1.8M parameters to the {\textasciitilde}128M in RoBERTa-base (1\%). \subsection{In-domain Performance} \label{sec:in_distribution} First we train {MADE} on the six training datasets and compare in-domain accuracy with single- and multi-dataset fine-tuning and standard adapter training (freezing the Transformer parameters). For context, we also compare with a method from recent work, \ti{dynamic sampling}~\citep{gottumukkala2020dynamic}, by sampling from each dataset in proportion to the difference between the current validation accuracy (EM+F1) on that dataset and the best accuracy from single-dataset training. We train all models by sampling up to 75k training and 1k development examples from each dataset, following~\citet{fisch2019mrqa}. More details are in Appendix~\ref{appendix:training_details}. Table~\ref{tab:in_domain} shows that {MADE} scores higher than both single- and multi-dataset baselines. Both phases of {MADE} training---joint optimization followed by separate adapter tuning---are important for getting high accuracy. 
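The dynamic sampling heuristic described above can be sketched as follows (a hypothetical helper, not the implementation of \citealt{gottumukkala2020dynamic}; accuracies are the EM+F1 scores mentioned in the text):

```python
import numpy as np

def dynamic_sampling_probs(current_acc, best_acc, floor=1e-3):
    """Sample each dataset in proportion to its gap between the current
    validation accuracy and the best single-dataset accuracy; `floor`
    keeps already-converged datasets from being dropped entirely."""
    gaps = np.maximum(np.asarray(best_acc, float) - np.asarray(current_acc, float), floor)
    return gaps / gaps.sum()
```

Datasets that lag furthest behind their single-dataset ceiling are sampled most often, which is why this scheme adds little when, as here, the training sets are already balanced.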
Jointly optimizing the underlying {MADE} Transformer improves performance compared to single-dataset adapters, suggesting that joint training encodes some useful cross-dataset information in the Transformer model. Adapter tuning is important because the multi-dataset model converges on different datasets at different times, making it hard to find a single checkpoint that maximizes performance on all datasets (see Appendix Figure~\ref{fig:training_curve} for the training curve). Some of the improvements can also be attributed to the adapter architecture itself, which slightly outperforms fine-tuning in most datasets. Dynamic sampling does not improve results, possibly because the datasets are already balanced in size. \input{tables/2_transfer} \subsection{Zero-shot Generalization} \label{sec:zero_shot} Table~\ref{tab:zero_shot} shows the results of applying this model to an unseen dataset (zero-shot). We compare a simple method for using {MADE}---averaging the parameters of the different adapters---with the multi-dataset model from Section~\ref{sec:in_distribution}, averaging the parameters of single-dataset adapters, and the pre-trained UnifiedQA-base~\citep{khashabi2020unifiedqa}.\footnote{ UnifiedQA was trained on different datasets with a different architecture, but represents an alternative off-the-shelf model for QA transfer learning. We compare to UnifiedQA-base because the encoder has approximately the same number of parameters as RoBERTa-base. } We compare {MADE} with and without the second phase of separate adapter-tuning. Surprisingly, averaging the parameters of the different {MADE} adapters results in a good model, generalizing better on average compared to both multi-dataset models. The second phase of adapter-tuning improves these results. Parameter averaging performs poorly for single-dataset adapters, possibly because the separately-trained adapters are too different from each other to interpolate well. 
Figure~\ref{fig:zero_shot_comparison} compares the zero-shot accuracy obtained by the different {MADE} and single-dataset adapters. The two sets of adapters show similar patterns, with some adapters generalizing better than others, depending on the target, but all of the {MADE} adapters generalize better than the corresponding single-dataset adapters. This performance gap is considerably bigger than the gap in in-domain performance (Table~\ref{tab:in_domain}), further illustrating the benefit of joint optimization. \subsection{Transfer Learning} \label{sec:transfer} Finally, we compare two ways of using {MADE} for transfer learning: either averaging the adapter parameters and then fine-tuning the resulting model (\ti{pre avg.}), or first fine-tuning all of the adapters and then taking the weighted average (\ti{post avg.}). In both cases, we also back-propagate through the Transformer parameters. We reserve 400 examples from each target dataset to use as a test set (following~\citealp{ram2021few}) and sample training datasets of different sizes, using half of the sampled examples for training and the other half as validation data for early stopping and to set the mixture weights for averaging the adapter parameters. The results are in Table~\ref{tab:transfer}. On average, {MADE} leads to higher accuracy compared to the baselines, with larger improvements for smaller dataset sizes, showing that a collection of robust single-dataset experts is a good starting point for transfer learning. The post-average method performs about the same as averaging at initialization in the lower-data settings, and better with $K=256$. All models struggle to learn with only 16 examples, and on DuoRC, which has long contexts and distant supervision and might represent a more challenging target for few-shot learning.
We also experimented with single-dataset adapters and with a frozen Transformer, which perform worse; detailed results are in Appendix~\ref{appendix:transfer_learning_results_details}. \section{Task Details} \label{appendix:task_details} \subsection{Dataset Details} \label{appendix:dataset_details} \input{tables/A0_datasets} We use the pre-processed datasets from the MRQA 2019 shared task~\citep{fisch2019mrqa}. Table~\ref{tab:datasets} provides some dataset statistics. \subsection{Training Details} \label{appendix:training_details} Our models are implemented in PyTorch~\citep{paszke2019pytorch} using HuggingFace~\citep{wolf2020transformers} and the adapter-transformers library~\citep{pfeiffer2020adapterhub}. For all in-domain experiments, we sample 75,000 training and 1,000 validation examples and train with a constant learning rate and a batch size of 8, taking checkpoints every 1024 steps and stopping if validation F1 fails to improve for 10 checkpoints up to a fixed maximum number of epochs (10 for single-dataset training and 3 epochs for multi-dataset training). We use a constant learning rate of 1e-5 for Transformer parameters and 1e-4 for adapter parameters, following standard settings for RoBERTa and adapters respectively~\citep{liu2019roberta,houlsby2019parameter}, and use the AdamW optimizer~\citep{loshchilov2018decoupled} with the HuggingFace default parameters. For the multi-dataset models, we construct mini-batches of size $B$ by repeating $B$ times: pick a dataset uniformly, and pick an example uniformly from that dataset. We train all models on single 2080Ti GPUs with 11GB of memory each. The multi-dataset models take around two days to train, the single-dataset models take less than 24 hours, and it takes about 2 hours to train one model sequentially on six transfer datasets for three values of $K$ and three seeds. \paragraph{Distant supervision} Some datasets provide the gold answer string but do not mark the gold answer span in the context. 
We train the model to maximize the marginal likelihood of the gold answer string, marginalizing over all occurrences in the context. The set of possible answer spans is annotated in the pre-processed MRQA datasets. \paragraph{Long contexts} For inputs that are longer than the maximum input window for RoBERTa (512 tokens), we use a sliding window to split the input into multiple ``chunks'': every input begins with the full question and the $\mathtt{[cls]}$ and separator tokens, and we fill the rest of the input window with tokens from the context, sliding the window 128 characters with each stride. At prediction time, we return the answer from the chunk that has the highest predicted probability. \paragraph{Negative examples} \label{sec:negative_examples} We follow~\citet{longpre2019exploration} and include ``negative examples'' during training. If a context chunk does not contain the answer span, we include the example as a training instance and train the model to indicate that the example does not contain the answer by selecting the $\mathtt{[cls]}$ token as the most likely start and end span. At prediction time, we discard ``no answer'' predictions and return the non-empty answer from the chunk that has the highest predicted probability. For UnifiedQA, we train the model to predict an empty string for contexts that do not contain the answer string, and at prediction time we return the non-empty answer with the highest probability. \subsection{Transfer Learning Details} \label{appendix:transfer_learning_details} For transfer learning, we take half of the $K$ training examples for validation and train for 200 steps or until the validation loss fails to improve for 10 epochs, and we reduce the adapter learning rate to 1e-5. The other hyper-parameters are the same as for in-domain learning.
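The sliding-window chunking described above can be sketched as follows. This is an illustrative token-level version with hypothetical inputs: the actual implementation operates on RoBERTa subword tokens and special tokens, and the reserved-token count here is an assumption.

```python
def make_chunks(question_tokens, context_tokens, max_len=512, stride=128):
    """Split a long (question, context) pair into overlapping chunks.

    Every chunk repeats the full question; the context window slides
    forward by `stride` tokens until the context is exhausted.
    """
    # Reserve room for the question plus [cls] and two separator tokens.
    budget = max_len - len(question_tokens) - 3
    chunks, start = [], 0
    while True:
        chunks.append((question_tokens, context_tokens[start:start + budget]))
        if start + budget >= len(context_tokens):
            break
        start += stride
    return chunks

question = ["what", "year", "?"]
context = [f"tok{i}" for i in range(1000)]
chunks = make_chunks(question, context)
print(len(chunks))  # windows start at 0, 128, 256, 384, 512 -> 5 chunks
```

At prediction time, each chunk is scored independently and the highest-probability non-empty answer across chunks is returned.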
\paragraph{Training UnifiedQA} We download the pre-trained UnifiedQA-base model from HuggingFace and train it in the format described in~\citet{khashabi2020unifiedqa} and in the accompanying code release.\footnote{https://github.com/allenai/unifiedqa} We lower-case the question and context strings and concatenate them with a special string ``\textbackslash n'' (the backslash character followed by the letter n), and train the model to generate the answer string by minimizing cross-entropy loss. We use greedy decoding for prediction. In our pilot experiments, the recommended optimizer (Adafactor with a learning rate of 1e-3) quickly over-fits, so we use the same optimizer, learning rate, and batch size as for RoBERTa. \section{Detailed Results} \label{appendix:detailed_results} \subsection{In-distribution Details} \label{appendix:in_distribution_details} \input{figures/training_curve} Figure~\ref{fig:training_curve} shows the training curve for {MADE}, normalized by dividing each checkpoint score by the maximum validation accuracy obtained on that dataset during the run. The model reaches its maximum performance on the ``easy'' datasets early in training, which means that the model might over-fit to those datasets before converging on the more difficult datasets. {MADE} avoids this problem by tuning the adapter parameters separately after joint optimization. Interestingly, adapter-tuning leads to improved performance on all datasets (Table~\ref{tab:in_domain}), even datasets on which joint optimization appears to have already converged. \subsection{Transfer Learning Details} \label{appendix:transfer_learning_results_details} \input{tables/A2_transfer} Table~\ref{tab:transfer_details} provides additional transfer learning results. Single-dataset adapters transfer worse than {MADE}, although performance improves considerably compared to zero-shot performance (Table~\ref{tab:zero_shot}).
We observe that the transfer process heavily down-weights some single-dataset adapters (like TriviaQA and SearchQA) that get high loss either before or after training, which might explain the performance improvement. Freezing the Transformer parameters slightly improves results in the $K=16$ setting but leads to worse performance with more data. The biggest drop is on BioASQ, possibly because it introduces new vocabulary and it is beneficial to update the token embeddings.
\section{Acknowledgments} This work is supported in part by NSF (CAREER-2046955, IIS-1954778) and DARPA (HR001120C0031). The views and conclusions contained in this document are those of the authors only. \section{Conclusion} \label{sec:conclusion} In this paper, we introduced CPIP{}, a framework for integrating introspective perception with path planning in order to learn to reduce robot navigation failures in the deployment environment with a limited amount of training data. We empirically demonstrated that by leveraging introspective perception, CPIP{} can learn a navigation competence predictor model that generalizes to novel environments and results in a significantly reduced frequency of navigation failures. CPIP{} currently addresses the problem of robot global path planning on a coarse navigation map of the environment. As future directions, the CPIP{} framework can be extended to support competence-aware local motion planning as well as high-level task planning for mobile robots. \section{Experimental Results} \label{experimental_results} In this section: 1) We evaluate CPIP{} on how well it predicts sources of robot failures. 2) We compare CPIP{} against baseline global path planners in terms of their task completion success rate and their task completion time. 3) We evaluate the importance of introspective perception in CPIP{}'s performance and generalizability via ablation studies. \subsection{Experimental Setup}\label{sec:experiment_setup} \paragraph{Simulation} In order to evaluate CPIP{} and compare it extensively against the state of the art (SOTA), we use AirSim~\cite{shah2018airsim}, a photo-realistic simulation environment, where robot failures are not expensive and the robot can easily be reset upon occurrence of navigation failures. A simulated car is equipped with a stereo pair of RGB cameras as well as a depth camera that provides ground truth depth readings for every pixel in the camera frame. We use two separate urban environments for training and testing.
The environments are populated with obstacles of different shapes, textures, and colors. \changes{ \paragraph{Real-robot maze} We also evaluate CPIP{} on a real robot. We use a Clearpath Husky robot equipped with a stereo pair of RGB cameras, an Orbbec Astra depth camera, and a Velodyne VLP-16 3D Lidar. We use different indoor sites for training and testing of CPIP{}. Each environment has different types of terrain such as tiles and carpet, and is populated with obstacles of different shapes, textures, and surface materials. The test environment is a maze constructed in an area of size \SI{60}{\meter^2}. } \changes{ \paragraph{Real-robot large-scale} In order to test CPIP{} extensively and in more natural settings, we also conduct a large-scale experiment in which we deploy the robot on the entire floor of a building. This test environment has an area larger than \SI{400}{\meter^2} and the robot traverses more than \SI{1.5}{\kilo\meter} during the deployment. Figure~\ref{fig:robot_exp_env} shows the training environment and both the large-scale and maze test environments. } \begin{figure}[t] \centering \vspace{2mm} \includegraphics[width=1.0\linewidth]{figs/envs/robot_exp_environment_horizontal} \caption{Training and test environments in the real-world experiments.} \label{fig:robot_exp_env} \vspace{-1.5em} \end{figure} \subsection{Failure Prediction Accuracy} \label{sec:failure_pred_acc} \begin{figure}[t] \centering \includegraphics[width=0.6\linewidth]{figs/confusion_mat_rev.pdf} \caption{Prediction results of the competence predictor model in the previously unseen test environment for the three classes of catastrophic failures (CF), non-catastrophic failures (NCF), and no failures (NF).
}\label{fig:comp_predictor_conf_matrix} \IfConference{\vspace{-1em}} \end{figure} In order to evaluate the accuracy of CPIP{} in predicting failures of navigation, we have the autonomous agent traverse each of the edges of the navigation graph in the test simulation environment 50 times and run the images captured by the robot camera through CPIP{}'s introspective perception module and the competence predictor model to predict instances of navigation failure. In this paper, we implement CPIP{} with two classes of failures. \begin{inparaenum}[1)] \item Catastrophic failures, where the robot ends up in a state that precludes completion of the task and is not recoverable even with human intervention. Examples of this class include collisions and the robot getting stuck off-road. \item Non-catastrophic failures, where the robot will not be able to complete its task unless intervention is provided by a human operator or a supervisory sensor. Examples of this type of failure include the robot getting stuck due to false detection of obstacles or due to localization errors. \end{inparaenum} Figure~\ref{fig:comp_predictor_conf_matrix} illustrates the predicted and actual navigation failures in a confusion matrix. CPIP{} correctly predicts the occurrence of navigation failures more than $70\%$ of the time for both types of failures. Prediction errors mostly correspond to cases where the source of failure looks significantly different from the examples available in the training data.
\begin{figure}[t] \centering \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=1.0\linewidth, trim=0 0 0 0,clip]{figs/failures_cum_robot_exp_rev2.pdf} \caption{} \label{fig:failure_cum_robot} \end{subfigure} \begin{subfigure}[b]{0.49\linewidth} \includegraphics[width=1.0\linewidth, trim=0 0 0 0,clip]{figs/failures_cum_robot_exp_ext_rev2.pdf} \caption{} \label{fig:failure_cum_robot_ext} \end{subfigure} \caption{\textcolor{black}{Comparison of cumulative failure count (\subref{fig:failure_cum_robot}) in the real-robot maze experiment, and (\subref{fig:failure_cum_robot_ext}) the real-robot large scale experiment for this work (CPIP{}), SOTA (frequentist), and the baseline with no competence-aware planning.}} \label{fig:failure_count_vs_time} \end{figure} \begin{figure*}[h] \centering \includegraphics[width=0.96\linewidth, trim=10 0 10 0,clip]{figs/rel_completion_duration_over_time_wide_v3.pdf} \caption{\textcolor{black}{Task completion duration w.r.t. an oracle planner that is provided with the true probability of failure throughout the environment ahead of deployment. Vertical bars visualize the incomplete tasks for each method annotated by color. Highlighted red regions in the top band demonstrate tasks, during which the robot encounters previously unseen parts of the environment. 
Best viewed in color.}} \label{fig:task_completion_duration} \end{figure*} \subsection{Navigation Success Rate and Plan Optimality} \begin{table}[t] \caption{\textcolor{black}{\textsc{Task completion and failure prevention rate.}}} \label{tab:result_summary} \centering \resizebox{\linewidth}{!}{% \begin{threeparttable} \begin{tabular}{@{}llccllcc@{}} \toprule & & \multicolumn{1}{l}{\multirow{2}{*}{\# Tasks}} & \multirow{2}{*}{TCR (\%)} & \multicolumn{2}{l}{Relative TCD\tnote{*}} & \multicolumn{2}{c}{\# Avoided Failures} \\ \cmidrule(r{0.2em}){5-6} \cmidrule(l{0.2em}){7-8} & & \multicolumn{1}{l}{} & & Mean & Std & CF & NCF \\ \midrule \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Real Robot\\ Maze\end{tabular}} & CPIP & 11 & \textbf{100} & \textbf{1.04} & 0.08 & \textbf{5 (100\%)} & \textbf{3 (100\%)} \\ & Frequentist & 11 & 73 & 1.22 & 0.43 & 3 (60\%) & 1 (33\%) \\ \midrule \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Real Robot\\ Large-Scale\end{tabular}} & CPIP & 20 & \textbf{100} & 0.98 & 0.04 & \textbf{9 (100\%)} & \textbf{-} \\ & Frequentist & 20 & 80 & 0.99 & 0.04 & 5 (55\%) & \textbf{-} \\ \midrule \multirow{2}{*}{Simulation} & CPIP & 100 & \textbf{97} & 1.00 & 0.05 & \textbf{14 (93\%)} & \textbf{61 (97\%)} \\ & Frequentist & 100 & 83 & 1.02 & 0.09 & 9 (60\%) & 52 (83\%) \\ \bottomrule \end{tabular} \begin{tablenotes} \item[*] Task completion duration statistics are only calculated for tasks that were completed by both algorithms. 
\end{tablenotes} \end{threeparttable} } \vspace{-1em} \end{table} \begin{figure*}[t] \centering \begin{subfigure}[b]{0.365\linewidth} \includegraphics[width=0.95\linewidth, trim=0 0 0 0,clip, right]{figs/envs/robot_exp_snapshots_rev2} \caption{} \label{fig:robot_exp_snapshots} \end{subfigure} \hspace*{-1.2em} \begin{subfigure}[b]{0.33\linewidth} \includegraphics[width=0.95\linewidth, trim=0 0 0 0,clip, right]{figs/envs/robot_exp_ext_snapshots_rev5} \caption{} \label{fig:robot_exp_ext_snapshots} \end{subfigure} \begin{subfigure}[b]{0.29\linewidth} \includegraphics[width=0.95\linewidth, trim=0 0 0 0,clip, left]{figs/envs/sim_exp_snapshots_rev3} \caption{} \label{fig:sim_exp_snapshots} \end{subfigure} \caption{\textcolor{black}{Test environments in~(\subref{fig:robot_exp_snapshots}) the real robot maze experiment,~(\subref{fig:robot_exp_ext_snapshots}) the large-scale real robot experiment, and~(\subref{fig:sim_exp_snapshots}) the simulation experiment. Regions of the environments highlighted in red cause catastrophic failures, regions highlighted in yellow illustrate sources of non-catastrophic failures, and areas annotated in green show areas where the robot can successfully operate autonomously.}} \label{fig:test_environments_2} \IfConference{\vspace{-1.5em}} \end{figure*} We test the end-to-end system in predicting navigation failures and leveraging this information to proactively plan paths that reduce the probability of failures by deploying the robot in previously unseen test environments. The robot is commanded to complete randomly generated navigation tasks that consist of a starting pose and a target pose. We conduct this experiment in all three settings explained in~\cref{sec:experiment_setup}, i.e., simulation, real-robot maze, and the large-scale real-robot deployment.
We compare CPIP{} with a baseline path planner that does not reason about the competence of the robot as well as a state-of-the-art approach for competence-aware path planning --- called the Frequentist approach --- that relies on keeping track of the frequency of past failures in traversing each of the edges of the navigation graph~\cite{lacerda2014optimal}. Figure~\ref{fig:failure_count_vs_time} compares the cumulative failure count for all three methods throughout the real-robot experiments. With the Frequentist approach, the robot learns to avoid regions of the environment where it cannot navigate reliably only as it experiences navigation failures. However, CPIP{} enables the robot to predict and avoid most of these failures, leading to the fewest experienced failures. We also evaluate the optimality of the planned paths by comparing the task completion time for all the methods under test with an oracular path planner that is given the true probability of navigation failures for each edge of the navigation graph. The ground truth failure probabilities are obtained by having the agent traverse each edge of the navigation graph numerous times and logging the frequency of each type of failure. Figure~\ref{fig:task_completion_duration} shows the completion duration for each task in the simulation experiment. The duration values are normalized by the task completion duration when the oracular path planner is used. The figure also illustrates instances of task completion failures for both CPIP{} and the Frequentist method. Such instances include the occurrence of catastrophic failures or of consecutive non-catastrophic failures such that the robot cannot recover from a stuck state by re-planning. CPIP{}'s task completion duration is similar to that of the oracular path planner except for tasks where the robot visits a previously unseen part of the environment and has to re-plan upon prediction of a source of navigation failure.
An example of such re-planning can be seen around task number 50 in Figure~\ref{fig:task_completion_duration}. \changes{Table~\ref{tab:result_summary} summarizes the task completion rate (TCR), task completion duration (TCD), and the number of avoided navigation failures by CPIP{} and the Frequentist method for both simulation and real-robot experiments. CPIP{} achieves a significantly higher TCR across all experiments; moreover, CPIP{} either performs similarly to or outperforms the Frequentist approach in terms of TCD. The reduced task completion duration achieved by CPIP{} is due to proactively predicting and avoiding non-catastrophic failures, e.g., getting stuck behind falsely detected obstacles. The Frequentist approach, on the other hand, would experience these failures, and although it might be able to eventually complete the task by re-planning, it will suffer from a longer task completion duration. This effect was more pronounced in the real-robot maze experiment, where the distance traveled by the robot in each task was shorter compared to the other experiments, and hence the relative task completion delay caused by non-catastrophic failures was larger. } Figure~\ref{fig:test_environments_2} illustrates snapshots of the test environments and highlights the different sources of navigation failures encountered by the robot, which include different types of texture-less obstacles as well as reflective surfaces. \subsection{Ablation Study} \begin{figure}[h] \centering \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1.0\linewidth, trim=0 0 0 0,clip]{figs/classification_report_first_test_set_rev1.pdf} \caption{Known environment} \label{fig:seen_test_env} \end{subfigure} \begin{subfigure}[b]{0.45\linewidth} \includegraphics[width=1.0\linewidth, trim=0 0 0 0,clip]{figs/classification_report_second_test_set_rev1.pdf} \caption{Novel environment} \label{fig:unseen_test_env} \end{subfigure} \caption{Results of the navigation failure prediction for CPIP{} vs.
an end-to-end classifier that does not use introspective perception (\subref{fig:seen_test_env}) in a previously seen environment and (\subref{fig:unseen_test_env}) in a novel environment.} \label{fig:classification_report} \vspace{-1.5em} \end{figure} In order to evaluate the importance of introspective perception in the pipeline of CPIP{}, we conduct an ablation study. We train a classifier that, instead of leveraging the information extracted by introspective perception, directly receives the raw captured RGB images as input and outputs the probability of each class of failure occurring in a specified time window in the future. We use a convolutional neural network with the AlexNet architecture similar to that used in prior work~\cite{daftry2016} for predicting failures of perception. We train the classifier on the same simulation dataset used for training CPIP{}, and we compare the performance of both methods in predicting navigation failures both in a previously unseen environment---the same test dataset described in~\cref{sec:failure_pred_acc}---as well as in a new set of deployments of the agent in the training environment. Figure~\ref{fig:classification_report} shows the average precision, recall, and f1-score metrics over all classes, i.e., the two classes of failures and the no-failure class, for both CPIP{} and the end-to-end classifier. While both methods perform similarly well in a previously seen environment, CPIP{} significantly outperforms the alternative classifier in the novel environment. Leveraging the features extracted by introspective perception simplifies the learning task and allows CPIP{} to achieve better generalizability given the same amount of training data. This is especially beneficial for task-level failure prediction, where the volume of training data is limited due to the costly nature of acquiring data from examples of robot failures.
\section{Introduction} \IfJournalThenElse{\IEEEPARstart{A}{s}}{As} robots become increasingly available, they are deployed for tasks where autonomous navigation in uncontrolled environments is crucial to success, such as package delivery, warehouse automation, and home service settings. Deploying robots over extended periods of time and in such open-world settings requires addressing failures originating from real-world uncertainty and imperfect perception. Continuous operator monitoring, while effective, is cumbersome and thus not scalable to many robots or large environments. We are thus interested in developing \emph{competence-aware} agents capable of assessing the probability of successfully completing a given task. Such agents would learn from failures and leverage the acquired knowledge when planning to improve their robustness and reliability. Previous efforts towards competence-aware path planning and motion planning either rely solely on statistical analysis of logged instances of failures in the configuration space of the robot and do not benefit from the sensing information collected by the robot~\cite{hawes2017strands}, or are application-specific and designed to reduce the probability of failure for a specific perception module such as visual SLAM~\cite{costante2016perception}. While there has been progress on introspective perception to enable perception algorithms to learn to predict their sources of errors~\cite{rabiee2019ivoa, rabiee2020ivslam}, the outputs of such algorithms have not yet been exploited in robot planning. We present competence-aware path planning via introspective perception (CPIP), a general framework that bridges the gap between path planning and introspective perception and allows the robot to iteratively learn and exploit task-level competence in novel deployment environments.
CPIP{} models the path planning problem as a Stochastic Shortest Path (SSP) problem and builds a model that represents both the topological map of the environment and the competence of the robot in traversing each part of the map autonomously. CPIP{} leverages introspective perception to predict the task-level competence of the robot in novel deployment environments and employs a Bayesian approach to update its estimate of the robot's competence online during deployment. CPIP{} then uses this information to plan paths that reduce the risk of failures. Our experimental results demonstrate that CPIP{} converges to the optimal planning policy in novel deployment environments while reducing the frequency of navigation failures by more than $80\%$ compared to the state-of-the-art competence-aware path planning algorithms that do not leverage introspective perception. \section{CPIP Definition} \label{sec:cpip_definition} CPIP{} is a framework for integrating path planning with introspective perception in life-long learning settings. It is defined as a tuple $\langle \mathcal{M}, \mathcal{I}, \mathcal{H} \rangle$, where $\mathcal{M}$ is a stochastic planning model, $\mathcal{I} = \{\mathbb{I}_k\}_{k=1}^{N}$ is a set of introspective perception modules, and $\mathcal{H}$ is a task-level competence predictor. CPIP{} leverages introspective perception and the competence predictor model to predict the probability of task-level failures given the raw sensory data at every time step and uses these estimates to update the planning model iteratively during robot deployments, and hence learns policies that reduce the probability of failures. In~\cref{sec:planning}, we introduce the planning model $\mathcal{M}$ and explain how it incorporates the probability of autonomous navigation failure in path planning.
We then explain introspective perception $\mathcal{I}$ and the competence predictor model $\mathcal{H}$ and how they are used to structure the problem of learning to predict instances of navigation failures in~\cref{sec:introspection}. \section{Competence-Aware Planning} \label{sec:planning} The CPIP{} planning model $\mathcal{M}$ uses a representation of the environment that includes both the connectivity of a set of sparse locations on the map and the probability of successful traversal between each pair of connected neighboring locations. In this section, we explain this model and how it is actively updated during deployments. \subsection{Planning Model Description} \label{sec:planning_background} \begin{figure}[t] \centering \includegraphics[width=0.95\linewidth, trim=100 130 200 0,clip]{figs/cpip_mdp_diagram_trans} \caption{Planning SSP for an example environment.}\label{fig:cpip_mdp_diagram} \IfConference{\vspace{-1em}} \end{figure} The input to our problem is a topological map of the environment in the form of a directed graph $G=\langle N,E\rangle$ composed of a set of nodes, $N$, and a set of edges, $E$. Each node represents a location, and each edge $e$ is defined by a tuple $\langle n_e^i, n_e^j, t_e, p_e \rangle$. Here, $n_e^i$ is the starting node, $n_e^j$ is the ending node, $t_e$ is the expected traversal time for the edge $e$, and $p_e$ is the probability of successfully traversing the edge. Given the topological map, $G$, we model the planning problem as a Stochastic Shortest Path (SSP) problem, a formal decision-making model for reasoning in stochastic environments where the objective is to find the least-cost path from a start state to a goal state.
An SSP is a tuple $\langle S, A, T, C, s_0, G \rangle$ where $S$ is a finite set of states, $A$ is a finite set of actions, $T : S \times A \times S \rightarrow [0,1]$ represents the probability of reaching state $s' \in S$ after performing action $a \in A$ in state $s \in S$, $C : S \times A \rightarrow \mathbb{R}^+$ represents the expected immediate cost of performing action $a \in A$ in state $s \in S$, $s_0 \in S$ is an initial state, and $G \subset S$ is a finite (possibly singleton) set of goal states such that $T(s_g, a, s_g) = 1 \land C(s_g, a) = 0 \hspace{1mm} \forall a \in A, s_g \in G$. A solution to an SSP is a policy $\pi : S \rightarrow A$ that indicates that action $\pi(s) \in A$ should be taken in state $s \in S$. A policy $\pi$ induces the value function $V^{\pi} : S \rightarrow \mathbb{R}$ that represents the expected cumulative cost $V^{\pi}(s)$ of reaching $s_g$ from state $s$ following the policy $\pi$. An optimal policy $\pi^*$ minimizes the expected cumulative cost $V^*(s_0)$ from the initial state $s_0$. In our problem, $S = N \times \tilde{S}$ is a finite set of states comprised of the map nodes $N$ and a finite set of failure states $\tilde{S}$ and $A = E \cup \tilde{A}$ is a finite set of actions comprised of the directed edges in the graph and a finite set of recovery actions $\tilde{A}$. $T(s,a,s')$ is determined by the probability of successfully traversing the edge $e = (s,s')$, $p_e$, which is zero if the action $a$ does not correspond to the edge $e$. In a failure state $s$, $T(s, a, s') = 0$ if $a \notin \tilde{A}$ and $s \neq s'$. $C(s,a)$ is set to $t_e$ if $a \in E$, and the expected recovery cost for $s$ otherwise. Figure~\ref{fig:cpip_mdp_diagram} illustrates the planning SSP for an example urban environment. 
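For concreteness, the optimal value function of such an SSP satisfies the Bellman equation $V(s) = \min_a \left[C(s,a) + \sum_{s'} T(s,a,s')\,V(s')\right]$ and can be computed by standard value iteration. Below is a minimal sketch on a toy two-state problem with hypothetical costs and transition probabilities; it is not the CPIP{} planner itself:

```python
def value_iteration(states, T, C, goals, tol=1e-6):
    """Solve V(s) = min_a [ C(s,a) + sum_{s'} T(s,a,s') * V(s') ].

    T: dict (s, a) -> list of (s', prob); C: dict (s, a) -> cost.
    Goal states are absorbing with zero cost.
    """
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            if s in goals:
                continue
            best = min(
                C[(s, a)] + sum(p * V[sp] for sp, p in T[(s, a)])
                for (s2, a) in C if s2 == s
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < tol:
            return V

# Toy map: from node n0, a cheap edge that fails (robot stays put) with
# probability 0.2, or a safe detour of cost 2 that always reaches the goal.
states = ["n0", "goal"]
T = {("n0", "risky"): [("goal", 0.8), ("n0", 0.2)],
     ("n0", "safe"):  [("goal", 1.0)]}
C = {("n0", "risky"): 1.0, ("n0", "safe"): 2.0}
V = value_iteration(states, T, C, goals={"goal"})
print(round(V["n0"], 3))  # expected cost 1 / 0.8 = 1.25
```

Here the ``risky'' action models an edge whose failure returns the robot to its current node: its expected cost $1/0.8 = 1.25$ beats the safe detour of cost $2$, so the optimal policy takes it.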
During robot deployments, the transition function is updated to reflect the latest belief over the probability of navigation failures in traversing each edge on the map, or equivalently the probability of successful traversals. Next, we explain the method for updating the transition function. \subsection{Updating the Failure Belief during Deployment} CPIP{} builds an SSP model to represent the topological map of the environment as described in~\cref{sec:planning_background}. CPIP{} updates the aforementioned SSP model structure during deployments as it collects more observational data from the environment, altering the underlying transition function such that the resultant model represents not just the map but the competence of the robot in traversing it. In order to achieve that, the occurrence of a failure of class $f_i$ at edge $e$ is assumed to be a random variable from the categorical distribution $F_e \sim \text{Cat}(p_{1:L})$. The belief over this variable is defined as $\bel_t(F_e = f_{i}) = p(F_e = f_{i} | z_{1:t})$, where the subscript $t$ indicates the $t^{\text{th}}$ traversal of the edge and $z_t$ is the observation made by the robot during that traversal. Applying Bayes' rule yields \IfConference{\vspace{-2em}} \begin{center} \begin{equation} \begin{aligned} \belOf{t}{f_{i, e}} &= \dfrac{p\left(z_t | f_{i, e}, z_{1:t-1}\right) p\left(f_{i, e} | z_{1:t-1}\right)}{p\left(z_t | z_{1:t-1}\right)} \\ &= \dfrac{p\left(f_{i, e} | z_t \right) p\left(z_t \right) \belOf{t-1}{f_{i, e}}} {p(f_{i, e}) p(z_t | z_{1:t-1})} \quad.
\end{aligned} \end{equation} \end{center} Defining the negation of $f_{i, e}$ as $p(\negf_{i, e}) = 1 - p(f_{i, e}) = \sum_{j\neq i}p(f_{j,e})$, the belief can be maintained as the log-odds ratio \IfConference{\vspace{-2em}}\IfJournal{\vspace{-2em}} \begin{center} \begin{equation} \begin{aligned}\label{eq:log_odds} l_t(f_{i, e}) &= \log\left( \dfrac{\belOf{t}{f_{i, e}}} {\belOf{t}{\negf_{i, e}} } \right) \\ &= \log\left( \dfrac{p(f_{i, e} | z_t)}{1 - p(f_{i, e} | z_t)} \right) + l_{t-1}(f_{i, e}) - l_0(f_{i, e}) \quad, \end{aligned} \end{equation} \end{center} where $l_0(f_{i, e}) = \log\left( \frac{p(f_{i, e})}{1 - p(f_{i, e})} \right)$ is the prior in log-odds form. Before the first deployment of the robot in a new environment, $p(f_{i, e}) = \epsilon$ and $p_e = p(f_{L,e}) = 1 - \sum_{i \neq L}p(f_{i, e})$ for every $e \in E$. Upon each traversal of an edge, the above relation is used to update the transition function of the planning SSP model such that $T_t(s, e, \tilde{s_i}) = \belOf{t}{f_{i, e}} = 1 - \frac{1}{1 + \exp(l_t(f_{i, e}))}$. The main term that needs to be computed for updating the belief in Eq.~\ref{eq:log_odds} after each traversal is $p\left(f_{i, e} | z_t \right)$, known as the inverse observation likelihood. In CPIP{} it is implemented by two functions, each handling one of the two types of observations $z_t$: 1) occurrence of a failure of class $f_i$, indicated via intervention signals issued either by a human or a supervisory sensing unit and denoted by $s_{t,i}$; 2) sensory input that the robot continuously acquires, such as RGB images captured by on-board cameras, denoted by $\mathbf{I}_t$. 
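The recursion in Eq.~\ref{eq:log_odds} amounts to a few lines of arithmetic. The sketch below uses an invented prior and invented inverse-likelihood values (none of these numbers come from CPIP{}) to show how repeated failure observations sharpen the belief:

```python
import math

def logit(p):
    return math.log(p / (1.0 - p))

def update_log_odds(l_prev, p_f_given_z, l0):
    # Eq. (log_odds): add the measurement logit, subtract the prior once.
    return logit(p_f_given_z) + l_prev - l0

def belief_from_log_odds(l):
    # Matches the update rule T_t(s, e, s~_i) = 1 - 1/(1 + exp(l)).
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Illustrative prior and a sequence of inverse-likelihood outputs for one
# failure class on one edge.
p_prior = 0.1
l = l0 = logit(p_prior)
for p_obs in [0.7, 0.8, 0.6]:      # three traversals suggesting failure
    l = update_log_odds(l, p_obs, l0)

belief = belief_from_log_odds(l)   # grows well above the prior of 0.1
```

Working in log-odds makes the update a single addition per traversal and avoids repeatedly renormalizing the posterior.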
For the former, the inverse observation likelihood is implemented as \vspace{-3em} \begin{center} \begin{align} p\left(f_{i, e} | z_t = s_{t,j} \right) = \begin{cases} \delta & i = j \\ \frac{1 - \delta}{L - 1} & i \neq j \end{cases} \end{align} \end{center} where $\delta$ is a constant coefficient. The inverse observation likelihood function for the latter type of observations, however, is machine-learned; it is one of the key components of this work that allows CPIP{} to reach an accurate estimate of $\belOf{t}{f_{i, e}}$ without requiring the robot to experience costly failures. CPIP{} structures the problem of learning $p(f_{i, e} | z_t = \mathbf{I}_t)$ such that it can be solved with a small number of failure examples as training data. Introspective perception is leveraged to extract features associated with errors in perception from the high-dimensional raw sensory data. These features are then used to learn to predict the probability of different classes of navigation failures. By learning this likelihood function, the robot can better navigate its environment, proactively avoiding paths that are known to lead to failures and reactively adjusting its policy upon encountering novel situations that may lead to failures. In the following section we describe the different parts of this learning problem. \section{Failure Prediction via Introspective Perception}\label{sec:introspection} \begin{figure*}[t] \centering \vspace{2mm} \includegraphics[width=0.9\linewidth ,trim=0 0 0 0,clip]{figs/comp_model/cpip_model_2} \caption{Navigation competence predictor model architecture.} \label{fig:comp_pred_model} \vspace{-1mm} \end{figure*} To predict navigation failures given the sensory data, we need to approximate the function $p(f_{i, e} | z_t = \mathbf{I}_t)$. 
End-to-end learning of this function is impractical because it requires a large amount of training data, yet catastrophic failures of robots executing tasks such as autonomous navigation occur infrequently. The scarcity of these examples makes it challenging to learn a classifier that predicts the probability of task execution failure directly from the raw sensory data. Without enough training data and without abstracting the acquired high-dimensional sensory data, the learned classifier is bound to overfit to the training data. We instead propose to factorize $p(f_{i, e} | z_t) = \int_{\phi} p(f_{i, e} | \phi)\,p(\phi | z_t)\,d\phi$, where $\phi$ are the features extracted from observations by introspective perception --- a model-free approach to predicting arbitrary errors of perception. \subsection{Introspective Perception} Early works on introspective perception~\cite{zhang2014predicting, daftry2016} defined a perception algorithm to be introspective if it is capable of predicting the occurrence of errors in its output given the current state of the robot. Follow-up works~\cite{rabiee2019ivoa, rabiee2020ivslam} extended this definition and required such perception algorithms to predict the probability of perception error conditioned on the region of the raw sensory data that the output depends on, e.g., an image patch of an RGB camera image where the estimated depth of the scene is erroneous. This is obtained by means of an introspection function that is trained on empirical data. In CPIP{}, the robot is assumed to be equipped with one or more introspective perception modules; each module has a learned function $\mathbb{I}_k: Z \rightarrow \mathbb{R}^n$, which extracts features $\phi \in \mathbb{R}^n$ from the raw sensory data $Z$ that encode information about sources of perception errors. 
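The factorization above can be illustrated with a discretized feature space; the feature categories and the conditional probability tables below are invented purely for illustration:

```python
# Marginalize over a discrete set of introspection features phi:
# p(f | z) = sum_phi p(f | phi) * p(phi | z).

# Invented feature categories produced by an introspection module.
phis = ["textureless", "glare", "nominal"]

# p(phi | z): posterior over features given the current image (illustrative).
p_phi_given_z = {"textureless": 0.6, "glare": 0.1, "nominal": 0.3}

# p(f | phi): probability of a navigation failure given each feature.
p_f_given_phi = {"textureless": 0.5, "glare": 0.4, "nominal": 0.01}

p_f_given_z = sum(p_f_given_phi[phi] * p_phi_given_z[phi] for phi in phis)
# 0.5*0.6 + 0.4*0.1 + 0.01*0.3 = 0.343
```

The point of the factorization is that $p(f\,|\,\phi)$ is far lower-dimensional than $p(f\,|\,z)$, so it can be learned from few failure examples.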
The outputs of all introspective perception modules are fed to a navigation competence predictor $h{}: \mathbb{R}^{n \times K} \rightarrow [0, 1]^{L}$, which learns to estimate the likelihood of each of the different classes of failure ${f}_{1:L}$ given a set of sources of perception errors, i.e. $P(f_l | \phi_{1:K})$ such that $\sum_l P(f_l | \phi_{1:K}) \leq 1$. The inverse observation likelihood function in Eq.~\ref{eq:log_odds} is then estimated as the composition of the above two functions, i.e. $p(f_{i, e} | z_t = \mathbf{I}_t) = h{}\left(\mathbb{I}_{1:K} \left( \mathbf{I}_t \right) \right)_{[i]}$. \changes{It should be noted that although CPIP{} assumes a constant set of failure classes, the distinct sources of failure, whose number is often much larger than the number of failure classes, are not enumerated \emph{a priori}. Each failure class corresponds to a different severity level and hence a different failure recovery cost that is considered in planning. There exist, however, a large number of failure sources that lead to failures with the same severity level. For example, a high-severity class of failure in robot navigation is collision, for which there exist numerous failure sources including false negatives in obstacle detection due to texture-less surfaces or small object sizes, terrain type misclassification, dynamic obstacles, occlusions, etc. Furthermore, while the distinct sources of failure differ between environments, the classes of failure are specific to the objective of the domain (e.g. navigation, manipulation, etc.), irrespective of the environment, and are hence comparatively easy to enumerate.} In this paper we implement introspective perception for a block matching-based stereo depth estimator~\cite{pulli2012real} using the same convolutional neural network architecture as that used in~\cite{rabiee2019ivoa} for the introspection function. 
The training data is collected autonomously using a depth camera as supervisory sensing, which is only occasionally available and provides oracular information about the true depth of the scene. \subsection{Competence Predictor Model}\label{sec:competence_pred_model} We implement the navigation competence predictor model $h{}:\mathbb{R}^{n \times K} \rightarrow [0, 1]^{L}$ as an ensemble of two deep neural networks. The input to the model is a list of image patches $I_i \in \mathbb{R}^{n}$ extracted from the same input image $\mathbf{I}$, and the output is the probability of each class of failure. The architecture, shown in Figure~\ref{fig:comp_pred_model}, consists of two sub-models that are trained independently. The \texttt{global\_info} network is a convolutional neural network (CNN) that operates simultaneously on all input image patches arranged on a blank image at their original pixel coordinates. The input to this network is equivalent to the input image masked at all regions except those predicted by introspective perception to lead to errors. The \texttt{global\_info} CNN captures task-contextual and spatial information from the current frame related to competence. By masking out parts of the full image deemed to be unrelated to perception failures, we ensure that the \texttt{global\_info} CNN does not overfit to specific environments. The \texttt{local\_info} network is a CNN that takes individual image patches as input. The output of this branch is the probability of each class of failure for each single image patch. This network learns correlations between navigation failures and image features that lead to perception errors. The goal of this branch is to locally pinpoint the potential source of navigation failures in the image space when a class of failure is predicted by the \texttt{global\_info} network. The last stage of the model is a temporal filtering of the output of each of the two networks. 
Failure class probabilities that are produced by the \texttt{global\_info} network are passed through a mean filter to output $P_f \in [0, 1]^{L}$. Moreover, image patches that are predicted by the \texttt{local\_info} network to lead to navigation failures are tracked in the full image over consecutive frames to form a set of active tracklets $\Lambda_i$ for each class of failure $f_i$. The output of the model is obtained via strict consensus such that \begin{equation} P(f_i) = \begin{cases} {P_f}_{[i]} & \text{if $\Lambda_i \neq \emptyset$} \\ 0 & \text{otherwise} \end{cases} \label{eq:comp_model} \end{equation} In other words, the predicted probability of each class of failure provided by the \texttt{global\_info} network is only accepted if the \texttt{local\_info} network corroborates it by detecting at least one potential cause of the same class of failure in the image space. During deployment, if $ \exists j \mid P(f_j) > \epsilon $, i.e. there exists consensus between the two branches of the network on the existence of any class of failure, the output of the competence predictor model is used to update the belief in Eq.~\ref{eq:log_odds}. \vspace{-1em} \changes{ \section{Implementation Details} \label{sec:impl_details} In this section we provide implementation details for our application of CPIP{}, i.e. path planning for an unmanned ground vehicle (UGV) that uses a stereo vision-based depth estimator~\cite{pulli2012real} for obstacle avoidance. } \changes{ \subsection{Autonomy Stack} Our navigation software consists of global path planning on the navigation graph and local path planning to follow the planned path while avoiding dynamic obstacles that the robot does not know about \emph{a priori}. CPIP{}'s planning model performs the global path planning, and we use a trajectory roll-out local path planner. 
The 3D reconstruction of the environment from stereo vision is processed, and any points whose height is greater than \SI{15}{\cm} and less than the height of the robot are detected as obstacles. All obstacle point coordinates are projected onto the ground plane, converted to a 2D laser-scan format, and used by the local path planner to select the least-cost trajectory from a set of sampled trajectories, such that the robot keeps a large clearance from obstacles and makes progress towards the next way-point on the global plan.} \changes{ We implement introspective perception for the depth estimator and train a CNN to predict depth estimation errors similar to prior work~\cite{rabiee2019ivoa}. The network is composed of 5 convolution layers followed by 3 fully connected layers. The inputs to the network are image patches of size $70 \times 70$ pixels extracted from the $512 \times 512$ pixel images captured by the left camera of the stereo pair. The output is the probability of depth estimation error, and all image patches predicted to lead to perception error with a probability of $>0.5$ are passed to the competence predictor model. We use a similar CNN architecture as that used for introspective depth estimation for both sub-models of the competence predictor, with the only difference being the number of nodes in the output layer of the network. The full navigation competence prediction pipeline, which consists of introspective perception and the competence predictor model, runs at \SI{5}{\hertz} on a laptop with an Intel Core i9-9880H and a GeForce RTX 2080 Max-Q.} \subsection{Training of CPIP{}} CPIP{} has two learned components, introspective perception and the competence predictor model, which are trained sequentially. The training data is extracted from logs of robot deployments in the training environment. The logs include data collected by the primary sensors, i.e. RGB images captured by the stereo cameras, as well as data collected by supervisory sensors, i.e. 
the Orbbec Astra depth camera that is only occasionally available. Furthermore, intervention signals issued by a human operator upon occurrence of navigation failures are also recorded. The deployment logs are processed offline. First, introspective perception is trained with data that is autonomously labeled using the supervisory sensing. Then, the training data for the competence predictor model is prepared by passing the raw sensory data through the introspective perception module and labeling the output image patches as associated with one of the classes of navigation failures if they fall within a fixed time window preceding the occurrence of such failures. Each of the two sub-models of the competence predictor model explained in \S\ref{sec:competence_pred_model} is then trained using a cross-entropy loss. \section{Planning Model Description} \label{sec:planning_background} The input to our problem is a topological map of the environment in the form of a directed graph $G$ composed of a set of nodes, $N$, and a set of edges, $E \subseteq N^2$. Each node represents a location (i.e., an $xy$-coordinate), and each edge $e$ is associated with a corresponding value $t_e$, denoting the expected traversal time for the edge, and $p_e$, denoting the probability of successfully traversing the edge. Given the topological map, $G$, we model the planning problem as a Stochastic Shortest Path (SSP) problem, a formal decision-making model for reasoning in stochastic environments where the objective is to find the least-cost path from a start state to a goal state. 
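As a minimal illustration of this input, a topological map can be represented as a dictionary of annotated edges. The node names, traversal times, success probabilities, and the recovery cost below are invented, and folding $p_e$ into a single expected edge cost is only one simple way to combine the two annotations:

```python
# Topological map: directed edges annotated with expected traversal time t_e
# and success probability p_e (all values illustrative).
edges = {
    ("dock", "hall"): {"t": 12.0, "p": 0.95},
    ("hall", "lab"):  {"t": 20.0, "p": 0.80},
    ("dock", "lab"):  {"t": 45.0, "p": 0.99},
}

def expected_edge_cost(e, recovery_cost=60.0):
    """Expected cost of attempting edge e: traversal time plus the
    expected recovery cost paid when the traversal fails."""
    a = edges[e]
    return a["t"] + (1.0 - a["p"]) * recovery_cost
```

Under these invented numbers, the nominally slower but more reliable edge can become preferable once the expected recovery cost of failures is accounted for.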
\section{Related Work} \label{related_work} The idea of integrating perception with planning and control was introduced by pioneering works on active perception, which suggested that the performance of perception can be improved by selecting control strategies that depend on the current state of perception data interpretation as well as the goal of the task~\cite{bajcsy1988active, aloimonos1988active}. Researchers have applied this idea to various levels of control, ranging from active vergence control for a stereo pair of cameras~\cite{krotkov1988focusing} to object manipulation given the next best view for surface reconstruction of unknown objects~\cite{krainin2011autonomous}. One line of work predicts and avoids degradation of perception performance given features extracted from the raw sensory data. Costante et al.~\cite{costante2016perception} propose perception-aware path planning for MAVs that maximizes the information gain from image matching while solving for dense V-SLAM. Sadat et al.~\cite{sadat2014feature} and Deng et al.~\cite{deng2018feature} follow a similar approach and use an RRT* planner where the cost of a path is defined as a linear combination of the length of the path and the predicted density of image features along the path to reduce localization errors. In these works, estimates of competence for perception are obtained via hand-crafted metrics, and the path planner cost function is designed specifically to address the reliability of V-SLAM; it is not generalizable to arbitrary perception tasks. \changes{There exists a body of work on risk-aware path planning, where it is assumed that the autonomous agent has accurate models of the uncertainty of perception. Jasour et al.~\cite{jasour2019risk} use \emph{a priori} known parametric probability distributions for obstacle locations and leverage chance-constrained optimization to plan paths that have collision probabilities below a user-specified threshold. 
Schirmer et al.~\cite{shah2018airsim} use an offline-built localization uncertainty map of the environment to perform risk-aware path planning. Barbosa et al.~\cite{barbosa2021risk} and similarly Chung et al.~\cite{chung2019risk} relax the full-observability assumption and plan paths in partially known environments, yet they assume that the agent has an observation model with a known noise process that is used to update its belief over the state of the world. However, in practice, estimates of the uncertainty of perception algorithms, as obtained by methods such as the Cramér-Rao lower bound, are often overconfident and inaccurate. An alternative approach is to directly model task-level failures as a function of the state of the world as acquired by perception. Saxena et al.~\cite{saxena2017learning} learn to predict task-level failures that are due to errors in perception from the raw sensory data; however, predicted failures are used to trigger an enumerated set of recovery actions rather than proactively generating plans that reduce the probability of failures. Similarly, Gurau et al.~\cite{guruau2018learn} leverage image data and location-specific features to perform reactive planning by selecting among different levels of autonomy at any point in time.} A different line of work on competence-aware path planning takes a more holistic view of failures: it keeps track of all robot failures, regardless of which perception algorithm caused them, and leverages this information to proactively generate plans with a reduced risk of failure. Lacerda et al.~\cite{lacerda2014optimal} aggregate the failure instances of a service mobile robot while navigating the environment to model the probability of success for traversing each edge of a topological map using an MDP and generate navigation policies that prefer paths with high success probabilities. 
Krajník et al.~\cite{krajnik2017fremen} use a spectral model to learn mid- to long-term environmental changes, assuming they have a periodic nature, and exploit it to improve robot navigation and localization by predicting such changes. Vintr et al.~\cite{vintr2019spatio} use a similar approach to learn a spatio-temporal model for predicting the presence of humans in the robot's deployment environment at different times of the day. Since these methods are based on statistical analysis of the frequency of navigation failures, they require ample experience, with several samples from each location in the map, to achieve an accurate estimate of the robot's competence in navigating that specific location. Moreover, because they use location-specific features of the environment to estimate the robot's competence, these estimates do not generalize to novel deployment environments. Basich et al.~\cite{basich2020learning} further expand the concept of competence to the optimal level of autonomy and define a stochastic model for solving the path planning problem, where the generated plans consist of a path and the optimal level of autonomy for each segment of the path. To learn to predict the probability of failure at each level of autonomy, this work requires a curated list of environmental features that are potentially correlated with robot failures. In this work we leverage machine-learned models capable of predicting errors of individual perception modules to obtain an accurate estimate of the robot's competence at successfully navigating an environment. CPIP{} uses the estimate of competence to plan reliable and short-duration paths. 
Our work is similar to~\cite{lacerda2014optimal, basich2020learning} in that it reasons about the competence of the robot at successfully performing navigation tasks at a topological map level; however, it removes the need for an enumerated list of perception related features by automatically learning to extract such features from the raw sensory data. Furthermore, CPIP{} significantly reduces the frequency of failures experienced in new environments by exploiting the generalizable learned perception features instead of merely relying on statistical analysis of the location of previous navigation failures.
\section{Additional Results} \subsection{Large Search Budgets} We study the simple-primitive robustness problems (from \autoref{tbl:robust-simple}, top), as these problems resulted in a small number of \texttt{Sympy} timeouts, allowing for scaling of verification. We increase the beam width to 500 candidates, and show \texttt{Fail@k} for $k\in \{1,10,50,200,500\}$ in \autoref{apx:fig:beam500}. The top of the solution list remains brittle with the larger beam size: that is, with beam size 500 the top-ranked solutions are still frequently incorrect, supporting the findings in \autoref{tbl:search}. However, the correct solution often exists deeper in the ranked list, i.e. \texttt{Fail@k} decreases as $k$ increases, ranging from 0.1 \texttt{Fail@500} for \texttt{cos} to 3.7 \texttt{Fail@500} for \texttt{tan}. As an illustration, \autoref{apx:fig:beam500-correctrank} shows the ranking of the correct solution for $\texttt{exp}$-robustness problems that had a solution in the top-500 beam candidates. \begin{figure} \centering \includegraphics[width=0.75\columnwidth]{images/beam500_robustness_simple.png} \caption{\texttt{Fail@k} with a \textbf{large search-and-verify budget} (500 candidates, beam search) on \textbf{simple-primitive robustness} problems. The model is `brittle' in that the top of its ranked solution list is often made up of incorrect solutions; e.g. the top-ranked solution is incorrect around 25\% of the time. However, the correct solution often exists within the model's top 500 predictions on these problems. } \label{apx:fig:beam500} \includegraphics[width=0.6\columnwidth]{images/beam500_correctrank.png} \caption{Correct candidate's ranking out of 500 candidates (beam search) on \textbf{exp-primitive robustness} problems. } \label{apx:fig:beam500-correctrank} \end{figure} \subsection{Alternate Decoding Strategies} So far, we have used beam search, a deterministic search that approximately finds the highest-scoring output sequence. 
An alternative approach is decoding a set of sequences $\{\hat\ensuremath{\mathbf{y}}_1,\ldots,\hat\ensuremath{\mathbf{y}}_k\}$ by \textit{sampling} recursively from model-dependent per-token distributions, $y_t^{(i)}\sim q(y_t|y_{<t},\ensuremath{\mathbf{x}}, p_\theta)$, where each $\hat\ensuremath{\mathbf{y}}_i=(y_1^{(i)},\ldots,y_{T_i}^{(i)})$. We use the common approach of temperature sampling. \autoref{apx:fig:beamsample} shows \texttt{Fail@k} rates for sampling 500 candidates at temperatures $\{0.6,0.8,1.0\}$, and beam search with 500 candidates, for $\texttt{exp}$-robustness problems (other simple-primitives gave similar results). Beam search outperforms sampling, which we attribute to better exploration with beam search: namely, beam search guarantees unique candidates, while sampling returns many duplicates in this setting (here, only 14 unique sequences out of 500 samples). \begin{figure} \centering \includegraphics[width=\columnwidth]{images/beam_sample_robustness_simple.png} \caption{Comparing \texttt{Fail@k} using \textbf{sampling vs. beam search} on \textbf{exp-robustness}. Beam search outperforms sampling, which we attribute to better exploration with beam search: namely, beam search returns unique candidates, while sampling returns many duplicates in this setting. } \label{apx:fig:beamsample} \end{figure} \subsection{SAGGA -- Quantitative Summary} \autoref{tbl:genetic-general} provides a quantitative summary of the archives discovered with SAGGA. \autoref{fig:genetic} shows the rate at which SAGGA fills its archive with failure cases under different mutation strategies and fitness functions. 
\input{tables/genetic_summary} \input{figures/genetic_fig} \section{Additional Experiment Details} \subsection{Experimental Setup} We use the implementation and pre-trained model from \citet{lample2019deep} for all of our experiments, specifically the \texttt{FWD+BWD+IBP} model which obtained top-10 accuracies of 95.6\%, 99.5\%, and 99.6\% on their publicly available test sets.\footnote{\url{https://github.com/facebookresearch/SymbolicMathematics/}, commit \texttt{4596d07}.} Our evaluation is based on their code provided in \texttt{beam\_integration.ipynb}. We use their utilities for inputs and outputs, and by default use beam search with beam size 10. When computing Fail@50 we use beam size 50. Following \citet{lample2019deep}, we use Sympy to check whether the derivative of a prediction is equal to the original problem, \texttt{simplify(derivative - f) == 0}. We generously count the prediction as correct if a timeout occurs. Since Sympy's \texttt{simplify} function is imperfect, there is a possibility of false negatives, which is an inherent limitation of verifying whether a solution is correct. However, an answer that is not feasibly verifiable in a search-and-verify setting is incorrect for practical purposes. In preliminary experiments, we found that additionally simplifying the derivative and the function before subtracting tended to reduce timeouts, and found that Sympy's ability to simplify hyperbolic functions (e.g. $\sinh, \cosh$) was limited, so we discard problems with hyperbolic functions from our analysis. Future work could verify with additional libraries, and we release all predictions for inspection. \subsection{Robustness} \paragraph{Test accuracy does not imply robustness.} For $X_{\mathcal{N}_1}$ we sample 100 successful validation problems $f$ and $10$ values of $k$, while for $X_{\mathcal{N}_2}$ we sample 1000 successful validation problems. 
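The verification criterion above can be sketched with Sympy as follows; our actual evaluation additionally handles timeouts and uses the utilities of \citet{lample2019deep}, both of which this standalone sketch omits:

```python
import sympy as sp

x = sp.Symbol("x")

def is_correct_antiderivative(problem, candidate):
    """Check whether candidate is an antiderivative of problem, following
    the simplify(derivative - f) == 0 criterion."""
    diff = sp.simplify(sp.diff(candidate, x) - problem)
    return diff == 0

# A correct and an incorrect candidate for f(x) = 2*x + cos(x):
f = 2 * x + sp.cos(x)
good = x**2 + sp.sin(x)   # accepted (any constant offset is also accepted,
bad = x**2 - sp.sin(x)    # since only the derivative is checked); rejected
```

Because \texttt{simplify} is incomplete, this check can produce false negatives when it fails to reduce a correct difference to zero, which is the limitation discussed above.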
\subsection{Compositionality} \paragraph{Validation problems.} Fail@50 \texttt{valid} had abnormally many timeouts compared to the other experiments; for this setting only, we do not count a timeout as correct. \subsection{Extrapolation} \paragraph{More details of integer distribution.} As a rough picture of integer values in the training distribution, we sample 100,000 sequences from each of the \texttt{FWD} and \texttt{BWD} train sets, convert them to Sympy, and count Sympy equation tree nodes of type \texttt{Integer}. We find that 99.4\% of the positive integers were in the range [1, 100), and that other magnitude buckets were non-empty but sparsely represented -- e.g. [100, 200) had 0.2\%, [1000, 2000) had 0.096\%, and [10000, 20000) had 0.005\% of the positive integers in the problem sample. \subsection{Runtime vs. Candidates} We generously stop early if a successful candidate is found, and use a 1-second timeout for \texttt{Sympy}; thus the results are a lower bound on the runtime. In the worst case, for $N$ problems, without early stopping and with a $t$-second timeout, verification takes $O(ktN)$ seconds, e.g. 41.6 hours to verify 50 candidates on 3,000 problems with a 1-second timeout. \section{SAGGA} \paragraph{Default fitness.} Unless otherwise noted, we use a fitness that favors short (and hence interpretable) problems, \begin{align} F(f_\theta, \ensuremath{\mathbf{x}}) &= m(\ensuremath{\mathbf{x}},f_\theta(\ensuremath{\mathbf{x}}; 1))\cdot \frac{1}{|\ensuremath{\mathbf{x}}|}, \end{align} which is positive when the model returns an incorrect integral of problem $\ensuremath{\mathbf{x}}$, and higher for shorter problems. \paragraph{Mutations.} A mutation transforms a problem's equation tree; i.e. $h(T_\ensuremath{\mathbf{x}})\rightarrow T'$ where $T_\ensuremath{\mathbf{x}}$ is the equation tree of problem $\ensuremath{\mathbf{x}}$. SAGGA supports mutations for internal tree nodes and leaf nodes. 
The \textit{internal} node mutations replace a node $v_i$ with: \begin{itemize} \item \textbf{Constant}: an integer $k\sim \mathcal{U}(v_\text{min}, v_\text{max})$. \item \textbf{Symbol}: $x$. \item \textbf{Operation}: $v_i'\sim \{+, *, ** \}$ \item \textbf{Add-arg}: $v_i'(w_{1:j},w_{j+1})$, where $w_{1:j}$ are the previous arguments to $v_i$ and $w_{j+1}$ is a random simple operation (see below). For instance, $\texttt{sum}(1, 2)\rightarrow \texttt{sum}(1, 2, 3x^2)$. If $v_i$ is a one-argument function, this mutation adds a new argument via a sum, e.g. $\texttt{exp}(1)\rightarrow \texttt{exp}(1+3x^2)$. \end{itemize} The \textit{leaf} node mutations replace a node $v_i$ with: \begin{itemize} \item \textbf{Constant}: an integer $k\sim \mathcal{U}(v_\text{min}, v_\text{max})$. \item \textbf{Symbol}: $kx$ where $k\sim \mathcal{U}(v_\text{min}, v_\text{max})$. \item \textbf{Simple-op}: random simple operation (see below). \end{itemize} Each random simple operation is of the form $k_1\circ x ^ k_2$, where $k_1\sim \mathcal{U}(v_\text{min}, v_\text{max})$, $\circ\sim \{*,**,/\}$, $k_2\sim \{1, 2\}$. For instance, $3x^2$, $5/x$. \paragraph{Default settings.} Unless otherwise noted, we run SAGGA with the following default settings: \begin{itemize} \item Beam size: 10 \item Evaluation $m(\ensuremath{\mathbf{x}},\cdot)$: Fail@1 \item Seed size: 100 \item Generation size: 1000 \item Seed selection: k-means, $k=10$ \item Fitness threshold $\tau$: 0.01 \item Target archive size: 1000 \item Integer perturbation range $v_\text{min},v_\text{max}$: [-1000, 1000] \item Mutations: all \item Seed problems: $\{1, \quad x,\quad x+1,\quad x^2 + x + 1\}$ \end{itemize} Each generation (iteration) takes around 4 minutes on a single Quadro RTX 8000 GPU. \subsection{Robustness.} \paragraph{Settings.} \begin{itemize} \item Integer perturbation range $v_\text{min},v_\text{max}$: [-100, 100] \item Mutations: only \textbf{Constant} mutations. 
\end{itemize} \paragraph{Seeds.} Polynomial robustness: \begin{align*} X_{\texttt{poly}}= \{&1, 2x, 2/x, 2x+1, 2/x+1, \\ &2x^2+2x+1, 2x^2+2/x+1,\\ &2x^3 + 2x^2+1, \\ &2x^{42}+2x^3+2x^2+1\} \end{align*} Trigonometric robustness: \begin{align*} X_{\texttt{trig}}= \{&17\cos(83x),17\cos(83x)+1,\\ &34\sin(77x),34\sin(77x)+1,\\ &2\cos(2x)+2x, 2\cos(2x)+2x+1,\\ &2\sin(2x)+2x, 2\sin(2x)+2x+1,\\ &2\sin(2x)\cos(2x)\} \end{align*} In these experiments, the structure of the seeds remains fixed and only the integers are varied. The seed elements with non-trivial integers were selected based on the manual neighborhood experiments. Note that this simply accelerates the experiment, as the algorithm could discover these elements itself. \subsection{Out-of-distribution.} \textbf{General failures.} We run SAGGA in two settings. The first is with default settings (see above). The second biases the search towards trigonometric functions, using the seed, \begin{align*} X_{\texttt{trig-general}}= \{&2\cos(2x),2\cos(2x)+1,\\ &2\sin(2x),2\sin(2x)+1,\\ &2\cos(2x)+2x, 2\cos(2x)+2x+1,\\ &2\sin(2x)+2x, 2\sin(2x)+2x+1,\\ &2\sin(2x)\cos(2x)\} \end{align*} and set the default fitness to zero unless the problem contains a trigonometric function. \textbf{Target lengths.} We use the fitness, \begin{align} F(f_\theta, \ensuremath{\mathbf{x}}) &= m(\ensuremath{\mathbf{x}},f_\theta(\ensuremath{\mathbf{x}}; 1))\cdot \frac{1}{\left||\ensuremath{\mathbf{x}}| - \ell\right|}, \end{align} where $\ell$ is the target length and $|\ensuremath{\mathbf{x}}|$ is the problem length. To compare growth rates more clearly, we use a smaller seed size of 50 and a smaller generation size of 300 than in the other experiments, and generate 5,000 failures rather than 1,000, resulting in more iterations per algorithm setting.
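The target-length fitness can be sketched as follows, treating $m$ as a flag for an incorrect integral and $|\mathbf{x}|$ as the problem's token count, with a zero-distance guard that the formula leaves implicit (all names are ours):

```python
def target_length_fitness(is_incorrect, num_tokens, target_len):
    """Fitness favoring problems whose length is close to the target:
    zero unless the model's integral is wrong, and larger the closer
    the problem length |x| is to the target length ell.
    (Illustrative sketch; the zero-distance guard is our own detail.)"""
    if not is_incorrect:          # m(x, f(x)) = 0: model was correct
        return 0.0
    distance = abs(num_tokens - target_len)
    return 1.0 / max(distance, 1)
```

A wrong answer on a problem exactly at the target length scores 1.0, and the score decays as the length drifts from the target.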
\begin{table*} \begin{center} \scriptsize \setlength{\tabcolsep}{4pt} \begin{tabular}{llll} \toprule x & hyp\_simplified & derivative & difference \\ \midrule \textbf{-103} & -103*x & -103 & 0 \\ -104 & -48*x - 625 & -48 & 56 \\ -136 & -128*x & -128 & 8 \\ -33 & -31*x - 256 & -31 & 2 \\ \hline \textbf{2*x**(42)+21} & x*(2*x**42 + 903)/43 & 2*x**42 + 21 & 0 \\ 2*x**(42)+22 & 2*x*(x**42 + 469)/43 & 2*x**42 + 938/43 & -8/43 \\ 2*x**(42)+28 & 2*x*(x**42 + 614)/43 & 2*x**42 + 1228/43 & 24/43 \\ 2*x**(42)+68 & 2*x*(x**42 + 1502)/43 & 2*x**42 + 3004/43 & 80/43 \\ \hline \textbf{-47 + 2/x - 2/x**70} & -47*x + 2*log(x) + 2/(69*x**69) & -47 + 2/x - 2/x**70 & 0 \\ -47 + 2/x - 2/x**71 & -47*x + 2*log(x) + 2/(35*x**70) & -47 + 2/x - 4/x**71 & -2/x**71 \\ -47 + 2/x - 31/x**71 & -47*x + log(x**2) + x**(-60) & -47 + 2/x - 60/x**61 & (31 - 60*x**10)/x**71 \\ -71 + 36/x - 2/x**71 & -71*x + 36*log(x) + 1/(17*x**70) & -71 + 36/x - 70/(17*x**71) & -36/(17*x**71) \\ \hline \textbf{13*cos(18*x)} & 13*sin(18*x)/18 & 13*cos(18*x) & 0 \\ 13*cos(19*x) & sin(19*x) & 19*cos(19*x) & 6*cos(19*x) \\ 13*cos(83*x) & sin(83*x)/17 & 83*cos(83*x)/17 & -138*cos(83*x)/17 \\ 17*cos(47*x) & sin(47*x) & 47*cos(47*x) & 30*cos(47*x) \\ \hline \textbf{13*cos(82*x) - 59} & -59*x + 13*sin(82*x)/82 & 13*cos(82*x) - 59 & 0 \\ 13*cos(83*x) - 59 & -59*x + sin(83*x)/13 & 83*cos(83*x)/13 - 59 & -86*cos(83*x)/13 \\ 17*cos(37*x) - 49 & -49*x + sin(37*x) & 37*cos(37*x) - 49 & 20*cos(37*x) \\ 17*cos(41*x) - 45 & -45*x + sin(41*x) & 41*cos(41*x) - 45 & 24*cos(41*x) \\ \hline \textbf{10*sin(45*x)*cos(2*x)} & -5*cos(43*x)/43 - 5*cos(47*x)/47 & 5*sin(43*x) + 5*sin(47*x) & 0 \\ 10*sin(47*x)*cos(2*x) & -255*cos(45*x)/2201 - 215*cos(49*x)/2201 & 11475*sin(45*x)/2201 + 10535*sin(49*x)/2201 & 470*sin(45*x)/2201 - 470*si... \\ 10*sin(90*x)*cos(2*x) & -5*cos(43*x)/44 - 5*cos(47*x)/46 & 215*sin(43*x)/44 + 235*sin(47*x)/46 & 215*sin(43*x)/44 + 235*sin(47... 
\\ 19*sin(90*x)*cos(2*x) & cos(2*x)**2*cos(90*x)/2 & -2*sin(2*x)*cos(2*x)*cos(90*x) - 45*sin(90*x)*... & -(43*sin(88*x)/2 + 19*sin... \\ \hline \textbf{-x**2 + x + log(4)*tan(x)} & -x**3/3 + x**2/2 - log(4)*log(cos(x)) & -x**2 + x + log(4)*sin(x)/cos(x) & 0 \\ -x**2 + x + log(4)*tan(17*x**2) & -x**3/3 + x**2/2 + log(2)*log(cos(17*x**2)... & -x**2 + 2*x*log(2)*sin(17*x**2)/cos(17*x**2) + x & (x - 1)*log(4)*tan(17*x**2) \\ -x**2 + x + log(4)*tan(2*x**2) & -x**3/3 + x**2/2 + log(2)*log(cos(2*x**2)... & -x**2 + 4*x*log(2)*sin(2*x**2)/cos(2*x**2) + x & (2*x - 1)*log(4)*tan(2*x**2) \\ -x**2 + x + log(4)*tan(63/x**2) & -x**3/3 + x**2/2 - log(2)*log(cos(63/x**2)... & -x**2 + x + 252*log(2)*sin(63/x**2)/(x**3*cos(... & (126 - x**3)*log(4)*tan(63/... \\ \hline \textbf{sqrt(3*x + 3) - 2} & -2*x + 2*sqrt(3)*(x + 1)**(3/2)/3 & sqrt(3)*sqrt(x + 1) - 2 & 0 \\ sqrt(-86**(x**2) + 62/x) - 40 & -40*x + acosh(86**(-x**2/2)/x) & -40 + (-86**(-x**2/2)*log(86) - 86**(-x**2/2)/... & -sqrt(-86**(x**2) + 62/x) ... \\ sqrt(14 + 62/x) + 4 & (49*x**(3/2) + 217*sqrt(x) + sqrt(7*x + 31... & (147*sqrt(x)/2 + (28 + sqrt(749)/(sqrt(x)*sqrt... & (-x*(7*x + 31)**(5/2)*sqrt(2... \\ sqrt(14 + 62/x) - 2 & (49*x**(3/2) + 217*sqrt(x) + (-14*x + 31*... & (147*sqrt(x)/2 + (-14 + sqrt(854)/(2*sqrt(x)*s... & (x*(7*x + 31)**(5/2)*sqrt(6... \\ \hline \textbf{tan(exp(2))/(18*x)} & log(x)*tan(exp(2))/18 & tan(exp(2))/(18*x) & 0 \\ tan(26*x + 2 + exp(2))/(18*x**75) & [add, mul, div, INT-, 1, INT+, 1, 5, 1, 2, mul... & -- & -- \\ tan(exp(-9504*x**2))/(18*x) & -log(cos(exp(-9504*x**2))**(-2))/18144 & 44*x*exp(-9504*x**2)*sin(exp(-9504*x**2))/... & 44*x*exp(-9504*x**2)*sin(... \\ tan(exp(-96*x))/(18*x) & -log(cos(exp(-96*x))**(-2))/1728 & exp(-96*x)*sin(exp(-96*x))/(9*cos(exp(-96*x))) & (2*x - exp(96*x))*exp(-9... \\ \bottomrule \end{tabular} \caption{\label{apx:tbl:sagga-robustness}\textit{SAGGA robustness}. 
We show the problem $\ensuremath{\mathbf{x}}$, the simplified prediction from the neural sequence integrator, its derivative, and the derivative's difference with $\ensuremath{\mathbf{x}}$. The prediction is incorrect when the difference is not zero. When the model's prediction does not parse successfully, we show its unparsed infix tokens. We show a nearby problem that the model gets correct in \textbf{bold} (either hand-selected or a validation example); all other problems are failures discovered by SAGGA. } \end{center} \end{table*} \begin{table*} \begin{center} \scriptsize \begin{tabular}{llp{5cm}p{4cm}p{4cm}} \toprule {} & x & hyp\_simplified & derivative & difference \\ \midrule 0 & \textbf{30**x} & 30**x/log(30) & 30**x & 0 \\ 1 & 119**x & 119**(x - 1) & 119**(x - 1)*log(119) & 119**(x - 1)*(-119 + log(119)) \\ 2 & 132**x & 132**x/(1 + log(132)) & 132**x*log(132)/(1 + log(132)) & -132**x/(1 + log(132)) \\ 3 & 136**x & exp(x*log(136)) & exp(x*log(136))*log(136) & -136**x + exp(x*log(136))*log(136) \\ \hline 4 & -100*x**x & -50*x**2 & -100*x & -100*x + 100*x**x \\ 5 & -149*x**x & -149*x**2/2 & -149*x & -149*x + 149*x**x \\ 6 & -151*x**x & -151*x**2/2 & -151*x & -151*x + 151*x**x \\ \hline 7 & 158*x**(x**2) + 611 & 158*x**2/sqrt(x**2 + 1) + 611*x & -158*x**3/(x**2 + 1)**(3/2) + 316*x/sqrt(x**2 + 1) + 611 & -158*x**3/(x**2 + 1)**(3/2) + 316*x/sqrt(x**2 + 1) - 158*x**(x**2) \\ 8 & 256*x**(x**2) + 191 & x*(191*x**2 + 256*x**(x**2) + 191)/(x**2 + 1) & -2*x**2*(191*x**2 + 256*x**(x**2) + 191)/(x**2 + 1)**2 + x*(382*x + 256*x**(x**2)*(2*x*log(x) + x))/(x**2 + 1) + (191*x**2 + 256*x**(x**2) + 191)/(x**2 + 1) & 512*x**(x**2 + 2)*(x**2*log(x) + log(x) - 1)/(x**4 + 2*x**2 + 1) \\ 9 & 332*x**(x**2) + 559 & x*(559*x**2 + 332*x + 559)/(x**2 + 1) & -2*x**2*(559*x**2 + 332*x + 559)/(x**2 + 1)**2 + x*(1118*x + 332)/(x**2 + 1) + (559*x**2 + 332*x + 559)/(x**2 + 1) & 332*(2*x - x**(x**2) - 2*x**(x**2 + 2) - x**(x**2 + 4))/(x**4 + 2*x**2 + 1) \\ \hline 10 & -240**x + 2*cos(2*x) & 
-exp(x*log(240)) + sin(2*x) & -exp(x*log(240))*log(240) + 2*cos(2*x) & 240**x - exp(x*log(240))*log(240) \\ 11 & -398**x + 2*cos(2*x) & -exp(x*log(398)) + sin(2*x) & -exp(x*log(398))*log(398) + 2*cos(2*x) & 398**x - exp(x*log(398))*log(398) \\ 12 & -692**x + 2*sin(2*x) & -exp(x*log(692)) - cos(2*x) & -exp(x*log(692))*log(692) + 2*sin(2*x) & 692**x - exp(x*log(692))*log(692) \\ \bottomrule \end{tabular} \caption{\label{apx:tbl:sagga-exponent}\textit{SAGGA OOD -- $g(x)^x$}. We show the problem $\ensuremath{\mathbf{x}}$ discovered by SAGGA, the simplified prediction from the neural sequence integrator, its derivative, and the derivative's difference with $\ensuremath{\mathbf{x}}$. The prediction is incorrect when the difference is not zero. For the first cluster, we manually found a nearby problem that the model gets correct in \textbf{bold}; thus this cluster can be seen as a \textit{robustness} failure. The second two clusters involve $x^x$ or $x^{x^2}$ which to our knowledge do not have analytical integrals (e.g. see \url{https://www.wolframalpha.com/input/?i=integral+of+x**x}); these clusters can be seen as \textit{exploits}. 
} \end{center} \end{table*} \begin{table*} \begin{center} \scriptsize \begin{tabular}{llp{7cm}p{7cm}l} \toprule {} & x & hyp\_raw & Correct Integral (\texttt{Wolfram Alpha})\\ \midrule 0 & 169*sin(4*x)/x & 169*log(x**2)/8 - 169*cos(4*x)/4 & $169\ \mathrm{Si}(4 x)$\\ 1 & -2*sin(42/x) & cos(42/x)/21 & $-2 (-42 \mathrm{Ci}(42/x) + x \sin(42/x))$\\ 2 & -2*sin(185*x**2)*cos(2*x) & 4*sin(2*x)*sin(185*x**2)/3421 + 370*cos(2*x)*cos(185*x**2)/3421 & \texttt{-(Sqrt[Pi/370] (Cos[1/185] FresnelS[Sqrt[2/(185 Pi)] (-1 + 185 x)] + Cos[1/185] FresnelS[Sqrt[2/(185 Pi)] (1 + 185 x)] - (FresnelC[Sqrt[2/(185 Pi)] (-1 + 185 x)] + FresnelC[Sqrt[2/(185 Pi)] (1 + 185 x)]) Sin[1/185]))}\\ 3 & 357**(x**(2**x)) + 2*sin(2*x) & exp(x*log(2)) - cos(2*x) & No result \\ 4 & 1/(x**48*(3*x+2)**49) & add, mul, div, INT-, 1, 0, 9, 0, 9, 1, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 5, INT+, 1, 0, 2, 4, ln, x, add, mul, div, INT+, 1, 0, 9, 0, 9, 1, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, INT+, 1, 0, 2, 4, ln, add, div, INT+, 2, INT+, 3, x, mul, INT-, 1, mul, pow, add, mul, INT+, 1, 0, 0, 9, 6, 2, 9, 4, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 & See https://www.wolframalpha.com/input/? i=integral+of+1\%2F\%28x**48*\%283*x\%2B2\%29**49\%29\\ \bottomrule \end{tabular} \caption{\label{apx:tbl:sagga-exploits}\textit{SAGGA OOD -- exploits}. We show the problem $\ensuremath{\mathbf{x}}$, and the neural sequence integrator's prediction as either raw prefix tokens or (when possible) in its infix form. All problems are failures discovered by SAGGA. 
} \end{center} \end{table*} \section{Conclusion} We study generalization in symbolic mathematics using the predominant modeling paradigm: a large-scale transformer trained with maximum likelihood. We find deficiencies that are not captured by test accuracy, including brittleness to small perturbations, difficulty composing known solutions, and gaps in the training distribution. We offer speculations based on our results. Due to the large space of equations, practical empirical distributions do not provide a dense sampling of individual problem types (e.g. $k_1\cos(k_2x)$), and each empirical sample contains shared biases from the underlying data generator (e.g. integer values, lengths). Thus, sparse test sets do not adequately measure systematic generalization. From a learning perspective, generic networks trained with SGD do not necessarily favor the simplest hypothesis to explain the data; thus a sparse training set yields an underconstrained hypothesis space, with hypotheses that do not strongly generalize (e.g. Table~\ref{tbl:raw-outputs}), causing behavior that breaks simple rules (e.g. adhering to a template or following the sum rule). We suspect that inductive biases -- e.g. encoded through the training distribution, architectural components, or learning algorithm -- are needed to narrow the hypotheses to those that strongly generalize. \section{Integrating Parts But Not The Whole} \label{sec:compositionality} While the preceding section identified weaknesses in robustness -- for instance, integrating $26x^{42}$ but not $88x^{42}$ -- a remaining question is whether successfully integrating a collection of primitives implies that a \textit{composition} of those primitives will be successfully integrated. Compositionality refers to forming compound equations from known primitives and operations.
A compositional model should correctly integrate equations of the form, \begin{align} f=f_1 \circ f_2 \circ \cdots \circ f_k, \end{align} where $f_1,\ldots,f_k$ are equations that the model successfully integrates, and $\circ$ is a binary operation (e.g. addition). For instance, a system that integrates $x^2$ and $\cos x$ and is capable of addition should successfully integrate $x^2 + \cos x$. Formally, we say that a model is $k$-compositional with respect to equations $X$ and operation $\circ$ when it successfully integrates any combination of $k$ equations from $X$, $\sum_{\ensuremath{\mathbf{x}}\in \tilde{X}} m(\ensuremath{\mathbf{x}},f_\theta(\ensuremath{\mathbf{x}}))=0$, where $\tilde{X}=\{f_1\circ\cdots \circ f_k |f_i\in X\}$. We evaluate $k$-compositionality with respect to addition, using simple primitive functions and validation problems. As integration is linear, $\int (f+g)=\int f + \int g$, compositionality with respect to addition is a reasonable requirement. \input{tables/compositionality_examples} \input{tables/compositionality} \paragraph{Succeeding on simple primitives, failing on their sum.} We collect simple primitives from the coefficient robustness experiments that the model successfully integrates (\texttt{coeff}), and successful exponents $x^c$ or $x^{1/c}$, $c\in [0,1000]$ (\texttt{exp}). We randomly sample 1000 compound equations $f_1+\ldots+f_k$ for $k\in \{2,3,4\}$ and evaluate the failure rate. Table~\ref{tbl:compositional} shows the results. Adding two primitives gives failure rates of 29\% and 85\% for coefficient and exponent primitives, respectively, despite failing 0\% of the time on the individual primitives. As the number of compositions increases, the failure rate increases towards 100\%. Table~\ref{tbl:compositionality-examples} shows examples. \paragraph{Succeeding on test problems, failing on their sum.} We perform a similar experiment using successful validation-set functions. 
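In both experiments, the compound problems are sums of sampled primitives; a minimal sketch of this sampling step (the function name and with-replacement choice are our own illustrative details):

```python
import random

def sample_compound(primitives, k, rng=random):
    """Form one compound problem f1 + ... + fk by summing k sampled
    primitives (an illustrative stand-in for the paper's sampling of
    1000 compound equations per k)."""
    parts = [rng.choice(primitives) for _ in range(k)]
    return " + ".join("(%s)" % p for p in parts)
```

For example, `sample_compound(["26*x**42", "cos(7*x)"], 2)` yields a sum of two primitives the model integrates correctly in isolation.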
We filter out examples longer than 20 tokens so that composed equations remain within the training domain in terms of length, and sample 1000 compound equations $f_1+\ldots+f_k$ for $k\in \{2,3,4\}$. As seen in Table~\ref{tbl:compositional}, the failure rate grows as the number of compositions increases, similar to the simple-primitives case. Maximizing the likelihood of a large training set did not yield a compositional model. \section{Introduction} Despite their success, recent studies reveal undesirable properties of conventional neural sequence models, such as assigning high probabilities to unrealistic sequences \citep{holtzman2020The,welleck2020consistency}, susceptibility to adversarial attacks \citep{wallace2019universal}, and limited generalization on symbolic tasks \citep{saxton2019analysing,nogueira2021InvestigatingTL}, even with very large models and datasets \citep{henighan2020ScalingLF}. Nevertheless, \citet{lample2019deep} recently demonstrated that a standard sequence-to-sequence model, which we call a \textit{neural sequence integrator}, performs surprisingly well at \textit{symbolic integration}, solving problems that are beyond traditional symbolic solvers and achieving near-perfect test accuracy. \input{tables/front_examples} Recent studies suggest that achieving \textit{strong and systematic generalization} is difficult with vanilla sequence-to-sequence methods, as they latch onto regularities in the training data, learning dataset-specific solutions that do not generalize beyond the training distribution (e.g. \citet{agrawal2016analyzing,lake2018still,bahdanau2019,hupkes2020}). Symbolic integration -- finding the integral of a mathematical function -- specifically requires these forms of generalization, as it involves an underlying structure that extends beyond any fixed training distribution.
For instance, the rule $\int k = kx+C$ applies to all constants $k$, and the sum rule $\int f_1+\int f_2=\int (f_1+f_2)$ means that integrating two functions correctly should imply integrating their sum correctly. Symbolic integration also offers a structured problem domain and a verifier for evaluating whether a proposed solution is correct, making it an effective setting for evaluating generalization. As the neural sequence integrator relies on a common recipe -- a large-scale transformer trained to maximize the likelihood of a training set of input-output sequences -- it is especially interesting to study whether it generalizes systematically. \input{figures/fig1} In this paper, we find a discrepancy between the traditional notion of generalization captured by test set accuracy and the generalization needed in symbolic mathematics. While the model's test accuracy is nearly perfect, we find that this breaks down when testing its \textit{robustness}, \textit{compositionality}, and \textit{out-of-distribution generalization} (e.g. Table~\ref{tbl:front-examples}). We describe a methodology for evaluating these aspects by constructing problem sets and developing a genetic algorithm, SAGGA (Symbolic Archive Generator with Genetic Algorithms), that automatically discovers diverse and targeted failures. We find that successfully integrating an in-distribution problem does not imply success on nearby problems, despite being governed by the same underlying rule (\emph{robustness}). Moreover, the model often succeeds on a collection of problems without being able to systematically compose those problems (\emph{compositionality}), and struggles to generalize to longer problems, larger values, and functions not covered in training (\emph{out-of-distribution}). In addition to the model's approximate mode being incorrect -- i.e.
the most probable sequence returned by beam search -- the deficiencies are present deeper into its ranked list of candidate solutions, impacting the model's effectiveness in a \textit{search-and-verify} setting. Overall, our investigation highlights the difficulty of achieving robustness, compositionality, and out-of-distribution generalization with the predominant modeling and learning approach, and the importance of evaluating beyond the test set, across aspects of generalization that are required by the task at hand. \section{Departing Further From Training} \label{sec:ood} The preceding experiments found problems that were nearby to, or composed directly from, in-distribution examples. In this section, we deliberately move away from the model's training distribution to evaluate its \textit{out-of-distribution} generalization. First, we study \textit{extrapolation} to equation sizes larger than those in its training distribution, and to integer ranges that are only sparsely covered in the training set. Then we use SAGGA to expose exotic failures and reveal problem classes that were not covered during training. \input{tables/extrapolation} \input{figures/integer_extrapolation} \paragraph{Longer problems are more difficult.} First, we use the same data-generating process as for training, but \textit{vary its parameters} to depart from the training distribution. Specifically, we test extrapolation on the number of operator nodes in each equation tree, using \citet{lample2019deep}'s data generation process and varying the \texttt{max\_ops} parameter. Table~\ref{tbl:extrapolation} shows performance when \texttt{max\_ops} is increased past the model's training domain (1--15). The neural sequence integrator does show some extrapolation to equation trees with more operator nodes than it was trained on, but its failure rate increases substantially as the number of nodes increases.
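The failure rates reported throughout are Fail@$k$ quantities: a problem counts as a failure when none of the model's top-$k$ candidates verifies. A minimal sketch (helper names are ours; `verify_fn` stands in for the symbolic derivative check):

```python
def fail_at_k(problems, candidates_fn, verify_fn, k):
    """Fraction of problems for which none of the top-k candidates
    verifies as a correct integral (illustrative Fail@k)."""
    failures = 0
    for x in problems:
        topk = candidates_fn(x)[:k]       # ranked candidate solutions
        if not any(verify_fn(x, y) for y in topk):
            failures += 1
    return failures / len(problems)
```

Increasing $k$ can only lower the failure rate, since more candidates get a chance to verify.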
\paragraph{Larger failures on larger digits.} Next, we study performance as integer values increase, quickly going out of domain. In a sample of 200,000 sequences from the training distribution, 99.4\% of the positive integers were between 1 and 100. Other ranges were non-empty but sparsely represented; for instance, 0.2\% of the integers were between 100 and 200, and 0.096\% between 1,000 and 2,000. Figure~\ref{fig:integer-extrapolation} shows performance on primitive functions with coefficients from the specified range. As in the robustness experiments, the $x$ and $\ln$ primitives perform well, showing that there is \textit{some} ability to use large numbers. However, performance severely degrades for the $\texttt{exp}, \texttt{sin}, \texttt{cos}, \texttt{tan}$ primitives as the coefficient magnitudes increase, reaching near-100\% failure rates on large coefficients. \input{tables/genetic_general_exploits} \input{tables/genetic_general} \paragraph{Discovering unsupported functionality.} Next, we run SAGGA in an unconstrained setting with all mutation types, favoring short problems using the fitness, $F(f_\theta, \ensuremath{\mathbf{x}}) = m(\ensuremath{\mathbf{x}},f_\theta(\ensuremath{\mathbf{x}}))\cdot \frac{1}{|\ensuremath{\mathbf{x}}|},$ which is positive when the model returns an incorrect integral for $\ensuremath{\mathbf{x}}$, and higher for shorter problems.
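One SAGGA generation can be condensed as follows; this is an illustrative sketch under our own naming, and the actual loop additionally performs seed selection (e.g. k-means, per the default settings in the appendix):

```python
import random

def sagga_step(seeds, mutate, fitness, generation_size, tau, rng=random):
    """One SAGGA generation (condensed sketch): mutate seed problems
    into a generation of candidates, then keep those whose fitness
    exceeds the threshold tau as newly discovered failures."""
    generation = [mutate(rng.choice(seeds)) for _ in range(generation_size)]
    return [x for x in generation if fitness(x) > tau]
```

Repeating this step while re-seeding from the archive steers the search toward short (or otherwise high-fitness) failing problems.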
SAGGA discovers \emph{exploits} based on the neural sequence integrator's limited training distribution, such as problems whose integral is expressed using the Gamma function $\Gamma(\cdot)$ or the cosine integral $\mathrm{Ci}$, which are not included in its training data (Table~\ref{tbl:genetic-general-exploits}).\footnote{\url{https://en.wikipedia.org/wiki/Trigonometric_integral}.} These examples are a reminder that the sequence-to-sequence paradigm determines which functions are `built in' by inclusion in training data; omitted behavior is left unspecified, leaving it susceptible to exploits. Finally, the last problem in Table~\ref{tbl:genetic-general-exploits} caused the neural sequence integrator to enter a non-terminating loop during decoding (Appendix Table~\ref{apx:tbl:sagga-exploits}), a known idiosyncrasy of autoregressive models with beam search \citep{welleck2020consistency}. SAGGA also finds many clusters that indicate that the neural sequence integrator struggles when $x$ appears in an exponent. The discovered problems in Table~\ref{tbl:sagga-general} are a microcosm of our previous findings: For the first cluster, we manually found a nearby problem, $30^x$, that the model gets correct; this cluster is a \textit{robustness} failure. The second cluster shows how such failures cascade further as the function is \textit{composed}. The final two clusters involve $x^x$ or $x^{x^2}$, which do not have analytical integrals;\footnote{\url{https://www.wolframalpha.com/input/?i=integral+of+x**x}} these clusters are \textit{exploits}. \paragraph{Finding problems with target properties.} Finally, we generate failures of a target length by running SAGGA with target lengths of 10, 20, and 40. As seen in Figure~\ref{fig:genetic-target-length}, SAGGA converges to find problems of the target length. Based on our extrapolation experiments, we expect the model to fail more often on longer equations.
The right-hand plot in Figure~\ref{fig:genetic-target-length} shows that it is also easier to \textit{find} failures for longer equations, in that the archive grows more quickly for longer target lengths. While we visually inspect short equations for interpretability, the growth rate is a reminder that the space of failures is vast for longer equations. \input{figures/genetic_length} \section{Related Work} In this work, we study systematic generalization in sequence models applied to symbolic integration, in terms of robustness, compositionality, and extrapolation, and develop a genetic algorithm for building adversarial problem sets. \paragraph{Symbolic mathematics and sequence models.} Several works study extrapolation to longer sequences and larger digits in synthetic arithmetic and basic mathematics tasks \citep{zaremba2014,trask2018neural,saxton2019analysing,nogueira2021InvestigatingTL}. Sequence models have also been applied to polynomial rewriting \citep{piotrowski2019can,agarwal2021analyzing}, and differential system stability \citep{charton2021learning}. For symbolic integration, \citet{davis2019use} argue that the neural sequence integrator's test performance should be qualified, though without an empirical demonstration. These critiques motivate our focus on the neural sequence integrator \citep{lample2019deep}, whose performance we characterize and empirically study in terms of systematic generalization. \paragraph{Systematic generalization.} Several works identify difficulties with modern methods on synthetic tasks (e.g. \citet{lake2018still,bahdanau2019,hupkes2020,kim2020cogs}) and machine translation \citep{raunak2019On}, with a focus on compositionality and extrapolation. Some methods address systematicity with inductive biases in model structure \citep{andreas2016neural,bahdanau2019}, and others through the data \citep{hill2020environmental,andreas2020good} or learning procedure \citep{lake2019compositional,vani2021iterated}. 
We focus on systematic generalization deficiencies in a state-of-the-art model in a new setting -- symbolic integration -- with additional aspects of generalization. \paragraph{Robustness and adversaries in sequence models.} Several works study robustness in NLP, including classification \citep{tu-etal-2020-empirical}, word substitutions \citep{jia-etal-2019-certified}, and domain shift in QA \citep{kamath-etal-2020-selective} and topic distributions \citep{oren-etal-2019-distributionally}. Several methods find adversarial examples in NLP \citep{morris2020textattack}. \citet{alzantot2018generating} use genetic algorithms in a classification setting, while we consider generation. \citet{michel2019evaluation} constrain input sequences to be similar and use a gradient-based attack to swap tokens. We face a non-differentiable cost and generate large collections of failures with a wide class of mutations. \section{Robust or Brittle?} \label{sec:robustness} First, we study whether the model's strong test-set performance adequately represents its \textit{robustness}. Robustness tells us whether the integration model systematically solves all problems in a neighborhood governed by a generalizable pattern; for instance, a model that solves $\int 26x^{42}$ should solve $\int 53x^{42}$. We study problems that are nearby to those from the original test distribution, as well as to simple primitive functions that offer fine-grained, interpretable control. A \textbf{robust} model is stable to small perturbations of its input, meaning that it gets nearby problems $\ensuremath{\tilde{\mathbf{x}}}$ correct when it gets a problem $\ensuremath{\mathbf{x}}$ correct.
Formally, let $X=\{\ensuremath{\mathbf{x}}_1,\ldots,\ensuremath{\mathbf{x}}_N\}$ contain problems that the model gets correct, $\sum_{\ensuremath{\mathbf{x}}\in X}m(\ensuremath{\mathbf{x}},f_\theta(\ensuremath{\mathbf{x}}))=0,$ and let $\mathcal{N}_d(\ensuremath{\mathbf{x}})$ be a set of problems that are nearby to $\ensuremath{\mathbf{x}}$ according to a distance $d(\ensuremath{\mathbf{x}},\ensuremath{\tilde{\mathbf{x}}})$. We quantify robustness via the failure rate on nearby problems, \begin{align} \label{eqn:robust-fail} \texttt{Fail@k}(f_\theta, X_\mathcal{N}), \end{align} where $X_\mathcal{N}=\bigcup_{\ensuremath{\mathbf{x}}\in X}\mathcal{N}_d(\ensuremath{\mathbf{x}})$. We measure this quantity by varying (i) the neighborhood $\mathcal{N}_d(\ensuremath{\mathbf{x}})$ used to generate nearby problems, and (ii) the seed problems $X$ to consider. Below, we will refer to a problem as $\ensuremath{\mathbf{x}}$ or $f$ interchangeably. \input{tables/robustness} \input{tables/robustness_examples} \subsection{Manually Testing Robustness} To define nearby problems, we first consider manual templates that minimally perturb a problem $f$, e.g. \begin{align*} k\cdot f, \quad f + \ln x, \quad \ldots \end{align*} These problems are nearby $f$ in the sense that a single operation is added to the problem's equation tree, or a small number of node values are changed in the tree. \paragraph{Brittleness on simple primitive functions.} We first investigate whether the neural sequence integrator is robust on \textit{simple primitive functions}, since they make up more complicated functions and are frequently entered by real-world users. We use a manual neighborhood that yields, \begin{align*} X_{\mathcal{N}}=\{&k_1\ln (k_2 x),\quad k_1\exp(k_2 x), \quad k_1x, \quad k_1x^{42},\\ & k_1\sin (k_2 x),\quad k_1\cos (k_2 x),\quad k_1\tan(k_2x)\}, \end{align*} where $k_1 \sim \mathcal{U}(a, b)$ and $k_2\sim \mathcal{U}(a, b)$ are randomly sampled coefficients from a range $(a, b)$.
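Generating such a neighborhood can be sketched with template strings and integer-uniform coefficients; the template encoding, duplicate bookkeeping, and all names are our own illustrative details:

```python
import random

def sample_primitive_problems(templates, a, b, n, rng=random):
    """Sample n distinct (template, k1, k2) problems with coefficients
    drawn uniformly from [a, b] (pairs sampled without replacement)."""
    seen = set()
    problems = []
    while len(problems) < n:
        t = rng.choice(templates)
        k1, k2 = rng.randint(a, b), rng.randint(a, b)
        if (t, k1, k2) in seen:
            continue
        seen.add((t, k1, k2))
        problems.append(t.format(k1=k1, k2=k2))
    return problems
```

For example, the template `"{k1}*sin({k2}*x)"` with $a=0$, $b=100$ yields problems like `3*sin(7*x)`.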
We use $[0,100]$, which is covered by the training distribution, and evaluate on 1,000 $k_1,k_2$ pairs sampled without replacement for each primitive. Table~\ref{tbl:robust-simple} shows the results. On a positive note, the neural sequence integrator is robust on the primitives $k_1 x$ and $k_1 \ln (k_2 x)$. The integral of $k_1 x$ is $\frac{k_1}{2} x^2$, so the model learned to divide by 2 for these cases. The integral of $\ln$ involves copying the coefficients into a correct template (that is, $\int k_1\ln(k_2 x)=k_1 x(\ln(k_2 x) - 1)$), and the neural sequence integrator learned this behavior. On the other hand, the model is surprisingly brittle on the other primitives. These require dividing coefficients (e.g. $\int k_1\cos(k_2 x)=\frac{k_1}{k_2}\sin (k_2 x)$). The failure rate shows that the model has not perfectly learned the required division behavior. Moreover, despite learning a `division by 2' rule for integrating $k_1 x$, the neural sequence integrator's failures on $k_1 x^{42}$ indicate that it did not perfectly learn an analogous `division by 43' rule. Table~\ref{tbl:robustness-examples} shows examples. \paragraph{Test accuracy does not imply robustness.} Next, we want to see whether the neural sequence integrator's strong test accuracy implies that it is robust on test problems. We use the validation set and perturb \textit{validation problems that the model correctly integrates} using the neighborhoods, \begin{align*} X_{\mathcal{N}_1}=\{&\frac{1}{k} f,\quad k\cdot f\},\quad X_{\mathcal{N}_2}=\{f + e^x, \quad f+\ln(x)\}, \end{align*} where $k \sim \mathcal{U}(1, 100)$. The first set multiplies the function by a constant, while the second adds a single primitive. Table~\ref{tbl:robust-simple} shows the results. Despite achieving perfect accuracy on the original problems, the model frequently fails under these slight perturbations.
The local neighborhood around validation examples reveals deficiencies in robustness that are not evident from validation performance alone, aligning with findings in NLP tasks \citep{gardner2020evaluating}. \subsection{Automatically Finding Robustness Failures} Next, we use SAGGA to automatically discover robustness failures in the neighborhood of a seed set of problems. \paragraph{Discovering brittleness near simple problems.} First, we run SAGGA and only allow it to mutate leaves in a problem's equation tree into a random integer. The problems are nearby in the sense that the tree's structure does not change; only a small number of its leaf values do. We use SAGGA to mutate the leaves of seed sets of 9 polynomials $X_\texttt{poly}$ and 9 trigonometric functions $X_\texttt{trig}$, which are listed in the Appendix. We run SAGGA until it discovers 1000 failing problems, then cluster these using k-means on SciBERT embeddings \citep{beltagy2019scibert} of each problem. Table~\ref{tbl:sagga-robustness-examples} shows three members from three discovered problem clusters, for the polynomial and trigonometric seeds. Intuitively, each cluster shows failures in a neighborhood around a prototypical problem -- for instance, on $2x^{42}+\textbf{k}$ the neural sequence integrator \textit{correctly} integrates $2x^{42}+\textbf{21}$, but not the problems in Cluster 2 (e.g. $2x^{42}+\textbf{22}$). See Appendix Table~\ref{apx:tbl:sagga-robustness} for more prototypes and the model's predictions. Curiously, each problem in a neighborhood is governed by a common template -- e.g. the problems $\{-104, -136, -33\}$ are governed by $\int k=kx+C$, yet the failures suggest that the neural sequence integrator either has not inferred the template or does not apply it consistently across the neighborhood. To investigate this phenomenon, we show the \textit{raw} model prediction in Table~\ref{tbl:raw-outputs}, along with its simplified version and derivative.
Compared to the underlying template $\int k = kx+C$ the model's raw output is long and complex. In contrast, the simplified version is short; we hypothesize this gap makes adhering to the template difficult. \input{tables/genetic_robustness_examples} \input{tables/raw_outputs_sagga_robustness} \paragraph{Discovering brittleness near validation problems.} Finally, we use SAGGA to discover difficult problems that are close to a target set $X$ -- in our case validation problems -- according to an explicit distance $d(\ensuremath{\mathbf{x}},\ensuremath{\tilde{\mathbf{x}}})$. This allows for less hand-designing of the perturbations. Specifically, we define a fitness which is high whenever a candidate is close to any problem in a target set $X$, \begin{align} \text{fitness}(\mathbf{\tilde x}) &= \left[\min_{\mathbf{x}\in X}d(\mathbf{x},\mathbf{\tilde x})\right]^{-1}\cdot m(\tilde{\mathbf{x}}, f_\theta(\mathbf{\tilde x})). \end{align} We randomly sample 10 validation problems to form $X$, set SAGGA's initial seed to $X$, and use cosine similarity of SciBERT vectors to define the distance $d(\ensuremath{\mathbf{x}},\ensuremath{\tilde{\mathbf{x}}})$. Since the \textit{distance} now constrains the problems, we are free to use a wider set of mutations: changing a node's operation, adding an argument to a node, and replacing the node with a random constant, symbol, or simple operation. Table~\ref{tbl:sagga-target-functions} shows example problems that SAGGA discovers around the successful validation problems, exposing a wider class of robustness failures than our preceding experiments. \input{tables/genetic_target_functions} \section{Is it a search problem? Distinguishing between model and search errors} \input{tables/search_probabilities} \input{tables/gold_beam} Both the experiments of \citet{lample2019deep} and our own generate candidate solutions from a sequence-to-sequence model using beam search. 
This raises the possibility that failures are due to search rather than the model: what if the highest-scoring sequences are correct, but not found? Specifically, we want to distinguish between \textbf{search errors}, which occur when $p_\theta(\ensuremath{\mathbf{y}}_*|\ensuremath{\mathbf{x}})\gg p_\theta(\ensuremath{\mathbf{y}}|\ensuremath{\mathbf{x}})$ but the search algorithm (e.g. beam search) does not return $\ensuremath{\mathbf{y}}_*$, and \textbf{model errors}, meaning $p_\theta(\ensuremath{\mathbf{y}}|\ensuremath{\mathbf{x}})\gg p_\theta(\ensuremath{\mathbf{y}}_*|\ensuremath{\mathbf{x}})$. Here $\ensuremath{\mathbf{y}}_*$ is any correct solution to problem $\ensuremath{\mathbf{x}}$, and $\ensuremath{\mathbf{y}}$ is the (incorrect) solution the model prefers. \subsection{The model is deficient: model errors.} We study simple-robustness and in-distribution problems, and find evidence of model deficiencies that would remain unresolved with perfect search. \paragraph{Robustness.} First, we study the simple-primitive robustness problems (e.g. $k_1\exp(k_2 x)$, see \autoref{tbl:robust-simple}), as these short problems resulted in a small number of timeouts, allowing us to scale up search and verification. We increase the beam size to 500 candidates, and study the model's probabilities on the 500 returned candidates and correct solutions. We refer to the candidates ranked by decreasing probability as $\ensuremath{\mathbf{y}}_{\text{beam}}^{(1)},\ldots,\ensuremath{\mathbf{y}}_{\text{beam}}^{(500)}$ (i.e. $\ensuremath{\mathbf{y}}_{\text{beam}}^{(1)}$ has the highest probability). When a correct solution is within the 500 returned candidates, the correct solution often has \textit{much lower probability} than the top candidate, $p_\theta(\ensuremath{\mathbf{y}}_{\text{beam}}^{(1)}|\ensuremath{\mathbf{x}})\gg p_\theta(\ensuremath{\mathbf{y}}_*|\ensuremath{\mathbf{x}})$.
Specifically, correct solutions often appear at the bottom of the candidates (\autoref{fig:beam500-mismatch}, orange), yet on average the bottom candidate $\ensuremath{\mathbf{y}}_{\text{beam}}^{(500)}$ has probability $\approx 0.0000035$, while the top candidate $\ensuremath{\mathbf{y}}_{\text{beam}}^{(1)}$ has probability $\approx 0.92$ (\autoref{tbl:mass}). These are model deficiencies: the model is \textit{confidently incorrect}, assigning very high probability to an incorrect solution at the top, and very low probability to correct solutions. When a correct solution is \textit{not} within the top 500 candidates, the model is again confidently incorrect, with the top candidate $\ensuremath{\mathbf{y}}_{\text{beam}}^{(1)}$ receiving $\approx 0.94$ probability. Improving the search algorithm -- e.g. by further increasing the search budget or using an alternative to beam search -- would inevitably return a low probability solution, as the 500 candidates already cover more than $99.4\%$ of the probability mass (\autoref{tbl:mass}). The findings again point to model errors. \paragraph{In-distribution.} Next, we study \textit{in-distribution} problems from the \texttt{FWD} validation set of \citet{lample2019deep}. On failure cases, we test if the ground-truth $\ensuremath{\mathbf{y}}_*$ is scored above the top $k$ beam candidates, meaning that failure@$k$ might be resolved with perfect search--$\ensuremath{\mathbf{y}}_*$ was scored more highly but was simply not found. As seen in \autoref{tbl:search}, the majority of failures -- 91.6\% for failure@1 and 65.2\% for failure@10 -- would remain unresolved with perfect search, again pointing to model deficiencies. \input{figures/search_probability} \subsection{Covering up deficiencies with search.} Our preceding experiments showed that the top of the model's ranked solution list is often made up of incorrect solutions, while correct solutions deeper into the solution list are assigned very low probabilities. 
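The decision rule we apply when labeling a failure can be written out explicitly (a sketch; the probabilities are sequence probabilities under the trained model):

```python
def classify_failure(p_top_candidate, p_correct, correct_was_found):
    """Label a failed problem as a search error or a model error.

    p_top_candidate: model probability of the highest-scoring (incorrect) candidate.
    p_correct: model probability of a known correct solution y*.
    correct_was_found: whether the search procedure returned y* at all."""
    if p_correct > p_top_candidate and not correct_was_found:
        # y* out-scores everything returned, so better search would fix it.
        return "search error"
    # The model itself prefers an incorrect solution (confidently incorrect).
    return "model error"
```

For instance, with the average probabilities reported above, `classify_failure(0.92, 0.0000035, True)` labels the case a model error.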
A natural question is whether we can simply `cover up' the deficient model by enumerating and verifying even more candidates, while ignoring the model's probabilities. On simple robustness problems (e.g. $k_1\exp(k_2 x)$), we find that large search budgets can alleviate failures, i.e. Fail@$k$ decreases as $k$ increases, for instance moving from roughly 30\% failure@1 to 3\% failure@500 on $\texttt{exp}$ robustness (\autoref{apx:fig:beam500}). On more complex problems, verifying more candidates reduces the failure rate, yet our experiments do not indicate that failures approach zero for practical search budgets. For instance, in our compositionality $\texttt{exp}$ experiments (Table~\ref{tbl:compositional}), verifying 50 instead of 1 candidate reduces failures, but not below 90\%. Larger search budgets quickly become impractical to verify: for compositionality $\texttt{exp}$, the verification time increases substantially, from around 5 minutes for 1 candidate to over 2 hours for 50, with worst-case verification time of 41.6 hours (Table~\ref{tbl:runtime}). Looking ahead, developing methods that decrease search and verification cost can help to further cover up a subset of model errors, yet improving the underlying model remains a core issue. \input{tables/verification_time} \section{Problem Setup} Symbolic integration is the problem of finding the integral $\ensuremath{\mathbf{y}}$ of an input equation $\ensuremath{\mathbf{x}}$. For instance, $x^2/2$ is the integral of $x$, up to an additive constant. \paragraph{Neural sequence integrator.} \citet{lample2019deep} frame symbolic integration as a sequence-to-sequence problem. In this view, input and output equations $\ensuremath{\mathbf{x}}$ and $\ensuremath{\mathbf{y}}$ are prefix-notation sequences. 
The \textit{neural sequence integrator} uses a 6-layer transformer \citep{vaswani2017attention} to model the distribution $p_\theta(\ensuremath{\mathbf{y}}|\ensuremath{\mathbf{x}})=\prod_{t=1}^{T_{\ensuremath{\mathbf{y}}}}p_\theta(y_t|y_{<t},\ensuremath{\mathbf{x}})$ by training the model to maximize the log-likelihood of a set of training problems, $\arg\max_\theta \sum_{(\ensuremath{\mathbf{x}},\ensuremath{\mathbf{y}})\in \mathcal{D}} \log p_{\theta}(\ensuremath{\mathbf{y}}|\ensuremath{\mathbf{x}})$. Given a trained model and input $\ensuremath{\mathbf{x}}$, a set of predicted solutions ranked by a model score is obtained by beam search, denoted $\{\hat\ensuremath{\mathbf{y}}_1,\ldots,\hat\ensuremath{\mathbf{y}}_k\}=f_\theta(\ensuremath{\mathbf{x}};k,b)$, where $b$ is beam size and $k$ is the number of candidates saved for evaluation. For brevity we omit $b$ in the discussion unless necessary. \paragraph{Evaluation.} The standard practice is to evaluate a candidate $\hat\ensuremath{\mathbf{y}}$ by checking whether the derivative of $\hat\ensuremath{\mathbf{y}}$ is equivalent to $\ensuremath{\mathbf{x}}$ using a symbolic solver (e.g. Sympy). In the \textit{maximum-a-posteriori} (MAP) setting, the model's output is considered correct if its \textit{top-ranked} candidate $\hat\ensuremath{\mathbf{y}}_1$ is correct. This criterion is relaxed in the \textit{search-and-verify} setting, where the model's output is considered correct if \textit{any} of its $k$ candidates $\{\hat\ensuremath{\mathbf{y}}_1,\ldots,\hat\ensuremath{\mathbf{y}}_k\}$ is correct. In this view, the neural network narrows the search space to a small set of candidates that are checked, trading off correctness for search and verification cost. 
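Beam search, which produces the ranked candidate set $f_\theta(\mathbf{x};k,b)$, can be sketched in a few lines (a toy version over an arbitrary next-token distribution; the real decoder scores tokens with the transformer):

```python
import heapq
import math

def beam_search(next_log_probs, bos, eos, beam_size, max_len):
    """Toy beam search. next_log_probs(prefix) -> {token: log-prob} plays the
    role of p_theta(y_t | y_<t, x). Returns finished hypotheses sorted by
    total log-probability, i.e. the sum of per-token log-probabilities."""
    beams = [(0.0, (bos,))]
    finished = []
    for _ in range(max_len):
        candidates = []
        for score, prefix in beams:
            for token, lp in next_log_probs(prefix).items():
                hyp = (score + lp, prefix + (token,))
                (finished if token == eos else candidates).append(hyp)
        beams = heapq.nlargest(beam_size, candidates)
        if not beams:
            break
    return sorted(finished, reverse=True)
```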
We denote checking $k$ candidate solutions as, \begin{align} \label{eqn:metric} m(\ensuremath{\mathbf{x}},f_\theta(\ensuremath{\mathbf{x}};k)) &= \begin{cases} 0 & \ensuremath{\mathbf{x}} \equiv \frac{d}{dx}\hat\ensuremath{\mathbf{y}}_i \text{ for any } i\in 1 \text{ to } k,\\ 1 & \text{otherwise}. \end{cases} \end{align} In other words, $m(\cdot,\cdot)$ is 1 when the model \textit{fails} to predict the correct integral, and 0 when the model \textit{succeeds}. We measure the proportion of failures on problems $X=\{\ensuremath{\mathbf{x}}_1,\ldots,\ensuremath{\mathbf{x}}_N\}$ using $k$ candidate solutions per problem as: \begin{align} \texttt{Fail@k}(f_\theta,X)=\frac{1}{N}\sum_{\ensuremath{\mathbf{x}}\in X} m(\ensuremath{\mathbf{x}},f_\theta(\ensuremath{\mathbf{x}};k)). \end{align} \texttt{Fail@k} is 0 when the model correctly integrates all of the problems in $X$, and increases towards 1 as it fails to integrate more problems. Evaluating a model's performance in the MAP setting corresponds to evaluating $\texttt{Fail@1}$, while the search-and-verify setting with a budget of $k>1$ candidates uses $\texttt{Fail@k}$. We omit $k$ in $f_\theta(\ensuremath{\mathbf{x}};k)$ unless necessary. \subsection{Experiment Structure} We structure our investigation into three parts (Figure~\ref{fig:fig1}). We begin close to the model's training distribution, evaluating \textit{robustness} to small perturbations of in-distribution problems and simple functions. We then ask whether learning to integrate a collection of functions implies that the model can integrate a \textit{composition} of those functions. Finally we depart from the training distribution by studying \textit{extrapolation} to larger problems and values, then by finding adversarial \textit{exploits} that expose gaps in the training distribution. 
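The failure metric and \texttt{Fail@k} defined above can be sketched with sympy as the checker (illustrative; our actual harness uses the authors' evaluation utilities):

```python
import sympy as sp

x = sp.symbols('x')

def m(problem, candidates):
    """The metric m: 0 if any candidate's derivative matches the problem, else 1."""
    for y_hat in candidates:
        if sp.simplify(sp.diff(y_hat, x) - problem) == 0:
            return 0
    return 1

def fail_at_k(problems_with_candidates):
    """Fail@k: fraction of problems on which all k candidates are wrong."""
    failures = [m(p, cands) for p, cands in problems_with_candidates]
    return sum(failures) / len(failures)
```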
\paragraph{Experimental setup.} We use the implementation and pre-trained model from \citet{lample2019deep} for all of our experiments, specifically the \texttt{FWD+BWD+IBP} model, which obtained top-10 accuracies of 95.6\%, 99.5\%, and 99.6\% on their publicly available test sets.\footnote{\url{https://github.com/facebookresearch/SymbolicMathematics/}, commit \texttt{4596d07}.} Our evaluation is based on their code: we use their utilities for inputs and outputs, and by default use beam search with beam size 10. Following the authors, we use Sympy to check whether the derivative of a prediction is equal to the original problem. We generously count the prediction as correct if a timeout occurs. See the Appendix for additional details. \subsection{Automatic Problem Discovery with SAGGA} Automatically finding problems that expose deficiencies requires a non-differentiable cost (Equation~\ref{eqn:metric}), satisfying constraints for valid equations, and finding diverse problem sets to characterize each aspect of generalization. To address these challenges, we develop SAGGA (\textbf{S}ymbolic \textbf{A}rchive \textbf{G}eneration with \textbf{G}enetic \textbf{A}lgorithms), a gradient-free genetic algorithm which iteratively finds diverse failures. At each iteration, SAGGA mutates a seed set of problems by modifying each problem's equation tree, ensuring that the resulting candidates are valid equations. The candidates are scored by a fitness function -- i.e. according to whether the neural sequence integrator fails to integrate the problem and other desired constraints -- and the highest-fitness candidates are saved in a problem archive. The next seed set is then formed to balance diversity and fitness, by clustering candidates and selecting the highest-fitness members of each cluster. SAGGA continues until the archive contains a target number of problems. Algorithm~\ref{alg:sagga} summarizes SAGGA.
SAGGA offers control over the types of problems that it discovers through its seed problems, fitness function, and mutation strategy. We detail our choices for each kind of generalization in their respective sections, and show default settings and further implementation details in the Appendix. \input{appendix/algorithm_sagga}
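In code, the loop reduces to the following skeleton (heavily simplified: the real SAGGA mutates equation trees, checks equation validity, and clusters with SciBERT embeddings; all names here are ours):

```python
def sagga_loop(seeds, mutate, fitness, cluster, target_size, children_per_seed=8):
    """Simplified SAGGA skeleton: mutate seeds, archive failing candidates,
    and reseed from the fittest member of each cluster."""
    archive, seed_set = [], list(seeds)
    while len(archive) < target_size:
        # 1. Mutate the seed set into candidate problems.
        candidates = [mutate(s) for s in seed_set for _ in range(children_per_seed)]
        # 2. Score candidates; positive fitness means the model fails on them.
        scored = [(fitness(c), c) for c in candidates]
        archive.extend(c for f, c in scored if f > 0)
        # 3. Next seeds balance diversity (clusters) and fitness (max per cluster).
        next_seeds = [max(cl)[1] for cl in cluster(scored) if cl]
        seed_set = next_seeds or seed_set
    return archive[:target_size]
```

A production version would additionally bound the number of iterations; the sketch assumes the fitness function eventually yields failing candidates.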
\section{Introduction} \input{sections/1_introduction} \section{Research Questions \& Results} \input{sections/2_rq_results} \section{Methodology} \input{sections/3_methodology} \label{sec:methodology} \section{Dataset Collection \& Analysis} \label{sec:offline_eval} \input{sections/4_offline_evaluation} \section{Runtime analysis of Mobile DNNs} \label{sec:benchmarking} \input{sections/5_online_evaluation} \section{Available Optimisations} \label{sec:optimisations} \input{sections/6_optimisations} \section{Related Work} \label{sec:related_work} \input{sections/7_related} \section{Discussion \& Future work} \label{sec:discussion} \input{sections/8_discussion} \input{sections/9_conclusion} \bibliographystyle{ACM-Reference-Format} \subsection{DNNs retrieval} \label{sec:crawling} The first step in our methodology is to find, extract and validate the DNNs from the Google Play Store's most \mbox{popular apps.} \noindent \textbf{App crawling.} First, \texttt{gaugeNN}{} crawls the Google Play Store by mimicking the web API calls made from the Google Play store app of a typical mobile device. In these requests, both the user-agent and locale headers are defined, which determine the variant of the store and apps retrieved. To perform the crawling, we fetch the list of the top free apps per category, which returns a maximum of 500 apps. Additionally, \texttt{gaugeNN}{} stores the store metadata for each app, including popularity, category, reviews, etc. in an ElasticSearch instance for quick ETL\footnote{\cready{Extract, Transform, Load}} analytics and cross-snapshot investigations (Sec.~\ref{sec:offline_eval}). \noindent \textbf{Model extraction.} Given the downloaded apps, \texttt{gaugeNN}{} proceeds to extract the DNN models from each application's package. Traditionally, Android applications are packaged in a zip file, i.e.~\texttt{apk}, which comes with the Java/Kotlin ``bytecode'' along with resources used by the app (e.g. textures, images, fonts).
\texttt{Apk}s have a size limit of 100MB and files -- such as DNN weights -- can have a larger storage footprint. As a result, Google Play allows additional content to be shared either with expansion files \cite{apk_expansion_files} (\texttt{OBB}s) or through Android App Bundles via Play Asset Delivery \cite{android_app_bundles}. The former supplement the main \texttt{apk} file and are hosted and served by Google Play, whereas the latter offers the possibility of downloading assets on demand, as needed for a given device. \texttt{gaugeNN}{} supports file extraction from i)~the base \texttt{apk}, ii)~expansion files (\texttt{OBB}s) and iii)~Android App Bundles, but does not track asset delivery outside of Google Play. Extracted files are matched against a compiled list of 69 known DNN framework formats (listed in the Appendix) to identify potential DNN models. \begin{figure}[t] \centering \includegraphics[width=0.42\textwidth,clip={0 0 0 10mm},clip]{images/power_2.pdf} \vspace{-0.4cm} \caption{\texttt{gaugeNN}{} benchmark platform.} \vspace{-0.4cm} \label{fig:energy} \end{figure} \noindent \textbf{Model validation.} Many models use generic file formats (e.g., protobuf). Therefore, the number of candidate model files and extensions is quite large and benchmarking all prospective ones quickly becomes computationally prohibitive at scale. Hence, inspired by \cready{the open-source} Netron tool~\cite{netron_github}, \texttt{gaugeNN}{} employs a lightweight -- framework- and format-specific -- validation process to remove files that are not DNN models. This validation consists of checking the binary signature of the file for the presence of specific identifiers that a \mbox{framework uses.} For example, for \texttt{TFLite}, we know that the FlatBuffer files representing models include specific headers at certain positions of the binary file, thus we check for the existence of e.g.~the string ``\texttt{TFL3}'' there.
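The \texttt{TFLite} case reduces to a magic-byte check (a sketch; FlatBuffer files carry a 4-byte file identifier at byte offset 4, which for \texttt{TFLite} is ``TFL3''):

```python
def looks_like_tflite(path):
    """Cheap signature check: read the first 8 bytes and test whether the
    FlatBuffer file identifier at offset 4 is b'TFL3'."""
    with open(path, "rb") as f:
        header = f.read(8)
    return len(header) == 8 and header[4:8] == b"TFL3"
```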
On the downside, encrypted and obfuscated models do not match such validation rules and are not extracted in our analysis. Moreover, models downloaded on demand by the application outside of the official Google Play distribution mechanisms are omitted from our benchmarks. However, we do track applications using such models indirectly by means of library inclusion in the application code and native libraries, even without explicitly analysing the models. \cready{The native code detection follows the methodology of Xu et al.}~\cite{xu2019first}. \subsection{Offline DNN analysis} \label{sec:methodology_offline} After collecting the top apps from each category, we analyse the usage of Deep Neural Networks in the wild. Apps can use DNN models in different ways: i)~they can execute the models on-device or ii)~offload the computation to external resources (e.g.~cloud providers). \noindent \textbf{In-app DNN models.} After identifying the model files within an application, \texttt{gaugeNN}{} extracts their DNN architecture either by directly parsing the file or by using the associated framework's interpreter. A DNN model is typically represented as a DAG\footnote{\cready{Directed Acyclic Graph}}, where layers are represented by vertices and data flows by edges. By going through each model's graph, \texttt{gaugeNN}{} registers the type of each layer, its parameters (weights) and operations in a trace-based manner and uses this information to estimate the total operations\footnote{\cready{Model FLOPs are estimated as a function of the cumulative Multiply-Accumulate (MAC) operations performed by each of the model's layers.}} (\#FLOPs) and model size (\#parameters). Furthermore, we can later individually run these models and measure their inference latency, energy and memory footprint. \noindent \textbf{DNN Cloud APIs.} Alternatively, applications might integrate ML functionality through cloud-backed APIs, by means of offloading inference to a remote endpoint.
To detect the usage of cloud-based DNN models, \texttt{gaugeNN}{} inspects the app code to search for common DNN framework API calls. Android apps are typically developed in Kotlin or Java and then compiled into \texttt{dex} format\cite{dalvik} and packaged within the app binary. It is possible to extract this \texttt{dex} binary from the app package and decompile it into a human-readable (\texttt{smali}~\cite{smali}) format using the \texttt{apktool}~\cite{apktool} to inspect the original code API calls. \texttt{gaugeNN}{} automates the process of decompiling these binaries and performs string matching on the smali files to detect known cloud DNN framework calls. In particular, \texttt{gaugeNN}{} recognises calls to libraries belonging to Google FireBase~\cite{googlefirebase}, Google Cloud~\cite{googlecloud} and Amazon AWS ML services~\cite{awssdk}. \begin{figure}[t] \centering \includegraphics[width=0.68\columnwidth]{images/gauge-workflow.pdf} \vspace{-0.4cm} \caption{\texttt{gaugeNN}{} benchmark workflow.} \vspace{-0.45cm} \label{fig:benchwork} \end{figure} \subsection{Model benchmarking} \label{sec:benchmarking} Next, we describe how \texttt{gaugeNN}{} assesses the on-device run time and energy consumption of DNNs. \noindent \textbf{Devices.} To assess the performance of the deployed DNN models at runtime -- i.e. latency, energy, memory and CPU utilisation -- we deploy these models on the devices of Table~\ref{tab:specs}. The devices of the first group represent three distinct tiers of smartphones (\textit{low} to \textit{high-end}) and showcase the performance across heterogeneous clients, while the development boards of the second group represent high-tier SoCs from different generations, whose open design allows us to measure energy consumption through cable probes connected to a Monsoon power monitor (Fig.~\ref{fig:energy}). \noindent \textbf{Benchmark workflow.} All benchmarks are written in native code and compiled for \texttt{aarch64} with Android NDK. 
\texttt{gaugeNN}{} adopts a master-slave architecture depicted in Fig.~\ref{fig:energy}. The server, where the models initially reside, is responsible for orchestrating the deployment and benchmarking of the models across client devices (phones), connected over USB. To control the power passthrough of mobile devices, we use a USB controller board \cite{ykush} that can programmatically disable data and power channels during measurements. This component was necessary, as connecting the device over USB charges it, interfering with the \mbox{energy measurements}. The benchmarking workflow is depicted in Fig.~\ref{fig:benchwork}. Initially, the master (left side) pushes all the necessary dependencies to the device (right side) through \texttt{adb} and asserts the initial device state (WiFi and sensors are off, maximum screen timeout, etc). The benchmark consists of an unattended, headless script that runs on the device upon disconnection of the USB power, controlled through the USB board. This script is launched as a daemon process and performs the following tasks: 1)~It waits until the USB power is off; 2)~it runs a configurable amount of warmup inferences to remove cold cache outliers; 3)~it runs the actual benchmark inferences with a configurable inter-experiment sleep period; 4)~it turns on WiFi upon completion and communicates a TCP message through \texttt{netcat} to the server that the experiment is over. Subsequently, the server re-enables the USB power, connects over \texttt{adb} and gathers the job results before cleaning up and launching the next job. 
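The master's side of this loop can be sketched as follows (illustrative only: the device paths, script name, and result file are our assumptions, and the USB power switching and TCP handshake are elided; the command runner is injectable so the sketch can run without a device):

```python
import subprocess

def adb(serial, *args, run=subprocess.run):
    # Issue an adb command to a specific device, identified by its serial.
    return run(["adb", "-s", serial, *args], capture_output=True, text=True)

def benchmark_device(serial, model_path, run=subprocess.run):
    """Push artifacts, assert device state, launch the headless benchmark
    daemon, then (after the device signals completion) collect the results."""
    adb(serial, "push", model_path, "/data/local/tmp/model.bin", run=run)
    adb(serial, "shell", "svc", "wifi", "disable", run=run)
    adb(serial, "shell", "nohup", "sh", "/data/local/tmp/bench.sh", run=run)
    # ... master cuts USB power; device runs warmup + timed inferences ...
    return adb(serial, "pull", "/data/local/tmp/results.json", ".", run=run)
```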
\begin{table}[t] \small
\begin{tabular}{llll}
\hline
Model & SoC & RAM & Battery capacity \\ \hline
\multicolumn{4}{l}{\footnotesize{\textbf{Samsung devices}}} \\
A20 & Exynos 7884 & 4GB & 4000mAh \\
A70 & Snapdragon 675 & 6GB & 4500mAh \\
S20 & Snapdragon 888 & 8GB & 4000mAh \\ \hline
\multicolumn{4}{l}{\footnotesize{\textbf{Qualcomm development boards}}} \\
Q845 HDK & Snapdragon 845 & 8GB & 2850mAh\\
Q855 HDK & Snapdragon 855 & 8GB & N/A \\
Q888 HDK & Snapdragon 888 & 8GB & N/A \\ \hline
\end{tabular}
\vspace{0.1cm} \caption{Device specifications.} \vspace{-.95cm} \label{tab:specs} \end{table} \noindent \textbf{Energy measurements.} Energy on open-deck devices is measured via a Monsoon power monitor (\texttt{AAA10F}). To prevent Android's battery saving mechanisms (e.g., Doze~\cite{doze}) from killing background jobs when the screen goes off or scaling down the CPU frequency, we keep the phone screen on during the benchmark, by interfacing with Android's Power Manager service. We also ensure that the screen is always in a similar state across devices, by developing an app that shows a black background. While the screen does incur extra energy consumption, this is measured and accounted for. In the following sections, we present the findings of our experiments run with \texttt{gaugeNN}{}. First, we present an offline analysis of the apps and models found from crawling the Google Play Store (Sec.~\ref{sec:offline_eval}) and then we move to runtime analysis of these models on devices (Sec.~\ref{sec:benchmarking}) and specific optimisations (Sec.~\ref{sec:optimisations}). \subsection{Datasets} As shown in Table~\ref{tab:dataset}, we collected two snapshots of the top free Google Play apps, on the $14^{th}$ of February 2020 and on the $4^{th}$ of April 2021. At these points in time, Android devices represented $73.3\%$ and $72.19\%$ of the mobile OS market share \cite{statista_market_os, gs_market_os} respectively.
Data was collected from \cready{a UK-based account associated with a Samsung S10 (SM-G977B)}, downloading the most popular apps across all categories of the Google Play Store (up to 500 apps per category). This accounts for the top 0.6\% of total applications available in the store\footnote{Google Play Store is estimated to have 2.9M apps at the time of the \mbox{latest snapshot \cite{appbrain_stats}}}. In general, app downloads tend to follow a power-law distribution \cite{viennot2014measurement}. Therefore, the most popular apps are installed on most users' phones while the rest follow a long tail. While we could not scale a study of paid apps for monetary reasons, these account for a very small percentage of downloaded apps~\cite{viennot2014measurement}. For the rest of the paper, we report on the latest Play Store snapshot, unless explicitly stated otherwise. \vspace{-0.2cm} \subsection{Model distribution to devices} As described in Sec.~\ref{sec:crawling}, models in Android applications can be distributed post-installation (e.g. through \texttt{OBB}s or Asset Delivery). This allows developers to bypass the 100MB \texttt{apk} limit and to provide customised models for devices with different capabilities (e.g. devices with a dedicated NPU). To identify any models that are distributed post-installation, we downloaded all companion files and Google Play assets. \cready{We found} no models being distributed outside of the main \texttt{apk}. \cready{Furthermore, we downloaded an extra snapshot using a device profile three Android generations older}\footnote{\cready{Samsung S7 edge -- SM-G935F, released in February'16, three years before the S10 5G.}}, and found no evidence of device-specific model customisation.
\noindent \textbf{Observations:} \textit{Our results indicate that the functionality offered by Play Services to download device-specific models may be underutilised in the realm of mobile ML or that developers choose not to specialise their models per device SoC or model. \cready{While specialising the model distribution per device target can be beneficial for performance and energy, it requires offline vendor-specific customisation of the model. Evidently, app developers seem to prefer generality of their deployment solutions, in line with} \cite{facebook2019}, \cready{and defer optimisation to middleware in the stack, such as NNAPI drivers or specific hardware delegates} \cite{ai_benchmark_2019}.} \begin{table}[] \small
\begin{tabular}{lrr}
\hline
 & {Snapshot '20} & Snapshot '21 \\ \hline
\textbf{Date} & $14^{th}$ Feb. 2020 & $4^{th}$ Apr. 2021 \\
\textbf{Total Apps} & $16,964$ & $16,653$ \\
\textbf{Apps w/ frameworks} & $236(1.4\%)$ & $377 (2.3\%)$ \\
\textbf{Apps w/ models} & $165 (1.0\%)$ & $342 (2.1\%)$ \\
\textbf{Total models} & $821$ & $1,666$ \\
\textbf{Unique models} & $129 (15.7\%)$ & $318 (19.1\%)$ \\ \hline
\end{tabular}
\vspace{0.1cm} \caption{Dataset snapshots details.} \vspace{-0.75cm} \label{tab:dataset} \end{table} \vspace{-0.2cm} \subsection{ML frameworks} Next, we look into the models found per ML framework. Specifically, Fig.~\ref{fig:categories} depicts the number of models successfully extracted, validated and benchmarked, per category and ML framework. These models represent 90.72\% of the total apps including ML libraries in their codebase (Table~\ref{tab:dataset}), with the rest accounting for obfuscated, encrypted or lazily downloaded models. In total these account for 1,666 models -- 1436 (86.19\%) \texttt{TFLite}, 176 (10.56\%) \texttt{caffe}, 46 (2.76\%) \texttt{ncnn}, 5 (0.3\%) \texttt{TensorFlow} and 3 (0.18\%) \texttt{SNPE}.
\texttt{TFLite} is expectedly first in popularity, as the recommended solution from the OS provider for mobile ML inference. However, it is surprising to see \texttt{caffe} so widely used, since it has long been deprecated, replaced by \texttt{caffe2} in 2017 and subsequently by PyTorch Mobile. \noindent \textbf{Observations:} \emph{These results illustrate a long latency between the \cready{state-of-the-art} frontier of ML frameworks and \cready{their adoption for in-the-wild deployment.}} \vspace{-0.2cm} \subsection{Model categorisation} \label{sec:models} Here, we perform a quantitative analysis of DNN models and their respective apps and correlate them with metadata from the Google Play Store. Our aim is to categorise the most popular DNN-powered apps and characterise their usage. \begin{figure}[t] \centering \vspace{-0.2cm} \includegraphics[width=0.44\textwidth]{images/category_apps_snap21.pdf} \vspace{-0.4cm} \caption{Number of models \texttt{gaugeNN}{} successfully extracted and executed per framework and \cready{Google Play} category. Categories with less than 20 models are excluded.} \vspace{-0.4cm} \label{fig:categories} \end{figure} Fig.~\ref{fig:categories} \cready{shows the number of ML models per framework and Google Play category.} We observe that the top DNN-powered apps belong to ``communication'' and ``finance'' tools with several DNNs for face and object detection (e.g. for detecting a card or ID to make transactions in the latter case). These are followed by more traditionally DNN-backed categories, such as ``photography'' and ``beauty'', which typically contain DNN-based filters to enhance photos. Potentially less expected categories include ``food and drink'', ``dating'' and ``parenting''. By manually examining these models, we found anecdotal examples of apps within these categories using DNNs to detect or recognise objects (e.g. a bottle of wine or a face), for recommendation systems (e.g.
partner matching, advertising and food recipe recommendation) and even for baby monitoring. To dig deeper into the purpose of each AI model, we manually looked into the naming, input/output dimensions and layer types of the encountered DNN models in order to characterise their usage. This labour-intensive job was performed by three ML researchers with a majority vote on the results. We were able to identify the usage of $1,531$ models, accounting for $91.9\%$ of all models, with around $67\%$ having names which hint at either the model, the task at hand, or both (e.g. ``hair\_segmentation\_mobilenet.tflite''). Our characterisation shows that the most popular task for deploying Deep Learning is computer vision ($>89\%$ of all models), followed by NLP (17 models) and audio (15 models). Last, we found traces of DNN models (4 models) utilising sensor data, such as accelerometer, gyroscope, etc. Two anecdotal use-cases for sensor ML are horse movement tracking and car crash detection in insurance apps. Task-specific results are shown in Table~\ref{tab:dnn_tasks}, where it can be seen that most vision models were targeted at object, face and contour detection, most audio tasks at ambient sound recognition, most NLP tasks at text-completion and sensor tasks at movement tracking. \noindent \textbf{Observations:} \textit{Vision models seem to be the most prevalent, with a focus on object and face detection and text recognition, and are used mostly across communication, photography and beauty apps.} \vspace{-0.2cm} \subsection{Model uniqueness characterisation} Diving deeper into the models distributed amongst the most popular applications, we found that not all models are bespoke or unique. Overall, we witness DNN models spread across different application categories, with a significant portion of these being off-the-shelf models without customisation.
In fact, after checking for unique checksums on these models \cready{and respective weights}\footnote{\cready{Most apps distribute the model weights in their \texttt{apk}, either in a single file, along with the DNN graph, or in separate files (e.g. \texttt{caffe}). In either case, we perform an \texttt{md5} checksum on both the model and weights.}}, we find that only 318 models (19.1\% of the models as shown in Table~\ref{tab:dnn_tasks}) are unique. For the most prevalent vision task, i.e., object detection, FSSD~\cite{li2017fssd} seems to be the most popular model. We found such occurrences even within popular Google apps (e.g. ``Gallery Go'' and ``Arts \& Culture''). For face detection, Blazeface~\cite{bazarevsky2019blazeface} is another very popular model. Spanning across tasks, MobileNet~\cite{howard2017mobilenets} seems to be the most popular architecture, with variants (e.g. FSSD) being used for other vision tasks, including semantic segmentation, pose estimation or classification. Last, we encounter multiple occurrences of models tackling a common task, e.g. recognising information from credit cards \cite{paycards_ios}, such as names and dates.
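The \texttt{md5}-based uniqueness check above can be sketched in a few lines (a minimal illustration; the helper names and the (graph, weights) pairing convention are ours, not part of \texttt{gaugeNN}{}):

```python
import hashlib
from collections import defaultdict

def file_md5(path):
    """md5 digest of a file, streamed in chunks to bound memory use."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def unique_models(model_files):
    """Group model files by the joint digest of graph and weights.

    Each entry is a (graph_path, weights_path) pair; for single-file
    formats such as TFLite the two paths can be identical."""
    groups = defaultdict(list)
    for graph, weights in model_files:
        groups[(file_md5(graph), file_md5(weights))].append(graph)
    return groups
```

The number of unique models is then simply the number of groups, and any group with more than one member corresponds to an off-the-shelf model shared across apps.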
%
\begin{table}[t] \begin{subtable}[t]{.49\columnwidth} \centering \footnotesize \begin{tabular}{lr} \hline Task & Models \\ \hline \multicolumn{2}{c}{\scriptsize{\textbf{Vision}} (1495 models)} \\ object detection & 788 ($52.7\%$) \\ face detection & 197 ($13.2\%$) \\ contour detection & 192 ($12.8\%$) \\ text recognition & 185 ($12.4\%$) \\ augmented reality & 51 ($3.4\%$) \\ semantic segmentation & 14 ($0.9\%$) \\ object recognition & 14 ($0.9\%$) \\ pose estimation & 8 ($0.5\%$) \\ photo beauty & 8 ($0.5\%$) \\ image classification & 7 ($0.4\%$) \\ nudity detection & 5 ($0.3\%$) \\ other & 26 ($1.7\%$) \\ \hline \end{tabular} \end{subtable} \hfill \begin{subtable}[t]{.49\columnwidth} \footnotesize \begin{tabular}{lr} \hline Task & Models \\ \hline \multicolumn{2}{c}{\scriptsize{\textbf{NLP}} (17 models)} \\ auto-complete & 9 ($52.9\%$) \\ sentiment prediction & 4 ($23.5\%$) \\ content filter & 2 ($11.8\%$) \\ text classification & 1 ($5.9\%$) \\ translation & 1 ($5.9\%$) \\ \hline \multicolumn{2}{c}{\scriptsize{\textbf{Audio}} (15 models)} \\ sound recognition & 12 ($80.0\%$) \\ speech recognition & 2 ($13.3\%$) \\ keyword detection & 1 ($6.7\%$) \\ \hline \multicolumn{2}{c}{\scriptsize{\textbf{Sensor}} (4 models)} \\ movement tracking & 3 ($75.0\%$) \\ crash detection & 1 ($25.0\%$) \\ \hline \end{tabular} \end{subtable} \vspace{0.1cm} \caption{DNN task classification.} \vspace{-0.9cm} \label{tab:dnn_tasks} \end{table} \noindent \textbf{Model fine-tuning.} Taking this analysis one step further, we perform a checksum-based analysis at finer granularity (layer level) to see to what degree developers train their own models from scratch or fine-tune the last layers through \textit{transfer learning} \cite{pan2009survey}. The intuition is that the first layers of the network are typically extracting low-level features (e.g. edges, shapes, etc.
for vision tasks) that are shared between similar tasks and only deeper in the DNN do the task-specific and semantically relevant features get extracted. Results from our analysis show that, excluding duplicate models, 9.02\% of the remaining models share at least 20\% of the weights with at least one other model. In fact, $4.2\%$ of the models only differ in up to three layers, indicating that some developers only fine-tune small portions of the network, resulting in a significantly smaller training footprint and exploiting transfer learning from other (typically off-the-shelf) networks. Moreover, we checked for traces of online fine-tuning done on device (e.g. through \texttt{TFLiteTransferConverter} \cite{tflite_personalisation}) and found none, indicating that on-device fine-tuning is not yet widely exploited in the wild due to the significant computation requirements and the limited availability of labelled high-quality on-device datasets. \noindent \textbf{Observations.} \textit{Based on this type of evidence, we deduce that it is common for developers to leverage a pre-trained model that is widely available and pay the significantly smaller cost of training offline only a subset of the last DNN layers. While online on-device training is a prominent future avenue, be it through fine-tuning or federated learning, current support in mobile frameworks is limited and so are such deployments.} \vspace{-0.2cm} \subsection{Temporal analysis across snapshots} \label{sec:temporal} \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{images/temporal_models_io_per_category.pdf} \vspace{-0.4cm} \caption{Individual models removed/added \cready{between two snapshots taken one year apart.}} \vspace{-0.4cm} \label{fig:temporal_models_added_removed} \end{figure} As aforementioned, we took two distinct snapshots of the most popular apps in the Google Play Store 12 months apart from each other. 
In this part of our analysis, we compare and contrast these two snapshots in terms of app popularity and in-the-wild DNN deployment and draw conclusions about the trajectory of ML penetration in smartphones nowadays. What is unique about our dataset is that we happened to measure DNN deployment across the COVID-19 pandemic, which had a crucial impact on human activity during the course of 2020/2021. For this reason, we also compare our temporal analysis with similar analyses done in the past \cite{xu2019first} to i)~identify potential biases of our dataset during these exceptional circumstances and ii)~see how app popularity and, by extension, DNN adoption, has been affected by these circumstances. Results from our temporal analysis indicate a surging number of DNN models being deployed on the Android platform, essentially doubling in the course of 12 months. Specifically, our traced models went from $821$ to $1.6K$ for our latest snapshot (Table~\ref{tab:dataset}), with most additions belonging to vision tasks. \texttt{TFLite} remains the dominant mobile inference framework, going from $81.6\%$ to $86.1\%$ of the total models found ($2.15\times$). The increase in models was less pronounced for \texttt{ncnn} ($1.18\times$) and \texttt{caffe} ($1.69\times$). The latter is surprising given that it has been deprecated and newer frameworks have taken its place (\texttt{caffe2} and PyTorch Mobile). Finally, we observe a drop in the \texttt{TF} ($0.56\times$) adoption rate, which is expected given the increasing popularity of its mobile counterpart. Next, we analyse the DNN models across snapshots per category of application to which they belong. Fig.~\ref{fig:temporal_models_added_removed} depicts the number of individual models that were removed/added across our snapshots, sorted by the difference between the two.
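The added/removed counts per category can be derived by diffing the sets of model digests between the two snapshots; a minimal sketch (function and variable names are ours):

```python
def snapshot_delta(old_digests, new_digests):
    """Models added and removed between two snapshots, where each model
    is identified by a content digest (e.g. its md5 checksum)."""
    old, new = set(old_digests), set(new_digests)
    return new - old, old - new  # (added, removed)
```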
Interestingly, most additions of ML models happened for communication tools, taking the lead from ``photography'' applications, which was the top ML-powered category of 2020. This can potentially indicate that communication apps became more important due to the pandemic, and developer focus was diverted to this category. A similar trend could be witnessed for ``finance'' applications, where we observed many models aimed at the automated identification of people and their ID cards. Whilst this traditionally constituted a manual process done in person in financial institutions (e.g. banks), the pandemic might have created a new need for ML models to fill. Last, apps related to ``health'' and ``medical'' purposes seem to have a surging deployment of DNN models. On the other side of the spectrum, ``lifestyle'', ``food \& drinks'' and ``Android Wear'' applications seem to be falling in popularity, something that could potentially be attributed to people staying at home more. Next, we integrate the results of previous analyses~\cite{xu2019first,sec_dnns_apps} to shape a more general trend for DNN adoption in the Android ecosystem. In \cite{xu2019first}, the authors report the total ML-backed apps going from $166$ in June 2018 to $211$ in September 2018. In \cite{sec_dnns_apps}, the authors traced $178$ ML-powered apps, somewhere between \cite{xu2019first} and June 2020\footnote{The snapshot date is not reported, thus we consider it between \cite{xu2019first}, with which it compares, and the work's venue submission date.}. Last, for our trace, we report ML-powered apps going from $236$ to $377$ from February 2020 to April 2021. From the previously reported figures, we witness a soaring trajectory of ML apps deployed in the wild, with the adoption rate of ML accelerating.
\noindent \textbf{Observations:} \textit{\cready{While there was a big reshuffling in the type of AI models deployed during the pandemic}, we observe a considerable \cready{general growth in the number of DNN models in AI-powered applications in the past 3 years (from 176 in 2018}~\cite{xu2019first} \cready{to 1,666 in April 2021}). These results demonstrate how the proliferation of mobile AI frameworks, the availability of pre-trained models and the constant improvement of mobile hardware have driven this growth and the need to keep up with this ever-increasing adoption.} \begin{figure}[t] \centering \includegraphics[width=0.8\columnwidth]{images/layers_per_high-task.pdf} \vspace{-0.44cm} \caption{Model layer composition per input modality for \texttt{TFLite}, \texttt{NCNN} and \texttt{caffe}.} \vspace{-0.4cm} \label{fig:layers} \end{figure} \vspace{-0.2cm} \subsection{Mobile DNN layers and operations} \label{sec:dnn_ops} After having coarsely characterised the models based on their input modality, target task and app category, we take a finer-grained look into the models and analyse their structure in terms of the layers and operations they contain. \noindent \textbf{DNN layers and operation types.} First, we go through the graph representing each DNN and trace the layer types they contain, grouping results per input modality. Results are shown in Fig.~\ref{fig:layers} for \texttt{TFLite}, \texttt{NCNN} and \texttt{caffe}. We see that \textit{convolution} layers are amongst the most popular layer types across modalities (34\%, 10\%, 20\% for image, text and audio, respectively). Originally applied in visual tasks, their usage nowadays spreads across recommender systems, natural language processing and time-series analysis. Variants such as \textit{depthwise-separable convolutions} (\texttt{depth\_conv}) \cite{howard2017mobilenets} are computationally less heavy and are aimed at mobile deployment.
\textit{Dense} (or \textit{linear}) layers are fully-connected layers that are typically found in the output of classification tasks, or in the implementation of RNNs. The majority of these layers are found in audio (19\%) and text (9\%) models. \textit{Activations} essentially introduce non-linearity into DNNs, and can be fused with the previous layer in terms of implementation. Thus, the existence of such operations as distinct layers is framework-dependent. Last, ``helper'' layers such as \textit{math}, \textit{quant}, \textit{resize} and \textit{slice} operations perform math or matrix-representation operations and can be found across modalities. \begin{figure}[t] \centering \includegraphics[width=0.9\columnwidth]{images/flops_params_per_task-low.pdf} \vspace{-0.4cm} \caption{FLOPs and parameters per DNN task.} \label{fig:flops-new} \vspace{-0.5cm} \end{figure} \begin{figure}[b] \vspace{-0.5cm} \centering \begin{subfigure}{\columnwidth} \centering \includegraphics[width=\columnwidth]{images/scatter_latency_per_flops_cropped.pdf} \vspace{-.3cm} \end{subfigure} \begin{subfigure}{\columnwidth} \centering \includegraphics[width=\columnwidth,clip,trim={0 0 0 4.83cm}]{images/scatter_latency_per_flops.pdf} \end{subfigure} \vspace{-0.4cm} \caption{\cready{Observed relationship between latency and FLOPs across six different devices.}} \label{fig:latency-flops-scatter} \end{figure} \noindent \textbf{DNN \#operations and \#parameters.} Next, we estimate the number of operations (in FLOPs) and parameters that each model contains by going through the graph in a trace-based manner. \cready{Concretely, we generate a random input with the DNN-specified input dimensions and perform a DNN inference. During the forward propagation step, we measure analytically the number of operations being performed per layer (dependent on the kind of layer) and the number of trainable parameters associated with it.} Fig.~\ref{fig:flops-new} shows the result of this analysis per DNN task.
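The per-layer counting during the forward pass follows the standard analytical formulas; a simplified sketch for the two most common layer types, counting one multiply-accumulate as two FLOPs (the layer-descriptor format here is a hypothetical one of ours):

```python
def conv2d_stats(h_out, w_out, c_in, c_out, k_h, k_w):
    """FLOPs and trainable parameters of a standard 2D convolution."""
    macs = h_out * w_out * k_h * k_w * c_in * c_out  # multiply-accumulates
    params = k_h * k_w * c_in * c_out + c_out        # weights + biases
    return 2 * macs, params

def dense_stats(n_in, n_out):
    """FLOPs and trainable parameters of a fully-connected layer."""
    return 2 * n_in * n_out, n_in * n_out + n_out

def model_stats(layers):
    """Accumulate FLOPs/parameters over a list of traced layer descriptors,
    e.g. [("conv2d", (112, 112, 3, 32, 3, 3)), ("dense", (1024, 1000))]."""
    counters = {"conv2d": conv2d_stats, "dense": dense_stats}
    total_flops = total_params = 0
    for kind, args in layers:
        flops, params = counters[kind](*args)
        total_flops += flops
        total_params += params
    return total_flops, total_params
```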
We see that, among the traced models, on average the heaviest deployed vision models belong to classification, hair reconstruction, segmentation and beauty tasks. For NLP, the heaviest task is text auto-completion, whereas for audio the heaviest deployed task is sound recognition. At this point, we note that these numbers only refer to the traced deployed models and \cready{do} not represent a generic commentary on the overhead of models per task. In fact, in many cases the general trend is the opposite if we only take the task into consideration (e.g. classification vs. segmentation or speech vs. sound recognition). Also, we note that the number of models found for each task category varies significantly. \noindent \textbf{Observations:} \textit{We find that convolutions dominate the mobile DNN landscape due \cready{to their wide use in vision models}, as well as the fact that they can map well on mobile hardware for efficient execution, compared to e.g. recurrent layers \cite{10.1109/MICRO.2018.00022}. While depth-wise convolutions can significantly improve performance, their deployments are scarcer as they can impact the quality of the model. Furthermore, we find that there is huge variance in terms of FLOPs and parameters (four orders of magnitude) in the traced models. This might be attributed to the granularity of the task corresponding to a single inference.
For example, in image recognition, the input is typically an RGB image, while in next-word prediction the input can be a couple of words.} \subsection{On-device DNN latency} \label{sec:on-device-lat} \begin{figure} \centering \includegraphics[width=0.8\columnwidth]{images/latency_per_device.pdf} \vspace{-0.4cm} \caption{Latency per device ECDF.} \vspace{-0.4cm} \label{fig:latency-ecdf} \end{figure} \begin{figure*}[t] \centering \begin{subfigure}{0.32\textwidth} \includegraphics[width=\textwidth]{images/energy_hist.pdf} \vspace{-0.7cm} \caption{Inference energy} \label{fig:inference_energy} \end{subfigure} \hfill \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{images/power_hist.pdf} \vspace{-0.7cm} \caption{Inference power} \label{fig:inference_power} \end{subfigure} \hfill \begin{subfigure}{0.32\textwidth} \includegraphics[width=\linewidth]{images/power_eff_hist.pdf} \vspace{-0.7cm} \caption{Inference efficiency} \label{fig:inference_efficiency} \end{subfigure} \vspace{-0.3cm} \caption{Distributions of inference energy, power and efficiency of the collected models when run across 3 generations of Qualcomm SoCs. The lines represent kernel density estimations.} \vspace{-0.4cm} \label{fig:energy_power_hist} \end{figure*} Prior work \cite{almeida2019embench,ai_benchmark_2019} has shown that FLOP count is not necessarily a good proxy for estimating a model's on-device performance. Reasons for such discrepancies include the underutilisation of hardware due to e.g. memory-bound operations, thermal throttling under continuous inference, or scheduling on cores of different capabilities as a result of energy-saving scheduler policies on Heterogeneous Multi-Processors \cite{kim2017enhancing}. To further corroborate this fact, \cready{in Fig.~\ref{fig:latency-flops-scatter} we depict the FLOPs and actual measured inference latency across devices for different models.
Our analysis on real-world models on different devices reinforces this non-linear (line-fit) relationship as it not only varies for different model architectures, but also differs from one device to another.} \cready{To investigate this further}, in Fig.~\ref{fig:latency-ecdf} we show the ECDF of model runtime across all available devices. From the graph it is evident that the computing gap between a \textit{low-end} device (A20) and a \textit{mid-tier} device (A70) is considerably larger than the difference from \textit{mid-tier} to \textit{high-end} (S21). Specifically, the \textit{low-end} and \textit{mid-tier} devices (A20 and A70) are $3.4\times$ and $1.51\times$ slower than the S21. Across generations of high-end SoCs of the same manufacturer (Q845, Q855, Q888), we see incremental but noticeable performance gains (i.e., average latency of $76$, $58$ and $35$~ms), to the point that a next-gen mid-tier phone may perform better than the high-end SoC of a prior generation, despite claims about significant boosts in AI acceleration between generations. Last, we want to mention that for the two devices that integrate the same SoC (Q888 and S21), the open-deck design of the development board along with the vanilla variant of the OS leads to incrementally better results and faster inference overall. Heat dissipation of the open design, cross-manufacturer configurations and low-level configuration of the Android Scheduler can all be contributing factors. \noindent \textbf{Observations:} \textit{We observe a wide variability of inference latency across devices even for models that have similar FLOP counts, which reaffirms the need for on-device benchmarking. Devices of different tiers and generations offer variable dynamics, with the lower-tier falling significantly behind in performance.
Even devices integrating the same SoC can offer variable performance due to vendor-specific configurations, the installed apps and drivers or even due to different thermal characteristics. Therefore, given this heterogeneity, it is hard for developers to accurately predict the users' experience without testing their models on a large sample of devices.} \subsection{Energy consumption} In mobile settings, one cannot simply optimise for performance without taking energy consumption into consideration. While smartphone capabilities are growing every year, the same developments have not been witnessed in battery technology. Therefore, quantifying the cost of being smart in terms of energy is an important component in the mobile world. In this section, we report on the energy, power and efficiency of doing inference on device, across frameworks for the three Snapdragon boards representing different generations of devices. \subsubsection{Energy and power consumption per device} Fig.~\ref{fig:inference_energy} shows the distribution of models with respect to the energy required per inference across our three devices. Expectedly, we see from the kernel density function lines that all three devices follow a similar trajectory, indicating that a similar amount of energy is required for similar workloads regardless of the device. On the other hand, this is not the case in terms of power consumption (Fig.~\ref{fig:inference_power}), where we can see newer generations of devices consistently drawing more power to run models. This is a direct implication of the fact that newer generations of devices can execute models faster, as shown in Fig.~\ref{fig:latency-ecdf}, while the energy required remains similar. Following these observations, we calculate the inference efficiency of each model as the number of floating-point operations that can be executed per second per Watt\footnote{Effectively the same as calculating FLOPs per Joule.}.
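This efficiency metric follows directly from the measured quantities (variable names here are ours):

```python
def inference_efficiency(flops, latency_s, power_w):
    """Inference efficiency in FLOP/s per Watt.

    (flops / latency_s) / power_w == flops / (latency_s * power_w),
    and latency_s * power_w is the inference energy in Joules, so this
    is identical to FLOPs per Joule."""
    return (flops / latency_s) / power_w
```

For example, a 1-GFLOP model running in 50 ms at 2 W yields 10 GFLOP/sW.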
As can be seen in Fig.~\ref{fig:inference_efficiency}, trends in efficiency stay mostly the same across different devices, following energy consumption, but unlike energy we can see a minor improvement of the newer devices over Q845 in the middle of the distribution, suggesting that relatively more models can run more efficiently (median efficiency of 730, 765 and 873 MFLOP/sW, after removing outliers) on the newer hardware. \subsubsection{Use-case driven energy consumption} \label{sec:energy-scenarios} Up to here, we have seen performance and energy consumption for single inferences. However, the quanta of data associated with each inference may vary considerably between tasks or modalities as noted before in Sec.~\ref{sec:dnn_ops}. Thus, we dive deeper into three selected tasks representative of each modality, namely i)~\textit{sound recognition} for audio, ii)~\textit{auto-completion} for text and iii)~\textit{semantic segmentation} for vision.
\begin{table}[] \centering \small \begin{tabular}{@{}lcccc@{}} \toprule \multirow{2}{*}{\centering \textbf{Use-case}} & \multicolumn{4}{c}{\textbf{Battery discharge (mAh)}} \\ \cmidrule(l){2-5} & Avg. & Median & Min & Max \\ \midrule \multicolumn{5}{l}{\textbf{Q845}} \\ %
Sound R. & 0.6350$\pm$2.0226 & 0.0652 & 0.0351 & 2.5277 \\
Typing & 0.0752$\pm$0.1637 & 0.0292 & 0.0245 & 0.1993 \\
Segm. & 1221.7$\pm$2761.0 & 619.62 & 271.93 & 3835.2 \\ %
\hline \multicolumn{5}{l}{\textbf{Q855}} \\ %
Sound R. & 1.0311$\pm$3.3438 & 0.1821 & 0.0262 & 5.0327 \\
Typing & 0.1192$\pm$0.2835 & 0.0387 & 0.0279 & 0.3404 \\
Segm. & 1133.4$\pm$2468.1 & 489.10 & 262.85 & 3239.7 \\ %
\hline \multicolumn{5}{l}{\textbf{Q888}} \\ %
Sound R. & 0.7950$\pm$2.8060 & 0.1009 & 0.0316 & 4.4132 \\
Typing & 0.1001$\pm$0.2484 & 0.0315 & 0.0300 & 0.3403 \\
Segm.
& 1062.7$\pm$2416.6 & 455.71 & 272.44 & 3290.8 \\ \bottomrule \end{tabular} \vspace{0.1cm} \caption{\cready{Scenario-driven energy consumption for three devices and use-cases in audio, text and vision.}} \vspace{-0.7cm} \label{tab:energy-usecases} \end{table} We make certain realistic assumptions on the data sizes, input granularity and frequency of results and then assess all relevant models belonging to this category. Specifically, for \textit{sound recognition}, we assumed each model is run in order to recognise 1 hour of audio input. To derive how long a model would need to be run, we manually investigated the models and assumed the most likely amount of audio input per inference considering the model's input dimension and common practices in speech ML~\cite{chan2015listen,Pratap2020scaling,mehrotra2021nasbenchasr}. For \textit{text auto-completion}, we assumed each model is run once per new word typed by a user, and further assumed a workload of 275 words, derived from WhatsApp's statistics about the average daily number and length of messages~\cite{what-users, what-messages, what-words}. Last, for \textit{semantic segmentation}, we assumed each model is used to segment a human at 15 FPS during a 1-hour-long video call in order to apply background effects; we further assumed that the model processes one frame per inference, which is the usual approach~\cite{Long2015fcn,Zhao2018icn,chen2020fasterseg}. Results \cready{across the development boards} are depicted in Table~\ref{tab:energy-usecases} and indicate that different tasks and use cases result in very different impact on battery life. On the high end of energy consumption, we see that one hour of segmentation can result in a significant \cready{average reduction of 26.6\% to 30.54\%} of a common 4000mAh battery capacity (e.g.~A20 and S21).
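The scenario figures reduce to simple arithmetic over the per-inference energy measurements. A sketch under our scenario assumptions (the nominal battery voltage of 3.85 V is an assumption of this illustration, not a measured value):

```python
def scenario_discharge_mah(energy_per_inference_j, n_inferences, battery_v=3.85):
    """Battery discharge (mAh) for a scenario of repeated inferences.

    One mAh at battery_v volts stores battery_v * 3.6 Joules, so total
    Joules are converted to mAh by dividing by that factor.
    The nominal battery voltage is an assumption of this sketch."""
    total_j = energy_per_inference_j * n_inferences
    return total_j / (battery_v * 3.6)

# The segmentation scenario from the text: one frame per inference
# at 15 FPS for a 1-hour video call.
SEGMENTATION_INFERENCES = 15 * 3600
```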
\cready{Moreover, the most energy-hungry segmentation models can almost deplete the full battery capacity within an hour, with an 80.9\% to 95.9\% reduction.} On the other end, models like auto-completion are ubiquitous across messaging apps and deliver in terms of both performance and efficiency, allowing their frequent use without a significant impact on battery. \noindent \textbf{Observations.} \textit{Energy consumption is a major component in mobile, and intelligence comes at a cost to battery life. Unlike latency, which is visibly improved with new generations of devices, energy consumption seems to be predominantly dependent on the model architecture. Even though newer hardware might improve in power-efficiency, differences are much less pronounced compared to performance improvements, which are even less observable across different model architectures. This suggests that it is the AI developers who can optimise battery life the most, unlike plain latency which can be improved at multiple levels, including by manufacturers. } \subsection{Model-level Optimisations} In this section, we focus on the adoption of three model-level optimisations, namely i)~\textit{weight clustering}, ii)~\textit{pruning} and iii)~\textit{quantisation}, for the identified \texttt{TFLite} models. \noindent \textbf{Clustering:} \label{sec:clustering} Clustering refers to the technique of reducing the number of distinct weight values by representing them through their clusters' centroids \cite{han2016deep}. We identify clusters of shared weights by searching for layers with a \textit{``cluster\_''} prefix in \texttt{TFLite} models. Despite the advertised potential for significant model size reductions \cite{tflite_clustering}, we report that none of the models in-the-wild seem to use weight clustering. This may be a result of either accuracy drops or the fact that the current clustering implementation does not reduce runtime memory and targets model compression only~\cite{tflite_clustering}.
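The detection itself reduces to a name-prefix scan; a complementary heuristic (our own addition, with an assumed threshold) is to count the distinct values in a weight tensor, since a clustered tensor only holds its centroids:

```python
def looks_clustered(layer_name, weights, max_distinct=64):
    """Flag a layer as likely weight-clustered.

    Either the TFLite 'cluster_' name prefix survives in the graph, or
    the tensor holds suspiciously few distinct values (a clustered
    tensor only contains its cluster centroids). The max_distinct
    threshold is an assumption of this sketch, not a measured value."""
    if layer_name.startswith("cluster_"):
        return True
    return len(set(weights)) <= max_distinct
```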
\noindent \textbf{Pruning:} \label{sec:pruning} Pruning refers to the technique of zero-ing out specific weights/channels of the network that have minimal impact on the output, due to representational redundancy in DNNs. Models pruned during training can be detected by searching for layers with a \textit{``prune\_''} prefix in \texttt{TFLite} models. Nonetheless, this prefix is often removed for inference~\cite{pruning}. We report that we did not find any occurrence of such layers either. While this approach has the potential to skip the zero-weight computations during inference, the current implementation benefits only from increased sparsity \cite{tflite_pruning} which, like clustering, results only in model compressibility. To assess the potential for adopting magnitude-based weight pruning, we measured the weight sparsity of the tracked \texttt{TFLite} models. We find that, overall, 3.15\% of weights are near zero (within $\pm10^{-9}$), which suggests limited prospects for weight magnitude-based pruning. \begin{figure}[t] \centering \includegraphics[width=.85\columnwidth]{images/threads-throughput.pdf} \vspace{-0.6cm} \caption{\texttt{TFLite}'s model throughput for different devices and compute targets.} \vspace{-0.5cm} \label{fig:threads} \end{figure} \noindent \textbf{Quantisation:} \label{sec:quantisation} Finally, quantisation constitutes a prominent method for minimising the computational and memory demands of DNNs by means of reducing their representation precision~\cite{qcnns2016cvpr,int_only2018cvpr}. To study its adoption, we analysed the layer types and their weight and input bitwidth representations. We report that 10.3\% of the models make use of the \texttt{dequantize} layer, which indicates the deployment of lower-precision models as a way to perform model compression.
Furthermore, by examining each model's weights, we found that 20.27\% of the models use \texttt{int8} for the weight tensors whereas 10.31\% of the models work with \texttt{int8} activations. Recent hardware advances have led to NPUs that support \textit{multiple arithmetic precisions}~\cite{snpe,arm_ethos_npu,huawei_npu2019hotchips}. Such examples are the Hexagon 698 processor on the Qualcomm Snapdragon 865 (SDM865)~\cite{snpe} and the Arm Ethos processor~\cite{arm_ethos_npu}, which support 16-bit activations and 8-bit weights (A16W8). These schemes enable a better compromise between faster low-precision compute and having enough representational power to achieve good accuracy. In spite of the new opportunities of these hardware architectures, not only do existing deployment methodologies fail to exploit them, but we also found no evidence of their adoption. We revisit the issue of quantisation with hardware-specific optimisations in Sec.~\ref{sec:hw-specific-optimisations}, where we use Google's \texttt{NNAPI} and Qualcomm's \texttt{SNPE} to target specific processors in the SoC. \noindent \textbf{Observations:} \emph{While the research community has developed numerous ways to optimise DNNs for mobile execution, out-of-the-box support for such optimisations in modern frameworks can be primitive and might not translate into runtime gains despite the accuracy cost. Furthermore, most optimisations typically require model re-training and access to large-scale datasets. As such, we find that such optimisations are not widely adopted by mobile AI developers. Quantisation, which can also be used to target different SoC accelerators, is the most widely-used optimisation. However, more advanced hybrid quantisation schemes remain unsupported.} \subsection{System-level optimisations} \label{sec:system-level-optimisations} Upon deploying a model, developers have different setup choices that can affect the model's performance.
In this section, we discuss the impact of different tuneable model and system parameters on model performance. \noindent \textbf{Impact of batch size.} One common way of increasing a model's throughput is batching input samples together. By taking advantage of the SIMD instructions of SoCs and accelerators, this technique increases the DNN's throughput by producing multiple inference results in one forward pass. In Fig.~\ref{fig:batch}, we show the batch throughput across devices when processing $2, 5, 10,$ and $25$ samples at a time with 4 threads. We only consider \texttt{TFLite} models that successfully ran all batch sizes across all devices (149 in total). %
As expected, we see that the throughput increases as the batch size does. In fact, throughput scales almost linearly, which indicates that no bottleneck is hit up to that point. Moving the comparison across devices, we see that the S21 offers significantly faster inference, with throughput being $2.14\times$ and $5.42\times$ higher compared to the A70 and A20 respectively, at the highest batch size. This result goes in line with our conclusions from Sec.~\ref{sec:on-device-lat}. We anticipate that when scaling to higher batch sizes, devices with lower core count and memory will hit memory bandwidth bottlenecks or out-of-memory errors, but we defer this to future work. \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/latency_energy_per_lib_without_snpe.pdf} \vspace{-0.65cm} \caption{ECDF of TFLite models' latency and energy per CPU runtime.} \vspace{-0.4cm} \label{fig:nnapi-xnnpack-ecdf} \end{figure} \noindent \textbf{Impact of thread count.} Another tuneable parameter during mobile execution is the number of threads allocated for execution on the CPU. By default, all cores of the device can be \cready{simultaneously} used during execution (ARM DynamIQ).
However, in Heterogeneous Multi-core Processors (HMP) there usually exist multiple islands of cores, offering different dynamics and computational power. In Fig.~\ref{fig:threads} we show how the models' throughput varies when executed with different thread counts (2, 4, 8) and affinities (2, 4). For the latter, we use process pinning to select which cores to target \cready{from} the heterogeneous core sets. We observe that the optimal thread count can vary across devices, with the A20, A70 and S21 performing better with 4, 2 and 4 threads, respectively. We also see that the 8-threaded performance drops significantly across devices, indicating bottlenecked execution. Digging deeper into thread performance, we further plot four additional setups where we set the CPU affinity to run over a varying number of the largest cores. For example, \textit{4a2} means 4 threads with affinity 2, i.e. 4 threads running over the top 2 cores of the mobile's SoC. As expected, we observe that any setup that sets the number of threads higher than the number of CPU-affinity cores (\textit{4a2} and \textit{8a4}) results in significant performance degradation. This happens due to time-sharing, with the extra threads pinned on the same cores left waiting. Nonetheless, we also witness some less expected findings, such as the fact that setting the affinity to the same number of top cores does not yield any significant gain, contrary to our initial hypothesis that it would reduce process migration between cores. In fact, \textit{4a4} performs worse than 4 \cready{threads for A70} and similar is the case for \textit{2a2} and 2 \cready{threads} for A20. Predicting the optimal number of threads for mobile inference can be challenging as mobile devices have different CPU architectures with varying core frequencies as well as DVFS-enabled schedulers implementing energy-preserving policies \cite{kim2017enhancing}. Moreover, most mobile devices nowadays incorporate HMP SoCs (i.e.
ARM big.LITTLE, DynamIQ) with varying numbers of cores per island (e.g. Q888 has $1\times$X1, $3\times$A78, $4\times$A55 ARM Cortex cores, whereas Q675 has $2\times$A76 and $2\times$A55 cores). Therefore, scheduling across core islands can yield sub-optimal results for DNN execution. However, when selecting the optimal thread count and affinity for each device, we see up to $2\times$ throughput gains overall. This suggests that tuning the scheduling and thread count of DNN execution on heterogeneous devices and processors can yield significant improvements. \noindent \textbf{Observations:} \textit{Results from model-level optimisation indicate that there are alternative parameters for boosting inference throughput, but they should be tweaked in tandem with system-level factors, including the SoC topology and memory hierarchy, to make efficient use of the underlying hardware.} \begin{figure}[t] \centering \includegraphics[width=\columnwidth]{images/latency_energy_per_lib_with_snpe.pdf} \vspace{-0.65cm} \caption{ECDF of TFLite and caffe model latency and energy per hardware target with SNPE.} \vspace{-0.4cm} \label{fig:snpe-ecdf} \end{figure} \subsection{Target generality vs. hardware-specific optimisations} \label{sec:hw-specific-optimisations} In the previous section, we examined certain setup ``hyperparameters'', namely \textit{batch size} and \textit{process affinity}, which depending on the use-case can enhance inference performance. In this section, we investigate framework-specific optimisations that can enhance performance, either by means of optimised operator kernel implementations or by moving computation to a different device altogether, i.e. targeting the GPU/NPU/DSP of the SoC. To this end, we run experiments measuring the performance and energy of framework-specific optimisations on \texttt{TFLite} and \texttt{caffe} models across three alternative backends, namely \texttt{NNAPI}, \texttt{XNNPACK} and \texttt{SNPE}, on the Q845 board.
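The thread/affinity notation used in the experiments above (e.g. \textit{4a2}) can be made concrete with a small sketch. The assumption below that the highest-numbered logical CPUs are the ``big'' cores is for illustration only; real SoCs expose their topology through the OS:

```python
# Sketch of the thread/affinity setup notation: "4a2" means 4 threads
# pinned to the top 2 cores. The "top cores = highest-numbered CPUs"
# heuristic is an illustrative assumption, not a general rule.
import os
import re

def parse_setup(setup: str):
    """Parse e.g. '4a2' -> (threads=4, affinity_cores=2)."""
    m = re.fullmatch(r"(\d+)a(\d+)", setup)
    if not m:
        raise ValueError(f"bad setup string: {setup!r}")
    return int(m.group(1)), int(m.group(2))

def top_cores(n: int):
    """Pick the n highest-numbered logical CPUs available to this process."""
    cpus = sorted(os.sched_getaffinity(0))  # Linux-only API
    return set(cpus[-n:])

threads, n_cores = parse_setup("4a2")
# os.sched_setaffinity(0, top_cores(n_cores))  # would pin this process (Linux)
print(threads, n_cores)
```

With more threads than cores in the affinity mask (e.g. \textit{4a2}), the excess threads time-share the pinned cores, matching the degradation observed above.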
We refer the reader to the Appendix for more information on these frameworks. \noindent \textbf{Traces of hardware-specific acceleration.} In our latest snapshot, we found some traces of hardware-specific acceleration. Specifically, we found $71$ ($23.8\%$) apps using \texttt{NNAPI}, a single application using \texttt{XNNPACK} and three using \texttt{SNPE}. It is interesting to note that in the last case these models get blindly distributed to all devices, irrespective of whether they have a Qualcomm-based SoC or not. In fact, they deploy both \texttt{TFLite} and \texttt{dlc} variants of the same model. Overall, we see that many app models are missing out on the efficiency promises of targeting specialised hardware or using target-optimised kernel operations. \noindent \textbf{Optimisation opportunities.} As a way to measure the potential benefit of using each of the aforementioned framework optimisations on different processing elements, we run two experiments, one on \texttt{TFLite} models for \texttt{NNAPI} and \texttt{XNNPACK} (Fig.~\ref{fig:nnapi-xnnpack-ecdf}) and another on \texttt{TFLite} and \texttt{caffe} models for \texttt{SNPE} (Fig.~\ref{fig:snpe-ecdf}). In each case, we compare the performance of framework-specific optimisations to the baseline CPU and GPU runs. The reason we do not compare across them is that the number of commonly compatible models is low. This highlights one distinct characteristic of such optimisations: the rudimentary support for operators across heterogeneous targets, which in turn can hinder their widespread adoption. Results from our evaluation indicate that for CPU execution (Fig.~\ref{fig:nnapi-xnnpack-ecdf}), one is better off using the XNNPACK delegate, which executes DNN inference $1.03\times$ faster and $1.13\times$ more efficiently on average.
\texttt{NNAPI} did not prove its potential in our experiments, with its performance lagging behind the default CPU execution ($0.49\times$ the speed and $1.66\times$ less efficient on average). This could potentially be attributed to unoptimised NN drivers from the vendor. On the other hand, when one is deploying with a vendor-specific platform, \texttt{SNPE} in our case, performance is better for DSP and GPU (Fig.~\ref{fig:snpe-ecdf}), compared to vanilla CPU and GPU runs. Specifically, these are $5.72\times$ and $2.28\times$ faster and $20.3\times$ and $8.39\times$ more efficient on average, compared to CPU runs. In comparison to GPU runs, these are $2.97\times$ and $1.19\times$ faster and $2.69\times$ and $1.11\times$ more efficient on average. In the case of CPU, however, the story is similar to our previous experiment, further corroborating the hypothesis of unoptimised vendor CPU drivers. Note that CPU and GPU runs are executed at full precision (\texttt{float32}), while the DSP runs in \texttt{int8}. Depending on the task, this can result in accuracy variations, but we do not have access to model-specific data and labels to assess that. \noindent \textbf{Observations.} \textit{Results from our experiments tell a mixed story about hardware- and framework-specific optimisations. While they can yield noticeably better performance across models, this is not always the case due to driver implementation or other low-level confounding factors. The dilemma of target generality vs. hardware-specific optimisations ultimately lies in the hands of the developer and the resources they have at their disposal to extract every bit of performance from the hardware.} \vspace{-0.2cm} \subsection{Cloud-based DNN models} Another approach to accelerate inference and bring intelligence to mobile apps, without the need to specialise per target device, is offloading to the cloud.
We can envision this approach being popular amongst developers who do not implement or train their own models, or for models that are too computationally intensive to run locally on a mobile device or too expensive to optimise for each available target to offer a similar QoE. As mentioned in Sec.~\ref{sec:methodology_offline}, \texttt{gaugeNN}{} tracks app invocations of known cloud-based machine learning APIs in their code. This includes calls to Google (Google Cloud and Firebase ML) and Amazon services. \cready{Fig.}~\ref{fig:cloud} \cready{shows the number of applications invoking each of the cloud-based ML APIs across our dataset.} Overall, we find 524 distinct applications that use cloud AI APIs, a considerable increase of $2.33\times$ from our 2020 dataset. More specifically, 452 and 72 apps use Google and Amazon AI services, respectively. \cready{This increase is in line with the increase in models deployed within the apps }(Sec.~\ref{sec:temporal}). \cready{Furthermore,} we observe that developers primarily use cloud-based services for image and video analytics, such as face identification, bar/QR code recognition and video analysis, as well as for chatbots. \noindent \textbf{Observations:} \emph{Our results indicate that cloud APIs from Google and Amazon are gaining in popularity, as they allow developers to quickly deploy AI capabilities without the need for specialised ML expertise and costly infrastructure for training. Moreover, developers do not need to maintain training data on-premise and the resulting apps can be supported by heterogeneous devices with similar QoE.} \begin{figure}[t] \centering \includegraphics[width=0.75\columnwidth]{images/cloud_app_counts.pdf} \vspace{-0.2cm} \caption{\cready{Number of apps that invoke cloud-based ML APIs.
Categories with fewer than 10 apps are excluded.}} \vspace{-0.4cm} \label{fig:cloud} \end{figure} \subsection{Implications \& Trends} \noindent \textbf{Proliferation of mobile AI.} Our results indicate that both on-device and cloud-supported DNN applications are increasing rapidly (having doubled within a year). This is mostly driven by the availability of pre-trained models and easy-to-use cloud-based APIs, focusing mostly on vision tasks such as image detection and recognition. \noindent \textbf{Model reuse.} While there is \cready{much} research on bespoke model architectures, customisation and fine-tuning \cite{pan2009survey,emdl_ee_survey}, we observe that most developers use off-the-shelf DNN architectures. In fact, 80.9\% of the models are shared across two or more applications and a further 9.02\% of the remaining models share some layers (i.e., they are derived from a common model after fine-tuning). Simultaneously, there is a parallel trend of resorting to cloud-powered inference, further demonstrating a preference of developers towards turnkey solutions, instead of bespoke customised \cready{ones}. With the current trajectory of AI, we expect more developers specialising in ML-based app development, at least until the middleware (e.g. \texttt{NNAPI}) \cready{that} abstracts away ML-specific parameters becomes more prevalent. \noindent \textbf{DNNs and mobile hardware resources.} We witness that most applications do not take advantage of SoC-specific accelerators for their inference runtime, but rather target generality of their solutions, either by shipping vanilla CPU-only execution or by integrating framework-specific middleware options (e.g. \texttt{NNAPI}). Last, offloading inference to the cloud offers a consistent QoE, which is not dependent on the target device, at the expense of privacy \cite{spinn2020mobicom, almeida2021dyno} and monetary cost.
This behaviour comes as a consequence of the fragmentation of the Android ecosystem in terms of hardware capabilities and software support (e.g. vendor-specific \texttt{NNAPI} drivers). Consequently, we anticipate the need for automated solutions for the optimised development and deployment of ML solutions in mobile apps, which abstract away the complexity and heterogeneity of the ecosystem. \noindent \textbf{Energy as a bottleneck.} While Deep Learning adoption is undisputed, with an accelerating trajectory in the future, manufacturers turn to specialised hardware for faster and more efficient ML (e.g. NPUs). However, the same cannot be said for battery technology and capacity, which remain relatively stagnant. Given what we observed for the segmentation scenario in Sec.~\ref{sec:energy-scenarios}, we anticipate energy sooner or later becoming a bottleneck in DNN deployment, requiring novel solutions to support mobile intelligence on the go. \noindent \textbf{DNN co-habitation.} With more and more applications shipping DNN-powered solutions, we also anticipate the co-existence and parallel execution of more than one DNN in the future. Thus, researchers will need to tackle this emerging problem to efficiently support such runtimes, by means of OS or \mbox{hardware-level solutions.} \noindent \textbf{On-device learning and personalisation.} Last, so far in the paper we have only considered the task of mobile inference. In this setup, the weights of the model come pretrained on some centralised dataset and the device only performs forward propagation. However, with users becoming more and more privacy-aware and with legislation discouraging the storage of user data without legitimate interest, on-device training and federated learning \cite{mcmahan2017communication,horvath2021fjord} are becoming more and more prevalent \cite{paulik2021federated,MLSYS2019_bd686fd6}.
Moreover, with the proliferation of on-device data, on-device personalisation \cite{10.1145/3446382.3448359} is also gaining traction. These tasks will create a different workload to be optimised for on-device runtime, for which current or future tools will need to provide support. \vspace{-0.2cm} \subsection{Limitations} In this work we have shed light on the use and performance of DNNs in real-world applications. However, we only focused on the Android smartphone landscape due to its larger market share and wide device fragmentation. These findings might only partially hold for other mobile ecosystems. Furthermore, we have analysed the models that could be identified as DNN models. Obfuscated and encrypted models, or models that are downloaded outside of the Google Play store, were not benchmarked, although we still track the respective applications as ML-powered. While there might be a different distribution of obfuscated models in the wild, the results from \cite{sec_dnns_apps} indicate otherwise. Our analysis included both offline introspection and dynamic benchmarking of the models. However, we did not investigate particular invocation paths and the frequency of inference per app. We expect that some of these models are rarely used (e.g. credit card scanning) while others are utilised more frequently (e.g. activity detection). However, assessing the real-world usage of these models requires device instrumentation and the collection of telemetry data over a large user-base. \cready{While previous works~\cite{almeida2018chimp,onwuzurike2018family} have proposed large-scale crowd-testing of virtualised mobile apps with real user interaction, these generally preclude testing sensor input-dependent functionality, on which DNNs depend.} We leave this as future work. Last, while we characterise DNN cloud offloading, we acknowledge that we miss any developers who use their own custom (e.g., REST-based) APIs to access remote execution.
\section{Conclusion} In this work, we have carried out a comprehensive empirical study of the most popular DNN-powered mobile apps. Using \texttt{gaugeNN}{}, we analyse thousands of mobile apps in the wild and identify a significant chasm between the deployed models and the state-of-the-art architectures and optimisation techniques. This is the first work to dig deeper into these aspects so as to provide guidelines for both the mobile application and the DNN-framework developer communities. \section{Additional platform information} \subsection*{DNN Model extraction} \label{sec:A-formats} In Sec.~\ref{sec:crawling} of the paper, we stated that \texttt{gaugeNN}{} supports file extraction from i)~the base \texttt{apk}, ii)~expansion files (\texttt{OBB}s) and iii)~Android App Bundles. The extracted files are matched against a compiled list of known DNN framework formats and validation rules to identify potential DNN models. The complete list of formats is shown in Table~\ref{tab:support}. \begin{table}[h!]
\resizebox{0.99\linewidth}{!}{% \begin{tabular}{ll} \hline Framework & Extensions \\ \hline ONNX & .onnx, .pb, .pbtxt, .prototxt \\ \hline MXNet & .mar, .model, .json, .params \\ \hline Keras & .h5, .hd5, .hdf5, .keras, .json, .model, .pb, .pth \\ \hline Caffe & .caffemodel, .pbtxt, .prototxt, .pt \\ \hline Caffe2 & .pb, .pbtxt, .prototxt \\ \hline PyTorch & .pt, .pth, .pt1, .pkl, .h5, .t7, .model, .dms, .pth.tar, .ckpt, .bin, .pb, .tar \\ \hline Torch & .t7, .dat \\ \hline SNPE & .dlc \\ \hline FeatherCNN & .feathermodel \\ \hline TFLite & .tflite, .lite, .tfl, .bin, .pb \\ \hline TF & .pb, .meta, .pbtxt, .prototxt, .json, .index, .ckpt \\ \hline Sklearn & .pkl, .joblib, .model \\ \hline armNN & .armnn \\ \hline Mnn & .mnn \\ \hline Ncnn & .param, .bin, .cfg.ncnn, .weights.ncnn, .ncnn \\ \hline Tengine & .tmfile \\ \hline Flux & .bson \\ \hline Chainer & .npz, .h5, .hd5, .hdf5, .chainermodel \\ \hline \end{tabular}} \caption{Frameworks and formats validated by \texttt{gaugeNN}{}} \label{tab:support} \end{table} \section{Additional experiment information} \subsection*{Hardware-specific acceleration frameworks} As per Sec.~\ref{sec:hw-specific-optimisations}, we run our \texttt{TFLite} models against alternative backends, namely \texttt{NNAPI}, \texttt{XNNPACK} and \texttt{SNPE}. Below, we provide additional information for each one: \begin{itemize}[leftmargin=0pt,topsep=0pt,label={}] \item \textbf{NNAPI}\footnote{\url{https://developer.android.com/ndk/guides/neuralnetworks}}\textbf{.} The Neural Networks API (\texttt{NNAPI}) is a middleware-level library in Android that sits between the machine learning framework library used by an application (e.g. \texttt{TFLite}) and the Android Hardware Abstraction Layer (HAL). It essentially provides an abstraction layer, handling hardware acceleration through vendor- and hardware-specific NN drivers, which provide efficient operator implementations for CPUs, GPUs, DSPs, NPUs or other kinds of specialised hardware.
Execution falls back to the CPU in the absence of such drivers or for unsupported operators. \texttt{TFLite} is at the forefront of NNAPI delegation and PyTorch Mobile has announced support for it. Nonetheless, \texttt{NNAPI}, being in its infancy, comes with some shortcomings, mainly in the realm of OS version support (Android P and above), NN driver availability and heterogeneity in performance gains. \item \textbf{XNNPACK}\footnote{\url{https://github.com/google/XNNPACK}}\textbf{.} \texttt{XNNPack} provides a low-level, highly optimised library of NN inference operators across platforms. Specifically for ARM, it supports efficient implementations of operators through Neon instructions, as well as inference on sparse networks, which offers a practical solution to the problem described in Sec.~\ref{sec:pruning}. Despite the claimed performance benefits, operator support is limited and, if one is not careful, its use can lead to performance penalties instead of gains when compared to the baseline CPU delegates. \item \textbf{SNPE}\footnote{\url{https://developer.qualcomm.com/docs/snpe/overview.html}}\textbf{.} The Snapdragon Neural Processing Engine (\texttt{SNPE}) constitutes a vendor-specific runtime for the execution of DNNs on Qualcomm SoCs, targeting the CPU, Adreno GPU or Hexagon DSP of the SoC, and handling quantisation to the proper precision internally. It uses its own representation for NNs (the \texttt{.dlc} format) and supports conversion from different frameworks, including \texttt{caffe} and \texttt{TFLite}. However, while SNPE can potentially take advantage of hardware-specific optimisations, it can only target Qualcomm SoCs, trading off generality for performance. Operator support can also be an issue in \texttt{SNPE}, which falls back to the CPU in case of unsupported, hardware-specific operations. \end{itemize}
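The extension-based candidate matching that \texttt{gaugeNN}{} performs against Table~\ref{tab:support} can be sketched as follows. Only a subset of the table is reproduced here, and the real pipeline additionally applies per-format validation rules before accepting a file as a DNN model:

```python
# Illustrative sketch of extension-based framework matching; ambiguous
# extensions (e.g. .pb) map to several candidate frameworks, which is why
# gaugeNN also needs validation rules on the file contents.
from pathlib import Path

FORMATS = {
    "TFLite": {".tflite", ".lite", ".tfl", ".bin", ".pb"},
    "SNPE": {".dlc"},
    "Caffe": {".caffemodel", ".pbtxt", ".prototxt", ".pt"},
    "ONNX": {".onnx", ".pb", ".pbtxt", ".prototxt"},
}

def candidate_frameworks(filename: str):
    """Return the frameworks whose known extensions match the file."""
    ext = Path(filename.lower()).suffix
    return sorted(fw for fw, exts in FORMATS.items() if ext in exts)

print(candidate_frameworks("model.tflite"))  # ['TFLite']
print(candidate_frameworks("net.pb"))        # ambiguous: ['ONNX', 'TFLite']
```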
\section{Introduction} Evidence from nature suggests that even simple vision systems can provide meaningful information to enable perceptual motor control \cite{giurfa1997insect, warrant2017remarkable}. For instance, compound eyes in most insects consist of multiple simple eyes geometrically arranged to provide visual information over a wide field of view and with low resolution (Figure \ref{fig:insecteye}(a)). Yet insects are able to exhibit various locomotion and flight behaviours using such simple forms of visual feedback. This work is motivated by a particular research question: can perceptual legged locomotion be learned using \textit{only} sparse visual observations? Traditionally, control and perception are approached separately and then integrated in a modular manner, which can be computationally expensive to run in real-time and requires powerful on-board computers. In contrast, a learning-based approach using neural networks offers an alternative that bridges sensing and closed-loop control by feeding perceptual data directly into the neural-network-based control policy. In this way, we can train a policy that has access to both proprioceptive and exteroceptive data within the feedback loop, and thus learns environment-aware behaviours in an integrated manner. Here, we use the term \textit{perceptual locomotion} for such capabilities. \begin{figure}[t] \centering \includegraphics[width=86mm]{fig1.1d.png} \caption{Depth-sensing-driven perceptual locomotion and its bio-inspiration: (a) compound eyes consist of simple photo-receptor units, (b) robot approaching stairs, (c) descending stairs, (d) climbing stairs -- all using a small set of simple visual inputs.
} \label{fig:insecteye} \vspace{-4mm} \end{figure} The capability of blind walking, i.e., walking using proprioceptive feedback only, has achieved remarkable performance using either mixed-integer optimization \cite{MIT_Opti, RPC} and model-based control \cite{MPC_Guiyang, NMPC, Angelini_MPC, MIT}, or machine learning, which heavily utilizes physics simulations to train control policies by reinforcement learning (RL) \cite{hwangbo2019learning, ETH, DRL_Haarnoja, DRL_Xie, MELA, RMA} or imitation learning \cite{ETH_Imitation, SonyImitation, peng2020learning}. While optimization-based controllers are very good at guaranteeing stability, they are limited in scalability to unmodeled settings in the real world, due to expensive optimization \cite{MIT}. Notably, optimization-based approaches can be capable of traversing the terrains studied in this work, but they would require detailed mapping of the environment and substantial or offline computation for optimization-based planning. As an attractive alternative, learning-based controllers can leverage large amounts of experience gathered in simulation to produce control policies that exhibit robust behaviours \cite{ETH, MELA, RMA} without the online computational overhead of optimization-based control. Learning-based controllers have demonstrated a certain degree of environment awareness, although these approaches typically consist of blind locomotion and passive reflexes \cite{CPG_Victor_Claudio, ETH}. An inherent limitation is that reflexes are only triggered after interactions with the environment take place, e.g., a fore leg first collides with a step before a reaction occurs. Hence, these approaches do not achieve active control and have limited environmental awareness. In contrast, the perceptual locomotion proposed here can perceive the presence of a step through exteroceptive inputs, actively modulate the gait online, and proactively traverse difficult terrains.
There are existing works that have integrated visual feedback into controllers in ways distinct from our approach, typically requiring pre-processing or ground-truth data and adding online computational complexity, such as building maps or extracting encodings \cite{miki2022learning}, instead of providing the locomotion policy with exteroceptive observations directly. Camera feeds have been used as input to a neural network trained to predict safe footholds \cite{stereoVision2, siravuru2017deep}. The importance of Lidar sensors for producing environment-aware controllers has been highlighted \cite{sensorFusionVision}. Hybrid methods can combine RL and optimization, and use terrain height maps to produce adaptive locomotion behaviours \cite{ox, tsounis2020deepgait}. The use of height maps has also been explored to develop hierarchical controllers for bipedal locomotion \cite{deeploco}. Outside of robotics applications, learning-based approaches have also been explored to map sparse visual perception of ground surfaces to whole-body motion synthesis of animation characters \cite{holden2017phase}. \vspace{-0.5mm} \subsection{Motivation and Contribution} Inspired by the vast capabilities that simple visual feedback, like that of insects, enables in nature \cite{giurfa1997insect, warrant2017remarkable}, we use a robot as a platform to study the capabilities that sparse visual observations can endow to legged locomotion. This work aims to answer the following research questions: (1) What is the basic type of visual representation needed to achieve perceptual locomotion on uneven human-centered terrains? (2) For the traversal of common steps and stairs in human-centered environments, are sparse visual observations sufficient to render a visual abstraction? (3) What learning architecture can integrate such visual abstraction easily and effectively, while being compatible with many existing \textit{blind locomotion} schemes?
As a long-term goal to deploy useful applications in unstructured environments, we envision that the next stage of legged locomotion research is to investigate the effective and efficient exploitation of visual information -- to achieve robust and agile perceptual locomotion. Hereby, this work investigates a learning-based approach to directly integrate visual perception, by using sparse observations, for effective terrain-aware perceptual locomotion. To do so, we use an ANYmal B robot and the RaiSim physics engine \cite{raisim} for simulations, due to its good accuracy and efficiency for multi-body dynamics simulation. Our method is not specifically developed for one robot and should be applicable to other quadrupeds with joint position or impedance control. The contributions of this work are summarised as follows: \begin{enumerate} \item A study with positive results showing the feasibility of using only sparse visual inputs of the surfaces ahead of the robot, and the training of a perceptual locomotion policy that can traverse uneven human-centered terrains with high success rates, even on unseen terrains; \item A design guide to estimate the set of depth observations which is sufficient for an abstract representation of steppable terrain irregularities for perceptual locomotion; \item A novel learning curriculum that enables the training of a single policy that succeeds on multiple tasks, including omnidirectional walking on flat ground, and walking over steps, ramps and stairs, as well as unseen terrains with ditches, barriers, and alternating stairs, while demonstrating robustness to exteroceptive noise and ablations. \end{enumerate} The remainder of the paper is organized as follows. In Section~\ref{sec:learning_perceptual_locomotion}, we provide our formulation of the chosen visual perception, a discussion of the learning-based controller architecture, and a comprehensive description of the RL paradigm and our custom training curriculum.
In Section~\ref{sec:results}, we provide a summary of our training process and a detailed analysis of locomotion over high stairs, and present the results of our extensive tests. Finally, we conclude in Section~\ref{sec:conclusion}. \vspace{-1mm} \section{Learning Perceptual Locomotion} \label{sec:learning_perceptual_locomotion} Training perceptual locomotion via RL requires several considerations. First, the main elements of blind locomotion policies remain relevant, e.g., the choice of proprioceptive state or reward terms. Second, the chosen representation for the perceptual inputs, its compactness, and its capability to capture environmental features are critical for successful policies. This is especially relevant because exteroceptive information often comes with much higher dimensionality than the proprioceptive data of the robot, e.g., images contain many more pixels than the number of actuators or encoders on a robot. However, as the observation space of the policy increases, training becomes increasingly more expensive due to the curse of dimensionality \cite{bellman1966dynamic}. Hence, it is desirable to choose a representation that is small enough to enable tractable training of the policy, yet sufficient to capture the features of the environment that are relevant to perceptual locomotion, e.g., stepping over stairs. In this sense, we propose a set of visual inputs that is designed to capture the terrain with sufficient resolution, as elaborated in Section \ref{sec:principle:aaa}, while also being small enough not to impact training time excessively. Lastly, we design a specific training curriculum to enable a single policy to robustly locomote over various terrains.
Several key assumptions are made: terrains -- those in human-centered environments, mainly consisting of flat ground, steps, ramps, stairs, barriers and ditches, to be traversed in a forward direction, as commonly needed in industrial applications; robots -- medium-size quadrupeds that can physically traverse the aforementioned surfaces; visual feedback -- sparse exteroceptive perception which can be obtained with specifications matching those of common commercial cameras. More details are given in the following sub-sections. \vspace{-0.5mm} \subsection{Formulation of Sparse Visual Perception} \label{sec:principle:aaa} With regard to our choice of exteroceptive sensing, we are motivated by the possibility of leveraging small and affordable RGB-D and Lidar sensors to facilitate future deployment on real robot platforms. For this reason, we choose to replicate the vertical field-of-view of a popular commercial off-the-shelf RGB-D sensor, the Intel RealSense D435, as shown in Figure \ref{fig:ray_res}. The number of rays used as an input for the perceptual locomotion policy was designed to provide sufficient spatial resolution to perceive relevant features of the selected terrains, e.g., the presence of a step or stairs. We propose the following design guide to determine the number of depth observations: during locomotion, the size of the smallest segment between rays immediately ahead of the robot shall be smaller than its effective foot size, in order for the policy to be able to perceive surfaces on which the feet can step. The depth-sensing device is oriented such that the edge of the field of view enables depth sensing in the downward direction; therefore, the robot can perceive terrain features immediately in front of it and within its workspace. Using Figure \ref{fig:ray_res}, on flat ground, the inter-ray distance is given by $d_{i,j} = h ( \tan{\theta_{1,j}} - \tan{\theta_{1, i}})$, where $\theta_{1, i}$ is the angle between rays 1 and $i$.
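This design guide can be checked numerically. The sketch below reproduces the calculation for a $60^{\circ}$ vertical field of view, using the robot dimensions quoted in the next paragraph ($h \approx 50$ cm, 5.5 cm effective foot size):

```python
# Numeric check of the design guide: on flat ground, the smallest inter-ray
# distance for N rays over a 60-degree vertical FOV is
#   d_12 = h * tan(60 deg / (N - 1)).
# h and the effective foot size are the values quoted in the paper.
import math

def min_inter_ray_distance(n_rays: int, h: float = 0.50) -> float:
    """d_{1,2} in metres for n_rays spanning 60 degrees, body height h."""
    return h * math.tan(math.radians(60.0 / (n_rays - 1)))

def min_rays(foot_size: float = 0.055, h: float = 0.50) -> int:
    """Smallest ray count whose finest spacing fits under the foot size."""
    n = 2
    while min_inter_ray_distance(n, h) >= foot_size:
        n += 1
    return n

print(min_rays())  # 11, matching the N = 11 derived in the text
```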
In our case, the smallest inter-ray distance is found between rays 1 and 2, which for $N$ rays is given by $d_{1,2} = h ( \tan{\theta_{1,2}} - 0 )= h \tan(\frac{60 ^{\circ}}{N-1})$. Given the dimensions of our robot, we have $h \approx 50$ cm and an effective foot size of $d_{\text{foot}} =$ 5.5 cm. Therefore, $N=11$ is the smallest number of rays that yields $d_{1,2} < d_{\text{foot}}$. Whilst yielding only a sparse observation of the environment ahead of the robot, this design guide provides a means to determine a resolution fine enough to capture the presence of the smallest obstacle the robot could possibly step on within immediate proximity. \begin{figure}[t] \centering \includegraphics[width=80mm]{images/raysfootsize5.png} \caption{Design guide to determine the resolution of sparse observations for perceiving steppable terrain irregularities. } \label{fig:ray_res} \vspace{-4mm} \end{figure} \subsection{Learning-based Control Architecture} \label{sec:principle:ccc} The control architecture of our learning-based policy is depicted in Figure \ref{fig:control_architecture}, and is identical during training and testing. The neural network policy receives a state observation $s$ that consists of the concatenation of two vectors, a proprioceptive state vector $s_p$ and an exteroceptive state vector $s_e$. The proprioceptive state includes: body height in global coordinates $d_z$, trigonometric terms from the body rotation matrix encoding roll $\phi$ and pitch $\theta$, yaw angular error $\bar{\psi}$, body horizontal velocity error $(\bar{v_x},\bar{v_y})$, joint angles $q$, body linear velocity $(v_x, v_y, v_z)$, body angular velocity $(\dot{\phi}, \dot{\theta}, \dot{\psi})$, and joint angular velocities $\dot{q}$. Note that $\bar{\psi}$ and $(\bar{v_x},\bar{v_y})$ enable omnidirectional base control.
The exteroceptive state $s_e$ consists of the distances measured from the 11 rays in our setup, where each measurement provides the distance to a physical obstacle and is clipped between 0.1 m and 8 m, matching the specifications of commercially available RGB-D cameras. Fully-connected feedforward neural networks with hidden layer dimensions as in Table \ref{tab:ppo} are used for the policy and value networks in an actor-critic framework. The policy network outputs an action $a$, which represents the desired joint positions. These are fed into a PD controller running at 100 Hz to obtain reference joint torques, which are then used by the physics engine \cite{raisim} to time-step the simulation at 400 Hz. The corresponding state measurements are then fed back into the policy, closing the loop. In line with recent works \cite{MELA, RMA}, we design our policy to learn joint angles instead of joint torques, as this avoids learning low-level joint dynamics, which are often poorly simulated, with the expectation of improving the simulation-to-reality transfer of our method in the future.
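The observation assembly described above can be sketched as follows; the proprioceptive dimension and the sample readings are illustrative placeholders, while the [0.1 m, 8 m] clipping range matches the text:

```python
# Minimal sketch (not the training code) of assembling the policy
# observation: proprioceptive state s_p concatenated with 11 ray
# distances clipped to the RGB-D sensing range quoted above.

RAY_MIN, RAY_MAX = 0.1, 8.0  # metres

def build_observation(s_p, ray_distances):
    """Concatenate proprioceptive state with clipped ray distances."""
    s_e = [min(max(d, RAY_MIN), RAY_MAX) for d in ray_distances]
    return list(s_p) + s_e

s_p = [0.0] * 33                          # illustrative proprioceptive vector
rays = [0.05, 0.4, 1.2, 9.5] + [2.0] * 7  # 11 raw ray readings (metres)
obs = build_observation(s_p, rays)
print(len(obs))          # 44
print(obs[33], obs[36])  # 0.1 (clipped up), 8.0 (clipped down)
```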
\begin{figure} \centering \includegraphics[width=80mm]{nn_architecture3.png} \caption{Control architecture used to train our neural network policy for perceptual locomotion.} \label{fig:control_architecture} \vspace{-3mm} \end{figure} \begin{table} \small \centering \caption{Reward parameters in the training environment.} \label{tab:reward} \scalebox{0.95}{\begin{tabular}{|l|l|l|} \hline $c_{\boldsymbol{\tau}} = -0.0001$ & $c_{\boldsymbol{\dot{q}}} = -0.0001$ & $c_{\phi, \theta} = -0.05$ \\ \hline $c_{\boldsymbol{v}} = 1$ & $c_{\psi} = -5$ & $\bar{\psi}_{\text{clip}} = 0.3$ \\ \hline \end{tabular}} \vspace{-3mm} \end{table} \subsection{Learning Locomotion via Reinforcement Learning} \label{sec:principle:bbb} \subsubsection{Reinforcement Learning} \label{sec:principle:bbb:a} We formulate the control problem as a Markov Decision Process (MDP), a mathematical formalism for modelling sequential decision processes, formally defined by a 4-tuple $\langle \mathcal{S}, \mathcal{A}, P(s_{t+1} \vert s_{t}, a_{t}), R \rangle$, where $ \mathcal{S}$ is the set of possible states, $ \mathcal{A}$ is the set of possible actions, $ P(s_{t+1} \vert s_{t}, a_{t})$ is the transition probability given a state-action pair and $R$ is a reward provided by the environment when transitioning into a new state. The control policy is represented by a neural network $\pi_{\theta}$. Proximal Policy Optimization (PPO) \cite{PPO}, an on-policy actor-critic RL algorithm, is used to train the policy and the PPO update is given by \begin{align} \theta_{k+1} = \arg \max_{\theta} \mathbb{E}_{s,a \sim \pi_{\theta_k}} \left[ L(s,a,\theta_k, \theta)\right] \end{align} with a clipped loss function $L(s,a,\theta_k, \theta)$ that has a surrogate term and an entropy term.
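As a point of reference, the clipped surrogate term of $L$ can be sketched as follows (a minimal illustration, not our training code; $\epsilon = 0.2$ as in Table \ref{tab:ppo}, and the entropy term is omitted):

```python
import numpy as np

# Minimal sketch of PPO's clipped surrogate term; ratio is
# pi_theta(a|s) / pi_theta_k(a|s) and adv is an advantage estimate.
# The entropy term of L is omitted for brevity.
def clipped_surrogate(ratio, adv, eps=0.2):
    unclipped = ratio * adv
    clipped = np.clip(ratio, 1.0 - eps, 1.0 + eps) * adv
    # The pessimistic minimum removes the incentive to push the policy
    # ratio far outside [1 - eps, 1 + eps] in a single update.
    return np.minimum(unclipped, clipped).mean()

print(clipped_surrogate(np.array([2.0]), np.array([1.0])))  # -> 1.2, not 2.0
```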
\subsubsection{Reward Design} \label{sec:principle:bbb:b} There are two groups of reward terms: those that are used for the general design of a locomotion policy and those that specifically enable the control policy to use state error terms $\bar{v_x},\bar{v_y}, \bar{\psi}$, as control inputs in the feedback loop, where each error term is defined as the difference between the reference and measured state. The reward terms that are designed to enable smooth locomotion are a torque term $r_{\boldsymbol{\tau}}$ given by \begin{align} r_{\boldsymbol{\tau}} = c_{\boldsymbol{\tau}} \vert \vert \boldsymbol{\tau} \vert \vert^2, \; c_{\boldsymbol{\tau}} < 0, \label{eq:r_tau} \end{align} and a joint velocity term $r_{\boldsymbol{\dot{q}}}$ given by \begin{align} r_{\boldsymbol{\dot{q}}} = c_{\boldsymbol{\dot{q}}} \vert \vert \boldsymbol{\dot{q}} \vert \vert^2, \; c_{\boldsymbol{\dot{q}}} < 0 , \label{eq:r_qdot} \end{align} to penalize excessive use of actuators, and a body orientation term $ r_{\phi, \theta}$ given by \begin{align} r_{\phi, \theta} = c_{\phi, \theta} \frac{\vert \phi \vert + \vert \theta \vert}{\pi}, \; c_{\phi, \theta} < 0, \label{eq:r_pt} \end{align} to encourage the body to remain horizontal. Moreover, the reward terms associated with the control inputs are $r_{\boldsymbol{v}}$ given by \begin{align} r_{\boldsymbol{v}} = 1 - c_{\boldsymbol{v}} \vert \vert \frac{\boldsymbol{v_{\text{ref}}} - \boldsymbol{v}}{\vert \vert \boldsymbol{v_{\text{ref}}} \vert \vert } \vert \vert^2, \; c_{\boldsymbol{v}} > 0, \label{eq:r_v} \end{align} to minimize velocity error for any given reference velocity, and $r_{\psi}$ given by \begin{align} r_{\psi} = c_{\psi} \min (\vert \frac{\psi_{\text{ref}} - \psi}{\pi} \vert, \bar{\psi}_{\text{clip}}), \; c_{\psi} < 0, \label{eq:r_psi} \end{align} to minimize heading error up to a clipping value $\bar{\psi}_{\text{clip}}$. The total reward at any time is defined as the sum of all reward terms.
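A minimal sketch of how the above terms combine into the total reward (illustrative only; all state inputs are placeholders, the function name is ours, and the coefficients are those of Table \ref{tab:reward}):

```python
import numpy as np

# Sketch of the total reward as the sum of Eqs. (r_tau)-(r_psi); inputs
# (torques, joint velocities, angles, velocities) are placeholder values.
def total_reward(tau, qdot, phi, theta, v, v_ref, psi_err,
                 c_tau=-1e-4, c_qdot=-1e-4, c_pt=-0.05,
                 c_v=1.0, c_psi=-5.0, psi_clip=0.3):
    r_tau = c_tau * np.sum(tau ** 2)                    # torque penalty
    r_qdot = c_qdot * np.sum(qdot ** 2)                 # joint-velocity penalty
    r_pt = c_pt * (abs(phi) + abs(theta)) / np.pi       # orientation penalty
    r_v = 1.0 - c_v * np.sum(((v_ref - v) / np.linalg.norm(v_ref)) ** 2)
    r_psi = c_psi * min(abs(psi_err) / np.pi, psi_clip) # clipped heading error
    return r_tau + r_qdot + r_pt + r_v + r_psi

# Perfect tracking with zero actuation effort yields the maximum reward of 1.
print(total_reward(np.zeros(12), np.zeros(12), 0.0, 0.0,
                   np.array([0.6, 0.0]), np.array([0.6, 0.0]), 0.0))  # -> 1.0
```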
Reward parameters used for training are in Table \ref{tab:reward}. \begin{figure} \centering \includegraphics[width=80mm]{multi_terrains3.png} \caption{Types of terrains used for training and testing: (a) flat ground, (b) step, (c) ramp, (d) stairs.} \label{fig:terrain_types} \vspace{-4mm} \end{figure} \subsubsection{Curriculum Learning} \label{sec:principle:bbb:c} Whilst it may be possible to train a single policy directly by uniformly sampling the set of targeted tasks, due to inherent limitations with RL algorithms related to the exploration-exploitation dilemma and high-dimensional control problems, we designed a training curriculum to improve policy convergence for better sample efficiency. Two aspects are considered: first, training was conducted using 100 parallel environments, where a gradient step is taken using the averaged gradient across the environments; second, the training process followed a custom curriculum to facilitate convergence of the control policy. At any point during training, the control task $\mathcal{T}_i$ for each of the parallel environments is sampled from a specified probability distribution $p(\mathcal{T})$, which is modified during the different stages of the training curriculum. Training a successful, convergent policy on the final stage of the curriculum \textit{ab initio} was not possible within our training experiments, which suggests that the training curriculum is instrumental in achieving successful convergence of the control policy. The training curriculum consists of a warm-up stage and four main stages. During each training stage, the first policy to achieve a minimum policy noise of 0.2 is selected, where noise is calculated as the standard deviation of action outputs from the mean output during that stage.
In the warm-up stage, a control policy was trained on a uniform distribution of forward reference velocities [0, 0.7] m/s, where all values below 0.15 m/s are clipped to zero, and lateral reference velocity and reference yaw are zero. The resulting policy is then taken through four stages which modify $p(\mathcal{T})$ and $v_{\text{ref}}$ by including tasks in terrains shown in Figure \ref{fig:terrain_types}: \begin{itemize} \item Stage $\alpha$: introduce step, ramp, and stairs in the environment distribution with $p(\mathcal{T})$ of 0.1, 0.1, 0.5 respectively and 0.3 for flat floor. For step, ramp, and stairs, forward reference velocity is uniformly sampled from [0.15,0.7] m/s. For flat floor, reference velocity is sampled as in the warm-up stage. \item Stage $\beta$: same as previous stage, except that flat floor tasks include a lateral reference velocity component sampled from [-0.3, 0] m/s while reference yaw is zero, i.e., reference velocity can have forward and lateral left components. \item Stage $\gamma$: same as previous stage, except that flat floor tasks sample longitudinal velocity from [-0.7,0.7] m/s, i.e., a backward reference velocity component is introduced. \item Stage $\delta$: same as previous stage, except that flat floor tasks sample lateral velocity from [-0.3,0.3] m/s, i.e., a right lateral velocity component is introduced. \end{itemize} The terrains in Figure \ref{fig:terrain_types} are designed based on their typical dimensions in environments inhabited by humans \cite{calistairs}. Ramp inclinations are between $5.7^{\circ}$-$11.3^{\circ}$, step heights are between 10-20 cm and the stairs step length is 30 cm.
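The per-stage task sampling described above can be sketched as follows (illustrative only, using the stage-$\alpha$ probabilities quoted in the list; terrain names and the function are ours, and only the forward velocity component is shown):

```python
import random

# Sketch of stage-wise task sampling (stage-alpha probabilities from the
# text: step 0.1, ramp 0.1, stairs 0.5, flat floor 0.3; names are ours).
STAGE_ALPHA = {"step": 0.1, "ramp": 0.1, "stairs": 0.5, "flat": 0.3}

def sample_task(p=STAGE_ALPHA):
    terrain = random.choices(list(p), weights=list(p.values()))[0]
    if terrain == "flat":
        vx = random.uniform(0.0, 0.7)       # warm-up style sampling
        vx = 0.0 if vx < 0.15 else vx       # small commands clipped to zero
    else:
        vx = random.uniform(0.15, 0.7)      # obstacle terrains: forward only
    return terrain, vx
```

Later stages would widen the flat-floor velocity ranges as described in stages $\beta$ through $\delta$.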
\begin{figure} \centering \subfloat[]{\includegraphics[clip, trim=0mm 0mm 0mm 0mm, width=78mm]{returns_m.pdf}} \\[-5ex] \hspace{0.08em} \subfloat[]{\includegraphics[clip, trim=0mm 0mm 0mm 0mm, width=78mm]{loss_m.pdf}} \\[-5.5ex] \hspace{0.08em} \subfloat[]{\includegraphics[clip, trim=0mm 0mm 0mm 0mm, width=78mm]{pnoise_m.pdf}} \\[-3ex] \caption{Episode returns, policy loss and noise during curriculum training from re-running our curriculum using 6 random seeds. Policy loss is the sum of surrogate loss and entropy loss. Results overlap between stages, as not all runs train each stage for the same number of steps. In contrast, \textit{Uniform} is trained from scratch on all tasks sampled uniformly, which yields lower performance. } \label{fig:train} \vspace{-2.5mm} \end{figure} \begin{table} \small \centering \caption{Parameters used during training.} \scalebox{0.95}{\begin{tabular}{|l|l|} \hline Policy net: [128, 256, 128] & Value net: [128, 256, 128] \\ \hline $\gamma = 0.996$ & $\lambda = 0.95$ \\ \hline $\epsilon = 0.2$ & $\alpha = 0.0002$ \\ \hline $c_1 = 0.5$ & $c_2 = 0.01$ \\ \hline $N_{\text{learning epochs}} = 4$ & $N_{\text{mini batches}} = 4$ \\ \hline Maximum gradient norm: 0.5 & Minimum policy noise: 0.2 \\ \hline \end{tabular}} \label{tab:ppo} \vspace{-4mm} \end{table} \section{Results of Perceptual Locomotion} \label{sec:results} This section presents the training process and evaluates the performance of the policy over seen and unseen terrains, and under the presence of exteroceptive noise and ablations.
\begin{figure*}[t] \centering \includegraphics[width=180mm]{all_snapshots3.png} \caption{ Perceptual locomotion policy tested on various seen and unseen tasks: (a)-(c) Tests on steps, ramps, and stairs; (d) Tests on flat ground with a turning command; (e) Ablation tests of the exteroceptive observations; (f) Tests on stairs using Gaussian noise with standard deviation $\sigma_n$ in exteroceptive observations; (g)-(i) Tests in terrains not seen during training using noise, consisting of (g) barriers, (h) ditches, and (i) alternating stairs of varying step length.} \label{fig:all_tests} \vspace{-4mm} \end{figure*} \subsection{Training Process} \label{sec:res:train} The policy was trained using the parameters in Table \ref{tab:ppo}. The training curriculum enables the policy to learn the mapping required to control the robot in two groups of tasks: those that involve omnidirectional base movements on flat ground as well as those that involve perceptive locomotion over obstacles in the forward direction. In contrast to our training curriculum, training from scratch by uniformly sampling all tasks from the final stage of the curriculum yields significantly worse performance. Using 6 random seeds, Figure \ref{fig:train} shows episode returns, policy loss and noise during the stages of curriculum training, as well as for uniform sampling across all tasks. In Figure \ref{fig:train}, a training step corresponds to updating the policy using $N_{\text{learning epochs}}$ gradient steps, each with $N_{\text{mini batches}}$ trajectories. Results in Figure \ref{fig:train} show how the performance of the policies trained using our curriculum is significantly better than that of the uniform sampling runs. Policies trained with uniform sampling perform well on flat ground, but often trip or fail when stepping over obstacles, particularly in the stairs terrain, yielding lower episode returns.
Training with uniform sampling was stopped after the same number of steps at which our curriculum reaches our stop condition on policy noise. Considering the increasing trend of episode returns, it is possible that the uniform sampling approach could eventually match the performance of our curriculum; however, this would take what we consider a prohibitively long time, especially compared to our training curriculum. In this sense, we argue that our training curriculum successfully aids policy convergence and improves sample efficiency to achieve top performance across our tasks. Using a desktop machine (8-core Intel i9 CPU, an 8 GB GeForce RTX 2080 GPU), across all of our training runs, the average wall-clock training time for the warm-up training stage was 3.7 hours, stage $\alpha$ was 5.2 hours, stage $\beta$ was 5.3 hours, stage $\gamma$ was 6.3 hours, and stage $\delta$ was 6.5 hours. Due to the parallelized training environment, the real-time factor of the simulation during training was 300x, i.e., the policy was trained for an average total simulated time of 11.2 months. Most significantly, our preliminary training experiments showed that a blind locomotion policy which only uses the proprioceptive state $s_p$ as policy observation could be trained on the warm-up stage in 0.6 hours, which implies that the addition of the chosen exteroceptive state $s_e$ to train a perceptive locomotion policy does not increase training time prohibitively.
\begin{figure} \centering \subfloat[]{\includegraphics[clip, trim=0mm 0mm 0mm 0mm, width=0.42\textwidth]{data_stairs_04_rand08.0.pdf}} \\[-5.5ex] \hspace{0.08em} \subfloat[]{\includegraphics[clip, trim=0mm 0mm 0mm 0mm, width=0.42\textwidth]{data_stairs_04_rand06.0.pdf}} \\[-5.5ex] \hspace{0.08em} \subfloat[]{\includegraphics[clip, trim=0mm 0mm 0mm 0mm, width=0.42\textwidth]{data_stairs_04_rand04.0.pdf}} \\[-5.5ex] \hspace{0.08em} \subfloat[]{\includegraphics[clip, trim=0mm 0mm 0mm 0mm, width=0.42\textwidth]{data_stairs_04_randbh_l.pdf}} \\[-3.5ex] \caption{Body velocities and height measured during three sets of experiments on stairs. The dotted line on the velocity plots indicates the time at which the fore legs reach the first step of the stairs.} \label{fig:stairs_diff_vels} \vspace{-4mm} \end{figure} \begin{figure} \centering \subfloat[]{\includegraphics[clip, trim=0mm 0mm 0mm 0mm, width=0.42\textwidth]{stepping2.png}} \\[-5ex] \hspace{0.08em} \subfloat[]{\includegraphics[clip, trim=0mm 6mm 0mm 0mm, width=0.42\textwidth]{data_stairs_08_fv.pdf}} \\[-5ex] \hspace{0.08em} \subfloat[]{\includegraphics[clip, trim=0mm 6mm 0mm 0mm, width=0.42\textwidth]{data_stairs_08_hf.pdf}} \\[-5.5ex] \hspace{0.08em} \subfloat[]{\includegraphics[clip, trim=0mm 0mm 0mm 0mm, width=0.42\textwidth]{data_stairs_08_kf.pdf}} \\[-3.5ex] \caption{Stairs climbing depicted at $t=2.5$ seconds. Body velocities and hip and knee flexor joint trajectories feature modified oscillatory motions during stairs climbing as well as increased velocity fluctuations.} \label{fig:stairs_climbing_1} \vspace{-5mm} \end{figure} \subsection{Forward Walking on Stairs} \label{sec:res:terr} The trained policy was extensively tested on stairs such as those in Figure \ref{fig:all_tests}(c), as this terrain presents the highest difficulty for perceptual locomotion.
Figure \ref{fig:stairs_diff_vels} shows body velocities and height measured during 3 sets of experiments with step height uniformly sampled between 10-20 cm, with reference forward velocities of $0.4$, $0.6$ and $0.8$ m/s. A total of 60 experiments were run per reference velocity. The velocity data in Figure \ref{fig:stairs_diff_vels} show how the standard deviation of the measured forward and lateral velocity increases during stairs climbing when compared to locomotion over flat ground before reaching the stairs. Lateral velocity presents the highest increase in standard deviation upon reaching the stairs, particularly for forward reference velocities of 0.4 m/s and 0.8 m/s, which can be explained by the lateral rocking motions required to climb the stairs at different reference forward velocities. Upon visual examination of the gaits, we believe that a forward velocity of 0.4 m/s performs worst because it requires the greatest lateral rocking to maintain a stable gait when climbing such large stairs, which agrees with Table \ref{tab:success_rates}, which reports the largest mean $\bar{v_y}$ error on stairs across the forward velocities tested. \begin{table*} \small \centering \caption{Summary of performance over different terrains from 60 repeated tests per task.
Success rate is included, as well as mean squared error, mean and standard deviation for the error terms relating to the three control variables of the learned control policy, namely $\bar{v_x}$, $\bar{v_y}$, $\bar{\psi}$.} \scalebox{0.92}{\begin{tabular}{|c|c|c|c|c|} \hline Task & \textbf{Success Rate} & $\bar{v_x}$ [m/s] & $\bar{v_y}$ [m/s] & $\bar{\psi}$ [deg] \\ \thickhline Stairs, $v_x = 0.8, v_y = 0$ & 93.3\% & 0.03, 0.02 $\pm$ 0.14 & 0.08, 0.07 $\pm$ 0.27 & 1.80, 0.26 $\pm$ 1.32\\ \hline Stairs, $v_x = 0.6, v_y = 0$ & 100\% & 0.02, 0.01 $\pm$ 0.15 & 0.17, 0.15 $\pm$ 0.39 & 1.73, 0.15 $\pm$ 1.31\\ \hline Stairs, $v_x = 0.4, v_y = 0$ & 91.7\% & 0.02, -0.01 $\pm$ 0.13 & 0.14, 0.18 $\pm$ 0.32 & 1.31, -0.04 $\pm$ 1.15\\ \hline Ramp, $v_x = 0.8, v_y = 0$ & 100\% & 0.04, 0.14 $\pm$ 0.13 & 0.05, -0.05 $\pm$ 0.22 & 1.08, 0.09 $\pm$ 1.04\\ \hline Ramp, $v_x = 0.6, v_y = 0$ & 100\% & 0.07, 0.22 $\pm$ 0.16 & 0.15, -0.04 $\pm$ 0.39 & 0.39, -0.16 $\pm$ 0.60\\ \hline Ramp, $v_x = 0.4, v_y = 0$ & 100\% & 0.05, 0.20 $\pm$ 0.08 & 0.08, -0.02 $\pm$ 0.29 & 0.30, -0.19 $\pm$ 0.51\\ \hline Step, $v_x = 0.8, v_y = 0$ & 100\% & 0.02, -0.02 $\pm$ 0.13 & 0.05, 0.00 $\pm$ 0.23 & 1.42, 0.09 $\pm$ 1.19\\ \hline Step, $v_x = 0.6, v_y = 0$ & 100\% & 0.01, -0.04 $\pm$ 0.18 & 0.03, 0.04 $\pm$ 0.18 & 0.92, -0.07 $\pm$ 0.96\\ \hline Step, $v_x = 0.4, v_y = 0$ & 100\% & 0.05, -0.02 $\pm$ 0.22 & 0.09, 0.11 $\pm$ 0.32 & 1.34, -0.16 $\pm$ 1.14\\ \hline Flat ground, $v_x = 0.7, v_y = 0$ & 100\% & 0.01, -0.01 $\pm$ 0.09 & 0.04, -0.01 $\pm$ 0.19 & 0.69, -0.12 $\pm$ 0.82\\ \hline Flat ground, $v_x = -0.7, v_y = 0$ & 100\% & 0.01, 0.03 $\pm$ 0.08 & 0.03, 0.01 $\pm$ 0.10 & 0.30, -0.06 $\pm$ 0.55\\ \hline Flat ground, $v_x = 0.5, v_y = 0$ & 100\% & 0.00, -0.03 $\pm$ 0.06 & 0.03, 0.11 $\pm$ 0.14 & 0.42, -0.15 $\pm$ 0.63\\ \hline Flat ground, $v_x = -0.5, v_y = 0$ & 100\% & 0.01, 0.07 $\pm$ 0.05 & 0.01, 0.04 $\pm$ 0.09 & 0.27, -0.08 $\pm$ 0.51\\ \hline Flat ground, $v_x = 0, v_y = 0.3$ & 100\% & 0.04, 
-0.03 $\pm$ 0.18 & 0.05, 0.02 $\pm$ 0.09 & 0.24, -0.09 $\pm$ 0.48\\ \hline Flat ground, $v_x = 0, v_y = -0.3$ & 100\% & 0.03, 0.02 $\pm$ 0.16 & 0.03, 0.01 $\pm$ 0.10 & 0.19, -0.09 $\pm$ 0.42\\ \hline Flat ground, $v_x = 0.3, v_y = 0.3$ & 100\% & 0.04, 0.03 $\pm$ 0.14 & 0.04, -0.03 $\pm$ 0.19 & 0.32, -0.02 $\pm$ 0.56\\ \hline Flat ground, $v_x = -0.5, v_y = -0.3$ & 100\% & 0.05, 0.06 $\pm$ 0.22 & 0.03, 0.07 $\pm$ 0.14 & 0.33, -0.03 $\pm$ 0.57\\ \hline \end{tabular}} \label{tab:success_rates} \vspace{-5mm} \end{table*} \begin{table*} \small \centering \caption{Summary of performance on the stairs task with $v_x = 0.6, v_y = 0$, where metrics are obtained from 60 repetitions per task. Success rate per task is included, as well as the \textbf{ratio} of error terms with performance in the noiseless case for this task as shown in Table \ref{tab:success_rates}. Error terms are as in Table \ref{tab:success_rates}. Gaussian noise standard deviation is given by $\sigma_n$.} \scalebox{0.95}{\begin{tabular}{|c|c|c|c|c|} \hline Exteroceptive Noise $\sigma_n$ [m] & \textbf{Success Rate} & Ratio $\bar{v_x}$ [-] & Ratio $\bar{v_y}$ [-] & Ratio $\bar{\psi}$ [-] \\ \thickhline 0.01 & 100\% & 0.81, 0.85 $\pm$ 0.89 & 0.92, 1.11 $\pm$ 0.95 & 0.74, 0.96 $\pm$ 0.85\\ \hline 0.03 & 100\% & 0.91, 0.94 $\pm$ 0.95 & 0.92, 0.86 $\pm$ 0.97 & 0.76, 0.91 $\pm$ 0.87\\ \hline 0.07 & 100\% & 1.28, 1.63 $\pm$ 1.12 & 1.17, 1.36 $\pm$ 1.06 & 1.00, 0.93 $\pm$ 1.01\\ \hline 0.11 & 73.33\% & 1.45, 1.55 $\pm$ 1.21 & 1.55, 1.60 $\pm$ 1.22 & 1.11, 1.16 $\pm$ 1.04\\ \hline 0.15 & 51.67\% & 2.5, 1.67 $\pm$ 1.59 & 2.54, 1.59 $\pm$ 1.57 & 1.75, 1.11 $\pm$ 1.35\\ \hline \end{tabular}} \label{tab:success_rates_noise} \vspace{-5mm} \end{table*} \begin{table} \centering \caption{Success rates in terrains not seen during training, tested with exteroceptive noise $\sigma_n = 3$ cm.} \scalebox{0.95}{\begin{tabular}{|c|c|} \hline Terrain & \textbf{Success Rate} \\ \thickhline Barriers & 78.33$\%$ \\ \hline Ditches &
71.16$\%$ \\ \hline Alternating stairs (full) & 56.66$\%$ \\ \hline Alternating stairs (first block) & 96.66$\%$ \\ \hline \end{tabular}} \label{tab:success_unseen} \vspace{-4mm} \end{table} \vspace{-2mm} \subsection{Analysis of Perception-Action Reflex} \label{sec:res:reflex} The introduction of perceptual information in the state observation enables the control policy to anticipate the presence of obstacles and adapt the gait as needed to overcome the obstacle, as in Figures \ref{fig:all_tests}(a) and \ref{fig:stairs_climbing_1}: the responsive motion of the fore legs \textit{anticipates} and reacts to the presence of obstacles. Figure \ref{fig:stairs_climbing_1} presents fore leg joint trajectories during stairs climbing, showing how the perceptual state input gives rise to a perception-action reflex that allows the policy to step over the steps without colliding. Further, dedicated tests were performed to analyze the influence of the exteroceptive inputs on policy performance, where one or more rays were forcefully set to the minimum allowed threshold. These ablation tests, illustrated in Figure \ref{fig:all_tests}(e), show that when any one of the 8th-11th rays (as per the numeration in Figure \ref{fig:ray_res}) is ablated, the performance on various terrains remains within $5\%$ difference in success rate of that in Table \ref{tab:success_rates}, i.e., tests without any ablation. However, when all 8th-11th rays (as per Figure \ref{fig:ray_res}) were ablated, the success rate on stairs dropped to $40-55\%$, depending on the reference forward velocity, with failure typically observed while approaching or during stairs descent. Whenever any one or multiple rays from 1-7 as per Figure \ref{fig:ray_res} were ablated, the policy was unable to succeed on any terrain.
This is to be expected as the policy does not have any recurrent connections within the neural network, i.e., the policy is memory-less and thus heavily relies on exteroceptive observations in close proximity to the fore legs. In conclusion, our analysis highlights the importance of the perceptual inputs that trigger the perception-action reflex and enable successful locomotion over the terrains tested. \vspace{-2mm} \subsection{Statistical Performance on Seen Terrains} Results of the policy performance on tasks involving terrains seen during training are in Table \ref{tab:success_rates}; such terrains are shown in Figure \ref{fig:all_tests}(a)-(c). For tasks on step, ramp, and stairs, tests are presented with forward reference velocities of 0.4, 0.6, and 0.8 m/s. Terrain heights were uniformly sampled from values ranging between 10-20 cm for the step height of the step and stairs terrains, and slope angles of $5^{\circ}$-$11^{\circ}$ for the ramp terrain. For tasks on flat ground, forward reference velocities of $\pm$0.7 and $\pm$0.5 m/s are tested, as well as lateral reference velocities of $\pm$0.3 m/s, and two tasks with omnidirectional base commands that require diagonal walking with forward and lateral velocities (0.3, 0.3) and (-0.5, -0.3) m/s. Table \ref{tab:success_rates} shows the high success rate of the trained policy, where all tasks present 100$\%$ success except for two tasks on stairs, which nevertheless present success above 90$\%$. The training curriculum also enables the policy to generalize to perform turning while walking as shown in Figure \ref{fig:all_tests}(d), achieved by constantly adjusting $\psi_{\text{ref}}$ such that $\vert \psi_{\text{ref}} - \psi \vert = 0.3$, whereas in training $\psi_{\text{ref}} = \psi_{\text{initial}}$.
Moreover, we evaluated the performance of the trained policy for the stairs task with five levels of directly added Gaussian noise with standard deviation $\sigma_n$ in the exteroceptive measurements, as shown in Table \ref{tab:success_rates_noise} and Figure \ref{fig:all_tests}(f). The success rate of the policy is not affected while $\sigma_n$ is within 1-7 cm, which well covers the level of noise exhibited by commercially available hardware for the depth values we consider. Our evaluation found that performance degrades progressively as $\sigma_n$ continues to increase to 11-15 cm. While the policy was trained in a noiseless setting, the above tests show that performance is unaffected for low values of added noise within 7 cm. We hypothesize this may be due to the oscillatory motion the body exhibits during training which in itself yields small fluctuations in the depth measurements, although this hypothesis remains untested and could be incorrect as these fluctuations are correlated to one another and to the robot joint configuration. \vspace{-2mm} \subsection{Statistical Performance on Unseen Terrains} We test the policy on various terrains not seen during training to evaluate its generalizability and determine if the proposed training curriculum is well designed to enable one single policy to traverse different terrains. To do so, we test the policy in three unseen terrains as shown in Figure \ref{fig:all_tests}(g)-(i): barriers (g), ditches (h), and alternating stairs (i). We perform these tests at reference velocity $v_x = 0.6, v_y = 0$, and with $\sigma_n = 3$ cm to further demonstrate policy robustness. The barriers consist of square sections of 10, 15, and 20 cm. The ditches are platforms raised above ground with inter-platform separation of 5, 10, 15, 20, 30, 40 cm, and height of 15 cm. The alternating stairs consist of two blocks of stairs with steps lengths varying between 20 and 55 cm, whereas our policy was only trained on step length 30 cm. 
Success rates for the tests in these unseen terrains with exteroceptive noise are in Table \ref{tab:success_unseen}, obtained from 60 trials. The policy generalizes well to barriers and ditches. In alternating stairs, we found that the most frequent failures occur during the transition between two blocks of stairs, while the hind legs are still on the last step of the first block. Since the policy is memory-less and the forward-facing visual observations only see the front, the hind legs have limited ability to overcome the gap between stairs, resulting in a lower success rate of 56.66$\%$. The first block of stairs was traversed with a success rate of 96.66$\%$, demonstrating adaptability to stairs of varying step lengths. \section{Conclusion and Future Work} \label{sec:conclusion} This work investigates a learning scheme for perceptual locomotion using only sparse visual feedback. Our policy is able to learn the state-action mapping with exteroceptive observations and perform successfully on steps, ramps and stairs. The latter terrains are precisely the cases where blind locomotion policies cannot achieve the ascent or descent of high stairs at a 0.8 m/s velocity command. Further, our policy is robust to exteroceptive noise and ablations, and generalizes well to multiple unseen terrains featuring barriers, ditches, and alternating stairs. The results suggest the potential of this method for producing more affordable robotic solutions, which can be equipped with simple, low-cost, and low-weight vision systems while still being able to accomplish designated industrial applications. As commercially available quadruped robots have dropped significantly in cost (less than \$2,000 per unit), achieving cost-effective perceptual locomotion can build strong momentum for the adoption of legged machines in a wider range of industrial and human-centered applications.
Within the scope of this work, our design goal is to investigate whether perceptual locomotion can be learned from sparse visual inputs, so the exteroceptive observations are kept at a very low dimension in pursuit of computationally efficient and functionally effective solutions. However, this simplicity trades off the completeness of information: the policy only sees the height in-between two legs; when terrain heights are asymmetric between the left and right legs, sparse rays do not scan these details; and though the robot demonstrated good performance on steps and stair-like surfaces, such a low scanning resolution is inevitably limited on more complex surfaces or smaller obstacles. Hence, for future work, it will be interesting to exploit attention mechanisms for visual perception, where more detailed scanning is triggered only when necessary, and sparse observations are otherwise used for lower computational power consumption and longer mission time.
\section{Introduction} \label{sec:intro} The need to understand and analyze multi-dimensional data and their interdependencies has led to the extended use of tensors in diverse scientific fields, ranging, for example, from medicine to geodesy. An overview of tensor applications can be found in \cite{rabanser2017introduction}, \cite{sidiropoulos2017tensor}. Canonical Polyadic Decomposition (CPD) or Parallel Factor Analysis (PARAFAC) is a widely used model since, in many cases, it extracts meaningful structure from a given dataset. Alternating Optimization (AO), All-at-Once Optimization (AOO), and Multiplicative Updates (MUs) are the workhorse methods towards the computation of the CPD \cite{cichocki2009nonnegative}, \cite{CichockiMPCZZL14}. However, the very large size of the collected tensor data makes the implementation of these algorithms very computationally demanding. Recently, various approaches have been introduced in order to deal with large-scale CPD problems. An obvious approach is the development and implementation of parallel algorithms (distributed or shared-memory) \cite{Ballard_et_al_2018}, \cite{smith2016medium}, \cite{liavas2017nesterov}. From a different perspective, stochastic gradient based algorithms have gained much attention, since they are relatively easy to implement, have lower computational cost, and can guarantee accurate solutions. \subsection{Related Work} \label{subsec:Related_Work} Sub-sampling of the target tensor $\mathbfcal{X}$ using regular sampling techniques has been introduced in \cite{8902959}. In \cite{vervliet2015randomized}, entries of the target tensor are sampled in a random manner and the respective blocks of the latent factors are updated at each iteration. In \cite{papalexakis2012parcube} and \cite{sidiropoulos2014parallel}, a distributed framework is employed, where smaller replicas of the target tensor are independently factored.
The resulting factors of each independent decomposition are effectively merged at the end to obtain the final latent factors. In \cite{battaglino2018practical} and \cite{fu2020block}, a set of fibers is randomly selected at each iteration and a stochastic proximal gradient step is performed. We improve upon the work of \cite{fu2020block} by incorporating Nesterov acceleration at each iteration and a proximal term to deal with ill-conditioned cases. \subsection{Notation} \label{subsec:Notation} Vectors and matrices are denoted by lowercase and uppercase bold letters, for example, $\mathbf{x}$ and $\mathbf{X}$. Tensors are denoted by bold calligraphic capital letters, namely, $\mathbf{\mathbfcal{X}}$. $\| \cdot \|_F$ denotes the Frobenius norm of the matrix or tensor argument. $\mathbf{A} \odot \mathbf{B}$ and $\mathbf{A} \circledast \mathbf{B}$ denote, respectively, the Khatri-Rao and the Hadamard product of matrices ${\bf A}$ and ${\bf B}$. The outer product between vectors is denoted by the operator $\circ$. Finally, MATLAB notation is used when it seems appropriate. \section{Canonical Polyadic Decomposition} \label{sec:cpd} Let tensor $\mathbfcal{X}^o \in \mathbb{R}^{I_1 \times I_2 \times \dots \times I_N}$ admit a rank-$R$ factorization of the form \begin{equation} \mathbfcal{X}^o \hspace{-0.1cm}= \mbox{\textlbrackdbl} \mathbf{A}^{o(1)}, \dots, \mathbf{A}^{o(N)} \mbox{\textrbrackdbl} = \sum_{r=1}^R \mathbf{a}_r^{o(1)}\hspace{-.1cm}\circ \dots \circ \mathbf{a}_r^{o(N)}, \label{CPD_def} \end{equation} where $\mathbf{A}^{o(i)}=[\mathbf{a}_1^{o(i)} ~\cdots ~ \mathbf{a}_R^{o(i)}]\in\mathbb{R}^{I_i \times R}$, for $i =1,\ldots,N$. In many cases, the latent factors have a special structure or satisfy a specific property, which is denoted as ${\bf A}^{o(i)}\in \mathbb{A}^i$. The actual observation is corrupted by additive noise $\mathbfcal{E}$, thus, we observe tensor $\mathbfcal{X} = \mathbfcal{X}^o + \mathbfcal{E}$. 
Estimates of $\mathbf{A}^{o(i)}$ can be obtained by computing matrices $\mathbf{A}^{(i)}$ that solve the optimization problem \begin{equation} \label{LS_with_con} \underset {\{ \mathbf{A}^{(i)} \in\mathbb{A}^i \}_{i=1}^n } \min f_{\mathbfcal{X}}\left(\mathbf{A}^{(1)}, \dots , \mathbf{A}^{(N)}\right), \end{equation} where $f_{\mathbfcal{X}}$ is a function that measures the quality of the factorization, with a common choice being \begin{equation} f_{\mathbfcal{X}}\left(\mathbf{A}^{(1)}, \dots , \mathbf{A}^{(N)}\right) = \left\| \mathbfcal{X} - \mbox{\textlbrackdbl} \mathbf{A}^{(1)}, \dots, \mathbf{A}^{(N)} \mbox{\textrbrackdbl} \right\|_F^2\hspace{-0.1cm}. \label{CPD_problem} \end{equation} This optimization problem is nonconvex and, thus, difficult to solve. The matrix unfolding (or tensor matricization) has been very useful towards the solution of problem (\ref{CPD_problem}). More specifically, if $\mathbfcal{\widehat{X}}= \mbox{\textlbrackdbl} \mathbf{A}^{(1)}, \dots, \mathbf{A}^{(N)} \mbox{\textrbrackdbl}$, then the matrix unfolding for an arbitrary mode $i$ is given by \cite{KoBa09}, \cite{sidiropoulos2017tensor} \vspace{1em} \begin{equation} {\bf \widehat{X}}^{(i)} = \mathbf{K}^{(i)} \mathbf{A}^{(i)T}, \label{mt_approximate_factors} \end{equation} \vspace{1em} where ${\bf K}^{(i)}$ is defined as \begin{equation} \label{K_KRPR} \mathbf{K}^{(i)} := \mathbf{A}^{(N)} \hspace{-0.1cm} \odot \dots \odot \hspace{-0.06cm} \mathbf{A}^{(i+1)} \hspace{-0.1cm} \odot \mathbf{A}^{(i-1)} \hspace{-0.1cm} \odot \dots \odot \mathbf{A}^{(1)}. \end{equation} It can be shown that, for $i=1,\ldots,N$, \begin{equation} \begin{split} f_{\mathbfcal{X}}(\mathbf{A}^{(1)}, \dots , \mathbf{A}^{(N)}) & = \,\left\| \mathbf{X}^{(i)} - \mathbf{\widehat{X}}^{(i)} \right\|_F^2. \end{split} \label{f_X_matr} \end{equation} These expressions form the basis of the CPD AO method. 
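The unfolding identity (\ref{mt_approximate_factors}) can be verified numerically on a small random CP tensor. The following NumPy sketch (dimensions and rank are arbitrary choices; function names are ours) builds a rank-$R$ tensor, forms $\mathbf{K}^{(i)}$ as in (\ref{K_KRPR}), and checks $\mathbf{\widehat{X}}^{(i)} = \mathbf{K}^{(i)}\mathbf{A}^{(i)T}$ for every mode:

```python
import numpy as np

# Numerical check of X^{(i)} = K^{(i)} A^{(i)T} on a small random rank-R
# CP tensor (N = 3; sizes and R are arbitrary choices for the demo).
def khatri_rao(mats):
    """Column-wise Khatri-Rao product; the leftmost factor varies slowest."""
    out = mats[0]
    for m in mats[1:]:
        out = np.einsum('ir,jr->ijr', out, m).reshape(-1, out.shape[1])
    return out

def unfold_T(X, i):
    """Transposed mode-i unfolding: rows index the remaining modes."""
    return np.moveaxis(X, i, 0).reshape(X.shape[i], -1, order='F').T

rng = np.random.default_rng(0)
dims, R = (4, 5, 6), 3
A = [rng.standard_normal((n, R)) for n in dims]
X = np.einsum('ir,jr,kr->ijk', *A)          # X = [[A^(1), A^(2), A^(3)]]

for i in range(3):
    # K^{(i)} = A^{(N)} (kr) ... (kr) A^{(i+1)} (kr) A^{(i-1)} (kr) ... (kr) A^{(1)}
    K = khatri_rao([A[j] for j in reversed(range(3)) if j != i])
    assert np.allclose(unfold_T(X, i), K @ A[i].T)
```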
If, at iteration $k$, the estimated matrix factors have values $\mathbf{A}_k^{(j)}$, for $j=1,\ldots,N$, we can update $\mathbf{A}_k^{(i)}$ by solving the matrix least-squares problem \begin{equation} \label{LS} \mathbf{A}_{k+1}^{(i)} = ~ \underset{\mathbf{A}^{(i)}\in\mathbb{A}^i}{\rm argmin} ~ \| \mathbf{X}^{(i)} - \mathbf{K}_k^{(i)} \mathbf{A}^{(i)T} \|_F^2, \end{equation} where \begin{equation} \label{K_KRPR_k} \mathbf{K}_k^{(i)} := \mathbf{A}_k^{(N)} \hspace{-0.1cm} \odot \dots \odot \hspace{-0.06cm} \mathbf{A}_k^{(i+1)} \hspace{-0.1cm} \odot \mathbf{A}_{k+1}^{(i-1)} \hspace{-0.1cm} \odot \dots \odot \mathbf{A}_{k+1}^{(1)}. \end{equation} The gradient of $f_{\mathbfcal X}$, with respect to ${\bf A}^{(i)}$, is given by \begin{equation} \begin{split} & \nabla_{{\bf A}^{(i)}} f_{\mathbfcal{X}}({\bf A}_k^{(1)},{\bf A}_k^{(2)}, \ldots, {\bf A}_k^{(N)}) \cr & \quad \qquad = {\bf A}_k^{(i)} {\bf K}_k^{(i)T} {\bf K}_k^{(i)} - {\bf X}^{(i)T} {\bf K}_k^{(i)}. \end{split} \end{equation} Quantity ${\bf X}^{(i)T} {\bf K}_k^{(i)}$ is called Matricized Tensor Times Khatri-Rao Product (MTTKRP) and is the most computationally demanding part of the CPD AO algorithm. Thus, the development of efficient algorithms that do not compute a full MTTKRP during each iteration is of great interest. \section{Stochastic Gradient CPD - BrasCPD} \label{sec:BrasCPDaccel} In \cite{vervliet2015randomized}, a stochastic gradient method has been developed for the unconstrained case, where, at each iteration, only a part of a factor is updated. A fiber sampling technique has been developed in \cite{battaglino2018practical} and \cite{fu2020block}, where, at each iteration, a whole factor is updated. The scheme proposed in \cite{fu2020block}, named BrasCPD, can handle both unconstrained and constrained problems. BrasCPD combines the fiber sampling technique with the AO algorithm.
Assume, again, that, at the beginning of iteration $k$, the values of the estimated factors are ${\bf A}_k^{(j)}$, for $j=1,\ldots,N$. At iteration $k$, an index $i$ is picked at random. Then, $B^i$ mode-$i$ fibers are sampled, indexed by $\mathcal{F}_k^i \subset \lbrace 1,2, \dots J^i \rbrace$, where $J^i$ denotes the number of rows of ${\bf X}^{(i)}$, and a smaller problem is constructed, namely, \begin{equation} \label{LS_small} \underset{{\bf A}^{(i)}\in\mathbb{A}^i}{\min} f_k^{(i)}({\bf A}^{(i)}), \end{equation} where \begin{equation*} f_k^{(i)}({\bf A}^{(i)}) = \| {\bf X}^{(i)}(\mathcal{F}_k^i,:) - {\bf K}_k^{(i)}(\mathcal{F}_k^i,:){\bf A}^{(i)T} \|_F^2. \end{equation*} BrasCPD performs a proximal gradient step and updates ${\bf A}^{(i)}_k$ as \begin{equation} {\bf A}^{(i)}_{k+1} = {\rm prox}_{h_i} \left( {\bf A}^{(i)}_{k} - \frac{\alpha_{k}}{|{\cal F}_k^i|} \nabla f_k^{(i)}({\bf A}) \right), \end{equation} where $h_i$ is the indicator function of set $\mathbb{A}^i$ and \begin{equation} \begin{split} \nabla f_k^{(i)}({\bf A}) & = {\bf A}^{(i)}_k{\bf K}_k^{(i)T}(\mathcal{F}_k^i,:){\bf K}_k^{(i)}(\mathcal{F}_k^i,:) \cr & \quad\quad - {\bf X}^{(i)T}(\mathcal{F}_k^i,:){\bf K}_k^{(i)}(\mathcal{F}_k^i,:). \end{split} \label{nabla_f_k_i} \end{equation} The other factors do not change during iteration $k$, that is, for $j\ne i$, ${\bf A}^{(j)}_{k+1} = {\bf A}^{(j)}_{k}$. Note that the computational cost of the {\em partial}\/ MTTKRP appearing in (\ref{nabla_f_k_i}) drops to $\mathcal{O}(|\mathcal{F}_k^i| R I_i )$ flops. The performance of the algorithm is mainly determined by the step-sizes $\alpha_k$ \cite{bottou2018optimization}. In BrasCPD, diminishing step-sizes are employed, namely, $\alpha_k = \frac{\alpha}{k^{\beta}}$, for appropriate values of parameters $\alpha$ and $\beta$. A method based on Adagrad \cite{duchi2011adaptive}, called AdaCPD, has also been proposed in \cite{fu2020block}. 
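A minimal sketch of one such fiber-sampled update loop for a nonnegative CPD is given below. Sizes and helper names are illustrative; for stability of this toy example we use a per-step size $1/L$ computed from the sampled Gram matrix rather than the diminishing schedule $\alpha_k = \alpha/k^{\beta}$, and with nonnegativity constraints ${\rm prox}_{h_i}$ reduces to entrywise projection onto $[0,\infty)$:

```python
import numpy as np

rng = np.random.default_rng(1)

def khatri_rao(mats):
    # Column-wise Kronecker product; first matrix in the list varies slowest.
    R = mats[0].shape[1]
    out = mats[0]
    for M in mats[1:]:
        out = (out[:, None, :] * M[None, :, :]).reshape(-1, R)
    return out

def unfold(X, i):
    # Mode-i unfolding with rows ordered to match the Khatri-Rao product above.
    N = X.ndim
    perm = [n for n in reversed(range(N)) if n != i] + [i]
    return np.transpose(X, perm).reshape(-1, X.shape[i])

# noiseless nonnegative rank-R tensor (illustrative sizes)
I, R, B = (15, 16, 17), 4, 40
A_true = [rng.uniform(size=(n, R)) for n in I]
X = np.einsum('ir,jr,kr->ijk', *A_true)
Xmat = [unfold(X, i) for i in range(3)]
nrmX = np.linalg.norm(X)

A = [rng.uniform(size=(n, R)) for n in I]            # random initialization

def rel_err(A):
    return np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', *A)) / nrmX

err0 = rel_err(A)
for k in range(3000):
    i = rng.integers(3)                              # pick a mode at random
    K = khatri_rao([A[n] for n in reversed(range(3)) if n != i])
    F = rng.choice(K.shape[0], size=B, replace=False)    # sampled mode-i fibers
    KF = K[F]
    G = A[i] @ (KF.T @ KF) - Xmat[i][F].T @ KF       # partial-MTTKRP gradient
    step = 1.0 / np.linalg.eigvalsh(KF.T @ KF)[-1]   # 1/L of the sampled problem
    A[i] = np.maximum(A[i] - step * G, 0.0)          # prox for A^{(i)} >= 0
err1 = rel_err(A)
```

Each iteration touches only $B$ rows of ${\bf X}^{(i)}$ and ${\bf K}_k^{(i)}$, which is the source of the cost reduction discussed above.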
AdaCPD computes its step-sizes using an accumulated-gradient mechanism with parameters $\eta > 0$, $\beta > 0$, and $\epsilon>0$, namely \vspace{1em} \begin{equation} \alpha_k^{(i)} = \frac{\eta}{\left( \beta + \sum_{l=1}^{k}{\nabla f_l^{(i)}({\bf A})} \circledast {\nabla f_l^{(i)}({\bf A})}\right)^{1/2 + \epsilon}}, \end{equation} where the power and the division are applied elementwise. \section{Accelerated Stochastic Gradient CPD} \label{section_ASCPD} We propose an accelerated version of BrasCPD, which we call Accelerated Stochastic CPD (ASCPD). At iteration $k$, we follow the same sampling scheme as in \cite{fu2020block}, \cite{battaglino2018practical}, and form the problem \begin{equation} \label{LS_small_NAG} \underset{{\bf A}^{(i)}\in\mathbb{A}^i}{\rm min} F_k^{(i)}({\bf A}^{(i)}) = f_k^{(i)}({\bf A}^{(i)}) + \frac{\lambda^{(i)}_k}{2} \| {\bf A}^{(i)} - {\bf A}^{(i)}_k\|_F^2. \end{equation} The parameter $\lambda^{(i)}_k$ determines the condition number of the problem and is chosen such that the condition number remains ``small.'' More specifically, the Hessian of $f_k^{(i)}$ is \vspace{1em} \begin{equation} \hspace{-.1cm} {\bf H}_k^{(i)}\hspace{-.1cm}:= \hspace{-.05cm}\nabla^2 f_k^{(i)}({\bf A}^{(i)}) = {\bf K}_k^{(i)T}({\cal F}^i_k,:) {\bf K}_k^{(i)}({\cal F}^i_k,:) \otimes {\bf I}_{I_i}. \label{Hessian_f_k_i} \end{equation} Let $L_k^{(i)}$ and $\mu_k^{(i)}$ be, respectively, the largest and smallest eigenvalues of ${\bf H}_k^{(i)}$. We choose $\lambda_k^{(i)}$ such that the condition number of (\ref{LS_small_NAG}) becomes ``almost constant,'' that is \vspace{1em} \begin{equation} \frac{\bar{L}^{(i)}_k}{\bar{\mu}^{(i)}_k}:= \frac{L_k^{(i)}+\lambda_k^{(i)}}{\mu_k^{(i)}+\lambda_k^{(i)}} \lesssim {\cal C}, \end{equation} \vspace{1em} where ${\cal C}$ is a given value (in our experiments, we set ${\cal C}=10, 10^2, 10^3$). More specifically, we set \begin{equation} \lambda_k^{(i)}=\left\{ \begin{array}{ll} \mu_k^{(i)}, & \mbox{if}~\frac{L_k^{(i)}}{\mu_k^{(i)}} < {\cal C}, \cr \frac{L_k^{(i)}}{\cal C}, & \mbox{otherwise}. \end{array} \right.
\end{equation} We perform a proximal step \begin{equation} \label{update_NAG} {\bf A}^{(i)}_{k+1} = {\rm prox}_{h_i} \left( {\bf Y}^{(i)}_{k} - \frac{1}{\bar{L}^{(i)}_k} \nabla F_k^{(i)}({\bf Y}_k^{(i)}) \right), \end{equation} followed by a momentum step \begin{equation} \label{interpolation_NAG} {\bf Y}_{k+1}^{(i)} = {\bf A}_{k+1}^{(i)} + \beta_k^{(i)} \left( {\bf A}_{k+1}^{(i)} - {\bf A}_k^{(i)} \right), \end{equation} where \begin{equation} \beta_k^{(i)} := \frac{ \sqrt{\bar{L}_k^{(i)}}-\sqrt{\bar{\mu}_k^{(i)}} } {\sqrt{\bar{L}_k^{(i)}}+\sqrt{\bar{\mu}_k^{(i)}}}. \end{equation} The values of $\bar{L}^{(i)}_k$ and $\beta_k^{(i)}$ are the steps used by the ``constant step scheme III'' of the accelerated gradient algorithm \cite[p. ~81]{Nesterov_Book_2004}. They can be considered as ``locally optimal'' for problem (\ref{LS_small_NAG}). Note that we essentially compute ${\bf H}_k^{(i)}$ during the computation of $\nabla f_k^{(i)}$ (see (\ref{nabla_f_k_i}) and (\ref{Hessian_f_k_i})). Furthermore, the computation of its largest and smallest eigenvalues does not pose significant computational cost, especially in the cases of small $R$. The ASCPD algorithm appears in Algorithm \ref{BrasCPDaccel_algo}. A variation of our scheme is to use only the stochastic proximal gradient step of (\ref{update_NAG}), without the acceleration step. We will test the effectiveness of this variation in our numerical experiments. An important future research topic is the development of algorithms that fully exploit the second-order information ${\bf H}_k^{(i)}$. Some initial efforts have not resulted in algorithms superior to the one proposed in this paper, especially in the noisy cases. \begin{algorithm}[h] \SetAlgoLined \KwResult{ $\lbrace {\bf A}^{(i)} \rbrace_{i=1}^N$} \textbf{Input:}{$~ {\rm tensor}~ \mathbfcal{X}, {\bf A}^{(i)}_{0}={\bf Y}^{(i)}_{0}$, $i=1,\ldots,N$, blocksizes $B^i$, $i=1,\ldots,N$. 
\\$\textit{k} = 0$}\; \While{terminating condition is not satisfied}{ { Uniformly sample $i$ from $\lbrace 1,2, \dots, N \rbrace$ \; Uniformly sample $B^i$ mode-$i$ fibers\; Compute stochastic gradient $\nabla F^{(i)}_k({\bf Y}_k^{(i)})$\; Compute $L_k^{(i)}$, $\mu_k^{(i)}$, and $\lambda_k^{(i)}$\; Compute ${\bf A}^{(i)}_{k+1}$ using (\ref{update_NAG}) \; Compute ${\bf Y}^{(i)}_{k+1}$ using (\ref{interpolation_NAG}) \; \textit{k} = \textit{k} + 1\; } } \caption{ASCPD} \label{BrasCPDaccel_algo} \end{algorithm} \section{Numerical Experiments} \label{sec:num_exps} In this section, we run numerical experiments and test the performance, in terms of convergence speed and estimation accuracy, of the NALS algorithm of \cite{liavas2017nesterov}, AdaCPD, ASCPD, and BrasCPD with locally optimal step-size, using both synthetic and real-world data. In our experiments, AdaCPD always outperformed BrasCPD with diminishing step-sizes; thus, we do not consider that variant further. For both synthetic and real-world data, the results are extracted after averaging over $10$ Monte-Carlo trials.
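We note that the step-size machinery of ASCPD is inexpensive: by (\ref{Hessian_f_k_i}), the eigenvalues of ${\bf H}_k^{(i)}$ coincide with those of the $R \times R$ Gram matrix ${\bf K}_k^{(i)T}({\cal F}^i_k,:) {\bf K}_k^{(i)}({\cal F}^i_k,:)$, so $L_k^{(i)}$, $\mu_k^{(i)}$, $\lambda_k^{(i)}$, and $\beta_k^{(i)}$ follow from one small symmetric eigendecomposition. A minimal sketch (sizes and variable names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
C = 100.0                              # target condition number (we use 10..10^3)
KF = rng.uniform(size=(500, 20))       # sampled rows of the Khatri-Rao product

# Eigenvalues of the Hessian equal those of the R x R Gram matrix.
w = np.linalg.eigvalsh(KF.T @ KF)      # ascending order
L, mu = w[-1], w[0]
lam = mu if L / mu < C else L / C      # keep the condition number <~ C
Lbar, mubar = L + lam, mu + lam
beta = (np.sqrt(Lbar) - np.sqrt(mubar)) / (np.sqrt(Lbar) + np.sqrt(mubar))
```

Either branch of the $\lambda_k^{(i)}$ rule keeps $\bar{L}^{(i)}_k/\bar{\mu}^{(i)}_k$ below ${\cal C}+1$, so the momentum parameter stays bounded away from one.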
\subsection{Synthetic Data} \begin{figure} \centerline{\includegraphics[width=9.5cm]{noise_F_100_bs_500_SNR_10-eps-converted-to.pdf}} \centerline{\includegraphics[width=9.5cm]{noise_F_100_bs_500_SNR_1000-eps-converted-to.pdf}} \caption{Relative tensor reconstruction error for $I_1=I_2=I_3=200$, $R=100$, $B=500$, and ${\tt SNR} = 10$\,dB (top) and $30$\,dB (bottom).} \label{fig_I_200_R_100_B_500_SNR_10_30} \end{figure} \begin{figure} \centerline{\includegraphics[width=9.5cm]{noise_F_100_bs_100_SNR_10-eps-converted-to.pdf}} \centerline{\includegraphics[width=9.5cm]{noise_F_100_bs_100_SNR_1000-eps-converted-to.pdf}} \caption{Relative tensor reconstruction error for $I_1=I_2=I_3=200$, $R=100$, $B=100$, and ${\tt SNR} = 10$\,dB (top) and $30$\,dB (bottom).} \label{fig_I_200_R_100_B_100_SNR_10_30} \end{figure} We generate a third-order nonnegative tensor $\mathbfcal{X}^o \in \mathbb{R}_+^{I_1 \times I_2 \times I_3}$ as $\mathbfcal{X}^o = \mbox{\textlbrackdbl} \mathbf{A}^{o(1)}, \dots, \mathbf{A}^{o(N)} \mbox{\textrbrackdbl}$, where the elements of each factor are independent and identically distributed (i.i.d.) uniform in $[0,1]$. The noisy tensor is given by $\mathbfcal{X} = \mathbfcal{X}^o + \sigma_{\epsilon} \mathbfcal{E}$, where the elements of $\mathbfcal{E}$ are i.i.d. ${\cal N}(0,1)$. We define the Signal-to-Noise Ratio (SNR), which we quote in dB, \begin{equation*} {\tt SNR} := \frac{\| \mathbfcal{X}^o \|^2_F}{\sigma_{\epsilon}^2 \| \mathbfcal{E} \|^2_F}. \end{equation*} We adopt as performance metric the relative tensor reconstruction error at iteration $k$ \begin{equation*} m_k:= \frac{\| \mathbfcal{X} - \mathbfcal{\widehat{X}}_k\|_F}{\| \mathbfcal{X} \|_F}. \end{equation*} All stochastic gradient based algorithms use the same blocksize, that is, $B^1=B^2=B^3=B$.
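A minimal sketch of this data-generation step follows (sizes are illustrative; following the figure captions, SNR values are converted to dB as $10\log_{10}$ of the ratio above). It also exposes the noise floor of the metric: even a perfect reconstruction $\mathbfcal{\widehat{X}}_k = \mathbfcal{X}^o$ leaves $m_k \approx (1+{\tt SNR})^{-1/2}$, because the metric compares against the noisy tensor.

```python
import numpy as np

rng = np.random.default_rng(3)

# rank-R nonnegative ground truth (illustrative sizes)
I, R = (30, 30, 30), 10
A = [rng.uniform(size=(n, R)) for n in I]
X0 = np.einsum('ir,jr,kr->ijk', *A)

# choose sigma_eps so that the linear-scale SNR matches a target value in dB
E = rng.standard_normal(I)
snr_db = 10.0
snr_lin = 10.0 ** (snr_db / 10.0)
sigma = np.sqrt(np.sum(X0**2) / (snr_lin * np.sum(E**2)))
X = X0 + sigma * E

# noise floor of m_k: relative error when Xhat equals the noiseless tensor,
# approximately 1/sqrt(snr_lin + 1)
m_floor = np.linalg.norm(X - X0) / np.linalg.norm(X)
```
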
Furthermore, we note that one full iteration of the NALS algorithm of \cite{liavas2017nesterov} requires the computation of four full MTTKRPs (three for the factor updates and one for the acceleration step). Thus, in order to be fair, in our plots, we depict the performance metric $m_k$ attained by each algorithm after the {\em same number of full iterations}, that is, we compute the performance metric for each stochastic algorithm after the required number of stochastic iterations that correspond to one full iteration of the NALS algorithm. In Fig. \ref{fig_I_200_R_100_B_500_SNR_10_30}, we plot the metric $m_k$ versus the number of full iterations for the case where $I_1=I_2=I_3=200$, $R=100$, $B=500$, and SNR = $10$\,dB (top) and $30$\,dB (bottom). In Fig. \ref{fig_I_200_R_100_B_100_SNR_10_30}, we set $B=100$ and present the corresponding plot. In all cases, we set the Adagrad parameter $\eta=1$ (in our experiments, this was the best value), while we set the ASCPD parameter ${\cal C}= 10$ in the low SNR cases and ${\cal C}= 10^2, 10^3$ in the high SNR cases. We observe that \begin{enumerate} \itemsep0em \item in the low SNR cases, the NALS algorithm outperforms all stochastic gradient based approaches. The relative performance of AdaCPD and ASCPD depends on the block size. For ``small'' block sizes, AdaCPD outperforms the ASCPD, while, for ``large'' block sizes, the opposite happens. \item in the high SNR cases, the ASCPD outperforms all other methods. We note that the BrasCPD with locally optimal step-size in some cases outperforms AdaCPD. 
\end{enumerate} \begin{figure} \centerline{\includegraphics[width=9.5cm]{ip_F_10-eps-converted-to.pdf}} \centerline{\includegraphics[width=9.5cm]{ip_F_100-eps-converted-to.pdf}} \caption{Indian Pines dataset: relative tensor reconstruction error for $B=500$, and $R=10$ (top), $R = 100$ (bottom).} \label{fig_indianpine} \end{figure} \begin{figure} \centerline{\includegraphics[width=9.5cm]{pU_F_50-eps-converted-to.pdf}} \centerline{\includegraphics[width=9.5cm]{pU_F_200-eps-converted-to.pdf}} \caption{PaviaU dataset: relative tensor reconstruction error for $B=500$ and $R=50$ (top), $R = 200$ (bottom).} \label{fig_PaviaU} \end{figure} \subsection{Real-world Data} \label{subsec:real} Similarly to \cite{fu2020block}, we use the Indian Pines and PaviaU datasets, which are Hyperspectral Images (HSIs). HSI sensors collect data as a group of images over different wavelength ranges. The resulting data-cube is a third-order tensor. The Indian Pines dataset is of size $145 \times 145 \times 220$ and consists of data acquired via the AVIRIS sensor over the Indian Pines site in Indiana (USA). The PaviaU dataset has size $610 \times 340 \times 103$ and consists of a scene of Pavia University in Italy.\footnote{Datasets available at \url{http://www.ehu.eus/ccwintco/index.php/Hyperspectral_Remote_Sensing_Scenes}.} In Fig. \ref{fig_indianpine}, we plot the quantity $m_k$ versus the number of full iterations for the Indian Pines dataset for $B=500$ and $R=10$ (top) and $R=100$ (bottom). In Fig. \ref{fig_PaviaU}, we plot the corresponding results for the PaviaU dataset for $B=500$ and $R=50$ (top) and $R=200$ (bottom). In all cases, we set $\eta=2$ and ${\cal C}=10^3$. We observe that, in all cases, the ASCPD outperforms all other methods in terms of $m_k$. \section{Conclusion} \label{sec:conclusions} We adopted a stochastic gradient based framework for the solution of the structured CPD problem.
We improved upon known stochastic proximal gradient schemes by incorporating Nesterov-type acceleration using parameters that are ``locally optimal.'' In numerical experiments with both synthetic and real-world data, our algorithm proved to be efficient. Convergence analysis of the proposed algorithm is an interesting future topic. \balance \bibliographystyle{IEEEtran} \input{Accelerated_Stochastic_CPD.bbl} \end{document}
\section{Introduction} \label{sec:intro} Cosmological phase transitions represent a dramatic change in the properties of the Universe over a relatively short period of its history, and may play an important role in our understanding of conditions today. Already in the Standard Model (SM), the QCD phase transition at which \(SU(3)\) color confines \cite{OLIVE1981483} is thought to separate a phase dominated by free quarks and gluons from one where the relevant degrees of freedom are baryons, and the electroweak (EW) phase transition demarcates a period where the electroweak gauge symmetry is exact from one in which the weak bosons and SM fermions have non-zero masses. In the context of physics beyond the Standard Model (BSM), first order phase transitions (FOPTs) are frequently invoked to catalyze interesting dynamics. For instance, the interactions between the thermal bath and the expanding bubbles of true vacuum typically present in a FOPT play a central role in mechanisms such as electroweak baryogenesis \cite{Kuzmin:1985mm, Carena:1996wj, Cohen:1993nk, Croon_2018}, where the FOPT realizes a departure from thermal equilibrium -- one of the three necessary conditions required for baryogenesis \cite{Sakharov:1967dj}. In this article, we investigate a FOPT producing a large shift in the mass of a BSM particle \(\chi\), and explore how this leads to an interesting interplay between the role of \(\chi\) decay and \(\chi \chi\) annihilation into SM particles during the FOPT itself. After bubbles of the true vacuum nucleate, the $\chi$ mass can be radically different inside and outside. As the bubbles expand and collide (using the terminology of Ref.~\cite{Asadi:2021pwo}), segmented ``pockets'' of unbroken phase remain, and experience contraction as the bubbles grow to fill the entire Universe.
While this happens, $\chi$ particles in the pockets reflect off the bubble walls due to the large \(M^{\rm{in}}_{\chi}/T\) in the broken phase and as a result are trapped in the pockets, ``squeezing'' them together. We focus on the interplay between decay and annihilation processes during the pocket collapse, and analyze under which situations one or the other can become the dominant mechanism depleting the particles. Generically, one would expect that decays, if allowed, would dominate over annihilation processes such that the depletion is governed by the decays alone. However, as they are squeezed inside a contracting pocket, the particle densities may grow large enough to enhance the annihilation rate so that a significant number of \(\chi\) annihilate rather than decay, even for large decay widths. We find that, depending on the parameters of the theory, the decay and annihilation can compete or be relevant at different times during the phase transition. This mechanism thus provides a novel relationship between the depletion processes, and can open up large regions of the parameter space in which annihilation can become important or even dominate over decay. As a specific application, we apply this scenario to baryogenesis. Interference between tree-level and loop-level diagrams can lead to a CP asymmetry in both decay and annihilation, and even if the decay and annihilation processes are governed by the same couplings (which they need not be), there are additional contributions to a CP asymmetry from the annihilation processes, and therefore the asymmetries generated by decay and annihilation are not constrained to be the same. We work in a generic framework, in which a FOPT traps the particles to decay or annihilate in the pockets of unbroken phase.
Previous related work \cite{Baldes:2021vyz} has examined baryogenesis in a similar context with relativistic bubble walls, but under the assumption that the effect of reflection off the bubble walls is negligible. Recent studies have investigated similar ideas in the context of dark matter (DM), and how the DM relic abundance may be set by interactions with non-relativistic bubble walls via a ``filtering'' effect \cite{Baker:2019ndr,Chway:2019kft}, leading to an exponentially suppressed abundance of DM inside the bubbles. Other work has focused on the fate of the DM particles that reflect off the bubble wall and are trapped in the unbroken phase. The particles trapped in the pockets are eventually ``squeezed'' together, leading to a number of possible outcomes, depending on the specifics of their interactions. The squeezing could enhance their annihilation rate, which may determine the DM relic density \cite{Asadi:2021pwo}, or increase the density sufficiently to create compact objects such as primordial black holes or Fermi-balls \cite{Baker:2021nyl,Marfatia:2021twj}, which may themselves play the role of dark matter in the Universe today. Our paper is organized as follows. Section~\ref{sec:mech} introduces the general framework and outlines the relevant features of a first order phase transition. Section~\ref{sec:squeeze} examines the interplay between decay and squeezed annihilation via the Boltzmann equation, and determines whether decay or annihilation is the dominant depletion process. Section~\ref{sec:asymmetry} discusses the asymmetry that is generated for different amounts of decay and annihilation. Section~\ref{sec:GW} shows the gravitational wave spectrum that could be produced within this general framework. We reserve Section~\ref{sec:conclusions} for our conclusions and outlook.
\section{General Scenario} \label{sec:mech} \begin{figure*} \centering \includegraphics[width = .49\textwidth]{Bubble_Filtering_Diagram.pdf} \includegraphics[width = .49\textwidth]{Squeezing_Diagram.pdf} \caption{A cartoon depiction of the bubbles nucleating and expanding (left). As these bubbles collide, they create contracting pockets of unbroken phase, which trap and squeeze \(\chi\) particles (black), enhancing their density (right), whereas SM particles (red) are able to traverse unimpeded. We approximate the contracting pockets to be spherical.} \label{fig:SqueezeDiagram} \end{figure*} We consider a scenario where a fermion \(\chi\) is coupled to a complex scalar \(\Phi\) described by the Lagrangian, \begin{align} \label{eq:chiL} \mathcal{L} = \overline{\chi}(i \slashed{D}) \chi- y\Phi \overline{\chi}\chi+ \rm{h.c.} - V(\Phi) \end{align} where we assume for simplicity that both \(\chi\) and \(\Phi\) are SM singlets. In order to consider a wide spectrum of scenarios, we assume that there are couplings which mediate both the decay and annihilation of \(\chi\) into appropriate SM states, but do not specify their specific form. To successfully realize baryogenesis, there must be a source of CP violation in either the new sector itself or couplings between the new sector and the SM. If there are multiple flavors of \(\chi_i\), this CP violation may come directly from the \(\Phi\) couplings, \begin{align} \mathcal{L}_{CP} \supset y_{ij}\Phi \overline{\chi}_i\chi_j + y^*_{ij}\Phi^* \overline{\chi}_j\chi_i \end{align} which could generate CP violation via vertex corrections, self-energy corrections, and other loop level processes involving \(\Phi\). For now, we consider \(\chi\) to be the lightest species of the multiple generations, with any heavier states showing up only inside these loop-level processes. The Sakharov conditions additionally require the presence of C and baryon number violation, which constrains the space of the generic couplings. 
We assume that the thermal potential for \(\Phi\) is such that at some temperature in the early Universe it undergoes a first order phase transition, nucleating bubbles in the process. The form of Eq.~(\ref{eq:chiL}) is such that at temperatures above the \(\Phi\) phase transition, the \(\chi\) have zero tree level mass. After the \(\Phi\) phase transition, the \(\chi\) are massive inside the bubbles of broken phase (the phase where \(\Phi\) has a vev) and their mass is \(M^{\text{in}}_{\chi}= y \langle \Phi \rangle\). If the ratio \(M^{\rm{in}}_{\chi}/T \gg 1\), then only the high momentum modes of \(\chi\) can penetrate the bubble wall, resulting in a large number of the \(\chi\) particles being trapped in the unbroken phase. Altogether this amounts to an out-of-equilibrium process with C, CP, and baryon number violation: all of the necessary ingredients to generate a baryon asymmetry. Throughout the remainder of the paper, we will use terminology introduced in Ref.~\cite{Asadi:2021pwo}. The regions we refer to as bubbles are the usual FOPT bubbles that nucleate and expand. As these bubbles collide, segmented regions of unbroken phase contract, which we refer to as ``pockets''. As the bubbles nucleate and expand, the particles with insufficient kinetic energy to enter the broken phase reflect off the bubble wall. The bubbles eventually collide, and isolated pockets of unbroken phase are left to contract (see Fig.\,\ref{fig:SqueezeDiagram}). During this pocket collapse, decay and annihilation processes can both be important in the depletion of \(\chi\), as shown in Fig.\,\ref{fig:GeneralBehavior} for a specific choice of parameters. Although the tree-level mass is zero in the unbroken phase, the thermal mass can allow the decays to become kinematically accessible. If the decay lifetime of \(\chi\) is shorter than the collapse time, then the \(\chi\) will start depleting via decays.
Simultaneously, the pocket contracts, enhancing the annihilation processes as the pocket squeezing increases the density of the leftover \(\chi\). Whether the decay or annihilation processes dominate in depleting the \(\chi\) abundance depends on the relationship between the decay width, \(\Gamma_{\chi}\), annihilation cross section, \(\langle \sigma v \rangle\), and the pocket collapse rate. \begin{figure}[t] \centering \includegraphics[width = \linewidth]{Decay_and_annihilation_behavior.pdf} \caption{Example of \(\chi\) evolution during the pocket collapse, showing the fraction of the \(\chi\) that are present at different radii as the pocket collapses in cases where only decay processes are present (red), only annihilation processes are present (blue), and both decay and annihilation processes are present (black).} \label{fig:GeneralBehavior} \end{figure} \subsection{Phase Transition} In general, the properties of the phase transition are largely governed by the potential \(V(\phi)\) (including thermal corrections). Typically, a cubic term arises from the high-temperature expansion of the thermal loop corrections, even if it is absent at tree level. This creates a barrier between the two minima, inducing a first order phase transition. One can generically write the finite temperature potential as \cite{Kehayias:2009tn} \begin{align} V(\phi,T) = D (T^2 - T_0^2) \phi^2 - ET\phi^3 + \frac{\lambda(T)}{4} \phi^4 \end{align} with \(D,~E,~\rm{and}~\lambda(T)\) determined by a combination of tree level potential parameters and both thermal and zero-temperature loop corrections. These parameters determine the critical temperature, \(T_c\), nucleation temperature, \(T_n\), and the strength of the phase transition, with a strong first-order phase transition satisfying the condition \(\langle \phi\rangle/T_c \gg 1\).
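For this quartic form the critical temperature follows in closed form from the degenerate-minima condition: writing \(V = (\lambda/4)\,\phi^2(\phi - \phi_c)^2\) at \(T = T_c\) gives \(\phi_c = 2ET_c/\lambda\) and \(T_c^2 = T_0^2/(1 - E^2/(\lambda D))\), so the strength parameter is \(\phi_c/T_c = 2E/\lambda\). A short numerical check (parameter values are illustrative, not tied to any specific model):

```python
import numpy as np

# V(phi, T) = D (T^2 - T0^2) phi^2 - E T phi^3 + (lam/4) phi^4
# Illustrative parameter values (not derived from any particular theory):
D, E, lam, T0 = 0.5, 0.1, 0.2, 100.0

def V(phi, T):
    return D * (T**2 - T0**2) * phi**2 - E * T * phi**3 + 0.25 * lam * phi**4

# Degenerate minima at T_c: phi_c = 2 E T_c / lam, T_c^2 = T0^2 / (1 - E^2/(lam D))
Tc = T0 / np.sqrt(1.0 - E**2 / (lam * D))
phic = 2.0 * E * Tc / lam
strength = phic / Tc        # = 2E/lam; a strong FOPT requires this >> 1
```

The check below verifies that the second minimum is indeed degenerate with the origin, \(V(\phi_c, T_c) = 0\), up to rounding.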
In a specific theory, the coefficients $D$, $E$, and $\lambda$ can be computed, but rather than get distracted by these specific details, we treat them as parameters that we can freely tune to realize a FOPT with various properties. We also assume that the phase transition completes quickly enough that the temperature can be treated as a constant throughout its progress. The typical initial size of the pockets will directly influence the dynamics and timescales that govern the \(\chi\) particles during the pocket collapse. The average number of bubbles that nucleate per Hubble volume scales as \(N_b \sim \beta_H^3\), where \(\beta_H\) is typically of order \(\mathcal{O}(10-10^4)\) for strong FOPTs \cite{Baldes:2021vyz}, but could be as large as \(\mathcal{O}(10^{11})\)~\cite{Marfatia:2020bcs}. This determines the initial size of the pockets by specifying the number density of bubbles that nucleate, \(n_{b} \sim \beta_H^3 H^3\), and the distance between bubble centers scales as \(d_b \sim n_b^{-1/3} \sim R_H/\beta_H\) \cite{Megevand:2017vtb}. We consider both small and large initial pocket sizes by exploring two representative choices of the initial radii, \(R_0 = R_H\) and \(R_0 = 5\times 10^{-6}\, R_H\). The bubble wall velocity \(v_w\) influences the rate at which the pockets contract. \(v_w\) can be estimated from thermodynamic arguments, but this neglects the pressure exerted by \(\chi\) particles reflecting off the wall, which could slow down the bubble expansion considerably~\cite{Asadi:2021pwo,Baker:2021nyl,Marfatia:2021twj}. We consider both relativistic and non-relativistic wall velocities, where the larger the wall velocity, the larger the mass needs to be in the broken phase in order to trap \(\chi\) in the pockets. We choose \(v_w = 0.9\), \(M^{\rm{in}}_{\chi}/T=10^2\) and \(v_w = 10^{-3}\), \(M^{\rm{in}}_{\chi}/T=10\) as two representative examples, and assume that the wall velocity is approximately constant throughout the phase transition.
\section{Decays and Squeezed Annihilation} \label{sec:squeeze} Throughout the process of collapse, interactions with the thermal bath generate a thermal mass for \(\chi\) of order \(\Pi_{\chi}^2 \sim g^2 T^2\) (where \(g\) represents a generic coupling to the thermal bath). The particles that are trapped in these pockets are subsequently squeezed together and effectively obtain a Casimir mass, \(M_{\chi}^{\rm{cas}} \sim 1/R\), where \(R\) is the pocket radius. This Casimir energy is an inherently quantum mechanical effect, due to the \(\chi\) wave-functions' energies being bounded from below because of the size of the pocket they are confined within. For sufficiently confined \(\chi\), this mass can allow \(\chi\) to rapidly decay even when its tree level/thermal mass would otherwise forbid it from doing so. We denote the decay width of \(\chi\) in the pocket as \(\Gamma_{\chi}\). We further assume that \(\chi \chi \) is also able to annihilate into SM final states with an annihilation cross section \(\langle \sigma v \rangle\). As the pocket radius decreases to \(R \sim 1/M^{\rm{in}}_{\chi}\), the Casimir energy overcomes the potential energy barrier between the unbroken and broken phases, and the remaining abundance of \(\chi\) is forced into the bubbles where they eventually decay away. In Fig.\,\ref{fig:GeneralBehavior}, we show an example of the evolution of the number of \(\chi\) throughout pocket collapse. Decays start immediately, governing the abundance early on in the phase transition. As the radius shrinks, there is less time left in the phase transition to allow for decays to occur, and the abundance due to decays flattens. However, at very small radii, the density of \(\chi\) increases enough to enhance the annihilation rate appreciably, allowing for a new depletion process to become relevant. 
\subsection{Boltzmann Equation} \begin{figure*} \centering \includegraphics[width =\textwidth]{Annihilation_Fraction_FourContours.pdf} \caption{Contours in the \(\Gamma\)-\(\langle \sigma v \rangle\) plane corresponding to the points where \( \chi \) is depleted by an equal number of decays and annihilations during the pocket collapse. The four curves correspond to combinations of temperature \(T = 10^7\) GeV and initial radius \(R_0 = 5\times 10^{-6} R_H\) (black); \(T = 10^7\) GeV and \(R_0 = R_H\) (red); \(T = 10^3\) GeV and \(R_0 = 5\times 10^{-6} R_H\) (blue); and \(T = 10^3\) GeV and \(R_0 = R_H\) (green).} \label{fig:Ann_Dec_Half} \end{figure*} We track the abundance of \(\chi\) throughout the pocket collapse by solving a Boltzmann equation for the number density of \(\chi\) confined inside the contracting pocket \begin{align} \frac{dn_{\chi}}{dt} + 3\frac{\dot{R}}{R}n_{\chi} = &-\langle\sigma v\rangle (n_{\chi}^2 - n_{\rm{eq}}^2)\nonumber\\ &-\Gamma_{\chi} (n_{\chi} - n_{\rm{eq}}) . \end{align} We assume that \(\chi\) have sufficiently strong interactions with the SM plasma that they have their equilibrium abundance at the beginning of the phase transition, and we approximate the pocket to be spherical with a constant wall velocity, \(v_w\). The Boltzmann equation can be recast into an equation differential in the radius of the pocket by making use of the relation \(\frac{dn_{\chi}}{dt}=\frac{dn_{\chi}}{dR}\frac{dR}{dt} = -v_w \frac{dn_{\chi}}{dR} \), \begin{align} -v_w \frac{dn_{\chi}}{dR} - 3\frac{v_w}{R}n_{\chi} = &-\langle\sigma v\rangle (n_{\chi}^2 - n_{\rm{eq}}^2)\nonumber\\ &-\Gamma_{\chi} (n_{\chi} - n_{\rm{eq}}) . \label{Boltz} \end{align} For a generic point in parameter space, both annihilation and decay may be significant. 
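To get a feel for the competition between the two depletion terms, Eq.~(\ref{Boltz}) can be integrated inward in \(R\) while separately accumulating the annihilation and decay contributions. A minimal sketch (units where \(T = 1\) and \(n_{\rm eq} = 1\) is held constant during the collapse; parameter values are illustrative, back-reaction on the wall is neglected as in the text, and the function name is ours):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative collapse parameters: slow wall, pocket shrinking from R0 to ~1/M_in.
v_w, neq = 1e-3, 1.0
R0, Rend = 1.0, 0.1

def annihilation_fraction(sigmav, Gamma):
    # y = (n_chi, I_ann, I_dec); the I's accumulate the R^3-weighted depletion terms.
    def rhs(R, y):
        n = y[0]
        ann = sigmav * (n * n - neq * neq)
        dec = Gamma * (n - neq)
        dn = -3.0 * n / R + (ann + dec) / v_w     # Eq. (Boltz) solved for dn/dR
        return [dn, R**3 * ann, R**3 * dec]
    sol = solve_ivp(rhs, (R0, Rend), [neq, 0.0, 0.0],
                    method='LSODA', rtol=1e-8, atol=1e-12)
    Ia, Id = sol.y[1, -1], sol.y[2, -1]
    return Ia / (Ia + Id)   # share of the depletion due to annihilation

fA_small_width = annihilation_fraction(1e-4, 1e-5)
fA_large_width = annihilation_fraction(1e-4, 1e-2)
```

The integration runs with decreasing \(R\), so the compression term \(-3n/R\) drives \(n_{\chi}\) up as the pocket shrinks until the depletion terms compensate; increasing \(\Gamma_{\chi}\) shifts the depletion share from annihilation toward decay.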
The total number of \(\chi\) inside the pocket is \(N_{\chi} = 4\pi R^3 n_{\chi}/3\), for which \begin{align} \frac{dN_{\chi}}{dR} &= \frac{4 \pi R^3}{3 v_w} \bigg(\langle\sigma v\rangle (n_{\chi}^2 - n_{\rm{eq}}^2) +\Gamma_{\chi} (n_{\chi} - n_{\rm{eq}})\bigg) . \end{align} To determine the dominant process responsible for depleting the abundance inside the pocket, we compute the fraction of the depletion that was from annihilation, \(f_{A} = \Delta N_{\rm{annihilation}}/\Delta N_{\rm{total}}\), by comparing the integrals of the corresponding terms in the Boltzmann equation, \begin{align} f_{A} &= \frac{1}{\Delta N_{\rm{total}}}\int dN_{\rm{annihilation}}\\ &= \frac{\displaystyle\int_{R_0}^{1/M} R^3 dR ~\langle\sigma v\rangle (n_{\chi}^2 - n_{\rm{eq}}^2)}{\displaystyle\int_{R_0}^{1/M} R^3 dR~ \bigg(\langle\sigma v\rangle (n_{\chi}^2 - n_{\rm{eq}}^2)+\Gamma_{\chi} (n_{\chi} - n_{\rm{eq}})\bigg)} . \nonumber \end{align} In Fig.~\ref{fig:Ann_Dec_Half} we show contours in the plane of \(\Gamma\)-\(\langle \sigma v \rangle\) corresponding to equal depletion by decay and annihilation for \(v_w = 10^{-3}\), \(M^{\rm{in}}_{\chi}/T=10\) and for four different combinations of the initial pocket size \( R_0 \) and the temperature $T$ at which the phase transition takes place. As expected, larger widths correspond to decay domination and larger cross sections to annihilation domination, with the boundary of \( f_A = 1/2 \) determined by the temperature, which controls the initial density of \( \chi \) and thus the rate of annihilation. However, there is a flattening at low \(\langle \sigma v \rangle\), which occurs when the decay and annihilation processes are operating during different times.
In this case, the decays start immediately, and the contour of \(f_A = 1/2\) corresponds to the point where half of the initial abundance inside the pocket decays before squeezing becomes sufficient for annihilations to turn on and deplete the rest of the abundance. For a phase transition with a different \( v_w \), the dominant difference enters through the explicit dependence in equation~(\ref{Boltz}), which can be rescaled such that the quantities driving the evolution of \( n_\chi\) are \( \Gamma_{\chi} / v_w \) and \(\langle \sigma v \rangle / v_w \). For larger \(v_w\), in order to keep the \(\chi\) confined to the pockets, the phase transition must also have a larger value of \(M^{\rm{in}}_{\chi}/T\), which further implies that the \(\chi\) reach sufficient Casimir energy to escape the pockets at a smaller pocket radius, and thus there is a slightly longer period for decay and annihilation to operate. For the relativistic wall velocity case we consider, with \(v_w = 0.9\) and \(M^{\rm{in}}_{\chi}/T=10^2\), this second effect is numerically unimportant, and the contours of fixed $f_A$ are very close to those shown in Figure~\ref{fig:Ann_Dec_Half} with appropriate rescaling by \( v_w \). \begin{figure*} \centering \includegraphics[width = .49\textwidth]{Annihilation_Fraction_T7k6.pdf} \includegraphics[width = .49\textwidth]{Annihilation_Fraction_T3k6.pdf} \includegraphics[width = .49\textwidth]{Annihilation_Fraction_T7k1.pdf} \includegraphics[width = .49\textwidth]{Annihilation_Fraction_T3k1.pdf} \caption{Contours in the \(\Gamma\)-\(\langle \sigma v \rangle\) plane indicating the fraction of the initial abundance that remains in the unbroken phase when the pocket radius reaches \(R = 1/M^{\rm{in}}_{\chi}\), for the same four combinations of \( T \) and \( R_0 \) as in Figure~\ref{fig:Ann_Dec_Half}.
The black contours correspond to points shown in Figure~\ref{fig:Ann_Dec_Half} where \( f_A = 1/2 \).} \label{fig:Ann_Dec_Frac} \end{figure*} Fig.\,\ref{fig:Ann_Dec_Frac} displays, for each of the parameter sets shown in Fig.\,\ref{fig:Ann_Dec_Half}, the total depletion in the plane of \(\Gamma\)-\(\langle \sigma v \rangle\). Depletion is very efficient in most of the plane, but in regions with both very small decay widths and annihilation cross sections, there could be a population of \( \chi \) that survives the pocket collapse. \section{Application to Baryogenesis} \label{sec:asymmetry} We consider an application of these results to baryogenesis, continuing to work in a generic framework in which \( \chi \) particles can both decay and annihilate into SM states, and including the possibility of CP violation (as well as C and baryon-number violation) being present in both processes. As noted above, the specific interactions mediating \(\chi\) decay or annihilation may be different (and thus have intrinsically different CP violation). Even if the underlying source of CP violation is the same for both processes, they will still generically manifest themselves differently, because of the different topologies of the loop diagrams that contribute.
We parameterize the asymmetries present in the decay and annihilation processes as \(\epsilon_{D}\) and \(\epsilon_A\), respectively: \begin{align} \epsilon_{D} &\equiv \frac{\sum_{\alpha}\left[\Gamma\left(\chi \rightarrow \rm{SM} \right)-\overline{\Gamma}\left(\chi \rightarrow \rm{SM}\right)\right]} {\sum_{\alpha}\left[\Gamma\left(\chi \rightarrow \rm{SM}\right)+\overline{\Gamma}\left(\chi \rightarrow \rm{SM}\right)\right]} \\ \epsilon_{A} &\equiv \frac{\sum_{\alpha}\left[\sigma\left(\chi \chi \rightarrow \rm{SM\, SM}\right)-\overline{\sigma}\left(\chi \chi \rightarrow \rm{SM\, SM}\right)\right]} {\sum_{\alpha}\left[\sigma\left(\chi \chi \rightarrow \rm{SM\, SM}\right)+\overline{\sigma}\left(\chi \chi \rightarrow \rm{SM\, SM}\right)\right]} . \end{align} We assume, as is typically the case, that \( \epsilon_{D}, \epsilon_{A} \ll 1 \). The asymmetry generated by the combined decay and annihilation processes, \(\epsilon_{\rm{total}}\), is obtained by integrating the Boltzmann equation, keeping track of the fraction of \(\chi\) that annihilate versus decay: \begin{align} \epsilon_{\rm{total}} =& ~\frac{1}{\Delta N_{\chi,\rm{total}}}\int \bigg(\epsilon_{A} \frac{dN_{\rm{ann}}}{dR}+ \epsilon_{D} \frac{dN_{\rm{decay}}}{dR}\bigg)dR \nonumber\\ =& ~ \epsilon_{A} f_{A} + \epsilon_{D} (1-f_{A}) . \end{align} The resulting baryon asymmetry can be parameterized as: \begin{align} Y_{\Delta B} = \Delta Y_{\chi}\epsilon_{\rm{total}}C = \frac{\Delta n_{\chi}(T)}{s(T)}\epsilon_{\rm{total}}C \end{align} where \(Y_i = n_i/s\), \(\Delta n_{\chi}\) is the total change in the number density of \(\chi\) during the pocket collapse, \(s(T)\) is the entropy density, and \(C\) translates from the CP asymmetry present in the \(\chi\) depletion processes to the final asymmetry in baryons. 
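The bookkeeping in the last two equations is simple enough to sketch directly. The function and all numerical values below are our own illustrative choices, not results from the paper:

```python
def baryon_asymmetry(f_A, eps_A, eps_D, delta_Y_chi, C):
    """Combine the annihilation and decay CP asymmetries into Y_{Delta B}.

    f_A          : fraction of the chi depletion due to annihilation
    eps_A, eps_D : CP asymmetries in annihilation and decay
    delta_Y_chi  : total change in Y_chi = n_chi / s during the pocket collapse
    C            : conversion factor from the chi-sector asymmetry to baryons
    """
    eps_total = eps_A * f_A + eps_D * (1.0 - f_A)
    return delta_Y_chi * eps_total * C

# An annihilation-dominated illustration; C = 12/37 would correspond to
# sphaleron conversion of a lepton asymmetry.
Y_B = baryon_asymmetry(f_A=0.9, eps_A=1e-8, eps_D=1e-10,
                       delta_Y_chi=1e-2, C=12.0 / 37.0)
```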
For example, if \(\chi\) depletion produces an asymmetry in lepton number that is subsequently converted into a baryon asymmetry via electroweak sphalerons (``leptogenesis"), \(C \simeq 12/37\) \cite{Davidson:2008bu}. \begin{figure*} \centering \includegraphics[width = .7\linewidth]{MinimumAsymmetryk5e-6.pdf} \caption{The minimum asymmetry needed to be generated by the annihilation processes in the \(\langle \sigma v \rangle - T \) plane, in the scenario with \(R_0 = 5 \times 10^{-6} R_H\), where annihilation dominates.} \label{fig:MinAsymmetry} \end{figure*} In Fig.~\ref{fig:MinAsymmetry}, we display \(\epsilon_A^{\rm{min}}\), the minimum asymmetry in annihilations necessary to produce the observed baryon asymmetry, in the \(\langle \sigma v \rangle - T\) plane, for the annihilation-dominated phase transition with \(R_0 = 5 \times 10^{-6} R_H\) and \(C = 1\). Even for the tiny values of \(\langle \sigma v \rangle\) considered, the squeezed annihilation rate provides sufficient enhancement in much of the parameter space that only a rather modest \(\epsilon_A \sim \mathcal{O}(10^{-8})\) is needed to generate the observed baryon asymmetry.
The gravitational wave spectrum is thought to be composed of three primary components, \begin{align} \Omega_{gw}h^2 = \Omega_{b}h^2 +\Omega_{s}h^2 +\Omega_{t}h^2 , \end{align} where the subscripts b, s, and t denote the bubble collision, sound wave, and turbulence contributions to the gravitational wave spectrum. Each contribution can be estimated in terms of quantities characterizing the properties of the transition itself, \(\alpha\), \(\beta_H \equiv \beta/H_*\), \(T_*\), \(v_w\), and \(\kappa_i\), where: \begin{align} \beta_H = \left.\left(T \frac{\mathrm{d}}{\mathrm{d} T}\left(\frac{S_{3}(T)}{T}\right)\right)\right|_{T=T_{*}} . \end{align} These parameters can be estimated from the properties of the phase transition and mapped onto the peak amplitudes, frequencies, and spectral shapes of the gravitational wave signal~\cite{Alanne:2019bsm}, with the spectral shape largely governed by \(v_w\), \(\beta_H\), and \(T\), whereas the amplitude is also sensitive to the latent heat \(\alpha\) and the efficiency factors \(\kappa_i\). The amplitudes increase for large wall velocities, making the relativistic bubble wall case potentially more observable by future experiments. We sample all values of \(\alpha \geq 0.01\), since the amplitudes scale as \((\Omega h^2)^{\rm{peak}}\sim (\alpha/(1+\alpha))^n\) with \(n>0\), and the amplitude asymptotically approaches its maximum for \(\alpha\gg 1\). We use typical values found in \cite{Alanne:2019bsm} for the efficiencies, \(\kappa_b = 10^{-8}\) and \(\kappa_v = 10^{-3}\). In Fig.~\ref{fig:GW}, we plot bands that represent the range of sampled \(\alpha\) values, and consider four benchmark parameter points for both relativistic and non-relativistic bubble walls with \(\beta_H \sim 1, \,10^4\), \(v_w \sim 0.9,\, 10^{-3}\), and \(T = 10^3,\,10^7\) GeV. Also shown are the projected sensitivities of the future GW experiments LISA \cite{Auclair:2019wcv}, DECIGO \cite{Kudoh:2005as}, BBO \cite{Corbin:2005ny}, and ET \cite{Hild:2010id}.
Gravitational wave signals generated in this scenario could be discovered for some of the parameter space considered, with phase transitions resulting in relativistic bubble expansion being more easily detectable. \section{Conclusions and Outlook} \label{sec:conclusions} We have explored a novel interplay between decay and annihilation that arises during a first-order phase transition in which the mass of the annihilating/decaying particle receives a large contribution from the phase transition. We find that pocket collapse produces an enhancement of the particle density and opens up a large range of parameter space where annihilation can be important, in a regime that would otherwise typically be dominated by decay. We investigate baryogenesis in this scenario, where the decay and annihilation of \(\chi\) may both separately contribute to generating the observed baryon asymmetry. While this is a novel application, the interplay between annihilation and decay during such a phase transition is interesting in its own right, and may prove useful in other applications as well. There are many interesting avenues for future exploration. For example, we approximate the temperature of the thermal bath as constant throughout the phase transition, but this need not be the case. Indeed, the duration of the phase transition is longer than the Hubble time for pockets whose sizes are initially \(\sim R_H\), such that the temperature of the universe may cool appreciably. Other types of phase transition may themselves generate significant amounts of heating. We further assumed a constant bubble wall velocity, but the heating during the phase transition and the pressure exerted on the bubble wall by \(\chi\) could lead to a non-trivial wall velocity profile. Studying this more complicated evolution is left for future work. It would also be interesting to move beyond generic characterizations and see how these results could be applied to specific models of baryogenesis.
For example, \(\chi\) could be a right-handed neutrino in a seesaw model of neutrino masses, whose large mass could be the result of the vacuum expectation value of a field spontaneously breaking lepton number. Typically in leptogenesis models, decays dominate and annihilation is negligible; but for an appropriate type of phase transition this expectation could be upset, leading to a different mapping between the phases of the neutrino masses and Yukawa couplings and the resulting baryon asymmetry (from annihilating sterile neutrinos), potentially violating the Davidson-Ibarra bound \cite{Davidson:2002qv} or obviating the need for the tiny mass differences required by resonant leptogenesis \cite{Pilaftsis:1997jf}. If \(\chi\) is stable, it could play the role of dark matter, and it might be possible to generate the observed dark matter relic abundance and baryon asymmetry at the same time \cite{Cui:2011qe,Cui:2011ab}. Even without addressing baryogenesis, the enhancement of the annihilation could relax the relationship between the annihilation cross section and the mass implied by freeze-out production of the dark matter, allowing small values of the cross section to generate the correct amount of dark matter. We leave the investigation of these ideas for future work. \section*{Acknowledgements} This work was supported in part by the NSF via grant number PHY-1915005.
\section{Introduction}\label{section:introduction} Understanding the interaction of the solar wind with the Earth’s magnetosphere and upper atmosphere is one of the key topics in heliophysics~\cite{2014scienceplan,2013nrc}. Aurora Borealis and Aurora Australis are formed when high--energy electrons and ions precipitate from the magnetosphere to the upper atmosphere due to the coupling of solar wind and magnetosphere. These auroral features at a scale of 10 -- 10,000km hold key information about the magnetospheric coupling processes at a scale of 1 -- 1,000 earth radii. Since it is difficult to cover the entire magnetosphere with a limited number of satellites, ground--based auroral images have been critical to understanding the propagation of the solar wind energy to near--Earth space~\cite{akasofu1964development,akasofu1981energy,clausen2014thermospheric}. With decades of ground and space observations, global aurora patterns of 1,000 -- 10,000km and their general mechanisms are well understood~\cite{newell2009diffuse}. However, the morphology of smaller--scale aurora forms (less than 1000km) and their connections to mid-- to large--scale magnetospheric dynamics (less than 100 earth radii) remain open questions, due both to the complexity of aurora features and the abundance of aurora images collected. A better understanding of local--scale auroral morphology is critical to improving our understanding of the solar wind--magnetosphere--upper atmosphere interaction. The heliophysics community has over decades amassed vast collections of images of Aurora Borealis and Aurora Australis. To date, the vast majority of this data is unclassified. Classifying this data is a first and important step toward identifying local--scale auroral features, and thus toward enabling a deeper analysis of the link between auroral features and magnetospheric dynamics.
\section{Related Work}\label{section:related} Automatic classification of auroral images has been studied in the past; traditional computer vision approaches relying on the construction of hand--designed features include~\cite{syrjasuo2002analysis,syrjasuo2004diurnal,syrjasuo2007automatic,rao2014automatic}. While most of these approaches are only effective when limited to binary \emph{aurora/no aurora} classification, the approach of~\cite{yang2012auroral}, using a hidden Markov model and incorporating temporal dynamics, is able to identify four distinct categories of aurora with a positive detection rate of up to 85\%. \begin{figure*}[!t] \centering \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/clausen-random-kc-comparison/02376_k3_c0.png} \label{fig:clausen-arc-top} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/clausen-random-kc-comparison/00419_k9_c1.png} \label{fig:clausen-diffuse-top} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/clausen-random-kc-comparison/00697_k1_c2.png} \label{fig:clausen-discrete-top} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/clausen-random-kc-comparison/01303_k3_c3.png} \label{fig:clausen-cloudy-top} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/clausen-random-kc-comparison/04154_k11_c4.png} \label{fig:clausen-moon-top} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/clausen-random-kc-comparison/02454_k5_c5.png} \label{fig:clausen-clear-top} \end{subfigure} \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/03368_class_0.png} \caption{Arc} \label{fig:clausen-arc} \end{subfigure} \hfill
\begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/01646_class_1.png} \caption{Diffuse} \label{fig:clausen-diffuse} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/01660_class_2.png} \caption{Discrete} \label{fig:clausen-discrete} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/03112_class_3.png} \caption{Cloudy} \label{fig:clausen-cloudy} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/00575_class_4.png} \caption{Moon} \label{fig:clausen-moon} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/00562_class_5.png} \caption{Clear} \label{fig:clausen-clear} \end{subfigure} \caption{Random samples of images from the OATH dataset showing the distinct categories of aurora.} \label{fig:image-samples} \end{figure*} Computer vision is now dominated by convolutional neural network (CNN)--based algorithms that have achieved remarkable performance on a wide range of challenging tasks. Following the general trend in computer vision, there have been several recent efforts to use CNN--based models for automatic auroral classification. In~\cite{clausen2018automatic}, Clausen \emph{et al.} manually label 5,824 images of aurora obtained from THEMIS all--sky imagers into the distinct categories \emph{Arc}, \emph{Discrete}, \emph{Diffuse}, \emph{Clear/No Aurora}, \emph{Cloudy}, and \emph{Moon}. Using an Inception model~\cite{szegedy2015going} pretrained on the ImageNet dataset~\cite{deng2009imagenet,sharif2014cnn}, they extract feature vectors and fit a ridge classifier to them to obtain an 82\% accuracy.
In~\cite{kvammen2020auroral}, the authors manually classified 3,846 auroral images into the highly imbalanced categories \emph{Arc aurora}, \emph{Auroral breakup}, \emph{Colored aurora}, \emph{Discrete aurora}, \emph{Edge aurora}, \emph{Faint Aurora}, \emph{Patchy Aurora}, and then compared classification results from a range of machine learning algorithms, including CNN--based architectures. There are two closely related challenges inherent to using conventional CNN--based approaches to auroral image analysis. First, supervised learning methods require a substantial quantity of unambiguous, high--quality labeled data to be available for training. While there exist massive amounts of high--quality auroral image data, to date very little of this data has been labeled. The use of ImageNet--pretrained CNNs as in~\cite{clausen2018automatic} presents a possible alternative, as pretraining is often effective at reducing the amount of data necessary to train a CNN to convergence~\cite{razavian2014cnn}, but pretraining only partially addresses the challenge of limited data and may introduce other issues and biases~\cite{geirhos2018imagenet}. Second, and more importantly, manually labeling auroral images itself presents additional challenges beyond the time and expense required: labels for auroral images are often chosen based on subjective, qualitative judgements, and there is frequent disagreement between researchers regarding what an appropriate label set is. Complicating the picture further is the fact that even after a label set is decided on, it is common for all--sky auroral images to contain more than one type of aurora, raising the additional question of how to decide what the correct classification should be.
In the past few years, unsupervised machine learning algorithms such as \cite{chen2020simple,caron2020unsupervised,Dai_2021_CVPR} have begun to achieve levels of performance rivaling those of their supervised counterparts on benchmark tasks and datasets, including CIFAR10, CIFAR100, and ImageNet. However, the utility and efficacy of recent unsupervised models for practical tasks and real--world data is not yet well established. Unsupervised models afford an opportunity to address the challenges just described in auroral imaging simultaneously: the learned representations of auroral images may be useful for quantitatively determining meaningful labels, and can be used to apply such labels automatically, facilitating the creation of much larger labeled auroral image datasets. Our contribution in this work is as follows: \begin{itemize} \item We modify and adapt the \emph{Simple Framework for Contrastive Learning of Representations} (SimCLR) model introduced in~\cite{chen2020simple} and apply it to a publicly available auroral image dataset released with the publication of~\cite{clausen2018automatic}, consisting of 5,824 images of aurora obtained from THEMIS all--sky imagers~\cite{donovan2006themis} and manually categorized into the distinct categories \emph{Arc}, \emph{Discrete}, \emph{Diffuse}, \emph{Clear/No Aurora}, \emph{Cloudy}, and \emph{Moon}. Our approach leads to sufficiently informative representations to enable a simple linear classifier to obtain state--of--the--art classification performance across these categories, reducing the top--1 error rate from the previous benchmark by almost 10 percentage points. Moreover, our model obtains this performance while requiring less than 25\% of the number of parameters of the model used to obtain the previous state--of--the--art.
\item We demonstrate that the representations our model learns for these images cluster naturally into more categories than there are manually assigned ground--truth labels, suggesting that the current labels may be overly coarse and may obscure important information about auroral morphology that could be useful in understanding the connection between auroral morphology and magnetospheric dynamics. \end{itemize} The rest of the paper is organized as follows: in Section \ref{section:methodology}, we describe our model, training strategy, and results. In Section \ref{section:conclusion}, we conclude with a discussion of future work. \section{Methodology}\label{section:methodology} \subsection{Data}\label{section:data} The dataset used in this study consists of 5,824 images collected from various THEMIS all--sky imagers. This dataset was initially constructed for the study presented in~\cite{clausen2018automatic}, and is publicly available at \url{http://tid.uio.no/plasma/oath/oath_v1.1_20181026.tgz}. We refer to this dataset as the Oslo Auroral THEMIS (OATH) dataset. Each image $\bm{x}$ in the dataset was cropped by 15\% to remove pixels corresponding to very low elevation angles and then scaled to enhance dim features by applying the following formula: \begin{equation} x_{ij} \leftarrow \max\left(\min\left(\frac{x_{ij} - m_{\bm{x}}}{M_{\bm{\tilde{x}}}}, 1\right), 0\right), \end{equation} where $x_{ij}$ denotes the $(i,j)$-th pixel value, $m_{\bm{x}}$ denotes the $1^{st}$ percentile brightness value of $\bm{x}$, and $M_{\bm{\tilde{x}}}$ denotes the $99^{th}$ percentile brightness of $\bm{\tilde{x}} = \bm{x} - m_{\bm{x}}$. The resulting cropped and scaled images were then manually assigned a distinct label from the categories \emph{Arc} (774 images), \emph{Discrete} (1102 images), \emph{Diffuse} (1400 images), \emph{Cloudy} (817 images), \emph{Moon} (585 images), and \emph{Clear/No aurora} (1082 images).
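The percentile scaling applied to each image amounts to a contrast stretch with per-pixel clipping; a sketch in Python/NumPy (the function name and the synthetic test image are our own, not part of the OATH pipeline):

```python
import numpy as np

def scale_image(x):
    """Percentile contrast stretch used for the OATH images (a sketch).

    Subtract the 1st-percentile brightness m_x, divide by the 99th-percentile
    brightness of the shifted image, and clip each pixel to [0, 1].
    """
    x = x.astype(np.float64)
    m = np.percentile(x, 1)           # m_x
    shifted = x - m
    M = np.percentile(shifted, 99)    # M of x - m_x
    return np.clip(shifted / M, 0.0, 1.0)

# A synthetic 12-bit "all-sky image" stand-in:
img = np.random.default_rng(0).integers(0, 4096, size=(256, 256))
out = scale_image(img)
```

Pixels brighter than the 99th percentile are clipped to 1, so a few saturated pixels (e.g. the Moon) cannot wash out the dim auroral features.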
These categories are described in~\cite{clausen2018automatic} as follows: \begin{figure}[t] \small \centering \begin{tikzpicture} \node at (0, 2.4) (h) {$\longleftarrow\,$Projection Head$\,\longrightarrow$}; \node at (0,1.8) (h) {$\longleftarrow\,$\hspace{0.25em}Representation\hspace{0.25em}$\,\longrightarrow$}; \node at (0,1) (h) {$\longleftarrow\,$\hspace{1.5em}Encoder\hspace{1.5em}$\,\longrightarrow$}; \node[draw, circle] at (0,-1) (x) {$\,~\bm{x}~\,$}; \node[draw, circle] at (-2.5,0) (x1) {$\bm{x}_i$}; \node[draw, circle] at (2.5,0) (x2) {$\bm{x}_j$}; \node at (-2.5,1.8) (h) {$\bm h_i$}; \node at (2.5,1.8) (c) {$\bm h_j$}; \node at (-2.5,3) (hh) {$\bm z_i$}; \node at (2.5,3) (cc) {$\bm z_j$}; \path[->] (x) edge [>=latex] node[below,rotate=-25] {$\mathcal{T}_i$} (x1) (x) edge [>=latex] node[below,rotate=25] {$\mathcal{T}_j$} (x2) (x1) edge [>=latex] node[left,rotate=0] {$f(\cdot)$} (h) (x2) edge [>=latex] node[right,rotate=0] {$f(\cdot)$} (c) (h) edge [>=latex] node[left,rotate=0] {$g(\cdot)$} (hh) (c) edge [>=latex] node[right,rotate=0] {$g(\cdot)$} (cc); \path[<->] (hh) edge [>=latex] node[above,rotate=0] {Maximize agreement} (cc); \end{tikzpicture} \caption{A simple framework for contrastive learning of visual representations. Given a batch of images, for each sample $\bm x$, two distinct transformations are randomly applied ($\mathcal{T}_i$ and $\mathcal{T}_j$) to obtain two correlated views of $\bm x$, a `positive pair'. A base encoder network $f(\cdot)$ and a projection head $g(\cdot)$ are then trained to identify the positive pairs among pairs constructed from all transformed samples in a minibatch. After training is completed, the encoder $f(\cdot)$ is applied to each $\bm x$ to obtain a representation $\bm h$ which can then be used for downstream tasks.
Diagram adapted from \cite{chen2020simple}.} \label{fig:framework} \end{figure} \begin{itemize} \item \emph{Arc}: Showing one or multiple bands of aurora that stretch across the field-of-view; typically, the arcs have well-defined, sharp edges. \item \emph{Diffuse}: Large patches of aurora, typically with fuzzy edges. \item \emph{Discrete}: Auroral forms with well-defined, sharp edges, that are, however, not arc--like. \item \emph{Cloudy}: Dominated by clouds or the dome of the imager is covered with snow. \item \emph{Moon}: Dominated by light from the Moon. \item \emph{Clear/No aurora}: Images which show a clear sky (stars and planets are clearly visible) without the appearance of aurora. \end{itemize} Examples from each class are displayed in Figure \ref{fig:image-samples}. \subsection{SimCLR}\label{section:simclr} The Simple framework for Contrastive Learning of Representations (SimCLR) algorithm was introduced by Chen \emph{et al.} in \cite{chen2020simple}. As the name suggests, SimCLR is a relatively simple and yet highly effective algorithm for unsupervised representation learning. The SimCLR framework consists of four components, described below and diagrammed in Figure \ref{fig:framework}: \begin{itemize} \item A stochastic \emph{data augmentation module}. Given a sample $\bm x$, the data augmentation module randomly applies two transformations $\mathcal{T}_i, \mathcal{T}_j$ to $\bm x$, resulting in two distinct views $\bm{x}_i, \bm{x}_j$ of $\bm x$ that we consider a \emph{positive pair} in the context of the contrastive loss function described below. \item A \emph{base encoder}. The base encoder is used to extract an initial representation of a transformed input $\mathcal{T}_k(\bm{x})$. In principle, this could be any machine learning model; in most applications a residual neural network (ResNet)~\cite{he2016deep}, sometimes pretrained on ImageNet, is used. \item A \emph{projection head}.
The projection head maps the output of the base encoder to the space where the contrastive loss function is to be applied. Again, in principle this could be any machine learning model; in practice the projection head is usually a shallow (one--layer) dense neural network. \item A \emph{contrastive loss function} designed for the following contrastive prediction task: given a set of pairs of examples that includes a positive pair, identify the positive pair. \end{itemize} The contrastive loss function used here is the \emph{normalized temperature--scaled cross--entropy loss} \cite{sohn2016improved,wu2018unsupervised,oord2018representation}. Letting $\text{sim}(\bm{x}, \bm{y}) = \bm{x}^T\bm{y} /(\norm{\bm{x}}\norm{\bm{y}})$ (\emph{cosine similarity}), the loss for a positive pair of examples $\bm{x}_i$, $\bm{x}_j$ is given by \begin{equation}\label{eq:nxent} \ell(\bm{x}_i, \bm{x}_j) = -\log\frac{\exp(\text{sim}(\bm{z}_i, \bm{z}_j)/\tau)}{\sum_{k=1}^{2N}\mathbbm{1}_{[k\neq i]}\exp(\text{sim}(\bm{z}_k, \bm{z}_i)/\tau)}. \end{equation} In Equation \ref{eq:nxent}, $\bm{z}_i, \bm{z}_j$ are the outputs of the projection head for two views $\bm{x}_i, \bm{x}_j$ of an input $\bm{x}$, and $\tau$ is a temperature parameter. The indicator function $\mathbbm{1}_{[k\neq i]}$ evaluates to 1 if $k \neq i$, so the sum in the denominator runs over all $2N$ augmented examples in a minibatch of $N$ samples except $\bm{z}_i$ itself; the $2(N-1)$ augmented examples originating from the other pairs thereby serve as negative examples. This obviates the need to sample negative pairs explicitly. The model is then trained to minimize \begin{equation} \frac{1}{2N}\sum_{k=1}^N[\ell(2k-1, 2k) + \ell(2k, 2k-1)]. \end{equation} The choice of transformations sampled from is believed to be a critical component in learning successful representations using the SimCLR algorithm. We opt for simplicity, and use only two: a random resized cropping of the input image and a random flip over a horizontal axis.
The latter choice is motivated by the view that the location of auroral forms in the image will be relevant to the ultimate goal of obtaining a better understanding of the magnetospheric dynamics involved; a vertical flip causes the model to judge as similar forms that occur on the left side of the image and forms that occur on the right, but a horizontal flip decorrelates the image pair while preserving the geographic location of the form. \begin{table*}[!t] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Model & Params. & Top--1 Accuracy (\%)& Std. & Precision & Recall & F1--Score \\ \hline Ridge + Inception (pretrained)\cite{clausen2018automatic} & 43m & 81.7 & 0.1 & 83.0$^{*}$ & 84.3$^*$ & 83.6$^*$ \\ \hline ResNet18 (pretrained, supervised) & 12m & 84.7 & 0.7 & 85.6 & 85.2 & 85.3 \\ \hline SimCLR & 12m & $\bm{90.9}$ & $0.7$ & $\bm{91.6}$ & $\bm{91.5}$ & $\bm{91.5}$ \\ \hline \end{tabular} \caption{\textbf{Auroral image classification results.} Data in row 1 is taken from \cite{clausen2018automatic}. The starred entries in row 1 were not reported in the cited paper, and instead were calculated from a single fold confusion matrix provided in the paper. Supervised ResNet18 results included for comparison. Reported results are averages from $5$--fold cross--validation. To optimize comparability, the same folds are used for cross--validation as in \cite{clausen2018automatic}.} \label{table:results} \end{center} \end{table*} In \cite{chen2020simple}, it is noted that random color distortion, when paired with random cropping, is believed to be among the most important transformations to include in order to learn high--quality representations for image classification tasks. Our experience with this was mixed: incorporating random color distortions did improve classification performance somewhat beyond the results documented here, but it also degraded the quality of the representations learned for clustering and other downstream tasks. 
We therefore omitted random color distortions along with other commonly included transformations such as random rotations and perspective shifts. In all of our experiments, we use a projection head consisting of two fully connected linear layers with an intermediate ReLU activation and without biases; that is, $g(\bm{h}) = W_2\max(0, W_1\bm{h})$, where $W_1$ and $W_2$ denote weight matrices. As an encoder, we use a lightweight ResNet18 model that is randomly initialized. The temperature hyperparameter $\tau$ of the loss function (cf.\ Eq.~\ref{eq:nxent}) was set to 0.5. \subsection{Training}\label{section:training} We train the SimCLR model for 100 epochs, using the Adam optimizer~\cite{kingma2014adam} with a fixed learning rate of 0.0003 and no weight decay. We use a minibatch size of 128. All models were implemented using the open source machine learning library PyTorch \cite{paszke2019pytorch} and trained using a single NVIDIA Titan Xp GPU. \subsection{Classification}\label{section:classification} After training, we fit an $\ell_2$--regularized logistic regression classifier to the obtained representations. We use 5--fold cross--validation to obtain average classification performance as measured by a range of metrics; cf.\ Table~\ref{table:results}. For comparison, we also include results from fine--tuning an ImageNet--pretrained ResNet18. In order to ensure maximum comparability between models, we use the same folds for cross--validation as in \cite{clausen2018automatic}. Linear classifiers were fit using Scikit-Learn~\cite{scikit-learn}. A confusion matrix for one of the held--out validation sets is presented in Figure~\ref{fig:clausen-conf}. Confusion matrices for the other folds are very similar.
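The classification step can be sketched with scikit-learn. The features below are random stand-ins for the learned 512-dimensional ResNet18 representations, so the resulting accuracy is meaningless except as a smoke test of the pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Random stand-ins for the learned representations and the six OATH labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(600, 512))
labels = rng.integers(0, 6, size=600)

# An l2-regularized logistic regression evaluated with 5-fold cross-validation.
clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
scores = cross_val_score(clf, features, labels, cv=5, scoring="accuracy")
mean_acc = scores.mean()
```

With the real representations, `features` would instead hold the encoder outputs for each image and `labels` the OATH ground-truth categories, with the folds fixed to match the prior benchmark.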
\begin{figure}[H] \centering \includegraphics[width=0.48\textwidth]{images/oath-confusion-matrix.png} \caption{\textbf{Confusion matrix for a single fold.} Rows are ground--truth labels, columns are predicted labels.} \label{fig:clausen-conf} \end{figure} \begin{figure}[htbp] \centering \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/02499.png} \caption{G: 4 S: 0} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/00974.png} \caption{G: 4 S: 0} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/00583.png} \caption{G: 4 S: 0} \end{subfigure} \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/01265.png} \caption{G: 3 S: 3} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/01302.png} \caption{G: 3 S: 3} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/03261.png} \caption{G: 3 S: 3} \end{subfigure} \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/01160.png} \caption{G: 2 S: 2} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/02446.png} \caption{G: 2 S: 7} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/03545.png} \caption{G: 2 S: 11} \end{subfigure} \caption{A random sample of images from three ground--truth categories and their cluster labels. 
`G' indicates the ground--truth label (0='Arc aurora', 1='Diffuse aurora', 2='Discrete aurora', 3='Cloudy', 4='Moon', 5='Clear/No aurora'); `S' the category assigned by the K-means algorithm to the learned representation of the image. In rows 1 and 2, the categories largely agree. In row 3, the `Discrete aurora' category, which can be seen in the sample to contain very different auroral forms, is split over several distinct clusters.}\label{figure:clausen-kc-comparison} \end{figure} \subsection{Clustering}\label{section:clustering} After training, the $K$--means clustering algorithm was applied to cluster the learned representations. By comparing average silhouette scores for $k \in \{3,\dots,15\}$, we find evidence that the number of clusters present in the data is greater than the number of ground--truth labels currently used to classify the dataset. Specifically, we attain a maximum average silhouette score of $0.212$ for $k=12$. For comparison, using the ground--truth labels as cluster assignments yields an average silhouette score of $0.011$. Significantly, while some clusters align closely with the ground--truth classifications (the classes \emph{Cloudy} and \emph{Moon}, for example), other less well--defined ground--truth classes (in particular, the Discrete class) are split among various clusters; cf. Figures~\ref{figure:clausen-kc-comparison} and \ref{fig:clausen-kc-comparison-2}. Moreover, random samples drawn from the learned clusters exhibit clear qualitative similarities; cf. Figure~\ref{fig:clausen-kc-comparison-2}. \section{Discussion}\label{section:discussion} There are several distinctions to be made between the approach outlined here and the standard SimCLR model. As mentioned previously, we greatly reduce the number of transformations sampled from, and nevertheless obtain high performance.
We train on a small dataset using a lightweight model with 12m parameters, and improve on the current state--of--the--art, suggesting that while the conventional wisdom is that to learn effective representations unsupervised models should be more highly parameterized than their supervised counterparts, this may in fact not generally be the case. \begin{figure}[t] \centering \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/00351.png} \caption{G: 2 S: 2} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/02123.png} \caption{G: 2 S: 2} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/00901.png} \caption{G: 2 S: 2} \end{subfigure} \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/02446.png} \caption{G: 2 S: 7} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/02219.png} \caption{G: 2 S: 7} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/04859.png} \caption{G: 2 S: 7} \end{subfigure} \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/03545.png} \caption{G: 2 S: 11} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/04817.png} \caption{G: 2 S: 11} \end{subfigure} \hfill \begin{subfigure}[b]{0.15\textwidth} \centering \includegraphics[width=\textwidth]{images/visualization-sample/clausen/03503.png} \caption{G: 2 S: 11} \end{subfigure} \caption{A random sample of images from the `Discrete aurora' category in specific 
cluster assignments. `G' indicates the ground--truth label; `S' the cluster label (cf. Figure~\ref{figure:clausen-kc-comparison}). The images in each row share characteristics that distinguish them from the images in the other rows.}\label{fig:clausen-kc-comparison-2} \end{figure} In \cite{syrjasuo2011numeric}, it is noted that classification error rates of less than 10\% are likely to be sufficient for most practical purposes; in particular, for automatic labeling of auroral images to take place on a large scale, thereby enabling statistical studies of auroral images not previously possible. The results presented in Section \ref{section:classification}, Table~\ref{table:results}, and Figure \ref{fig:clausen-conf} demonstrate that the approach described here meets this criterion. Moreover, the model is relatively lightweight, using slightly less than 12m parameters and requiring less than 110MB of space to train to convergence. We obtain our state--of--the--art results in spite of the fact that our model does not rely on ImageNet pretraining, the usual approach when working with similarly sized datasets, thus removing another potential source of bias \cite{geirhos2018imagenet}. The results presented in Section \ref{section:clustering} align with the general observation that the morphology of auroral forms is not well understood. The approach outlined here leads to representations that provide a much finer categorization of auroral forms, with clear qualitative distinctions between categories. This suggests that unsupervised learning may well have an important role to play in improving our understanding of the connection between what is viewed from the ground and the dynamics and coupling of Earth's magnetosphere and upper atmosphere. \section{Conclusions and Future Work}\label{section:conclusion} In this paper, we have presented a novel approach based on adapting the SimCLR model introduced in \cite{chen2020simple} to the practical challenge of auroral image classification.
Our approach leads to state--of--the--art results as measured by a range of classification metrics, surpassing an important threshold for practical utility while requiring less than 25\% of the parameters of recent benchmark models and without relying on ImageNet pretraining. Our approach provides preliminary evidence that current choices of ground--truth labels for auroral images are likely overly coarse. Our results also indicate that the guidelines for learning effective representations put forth in \cite{chen2020simple}, while well suited to conventional object recognition tasks, may require adjustment for data that does not share the same general characteristics. More research is needed to better understand the connection between the representations learned by our models and the magnetospheric processes involved. This will be a focus of future work. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} Segmenting anatomical organs or tumors from medical images plays an important role in many image-guided therapies. Recently, deep learning (DL)-based approaches have achieved satisfactory performance in various medical image analysis tasks \cite{li2018h,dou20163d,chen2020realistic,xu2020adversarial}. However, the success of DL-based segmentation approaches usually relies on a large amount of labeled data, which is particularly difficult and costly to obtain in the medical imaging domain, where only experts can provide reliable and accurate annotations and the images are often 3D volumes. The heavy burden of acquiring the costly expert-examined labels motivates much annotation-efficient research, such as semi-supervised learning (SSL) \cite{tarvainen2017mean,zhang2017DAN,sedai2017GANsemi,vu2019EM,verma2019ICT,luo2020DTC}, weakly supervised learning \cite{xu2021noisy,yang2020weakly}, and self-supervised learning \cite{zhuang2019self,chen2019self}. In this work, we focus on semi-supervised segmentation since it is clinically practical to obtain a small set of expert-examined labeled data and a large quantity of unlabeled images. \begin{figure}[t] \centerline{\includegraphics[width=\columnwidth]{fig_intro_comparison.pdf}} \caption{Comparison between (a) the typical perturbed consistency-based self-ensembling SSL paradigm and (b) our proposed cyclic prototype consistency learning (CPCL) paradigm. Both paradigms adopt a student model $f_{s}(\theta)$ and a self-ensembling teacher model $f_{t}(\tilde{\theta})$. $X_{l}$ and $X_{u}$ denote the labeled images and unlabeled images, respectively, along with the corresponding real label $Y_{l}$ for $X_{l}$. $\xi$ and $\xi^{\prime}$ in (a) denote different perturbations (e.g., random Gaussian noise). $P$ represents the segmentation prediction and $F$ in (b) denotes the features extracted from the encoder.
Note that the features $F$ on the arrows are the to-be-segmented features. `$//$' means stop-gradient. The main difference is that the typical paradigm utilizes the costly pixel-wise real labels $Y_{l}$ only for supervising the corresponding labeled data, i.e., $\mathcal{L}_{\boldsymbol{s}}$, while the real label supervision can circulate throughout our CPCL with the help of non-parametric cyclic prototype learning; that is, the unlabeled data can be exploited via explicit real label supervision. The detailed architecture of CPCL is illustrated in Sec. \ref{sec:method}.} \label{fig_intro_comparison} \end{figure} Recent impressive progress in semi-supervised medical image segmentation has featured consistency learning \cite{li2020transformation,yu2019uncertainty,zhang2021DTML} based on the smoothness assumption \cite{luo2018smooth}. Besides the supervised loss for labeled data, this paradigm leverages unlabeled data by enforcing \textit{``unsupervised"} perturbation-based consistency between predictions of the self-ensembling teacher model and the student model, as shown in Fig. \ref{fig_intro_comparison} (a). Building on it, improved works have been proposed, e.g., using more suitable types or strengths of the perturbation \cite{li2020transformation,verma2019ICT,french2019semi,xie2019UDA} and adopting uncertainty to select more reliable voxel-wise consistency targets \cite{yu2019uncertainty,wang2020double}. Intuitively, the quality of the unsupervised perturbed consistency target determines whether such a paradigm can successfully exploit the intrinsic information in unlabeled images.
As such, this paradigm may have the following limitations that can lead to sub-optimal results: (i) the difficulty of defining the most suitable types or strengths of the perturbation for different tasks, a choice that greatly affects performance; (ii) limited reliability during training: since the real labels are only provided to their paired labeled images, the unlabeled data cannot benefit from explicit expert-examined supervision. Moreover, unlike these expert-examined labels, the reliability of the unsupervised consistency targets is hard to guarantee. Observing the above limitations, it is natural to ask the following question: \textit{can we exploit the unlabeled data via explicit real label supervision for semi-supervised training?} To this end, first, we discard the previous perturbation-based consistency but absorb the essence of non-parametric prototype learning \cite{snell2017prototypical,dong2018few} commonly used in few-shot learning. Based on the prototypical networks, we then propose a novel real label-centric cyclic prototype consistency learning (CPCL) framework for semi-supervised segmentation. The paradigm is shown in Fig. \ref{fig_intro_comparison} (b) and the detailed architecture is shown in Fig. \ref{fig_framework}. Specifically, on top of the self-ensembling strategy, the proposed CPCL consists of an L2U forward process, i.e., using \underline{l}abeled prototypes to segment \underline{u}nlabeled data (Fig. \ref{fig_framework} (a)), and a U2L backward process, i.e., using \underline{u}nlabeled prototypes to segment \underline{l}abeled data back (Fig. \ref{fig_framework} (b)). When the unlabeled data are fed into the self-ensembling model, the L2U forward consistency targets are driven by the real label prototypes and the extracted features from unlabeled data.
Synergistically, the U2L backward process utilizes the unlabeled prototypes, which are driven by the learned unlabeled features and the unlabeled prediction from the self-ensembling model, to segment the labeled data back, so that the backward consistency target can be directly guided by the reliable real label. Such a cyclic scheme enhances the segmentation network by encouraging the network to learn more discriminative and compact representations from labeled and unlabeled data, and turns previous \textit{``unsupervised"} consistency into a new \textit{``supervised"} consistency, obtaining the \textit{``all-around real label supervision"} property of our method. Overall, the main contributions of this work are as follows: \begin{itemize} \item We present a new perspective for semi-supervised medical image segmentation, that is, exploiting the unlabeled data via explicit real label supervision. \item Based on the above perspective, we propose a real label-centric cyclic prototype consistency learning (CPCL) framework for semi-supervised training, which turns previous \textit{``unsupervised"} consistency into a new \textit{``supervised"} consistency with the help of prototypical networks. \item We have conducted extensive experiments on brain tumor segmentation from magnetic resonance imaging (MRI) and kidney segmentation from computed tomography (CT) images. The comparison and ablation studies demonstrate the superiority of our CPCL over other state-of-the-art semi-supervised methods and the effectiveness of each component. \end{itemize} \begin{figure*}[t] \centerline{\includegraphics[width=2\columnwidth]{fig_framework.pdf}} \caption{Illustration of the proposed cyclic prototype consistency learning (CPCL) framework for semi-supervised medical image segmentation (using kidney segmentation as an example).
\textsl{Enc} and \textsl{Dec} represent the encoder and decoder of the student model, while \textsl{EMA Enc} and \textsl{EMA Dec} denote the self-ensembling teacher model updated as the exponential moving average (EMA) of student weights. The feature maps $F_{l}$ and $F_{u}$ are extracted from the last convolution layer of the encoder and upsampled to match the size of the segmentation mask by trilinear interpolation. $\otimes$ denotes the element-wise multiplication operation. ``sim" denotes the cosine similarity calculation between each prototype and to-be-segmented features at each spatial location, followed by a softmax operation to produce the probability map. Apart from the supervised loss ($\mathcal{L}_{s}$) on labeled data, the framework is also supervised by a real label-centric cyclic consistency learning mechanism, which consists of (a) a labeled-to-unlabeled prototypical forward process (contributing to $\mathcal{L}_{fpc}$), and (b) an unlabeled-to-labeled prototypical backward process (contributing to $\mathcal{L}_{bpc}$).} \label{fig_framework} \end{figure*} \section{Related Work} \subsection{Semi-supervised Medical Image Segmentation} To reduce annotation effort, semi-supervised medical image segmentation has been studied for a long time. Most early semi-supervised works performed segmentation with the help of hand-crafted features. For example, \textit{You et al.} \cite{you2011segmentation} presented a prior-based method to improve retinal vessel segmentation from fundus images, and \textit{Portela et al.} \cite{portela2014semi} proposed a clustering-based Gaussian mixture model to segment brain MR images. However, these approaches often suffer from unsatisfactory performance due to the limited representation capacity of the hand-crafted features. With a stronger ability to automatically learn high-level representations, deep learning has greatly advanced semi-supervised medical image segmentation.
\textit{Bai et al.} \cite{bai2017semi} proposed a self-training approach for cardiac segmentation from MRI, which iteratively updates the network parameters and pseudo labels for unlabeled data. Then, multi-view co-training has been explored for semi-supervised liver segmentation \cite{zhou2019semicotrainig} and breast cancer analysis \cite{xia2020cotrainig}. Adversarial learning has also become a popular solution for semi-supervised segmentation \cite{zhang2017DAN,nie2018GANasdnet,sedai2017GANsemi}. As a typical example, \textit{Zhang et al.} \cite{zhang2017DAN} proposed a deep adversarial network (DAN) for biomedical image segmentation by encouraging the predicted segmentation of unlabeled data to be similar to that of labeled data. More recently, consistency learning, which leverages unlabeled data by enforcing unsupervised perturbation-based consistency, has achieved impressive performance in semi-supervised learning. For example, the $\Pi$ model \cite{samuli2017temporal} performs multiple forward predictions under different perturbations and encourages consistency of the network outputs. Then, the temporal ensembling strategy \cite{samuli2017temporal} improves the $\Pi$ model by using \textit{exponential moving average} (EMA) predictions for unlabeled data as the consistency targets. However, maintaining the EMA predictions becomes a heavy burden during training. To cope with it, the mean teacher (MT) framework \cite{tarvainen2017mean,cui2019semi} was proposed, which uses a teacher model with the EMA weights of the student model. Inspired by the MT model, several improved works followed. \textit{Yu et al.} \cite{yu2019uncertainty} extended the MT model by introducing an uncertainty map to select reliable voxels as the consistency targets. \textit{Luo et al.} \cite{luo2021efficient} extended the uncertainty rectified consistency with a pyramid multi-scale strategy.
\textit{Li et al.} \cite{li2020transformation} introduced a transformation consistency to further improve semi-supervised skin lesion segmentation. \textit{Luo et al.} \cite{luo2020DTC} and \textit{Zhang et al.} \cite{zhang2021DTML} combined the regression task with the segmentation task to form a dual-task consistency and thereby explicitly impose geometric constraints. \textit{You et al.} \cite{you2021simcvd} introduced contrastive loss as auxiliary supervision for unlabeled data. Based on contrastive loss, \textit{Lai et al.} \cite{lai2021semiCVPR21} constructed a context-aware consistency to make representations more robust to the environment. Generally, to exploit the unlabeled data, recent consistency-based works are similarly dedicated to investigating better perturbations (e.g., noise and transformations) for unsupervised consistency learning or introducing other parallel tasks to provide auxiliary guidance. However, the existing works utilized the expert-examined labels only for the labeled data training, while the unlabeled data training regrettably could not obtain any explicit guidance from those costly labels. As another alternative, we present a new perspective that exploits the unlabeled data via the trustworthy real label, i.e., turning existing \textit{``unsupervised"} consistency into a new \textit{``supervised"} consistency. \begin{figure}[t] \centerline{\includegraphics[width=0.9\columnwidth]{fig_pronet.pdf}} \caption{Concise diagram of the prototypical networks in image classification \cite{snell2017prototypical}. The prototypical networks aim to obtain well-separated per-class prototypes (stars) in the feature space so that we can classify each query image by comparing the distances (dotted line) between its embedded features and class prototypes.} \label{fig_pronet} \end{figure} \subsection{Prototype Learning} Clearly, the above perspective poses a major challenge: the expert-examined real labels and the unlabeled data are unpaired.
In our work, we mainly draw on the spirit of non-parametric prototype learning in few-shot segmentation to solve this problem. Few-shot segmentation aims to segment targets with only a few labeled samples, wherein the prototype-based approaches perform pixel-wise matching on query images with holistic prototypes of different classes extracted from the support set \cite{snell2017prototypical,zhang2020late,yang2020mixprototype,wang2019panet,li2021adaptive,dong2018few}. Specifically, the prototypical networks \cite{snell2017prototypical} were first proposed to learn a metric space and perform image classification via computing distances to the class-related prototypes. Concisely, such a strategy relies on the idea that there exists an embedding space in which relevant features cluster around a representative prototype for each class, where the prototypical networks aim to learn per-class prototypes on top of sample averaging in the feature space. As shown in Fig. \ref{fig_pronet}, each prototype is the mean vector of the embedding features belonging to its class and can be regarded as the representative of this class. Thereby, classification can be performed by computing distances between the prototype of each class and the embedding features of the query images. Intuitively, such a design encourages more discriminative and compact features towards more appealing classification performance. Following this spirit, \textit{Dong et al.} \cite{dong2018few} further adapted the prototypical networks to tackle the few-shot pixel-wise classification task, i.e., image segmentation. SG-One \cite{zhang2020late} designed a masked average pooling strategy to obtain the squeezed representation of support images and then applied the cosine similarity to perform pixel-wise matching between pixels in query images and the prototypes. PANet \cite{wang2019panet} further introduced a backward prototype alignment between support and query branches to regularize the prototype learning.
ASGNet \cite{li2021adaptive} introduced the superpixel technique to form an adaptive prototype allocation mechanism. FWB \cite{nguyen2019FWB} improved the quality of support prototypes by considering the support feature differences between foreground and background. To tackle the challenge caused by unpaired supervision, we bridge semi-supervised and few-shot segmentation tasks with prototype learning, where the labeled set can be regarded as a support set and the unlabeled images as query images. \section{Methodology} \label{sec:method} Fig. \ref{fig_framework} depicts the proposed cyclic prototype consistency learning (CPCL) framework for semi-supervised medical image segmentation. In the existing consistency-based SSL methods, the expert-examined real labels $Y_{l}$ are only exploited for their corresponding images by means of the supervised loss $\mathcal{L}_{\boldsymbol{s}}$, as shown in Fig. \ref{fig_intro_comparison}(a). Here, we give a new perspective for semi-supervised segmentation. We aim to make full use of those costly pixel-wise real labels, that is, both labeled and unlabeled data can obtain effective guidance from the expert-examined real labels. To this end, we absorb the essence of prototype learning commonly used in few-shot segmentation and form a real label-centric cyclic consistency learning framework (the conceptual paradigm is shown in Fig. \ref{fig_intro_comparison}(b)) by constructing the labeled-to-unlabeled (L2U) prototypical forward process (Fig. \ref{fig_framework}(a)) and the unlabeled-to-labeled (U2L) prototypical backward process (Fig. \ref{fig_framework}(b)). Notably, the proposed CPCL does not employ any perturbation. The proposed framework is described in detail in the following sections. \subsection{Framework Overview} \label{sec:overview} To ease the description of the methodology, we first formulate the semi-supervised segmentation problem.
In this task, the training set consists of $N$ samples in total, while only $M$ samples have the expert-examined real labels and the remaining $N-M$ samples only include the images, e.g., MRI. We denote the labeled set as $\mathcal{S}_{L}=\left\{\left(X_{l(i)}, Y_{l(i)}\right)\right\}_{i=1}^{M}$ and the unlabeled set as $\mathcal{S}_{U}=\left\{X_{u(i)}\right\}_{i=M+1}^{N}$, where $X_{l(i)}, X_{u(i)} \in \mathbb{R}^{H \times W \times D}$ represent the input 3D volumes of height $H$, width $W$, depth $D$, and $Y_{l(i)} \in\{0,1\}^{H \times W \times D}$ is the ground-truth segmentation label of $X_{l(i)}$. Although the core spirit of our CPCL is substantially different from previous perturbation-based approaches, the proposed method can still be regarded as another type of consistency regularization, i.e., we utilize the real label supervision signals to regularize the learning from the unlabeled set. Therefore, this task can be formulated as training the network by optimizing the following objective: \begin{equation} \label{eq:overview} \min _{\theta} \mathcal{L}_{s}\left(\theta,\mathcal{S}_{L}\right)+\lambda \mathcal{L}_{c}(\theta, \tilde{\theta}, \mathcal{S}_{L}, \mathcal{S}_{U}), \end{equation} where $\mathcal{L}_{s}$ is the supervised loss for labeled data; $\mathcal{L}_{c}$ represents the general consistency loss, e.g., the proposed cyclic prototype consistency in our work or the perturbation-based prediction consistency in previous works \cite{cui2019semi,tarvainen2017mean}; $\theta$ and $\tilde{\theta}$ denote the weights of the student model and the teacher model, respectively; $\lambda$ is a ramp-up weight commonly scheduled by the time-dependent Gaussian function $\lambda(t)=w_{max} \cdot e^{\left(-5\left(1-\frac{t}{t_{max}}\right)^{2}\right)}$ to make the tradeoff between $\mathcal{L}_{s}$ and $\mathcal{L}_{c}$ \cite{cui2019semi}, where $w_{max}$ is the final consistency weight and $t_{max}$ is the maximum number of training steps.
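For instance, the Gaussian ramp-up schedule above can be written directly as a short helper; the clamping behavior after $t_{max}$ is an assumption for illustration:

```python
import math

def ramp_up_weight(t, t_max, w_max):
    """Time-dependent consistency weight lambda(t) = w_max * exp(-5 (1 - t/t_max)^2),
    ramping from w_max * e^{-5} at t = 0 up to w_max at t = t_max."""
    t = min(t, t_max)  # assumption: hold the full weight once ramp-up has finished
    return w_max * math.exp(-5.0 * (1.0 - t / t_max) ** 2)
```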
Such a design for $\lambda$ avoids domination by meaningless consistency targets at the beginning of network training. Besides, recent progress on semi-supervised segmentation indicates that constructing a self-ensembling teacher model from the student model at different training steps can enhance the reliability of the teacher model's prediction \cite{cui2019semi}. Thus, following this spirit, we update the teacher model's weights $\tilde{\theta}_{t}$ at the training step $t$ by means of the \textit{exponential moving average} (EMA) approach, which can be formulated as: \begin{equation} \label{eq:EMA} \tilde{\theta}_{t}=\alpha \tilde{\theta}_{t-1}+(1-\alpha) \theta_{t}, \end{equation} where $\alpha$ is the EMA decay rate, set to 0.99 as recommended by \cite{tarvainen2017mean,xu2021noisy}. \subsection{Cyclic Prototype Consistency Learning} As mentioned in Sec. \ref{sec:introduction}, the previous unsupervised targets easily suffer from limited reliability. Without expert-examined supervision, it is difficult to obtain precise segmentation masks from unlabeled images. Instead of constraining unsupervised consistency as in previous approaches, we propose a cyclic prototype consistency learning scheme on top of the prototypical networks \cite{snell2017prototypical} to encourage the network to learn robust, well-representative, and well-separated prototype representations for the segmentation targets while concurrently incorporating contextual information between labeled and unlabeled data. Specifically, since the voxel embeddings belonging to the same segmentation targets should be similar, we adopt the masked average pooling operation with the late prototype generation strategy \cite{zhang2020late}, which masks the intermediate foreground/background features extracted from the encoder with the segmentation masks.
These prototypes with abundant contextual information from labeled data (or unlabeled data) can then be exploited to segment the unlabeled data (or labeled data) via non-parametric metric learning. This strategy is integrated into our cyclic framework through two tailored processes, i.e., the L2U forward consistency and the U2L backward consistency, elaborated in turn below. \subsubsection{L2U Forward Consistency} The purpose of the L2U forward process (i.e., using labeled prototypes to segment unlabeled data) is to utilize the generated prototype to transfer the real label supervision signals from labeled data to unlabeled data, as shown in Fig. \ref{fig_framework} (a). Note that since most previous efforts are evaluated in binary segmentation tasks, we focus on binary segmentation throughout this work, i.e., one foreground target and one background target are considered. However, the proposed framework can be easily adapted to multi-class segmentation, i.e., generating multiple foreground prototypes for different segmentation targets. Specifically, let $F_{l(k)}$ be the feature map extracted by the encoder for the labeled image $X_{l(k)}$, where $k=1, ..., K$ indexes the images in a mini-batch during training. The corresponding real label of $X_{l(k)}$ is $Y_{l(k)}$.
The prototype $p_{l\mathrm{(fg)}}$ of the foreground segmentation target $\mathcal{C_\mathrm{fg}}$ is generated via masked average pooling \cite{zhang2020late}: \begin{equation} \label{eq:plfg} p_{l\mathrm{(fg)}}=\frac{1}{K} \sum_{k} \frac{\sum_{x, y, z} F_{l(k)}^{(x, y, z)} \mathds{1}\left[Y_{l(k)}^{(x, y, z)}\in\mathcal{C_\mathrm{fg}}\right]}{\sum_{x, y, z} \mathds{1}\left[Y_{l(k)}^{(x, y, z)}\in\mathcal{C_\mathrm{fg}}\right]}, \end{equation} where the feature map $F_{l(k)}$ is upsampled to match the size of the segmentation mask by trilinear interpolation; $(x, y, z)$ denotes the spatial location for each voxel; and $\mathds{1}(\cdot)$ represents the indicator function that returns 1 when the condition is true or 0 otherwise. Note that to simplify the illustration, we omit the prototype of the background in Fig. \ref{fig_framework}. Accordingly, the prototype of the background can also be generated by: \begin{equation} \label{eq:plbg} p_{l\mathrm{(bg)}}=\frac{1}{K} \sum_{k} \frac{\sum_{x, y, z} F_{l(k)}^{(x, y, z)} \mathds{1}\left[Y_{l(k)}^{(x, y, z)}\notin \mathcal{C_\mathrm{fg}}\right]}{\sum_{x, y, z} \mathds{1}\left[Y_{l(k)}^{(x, y, z)}\notin \mathcal{C_\mathrm{fg}}\right]}. \end{equation} To learn the optimal prototypes collaboratively with the segmentation network, we adopt a non-parametric metric learning mechanism. Notably, this mechanism introduces no extra learnable parameters that could lead to over-fitting. Concretely, we introduce a distance function $d(\cdot)$ to measure the similarity between the unlabeled feature map (denoted as $F_{u}$) extracted from the self-ensembling teacher model and the labeled prototypes $p_{l\mathrm{(fg)}}$ or $p_{l\mathrm{(bg)}}$. Then, the softmax function over the similarities is applied to produce the probability map $P_{l2u}$ over the classes.
Let $\mathcal{P}_{l}=\left\{p_{l_\mathrm{(fg)}}\right\} \cup\left\{p_{l_\mathrm{(bg)}}\right\}$. For each $p_{l(j \in\{bg,fg\} )} \in \mathcal{P}_{l}$, we have: \begin{equation} \label{eq:pl2u} P_{l2u(j)}^{(x, y, z)}=\frac{\exp \left(-\alpha d\left(F_{u}^{(x, y, z)}, p_{l(j)}\right)\right)}{\sum_{p_{l(j)} \in \mathcal{P}_{l}} \exp \left(-\alpha d\left(F_{u}^{(x, y, z)}, p_{l(j)}\right)\right)}, \end{equation} where $d(\cdot)$ adopts the cosine distance, and the multiplier $\alpha$ is a scaling factor set to 20 as recommended by \cite{wang2019panet}. Note that the unlabeled feature map $F_{u}$ also undergoes the trilinear-interpolation-based upsampling process. After obtaining the labeled-to-unlabeled prototypical prediction $P_{l2u}$ via metric learning, we propagate the real label signal to the unlabeled data via the forward prototype consistency loss $\mathcal{L}_{fpc}$, calculated by: \begin{equation} \mathcal{L}_{fpc}=\mathcal{L}_{mse}(P_{l2u}, P_{u}), \end{equation} where $P_{u}$ is the predicted probability of the unlabeled data from the entire self-ensembling teacher model and $\mathcal{L}_{mse}$ denotes the mean squared error (MSE) loss. In this way, the real-label-driven prototypical segmentation $P_{l2u}$ can serve as effective guidance for the unlabeled data to discover the desired target regions, and thereby provide more meaningful knowledge for the network training, as experimentally demonstrated in Sec. \ref{sec:ablation}. \subsubsection{U2L Backward Consistency} Synergistically, a backward process is expected to discover the most representative prototypes of the same target regions from the unlabeled data, where stricter and more direct guidance is imposed by the labeled data via U2L backward consistency, i.e., using unlabeled prototypes to segment labeled data, as shown in Fig. \ref{fig_framework} (b).
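Both directions of the cycle share the same non-parametric prediction step of Eqn. (\ref{eq:pl2u}): a softmax over negative scaled cosine distances between voxel features and the prototype set. A minimal NumPy sketch of this step, with hypothetical toy shapes (the real implementation applies it to full PyTorch feature maps):

```python
import numpy as np

def prototype_probability(feat, prototypes, alpha=20.0):
    """Softmax over negative scaled cosine distances to each prototype.

    feat:       (C, N) feature vectors for N voxels.
    prototypes: list of (C,) prototypes, ordered as [bg, fg].
    Returns an (n_classes, N) probability map over the classes.
    """
    f = feat / (np.linalg.norm(feat, axis=0, keepdims=True) + 1e-8)
    logits = []
    for p in prototypes:
        p = p / (np.linalg.norm(p) + 1e-8)
        cos_sim = p @ f                           # cosine similarity per voxel
        logits.append(-alpha * (1.0 - cos_sim))   # -alpha * cosine distance
    logits = np.stack(logits)
    logits -= logits.max(axis=0, keepdims=True)   # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=0, keepdims=True)

# A voxel whose feature aligns with the fg prototype gets a high fg probability.
P_l2u = prototype_probability(np.array([[0.1], [0.9]]),
                              [np.array([1.0, 0.0]), np.array([0.0, 1.0])])
assert P_l2u[1, 0] > 0.99

# Forward consistency: MSE between prototypical and model predictions.
P_u = np.array([[0.2], [0.8]])
l_fpc = np.mean((P_l2u - P_u) ** 2)
```

With the scaling factor $\alpha=20$, the softmax is sharply peaked, so a voxel is assigned almost entirely to its nearest prototype in cosine distance.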
To comprehensively enhance $P_{u}$, $F_{u}$ and $F_{l}$, we use the unlabeled prediction ($P_{u}$) and the extracted feature maps ($F_{l}$ and $F_{u}$) from the above L2U forward process to deduce the prototypical predictions $P_{u2l}$ back for the labeled data. In this way, the real label $Y_{l}$ can be directly employed again to guide the learning from unlabeled data (i.e., $\mathcal{L}_{bpc}$ in Fig. \ref{fig_framework}), yielding the \textit{``all-around real label supervision"} property of our method. Specifically, following Eqns. (\ref{eq:plfg}), (\ref{eq:plbg}) and (\ref{eq:pl2u}), we can reversely obtain the unlabeled-to-labeled prototypical prediction $P_{u2l}$ via non-parametric metric learning as well. First, the unlabeled binary prediction mask is given by \begin{equation} \hat{Y}_{u}=\underset{j\in\{bg,fg\}}{\arg \max }(P_{u(j)}). \end{equation} Then, the foreground and background holistic prototypes for unlabeled data can be obtained by masked average pooling: \begin{equation}\left\{\begin{array}{l} p_{u\mathrm{(fg)}}=\frac{1}{K} \sum_{k} \frac{\sum_{x, y, z} F_{u(k)}^{(x, y, z)} \mathds{1}\left[\hat{Y}_{u(k)}^{(x, y, z)}\in\mathcal{C_\mathrm{fg}}\right]}{\sum_{x, y, z} \mathds{1}\left[\hat{Y}_{u(k)}^{(x, y, z)}\in\mathcal{C_\mathrm{fg}}\right]}; \\ p_{u\mathrm{(bg)}}=\frac{1}{K} \sum_{k} \frac{\sum_{x, y, z} F_{u(k)}^{(x, y, z)} \mathds{1}\left[\hat{Y}_{u(k)}^{(x, y, z)}\notin \mathcal{C_\mathrm{fg}}\right]}{\sum_{x, y, z} \mathds{1}\left[\hat{Y}_{u(k)}^{(x, y, z)}\notin \mathcal{C_\mathrm{fg}}\right]}.
\end{array}\right.\end{equation} After obtaining the unlabeled prototypes $\mathcal{P}_{u}=\left\{p_{u_\mathrm{(fg)}}\right\} \cup\left\{p_{u_\mathrm{(bg)}}\right\}$, we can further calculate the cosine distances and produce the U2L probability map $P_{u2l}$ by: \begin{equation} \label{eq:pu2l} P_{u2l(j)}^{(x, y, z)}=\frac{\exp \left(-\alpha d\left(F_{l}^{(x, y, z)}, p_{u(j)}\right)\right)}{\sum_{p_{u(j)} \in \mathcal{P}_{u}} \exp \left(-\alpha d\left(F_{l}^{(x, y, z)}, p_{u(j)}\right)\right)}. \end{equation} Finally, the backward prototype consistency loss $\mathcal{L}_{bpc}$ is calculated by: \begin{equation} \mathcal{L}_{bpc}=\mathcal{L}_{ce}(Y_{l}, P_{u2l}), \end{equation} where $\mathcal{L}_{ce}$ denotes the commonly used cross-entropy (CE) loss. Intuitively, if the self-ensembling teacher model predicts a good segmentation mask for the unlabeled data along with well-discriminative features $F_{u}$, the unlabeled prototypes are able to distinguish the labeled features $F_{l}$ voxel-by-voxel and thereby segment the labeled images well. Thus, the backward process further regularizes the quality of $P_{u}$, in turn enhancing the forward process. The two processes collaborate closely in a cyclic scheme that turns the previous \textit{``unsupervised"} consistency into new \textit{``supervised"} consistency, and this cycle forces the model to improve the quality of both the embedding spaces and the segmentation predictions, as experimentally demonstrated in Sec. \ref{sec:ablation}. \subsection{Semi-supervised Training} \label{sec:loss} The total objective function to train our cyclic prototype consistency learning framework is a weighted combination of the supervised loss $\mathcal{L}_{s}$ on labeled data only, and the forward and backward prototype consistency losses $\mathcal{L}_{fpc}$ and $\mathcal{L}_{bpc}$ driven by both labeled and unlabeled data synergistically.
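The weighted combination just described can be sketched as follows. The sigmoid-shaped ramp-up and its length (4,000 steps here) are illustrative assumptions following common practice in consistency-based semi-supervised training; $w_{max}=0.1$ and $\beta=10$ are the values used in our experiments:

```python
import math

def rampup_weight(step, max_step=4000, w_max=0.1):
    # Time-dependent consistency weight lambda. The sigmoid-shaped ramp-up
    # and its length (4,000 steps) are illustrative assumptions; w_max = 0.1
    # follows the experimental settings.
    t = min(step, max_step) / max_step
    return w_max * math.exp(-5.0 * (1.0 - t) ** 2)

def total_loss(l_s, l_fpc, l_bpc, step, beta=10.0):
    """L = L_s + lambda * (L_fpc + beta * L_bpc)."""
    return l_s + rampup_weight(step) * (l_fpc + beta * l_bpc)

# Early in training the consistency term is almost switched off;
# it reaches full strength (lambda = w_max) at the end of the ramp-up.
assert total_loss(1.0, 0.2, 0.1, step=0) < total_loss(1.0, 0.2, 0.1, step=4000)
```

The ramp-up lets the network first fit the labeled data before the prototype consistency terms, which depend on the still-unreliable teacher predictions, take effect.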
Formally, the total loss is calculated as follows: \begin{equation} \label{eq:total_loss} \mathcal{L}=\mathcal{L}_{s}+\lambda \mathcal{L}_{c}, \quad \text { with } \quad \mathcal{L}_{c}=\mathcal{L}_{fpc}+\beta \mathcal{L}_{bpc}, \end{equation} where $\lambda$ is the time-dependent ramp-up weight described in Sec. \ref{sec:overview}, and $\beta$ is a hyper-parameter to balance $\mathcal{L}_{fpc}$ and $\mathcal{L}_{bpc}$, which is generally set to 10; the effect of this hyper-parameter is studied in Sec. \ref{sec:beta}. Note that the supervised loss $\mathcal{L}_{s}$ is a combination of cross-entropy loss and Dice loss: \begin{equation} \label{eq:Ls} \mathcal{L}_{s}=0.5\,\mathcal{L}_{ce}(Y_{l}, P_{l})+0.5\,\mathcal{L}_{Dice}(Y_{l}, P_{l}), \end{equation} since we found that this combination provides better performance under most supervised-only settings in our exploratory study. \section{Experiments} We evaluate our proposed semi-supervised segmentation method on both whole brain tumor segmentation from T2 fluid-attenuated inversion recovery (T2-FLAIR) MRI and kidney segmentation from arterial-phase abdominal CT scans, with extensive ablation analysis and a comparison study against state-of-the-art semi-supervised methods. \subsection{Datasets and Experimental Setup} \subsubsection{Brain Tumor Segmentation Dataset} The experiment on whole brain tumor segmentation is performed using the T2-FLAIR MRI data from the BraTS 2019 challenge \cite{brats19}. The entire dataset contains multi-institutional preoperative MRI of 335 glioma patients, including 259 high-grade glioma (HGG) patients and 76 low-grade glioma (LGG) patients, where each patient has four modalities of MRI scans (T1, T1Gd, T2 and T2-FLAIR) with neuroradiologist-examined pixel-wise labels. Here, we use T2-FLAIR for whole tumor segmentation since this modality can better manifest the malignant tumors and is critical to brain surgery of LGG \cite{zeineldin2020deepseg}.
In our experiments, the MRI scans are resampled to the same resolution (1 $mm^3$) with intensity normalized to zero mean and unit variance. The data split follows the common settings in the public benchmark \cite{SSL4MIS}, where 250 samples are used for training, 25 for validation and the remaining 60 for testing. \subsubsection{Kidney Segmentation Dataset} We conduct the experiment on kidney segmentation using the KiTS19 dataset \cite{heller2019kits19}. This dataset collects preoperative 3D abdominal CT images in the late-arterial phase from 210 patients, along with manual segmentation labels provided by experts. We preprocess the data by resampling to 1 $mm^3$ resolution, intensity truncation to $[-75, 175]$ HU followed by intensity normalization, and region-of-interest (ROI) cropping (extracting the 3D patches centered at the kidney). Then, we randomly divide the dataset into three groups: 150 for training, 10 for validation and the remaining 50 for testing. \subsubsection{Baseline Approaches} We compare our method with the supervised-only baselines and several state-of-the-art semi-supervised medical image segmentation methods, including: the mean-teacher self-ensembling model (MT) \cite{cui2019semi}, uncertainty-aware MT (UA-MT) \cite{yu2019uncertainty}, the entropy minimization approach (Entropy Mini) \cite{vu2019EM}, the deep adversarial network (DAN) \cite{zhang2017DAN}, interpolation consistency training (ICT) \cite{verma2019ICT} and dual-task mutual learning (DTML) \cite{zhang2021DTML}. We compare these methods using the same backbone and partition protocols to ensure fairness. \subsubsection{Implementation Details and Evaluation Metrics} The framework is implemented in Python with PyTorch, using an NVIDIA GeForce RTX 3090 GPU with 24GB memory. In all experiments, we adopt the same 3D U-Net \cite{3Dunet} as the backbone for a fair comparison. The network is trained using the SGD optimizer (weight decay=$0.0001$, momentum=$0.9$).
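As a side note on the data pipeline, the CT intensity preprocessing described above (truncation to $[-75, 175]$ HU followed by zero-mean, unit-variance normalization) can be sketched as follows. This is a minimal NumPy sketch; the resampling to 1 $mm^3$ and ROI cropping that precede this step are omitted:

```python
import numpy as np

def preprocess_ct(volume_hu, lo=-75.0, hi=175.0):
    """Truncate intensities to [-75, 175] HU, then normalize the volume to
    zero mean and unit variance. Resampling to 1 mm^3 and ROI cropping,
    which would precede this step, are omitted here."""
    v = np.clip(volume_hu, lo, hi).astype(np.float64)
    return (v - v.mean()) / (v.std() + 1e-8)

# Toy volume in Hounsfield units.
ct = np.random.default_rng(0).normal(40.0, 120.0, size=(8, 8, 8))
x = preprocess_ct(ct)
assert abs(x.mean()) < 1e-6 and abs(x.std() - 1.0) < 1e-6
```

The HU window $[-75, 175]$ concentrates the dynamic range on soft tissue, suppressing irrelevant contrast from bone and air before normalization.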
The batch size is set to 4, including 2 labeled images and 2 unlabeled images in each mini-batch. The final consistency weight $w_{max}$ is empirically set to 0.1, following previous consistency-based methods \cite{yu2019uncertainty}. The maximum number of training steps is set to 20,000 for both tasks. The learning rate is initialized as $0.01$ and decayed with a power of 0.9 after each step. We randomly crop patches of $96 \times 96 \times 96$ voxels as the network input. Standard data augmentation, including random cropping, flipping and rotation, is also applied. For a fair comparison, no extra post-processing or ensemble methods are utilized. We use a sliding-window strategy with a stride of $64 \times 64 \times 64$ voxels at the inference stage. Then, we adopt four metrics for a comprehensive evaluation: Dice score, Jaccard, average surface distance (ASD) and 95\% Hausdorff distance (95HD). A two-sided paired t-test with $p \leq 0.05$ is also introduced to test whether there is a statistically significant difference between the performance of the proposed method and that of the others. \begin{table*}[!ht] \centering \caption{Quantitative comparison study on the brain tumor segmentation task \cite{brats19}. Standard deviations are shown in parentheses. $*$ indicates $p\leq 0.05$ from a two-sided paired t-test when comparing ours with others.
The best mean results are shown in bold.}\label{com_result_brain} \scalebox{1}{ \begin{tabular}{c|c|c|l|l|l|l} \Xhline{1pt} \multirow{2}{*}{Method} & \multicolumn{2}{c|}{\# Training set} & \multicolumn{4}{c}{Metrics} \\ \cline{2-7} & Labeled & Unlabeled & \multicolumn{1}{c|}{Dice (\%) $\uparrow$} & \multicolumn{1}{c|}{Jaccard (\%) $\uparrow$} & \multicolumn{1}{c|}{95HD (mm) $\downarrow$} & \multicolumn{1}{c}{ASD (mm) $\downarrow$} \\ \hline Supervised-only & 100\% & 0\% & 83.84 \scriptsize{(11.93)} & 74.79 \scriptsize{(15.93)} & 8.32 \scriptsize{(9.87)} & 2.13 \scriptsize{(1.66)} \\\hline Supervised-only & 10\% & 0\% & 74.43 \scriptsize{(16.67)}$^{*}$ & 61.86 \scriptsize{(19.62)}$^{*}$ & 37.11 \scriptsize{(34.12)}$^{*}$ & 2.79 \scriptsize{(2.08)}$^{*}$ \\ MT \cite{cui2019semi} & 10\% & 90\% & 81.94 \scriptsize{(14.53)}$^{*}$ & 71.67 \scriptsize{(18.51)}$^{*}$ & 13.62 \scriptsize{(16.05)}$^{*}$ & 2.33 \scriptsize{(1.99)}$^{*}$ \\ UA-MT \cite{yu2019uncertainty} & 10\% & 90\% & 80.72 \scriptsize{(16.18)}$^{*}$ & 70.30 \scriptsize{(19.54)}$^{*}$ & 11.76 \scriptsize{(13.40)} & 2.72 \scriptsize{(2.84)}$^{*}$ \\ Entropy Mini \cite{vu2019EM} & 10\% & 90\% & 82.27 \scriptsize{(14.61)}$^{*}$ & 72.15 \scriptsize{(18.41)}$^{*}$ & 11.98 \scriptsize{(13.75)} & 2.42 \scriptsize{(2.10)}$^{*}$ \\ DAN \cite{zhang2017DAN} & 10\% & 90\% & 81.71 \scriptsize{(14.92)}$^{*}$ & 71.43 \scriptsize{(18.79)}$^{*}$ & 15.15 \scriptsize{(20.38)}$^{*}$ & 2.32 \scriptsize{(2.15)}$^{*}$ \\ ICT \cite{verma2019ICT} & 10\% & 90\% & 77.60 \scriptsize{(22.25)}$^{*}$ & 67.72 \scriptsize{(24.10)}$^{*}$ & 15.19 \scriptsize{(18.99)}$^{*}$ & 3.76 \scriptsize{(4.86)}$^{*}$ \\ DTML \cite{zhang2021DTML} & 10\% & 90\% & 81.86 \scriptsize{(13.45)}$^{*}$ & 71.83 \scriptsize{(17.58)}$^{*}$ & 16.24 \scriptsize{(21.65)}$^{*}$ & 2.48 \scriptsize{(1.80)}$^{*}$ \\ CPCL (ours) & 10\% & 90\% & \textbf{83.36} \scriptsize{(12.58)} & \textbf{73.23} \scriptsize{(16.43)} & \textbf{11.74} \scriptsize{(10.02)} & \textbf{1.99} 
\scriptsize{(1.57)} \\ \hline Supervised-only & 30\% & 0\% & 78.07 \scriptsize{(11.12)}$^{*}$ & 66.53 \scriptsize{(14.89)}$^{*}$ & 28.58 \scriptsize{(11.67)}$^{*}$ & 2.67 \scriptsize{(1.85)}$^{*}$ \\ MT \cite{cui2019semi} & 30\% & 70\% & 83.46 \scriptsize{(13.90)}$^{*}$ & 73.62 \scriptsize{(16.98)}$^{*}$ & 10.38 \scriptsize{(12.70)}$^{*}$ & 2.35 \scriptsize{(2.45)}$^{*}$ \\ UA-MT \cite{yu2019uncertainty} & 30\% & 70\% & 82.63 \scriptsize{(13.29)}$^{*}$ & 72.28 \scriptsize{(16.69)}$^{*}$ & 10.01 \scriptsize{(12.13)}$^{*}$ & 2.42 \scriptsize{(1.97)}$^{*}$ \\ Entropy Mini \cite{vu2019EM} & 30\% & 70\% & 84.75 \scriptsize{(12.85)}$^{*}$ & 75.31 \scriptsize{(16.07)} & 9.10 \scriptsize{(11.44)} & 2.16 \scriptsize{(2.23)}$^{*}$ \\ DAN \cite{zhang2017DAN} & 30\% & 70\% & 84.33 \scriptsize{(13.06)}$^{*}$ & 74.74 \scriptsize{(16.44)}$^{*}$ & 10.46 \scriptsize{(16.23)}$^{*}$ & 2.24 \scriptsize{(2.25)}$^{*}$ \\ ICT \cite{verma2019ICT} & 30\% & 70\% & 82.17 \scriptsize{(17.01)}$^{*}$ & 72.49 \scriptsize{(19.45)}$^{*}$ & \textbf{8.82} \scriptsize{(9.79)} & 2.61 \scriptsize{(3.21)}$^{*}$ \\ DTML \cite{zhang2021DTML} & 30\% & 70\% & 84.81 \scriptsize{(11.17)}$^{*}$ & 75.00 \scriptsize{(14.38)}$^{*}$ & 8.99 \scriptsize{(10.52)} & 2.02 \scriptsize{(1.72)} \\ CPCL (ours) & 30\% & 70\% & \textbf{85.22} \scriptsize{(11.12)} & \textbf{75.68} \scriptsize{(13.89)} & 8.97 \scriptsize{(11.65)} & \textbf{1.96} \scriptsize{(1.84)} \\ \hline Supervised-only & 50\% & 0\% & 79.38 \scriptsize{(14.81)}$^{*}$ & 68.37 \scriptsize{(17.87)}$^{*}$ & 30.19 \scriptsize{(22.13)}$^{*}$ & 2.69 \scriptsize{(2.02)}$^{*}$ \\ MT \cite{cui2019semi} & 50\% & 50\% & 85.67 \scriptsize{(11.58)}$^{*}$ & 76.46 \scriptsize{(15.18)}$^{*}$ & \textbf{8.77} \scriptsize{(12.18)}$^{*}$ & 1.93 \scriptsize{(1.82)} \\ UA-MT \cite{yu2019uncertainty} & 50\% & 50\% & 84.10 \scriptsize{(14.11)}$^{*}$ & 74.65 \scriptsize{(17.26)}$^{*}$ & 8.79 \scriptsize{(11.01)}$^{*}$ & 2.16 \scriptsize{(2.16)}$^{*}$ \\ Entropy Mini \cite{vu2019EM} 
& 50\% & 50\% & 85.71 \scriptsize{(11.74)}$^{*}$ & 76.54 \scriptsize{(15.23)}$^{*}$ & 9.07 \scriptsize{(11.71)} & 1.95 \scriptsize{(1.83)} \\ DAN \cite{zhang2017DAN} & 50\% & 50\% & 86.05 \scriptsize{(11.40)} & 76.99 \scriptsize{(14.88)} & 9.96 \scriptsize{(12.07)}$^{*}$ & 1.92 \scriptsize{(1.88)} \\ ICT \cite{verma2019ICT} & 50\% & 50\% & 82.76 \scriptsize{(15.06)}$^{*}$ & 72.90 \scriptsize{(18.19)}$^{*}$ & 9.93 \scriptsize{(11.37)}$^{*}$ & 2.53 \scriptsize{(2.56)}$^{*}$ \\ DTML \cite{zhang2021DTML} & 50\% & 50\% & 86.07 \scriptsize{(11.04)} & 76.67 \scriptsize{(14.60)}$^{*}$ & 9.54 \scriptsize{(11.22)}$^{*}$ & 1.94 \scriptsize{(1.68)} \\ CPCL (ours) & 50\% & 50\% & \textbf{86.41} \scriptsize{(10.08)} & \textbf{77.29} \scriptsize{(13.79)} & 9.04 \scriptsize{(10.88)} & \textbf{1.90} \scriptsize{(1.74)} \\ \Xhline{1pt} \end{tabular}} \end{table*} \begin{figure*}[!t] \centerline{\includegraphics[width=2\columnwidth]{fig_results_brain.pdf}} \caption{Examples of the whole brain tumor segmentation results of the proposed CPCL and other state-of-the-art approaches under 10\% labeled data setting. Red color represents the segmented whole tumor.} \label{fig_results_brain} \end{figure*} \subsection{Experiments on Brain Tumor Segmentation} To comprehensively validate the proposed semi-supervised approach, we conduct a comparison study along with rigorous ablation study over the brain tumor segmentation task. \subsubsection{Comparison Study} Table \ref{com_result_brain} presents the performance of our method and other state-of-the-art methods under 10\%, 30\% and 50\% labeled data settings. The supervised-only methods serve as the baselines. We observe that segmenting whole brain tumors is much more challenging than the kidney segmentation task (elaborated in Sec. \ref{sec:kidney}) due to the ambiguous tumor boundary and high diversity in tumor appearance. 
As shown in Table \ref{com_result_brain}, all semi-supervised methods yield substantial improvements over the supervised-only baselines, revealing that the unlabeled data contain rich and diverse information that can facilitate network learning. Interestingly, we observe that some semi-supervised methods (including ours) can approach or even surpass the supervised-only baseline trained with 100\% labeled data, implying that semi-supervised training may provide more productive guidance than label-only supervision in this challenging task and thus alleviate the over-fitting issue. Among the existing methods, MT \cite{cui2019semi}, Entropy Mini \cite{vu2019EM}, DAN \cite{zhang2017DAN} and DTML \cite{zhang2021DTML} achieve more competitive performance. In particular, the Entropy Mini framework trained with only 10\% labeled data achieves the overall largest improvements over the supervised-only baseline in terms of the four metrics. The perturbation-based self-ensembling model, i.e., MT, also performs well above the baseline but is slightly worse than Entropy Mini, especially on the 95HD metric. A similar observation holds for DAN and DTML under the 10\% labeled data setting, where their 95HD values are markedly higher than that of Entropy Mini (15.15 $mm$/16.24 $mm$ vs. 11.98 $mm$). Although UA-MT obtains slightly worse overall results, it achieves the best 95HD performance among the existing methods under the 10\% labeled data setting. As the amount of labeled data increases, the performance of all models improves consistently but the gains gradually diminish. Notably, under the most challenging 10\% labeled data setting, the proposed CPCL achieves consistently better performance, with most metrics showing a significant difference (under the two-sided paired t-test) compared with the other baselines.
Overall, the proposed CPCL outperforms the supervised-only baselines and other state-of-the-art semi-supervised methods with different amounts of labeled data, demonstrating that \textit{``all-around real label supervision"} has a stronger and more reliable capability to exploit the effective information from the unlabeled data. Fig. \ref{fig_results_brain} presents some whole tumor segmentation results of the proposed CPCL and other approaches under 10\% labeled data setting. Consistently, the prediction mask of our proposed CPCL fits more accurately with the ground-truth mask, which further demonstrates the effectiveness of our method. \begin{table}[t]\scriptsize \centering \caption{Ablation study on whole brain tumor segmentation under 10\% labeled data setting. Standard deviations are shown in parentheses. Best results are in bold.}\label{table_ablation_brain} \scalebox{1}{ \begin{tabular}{c|c|c|c|c} \Xhline{1pt} \multirow{2}{*}{Method} & \multicolumn{4}{c}{Metrics} \\ \cline{2-5} & Dice (\%) $\uparrow$ & Jaccard (\%) $\uparrow$ & 95HD (mm) $\downarrow$ & ASD (mm) $\downarrow$ \\ \hline MT & 81.94 \tiny{(14.53)} & 71.67 \tiny{(18.51)} & 13.62 \tiny{(16.05)} & 2.33 \tiny{(1.99)} \\ F-PCL & 82.93 \tiny{(13.22)} & 72.76 \tiny{(17.12)} & 12.87 \tiny{(14.69)} & 2.03 \tiny{(1.56)} \\ B-PCL & 83.16 \tiny{(12.56)} & \textbf{73.39} \tiny{(16.58)} & 13.20 \tiny{(15.95)} & 2.00 \tiny{(1.55)} \\ MT-F-PCL & 82.72 \tiny{(12.96)} & 72.40 \tiny{(17.06)} & 14.03 \tiny{(17.60)} & 2.11 \tiny{(1.64)} \\ MT-B-PCL & 82.90 \tiny{(13.09)} & 72.70 \tiny{(17.19)} & 15.32 \tiny{(16.57)} & 2.09 \tiny{(1.70)} \\ MT-C-PCL & 82.86 \tiny{(13.68)} & 72.79 \tiny{(17.64)} & 13.34 \tiny{(17.28)} & 2.14 \tiny{(1.78)} \\ CPCL & \textbf{83.36} \tiny{(12.58)} & 73.23 \tiny{(16.43)} & \textbf{11.74} \tiny{(10.02)} & \textbf{1.99} \tiny{(1.57)} \\ \Xhline{1pt} \end{tabular}} \end{table} \subsubsection{Analytical Ablation Study} \label{sec:ablation} Our CPCL framework partly benefits from the 
self-ensembling strategy. Besides the typical self-ensembling mean-teacher (MT) model, we further propose different variants to perform an ablation study under the most challenging 10\% labeled data setting: a) \textbf{F-PCL}: only preserving the forward prototype consistency learning, i.e., $\mathcal{L}=\mathcal{L}_{s}+\lambda \mathcal{L}_{fpc}$; b) \textbf{B-PCL}: only preserving the backward prototype consistency, i.e., $\mathcal{L}=\mathcal{L}_{s}+\lambda \beta\mathcal{L}_{bpc}$; c) \textbf{MT-F-PCL}: combining the previous perturbation-based consistency (adding random Gaussian noises) and our forward consistency; d) \textbf{MT-B-PCL}: combining the previous perturbation-based consistency and our backward consistency; e) \textbf{MT-C-PCL}: combining the previous perturbation-based consistency and our cyclic prototype consistency. As shown in Table \ref{table_ablation_brain}, both the forward and backward prototype learning mechanisms can independently contribute to performance gains compared to the previous perturbation-based consistency method (i.e., the MT model). Specifically, the improvement brought by F-PCL demonstrates that the real label prototypes bring more effective and meaningful knowledge for training the segmentation network than the previous perturbed unsupervised targets. Interestingly, B-PCL achieves more competitive results than F-PCL. Referring back to Fig. \ref{fig_framework}, we find that the prototypical prediction of the backward process relies jointly on $P_{u}$, $F_{u}$ and $F_{l}$ and has a more direct interaction with the real label $Y_{l}$, while $Y_{l}$ in the forward process plays an oblique but critical role in prototype extraction. This observation indicates that more direct real label supervision helps the framework learn more effective information.
Besides, we can observe that combining our prototype consistency with the previous perturbation-based consistency (MT-F-PCL, MT-B-PCL and MT-C-PCL) can also improve the overall performance but not as much as only using our prototype consistency mechanism (F-PCL, B-PCL and CPCL). Particularly, MT-F-PCL, MT-B-PCL and MT-C-PCL obtain relatively worse 95HD than only using our prototype consistency. Therefore, we recommend not mixing the two mechanisms. Thanks to the complementary forward and backward processes, the overall superior and robust performance can be achieved via our CPCL framework. The ablation study further demonstrates that CPCL can serve as a superior alternative to the previous perturbation-based consistency method. \subsubsection{Impact of Different Loss Weight $\beta$} \label{sec:beta} \begin{table}[t]\scriptsize \centering \caption{Results of whole brain tumor segmentation with different loss weight $\beta$ under 10\% labeled data setting. Standard deviations are shown in parentheses. Best results are in bold.}\label{table_beta_brain} \scalebox{1}{ \begin{tabular}{c|c|c|c|c} \Xhline{1pt} \multirow{2}{*}{$\beta$} & \multicolumn{4}{c}{Metrics} \\ \cline{2-5} & Dice (\%) $\uparrow$ & Jaccard (\%) $\uparrow$ & 95HD (mm) $\downarrow$ & ASD (mm) $\downarrow$ \\ \hline 1 & 82.89 \tiny{(13.10)} & 72.67 \tiny{(17.07)} & 15.32 \tiny{(22.14)} & 2.01 \tiny{(1.67)} \\ 5 & 83.24 \tiny{(13.05)} & 73.18 \tiny{(17.01)} & \textbf{11.47} \tiny{(14.21)} & 2.02 \tiny{(1.60)} \\ 10 & \textbf{83.36} \tiny{(12.58)} & \textbf{73.23} \tiny{(16.43)} & 11.74 \tiny{(10.02)} & \textbf{1.99} \tiny{(1.57)} \\ 15 & 83.29 \tiny{(12.58)} & 73.14 \tiny{(16.54)} & 14.10 \tiny{(18.60)} & 2.00 \tiny{(1.60)} \\ 20 & 82.87 \tiny{(13.39)} & 72.78 \tiny{(17.47)} & 13.21 \tiny{(18.48)} & 2.11 \tiny{(1.74)} \\ \Xhline{1pt} \end{tabular}} \end{table} The ablation study indicates that the semi-supervised training benefits more from the backward process than the forward process. 
As shown in our loss function (Eqn. (\ref{eq:total_loss})), a fixed factor $\beta$ is introduced to control the trade-off between the forward process and backward process. Here, we investigate the impact of different $\beta$, and the results are shown in Table \ref{table_beta_brain}. It can be observed that the proposed CPCL is not particularly sensitive to $\beta$, except for the 95HD metric, but overall, it performs optimally when $\beta = 10$. Thus, we set $\beta = 10$ in all experiments involving the prototypical backward process. \begin{table*}[!ht] \centering \caption{Quantitative comparison study on the kidney segmentation task \cite{heller2019kits19}. Standard deviations are shown in parentheses. $*$ indicates $p\leq 0.05$ from a two-sided paired t-test when comparing ours with others. The best mean results are in bold.}\label{com_result_kits} \scalebox{1}{ \begin{tabular}{c|c|c|l|l|l|l} \Xhline{1pt} \multirow{2}{*}{Method} & \multicolumn{2}{c|}{\# Training set} & \multicolumn{4}{c}{Metrics} \\ \cline{2-7} & Labeled & Unlabeled & \multicolumn{1}{c|}{Dice (\%) $\uparrow$} & \multicolumn{1}{c|}{Jaccard (\%) $\uparrow$} & \multicolumn{1}{c|}{95HD (mm) $\downarrow$} & \multicolumn{1}{c}{ASD (mm) $\downarrow$} \\ \hline Supervised-only & 100\% & 0\% & 96.51 \scriptsize{(5.02)} & 93.64 \scriptsize{(8.77)} & 3.17 \scriptsize{(3.09)} & 0.41 \scriptsize{(0.40)} \\\hline Supervised-only & 5\% & 0\% & 89.64 \scriptsize{(8.66)}$^{*}$ & 82.40 \scriptsize{(12.71)}$^{*}$ & 10.64 \scriptsize{(8.61)}$^{*}$ & 0.79 \scriptsize{(0.61)}$^{*}$ \\ MT \cite{cui2019semi} & 5\% & 95\% & 92.92 \scriptsize{(7.23)} & 87.78 \scriptsize{(10.96)}$^{*}$ & 6.13 \scriptsize{(7.18)}$^{*}$ & 0.64 \scriptsize{(0.48)} \\ UA-MT \cite{yu2019uncertainty} & 5\% & 95\% & 92.88 \scriptsize{(6.89)}$^{*}$ & 87.63 \scriptsize{(10.58)}$^{*}$ & 6.57 \scriptsize{(7.14)}$^{*}$ & 0.62 \scriptsize{(0.46)} \\ Entropy Mini \cite{vu2019EM} & 5\% & 95\% & 92.76 \scriptsize{(7.01)}$^{*}$ & 87.01 
\scriptsize{(10.65)}$^{*}$ & 6.45 \scriptsize{(6.99)}$^{*}$ & 0.69 \scriptsize{(0.48)}$^{*}$ \\ DAN \cite{zhang2017DAN} & 5\% & 95\% & 92.87 \scriptsize{(6.89)}$^{*}$ & 87.65 \scriptsize{(10.56)}$^{*}$ & 6.39 \scriptsize{(7.25)}$^{*}$ & 0.68 \scriptsize{(0.45)}$^{*}$ \\ ICT \cite{verma2019ICT} & 5\% & 95\% & 92.47 \scriptsize{(6.88)}$^{*}$ & 86.97 \scriptsize{(10.86)}$^{*}$ & 7.20 \scriptsize{(8.42)}$^{*}$ & 0.73 \scriptsize{(0.62)}$^{*}$ \\ DTML \cite{zhang2021DTML} & 5\% & 95\% & 92.54 \scriptsize{(9.50)}$^{*}$ & 87.27 \scriptsize{(13.07)}$^{*}$ & 6.90 \scriptsize{(8.53)}$^{*}$ & 0.67 \scriptsize{(0.61)}$^{*}$ \\ CPCL (ours) & 5\% & 95\% & \textbf{93.43} \scriptsize{(7.19)} & \textbf{88.67} \scriptsize{(10.51)} & \textbf{5.33} \scriptsize{(6.52)} & \textbf{0.59} \scriptsize{(0.56)} \\ \hline Supervised-only & 10\% & 0\% & 92.31 \scriptsize{(7.60)}$^{*}$ & 86.72 \scriptsize{(11.22)}$^{*}$ & 6.84 \scriptsize{(7.48)}$^{*}$ & 0.67 \scriptsize{(0.55)}$^{*}$ \\ MT \cite{cui2019semi} & 10\% & 90\% & 93.98 \scriptsize{(6.78)}$^{*}$ & 89.81 \scriptsize{(10.09)}$^{*}$ & 4.63 \scriptsize{(6.28)}$^{*}$ & 0.56 \scriptsize{(0.45)} \\ UA-MT \cite{yu2019uncertainty} & 10\% & 90\% & 94.12 \scriptsize{(6.89)} & 90.02 \scriptsize{(10.16)}$^{*}$ & 4.52 \scriptsize{(6.36)}$^{*}$ & 0.56 \scriptsize{(0.53)} \\ Entropy Mini \cite{vu2019EM} & 10\% & 90\% & 94.05 \scriptsize{(6.06)} & 90.36 \scriptsize{(9.28)} & 4.34 \scriptsize{(6.24)} & 0.55 \scriptsize{(0.43)} \\ DAN \cite{zhang2017DAN} & 10\% & 90\% & 93.94 \scriptsize{(6.86)}$^{*}$ & 89.65 \scriptsize{(10.27)}$^{*}$ & 4.79 \scriptsize{(6.31)}$^{*}$ & 0.59 \scriptsize{(0.50)}$^{*}$ \\ ICT \cite{verma2019ICT} & 10\% & 90\% & 94.02 \scriptsize{(4.98)} & 89.58 \scriptsize{(8.23)}$^{*}$ & 4.40 \scriptsize{(6.43)} & 0.61 \scriptsize{(0.57)} \\ DTML \cite{zhang2021DTML} & 10\% & 90\% & 94.04 \scriptsize{(9.67)}$^{*}$ & 89.88 \scriptsize{(12.62)}$^{*}$ & 4.86 \scriptsize{(7.28)}$^{*}$ & 0.57 \scriptsize{(0.55)} \\ CPCL (ours) & 10\% & 90\% & 
\textbf{94.59} \scriptsize{(5.96)} & \textbf{90.69} \scriptsize{(9.19)} & \textbf{4.15} \scriptsize{(5.94)} & \textbf{0.54} \scriptsize{(0.43)} \\ \Xhline{1pt} \end{tabular}} \end{table*} \subsection{Experiments on Kidney Segmentation} \label{sec:kidney} To further validate the proposed approach, we also conduct experiments on kidney segmentation from abdominal CT images. All the implementation settings are consistent with the above whole brain tumor segmentation task, except for the partition protocols. Table \ref{com_result_kits} presents the performance of our method and other semi-supervised methods under the 5\% and 10\% labeled data settings, respectively. This training-set configuration is based on our empirical observation that segmenting kidneys is much easier than whole brain tumor segmentation. As shown in Table \ref{com_result_kits}, with only 5\% labeled data, the supervised-only method yields an average Dice score of 89.64\%, which is difficult to achieve in the former whole brain tumor segmentation task. Although the performances of all the methods are very close due to the lower difficulty of the segmentation target, the proposed CPCL still achieves the overall best results in terms of the four metrics under the same partition protocol, which further demonstrates the superiority and robustness of our approach. In particular, under the smallest labeled data setting, i.e., 5\%, the proposed CPCL further improves performance, with most metrics showing a significant difference (under the two-sided paired t-test) compared with the other baselines. Fig. \ref{fig_results_kidney} presents two exemplary kidney segmentation results of the proposed CPCL and the supervised-only baseline under the 5\% labeled data setting. We can see that the proposed CPCL achieves visually better segmentation results compared to the supervised-only baseline.
\begin{figure}[t] \centerline{\includegraphics[width=\columnwidth]{fig_results_kidney.pdf}} \caption{Examples of the kidney segmentation results of the proposed CPCL and the supervised-only baseline under 5\% labeled data setting. Red color represents the segmented kidney.} \label{fig_results_kidney} \end{figure} \section{Discussions} To better understand the learning behavior and visually evaluate the quality of the prototypes in our proposed CPCL, we visualize the evolution of the prototype-based predictions under the 10\% (whole brain tumor segmentation) and 5\% (kidney segmentation) labeled data settings at different training stages, as shown in Fig. \ref{fig_evolution}. The corresponding model predictions $P_{u}$ and $P_{l}$ are also visualized at the top-left corner alongside the prototype-based predictions $P_{l2u}$ and $P_{u2l}$, respectively. At the early training stage, the model predictions tend to under-segment the objects, yet the prototype-based predictions often indicate more accurate target regions. Overall, as training goes on, both the model predictions and the prototypical predictions are gradually refined, indicating that the learned features become more discriminative and compact. Ideally, we hope the prototypical prediction at the late training stage can be perfectly consistent with the ground-truth segmentation. However, since the holistic prototype mainly relies on an average prior via the masked average pooling approach, the prototypical predictions tend to cover a relatively large and coarse area around the segmentation target. Interestingly, due to the ambiguous tumor boundary and variegated intensity within the tumor area, this observation is particularly evident in the whole brain tumor segmentation task.
Despite this imperfection, the network training can still benefit from the real-label-centric supervision mechanism and acquire effective information from those real-label-driven consistency targets, as experimentally demonstrated above. These findings also echo previous work \cite{ning2020macro} showing that target-related weak labels can offer high-level region proposals for segmentation. Intuitively, however, improving the quality of the prototypical prediction would impose stronger real label supervision. Besides, the ablation study also indicates that our backward process has a more direct interaction with the real labels. Thus, for future work, we may consider removing the forward process to construct a more elegant framework and pay more attention to improving the discriminability and compactness of the class prototypes. Since the spirit of few-shot segmentation (FSS) is in line with the proposed perspective on semi-supervised segmentation, i.e., learning transferable knowledge from a support set (labeled set) to a query set (unlabeled set), more collaboration between the two research fields will be interesting future work on top of our early attempt. For example, introducing the superpixel technique to form an adaptive prototype allocation mechanism \cite{li2021adaptive} or considering the feature differences between foreground and background \cite{nguyen2019FWB} can be effective ways to further improve the prototype quality and thus enhance the final performance. \begin{figure}[t] \centerline{\includegraphics[width=\columnwidth]{fig_evolution_2.pdf}} \caption{Exemplar evolution of the prototype-based predictions $P_{l2u}$ and $P_{u2l}$ (with model predictions $P_u$/$P_l$ shown on the top-left corner) under 10\% (whole brain tumor segmentation) and 5\% (kidney segmentation) labeled data settings during the training process.
Red color represents the segmented targets.} \label{fig_evolution} \end{figure} Besides the limitation caused by the holistic prototype, another challenge is that we assume both labeled and unlabeled samples are drawn from the same distribution. However, abundant image-only data are often collected from different devices and clinical centers, and the resulting domain shift causes substantial performance degradation in semi-supervised learning \cite{oliver2018realistic}. Such a limitation also applies to most current semi-supervised methods. In particular, the domain shift may corrupt the extracted features in our framework, which may disastrously mislead the prototype extraction and thereby interfere with the cyclic prototype consistency learning process. Therefore, investigating how to incorporate domain adaptation to deal with potential domain shift in our framework is also an interesting direction with considerable clinical value. Overall, this work reveals that our initial attempt, i.e., utilizing expert-examined real labels to explicitly supervise the network learning from both their paired labeled data and unpaired unlabeled data, is challenging yet feasible. Compared with the previous unsupervised consistency fashion, radiologists may have more confidence in a model trained with our framework, since we explicitly leverage the real labels acknowledged by the experts themselves throughout network training. We hope that this work can evolve into a new direction in semi-supervised segmentation and inspire future research devoted to building more reliable models and making more effective use of precious high-quality labeled data. \section{Conclusion} In this work, we studied the semi-supervised segmentation task from an unexplored perspective, i.e., exploiting the unlabeled data via explicit real label supervision.
To this end, we proposed a novel cyclic prototype consistency learning (CPCL) framework constructed by a labeled-to-unlabeled prototypical forward process and an unlabeled-to-labeled backward process. In this way, our framework turns existing \textit{``unsupervised''} consistency into new \textit{``supervised''} consistency, endowing our method with the \textit{``all-around real label supervision''} property. Extensive experiments on two public datasets demonstrated the superiority of our method over other state-of-the-art semi-supervised learning methods on both whole brain tumor segmentation from T2-FLAIR MRI and kidney segmentation from CT images. \bibliographystyle{IEEEtran}
\section{Introduction} Let $S$ be a set of points in the affine plane over the finite field of $p$ elements, where $p$ is a prime. A natural way of viewing the points of the projective line is to consider the equivalence classes of the nonzero vectors of $\mathbb{F}_p^2$. We write $u \sim v$ if $u$ is a nonzero multiple of $v$, and the equivalence class of $u$, which we call the \textit{direction} of $u$, is denoted by $d(u)$. We say that a pair of vectors $w_1 \ne w_2 \in \mathbb{F}_p^2$ determines the direction $d(u)$ if $u \sim w_1-w_2$. Finally, we denote by $D(S)$ the set of directions determined by pairs of points of $S$. By an elementary pigeonhole argument one can see that every direction is determined by any subset of $\mathbb{F}_p^2$ of cardinality at least $p+1$. It was proved by R\'edei that if $S$ is of cardinality $p$, then either $S$ is a line or $S$ determines at least $\frac{p+3}{2}$ directions, see \cite{redei}. The same result was independently proved by Dress, Klin and Muzychuk \cite{DKM}. As a corollary of their argument they obtained a new proof of Burnside's classical theorem on permutation groups of prime degree. This was generalised by Sz\H{o}nyi, who proved that if $|S|<p$ and $S$ is not contained in a line, then $|D(S)|\ge \frac{|S|+3}{2}$. Lov\'asz and Schrijver \cite{LS} showed that if $|D(S)|=\frac{|S|+3}{2}$ for a set $S$ with $|S|=p$, then $S$ is an affine transform of the graph of the function $f(x)=x^{\frac{p+1}{2}}$. In \cite{gacs} G\'acs proved that $|D(S)|$ cannot lie strictly between $\frac{p+5}{2}$ and $\frac{2(p-1)}{3}$ and showed that the upper bound obtained is one less than the smallest known example. Another way of thinking of directions of $p$-element subsets of $\mathbb{F}_p^2$ is the following. We say that $S$ is \textit{equidistributed} in a direction if $S$ intersects the lines having the corresponding fixed slope in the same number of points.
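These notions are easy to experiment with numerically. The brute-force sketch below (our illustrative aid, not part of the original text) computes $D(S)$, encoding a direction by its slope in $\{0,\ldots,p-1\}$ and the vertical direction by the string 'inf'; for $p=5$ the graph of $x^{(p+1)/2}$ indeed determines exactly $\frac{p+3}{2}=4$ directions, matching the Lov\'asz--Schrijver extremal example.

```python
def directions(S, p):
    """Set of directions determined by pairs of points of S in F_p^2.

    A direction is encoded by its slope in {0, ..., p-1},
    or by the string 'inf' for the vertical direction.
    """
    pts = list(S)
    dirs = set()
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            dx = (pts[i][0] - pts[j][0]) % p
            dy = (pts[i][1] - pts[j][1]) % p
            if dx == 0:
                dirs.add('inf')
            else:
                # slope dy/dx, inverting dx via Fermat's little theorem
                dirs.add(dy * pow(dx, p - 2, p) % p)
    return dirs

p = 5
parabola_like = {(x, pow(x, (p + 1) // 2, p)) for x in range(p)}  # graph of x^{(p+1)/2}
```

A line, by contrast, determines a single direction, which is the trivial extreme of R\'edei's dichotomy.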
Equidistributivity is one of the key tools in investigating spectral sets of finite abelian groups, see \cite{covenmeyerowitz, KMSV, laba}. One can also see that for a set with $|S|=p$, the set $S$ is equidistributed in the direction $m$ if and only if $m \not\in D(S)$. In general, equidistributivity of a set in a certain direction implies that $p \mid |S|$. This motivates us to study, in the remaining part of the paper, only sets whose cardinality is a multiple of $p$. The investigation of sets of cardinality larger than $p$ was initiated by Ghidelli \cite{ghidelli}. It was proved that a set of cardinality $kp$ ($1 \le k \le p,~ k \in \mathbb{Z}$) is either a union of parallel lines or is not equidistributed in at least $\lceil \frac{p+k+2}{k+1}\rceil$ directions. From now on we call a direction \textit{special} if $S$ is not equidistributed in that direction. Note that Ghidelli's definition of a special direction is more general than this one, and his results also handle sets whose cardinality is not divisible by $p$. For sets of cardinality divisible by $p$ the two definitions of special directions coincide. Ghidelli asked whether the sets which are not unions of parallel lines determine at least $\frac{p+3}{2}$ special directions. The main purpose of this paper is to construct an example answering Ghidelli's problem in the negative. We prove the following theorem. \begin{thm} Up to an affine transformation, there is a unique set $S$ of size $\frac{p(p-1)}{2}$ in $\mathbb{F}_p^2$ which is equidistributed in $p-2$ directions. Moreover, every set having exactly 3 special directions can be transformed by an affine transformation (an element of $AGL(2,p)$) to either $S$ or $S^c$, where $S^c$ is the complement of $S$ in $\mathbb{F}_p^2$. \end{thm} This shows that for $k=\frac{p-1}{2}$ the result of Ghidelli is tight. A natural question arises here.
Is it possible to construct sets of cardinality $kp$ which have exactly $\lceil \frac{p+k+2}{k+1}\rceil$ special directions? The paper is organised as follows. In Section \ref{sec2} we describe sets having at most $2$ special directions. Section \ref{sec3} is devoted to an analysis of Ghidelli's proof in order to understand possible ways of constructing examples for his problem. Then in Section \ref{sec4} we describe sets having 3 special directions, while in Section \ref{sec6} we present some examples of sets having 4 special directions. Section \ref{sec5} contains a reformulation of the problem. Finally, we raise some questions concerning the topic in Section \ref{sec7}. \section{Two special directions}\label{sec2} From now on, let $\mathbb{F}_p^2$ be identified with the set of pairs of integers $(a,b)$, where $a,b \in \{ 0,1, \ldots, p-1\}$. Let us assume that $S$ is a subset of $\mathbb{F}_p^2$ of cardinality $kp$ which is equidistributed in $p-1$ directions. It has been proved by Fallon, Mayeli and Villano \cite{Tom} that $S$ is then the union of $k$ parallel lines. This also means that a set having at most two special directions has at most one. The original proof is short but uses techniques from Fourier analysis. Here we present a combinatorial argument for the statement. Let us assume that $S$ is equidistributed in the direction of every line $l(x)=ax+b$ with $a \in \mathbb{F}_p^*$, $b \in \mathbb{F}_p$. We may assume that $(y,c) \in S$ and $(z,c) \not\in S$ for some $c \in \mathbb{F}_p$ and $y \ne z \in \mathbb{F}_p$; if there is no such pair, then $S$ is the union of $k$ lines. We may further assume $c=0$ and $z=0$ since $D(S)=D(S+t)$ for every $t \in \mathbb{F}_p^2$. Let $l_{\infty}^j= \{ (j,i) \mid i \in \mathbb{F}_p\}$ and $l_0=\{(x,0) \mid x \in \mathbb{F}_p\}$. Now we count the cardinality of $S$ in three different ways. First, $|S|=kp$. Second, we count the points of $S$ on the lines of nonzero slope through $(0,0) \not\in S$.
We obtain \begin{equation}\label{eq1} kp=k(p-1)+(a_0+b_0), \end{equation} where $a_0=|S \cap l_0|$ and $b_0=|S \cap l_{\infty}^0|$. On the other hand, we may count the number of elements contained in the lines of nonzero slope going through $(y,0) \in S$: \begin{equation}\label{eq2} kp=1+ (k-1)(p-1)+(a_0-1)+(b_1-1), \end{equation} where $b_1=|S \cap l_{\infty}^y|$. It follows from equation \eqref{eq1} that $a_0+b_0=k$; in particular, $a_0 \le k$. Equation \eqref{eq2} shows $k+p=a_0+b_1$. Plainly, $b_1 \le p$, and we have seen $a_0 \le k$, so $b_1 = p$, $a_0 = k$ and $b_0=0$. This shows that $l_{\infty}^y$ is contained in $S$. This holds for every $y \in \mathbb{F}_p$ with $(y,0) \in S$, and since $a_0=k$ there are $k$ such $y$. We have thus found all $kp$ elements of $S$, and hence $S$ is the union of $k$ parallel (vertical) lines. \section{Ghidelli's proof}\label{sec3} In this section we follow Ghidelli's proof to obtain some extra information about sets having few special directions. Let $S \subseteq \mathbb{F}_p^2$ be a nonempty set. We define the R\'edei polynomial associated with $S$ as $$H_S(x,y)=\prod_{(a,b)\in S} (x-ay+b).$$ One of the main properties of $H_S$ follows from the observation that \[ (x - ay + b) = (x - a'y + b') \Longleftrightarrow \frac{b-b'}{a - a'} = y.\] In other words, if $y=m$ is fixed, then $am - b=c$ for a given $c\in \mathbb{F}_p$ holds exactly for those points $(a,b)$ of $\mathbb{F}_p^2$ which are contained in a fixed line of slope $m$. Therefore if we pick a set of representatives $\{(a_i,b_i) \mid i=0,1,\ldots, p-1\}$ of the class of parallel lines of slope $m$ (one point from each line), then $\prod_{i=0}^{p-1} (x-a_im+b_i)=x^p-x$. It follows that if $S$ is equidistributed in the direction $m$, then $H_S(x,m)=(x^p-x)^n$, where $n=|S|/p$. We may write \[ H_S(x,y)=x^{np}+x^{np-1}g_1(y)+ x^{np-2}g_2(y)+ \ldots +g_{np}(y), \] where $g_i \in \mathbb{F}_p[y]$ ($i=1, \ldots, np$).
It is easy to see that $g_i$ is of degree at most $i$ and that, up to sign, it is equal to the $i$'th elementary symmetric polynomial\footnote{For each nonnegative integer $k$, the $k$'th elementary symmetric polynomial on $n$ variables is the sum of all distinct products of $k$ distinct variables. We denote it by $\sigma_k$.}: $$g_i(y)=(-1)^i\sigma_i(a_1y-b_1,a_2y-b_2, \ldots , a_{np}y-b_{np}),$$ where the variables are $a_jy-b_j$ ($j=1, \dots, np$). Now assume that $S$ is equidistributed in $k$ directions, where $p-1 \ge k>\frac{p+3}{2}$. Then $g_i(y)$ has at least $k$ roots for every $i\le p-2$, since the coefficient of $x^{a}$ in the polynomial $(x^p-x)^n$ is $0$ whenever $(n-1)p+1<a<np$. Hence $g_i\equiv 0$ if $i<k$, since then the number of its roots exceeds its degree. By Newton's identities $\sum_{i=1}^{np}(a_iy-b_i)^l=0$ if $l<k$, and hence the leading coefficients $\sum_{i=1}^{np}a_i^l$ of these polynomials also vanish. Let $w_j$ be the number of indices $i$ such that $a_i=j$. Then \[ \sum_{i=1}^{np}a_i^l =\sum_{j=0}^{p-1}w_j j^l. \] This shows that the vector $w=(w_j)_{j=0,1,\ldots, p-1}$ is orthogonal to $(j^l)_{j=0,1,\ldots, p-1}$ in $\mathbb{F}_p^p$ for $l=1,\ldots, k-1$. Further, $w \in \mathbb{F}_p^p$ is orthogonal to $(1)_{j=0,1,\ldots, p-1}$ since $|S|$ is divisible by $p$. Thus $w$ is orthogonal to the first $k$ rows of the Vandermonde matrix $M_{j,l}=(j^l)$ ($0 \le j,l \le p-1$). It is not hard to see that the $i$'th and $j$'th rows of $M$ are orthogonal in $\mathbb{F}_p^p$ except if $i+j=p-1$. Thus we obtain that the orthogonal complement of $\langle 1,x,x^2, \ldots, x^{k-1} \rangle$ is $\langle 1,x,x^2, \ldots, x^{p-1-k} \rangle$. Assume now that we have a set $S$ which is equidistributed in $p-2$ directions but is not the union of lines. In this case $w$ lies in $\langle 1,x \rangle$, i.e., $w_j$ is either constant or linear as a function of $j$; we may write it as $w_j=\alpha j + \beta$. The $\alpha=0$ case is realised when $S$ is the union of parallel lines. In the case $w_j=\alpha j +\beta$ with $\alpha \ne 0$, the values $w_j$ run through all residues modulo $p$, since nonconstant linear polynomials are permutation polynomials; hence $|S| = 0+1+\ldots + (p-1)=\frac{p(p-1)}{2}$ or $|S| = 1+2+\ldots + (p-1)+p=\frac{p(p+1)}{2}$. \section{Sets with 3 special directions}\label{sec4} Using the results of the previous section we present a natural construction that fulfils the required conditions and answers Ghidelli's question in the negative. What is more, we prove that the sets described in this section (together with their images under $AGL(2,p)$) are the only ones having exactly 3 special directions. We emphasise that we think of the coordinates as elements of $\mathbb{Z}$, which gives us the opportunity to compare them; however, the additive and multiplicative operations are understood modulo $p$. According to the observations in the previous section it seems reasonable to investigate the properties of the following set: \[ S=\{(a,b) \in \mathbb{F}_p^2 \mid b<a \}. \] $S$ has $\frac{p(p-1)}{2}$ elements. (See also Figure \ref{fig:triangle}.)
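The defining property of $S$ can also be checked by brute force. The sketch below (an illustrative aid, not part of the paper, with the vertical direction encoded as the string 'inf') confirms for $p=7$ that $S$ fails to be equidistributed in exactly three directions: the horizontal, the vertical, and the slope-$1$ direction.

```python
def special_directions(S, p):
    """Directions in which S is NOT equidistributed.

    The direction of slope m is special when the p parallel lines
    y = m*x + c (c = 0, ..., p-1) do not all meet S in the same
    number of points; 'inf' stands for the vertical direction.
    """
    special = set()
    for m in range(p):
        counts = {c: 0 for c in range(p)}
        for (a, b) in S:
            counts[(b - m * a) % p] += 1          # (a, b) lies on y = m*x + c
        if len(set(counts.values())) > 1:
            special.add(m)
    vert = {c: 0 for c in range(p)}
    for (a, b) in S:
        vert[a] += 1                               # (a, b) lies on x = a
    if len(set(vert.values())) > 1:
        special.add('inf')
    return special

p = 7
S = {(a, b) for a in range(p) for b in range(p) if b < a}
```

For every other slope $m$, each of the $p$ parallel lines meets $S$ in exactly $\frac{p-1}{2}$ points, in accordance with the proof given below.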
\begin{figure} \begin{center} \begin{tikzpicture} \matrix[matrix of nodes,nodes={draw=gray, anchor=center, minimum size=.5cm}, column sep=-\pgflinewidth, row sep=-\pgflinewidth] (A) at (-1,0) {
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 \\
}; \end{tikzpicture} \end{center} \caption{\centering The set $S$, which has 3 special directions.} \label{fig:triangle} \end{figure} Clearly, $S$ is not equidistributed in at least $3$ directions, since the lines with equations $x=0$, $y=p-1$ and $x-y=1$ (that is, $ax+by=c$ with $(a,b,c)$ equal to $(1,0,0)$, $(0,1,p-1)$ and $(1,-1,1)$, respectively) intersect $S$ in $0$, $0$ and $p-1$ points, respectively, rather than in $\frac{p-1}{2}$ points. Let $L$ be a line containing the origin. We show that if the equation determining $L$ is $f_L(x)=ax$ with $ 2 \le a \le p-1$, then $|L \cap S|=\frac{p-1}{2}$. Clearly, $(0,0) \not\in S$, and if $i<ai$ for some $i \in \{1, \ldots, p-1 \}$, then $-i>a(-i)$ since $a \ne 0$, so exactly one of the two points $(i,ai)$ and $(-i,a(-i))$ belongs to $S$. Note also that $i \ne ai$ since $i \ne 0$ and $a \ne 1$. It remains to verify that if $|S\cap (L+i)|=\frac{p-1}{2}$, then $|S\cap (L+i+1)|=\frac{p-1}{2}$, where $L+i$ denotes the vertical translate of $L$ by $(0,i)$. We first show that there is exactly one $j \in \{ 0, \ldots, p-1\}$ such that $(j,f_L(j)+i) \not\in S$ but $(j,f_L(j)+i+1) \in S$. This happens precisely when $f_L(j)+i=aj+i=p-1$ and $j \ne 0$, so that the second coordinate wraps around to $0<j$. Since $a\ne 0$, if $i = p-1$ we would get $j =0$, which is excluded. If $i \ne p-1$, then there is a unique $j=\frac{p-1-i}{a}$ fulfilling the equation. Thus, there is a unique column in which the intersection of $S$ with the line $L+i$ gains a point when we replace $L+i$ by $L+i+1$.
On the other hand, if $f_L(j)+i=j-1$ ($j \ne 0 $), then $(j,f_L(j)+i)$, which is an element of $L+i$, is in $S$, but $(j,f_L(j)+i+1) \not\in S$. The solution of the equation $f_L(j)+i=aj+i=j-1$ is $j=-\frac{i+1}{a-1}$; note that $a \ne 1$, so such a $j$ exists. The case $j=0$ ($i=p-1$) can be handled similarly. \begin{thm} Let $T$ be a subset of $\mathbb{F}_p^2$ which is equidistributed in $p-2$ directions. Then $T=\alpha(S)$ if $|T|=\frac{p(p-1)}{2}$, and $T=\alpha'(S^c)$ if $|T|=\frac{p(p+1)}{2}$, for some $\alpha,\alpha' \in AGL(2,p)$, where $S^c$ is the complement of $S$ in $\mathbb{F}_p^2$. \end{thm} \begin{proof} We have seen that $|T|=\frac{p(p-1)}{2} \mbox{ or } \frac{p(p+1)}{2}$. As the complement of a set of size $\frac{p(p-1)}{2}$ is of size $\frac{p(p+1)}{2}$ and vice versa, it is enough to prove the statement for the case $|T|=\frac{p(p-1)}{2}$. Since $PGL(2,p)$ acts triply transitively on the points of the projective line, we may assume that the three special directions are $(1,0),(0,1),(1,1)$. Moreover, it follows from the argument in Section \ref{sec3} that the multiset of intersection sizes of $T$ with the horizontal lines is $\{0,1, \ldots, p-1 \}$ (since $|T|=\frac{p(p-1)}{2}$), and the same holds for the vertical lines. Moreover, using suitable affine transformations along the axes we may assume that these intersection sizes occur in the order $(0,1, \ldots, p-1 )$ along the vertical lines and $( p-1,p-2, \ldots, 1,0 )$ along the horizontal lines, respectively. This shows that the first column does not contain any element of $T$, and since the first row contains $p-1$ elements, we have $\{(i,0 ) \mid 0<i\le p-1\} \subseteq T$. Using the same argument recursively one can prove that $T=S$. \end{proof} \section{Weighted sum of lines}\label{sec5} The number of special directions of a set $S$ coincides with the number of directions for which the Fourier transform of the characteristic function of $S$ does not vanish at some nontrivial character that is constant along the lines of that direction. This allows us to give a construction of sets with a given number of special directions.
These sets are obtained as linear combinations of characteristic functions of lines (in the special directions of the set) with rational coefficients. It is easy to see that if $S$ is a weighted sum of lines in $k$ directions, then $S$ is equidistributed in every direction not represented by a line appearing in the sum. A further aim is to present an alternative proof of the results of Section \ref{sec2} and Section \ref{sec4} using the following proposition. We say that a function $f \colon \mathbb{F}_p^2 \to \C $ is equidistributed in a direction $d$ if the sum of the values of $f$ along the lines parallel to $d$ is constant. \begin{prp}\label{prp frenkl} Let $f \colon \mathbb{F}_p^2 \to \Q $ be a function. Assume $f$ is equidistributed in all but the directions $d_1, \ldots , d_k$ ($k \ge 1$). Then $f$ can be written as a weighted sum of lines with rational weights: $$f=\sum_{j=1}^k\sum_{i=0}^{p-1} c_{j,i} 1_{l_{j,i}}, $$ where $c_{j,i} \in \Q$, the $l_{j,i}$ are the lines in direction $d_j$, and for every $j\in \{1, \dots, k\}$ there is an $i\in\{0, \dots, p-1\}$ with $c_{j,i}\ne 0$. \end{prp} \begin{proof} We proceed by induction on $k$. Let $w_1$ be the function defined on the $\langle d_1 \rangle$-cosets that assigns to each coset the sum of the values of $f$ on it. Let $g_1$ be the function on $\mathbb{F}_p^2$ defined by $g_1(x)=\frac{w_1(C)}{p}$, where $C$ is the $\langle d_1 \rangle$-coset containing $x$. Since $f$ is not equidistributed in direction $d_1$, the function $g_1$ is not constant. Clearly, $f_1:=f-g_1$ is equidistributed in all but the directions $d_2,\ldots,d_k$. If $k=1$, then $f_1$ is equidistributed in every direction and the sum along every line is zero. We claim that then $f_1$ is zero. This can be seen from the fact that the Fourier transform of $f_1$ vanishes on every character.
Hence $f=g_1$, so it is of the form $\sum_{i=0}^{p-1} c_{1,i} 1_{l_{1,i}}$, where the $l_{1,i}$ are the lines in direction $d_1$ and $c_{1,i'}\ne 0$ for some $i'\in \{0,\dots, p-1\}$, since $g_1$ is not constant. For $k\ne 1$ we get the statement for $f=f_1+g_1$ by applying the inductive hypothesis to $f_1$. \end{proof} \begin{itemize} \item In particular, we obtain the following explicit formula for the set discussed in Section \ref{sec4}. Let $\frac{c}{p}$ be the weight of the line defined by the equation $x=c$, let $\frac{-c}{p}$ be the weight of the line $y=c$, and let $\frac{c}{p}$ be the weight of the line $y=x+c$. Then at every point $(a,b)$ with $b<a$ the sum of the weights of the lines through it is 1, and it is zero everywhere else. \item As a corollary of Proposition \ref{prp frenkl} one can see that there is no subset of $\mathbb{F}_p^2$ which is not equidistributed in exactly $2$ directions (see also Section \ref{sec2}). Similarly, one can show the following Lam--Leung type result \cite{LL2000}, which is formulated for $\mathbb{F}_p \times \mathbb{F}_q$, where $p$ and $q$ are different primes. Let $S$ be a multiset, i.e., each value of the characteristic function of $S$ is a nonnegative integer, and suppose that $S$ has at most two special directions. Then $S$ is a sum of weighted lines with nonnegative integer coefficients. The proofs of these results are analogous to the proof of Proposition 3.8 in \cite{KMSV}. \end{itemize} \section{Examples for four special directions}\label{sec6} In this section we try to find sets of smallest possible cardinality having exactly $4$ special directions. For small primes $p\le 11$ we construct such sets of minimal cardinality according to Ghidelli's lower bound \cite{ghidelli}. For a matrix $M$ let $M^{(k)}$ denote the matrix defined by $M^{(k)}(i,j)=M(i,j-k)$. Let $\underline{1}$ denote the all $1$ row vector and let $e_i$ denote the vector which is $1$ at its $i$'th coordinate and zero everywhere else.
Let $L_j(v):=\sum _{i=0}^{p-2} \frac{p-i-1}{p} v^{(ij)}$. \begin{lem}\label{lem6.1} \begin{enumerate} \item Let $p$ be a prime and let $v_j \in \mathbb{R}^{1 \times \{0,1, \ldots, p-1 \}}$ be a row vector whose first coordinate is $1$, whose $(j+1)$'th coordinate is $-1$, and which is zero elsewhere. Then \[ \frac{1}{p}\underline{1}+L_j(v_j):=\frac{1}{p}\underline{1} +\sum _{i=0}^{p-2} \frac{p-i-1}{p} v_j^{(ij)}=e_1. \] \item\label{refitem2} Let $k \in \mathbb{F}_p \setminus \{0\}$ with $k l \equiv j \pmod{p}$. Then \[ \frac{l}{p}\underline{1} +L_k(v_j):=\frac{l}{p}\underline{1} +\sum _{i=0}^{p-2} \frac{p-i-1}{p} v_j^{(ik)}=\sum_{a=0}^{l-1}e_1^{(ak)}. \] \end{enumerate} \end{lem} \begin{proof} \begin{enumerate} \item An easy calculation gives the result. \item It is easy to see that $v_j=\sum_{a=0}^{l-1} v_k^{(ak)}$. The result follows from the fact that $L_k$ is additive and $L_k(v^{(m)})=L_k(v)^{(m)}$. \end{enumerate} \end{proof} We use this lemma in the following way. We construct $\{\pm 1,0\}$-valued matrices whose row sums are all $0$, and apply the lemma simultaneously to the rows of these matrices. We fix a $k \in \mathbb{F}_p \setminus \{0\}$ and apply Lemma \ref{lem6.1} \eqref{refitem2}. Lemma \ref{lem6.1} treats $\{\pm 1,0\}$-valued rows which contain exactly one $1$ and one $-1$, but since $L_k$ is a linear operator, we may also apply it to rows that are sums of such vectors. Indeed, if we write a $\{\pm 1,0\}$-valued row vector $v$ whose row sum is zero as a sum of $\{\pm 1,0\}$-valued row vectors ($v=\sum_{a=1}^c u_a$), each with one $1$ and one $-1$ entry, then $L_k(v)= L_k(\sum_{a=1}^c u_a)=\sum_{a=1}^c L_k(u_a)$. For each $1 \le a \le c$ there are $i_a$ and $k_a$ such that $u_a=v_{i_a}^{(k_a)}$, where $1 \le i_a \le p-1$ and $0 \le k_a \le p-1$. Finally, let $l_a k \equiv i_a \pmod{p}$. Then it follows from the previous discussion and Lemma \ref{lem6.1} that $\frac{l_a}{p}\underline{1}+L_k(u_a)$ is a nonnegative integer valued vector whose coordinate sum is $l_a$, and hence $\frac{\sum_{a=1}^c l_a}{p}\underline{1}+L_k(v)$ is a nonnegative integer valued vector with coordinate sum $\sum_{a=1}^c l_a$.
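Lemma \ref{lem6.1}(1) can be confirmed numerically. The sketch below is our own illustration using 0-based coordinates, so the vector $e_1$ of the lemma becomes the indicator of coordinate $0$ and $v_j$ carries its $-1$ at coordinate $j$.

```python
import numpy as np

def shift(v, k):
    """Cyclic shift v -> v^{(k)}, i.e. w(i) = v(i - k)."""
    return np.roll(v, k)

def L_op(j, v, p):
    """The operator L_j(v) = sum_{i=0}^{p-2} ((p-i-1)/p) * v^{(i*j)}."""
    return sum(((p - i - 1) / p) * shift(v, (i * j) % p) for i in range(p - 1))

def check_lemma_part1(p, j):
    """Verify (1/p)*1 + L_j(v_j) == e_1 (indicator of coordinate 0 here)."""
    v_j = np.zeros(p)
    v_j[0], v_j[j] = 1.0, -1.0                 # v_j = e_0 - e_j in 0-based terms
    lhs = np.full(p, 1.0 / p) + L_op(j, v_j, p)
    e_first = np.zeros(p)
    e_first[0] = 1.0
    return np.allclose(lhs, e_first)
```

The telescoping behind the lemma is visible here: the shifted copies of $v_j$ cancel pairwise, leaving weight $\frac{p-1}{p}$ at coordinate $0$ and $-\frac{1}{p}$ everywhere else, which the constant $\frac{1}{p}\underline{1}$ then lifts to the indicator vector.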
The 4 directions used in the remainder of the section are $(1,0)$, $(0,1)$, $(1,1)$, $(1,-1)$. We build up sets, which are not equidistributed in these directions only. It is clear that the sets presented in Figure \ref{Fig:triangles} can be constructed using weighted sum of lines in these directions. \begin{figure} \begin{center} \begin{tikzpicture} \matrix[matrix of nodes,nodes={draw=gray, anchor=center, minimum size=.5cm}, column sep=-\pgflinewidth, row sep=-\pgflinewidth] (A) at (-1,0) { 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 & 0 & 0 &0 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 & 0 & 0 &1 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 & 0 & 1 &1 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 & 1 & 1 &1 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 & 1 & 1 & 1 &1 \\ 0 & 0 & 0 & 0 & 0 &0 & 1 & 1 & 1 & 1 &1 \\ 0 & 0 & 0 & 0 & 0 &1 & 1 & 1 & 1 & 1 &1 \\ 0 & 0 & 0 & 0 & 1 &1 & 1 & 1 & 1 & 1 &1 \\ 0 & 0 & 0 & 1 & 1 &1 & 1 & 1 & 1 & 1 &1 \\ 0 & 0 & 1 & 1 & 1 &1 & 1 & 1 & 1 & 1 &1 \\ 0 & 1 & 1 & 1 & 1 &1 & 1 & 1 & 1 & 1 &1 \\ }; \matrix[matrix of nodes,nodes={draw=gray, anchor=center, minimum size=.5cm}, column sep=-\pgflinewidth, row sep=-\pgflinewidth] (B) at (6,0) {0 & 0 & 0 & 0 & 0 &0 & 0 & 0 & 0 & 0 &0 \\ 0 & 1 & 0 & 0 & 0 &0 & 0 & 0 & 0 & 0 &0 \\ 0 & 1 & 1 & 0 & 0 &0 & 0 & 0 & 0 & 0 &0 \\ 0 & 1 & 1 & 1 & 0 &0 & 0 & 0 & 0 & 0 &0 \\ 0 & 1 & 1 & 1 & 1 &0 & 0 & 0 & 0 & 0 &0 \\ 0 & 1 & 1 & 1 & 1 &1 & 0 & 0 & 0 & 0 &0 \\ 0 & 1 & 1 & 1 & 1 &1 & 1 & 0 & 0 & 0 &0 \\ 0 & 1 & 1 & 1 & 1 &1 & 1 & 1 & 0 & 0 &0 \\ 0 & 1 & 1 & 1 & 1 &1 & 1 & 1 & 1 & 0 &0 \\ 0 & 1 & 1 & 1 & 1 &1 & 1 & 1 & 1 & 1 &0 \\ 0 & 1 & 1 & 1 & 1 &1 & 1 & 1 & 1 & 1 &1 \\ }; \end{tikzpicture} \end{center} \caption{ \centering Two triangular sets with non-equidistributed directions (0,1), (1,0), (1,1) and (0,1), (1,0), (1,-1), respectively. } \label{Fig:triangles} \end{figure} Now the difference of these two sets is also a weighted sum of suitable lines. This is presented in Figure \ref{fig:M11} and we denote it by $M_{11}$. 
\begin{figure} \begin{center} \begin{tikzpicture} \matrix[matrix of nodes,nodes={draw=gray, anchor=center, minimum size=.5cm}, column sep=-\pgflinewidth, row sep=-\pgflinewidth] (A) { 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 & 0 & 0 &0 \\ 0 & -1 & 0 & 0 & 0 &0 & 0 & 0 & 0 & 0 &1 \\ 0 & -1 & -1 & 0 & 0 &0 & 0 & 0 & 0 & 1 &1 \\ 0 & -1 & -1 & -1 & 0 &0 & 0 & 0 & 1 & 1 &1 \\ 0 & -1 & -1 & 1 & -1 &0 & 0 & 1 & 1 & 1 &1 \\ 0 & -1 & -1 & -1 & 1 &-1 & 1 & 1 & 1 & 1 &1 \\ 0 & -1 & -1 & -1 & -1 &0 & 0 & 1 & 1 & 1 &1 \\ 0 & -1 & -1 & -1 & 0 &0 & 0 & 0 & 1& 1 &1 \\ 0 & -1 & -1 & 0 & 0 &0 & 0 & 0 & 0 & 1 &1 \\ 0 & -1 & 0 & 0 & 0 &0 & 0 & 0 & 0 & 0 &1 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 & 0 & 0 &0 \\ }; \end{tikzpicture} \end{center} \caption{$M_{11}$.} \label{fig:M11} \end{figure} Now let $N_{11}=M_{11}+M_{11}^{(5)}+C_{11}$, where $C_{11}$ is the matrix whose entries are all 0 except in the second and seventh columns which are constant $1$ and the fifth and last columns, which are constant -1. An easy calculation shows that $N_{11}$ is the following matrix. Now we apply the operator $L_{-2}$ simultaneously for the rows of $N_{11}$. Note that this can be realised as the sum of lines of the chosen directions. One essential thing is that for those rows which contain more than one $1$'s (and $-1$'s) we have to find a pairing of these elements, which is indicated with colours. We obtain the following $\{0,1\}$ matrix, which corresponds to the set we were looking for. 
\begin{figure} \begin{center} \begin{tikzpicture} \matrix[matrix of nodes,nodes={draw=gray, anchor=center, minimum size=.5cm}, column sep=-\pgflinewidth, row sep=-\pgflinewidth] (A) { 0 & \textcolor{red}{1} & 0 & 0 & \textcolor{blue}{-1} & 0 & \textcolor{blue}{1} & 0 & 0 & 0 &\textcolor{red}{-1} \\ 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 & 0 & 0 &0 \\ 0 & 0 & \textcolor{red}{-1} & \textcolor{red}{1} & 0 &0 & 0 & \textcolor{blue}{-1} & 0 & \textcolor{blue}{1} &0 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 & -1 & 0& 1 &0 \\ 0 & 1 & 0 & 0 & -1 &0 & 0 & 0 & 0 & 0 &0 \\ \textcolor{green}{1} & \textcolor{red}{1} & 0 & 0 & \textcolor{blue}{-1} & \textcolor{green}{-1} & \textcolor{blue}{1} & 0 & 0 & 0 &\textcolor{red}{-1} \\ 0 & 1 & 0 & 0 & -1 &0 & 0 & 0 & 0 & 0 &0 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 & -1 & 0& 1 &0 \\ 0 & 0 & \textcolor{red}{-1} & \textcolor{red}{1} & 0 &0 & 0 & \textcolor{blue}{-1} & 0 & \textcolor{blue}{1} &0 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 & 0 & 0 &0 \\ 0 & \textcolor{red}{1} & 0 & 0 & \textcolor{blue}{-1} & 0 & \textcolor{blue}{1} & 0 & 0 & 0 &\textcolor{red}{-1} \\ }; \matrix[matrix of nodes,nodes={draw=gray, anchor=center, minimum size=.5cm}, column sep=-\pgflinewidth, row sep=-\pgflinewidth] (B) at (7,0){ 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 &0 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 & 0 & 0 &0 \\ 0 & 1 & 0 & 1 & 1 &0 & 1 & 0 & 1 & 1 &1 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 & 0& 1 &0 \\ 0 & 1 & 0 & 0 & 0 &0 & 1 & 0 & 1 & 0 &1 \\ 1 & 1 & 0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 &0 \\ 0 & 1 & 0 & 0 & 0 &0 & 1 & 0 & 1 & 0 &1 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 & 0& 1 &0 \\ 0 & 1 & 0 & 1 & 1 &0 & 1 & 0 & 1 & 1 &1 \\ 0 & 0 & 0 & 0 & 0 &0 & 0 & 0 & 0 & 0 &0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 &0 \\ }; \end{tikzpicture} \end{center} \caption{\centering The coloured pairs of 1's and -1's in $N_{11}$ on the left, and the corresponding set given by the process on the right.} \end{figure} A similar algorithm gives us the following sets for $p=5,7$ and $13$. 
\begin{figure} \begin{center} \begin{tikzpicture} \matrix[matrix of nodes,nodes={draw=gray, anchor=center, minimum size=.6cm}, column sep=-\pgflinewidth, row sep=-\pgflinewidth] (A) {
0 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 1 & 0 & 0 & 1 & 0 \\
0 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 1 & 1 & 0 & 0 & 0 \\
}; \matrix[matrix of nodes,nodes={draw=gray, anchor=center, minimum size=.6cm}, column sep=-\pgflinewidth, row sep=-\pgflinewidth] (B) at (5,0) {
0 & 0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 1 & 1 & 1 & 0 \\
0 & 0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 \\
}; \end{tikzpicture} \end{center} \caption{\centering Examples for sets of smallest cardinality that have 4 special directions for $p=7$ (on the left) and $p=5$ (on the right).} \end{figure} Note that if we fix the prime $p$ and the number of special directions, then Ghidelli's result \cite{ghidelli} gives a lower bound on the cardinality of subsets of $\mathbb{F}_p^2$ having exactly that many special directions. The previous examples for $p=5,7,11$ meet this lower bound. However, this is not the case for the next example, $p=13$, which nevertheless seems to be optimal for this method. The method originally gives us a multiset in which the sum of the weights is 65; this can easily be modified by subtracting $\underline{1}$ from those rows which contain an entry equal to $2$ and adding $\underline{1}^t$ to those columns which are currently empty. This does not modify the sum of the values but turns the matrix below into a $\{0,1 \}$-matrix.
\begin{figure} \begin{center} \begin{tikzpicture} \matrix[matrix of nodes,nodes={draw=gray, anchor=center, minimum size=.5cm}, column sep=-\pgflinewidth, row sep=-\pgflinewidth] (A) {
1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 1 & 1 & 0 & 0 & 1 \\
1 & 1 & 0 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
1 & 1 & 0 & 1 & 1 & 2 & 0 & 1 & 1 & 1 & 1 & 1 & 1 \\
1 & 1 & 0 & 1 & 1 & 2 & 0 & 1 & 1 & 1 & 1 & 1 & 1 \\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\
}; \end{tikzpicture} \end{center} \caption{\centering Example of a 65-element multiset with 4 special directions in $\mathbb{F}_{13}^2$.} \end{figure} The question remains whether there exists a $4\cdot 13=52$-element subset of $\mathbb{F}_{13}^2$ determining exactly $4$ special directions. \section{Open problems}\label{sec7} \begin{enumerate} \item Is there a set in $\mathbb{F}_p^2$ that is equidistributed in exactly $d$ directions for every $d \le p-2$? We have seen that this is not the case for $d=p-1$, since sets which are equidistributed in $p-1$ directions are unions of parallel lines, and these are equidistributed in at least $p$ directions. It follows from the result of R\'edei \cite{redei} that there is a gap in the possible number of special directions for subsets of $\mathbb{F}_p^2$ of cardinality $p$. Further, the result of G\'acs \cite{gacs} shows that this is not the unique gap, since sets of cardinality $p$ having more than $\frac{p+3}{2}$ special directions determine at least $\lfloor 2 \frac{p-1}{3}+1\rfloor$ special directions. However, it is not hard to see that for $p=3,5,7$ there is no such gap if the cardinality of the set is divisible by $p$.
\item What is the minimal size of a set in $\mathbb{F}_p^2$ having at most $k$ special directions? In particular, is Ghidelli's bound \cite{ghidelli} tight, i.e., is it possible to construct sets of cardinality $kp$ which have exactly $\lceil \frac{p+k+2}{k+1}\rceil$ special directions? Even for $p=13$ this question is still open. \end{enumerate} \section*{Acknowledgement} G. Kiss was supported by the Premium Postdoctoral Fellowship of the Hungarian Academy of Sciences and by the Hungarian National Research, Development and Innovation Office -- NKFIH (grant no. K124749). G. Somlai was supported by the J\'anos Bolyai Research Grant and by the New National Excellence Program \'UNKP-20-5-ELTE-231. The research was supported by the Hungarian National Research, Development and Innovation Office, OTKA grant no. SNN 132625. The ``Application Domain Specific Highly Reliable IT Solutions'' project has been implemented with support provided by the National Research, Development and Innovation Fund of Hungary, financed under the Thematic Excellence Programme no. 2020-4.1.1.-TKP2020 (National Challenges Subprogramme) funding scheme. \newpage
\section{Introduction} Click-Through Rate (CTR) prediction is indispensable in recommender systems, as it ranks the candidate items based on the user interests. In the past few years, several deep-learning-based methods, \textit{e.g.,} Wide\&Deep~\cite{cheng2016wide}, DeepFM~\cite{guo2017deepfm} and DIN~\cite{zhou2018deep}, have achieved impressive performance on this task. Nevertheless, these CTR models still suffer from the cold-start problem, which violates the assumption that sufficient training samples are available. Under the restricted user interactions in cold-start scenarios, the model performance is dramatically limited~\cite{he2014practical}. This motivates a range of subsequent works~\cite{volkovs2017dropoutnet, lee2019melu, lu2020meta, zhu2021learning} to consider the cold-start recommendation. One line of methods leverages implicit regularization to prevent the CTR models from over-fitting~\cite{chen2019lambdaopt}. Popular techniques, \textit{e.g.,} Dropout~\cite{srivastava2014dropout} and Early-stop~\cite{raskutti2014early}, are considered during training. For example, DropoutNet~\cite{volkovs2017dropoutnet} randomly disturbs the embeddings of users or items to robustify the optimization procedure. Some other works explore using parameter or embedding initialization to regularize the training of the CTR models~\cite{hospedales2021meta}. For example, MeLU~\cite{lee2019melu} learns to initialize the whole parameters of the models using MAML~\cite{finn2017model}. MAMO~\cite{dong2020mamo} extends MeLU with two groups of memories to enhance the personalized initialization. MetaEmb~\cite{pan2019warm}, MWUF~\cite{zhu2021learning} and GME~\cite{ouyang2021learning} explore using side information, \textit{e.g.,} the item attributes and user neighbors, to initialize the user and item id embeddings. For the recommendation model itself, the supervision signal for each sample is unchanged during training.
Another line of works explicitly constructs auxiliary tasks to help the training of the CTR models. For example, DeepMCP~\cite{ouyang2019representation} uses additional tasks that model the user-item and item-item relationships to improve the user and item embeddings. DIEN~\cite{zhou2019deep} and DMR~\cite{lyu2020deep} encourage the historical state representation to be close to the next clicked item, to better capture the evolution of user interests. SSL4Rec~\cite{yao2020self} explores using an unsupervised learning task, SimCLR~\cite{chen2020simple}, to enhance the generalization performance of the representations in the models. Similarly, CLCRec~\cite{wei2021contrastive} implements this goal by maximizing the mutual dependencies between item content and the collaborative signals. Empirically, these methods incorporate prior knowledge on the feature representation to help the training of the CTR models, and have achieved state-of-the-art performance. This work follows the second line and explores leveraging the structure of the representation space to automatically constrain the similarity among representations from different users. Specifically, we design an Auto-Quantized Contrastive Learning (AQCL) loss to regularize the training of the CTR models. Unlike the traditional contrastive learning approaches~\cite{chen2020simple, he2020momentum, chen2020big, chen2021empirical} and some attempts on recommendation tasks~\cite{zhou2020s3, xie2020contrastive, yao2020self,wei2021contrastive} that focus only on instance-level discrimination, AQCL encourages both the instance-instance similarity and the instance-cluster similarity to automatically contribute to the modeling of the user interests. Figure~\ref{fig:framework} illustrates the framework and the intuition of AQCL. We conduct a range of experiments on three sparse datasets, and the results show that AQCL consistently improves the CTR performance under cold-start scenarios.
In total, our contributions can be summarized as follows: \begin{itemize}[leftmargin=10pt] \item We introduce an auxiliary AQCL loss that automatically leverages the instance-instance similarity and the instance-cluster similarity to regularize the representations in the CTR models under cold-start scenarios. \item The interest clusters used in AQCL are learned together with the loss of the primary task in an end-to-end manner. Simultaneously, an $\alpha$-adaptation strategy is searched to automatically control the geometric balance between the representations of active and non-active users. \item Extensive experiments on three datasets demonstrate the effectiveness of our proposed method. Besides, AQCL is compatible with common DNN-based CTR models like W$\&$D, DeepFM and DIN, and improves the ranking performance on both non-active and active users. \end{itemize} \begin{figure*}[t] \centering \includegraphics[width=0.5\linewidth]{fig/framwork_left.pdf} \hspace{0.01\linewidth} \includegraphics[width=0.4\linewidth]{fig/framework_right.pdf} \caption{\textbf{Left}: Framework of Auto-quantized Contrastive Learning (AQCL) for the CTR model. For each input $x$, the embedding layer and feature interaction layer convert $x$ into the latent code $h$. The primary prediction task is guided by the Logloss, using the output $\hat{y}$ and the ground-truth label $y$. Besides, a simple MLP projector $g(\cdot)$ outputs the representation $z = g(h)$. An augmented input is fed similarly to get $z'$. The proposed AQCL loss uses the representations $z,\,z'$ and the interest clusters $Q$. During training, both the Logloss and the AQCL loss are applied. \textbf{Right}: Motivation of AQCL.
It achieves: (1) instance-instance similarity, where the representation $z$ should be close to $z^+$ from an augmented input and far from $z^-$ from negative sampling; (2) interest-cluster support, where $z$ should be close to the positive interest cluster and far from the negative interest clusters among $Q$; and (3) automatic balance of the instance-instance and instance-cluster similarities for non-active/active users via $\alpha$-adaptation.} \label{fig:framework} \end{figure*} \section{Related Works} \subsection{CTR in Cold-Start Recommendations} Recommender systems have been well studied in the past decades~\cite{deshpande2004item, he2014practical, rendle2010factorization, cheng2016wide, linden2003amazon, barkan2016item2vec, covington2016deep, yi2019sampling, zhang2021cause, yao2021device, kang2018self, tan2021sparse}, while the cold-start problem remains a long-standing challenge in recommendation tasks. As mentioned before, many previous works can be considered as implicit or explicit regularization on the model. The former tries to interfere with the optimization without extra tasks. For example, DropoutNet~\cite{volkovs2017dropoutnet} applies data augmentation to the input to encourage robust user and item representations. Many meta-learning-based works, \textit{e.g.}, MeLU~\cite{lee2019melu}, MetaEmb~\cite{pan2019warm}, MAMO~\cite{dong2020mamo}, MetaHIN~\cite{lu2020meta}, PAML~\cite{wang2021preference}, GME~\cite{ouyang2021learning} and MWUF~\cite{zhu2021learning}, explore initializing model parameters or embeddings with user and item side information. Some other works train the recommendation model with auxiliary tasks as explicit regularization. DeepMCP~\cite{ouyang2019representation} is an early attempt to explore representation learning by designing a matching subnet and a correlation subnet. SSL4Rec~\cite{yao2020self} and CLCRec~\cite{wei2021contrastive} apply a traditional contrastive learning loss to either user or item representations.
Our method also introduces an auxiliary task, but explores the representation space more comprehensively. \subsection{Contrastive Learning} Self-supervised Learning (SSL) is an unsupervised approach to learning data representations~\cite{liu2020self} and has shown success in computer vision~\cite{wang2019self, li2018non}, audio~\cite{baevski2019vq, ravanelli2020multi}, natural language processing~\cite{devlin2018bert, lan2020albert} and many cross-modality tasks~\cite{alwassel2020self, zhang2020devlbert, owens2018audio}. Contrastive learning (CL) is one representative line of works, including CPC~\cite{oord2018representation}, MoCo~\cite{he2020momentum}, SimCLR~\cite{chen2020simple} and PIRL~\cite{misra2020self}. CL maximizes a lower bound on the mutual information between two or more ``views'' of an instance~\cite{wu2020on}. By identifying the positive sample pairs among other negative pairs, it succeeds in capturing the intrinsic features of individual instances in the latent space. Several attempts have used contrastive learning in sequential recommendations to learn either better item-level features~\cite{zhou2020s3,yao2020self,wei2021contrastive} or user representations~\cite{xie2020contrastive} individually, while our method considers the composed representations of the user and the item. Some works have also extended traditional contrastive learning with more positive pairs. For example, SupCon~\cite{khosla2020supervised} utilizes extra labels and makes each instance close to others of the same class. PCL~\cite{li2020prototypical} uses EM to conduct unsupervised clustering and contrastive learning together. Similar to our AQCL method, these works explore instance representations together with neighbors or clusters. However, they ignore the negative effect on the representations of active users, which actually need to retain sufficient detail for the recommendation.
\section{Preliminary} \subsection{CTR Prediction} CTR prediction is a binary classification problem: find a map $f(\mathbf{x}_j)\rightarrow y_j$ for each pair $\left( \mathbf{x}_j, y_j \right)\in \mathcal{D}$. Generally, each input $\mathbf{x}_j$ contains at least the user id $u_j$ and the candidate item id $i_j$. Besides, the user historical clicks $\mathbf{s}_j = [i_{j,1}, i_{j,2},\cdots, i_{j,L_j}]$ are often considered, where $L_j$ is the length of the click sequence. Combining the user id $u_j$, the item id $i_j$, the historical clicks $\mathbf{s}_j$ and the other features $o_j$, we have $\mathbf{x}_j = (u_j, i_j, \mathbf{s}_j, o_j)$. The corresponding target label is a binary scalar $y_j \in \{0,1\}$ indicating whether the user $u_j$ clicks on the candidate item $i_j$. A typical deep CTR model $f$ consists of the following parts \cite{zhang2021deep}: \begin{itemize}[leftmargin=10pt] \item{\emph{Embedding layer.}} It transforms the sparse categorical features into dense-valued vectors, \textit{i.e.,} embeddings. Features like the item id are projected to fixed-length embeddings. For the historical sequence, we correspondingly acquire a sequence of embeddings for the interacted items. \item{\emph{Feature interaction layer.}} The transformed embeddings are then fed into the interaction layers to produce a compact representation $\mathbf{h}_j$ for the input instance $\mathbf{x}_j$. This component has diverse designs such as Multi-Layer Perceptron (MLP) \cite{guo2017deepfm}, Cross Network \cite{wang2017deep} and Multi-Head Self-Attention \cite{song2019autoint}. \item{\emph{Prediction layer.}} Finally, a simple prediction layer (usually a logistic regression module) produces the final score $\hat{y}_j = f \left( \mathbf{x}_j \right)\in [0,1]$ for $\mathbf{x}_j$ based on the representation $\mathbf{h}_j$.
\end{itemize} With the model output $f(\mathbf{x}_j)$ and the ground truth $y_j$, the CTR model is trained on the dataset $\mathcal{D}$ with the Logloss $\mathcal{L}_{\mbox{c}}$: \begin{equation} \mathcal{L}_{\mbox{c}} = - \frac{1}{|\mathcal{D}|} \sum_{j=1}^{|\mathcal{D}|} \left( y_j \log f\left( \mathbf{x}_j \right) + \left( 1 - y_j \right) \log \left(1 - f(\mathbf{x}_j) \right) \right). \end{equation} \subsection{Cold-start Problem in CTR} \begin{figure}[!h] \centering \includegraphics[width=0.25\linewidth]{fig/distri.pdf} \caption{The user interactions (sorted in descending order) in a cold-start industrial scenario.} \label{fig:user-cdf} \end{figure} Users differ greatly in activeness in cold-start scenarios\footnote{By cold-start scenarios we mean the early stage of recommendation feed applications, which have many slightly-active or non-active users but also a small fraction of active users.}. We divide users into three groups, non-active, slightly-active and highly-active users, based on the length of the user click sequence $\mathbf{s}_j$. Figure~\ref{fig:user-cdf} plots the curve of user sample numbers on one cold-start industrial dataset; the three groups roughly comprise 60\%, 30\% and 10\% of users. We can see that there are only a few active users, and a large proportion of users produce very limited interactions, making the CTR prediction task challenging. \begin{figure}[tb] \centering \includegraphics[width=0.6\linewidth]{fig/compare3.pdf}\\ \begin{minipage}{0.2\linewidth}\centering (a) $\alpha=0$\end{minipage} \begin{minipage}{0.2\linewidth}\centering (b) $\alpha=1$\end{minipage} \begin{minipage}{0.2\linewidth}\centering (c) adaptive $\alpha$\end{minipage} \caption{Illustration of the AQCL loss with different $\alpha$ in the latent space. With $\alpha=0$, AQCL approximately degrades to ICL, which encourages each sample to be different from the others. Ideally, the representations are evenly distributed~\cite{wang2020understanding}.
With $\alpha=1$, AQCL acts like quantized contrastive learning (QCL), which focuses on the interest clustering to help non-active users. The representations might lose the rich details needed for the primary CTR task. With an adaptive control of $\alpha$, AQCL combines the advantages of both ICL and QCL. It encourages the sample representations to be assigned to certain interest clusters, while the distances between samples are still guaranteed to maintain enough information for the CTR prediction.} \label{fig:comparison} \end{figure} \section{Auto-Quantized Contrastive Learning}\label{sqc} \subsection{Self-supervision Framework for CTR\label{sec:ssl}} In cold-start scenarios, the click signal is usually scarce for training. In this case, recent self-supervised learning (SSL), \textit{e.g.,} the contrastive learning loss of SimCLR~\cite{chen2020simple}, is a straightforward choice as an auxiliary task to regularize the CTR model. In SimCLR, each training instance $\mathbf{x}$ and its randomly augmented version $\mathbf{x}^+$ go through the same feature interaction layer to get the corresponding representations $\mathbf{z}$ and $\mathbf{z}^+$ after the projector $g(\cdot)$. The classical loss focuses on instance-level contrastive learning (ICL) and maximizes the similarity between $\mathbf{z}$ and $\mathbf{z}^+$, \begin{equation}\label{eq:icl} \mathcal{L}_{\textrm{ICL}}(\mathbf{z})= -\log \frac{\exp \left(\texttt{sim}\left(\mathbf{z}, \mathbf{z}_{}^{+}\right) / \tau\right)} { \sum_{\mathbf{z'} \in \{\mathbf{z}^+\} \cup \mathbf{Z}^-} \exp \left(\texttt{sim}\left(\mathbf{z}, \mathbf{z}' \right) / \tau\right) }, \end{equation} where $\texttt{sim}$ denotes the cosine similarity, $\tau$ is a temperature hyper-parameter and $\mathbf{Z}^-$ are samples from negative sampling.
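As a concrete reference, the ICL loss of Eqn.~\eqref{eq:icl} can be written down in a few lines of plain Python (a minimal sketch with explicit loops; the toy vectors and the value of $\tau$ are illustrative, not the settings used in our experiments):

```python
import math

def cos_sim(u, v):
    # cosine similarity between two vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(a * a for a in v))
    return dot / (norm_u * norm_v)

def icl_loss(z, z_pos, z_negs, tau=0.1):
    """Instance-level contrastive (InfoNCE-style) loss for one anchor z.

    z      : anchor representation
    z_pos  : representation of the augmented view of the same instance
    z_negs : negative representations sampled from the batch
    """
    num = math.exp(cos_sim(z, z_pos) / tau)
    den = num + sum(math.exp(cos_sim(z, zn) / tau) for zn in z_negs)
    return -math.log(num / den)

# Toy usage: the positive view is almost aligned with the anchor while the
# negatives are not, so the loss is close to zero.
z, z_pos = [1.0, 0.0], [0.9, 0.1]
z_negs = [[-1.0, 0.0], [0.0, 1.0]]
loss = icl_loss(z, z_pos, z_negs)
```

Moving a negative closer to the anchor (or the positive farther away) increases the loss, which is exactly the discrimination pressure that Eqn.~\eqref{eq:icl} exerts on the representations.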
Regarding the CTR task, we apply the following input augmentation: first, we randomly mask some items from the user click history $\mathbf{s}_j$; then, we randomly set some embedding entries to zero, akin to a dropout operation. Similar to \cite{chen2020simple}, we transform the latent code $\mathbf{h}$ by a small non-linear projection module $g(\cdot)$ to get the actual representation $\mathbf{z}$ for the SSL task, i.e., $\mathbf{z}_i = g(\mathbf{h}_i)$. In practice, $g(\cdot)$ is implemented as a 3-layer MLP with the leaky-ReLU~\cite{maas2013rectifier} activation. The final training loss in the framework is a combination of the primary CTR prediction task and the self-supervised auxiliary task. Note that the SSL module only participates in the training stage. \subsection{Auto-Quantized Contrastive Learning} However, the ICL loss only explores the instance-level similarity and fails to capture the relationship between neighbors. For non-active users, it is reasonable to draw benefits from neighbors with rich behaviors in the latent space. This motivates us to model the structural information in the latent representation space, which can also be considered as user interest clusters. Here, we define each codeword in the representation codebook as an interest, borrowing the concept from vector quantization~\cite{gray1984vector}, and propose an Auto-Quantized counterpart of contrastive learning. Figure~\ref{fig:comparison} shows the difference between ICL and AQCL. Formally, for $T$ interests $Q=[\mathbf{q}_1, \mathbf{q}_2, \cdots, \mathbf{q}_T]$ and a certain $\mathbf{z}$, we find the top-$K$ closest codewords $Q^+$, i.e., \begin{equation} Q^+ = {\arg\max} _{\mathbf{q}^1, \mathbf{q}^2,\cdots, \mathbf{q}^K \in {Q}} \sum_{k=1}^K \texttt{sim}(\mathbf{z}, \mathbf{q}^k).
\nonumber \end{equation} Note that \texttt{sim} is the cosine similarity in the unit-sphere space, as in Eqn.~\eqref{eq:icl}, since it is much less vulnerable to mode collapse~\cite{ma2019learning} and is widely used in prototype-based methods~\cite{li2020prototypical}. The proposed Auto-Quantized contrastive learning loss is formulated as a dynamic combination of instance-level and cluster-level contrastive learning: \begin{equation} \label{AQCL} \begin{gathered} \mathcal{L}_{\textrm{AQCL}} (\mathbf{z}) = -\log \frac{ \left[ d_1\left(\mathbf{z}, \mathbf{z}^+ \right) \right] ^ {1-\alpha} \left[\sum_{\mathbf{q}^+ \in Q^+} d_2\left(\mathbf{z}, \mathbf{q}^+ \right) \right] ^ {\alpha} } { \sum_{ \mathbf{z'} \in \{\mathbf{z}^+\} \cup {\mathbf{Z}^-}} d_1 \left(\mathbf{z}, \mathbf{z}' \right) + \sum_{\mathbf{q'} \in Q} d_2 \left(\mathbf{z}, \mathbf{q}' \right) }, \\ d_1\left( \mathbf{z}, \mathbf{z}^{'} \right) = \exp \left(\texttt{sim}\left(\mathbf{z}, \mathbf{z}^{'}\right) / \tau_1 \right), \\ d_2\left(\mathbf{z}, \mathbf{q}^{'} \right) = \exp \left(\texttt{sim}\left(\mathbf{z}, \mathbf{q}^{'}\right) / \tau_2 \right), \end{gathered} \end{equation} where $\tau_1$ and $\tau_2$ are temperature hyper-parameters for instances and clusters, and $\alpha$ controls the geometric mean of the instance-instance and instance-cluster similarities. The design of $\alpha$ automatically balances the representation personalization of non-active and active users, as explained later. AQCL extends ICL by introducing positive support from both the augmented version $\mathbf{z}^+$ and a set of $K$ closest codewords. We allow $K$ to be equal to or larger than 1, because a sample representation may contain several interests, and using multiple codewords can be more robust to possibly incomplete interest clustering. Note that the codewords are built based on the whole dataset, while the positive and negative pairs come from the same batch.
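To make Eqn.~\eqref{AQCL} concrete, the loss for a single anchor can be sketched in plain Python (illustrative only; the toy codebook, the hyper-parameter values and the vectors below are hypothetical):

```python
import math

def cos_sim(u, v):
    # cosine similarity between two vectors
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(a * a for a in v)))

def aqcl_loss(z, z_pos, z_negs, codebook, K=1, alpha=0.5, tau1=0.1, tau2=0.1):
    """Auto-quantized contrastive loss for one anchor z (sketch of Eqn. (2)).

    The numerator geometrically mixes the instance-instance term (weight
    1 - alpha) and the instance-cluster term over the top-K codewords
    (weight alpha); the denominator sums over all candidates.
    """
    d1 = lambda a, b: math.exp(cos_sim(a, b) / tau1)  # instance kernel
    d2 = lambda a, b: math.exp(cos_sim(a, b) / tau2)  # cluster kernel
    # top-K closest codewords act as the positive interest clusters Q^+
    q_pos = sorted(codebook, key=lambda q: cos_sim(z, q), reverse=True)[:K]
    num = (d1(z, z_pos) ** (1.0 - alpha)) * (sum(d2(z, q) for q in q_pos) ** alpha)
    den = (d1(z, z_pos) + sum(d1(z, zn) for zn in z_negs)
           + sum(d2(z, q) for q in codebook))
    return -math.log(num / den)

# Toy usage: a codebook with one aligned and one opposite codeword.
z, z_pos = [1.0, 0.0], [0.9, 0.1]
codebook = [[1.0, 0.1], [-1.0, 0.0]]
loss_mixed = aqcl_loss(z, z_pos, [[0.0, 1.0]], codebook, alpha=0.5)
loss_icl_like = aqcl_loss(z, z_pos, [[0.0, 1.0]], codebook, alpha=0.0)
```

With `alpha=0.0` the numerator keeps only the instance term, so the loss approximately degrades to ICL (up to the extra cluster terms in the denominator), matching the $\alpha=0$ case in Figure~\ref{fig:comparison}.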
\subsubsection{Building the codebook $Q$} In AQCL, a good codebook should cover all the sample representations with relatively small distances. Considering the possibly large-scale training dataset, we leverage an online method~\cite{Caron2020UnsupervisedLO} to learn the codebook along with the AQCL training. In detail, for a codebook $Q=[\mathbf{q}_1, \mathbf{q}_2, \cdots, \mathbf{q}_T]$ and a batch of representations $Z = [\mathbf{z}_1, \mathbf{z}_2, \cdots, \mathbf{z}_B]$ with batch size $B$, we acquire the corresponding assignment code matrix $A = [\mathbf{a}_1, \mathbf{a}_2, \cdots, \mathbf{a}_B] \in \mathbb{R}_+^{T\times B}$. Each column $\mathbf{a}_b$ of $A$ denotes the probability of assigning $\mathbf{z}_b$ to the $T$ codewords. Similar to \cite{asano2019self, Caron2020UnsupervisedLO}, the objective is \begin{equation} \max_{A\in\mathcal{A}} \mbox{Tr}(A^\top Q^\top Z) + \epsilon H(A), \end{equation} where $H$ is the entropy function serving as a regularization with a small weight $\epsilon$. We define the constraints on $A$ by \begin{equation} \mathcal{A} = \left\{ A \in \mathbb{R}_+^{T\times B} \mid A\mathbf{1}_B = \cfrac{1}{T} \mathbf{1}_T, A^\top \mathbf{1}_T = \cfrac{1}{B}\mathbf{1}_B \right\}\,, \nonumber \label{eqn:assign} \end{equation} where $\mathbf{1}_T$ denotes the vector of ones of dimension $T$. The constraints ensure that the samples are assigned roughly evenly across the codewords. This can be considered as an optimal transport problem \cite{asano2019self} and solved by the iterative Sinkhorn-Knopp \cite{cuturi2013sinkhorn} algorithm at a small computational cost. Then, we convert the continuous solution $A^*$ into its discrete, one-hot version using the $\arg\max$ operation. In this way, each representation $\mathbf{z}_b$ is encouraged to be close to only one interest codeword.
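The Sinkhorn-Knopp normalization that produces the assignment matrix $A$ can be sketched as follows (plain Python with explicit loops; the score matrix, $\epsilon$ and the iteration count are illustrative):

```python
import math

def sinkhorn_assign(scores, eps=0.05, n_iters=3):
    """Entropic assignment: scores is the T x B similarity matrix Q^T Z.

    Alternately rescales rows to sum to 1/T and columns to sum to 1/B,
    the two marginal constraints defining the feasible set of A.
    """
    T, B = len(scores), len(scores[0])
    A = [[math.exp(s / eps) for s in row] for row in scores]
    for _ in range(n_iters):
        for t in range(T):                          # each row -> mass 1/T
            r = sum(A[t]) * T
            A[t] = [a / r for a in A[t]]
        for b in range(B):                          # each column -> mass 1/B
            c = sum(A[t][b] for t in range(T)) * B
            for t in range(T):
                A[t][b] /= c
    return A

# Toy usage: 2 codewords, 4 samples; the arg-max per column gives the
# discrete one-hot assignment used by AQCL.
scores = [[0.9, 0.1, 0.8, 0.2],
          [0.1, 0.9, 0.2, 0.8]]
A = sinkhorn_assign(scores)
hard = [max(range(2), key=lambda t: A[t][b]) for b in range(4)]
```

The final column normalization makes each column carry exactly $1/B$ of the mass, while the rows approach $1/T$ as the iterations proceed, so no codeword can absorb the whole batch.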
In summary, the loss to build the codebook is formulated as \begin{align}\label{eqn:codebook} \begin{split} & \mathcal{L}_{\textrm{codebook}} = -\sum_{b=1}^B \sum_{t=1}^T \mathbf{a}_b^{(t)} \log \mathbf{p}_b^{(t)}, \\ & \mathbf{p}_b^{(t)} = \cfrac{ \exp (\texttt{sim} \left( \textrm{sg} (\mathbf{z}_b), \mathbf{q}_t \right) / \tau_3 ) }{ \sum_{\mathbf{q}\in Q} \exp (\texttt{sim} \left( \textrm{sg} (\mathbf{z}_b), \mathbf{q} \right) / \tau_3 )}, \end{split} \end{align} where $\textrm{sg}(\cdot)$ denotes the stop-gradient operation, and $\tau_3$ is a temperature hyper-parameter. Note that Eqn.~\eqref{eqn:codebook} is only used to learn the codebook and does not update the model parameters. \subsubsection{Auto-Quantization via $\alpha$-adaptation} Intuitively, users with scarce clicks are more uncertain and need support from their neighbors; conversely, the performance on active users might suffer from over-quantization, since their representations maintain more details for prediction. Therefore, the representation learning should account for the different user activeness in cold-start scenarios. To achieve this, we search $\alpha$ to automatically balance the importance of the instance-instance measure and the instance-cluster measure in Eqn.~\eqref{AQCL}. Specifically, we design a weight control module for $\alpha$ as a function of the click-history length $L_j$ of user $u_j$, \textit{i.e.}, $\alpha_j = R(L_j)$. As $L_j$ increases, we know more about the user history, and empirically we should not enforce conformity, so $\alpha$ should be smaller. However, it is challenging and labor-intensive to design the weight-\textit{vs.}-activeness curve for each agnostic cold-start scenario. Therefore, we resort to AutoML~\cite{hutter2019automated} to search the appropriate function $R(L_j)$ for Eqn.~\eqref{AQCL} in the following.
\emph{Search space.} First, the search space for $R(L_j)$ should take the following two intuitions into account: (1) when $L_j$ increases, $\alpha$ should decrease; (2) the value of $\alpha$ should lie in the range $[0,1]$. In this paper, we design the search space as \begin{equation}\label{eq:search-space} \mathcal{F} =\left\{ R(L_j) = e^{-w_1 \cdot (L_j / L)^{w_2}} : w_1>0, w_2 > 0\right\}, \end{equation} where $L$ is the mean length of the click history over all users. The exact choice of the basis function is not crucial. Figure~\ref{fig:possible-search-results} illustrates some possible search results. \begin{figure}[h] \centering \includegraphics[width=0.23\linewidth, trim=0 5 0 5]{fig/auto-demo.pdf} \caption{Possible search results for $R(L_j)$.} \label{fig:possible-search-results} \end{figure} \emph{Search objective.} For the problem in this section, we need a subset $\mathcal{D}_{\textrm{val}}$ (partitioned from the training set) to guide the search. Given $\theta$ as the parameters of the CTR model $f$, we aim to search for the proper $\alpha$ function such that the model trained on the training set $\mathcal{D}_{\textrm{train}}$ has the best performance on the validation set $\mathcal{D}_{\textrm{val}}$. Concretely, the objective is defined as \begin{align}\label{eq:auto-ml} \begin{split} \{w_1^*, w_2^*\} & = \arg\min_{R(\cdot)\in \mathcal{F}} \mathcal{L}_{\textrm{val}} (f(\theta^*; R), \mathcal{D}_{\textrm{val}}), \\ & \theta^* = \arg\min_{\theta} \mathcal{L}_{\textrm{train}}(f(\theta; R), \mathcal{D}_{\textrm{train}}), \end{split} \end{align} where $\mathcal{L}_{\textrm{val}}$ is the Logloss $\mathcal{L}_c$ on the validation set $\mathcal{D}_{\textrm{val}}$ and $\mathcal{L}_{\textrm{train}}$ is the training loss on $\mathcal{D}_{\textrm{train}}$. Based on Eqn.~\eqref{eq:auto-ml}, we can find the optimal function in Eqn.~\eqref{eq:search-space}, which achieves the dynamic balance in representation learning. \subsection{AQCL Algorithm} AQCL is implemented as an auxiliary loss to the primary CTR task.
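For reference, a member of the $\alpha$-schedule family in Eqn.~\eqref{eq:search-space} is easy to state directly (a minimal sketch; the $w_1, w_2$ values and the mean history length below are illustrative, not the searched ones):

```python
import math

def alpha_schedule(L_j, L_mean, w1=1.0, w2=1.0):
    """Candidate R(L_j) = exp(-w1 * (L_j / L_mean) ** w2).

    Maps a user's click-history length to the quantization weight alpha:
    it equals 1 at L_j = 0 and decays toward 0 for very active users.
    """
    return math.exp(-w1 * (L_j / L_mean) ** w2)

# Toy usage with a hypothetical mean history length of 20 clicks.
alphas = [alpha_schedule(L, 20.0) for L in (0, 5, 20, 100)]
```

The two required intuitions hold for any choice of $w_1, w_2 > 0$: the schedule is strictly decreasing in $L_j$ and always stays inside $(0, 1]$.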
Therefore, the overall loss is \begin{equation} \label{eqn:final-overall} \mathcal{L} = \mathcal{L}_{\textrm{c}} + w \mathcal{L}_{\textrm{AQCL}}\,, \end{equation} where $w$ is the weight for the auxiliary task. Note that the codebook $Q$ is learned together with Eqn.~\eqref{eqn:final-overall} in an end-to-end manner, using the loss function in Eqn.~\eqref{eqn:codebook}. The training procedure is summarized in Algorithm~\ref{alg:algorithm}. AutoML eases the search of the hyper-parameters $w_1, w_2$ in Algorithm~\ref{alg:algorithm} using the objective~\eqref{eq:auto-ml}. Once the proper hyper-parameters are found by AutoML, we obtain the final CTR model. During the test phase, all components of the auxiliary task are omitted. \begin{algorithm}[tb] { \caption{Algorithm for AQCL training} \label{alg:algorithm} \textbf{Input}: Training samples $\{\mathbf{x}_j\}$ and their history lengths $\{L_j\}$, \\ \text{\quad\quad\,\,\,\,\,} parameters $w_1$ and $w_2$ for $\alpha$-adaptation\\ \textbf{Output}: the CTR model \begin{algorithmic}[1] \STATE Initialize the CTR model $f$ and the user interest codebook $Q$. \WHILE{not early stop} \STATE Fetch a batch of $\{\mathbf{x}_j\}$ and the click lengths $\{ L_j\}$. \STATE Get the outputs $\{\hat{y}_j\}$ and the projected representations $\{\mathbf{z}_j\}$. \STATE Get the top-$K$ positive interests with the codebook $Q$. \STATE Update $f$ by Eqn.~\eqref{eqn:final-overall} with $\alpha$ computed by Eqn.~\eqref{eq:search-space}. \STATE Get the discrete assignment matrix ${A}$ with Sinkhorn and update the user interest codebook $Q$ by Eqn.~\eqref{eqn:codebook}. \ENDWHILE \STATE \textbf{return} model $f$ \end{algorithmic}} \end{algorithm} \section{Experiments} In this section, we evaluate the proposed AQCL framework. Specifically, we would like to answer the following questions: \begin{itemize}[leftmargin=10pt] \item{\textbf{RQ1.}} Compared with other methods, how does AQCL perform with the CTR model on different groups of users?
\item{\textbf{RQ2.}} Is AQCL, as an auxiliary task, generally compatible with different CTR models? \item{\textbf{RQ3.}} How does AQCL work, and what effect do the hyper-parameters have? \end{itemize} \subsection{Datasets} We conduct experiments on three datasets with relatively severe data sparsity. The statistics are listed in Table \ref{dataset}. \input{table/dataset} \begin{itemize}[leftmargin=10pt] \item{\textbf{Amazon}\footnote{\url{https://jmcauley.ucsd.edu/data/amazon}}.} This dataset~\cite{He2016UpsAD} is composed of product reviews from the Amazon website. We follow~\cite{chen2018neural,ijcai2018-521} and use the Beauty subset to verify AQCL. The task is to predict whether a user will comment on a certain item. \item{\textbf{Ta Feng}\footnote{\url{http://recsyswiki.com/wiki/Grocery_shopping_datasets}}.} This is a sparse grocery shopping dataset released by ACM RecSys. It covers products ranging from food and office supplies to furniture. The dataset consists of user transactions from November 2000 to February 2001. We predict whether a user will buy a certain item. \item{\textbf{Oncold.}} This is an industrial dataset from a real-world online cold-start recommendation feed, collected from May to July 2021. The dataset is extremely sparse, as most users have clicked only a few items. \end{itemize} Like~\cite{zhou2018deep, ma2019learning}, we sort the user behaviors by timestamp. For Amazon (Ta Feng, respectively), we use the last interaction (day) as the test set, the second-to-last interaction (day) as the validation set, and the rest as the training data. For the Oncold dataset, we split the last 20 days of data equally into the test set and the validation set, and the remaining clicks are used as the training data. \subsection{Experiment Settings} \subsubsection{Baselines} For RQ1, we compare with representative methods of the two research lines, \textit{i.e.,} the implicit regularization and the explicit regularization.
For a fair comparison, the following methods and AQCL all use DIN as the backbone model. Besides, we only adopt the regularization designs of these works and omit their modifications of the CTR model architectures, \textit{e.g.,} the positional encoding. \begin{itemize}[leftmargin=10pt] \item \textbf{DropoutNet}~\cite{volkovs2017dropoutnet} is a training strategy that randomly masks user or item embeddings to handle the cold-start problem. It encourages the CTR model to make full use of the side information. \item \textbf{DeepMCP}~\cite{ouyang2019representation} is a representation-learning-aided model. Besides the Logloss, it uses a matching subnet to capture the user-item relation, and a correlation subnet to explore the item-history relation. \item \textbf{DMR}~\cite{lyu2020deep} proposes an auxiliary matching loss to measure the correspondence between the user preference and the target item in the embedding space. \item \textbf{ICL}~\cite{chen2020simple} is the vanilla instance-level contrastive learning. We use ICL during the training stage rather than for pre-training. \end{itemize} For RQ2, we verify AQCL with the following backbones. \begin{itemize}[leftmargin=10pt] \item \textbf{W$\&$D}~\cite{cheng2016wide} is a classical method that uses the feature cross to help the model capture the high-order relationships hidden in the data for better prediction. \item \textbf{DeepFM}~\cite{guo2017deepfm} is a successful attempt to combine the power of factorization machines in recommendation with deep learning for feature learning. \item \textbf{DIN}~\cite{zhou2018deep} uses the attention mechanism to learn the representation of the user click history given the candidate item, to better explore the user interests. \end{itemize} \subsubsection{Implementations} All experiments are implemented in PyTorch and run on NVIDIA Tesla V100 GPUs. We use the Adam optimizer with a learning rate of $0.001$ during training. The dropout rate for the DNNs is set to 0.2.
We search the L2 regularization weight for the embeddings among $\{10^{-5}, 10^{-4}, \cdots, 10^{-1}\}$ in all models. For AQCL, we search the weight $w$ of the auxiliary task among $\{0.01, 0.05, 0.1\}$, and the temperatures $\tau_1, \tau_2, \tau_3$ are all set to 0.1. We set the codebook capacity $T$ to 128 for each dataset, and $K$ to 5. \subsubsection{Evaluation metrics} We adopt AUC and RelaImpr to evaluate the performance of the CTR models. AUC measures the probability that a randomly chosen positive sample is ranked higher than a randomly chosen negative sample. A higher AUC indicates better performance. RelaImpr, used in many works like~\cite{pmlr-v32-yan14, zhu2021learning}, shows the relative improvement of the target model over the base. Here, we define the base as the backbone used in our experiments, \textit{e.g.,} DIN. RelaImpr is thus defined as \begin{equation} \textrm{RelaImpr} = \left( \cfrac{\textrm{AUC (target)}-0.5}{\textrm{AUC (base)}-0.5} - 1\right) \times 100\%. \end{equation} We calculate the metrics on non-active, slightly-active and active users individually to monitor the model performance on users with different activeness. \subsection{Results and Discussion} \subsubsection{CTR prediction} To answer RQ1, we conduct experiments on the baselines and our method. From Table \ref{result}, we find that AQCL achieves consistent improvements on the three datasets. \input{table/maintable} \begin{itemize}[leftmargin=10pt] \item In general, we observe an overall performance improvement with the implicit/explicit regularization. However, there is a slight performance drop for DropoutNet on Ta Feng and for DMR on Oncold. Considering that DropoutNet emphasizes the importance of the side information, the performance may suffer if not enough auxiliary information is available. For DMR, the representation correspondence constraint between the user and the target item might be harmful to the active users.
For the other cases, positive effects are observed on both non-active and active users. This suggests that proper regularization can help the CTR model in cold-start scenarios. \item Our method outperforms the baselines in most cases. The advantage of AQCL is that it can automatically capture the interest clusters to support the non-active users, while weakening the information loss in the representations of the active users via $\alpha$ adaptation. For the Amazon dataset, we see a significant improvement on non-active users, which shows the effectiveness of using neighbour information to alleviate the sparsity issue. For the Ta Feng and Oncold datasets, the gain is larger on the highly-active users. This can be attributed to the fact that properly relaxing the AQCL loss to incorporate instance-instance similarity yields a more robust representation. \end{itemize} \subsubsection{Different backbones} To answer RQ2, we pair AQCL with other backbones, \textit{i.e.,} W$\&$D and DeepFM, to verify its effectiveness. For both W$\&$D and DeepFM, we do not modify their wide or FM part, and apply AQCL to the output before the last linear layer on the deep side. Table \ref{tab:deepfm} summarizes the results of AQCL with different backbones. According to the table, AQCL consistently improves the backbones, which demonstrates its compatibility with popular CTR models. \input{table/backbone} \begin{figure}[t] \centering \includegraphics[width=0.22\linewidth, trim=0 2 0 0]{fig/auto-result.pdf} \hspace{0.01\linewidth} \includegraphics[width=0.22\linewidth, trim=0 2 0 0]{fig/alpha.pdf} \hspace{0.03\linewidth} \caption{\textbf{Left}: The curves of $\alpha$ in AQCL for the three datasets; \textbf{Right}: The experiments on Amazon with constant $\alpha$.} \label{fig:automl} \end{figure} \subsection{Visualization and Ablation Study} To answer RQ3, we conduct a range of visualization and ablation studies of AQCL in the following. 
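As a concrete reference for the improvements reported in the tables, the RelaImpr metric can be computed in a few lines (a minimal sketch; the function name is ours):

```python
def rela_impr(auc_target: float, auc_base: float) -> float:
    """Relative improvement of the target over the base model, in percent.

    Both AUCs are offset by 0.5, the score of a random predictor,
    so that RelaImpr measures the gain above chance.
    """
    return ((auc_target - 0.5) / (auc_base - 0.5) - 1.0) * 100.0

# Example: with a base (backbone) AUC of 0.750, a target AUC of 0.755
# corresponds to (0.255 / 0.250 - 1) * 100 = 2 percent RelaImpr.
gain = rela_impr(0.755, 0.750)
```

Note that a small absolute AUC gain can correspond to a sizeable RelaImpr, since the denominator is the base model's margin over random guessing.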
\subsubsection{$\alpha$-adaptation} The search result for $\alpha$ in AQCL on each dataset is plotted in the left panel of Figure~\ref{fig:automl}. The curves differ significantly across the three datasets. Amazon requires relatively small instance-cluster regularization for active users. In comparison, Ta Feng relies more heavily on the clusters, and Oncold behaves similarly but assigns a large $\alpha$ to all users. We also explore replacing $\alpha$-adaptation with a constant value in $\{0, 0.3, 0.6, 1.0\}$. According to the right panel of Figure~\ref{fig:automl}, there is a performance drop compared with $\alpha$-adaptation, which demonstrates the effectiveness of AQCL in avoiding manual tuning. \subsubsection{Representation $h$} To visualize the learned representations in the latent space, we randomly choose a subset of the Oncold dataset and project the representations $\{\mathbf{h}_j\}$ into 2D space via t-SNE~\cite{van2008visualizing} in Figure~\ref{tsne}. For a better view, we color each point according to the interest cluster deduced by AQCL. We find that AQCL divides the sample representations roughly into several groups, while the vanilla DIN does not. This confirms the motivation of AQCL regarding interest clusters, which might be useful to the non-active users. 
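The cluster assignment used to color the points can be sketched with plain NumPy; a minimal illustration, assuming for simplicity that each representation is assigned to its nearest codeword in the learned codebook (the exact AQCL assignment rule may differ, and the array contents here are synthetic stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 128, 64                          # codebook capacity and representation dim
codebook = rng.normal(size=(T, d))      # stand-in for the learned codewords
h = rng.normal(size=(500, d))           # stand-in for representations {h_j}

# Squared Euclidean distance from every sample to every codeword,
# then assign each sample to its nearest codeword (its interest cluster).
d2 = ((h[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
clusters = d2.argmin(axis=1)
```

The resulting `clusters` array is what each t-SNE point would be colored by.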
\begin{figure}[t] \centering \begin{subfigure}{0.22\linewidth} \includegraphics[width=\linewidth, trim=0 1 0 1]{fig/tsne-din.pdf} \caption{DIN} \label{fig:tsne-din} \end{subfigure}% \hspace{0.03\linewidth} \begin{subfigure}{0.22\linewidth} \includegraphics[width=\linewidth, trim=0 1 0 1]{fig/tsne-aqcl.pdf} \caption{AQCL} \label{fig:tsne-aqcl} \end{subfigure}% \caption{t-SNE visualization of the latent representations learned by DIN and AQCL, respectively, on the Oncold dataset.}\label{tsne} \end{figure} \begin{figure}[t] \centering \begin{subfigure}{0.2\linewidth} \includegraphics[width=\linewidth, trim=0 1 0 1]{fig/ablation-dimension.pdf} \caption{Dimension of $\mathbf{z}$} \label{fig:ablation-dimension} \end{subfigure}% \begin{subfigure}{0.2\linewidth} \includegraphics[width=\linewidth, trim=0 1 0 1]{fig/ablation-auxweight.pdf} \caption{Weight of $\mathcal{L}_{\textrm{AQCL}}$} \label{fig:ablation-auxweight} \end{subfigure}% \begin{subfigure}{0.2\linewidth} \includegraphics[width=\linewidth, trim=0 1 0 1]{fig/ablation-K.pdf} \caption{Top-$K$ interest} \label{fig:ablation-K} \end{subfigure}% \caption{(a) The effect of the dimension of $z$ on Amazon; (b) the effect of the auxiliary weight $w$ on Ta Feng; (c) the effect of the positive interest number $K$ on Oncold.} \label{fig:ablation} \end{figure} \subsubsection{Dimension of $z$} As mentioned before, we project the hidden latent code $\mathbf{h}_i$ into another space with the MLP $g(\cdot)$. The resulting vector $\mathbf{z}_i$ can have different dimensions. In Figure~\ref{fig:ablation-dimension}, we show that the dimension of $\mathbf{z}$ is also important to the final performance. Specifically, we find that dimensions $\geq 32$ preserve the effectiveness of AQCL, while too small a dimension hinders the projector from collecting enough information for the auxiliary task and thus causes negative effects. 
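The projector $g(\cdot)$ discussed above can be sketched as a small MLP; a minimal NumPy illustration in which the two-layer structure, ReLU activation, random initialisation, and final L2 normalisation are all our assumptions, not the paper's exact architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def projector(h, d_out=32):
    """Two-layer MLP g(.) mapping h of shape (n, d_in) to z of shape (n, d_out)."""
    d_in = h.shape[1]
    w1 = rng.normal(scale=d_in ** -0.5, size=(d_in, d_in))
    w2 = rng.normal(scale=d_in ** -0.5, size=(d_in, d_out))
    hidden = np.maximum(h @ w1, 0.0)                # ReLU
    z = hidden @ w2
    # L2-normalise, as is common before a contrastive loss
    return z / np.linalg.norm(z, axis=1, keepdims=True)

z = projector(rng.normal(size=(16, 64)))            # d_out = 32, per the ablation
```

The ablation above suggests keeping `d_out` at 32 or larger so the projector retains enough capacity for the auxiliary task.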
\subsubsection{Weight of $\mathcal{L}_{\textrm{AQCL}}$} In this part, we conduct experiments with different auxiliary-task weights $w$ on the Ta Feng dataset to verify their effect. As shown in Figure~\ref{fig:ablation-auxweight}, we empirically find that $w$ should not be very large. This might be because AQCL serves here as a data-driven regularization, and an overly large $w$ leaves too little attention for the primary task. On the other hand, $w$ should not be too small either, since in that case the training degrades to the vanilla CTR task without the auxiliary gain. \subsubsection{Interest number $K$} AQCL allows each sample to be assigned to several interest clusters by adjusting $K$. Figure~\ref{fig:ablation-K} shows the effect of varying $K$ among $\{1,5,10\}$. We see that the performance decreases when $K=1$, which implies that the user representation consists of multiple interests. \section{Conclusion} This paper aims at handling cold-start scenarios for CTR models by designing an auxiliary task. We propose an Auto-quantized Contrastive Learning (AQCL) loss that encourages the model to leverage the possible interest clusters to help the non-active users while maintaining the generalization ability for the active users. By training CTR models with AQCL, we demonstrate that our method consistently improves current models, especially in the face of scarce interactions. The proposed framework is compatible with different model architectures and can be trained in an end-to-end fashion. We hope our work can inspire more exploration of improving CTR models with self-supervised representation learning. \bibliographystyle{plain}
\section{Introduction} An investigation of the extinction properties towards a number of regions of recent star formation in the Large Magellanic Cloud (De Marchi \& Panagia 2014, 2019; De Marchi et al. 2014, 2016, 2020) has so far revealed a consistent picture: shortwards of $\sim 1\,\mu$m, the extinction curve is systematically flatter (in logarithmic units) than in the diffuse interstellar medium (ISM). This points to the presence of a grey component due to a larger proportion of big grains. In the regions studied so far (30 Dor, NGC 2060, NGC 1938), the total extinction is not only uneven, but also rather large for the LMC, with $A_V\la2$. We have undertaken a systematic study of all the LMC star-forming regions for which high-quality photometry with the {\em Hubble Space Telescope} (HST) is available, in an attempt to understand whether and how the extinction properties depend on the actual amount of extinction present in these fields. In the Milky Way, regions of heavy extinction along the Galactic plane have long been known to show a ratio of total-to-selective extinction $R_V \equiv A_V/E(B-V)$ systematically larger than the value of $\sim 3.1$ typical of the diffuse ISM. Examples include the Orion Nebula, NGC\,2244, I Ara, IC\,2851 (see, e.g., Sharpless 1952; Sharpless 1962; Johnson 1965; Turner 1973; Herbst 1975). In this work we consider three additional LMC clusters located in the bar of the galaxy, which are subject to intermediate values of $A_V$, in order to explore whether the extinction properties and the value of $R_V$ indeed depend on the amount of dust in these fields. The first is the cluster NGC\,1858. The only study so far of the stellar populations in this cluster based on digital photometry is that of Vallenari et al. (1994). 
Adopting $E(B-V)=0.15\pm0.05$ (from Caplan \& Deharveng 1984), these authors show that the most likely age of the main cluster population is $8-10$\,Myr, although the presence of a protostar in the field (Epchtein et al. 1984) suggests that some star formation might still be active. The second is the cluster NGC\,1854 (also called NGC\,1855). In a study based on photoelectric photometry, Connolly \& Tifft (1977) showed that the cluster has a tight main sequence (MS) and derived an age of $25 \pm 15$ Myr, with a colour excess $E(B-V)$ about $0.1$ mag redder than the neighbouring field. The age determination is fully consistent with later works based on photographic photometry by Hodge (1983), suggesting $30\pm10$\,Myr, and by Alcaino \& Liller (1987), who derived $25\pm6$\,Myr. No results based on digital photometry exist in the literature for this cluster. The third cluster is NGC\,1856, whose stellar population and extended MS turn-off in the colour-magnitude diagram (CMD) have been extensively studied in the past decade. Works by Bastian \& Silva--Villa (2013), D'Antona et al. (2015), Correnti et al. (2015), and Milone et al. (2015) concur in assigning to it an age of $\sim 300$\,Myr. Correnti et al. (2015) and Milone et al. (2015) address the presence of uneven extinction across the cluster. They assume that the extinction properties, namely the extinction law and direction of the reddening vector, are those of the ``standard'' MW extinction curve. In this work we will not cover the NGC\,1856 cluster itself, but rather four regions adjacent to it, at a projected distance of $\sim 5^\prime$ or $\sim 75$\,pc, specifically to probe an area still relatively close to NGC\,1854 and NGC\,1858 (about 200 pc to the south) but not affected by recent massive star formation episodes. 
\begin{figure*} \centering \resizebox{\hsize}{!}{\includegraphics[height=13.1cm]{figure1a.pdf} \includegraphics[height=13cm]{figure1b.pdf}} \resizebox{\hsize}{!}{\includegraphics[width=14cm]{figure1c.pdf} \includegraphics[trim=-1.5cm 0cm 1cm 0cm,clip,width=15.4cm]{figure1d.pdf}} \caption{Colour composite images of the regions studied in this work. Panels a) and b), corresponding respectively to fields around NGC\,1858 and NGC\,1854, were obtained by combining the exposures in the $V$ and $I$ bands. Panel c) corresponds to Field 4, located to the north-east of NGC\,1856, and was obtained from the combination of the $B$ and $I$ bands. The three fields span approximately $205\arcsec$ or $\sim 50$\,pc on a side. Panel d) shows the projected distribution of the fields on the plane of the sky, in a region of about $20\arcmin$ or $\sim 300$\,pc on a side. The yellow circles, with a radius of $50\arcsec$ or $12.5$\,pc, indicate the areas corresponding to the CMDs shown in Figures\,\ref{fig2} -- \ref{fig4}.} \label{fig1} \end{figure*} In this paper, we investigate the origin of the patchy extinction causing differential reddening across the fields around NGC\,1854, NGC\,1856, and NGC\,1858 and show that the extinction properties, and the grain size distribution that they imply, are consistent with those measured in a field North of NGC\,2060. The structure of the paper is as follows. In Section\,2 we present the observations and their analysis. In Section\,3 we discuss the different populations present in the fields, while Section\,4 is devoted to the extinction properties, which we compare with those of other regions in the LMC. A discussion and the conclusions follow, respectively, in Sections\,5 and 6. \vspace*{0.5cm} \section{Observations and data analysis} The { fields around the} clusters NGC\,1854, NGC\,1856, and NGC\,1858 were observed with the {\em Wide Field Channel} (WFC) of the {\em Advanced Camera for Surveys} (ACS) on board the HST. 
Colour composite images of the regions studied in this work are shown in Figure\,\ref{fig1}, while details on the filters, exposure times, and dates of the observations are contained in Table\,\ref{tab1}. The effective point spread function (ePSF) fitting procedure developed by Anderson et al. (2008) was used for the astrometric and photometric analysis of the images. The stellar positions derived in this way were further corrected for geometric distortion by using the solution by Anderson \& King (2006). The instrumental magnitudes were calibrated in the VEGAMAG reference system following Anderson et al. (2008), with the zero-point values taken from the ACS Zeropoints Calculator (see Ryon 2019). Throughout this paper we will refer to the magnitudes in the F475W, F555W, and F814W bands as, respectively, $B$, $V$, and $I$. Even with the shortest of the exposure times listed in Table\,\ref{tab1}, some non-linearity and saturation of the detector's response are unavoidable for the brightest stars. In the NGC\,1854 field, stars brighter than $V=16.2$ and $I=15.8$ are saturated. For NGC\,1858 saturation occurs for stars with $V \le 15.2$ and $I \le 15.0$, while in the fields around NGC\,1856 this happens for stars with $B \le 19.3$ and $I \le 15.7$. Nevertheless, the intrinsic brightness of these stars was fully recovered by summing over pixels into which bleeding occurred as a result of the over-saturation (see Gilliland 2004 and Anderson et al. 2008 for details). Photometric uncertainties on the magnitudes and colours of non-saturated stars are very small (see Table\,\ref{tab2}). The magnitudes of stars within about 2\,mag of the saturation limit are recovered with an accuracy of typically $0.02-0.03$\,mag. In our analysis we considered only stars with small root-mean-square scatter in position measurements and that are well fitted by the ePSF routine (Anderson et al. 2008). This sample of stars with ``high quality'' photometry was selected as in Milone et al. 
(2009, see their Figure\,1) on the basis of the various diagnostics of the astrometric and photometric quality provided by the computer programmes by Anderson et al. (2008). \begin{deluxetable}{llccr} \tablecolumns{5} \tabletypesize{\footnotesize} \tablecaption{Observations used in the paper. \label{tab1}} \tablewidth{9cm} \tablehead{\colhead{Cluster} & \colhead{Filter} & \colhead{Exposure time} & \colhead{Date} & \colhead{Proposal} } \startdata NGC\,1854 & F555W & 50 & 2003 Oct 07 & 9891 \\ & F814W & 40 & 2003 Oct 07 & 9891 \\ NGC\,1858 & F555W & 20 & 2003 Oct 08 & 9891 \\ & F814W & 20 & 2003 Oct 08 & 9891 \\ NGC\,1856 F1 & F475W & $2\times665$ & 2014 Feb 09 & 13379 \\ & F814W & $42+559$ & 2014 Feb 09 & 13379 \\ NGC\,1856 F2 & F475W & 2$\times$665 & 2014 Mar 24 & 13379 \\ & F814W & $42+559$ & 2014 Mar 24 & 13379 \\ NGC\,1856 F3 & F475W & $2\times665$ & 2014 May 18 & 13379 \\ & F814W & $42+559$ & 2014 May 18 & 13379 \\ NGC\,1856 F4 & F475W & $2\times665$ & 2014 Jun 06 & 13379 \\ & F814W & $42+559$ & 2014 Jun 06 & 13379 \enddata \tablecomments{Exposure times are in seconds.} \end{deluxetable} \begin{deluxetable}{lcccccc} \tablecolumns{7} \tabletypesize{\footnotesize} \tablecaption{Photometric uncertainties. 
\label{tab2}} \tablehead{\colhead{} & \multicolumn{2}{c}{NGC\,1854} & \multicolumn{2}{c}{NGC\,1856} & \multicolumn{2}{c}{NGC\,1858} \\ \colhead{Magnitude} & \colhead{\hspace{0.1cm}$\sigma_{V}$} & \colhead{$\sigma_{V-I}$} & \colhead{\hspace{0.8cm}$\sigma_{B}$} & \colhead{$\sigma_{B-I}$} & \colhead{\hspace{0.8cm}$\sigma_{V}$} & \colhead{$\sigma_{V-I}$}} \startdata 15.00 & 0.016 & 0.022 & --- & --- & 0.014 & 0.019 \\ 15.50 & 0.015 & 0.020 & --- & --- & 0.013 & 0.017 \\ 16.00 & 0.014 & 0.019 & --- & --- & 0.013 & 0.018 \\ 16.50 & 0.013 & 0.018 & --- & --- & 0.012 & 0.019 \\ 17.00 & 0.013 & 0.017 & --- & --- & 0.015 & 0.021 \\ 17.50 & 0.012 & 0.019 & --- & --- & 0.015 & 0.022 \\ 18.00 & 0.015 & 0.021 & --- & --- & 0.017 & 0.025 \\ 18.50 & 0.015 & 0.022 & --- & --- & 0.022 & 0.032 \\ 19.00 & 0.017 & 0.027 & --- & --- & 0.025 & 0.036 \\ 19.50 & 0.022 & 0.033 & 0.013 & 0.018 & 0.030 & 0.044 \\ 20.00 & 0.025 & 0.038 & 0.012 & 0.017 & 0.037 & 0.053 \\ 20.50 & 0.030 & 0.046 & 0.012 & 0.019 & 0.043 & 0.062 \\ 21.00 & 0.037 & 0.054 & 0.014 & 0.021 & 0.056 & 0.078 \\ 21.50 & 0.044 & 0.062 & 0.015 & 0.022 & 0.073 & 0.097 \\ 22.00 & 0.057 & 0.076 & 0.016 & 0.023 & 0.086 & 0.115 \\ 22.50 & 0.073 & 0.097 & 0.020 & 0.028 & 0.097 & 0.130 \\ 23.00 & --- & --- & 0.024 & 0.033 & --- & --- \\ 23.50 & --- & --- & 0.028 & 0.038 & --- & --- \\ 24.00 & --- & --- & 0.035 & 0.046 & --- & --- \\ 24.50 & --- & --- & 0.041 & 0.055 & --- & --- \\ 25.00 & --- & --- & 0.052 & 0.067 & --- & --- \enddata \tablecomments{Values for NGC\,1856 are representative of all fields F1--F4.} \end{deluxetable} \begin{figure*} \centering \resizebox{\hsize}{!}{\includegraphics[angle=180,origin=b,trim=0.5cm 6cm 1.5cm 2cm,clip]{figure2.pdf}} \caption{CMDs of high-quality stars in and around NGC\,1858. In Panel a), small grey dots are used for stars within a radius of $12.5$\,pc of the nominal centre (see Figure\,\ref{fig1}), while thick black dots mark objects within $6.25$\,pc of it. 
The solid and dashed lines show isochrones from the models of Chen et al. (2015) for ages of 5 and 18\,Myr, respectively, for the appropriate LMC distance, metallicity $Z=0.007$, and a combined colour excess (foreground + intrinsic) of $E(V-I)=0.19$. We also indicate the approximate mass of the heaviest stars consistent with the youngest isochrone. The CMD in Panel b) is obtained from all stars more distant than $12.5$\,pc from the centre of NGC\,1858 and the theoretical isochrones are from the models of Tang et al. (2014) for metallicity $Z=0.004$ and only Galactic foreground extinction of $E(V-I)=0.11$. Ages of $1.5$, 2, 3, and 4\,Gyr are shown, respectively, in green, red, orange, and blue. In all panels the thin horizontal lines indicate the saturation level discussed in Section\,2.} \label{fig2} \end{figure*} \begin{figure*} \centering \resizebox{\hsize}{!}{\includegraphics[angle=180,origin=b,trim=0.5cm 6.8cm 1.5cm 2cm,clip]{figure3.pdf}} \caption{Same as Figure\,\ref{fig2} but for { stars in and around} NGC\,1854. The only difference is in Panel a), where the combined colour excess (foreground + intrinsic) applied to the isochrones is $E(V-I)=0.21$ and the ages of the isochrones are 5 (solid line) and 60\,Myr (dashed line).} \label{fig3} \end{figure*} \begin{figure*} \centering \resizebox{\hsize}{!}{\includegraphics{figure4.pdf}} \caption{CMDs of the four regions around NGC\,1856 indicated in Figure\,\ref{fig1}. Small grey dots mark all stars, while thick black dots indicate objects within the circles shown in Figure\,\ref{fig1} (radius $12.5$\,pc, for ease of comparison with Figures\,\ref{fig2} and \ref{fig3}). Isochrones from Tang et al. 
(2014) for metallicity $Z=0.004$ and ages of $1.5$, 2, 3, and 4\,Gyr (shown, respectively, by green, red, orange, and blue thin solid lines) are the same in all panels and only include foreground Galactic extinction $A_V=0.22$, corresponding to $E(B-I)=0.18$ for the Galactic extinction law (same as in Figures\,\ref{fig2}b and \ref{fig3}b). Isochrones for younger ages are from the models of Chen et al. (2015) for metallicity $Z=0.007$ and ages and colour excess as follows. In Panel a), from Region 1, the thick solid (cyan) {lines correspond to ages of 10 and 50\,Myr} and a combined colour excess (foreground + intrinsic) $E(B-I)=0.33$, while the dashed lines represent ages of 100, 270, and 600\,Myr with the same combined colour excess. In Panel b), corresponding to Region 2, the ages are 10 and 50\,Myr for the thick solid lines (cyan), while the dashed lines represent ages of 120, 270, and 600\,Myr, with a combined colour excess $E(B-I)=0.28$. In Panel c), from Region 3, the thick solid (cyan) lines are for ages of 12 and 60\,Myr, while the thick dashed lines are for ages of 100, 300, and 600\,Myr, all of them with combined colour excess $E(B-I)=0.32$. In Panel d), from Region 4, the thick solid (cyan) lines are for ages of 12 and 50\,Myr, and the thick dashed lines for ages of 200, 300, and 800\,Myr, again with combined colour excess $E(B-I)=0.28$.} \label{fig4} \end{figure*} \vspace*{0.5cm} \section{Colour--magnitude diagrams: multiple populations} We show in Figures\,\ref{fig2} and \ref{fig3} the CMDs obtained from the stars with high-quality photometry in the fields { including and surrounding} NGC\,1858 and NGC\,1854, respectively. The CMDs reveal a complex population, made up of stars in different evolutionary phases. In Panel a) of both figures small grey dots sample a region of $12.5$\,pc radius around the nominal centres of the two clusters (dashed circles in Figure\,\ref{fig1}). Objects within the inner $6.5$\,pc are indicated with thick black dots. 
Both regions reveal a young population of stars in the upper main sequence (MS) and a sparsely populated red giant branch (RGB). Also shown are theoretical isochrones from the models of Chen et al. (2015) for metallicity $Z=0.007$, since this is a typical value for the LMC (e.g., Hill et al. 1995; Geha et al. 1998). All isochrones already include a distance modulus $(m-M)_0=18.55$ (Panagia 1998) for the LMC and the reddening contribution of the Milky Way along the line of sight to these clusters, namely $A_V=0.22$ (e.g., Fitzpatrick \& Savage 1984), which corresponds to $E(B-V)=0.07$ and $E(V-I)=0.11$ for the extinction law of the diffuse Galactic ISM (e.g., Cardelli et al. 1989; Fitzpatrick \& Massa 1990). The best fit to the upper MS of NGC\,1858 is obtained for an age of 5\,Myr (blue solid line in Figure\,\ref{fig2}a) and requires an extra $E(V-I)=0.08$ component of colour excess in addition to the foreground $E(V-I)=0.11$ mentioned above,\footnote{As we will show in Section 5, in all these regions the ratio $R_V$ between total and selective extinction is also considerably higher than the characteristic $R_V=3.1$ value typical of the diffuse Galactic ISM.} thus in total $E(V-I)=0.19$. With the same total reddening, the blue supergiant HD\,261196 at $V=12.07$, $V-I=0.20$ is compatible with an age of $\sim 13$\,Myr (red dashed line). We note that for this object we have used the magnitudes published by Bonanos et al. (2009) because in our ACS images the star is saturated. Concerning NGC\,1854, the best fit to its upper MS is also obtained for an age of 5\,Myr (blue solid line in Figure\,\ref{fig3}a) and a slightly larger value of the colour excess, namely $E(V-I)=0.21$. Adopting the same extinction value also for the supergiants located near $V=14$, $V-I=0.3$ suggests an age around 60\,Myr for these objects (red dashed line). 
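The shifts applied to the isochrones can be summarised numerically; a minimal sketch, assuming for illustration only the Galactic-law ratio $A_V/E(V-I)=2$ implied by the foreground values quoted in the text (Section 5 shows that the intrinsic LMC component deviates from this law, so this helper is not the paper's actual treatment):

```python
# Shift an isochrone point from absolute to observed magnitudes.
DM0 = 18.55          # adopted LMC distance modulus (Panagia 1998)

def observed_V_and_VI(M_V, VI_0, E_VI):
    """Apply distance modulus and reddening to (M_V, (V-I)_0).

    Assumes A_V = 2 * E(V-I), the Galactic-law ratio implied by the
    quoted foreground values A_V = 0.22 and E(V-I) = 0.11.
    """
    A_V = 2.0 * E_VI
    return M_V + DM0 + A_V, VI_0 + E_VI

# Foreground-only case used for the field isochrones: E(V-I) = 0.11
V_fg, VI_fg = observed_V_and_VI(M_V=-2.0, VI_0=-0.1, E_VI=0.11)
# Total (foreground + intrinsic) reddening adopted for NGC 1858: E(V-I) = 0.19
V_58, VI_58 = observed_V_and_VI(M_V=-2.0, VI_0=-0.1, E_VI=0.19)
```

For the same isochrone point, the extra intrinsic component both dims and reddens the star, which is why the cluster isochrones sit below and to the right of the foreground-only ones.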
The CMDs of both clusters (Figures\,\ref{fig2}a and \ref{fig3}a) also reveal a sparsely populated RGB, which however is consistent with contamination by { LMC} field stars { along the line of sight falling} within the selected radius. To explore this, we counted the number of stars inside the dashed areas of the CMDs in Figures\,\ref{fig2}a and \ref{fig3}a and compared them with the number of objects in identical regions of the CMDs of the comparison fields. The latter are shown in Figures\,\ref{fig2}b and \ref{fig3}b, and include all stars farther than $12.5$\,pc from the centres of NGC\,1858 and NGC\,1854, respectively. Once scaled by the relative areas spanned by each cluster and its comparison field, the numbers of stars within the dashed trapezoids in the CMDs of the cluster and of the field are indistinguishable, within statistical uncertainties. Furthermore, objects in the RGB phase would not be compatible with the young ages of both clusters, in the range 5 -- 60\,Myr derived above, requiring instead ages in excess of $\sim 1$\,Gyr. This is shown graphically in Figures\,\ref{fig2}b and \ref{fig3}b, where { over} the CMDs of the { comparison} fields { surrounding} NGC\,1858 and NGC\,1854 { we show} theoretical isochrones for ages of $1.5$, 2, 3, and 4\,Gyr, respectively in green, red, orange, and blue. The isochrones are taken from the models of Tang et al. (2014) for metallicity $Z=0.004$ and already include Galactic foreground extinction of $E(V-I)=0.11$. { We do not include any additional colour excess with these isochrones since old LMC field stars can be located anywhere along the line of sight, not only behind but also in front of the young clusters, whose exact position is not known. We also note that, in principle, red clump stars in this field could have ages up to 10 Gyr. 
However, as Girardi \& Salaris (2000) pointed out, the age distribution of red clump stars in galaxies with constant star formation is strongly skewed towards younger ages, due to the longer lifetimes of more massive RC stars and to the decreasing rate at which stars leave the MS at older ages. Moreover, as pointed out in De Marchi et al. (2014), RC stars with ages between 3 and 9 Gyr change their intrinsic positions in the CMDs very little. This justifies limiting our interval of ages up to $\sim 4$ Gyr. } Besides a broad MS, the prominent RGB is the characteristic feature of these CMDs, together with a remarkably elongated RC, extending by over one magnitude in $V$ and suggesting a considerable amount of differential reddening in these fields. The extended RC is clearly not caused by age effects, since all isochrones agree with the overdensity observed in the CMD at $V-I \simeq 1.0$, $V\simeq19.2$ (the nominal RC location), but not with the extended tail. { More details on the effects of a range of ages on the shape and extent of the RC in the CMD are provided by De Marchi et al. (2014). In particular, for the metallicity $Z=0.004$ that is relevant to this work, their Figure\,4 shows that the combination of stellar populations with ages ranging from $1.4$ to 3\,Gyr only causes a broadening of less than $0.05$ and $0.02$\,mag (1\,$\sigma$) respectively on the $V$ magnitude and $B-V$ colour of the resulting RC ($\sigma=0.02$\,mag is also the value found in the $V-I$ colour). Considering that the nominal RC position corresponding to each individual age already has an intrinsic spread of $\sim 0.1$\,mag in $V$ and $\sim 0.05$\,mag in $B-V$, the broadening introduced by the age spread is marginal.} The CMDs of the third region studied in this work, around NGC\,1856, are shown in Figure\,\ref{fig4}. 
As mentioned in Section\,2, we selected observations of four fields { surrounding (but not containing)} the cluster, at a typical distance of $5^\prime$ or about 75\,pc from the cluster centre (see Figure\,\ref{fig1}). In these regions no concentrations of objects or clustering of bright stars are seen, indicating that these areas are dominated by { LMC} field stars. In Figure\,\ref{fig4} we show the CMDs of the four fields, using small grey dots to indicate all objects and thick black dots for stars within the circular regions shown in Figure\,\ref{fig1}. The circular regions, with a radius of $12.5$\,pc, have been selected to simplify the comparison with the CMDs of { the fields around} NGC\,1858 and NGC\,1854, but they do not correspond to any overdensity of objects, as mentioned above. Most stars in these fields are old, as indicated by the prominent RGB, for which comparison with isochrones suggests ages older than $\sim 1$\,Gyr. The green, red, orange, and blue isochrones are the same as shown in Figures\,\ref{fig2}b and \ref{fig3}b from Tang et al. (2014) for ages of $1.5$, 2, 3 and 4\,Gyr, metallicity $Z=0.004$, and already include Galactic foreground extinction, which in these bands amounts to $E(B-I)=0.18$. Besides old stars, a much younger population is also present in these CMDs, as witnessed by the many upper MS stars. These objects are compatible with ages in the range $\sim 10 - 50$\,Myr (for details see caption to Figure\,\ref{fig4}) and masses up to $\sim 10-15$\,M$_\odot$. Furthermore, a number of objects in the range $0.6 \la B-I \la 1.6$ and $16 \la B \la 19$ are consistent with giants with ages in the range $\sim 100 - 600$\,Myr. Thus, in spite of the lack of clear clustering or overdensities, star formation has been proceeding in these regions over the past several tens of Myr and several hundreds of Myr. This finding will be crucial for understanding the extinction properties in these fields (see Section\,5). 
As in the case of { the fields around} NGC\,1858 and NGC\,1854, here too the RC shows an extended shape, suggesting the presence of patchy extinction. The extent of the RC elongation is not the same in all fields and increases from Panel a) to Panel d). In the following section we will study the extinction properties in all these fields through an analysis of the shape of the extended RC. \begin{figure*} \centering \resizebox{\hsize}{!}{\includegraphics{figure5a.pdf} \includegraphics{figure5c.pdf} \includegraphics{figure5e.pdf}} \resizebox{\hsize}{!}{\includegraphics{figure5b.pdf} \includegraphics{figure5d.pdf} \includegraphics{figure5f.pdf}} \caption{Unsharp masking applied to all CMDs. { Panels are as follows: a) NGC\,1858; b) NGC\,1854; c) NGC\,1856 F1; d) NGC\,1856 F2; e) NGC\,1856 F3; f) NGC\,1856 F4.} The slope of the reddening vector is obtained through a linear fit to the elongated RC. The extent of the elongated RC is indicated by the tick marks. { The short solid arrow starting at $V\simeq 17.6$ in panel a) is parallel to the reddening vector for this field and provides a good fit to the younger extended RC.} } \label{fig5} \end{figure*} \vspace*{1cm} \section{Red clump stars to probe extinction} The stability associated with the phase of central He-burning characteristic of RC stars (e.g. Cannon 1970) makes these objects excellent tracers of reddening, when the distance is known. Girardi \& Salaris (2001) and Salaris \& Girardi (2002) presented a detailed study of the properties of the mean RC as a function of age, metallicity, and star-formation history. As shown in Figures\,\ref{fig2}, \ref{fig3}, and \ref{fig4}, the expected location of the RC in the CMD is only marginally affected by age differences as large as $2.5$\,Gyr. To be sure, when objects with different ages and metallicities are present in the stellar population, the RC does take on a slightly elongated shape, but De Marchi et al. 
(2014) showed that for LMC stars these effects account for a dispersion of at most $0.2$\,mag in the $V$ and $I$ bands when both the age and metallicity vary by a factor of two. A considerable dispersion in distance along the line of sight could also, of course, cause the RC to appear elongated in the CMD, { albeit only vertically, along the magnitude axis, since distance does not have any effect on the colour of stars. Moreover,} for stars in the LMC this effect is { negligible}, since that galaxy is seen at a high inclination ($\sim 35^\circ$) and its disc has a scale height of typically less than $0.5$\,kpc (van der Marel \& Cioni 2001), which is not significant when compared with the distance to the LMC itself ($51.4 \pm 1.2$\,kpc; Panagia et al. 1991, and updates in Panagia 1998, 1999). { Indeed, an exponential distribution with a scale height of $0.5$\,kpc at the distance of the LMC results in a median deviation along the line of sight of less than $0.02$\,mag even including the thickening caused by the $35^\circ$ inclination. For 83\,\% of the stars the deviation is less than $0.05$\,mag. The effect of distance along the line of sight is, therefore, negligible. Even when combined with the small broadening of the RC caused by age differences, these effects remain barely detectable and cannot account for the substantial RC elongation in colour and magnitude that we observe in these regions.} In this work we are interested not only in the elongation of the RC, which is a measure of the total extinction in the field, but also in the slope of the extended RC in the CMD, since this gives a fully empirical measure of the direction of the reddening vector (Nataf et al. 2013; De Marchi et al. 2014). Together, the two quantities provide information on the extinction properties in the field. A practical method to measure both the length and the slope of the RC feature in the CMD is to apply the unsharp-masking technique. De Marchi et al. 
(2016) provide a detailed description of the method, which we briefly summarise here. The purpose of unsharp masking is to make an image of the CMD sharper by subtracting from it a mask consisting of a blurred version of the CMD image itself. First, to obtain the image of the CMD, each object in it is mapped to a two-dimensional array, with a sampling of $0.01$ mag in colour and magnitude. The array is then convolved with a narrow Gaussian beam, which assigns to each point in the CMD the resolution pertaining to the photometric uncertainties. We used a beam size $\sigma=0.08$\,mag, corresponding to about three times the typical photometric uncertainty. Similarly, to obtain the blurred mask, the same array is convolved with a wider Gaussian beam, in our case $\sigma=0.3$\,mag. The mask is then subtracted from the CMD image. Analytically, these operations are equivalent to convolving the CMD with a kernel represented by the difference between two Gaussian beams with different $\sigma$ (see De Marchi et al. 2016). { We experimented with different values for the wider Gaussian beam (namely $0.2$ and $0.4$\,mag) and the differences are imperceptible.} We show in Figure\,\ref{fig5} the CMDs of the various regions after unsharp masking. { Panels a) and b) refer to the regions surrounding NGC\,1858 and NGC\,1854, respectively, while Panels c) through to f) refer to fields F1 to F4 around NGC\,1856.} The high-frequency substructures are easier to identify than in the original CMDs, in particular the MS, the MS turn-off, the RG branch, and of course the elongated RC. The direction of the reddening vector (thick solid arrows in Figure\,\ref{fig5}) is measured using the ridge line of the extended RC. The uncertainty on the slope is obtained using weights proportional to the local density of points in the CMD. In Figure\,\ref{fig5}, the dashed lines show the reddening vector corresponding to the extinction law of the Galactic diffuse ISM, in the bands specific to each panel. 
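The unsharp-masking procedure described above amounts to convolving the Hess diagram of the CMD with a difference-of-Gaussians kernel. The sketch below uses a hypothetical star catalogue (the `colour` and `mag` arrays and their ranges are placeholders, not data from this work) with the $0.01$\,mag sampling and the $\sigma=0.08$ and $0.3$\,mag beams quoted in the text:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical CMD: colours and magnitudes drawn at random for illustration
rng = np.random.default_rng(1)
colour = rng.normal(1.0, 0.1, 5000)
mag = rng.normal(19.0, 0.3, 5000)

# Hess diagram: 2D histogram of the CMD at 0.01 mag sampling
cbins = np.arange(0.0, 2.0, 0.01)
mbins = np.arange(16.0, 22.0, 0.01)
hess, _, _ = np.histogram2d(colour, mag, bins=[cbins, mbins])

# Unsharp masking = difference of Gaussians: image convolved with a narrow
# beam (sigma = 0.08 mag) minus a mask blurred with a wide beam (0.3 mag);
# at 0.01 mag per pixel these are 8 and 30 pixels, respectively.
sharp = gaussian_filter(hess, 0.08 / 0.01)
mask = gaussian_filter(hess, 0.30 / 0.01)
unsharp = sharp - mask
```

Because the difference-of-Gaussians kernel integrates to zero, the unsharp-masked image has essentially zero net flux: overdensities such as the RC stand out against suppressed large-scale structure.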
It is evident that in these regions the slope is considerably steeper than in the Galactic ISM, about $1.5$ times as steep. The slope of the reddening vector corresponds in turn to the ratio between total ($A$) and selective ($E$) extinction in these specific bands. The values measured in the individual regions are given in Table\,\ref{tab3}, together with their uncertainties. We also show the values measured in the same bands in and around 30 Dor and in NGC\,1938, together with their uncertainties, as well as the values corresponding to the average extinction law in the diffuse Galactic ISM. The uncertainty on the latter does not reflect the actual measurement errors, but rather the wide dispersion around the mean, exceeding 20\% (see e.g. Herbst 1975; Massa et al. 1983; Fitzpatrick \& Massa 2005; Nataf et al. 2016). { We note in passing that the feature seen in Figure\,\ref{fig5}a at $1.0<V-I< 1.4$ and $18.3 <V< 18.9$ is also an elongated RC. Comparison with the Chen et al. (2015) isochrones mentioned above suggests that this RC is associated with a population of intermediate age, $\sim 330$\,Myr, and as such considerably younger than the population responsible for the main RC feature at fainter magnitudes. According to the isochrones, the unextinguished location of the RC for this population should be at $V=17.8$ and $V-I=1.0$, where an enhancement is indeed present in the CMD, both before and after unsharp masking. The slope of the younger extended RC is consistent with the reddening vector measured elsewhere in this field (see short solid arrow). Its shape and appearance suggest that most of these RC stars are behind the NGC\,1858 cluster, because of the gap that separates the nominal RC position at $V=17.8$ from the blue end of the elongation at $V=18.2$. 
The implied minimum reddening value of $A_V=0.4$ is larger than the minimum intrinsic reddening revealed by the massive stars in the upper MS of NGC\,1858, which is of the order of $A_V=0.25$ (see Figure\,\ref{fig2}a). } \begin{deluxetable}{lcc} \tablecolumns{3} \tabletypesize{\footnotesize} \tablecaption{Ratio of total to selective extinction in our regions. For comparison, we also indicate the values in the same bands in and around 30 Dor, as well as in the diffuse Galactic ISM. \label{tab3}} \tablehead{\colhead{Region} & \colhead{$A_V/E(V-I)$} &\colhead{$A_B/E(B-I)$} \\[0.05cm] \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)}} \startdata NGC 1858 & $3.21 \pm 0.14$ & \\ NGC 1854 & $3.26 \pm 0.35$ & \\ NGC 1856 field 1 & & $2.70 \pm 0.53$ \\ NGC 1856 field 2 & & $2.31 \pm 0.26$ \\ NGC 1856 field 3 & & $2.29 \pm 0.28$ \\ NGC 1856 field 4 & & $2.33 \pm 0.12$ \\ NGC 1938 & $2.99 \pm 0.12$ & \\ 30 Dor & $2.97 \pm 0.08$ & $2.21 \pm 0.14$ \\ 30 Dor West (NGC 2060) & $3.17 \pm 0.29$ & $2.41 \pm 0.31$ \\ & & \\ Diffuse Galactic ISM& $2.17\pm0.44$ & $1.70\pm0.34$ \enddata \end{deluxetable} \section{Discussion} Table\,\ref{tab3} reveals that the reddening vectors for the { lines of sight towards the three clusters and their surroundings} are systematically steeper than { those towards} NGC 1938 or 30 Dor (where $R_V=4.5\pm0.2$; De Marchi \& Panagia 2014). The slopes are more similar to those measured { towards} NGC\,2060 (30 Dor West), where the extinction law derived by De Marchi et al. (2014) corresponds to $R_V=5.6 \pm 0.3$. This suggests that { the lines of sight towards} NGC\,1854, NGC\,1856, and NGC\,1858 share similar extinction properties to those { towards} 30 Dor West, with a value of $R_V\simeq5.5$. 
Similarly to 30 Dor and 30 Dor West, the likely reason for the elevated value of $R_V$ in { the direction of} NGC\,1854, NGC\,1856, and NGC\,1858 is the presence of a grey component, caused by an extra population of big grains, superposed on the more standard LMC extinction curve (Gordon et al. 2003). In 30\,Dor, De Marchi \& Panagia (2019) showed that the effect of the additional grey component is present at all wavelengths shorter than $\sim 1\,\mu$m and through to the far ultraviolet range. That work reveals that the big grains responsible for the grey component { towards} 30\,Dor are of the same nature as those present in the diffuse ISM of the MW, but their fraction is about twice as high. The higher value of $R_V$ { towards} the three clusters studied here suggests a possibly even larger fraction of big grains. We highlight that the extinction properties in these fields, as probed by the slope of the reddening vector, do not correlate with the amount of extinction, which is indicated by the length of the extended RC. All fields have a slope consistent with $R_V\simeq 5.5$ but the range of extinction values across these fields varies considerably. This can be seen in Figure\,\ref{fig5}, where the extent of the RC in each CMD is marked by the short segments. The marks correspond to the farthest points along the RC where the density of stars exceeds the 95th percentile ($2\,\sigma$) and provide a measure of the spread of extinction present in the fields. The bluest end of the RC corresponds to no intrinsic extinction (and hence to $A_V\simeq 0.2$ once the Galactic foreground component is taken into account), while the most reddened end, { towards} NGC\,1858, corresponds to $A_V\simeq1.7$, or a total extinction of $A_V \simeq 1.9$ once the contribution of the Milky Way is also included. 
The range of extinction values is derived from the magnitude difference between the marks, taking into account the intrinsic size of the undispersed RC, which in these bands amounts to $\sim 0.1$\,mag (Girardi \& Salaris 2001). {We underline that if instead of adopting $\sigma=0.3$\,mag as the size of the smoothing Gaussian beam we had used $0.2$\,mag or $0.4$\,mag the resulting length of the RC would have been, respectively, $0.02$\,mag shorter or $0.01$\,mag longer than the $A_V \simeq 1.9$ value given above. These differences are negligible when compared to the intrinsic size of the undispersed RC.} { As already mentioned in Section\,4,} De Marchi et al. (2014) showed that a spread of a factor of two on both the age and metallicity of the stars only broadens the size of the RC to about $0.2$\,mag in $V$ and $I$. Because of the differential way in which they are derived, the measured extinction spreads are not affected by the Galactic foreground extinction along the line of sight. Using the same definition of the extinction spread for all CMDs in Figure\,\ref{fig5} allows us to compare the ranges of extinction in the different regions with one another. They are shown in Table\,\ref{tab4} for the $B$, $V$, and $I$ bands, respectively indicated as $\Delta A_B$, $\Delta A_V$, $\Delta A_I$. The values of $\Delta A_B$ are directly measured { towards} NGC\,1856 in Figure\,\ref{fig5}, while those of $\Delta A_V$ are measured in the same figure { along the lines of sight to} NGC\,1854 and NGC\,1858. The extinction law { towards} 30 Dor West (De Marchi et al. 2014) implies $A_B=1.2 \, A_V$ and this relationship was used here to transform the measured $\Delta A_B$ values into $\Delta A_V$, and vice versa (derived quantities are shown in Italics in Table\,\ref{tab4}). All values of $\Delta A_I$ are measured directly for all regions from CMDs similar to those of Figure\,\ref{fig5} but in which the $I$ magnitude is plotted as a function of the $B-I$ and $V-I$ colours. 
\begin{deluxetable}{lcccc} \tablecolumns{5} \tabletypesize{\footnotesize} \tablecaption{Total extinction in each field, in various bands, compared with the number of stars more massive than 8\,M${_\odot}$. \label{tab4}} \tablehead{\colhead{Region} & \colhead{$\Delta A_B$} &\colhead{$\Delta A_V$} & \colhead{$\Delta A_I$} & \colhead{$N$} \\[0.05cm] \multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)}} \startdata NGC 1856 field 1 & $0.39$ & {\em 0.31} & $0.18$ & $6$ \\ NGC 1856 field 2 & $0.48$ & {\em 0.38} & $0.23$ & $4$ \\ NGC 1856 field 3 & $0.45$ & {\em 0.36} & $0.22$ & $4$ \\ NGC 1856 field 4 & $0.88$ & {\em 0.72} & $0.47$ & $12$ \\ NGC 1854 & {\em 1.14} & $0.93$ & $0.59$ & $11$ \\ NGC 1858 & {\em 1.94} & $1.60$ & $1.05$ & $20$ \enddata \end{deluxetable} \begin{figure*} \centering \resizebox{\hsize}{!}{\includegraphics{figure6a.pdf} \includegraphics{figure6b.pdf}} \resizebox{\hsize}{!}{\includegraphics{figure6c.pdf} \includegraphics{figure6d.pdf}} \caption{Extinction spread in the various regions, as a function of the number of stars more massive than 8\,M${_\odot}$. Measurements in the $B$, $V$, and $I$ bands are indicated, respectively, in panels a), b), and c). Filled dots mark extinction values measured directly in that band, while empty dots are for measurements derived from neighbouring bands. The lines represent the best linear fit in the three bands. Panel d) is equivalent to panel c), but the range is expanded to also accommodate the data point (lower limit) corresponding to 30\,Dor. } \label{fig6} \end{figure*} Interestingly, the extinction range appears to increase from Field 1 to Field 4 { around} NGC\,1856 and the growth continues further when moving to NGC\,1854 and NGC\,1858. A study of the young stellar populations in these fields suggests that there is indeed a correlation between the total amount of extinction and the number of massive stars, as we show in the following. 
To allow for a meaningful comparison between these fields, we counted in each of them the number of massive stars with ages between 10 and 40 Myr, which serves as a proxy for recent massive-star formation in these fields. This is achieved by first building the CMDs from all stars present in each field, with no distinction concerning their spatial location. We then drew on the CMDs the isochrones extracted from the models of Chen et al. (2015) for our specific bands and metallicity $Z=0.007$ and ages of 10 and 40 Myr, to which we applied the same combination of foreground and intrinsic extinction as in Figure\,\ref{fig2} -- \ref{fig4}. We finally counted all stars with a MS mass of 8\,M${_\odot}$ or more, which were identified with the help of the evolutionary tracks. The number of massive objects selected in this way is shown in the last column of Table\,\ref{tab4}. { We note that the number of massive stars measured in this way is necessarily subject to some uncertainty caused by reddening. Some of the stars redder than the 40\,Myr isochrone might in fact be younger objects subject to larger reddening. By the same token, objects younger than 10\,Myr might appear redder than the corresponding isochrone. However, we expect the two effects to at least partly compensate, thereby reducing the overall uncertainty. } This analysis reveals that fields containing more massive stars also have typically higher extinction and extinction spread. This is shown graphically in Figure\,\ref{fig6}. The dots correspond to the values in Table\,\ref{tab4} and the green, red, and black colours are used, respectively, for the extinction in the $B$, $V$, and $I$ bands (Panels a, b, and c; panel d, also referring to the $I$ band data, is discussed further down). 
The uncertainties in the figure reflect the Poisson statistics on the number of stars and a typical uncertainty of $0.1$ mag on the measured extent of the RC, dictated by the intrinsic RC size mentioned above. In the $I$ band we could directly measure $\Delta A_I$ in all regions (black filled dots). In the $V$ and $B$ band, direct measurements are available respectively for NGC\,1858 and NGC\,1854 (filled red dots), and for { fields F1--F4 surrounding} NGC\,1856 (filled green dots). Empty dots show values obtained from a neighbouring band by interpolation through the extinction law { for} 30 Dor West (De Marchi et al. 2014), since the latter is fully consistent with the reddening vectors measured in all three clusters in the $B$, $V$, and $I$ bands (see Section\,3). All three sets of $\Delta A$ values are consistent with simple linear correlations with the number of massive stars, as shown by the solid lines. The formal coefficients are $0.093$, $0.077$, and $0.052$ for the $B$, $V$, and $I$ bands, respectively, with an uncertainty of about 13\% ($1\,\sigma$). Within the uncertainties, we cannot exclude that the best-fitting lines have a zero intercept term. While certainly possible, this does not appear to be likely, because it would imply that the effects of star formation episodes on the local ISM vanish as soon as the last SNe have exploded, contrary to what is observed (e.g., Temim et al. 2015). What is interesting, however, is not the actual values of the coefficients, which necessarily depend on the selected sample and would be different for another mass range, but rather the existence of a clear correlation between extinction and the number of massive stars in these fields. A linear correlation is also present if the mass range is extended to stars down to 6\,M${_\odot}$ and the same age range (10--40\,Myr). Further extending the study to less massive stars is not possible without introducing large uncertainties on the age, since the isochrones overlap. 
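The linear correlations can be reproduced directly from the entries of Table\,\ref{tab4}. The sketch below uses an unweighted least-squares fit (the fits in the text weight the points by their Poisson and RC-size uncertainties, so the coefficients here need only come out close to the quoted 0.093, 0.077, and 0.052 mag per star in $B$, $V$, and $I$); the extrapolation to the $\sim 200$ massive stars of central 30\,Dor anticipates the comparison made further below:

```python
import numpy as np

# Table 4: number of stars > 8 Msun and extinction spreads (mag) per field
N = np.array([6, 4, 4, 12, 11, 20])
dA = {
    "B": np.array([0.39, 0.48, 0.45, 0.88, 1.14, 1.94]),
    "V": np.array([0.31, 0.38, 0.36, 0.72, 0.93, 1.60]),
    "I": np.array([0.18, 0.23, 0.22, 0.47, 0.59, 1.05]),
}

# Unweighted linear fits Delta A = a * N + b in each band
slopes = {band: np.polyfit(N, y, 1)[0] for band, y in dA.items()}

# Extrapolating the I-band relation to the ~200 massive stars of central 30 Dor
slope_I, intercept_I = np.polyfit(N, dA["I"], 1)
A_I_30dor = slope_I * 200 + intercept_I
```

The extrapolated $I$-band value comes out around 10\,mag, consistent with the statement below that the directly measured lower limit of $A_I\simeq 4$ in 30\,Dor underestimates the extrapolation by roughly 60\%.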
Selecting stars more massive than 8\,M${_\odot}$ is relevant because these are the progenitors of core-collapse Type II supernovae (SNe II; e.g. Eldridge \& Tout 2004) and we expect objects of this type to be likely at the origin of the anomalous extinction properties that we see in these and other LMC star-forming regions, as suggested by De Marchi \& Panagia (2019) and De Marchi et al. (2020). Those works show that the fraction and amount of big grains (with typical size $\sim 0.1\,\mu$m) implied by the extinction laws measured in and around 30\,Dor and NGC\,1938 are quantitatively consistent with the output caused by the SNe II expected to have exploded in those regions in the past $\sim 50$\,Myr. The ejecta from these events appear quantitatively sufficient (De Marchi et al. 2020) to have locally altered the standard grain-size distribution typical of the diffuse ISM of the LMC (e.g., Gordon et al. 2003). Figure\,\ref{fig6} suggests that one can estimate the range of the extinction values in a region of star formation in the LMC by counting the number of massive stars present in that field. The estimate is necessarily approximate, and the relationships shown in Figure\,\ref{fig6} are likely to become non-linear and to saturate at high extinction values, because highly extinguished RC stars cannot be detected as such. An example is shown in Figure\,\ref{fig6}d, where the data from panel \ref{fig6}c (contained within the dotted rectangle) are compared with the data point measured in 30 Dor. To obtain { the latter} point, we used the photometric catalogue of De Marchi \& Panagia (2014), covering the central $\sim 3 \times 3$ arcmin$^2$ of 30 Dor, together with the median value of the extinction that they measured towards upper MS stars in that field. We estimate that the central $\sim 3 \times 3$ arcmin$^2$ of 30\,Dor contain about 200 stars more massive than 8\,M${_\odot}$ and with ages between 10 and 40 Myr. 
The most extinguished RC stars still detectable as such in that field are about 4 mag fainter than the least extinguished ones (see Figure\,6 in De Marchi \& Panagia 2014). However, objects with even higher extinction (and $B-V>2$) might well be present in the field, yet they are not detectable because of the limited depth of the observations in the $B$ band. Therefore, the corresponding data point, $A_I\simeq 4$ (marked by a diamond in Figure\,\ref{fig6}d), necessarily represents a lower limit to the maximum extinction value in 30\,Dor. That point appears to underestimate by about 60\% the amount of extinction that one would obtain by extending the best fitting line in Figure\,\ref{fig6}c to larger values of $N$. Even if there is no guarantee that the relationship that we have measured in our smaller clusters should remain linear in star-forming regions as dense as 30 Dor, the solid line in Figure\,\ref{fig6}d is likely to provide a realistic estimate of the maximum extinction to be expected in this field, with an uncertainty of less than 50\%. In regions of less intense star formation, we expect the uncertainty to be considerably smaller. \begin{figure*} \centering \resizebox{\hsize}{!}{\includegraphics{figure7a.pdf} \includegraphics{figure7b.pdf}} \caption{{ Contour plots showing the position and density distribution of RC and upper MS stars in the field containing NGC\,1858. The coordinate system is centred on the nominal cluster centre. Red contour lines are used for highly reddened RC stars ($A_V>1$), while the grey and blue shadings are used for low-reddening RC stars and upper MS objects, respectively. The lower contour level corresponds to twice the mean density of each group. 
The circles at the top of the panels show the size of the Gaussian beam used for smoothing the actual distributions.}} \label{fig7} \end{figure*} In summary, our analysis suggests that it is not the amount of extinction that determines the extinction properties in a region, at variance with what was generally concluded in the 1960s and 1970s from the study of highly extinguished regions of the Galactic Plane (see Introduction). Instead, it appears that recent massive-star formation is systematically accompanied by ``non-standard'' grey extinction, whose amount is correlated with the star formation strength. { Finally, to characterise the patchiness of the extinction in these regions, we studied the spatial distribution of highly extinguished RC stars. We show an example in Figure\,\ref{fig7} for the case of the region around NGC\,1858. We took as candidate RC stars all objects in the CMD contained within $\pm 0.4$\,mag of the reddening vector shown in Figure\,\ref{fig5}a and fainter than $V=19$. A total of 582 stars satisfy this condition: although some of them might in fact be RGB stars, we expect the majority to be RC objects. About 80\,\% of them have extinction $A_V<1.0$ and the remaining 20\,\% have values in the range $1.0 < A_V < 1.5$. We compare the positions and spatial distributions of these two groups in Figure\,\ref{fig7}a by means of lines of stellar density with constant logarithmic steps set at 2, 4, and 8 times the mean density of each group. The contour plots were obtained after smoothing the actual distribution with a Gaussian beam with $\sigma = 2\arcsec$, as indicated by the circle at the top of the figures. The grey shaded contours correspond to the low-reddening RC stars and the red contour lines to highly reddened RC objects. Figure\,\ref{fig7}a reveals that the low-reddening RC stars are more uniformly distributed than the RC objects with higher extinction. 
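The construction of such density-contour maps can be sketched as follows: bin the star positions onto a regular grid, smooth with a Gaussian beam of $\sigma=2\arcsec$, and set the contour levels at 2, 4, and 8 times the mean density. The positions below are hypothetical placeholders, not the actual NGC\,1858 catalogue:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Hypothetical star positions (arcsec offsets from the nominal cluster centre)
rng = np.random.default_rng(7)
xy = rng.normal(0.0, 20.0, size=(500, 2))

# Bin onto a 1-arcsec grid and smooth with a sigma = 2 arcsec Gaussian beam
edges = np.arange(-60.0, 61.0, 1.0)
counts, _, _ = np.histogram2d(xy[:, 0], xy[:, 1], bins=[edges, edges])
density = gaussian_filter(counts, sigma=2.0)

# Contour levels at 2, 4, and 8 times the mean density of the group
levels = np.array([2.0, 4.0, 8.0]) * density.mean()
```

Since the smoothing kernel is normalized, the total number of stars in the map is preserved and the contour levels scale consistently between groups of different size.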
In Figure\,\ref{fig7}b we compare, using the same relative contour levels, the distribution of the highly reddened RC stars (red contour lines; same as in panel a) with that of the massive MS stars ($>8$\,M${_\odot}$; blue shaded contours). Although qualitative, this comparison shows that the distribution of the highly reddened RC stars is rather similar to that of the massive stars. This suggests that a considerable fraction of the absorbing material along the lines of sight to these RC stars is indeed associated with the region of recent star formation.} \section{Conclusions} We conclude with some considerations on the nature of the relationships highlighted in Figure\,\ref{fig6}, which is not completely unexpected. The typical dust extinction of star-forming galaxies is known to increase with their total stellar mass $M_*$, not only in the local universe but also out to redshifts of at least $z \simeq 2$ (e.g., Brinchmann et al. 2004; Stasinska et al. 2004; Pannella et al. 2009). In the range $10^8 \la M_*/$M${_\odot}$$ \la 10^{10}$, the extinction has been shown to increase monotonically and approximately linearly with $\log M_*$ (Garn \& Best 2010; Zahid et al. 2013). The star formation rate and metallicity of these galaxies also correlate with the extinction, but the predominant factor influencing it appears to be the stellar mass (Garn \& Best 2010). To be sure, the total mass of the star-forming regions probed by our study is some $\sim 4$ orders of magnitude smaller than that of the smallest of the high-redshift star-forming galaxies mentioned above. As an example, adopting for NGC 1854 a standard initial mass function (e.g. Kroupa 2001) with a power-law index $\gamma=-2.3$ for stars more massive than $0.5$\,M${_\odot}$ and $\gamma=-1.3$ in the $0.08-0.5$\,M${_\odot}$ range, we derived a total initial mass of about $2.5 \times 10^4$\,M${_\odot}$ for this cluster. 
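The IMF machinery behind such a mass estimate can be sketched by evaluating the broken power-law integrals with the slopes quoted above. The upper mass cutoff of $100$\,M$_\odot$ is an assumption, and the total mass quoted in the text depends on how the integrals are normalized to the observed star counts over the appropriate mass and age range, which is not reproduced here; the sketch only evaluates the IMF integrals themselves:

```python
from scipy.integrate import quad

# Kroupa (2001) IMF with the slopes quoted in the text: xi(m) ~ m^-1.3 for
# 0.08-0.5 Msun and m^-2.3 above; 100 Msun upper cutoff is an assumption.
def xi(m):
    return 2.0 * m**-1.3 if m < 0.5 else m**-2.3  # factor 2: continuity at 0.5

# Number and mass integrals, split at the break to keep quad accurate
n_lo = quad(xi, 0.08, 0.5)[0]
n_hi = quad(xi, 0.5, 100.0)[0]
m_lo = quad(lambda m: m * xi(m), 0.08, 0.5)[0]
m_hi = quad(lambda m: m * xi(m), 0.5, 100.0)[0]

mean_mass = (m_lo + m_hi) / (n_lo + n_hi)               # mean stellar mass (~0.6 Msun)
frac_massive = quad(xi, 8.0, 100.0)[0] / (n_lo + n_hi)  # fraction of stars > 8 Msun
# Given a normalization N_tot (total number of stars formed), the total initial
# mass is simply N_tot * mean_mass.
```

The mean stellar mass of this IMF is close to $0.6$\,M$_\odot$, and only a fraction of a percent of the stars formed are more massive than $8$\,M$_\odot$, which is why even a handful of observed massive stars implies a cluster mass thousands of times larger.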
Nonetheless, the stars responsible for the bulk of the dust, and particularly of the big grains that we detect in our regions, are massive stars of the same type as those whose emission line fluxes are used to infer both the total extinction and $M_*$ values of the star-forming galaxies. These are the objects more massive than 8\,M${_\odot}$ that end their lives as SNe II (e.g. Dwek 1998). Although so far limited in size, our sample is potentially better suited to studying the actual physical mechanisms at the origin of the observed correlations. In our sample, the total mass, i.e. the number of stars, is measured directly by counting individual objects, so we do not have to rely on necessarily uncertain integrated quantities often derived through approximate model-based relations (e.g. Kauffmann et al. 2003). The extinction properties (and extinction law), including the value of $R_V$, are also measured directly by us in our fields. Other studies instead assume an average value of $R_V$, which is known to be affected by large uncertainties (Calzetti et al. 2000) and is not applicable outside the regime of centrally-concentrated starburst galaxies (Calzetti et al. 2021). Furthermore, the fact that the empirical relationships that we discovered from a limited number of small star-forming regions are able to reproduce, to within a factor of two, also the case of 30 Dor suggests that these relationships are meaningful also for the intense star-forming knots of high-redshift galaxies that 30 Dor mimics (e.g., Doran et al. 2013; Crowther et al. 2017). Extending our study of the extinction properties to a number of other star-forming regions in the LMC (De Marchi et al., in prep) will allow us to better constrain the nature of the physical processes at play. \vspace*{0.5cm} We are very grateful to an anonymous referee whose expert advice and constructive criticism have helped us to improve the presentation of this work. 
APM acknowledges support from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (Grant Agreement ERC-StG 2016, No 716082 'GALFOR', http://progetti.dfa.unipd.it/GALFOR), by the MIUR through the FARE project R164RM93XW 'SEMPLICE' and the PRIN programme 2017Z2HSMF.
\section{Introduction} Topologically non-trivial defects, textures, and knots have inspired physicists since the days of Kelvin~\cite{Thomson1869}. They are remarkably ubiquitous throughout physics, spanning a vast range of energy scales from cosmology and elementary particle physics, to superconductors, superfluidity, and liquid crystals. The universal nature of topological stability in such diverse areas provides unprecedented opportunities to use experimentally accessible laboratory systems as emulators even of cosmology and high-energy physics where the experimental evidence is absent~\cite{volovik}. In recent years, the experimental study of topological defects and textures in structured optical fields has emerged as one of the most promising areas to engineer and detect topologically non-trivial characteristics~\cite{strlight_review}, including singularities of the phase or polarization that may form knotted or linked geometries~\cite{Leach2004,Dennis2010,Kedia2013,Larocque2018} or M\"obius strips~\cite{Bauer964}. Another line of research on non-trivial topologies of light has focused on photonic band structures~\cite{Ozawa_review}, analogous to electronic band structures in crystals. Topological Skyrmionic textures~\cite{Skyrme1961} are non-singular, localized spin (or pseudo-spin) configurations that do not perturb the spin profile sufficiently far from the center of the structure. The non-trivial topology of the object arises from how the spatial profile of the spin texture wraps over the spin configuration, or order-parameter, space. Particle-like Skyrmions, defined by mappings of the $\Pi_3$ homotopy group, display non-trivial three-dimensional (3D) spatial profiles and are well-known in nuclear and elementary particle physics as theoretical paradigms~\cite{manton-sutcliffe, Battye97,Battye09,donoghue2014}. 
Similar structures have been proposed in cosmological models~\cite{radu_physrep_2008}, and they have also been actively investigated in superfluids~\cite{Volovik1977,Shankar1977, Ruostekoski2001, al-khawaja_nature_2001}, with a particular focus on their energetic stability~\cite{battye_prl_2002,savage_prl_2003, Ruostekoski2004, Kawakami12,Tiurev_2018}. Hopfions, classified by the integer-valued Hopf charge, are closely related to the 3D Skyrmions and have attracted particular attention owing to their tendency to form stable torus knots~\cite{faddeev_nature_1997, Battye98, Sutcliffe17, HIETARINTA99, Babaev2002}. Although 3D Skyrmions and Hopfions have recently been experimentally realized as stationary superfluid configurations~\cite{Hall2016, Lee2018}, in liquid crystals~\cite{Ackerman2017}, and in phase and polarization-structured light~\cite{Sugic2021}, the wider attention has focused on much simpler, planar analogs, 2D baby-Skyrmions. 2D baby-Skyrmions, like their 1D cousins~\cite{borgh_prl_2016a}, exhibit topologically non-trivial, non-singular configurations in reduced dimensions; their best-known early examples in superfluids are the Anderson-Toulouse-Chechetkin~\cite{anderson_prl_1977, chechetkin_jetp_1976} and Mermin-Ho~\cite{mermin_prl_1976} non-singular vortices, owing to their ability to carry angular momentum. A large body of more recent research has included magnetic systems~\cite{muhlbauer_science_2009,Nagaosa2013}, with potential data storage applications, rotating atomic superfluids~\cite{leanhardt_prl_2003, Leslie2009, choi_prl_2012,weiss_ncomm_2019,ho_prl_1998,mizushima_prl_2002,Lovegrove2014}, exciton-polariton structures~\cite{Cilibrizzi2016, Donati2016, Krol2021}, and optical fields~\cite{Tsesses2018, Du2019, Gao2020, Davis2020,Gutierrez-Cuevas2021}. 
For the particular case of baby-Skyrmions in optical fields, field profiles are usually analyzed using the Stokes vector, i.e., a point on the Poincar\'e sphere, corresponding to the coherent, transverse polarization state at each point in the field. However, going beyond these more easily observable parameters, the full topology of the field configurations, crucially, also depends on the spatial variation of the total phase of vibration on the polarization ellipse~\cite{Bliokh2019}, which is the sum of the phases of the electric field components, and is not represented by the Stokes vector. This complete topology is then described by the optical hypersphere $S^3$ (unit sphere in 4D)~\cite{Sugic2021}, allowing, e.g., for full 3D particle-like topologies of light. Here we utilize simple configurations of structured light fields to show how these can lead to optical excitations in atomic media of comparable or considerably more complex topologies. Baby-Skyrmions, represented by full Poincar\'e beams~\cite{Beckley2010} in light fields, can straightforwardly be transferred to optical excitations, and therefore frozen and stored, in strongly confined oblate atomic ensembles. We consider a $J=0\rightarrow J'=1$ transition that can form very long-lived excitations in, e.g., $^{88}$Sr. By going beyond the Stokes representation of light beams to incorporate the full degrees of freedom of the field amplitudes, where we no longer discard the spatial variation of the sum of the phases for the two field components, we can form 3D particle-like Skyrmions, localized in space. We identify the transverse polarization density of the atoms as a synthetic magnetic vector potential of the 3D Skyrmions with non-trivial helicity. While constructing such an object directly in a light beam is quite challenging even for modern structured light engineering~\cite{Sugic2021}, we show how appropriately adjusting the light-matter coupling provides a solution with simple copropagating beams. 
For this solution, we then formulate the Stokes representation to provide precisely a Hopf fibration between the optical hypersphere and the Poincar\'e sphere, representing knotted solitons or Hopfions, analogous to the knotted solitons in the Skyrme-Faddeev model~\cite{faddeev_nature_1997,manton-sutcliffe}, and show the linked and trefoil knot Hopfion preimages of the Poincar\'e sphere. While such objects are non-singular, we also show how singular defects can be transferred from light to optical excitations. For systems where the light scattering is strong, and light mediates dipole-dipole interactions between the atoms, we remarkably find that singular defects can even exist as collective excitation eigenmodes. These behave as spatially delocalized `superatoms', exhibiting their own collective resonance linewidth and line shift. \section{Baby-Skyrmions} \label{baby} We first show how to prepare 2D baby-Skyrmions in an atomic ensemble. A non-singular topological texture can be constructed by letting a (pseudo-)spin orient into a localized structure that points in every direction somewhere within a 2D plane, but takes a uniform constant value everywhere sufficiently far away from the origin, independently of the direction. The plane can then be compactified to a unit sphere $S^2$ and the orientations of the spin on the 2D plane can be characterized by $S^2\rightarrow S^2$ mappings. Such mappings can take topologically non-trivial values, associated with the existence of baby-Skyrmions, also frequently called non-singular vortices. 
For optical fields, the state is most commonly characterized on the $S^2$ Poincar\'e sphere by an easily observable Stokes vector $\textbf{S}$~\cite{BOR99,Bliokh2019}, and the $S^2\rightarrow S^2$ mapping defining the baby-Skyrmion topology counts the number of times the object wraps over $S^2$, \begin{equation}\label{Eq:SkyrmionNumber} W =\int_{\mathcal{S}}\frac{d\Omega_i}{8\pi} \epsilon_{ijk} \textbf{S}\cdot\frac{\partial\textbf{S}}{\partial r_j}\times \frac{\partial \textbf{S}}{\partial r_k}, \end{equation} where $\epsilon_{ijk}$ denotes a completely antisymmetric Levi-Civita tensor. A field configuration that satisfies a non-trivial winding $W = 1$ can be achieved using a superposition of a Gaussian and Laguerre-Gaussian (LG) beam, with wavevector $\textbf{k}$ and frequency $\omega = c|\textbf{k}|=ck$. Working with slowly-varying amplitudes for the light and atoms by factoring out the fast-rotating term $\exp(-\text{i}\omega t)$, the positive frequency component of the field, $\boldsymbol{\mathcal{E}}{}(\textbf{r})$, is given by \begin{equation}\label{Eq:BabySkyrmionField} \boldsymbol{\mathcal{E}}{}(\textbf{r})=\text{U}_{0,0}(w_0)\hat{\textbf{e}}_x+\text{U}_{1,0}(w_0)\hat{\textbf{e}}_y. \end{equation} Here $\text{U}_{l,p}(w_0)$ are the LG modes with azimuthal quantum number $l$, radial quantum number $p$, and focused beam width $w_0$~\cite{strlight_review}. The light field of Eq.~\eqref{Eq:BabySkyrmionField}, now a full Poincar\'e beam~\cite{Beckley2010}, contains a N\'eel type baby-Skyrmion whose optical polarization we have defined here in the linear $\hat{\textbf{e}}_{x,y}$ basis, instead of the commonly used circular basis~\cite{Donati2016,Gao2020,Gutierrez-Cuevas2021}, because the linear basis is physically relevant when manipulating the atomic transition, as discussed in Sec.~\ref{Skyrmions}. Topologically non-trivial fields in this simple example can be straightforwardly transformed to optical excitations in atomic ensembles. 
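As an illustrative numerical sketch (not part of the atomic-physics proposal; the on-axis-normalized focal-plane mode profiles, the beam width $w_0=1$, and the grid size are assumptions of the sketch), the winding of Eq.~\eqref{Eq:SkyrmionNumber} can be evaluated directly from the $z=0$ amplitudes of Eq.~\eqref{Eq:BabySkyrmionField}:

```python
import numpy as np

# Focal-plane (z = 0) sketch of the full Poincare beam, in units of w0 = 1.
# The overall normalization cancels in the Stokes vector.
w0, L, n = 1.0, 12.0, 481
x = np.linspace(-L, L, n)
X, Y = np.meshgrid(x, x, indexing="ij")
rho, phi = np.hypot(X, Y), np.arctan2(Y, X)

Ex = np.exp(-rho**2 / w0**2)                          # Gaussian U_{0,0}
Ey = np.sqrt(2) * (rho / w0) * np.exp(1j * phi) * Ex  # U_{1,0} with l = 1 phase

norm = np.abs(Ex)**2 + np.abs(Ey)**2
S = np.stack([2 * np.real(np.conj(Ex) * Ey),          # S_1
              2 * np.imag(np.conj(Ex) * Ey),          # S_2
              np.abs(Ex)**2 - np.abs(Ey)**2]) / norm  # S_3

# 2D form of Eq. (SkyrmionNumber): W = (1/4pi) int S . (dS/dx x dS/dy) dx dy
dSx = np.gradient(S, x, axis=1)
dSy = np.gradient(S, x, axis=2)
density = np.einsum("iab,iab->ab", S, np.cross(dSx, dSy, axis=0)) / (4 * np.pi)
W = density.sum() * (x[1] - x[0])**2                  # ~1 up to grid truncation
```

Up to the finite-grid truncation of the Gaussian tails, the integral returns $W\approx 1$, consistent with the full Poincar\'e beam wrapping once over the Poincar\'e sphere.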
We consider a $\ket{J=0, m=0}\rightarrow\ket{J' = 1, m=\upsilon}$ transition which, in alkaline-earth-metal-like atoms, can be very narrow, forming long-lived excitations. For instance, the $^{88}$Sr clock transition ${}^1S_0\rightarrow {}^3P_0$ has a linewidth controllable by a magnetic field, with the transition entirely forbidden at zero field. We create a non-singular topological texture of the optical excitation by considering an oblate ensemble of atoms, strongly confined along the light propagation direction ($z$ axis). We write the optical excitation as an electric polarization density, or the density of electromagnetic vibration in atoms, with the slowly-varying positive frequency component $\textbf{P}{}(\textbf{r})= \sum_j\delta(\textbf{r}-\textbf{r}_j)\textbf{d}_j$. The induced dipole $\textbf{d}_j = \mathcal{D}\sum_{\upsilon}\hat{\textbf{e}}_{\upsilon}\mathcal{P}^{(j)}_{\upsilon}$ on atom $j$, located at $\textbf{r}_j$, is given in terms of the reduced dipole matrix element $\mathcal{D}$ and the excitation amplitudes $\mathcal{P}_{\upsilon}^{(j)}$, with the unit vectors $\hat{\textbf{e}}_{\pm}=\mp(\hat{\textbf{e}}_{x}\pm\text{i}\hat{\textbf{e}}_{y})/\sqrt{2}$ and $\hat{\textbf{e}}_{0}=\hat{\textbf{e}}_{z}$. Light couples to $\textbf{P}$ via the atomic polarizability, $\alpha = -\mathcal{D}^2/[\hbar\epsilon_0(\Delta +\text{i}\gamma)]$, according to $\textbf{P}{}(\textbf{r})= \epsilon_0\alpha\boldsymbol{\mathcal{E}}{}(\textbf{r})$, where $\gamma$ denotes the resonance linewidth of the atom and $\Delta$ is the detuning of the laser frequency from the atomic resonance. The excitation is then re-emitted back to the light field, where the scattered light amplitude is given by $\int d^3 r' \mathsf{G}(\textbf{r}-\textbf{r}') \textbf{P}{}(\textbf{r}')$ and $\mathsf{G}({\bf r})\textbf{d}$ denotes the dipole radiation at ${\bf r}$ from an oscillating dipole $\textbf{d}$ at the origin~\cite{Jackson}. 
For describing the topology of the optical excitation, we define a pseudo-spinor in terms of the normalized transverse atomic polarization densities, \begin{equation} \hat{\sf P}({\bf r})= \begin{pmatrix} \hat P_x ({\bf r}) \\ \hat P_y ({\bf r}) \end{pmatrix}, \end{equation} where, as for the light field in Eq.~\eqref{Eq:BabySkyrmionField}, we work in a linear rather than circular basis, with $\hat P_j=P_j/|\textbf{P}|$ and the longitudinal component $P_z=0$. We can then define the corresponding atomic Stokes vector \begin{equation}\label{Eq:StokesParamAtoms} S_j(\textbf{r})=\hat{\mathsf{P}}^{\dagger}\sigma_{j}\hat{\mathsf{P}}, \end{equation} where $\sigma_j$ are the Pauli matrices. In Fig.~\ref{Fig:Model}, we show the baby-Skyrmion configuration generated by the field in Eq.~\eqref{Eq:BabySkyrmionField}. The atomic Stokes vector, Eq.~\eqref{Eq:StokesParamAtoms}, now has a fountain-like structure, $\textbf{S}=[2\sqrt{2}\rho w_0\hat{\textbf{e}}_{\rho}+(w_0^2-2\rho^{2})\hat{\textbf{e}}_z]/(w_0^2+2\rho^{2})$, and takes a uniform value $\textbf{S}=(0,0,-1)$ sufficiently far away from the center of the object. It is easy to verify that the winding number Eq.~\eqref{Eq:SkyrmionNumber} for $\textbf{S}$ integrates to $W=1$, and that the same topological structure in the incident field is excited in the atomic polarization density. The principle of creating a baby-Skyrmion is therefore closely related to the studies of analogous objects in exciton-polariton systems~\cite{Cilibrizzi2016, Donati2016}. \begin{figure} \hspace*{0cm} \includegraphics[width=0.9\columnwidth]{Figure_1_BabySkyrmion.pdf} \vspace{-0.2cm} \caption{Optical excitation of a baby-Skyrmion with a characteristic fountain-like structure in the atomic Stokes vector $\textbf{S}$ [Eq.~\eqref{Eq:StokesParamAtoms}], generated by the light field of Eq.~\eqref{Eq:BabySkyrmionField}. For illustrative purposes, we choose a regular square array, but any geometry can be chosen. 
The vector coloring corresponds to the $S_3$ component of $\textbf{S}$.} \label{Fig:Model} \end{figure} \section{Particle-like objects}\label{Sec:Solitons} \subsection{3D Skyrmions}\label{Skyrmions} We now show how 3D Skyrmionic structures can be constructed by considering the full complex nature of the electric polarization. The Stokes vector representation of the Poincar\'e sphere for the light field amplitudes or optical excitations in atoms [Eq.~\eqref{Eq:StokesParamAtoms}] does not provide the full field description, as the texture may also exhibit non-trivial, non-uniform spatial variation of the total phase of the two field components, which is discarded. A more complete description of the field topology can instead be obtained using the optical hypersphere $S^3$ \cite{Sugic2021}. The field parametrization in $S^3$ permits considerably more complex, particle-like objects, localized in 3D physical space. Compactifying the real 3D space, such that the fields are assumed to take the same value far away from the particle, independently of the direction, allows us to describe the topology by $S^3\rightarrow S^3$ mappings. Such mappings can be characterized by distinct topological equivalence classes, identified by the third homotopy group elements $\Pi_3(S^3)=\mathbb{Z}$. Non-trivial objects whose $S^3$ mappings wrap over the order parameter space an integer number of times represent topologically non-trivial solutions, originally introduced by Skyrme~\cite{Skyrme1961}. 
We now parametrize the atomic polarization spinor on the $S^3$ optical hypersphere by writing it as a four-component unit vector $\hat{\textbf{n}} = (n_1,n_2,n_3,n_4)$, and taking \begin{equation}\label{Eq:ndefn} \hat{\mathsf{P}} = \begin{pmatrix} n_2 + \text{i}n_1 \\ n_4 + \text{i}n_3 \end{pmatrix}= \begin{pmatrix}\text{i}\sin\psi \sin\beta\exp(-\text{i}\eta) \\ \cos\psi+\text{i}\sin\psi \cos\beta \end{pmatrix}, \end{equation} where $\hat{\textbf{n}}$ is represented by the hyperspherical angles $0<\psi,\beta \leq \pi$ and $0<\eta\leq2\pi$. The integer topological charge of the 3D Skyrmion (known in high-energy physics as the baryon number~\cite{donoghue2014}) is then found by counting the number of times $\hat{\textbf{n}}$ wraps over $S^3$, \begin{equation}\label{Eq:3DSkyrmionNumber} B =\int d^3r \mathcal{B}(\textbf{r}) =-\int \frac{d^3r}{2\pi^2}\epsilon_{ijk}\epsilon_{a b c d} n_a\frac{\partial n_b}{\partial r_i}\frac{\partial n_c}{\partial r_j}\frac{\partial n_d}{\partial r_k}, \end{equation} where $\mathcal{B}(\textbf{r})$ is the topological charge density. By introducing the transverse polarization density current $\textbf{J}= \frac{1}{2{\text{i}}}[{ \hat{\sf P}}^\dagger \nabla {\hat{\sf P}}- (\nabla {\hat{\sf P}}^\dagger){\hat{\sf P}}]$, $\mathcal{B}$ can be rewritten as \begin{equation}\label{Eq:OpticalCurrent} \mathcal{B}(\textbf{r}) = -\frac{1}{4\pi^2}\textbf{J}\cdot\nabla\times\textbf{J}, \end{equation} and is therefore analogous to the linking number density in (super)fluids~\cite{Volovik1977}, where $\textbf{J}$ is replaced by the (super)fluid velocity, and to the Chern-Simons term for the magnetic helicity~\cite{Jackiw00}, in which case $\textbf{J}$ represents the gauge potential for the magnetic field (note that the sign of the winding numbers may vary depending on the orientations of the coordinates and the mappings). 
To understand the structure of the Skyrmion in Eq.~\eqref{Eq:ndefn}, we consider a simple analytic mapping from 3D Euclidean real space to the optical hypersphere~\cite{Ruostekoski2004} with $\eta = p\phi$, $\beta = \theta$ and $\psi = q\varsigma(r)$, finding that Eq.~\eqref{Eq:3DSkyrmionNumber} integrates to give a topological charge $B=pq$, where the monotonic function $\varsigma(r)$ satisfies $\varsigma(0)=0$ and $\varsigma\rightarrow \pi$ sufficiently far from the origin. The first spinor component vanishes along the $z$ axis, and now forms a multiply-quantized vortex line with a winding number $p$. The second component vanishes at the circles $\theta = \pi/2$, $r = \varsigma^{-1}[(n-1/2)\pi/q]$ for $n=1,\ldots,q$, with $\hat{P}_y \sim -\delta r - \text{i}\delta\theta $ in the circle vicinity, and hence forms $q$ concentric vortex rings with different radii. The vortex line threads the vortex rings, and has a non-vanishing density confined inside the toroidal regions around the vortex ring singularities, such that the Skyrmion is spatially localized, forming a particle-like object. Any continuous deformation of Eq.~\eqref{Eq:ndefn} conserves the discrete topological charge; a 3D Skyrmion with $B=pq$ can also be constructed by taking any combination of singly- and multiply-quantized lines (rings) with total winding $q$ ($p$), located in the components of $P_x$ ($P_y$), where the lines thread through the rings. Forming such a structure in the polarization density using electromagnetic fields in free space alone is a rather challenging task of structured light engineering~\cite{Sugic2021}. However, we can here exploit the properties of the light-matter coupling to simplify the field profiles considerably. 
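As a consistency sketch of the charge counting (our reduction, with the orientation-dependent sign dropped), for the ansatz $\eta = p\phi$, $\beta = \theta$, $\psi = q\varsigma(r)$ the angular integrals in Eq.~\eqref{Eq:3DSkyrmionNumber} factor out, leaving a single integral over $\psi$ that evaluates to $B=pq$:

```python
import numpy as np

def skyrmion_charge(p, q, n=20001):
    """Charge of the ansatz eta = p*phi, beta = theta, psi = q*sigma(r).

    For this mapping the angular integrals of Eq. (3DSkyrmionNumber)
    contribute 4*pi*p, leaving (up to the orientation-dependent sign)
    B = (2*p/pi) * int_0^{q*pi} sin(psi)^2 dpsi = p*q.
    """
    psi = np.linspace(0.0, q * np.pi, n)
    f = np.sin(psi) ** 2
    dpsi = psi[1] - psi[0]
    integral = dpsi * (f[:-1] + f[1:]).sum() / 2.0  # trapezoidal rule
    return (2.0 * p / np.pi) * integral
```

For instance, `skyrmion_charge(1, 1)` and `skyrmion_charge(3, 2)` return $1$ and $6$ to numerical precision, matching $B=pq$.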
To create the Skyrmion, we take a coherent superposition of copropagating light beams \begin{equation}\label{Eq:SkyrmionField} \boldsymbol{\mathcal{E}}{}(\textbf{r})=\text{U}_{l,0}(w_x)\hat{\textbf{e}}_x+[\text{U}_{0,0}(w_1)-c\text{U}_{0,0}(w_2)]\hat{\textbf{e}}_y, \end{equation} where for the LG beam we now choose $l=1$ to form a $B=1$ Skyrmion, although we consider higher-order charges in the next section. For the Gaussian beams of unequal focusing, the parameter $c=\exp(-\rho_0^2/w_1^2+\rho_0^2/w_2^2)$ defines the circular radius $\rho_0$ in the $z=0$ plane of minimum focusing at which they interfere destructively. Destructive interference outside the ring is prevented due to diffraction. Diffraction also leads to variation of the phase (Gouy phase), such that $\text{U}_{0,0}(w_1)-c\text{U}_{0,0}(w_2) \sim (\rho-\rho_0)+\text{i}\zeta z$ in the zero field ring vicinity. The $y$-polarized light component now forms a singular vortex ring~\cite{Ruostekoski2005} with a $2\pi$ phase winding, analogously to $\hat P_y$ of Eq.~\eqref{Eq:ndefn} for $q=-1$, and a vortex core anisotropy \begin{equation}\label{Eq:Anisotropy} \zeta = \frac{w_1^2w_2^2-\rho_0^2(w_1^2+w_2^2)}{w_1^2w_2^2\rho_0k}. \end{equation} The $x$-polarized light component exhibits a singular vortex line, analogously to $\hat P_x$ of Eq.~\eqref{Eq:ndefn} for $p=-1$, where the LG beam has an intensity that reaches its maximum in the $z=0$ plane at $\rho = w_x/\sqrt{2}$, coinciding with the vortex ring singularity. However, the intensity is not confined along the $z$ direction as required by the Skyrmion solution Eq.~\eqref{Eq:ndefn}. In order to achieve the desired profile, we can utilize the light-matter coupling, which can be selectively turned on around the $z=0$ plane only, to confine $P_x$. This can be achieved by controlling the $m=0$ quadratic Zeeman level shift, either by magnetic fields, or ac Stark shifts of lasers or microwaves~\cite{gerbier_pra_2006}. 
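A minimal numerical sketch illustrates the role of the destructive-interference parameter $c$; the on-axis-normalized Gaussian profiles at $z=0$ and the ring radius $\rho_0$ below are illustrative assumptions, not parameters fixed by the text:

```python
import numpy as np

# Beam widths and a chosen ring radius, in units of lambda (rho0 illustrative)
w1, w2, rho0 = 3.0, 4.5, 2.0
c = np.exp(-rho0**2 / w1**2 + rho0**2 / w2**2)

def Ey(rho):
    # y-polarized amplitude at z = 0, assuming on-axis-normalized Gaussians
    return np.exp(-rho**2 / w1**2) - c * np.exp(-rho**2 / w2**2)
```

The amplitude vanishes exactly at $\rho=\rho_0$ and changes sign across the ring (positive on axis, negative outside), while diffraction keeps it nonzero elsewhere, consistent with a singly-quantized vortex ring core.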
In Fig.~\ref{Fig:ChargeDensity}(a), we show the topological charge density $\mathcal{B}$ for the 3D Skyrmion constructed using the field in Eq.~\eqref{Eq:SkyrmionField}, [where we have ignored any contribution from the beam phase factor $\exp(\text{i}kz)$], and the confinement of $P_x$, achieved using spatially dependent level shifts $\Delta_x(\textbf{r}) =\delta[1-\exp(-z^2/10w_x^2)]$ in $\textbf{P}{}(\textbf{r})=\epsilon_0\alpha(\Delta_{\upsilon}) \boldsymbol{\mathcal{E}}{}(\textbf{r})$. We consider long-lived excitations with extremely narrow linewidth, so typically $\delta\gg\gamma$, and we take $\delta/\gamma=200$. The topological charge density shows the localization of the Skyrmion, with the density concentrated at the origin, and also in two rings where the gradient of $P_x$ and $P_y$ becomes large from the applied level shifts and vortex ring phase winding, respectively. Changing the vortex ring core anisotropy, Eq.~\eqref{Eq:Anisotropy}, which has the value $\zeta = 0.08$ in Fig.~\ref{Fig:ChargeDensity}(a), increases the concentration in the rings for a more anisotropic core. We find the corresponding transverse polarization density current $\textbf{J}$ [Fig.~\ref{Fig:ChargeDensity}(b)], which represents the synthetic magnetic vector potential with an integer linking number, has a large magnitude where the charge density is highly concentrated. At the charge density rings, $\textbf{J}$ flows radially inwards or outwards, while closer to the origin, $\textbf{J}$ flows almost entirely along the $\pm x$ directions. 
\begin{figure*} \hspace*{0cm} \includegraphics[width=2\columnwidth]{Figure_2_TopologicalObjects.pdf} \vspace{-0.2cm} \caption{Topological particle-like objects of optical excitations: (a) a 3D Skyrmion with the topological charge $B=1$, prepared using the light field in Eq.~\eqref{Eq:SkyrmionField} for $l=1$; (b) its artificial gauge vector potential field, represented by the polarization density current $\textbf{J}$; and (c) the construction of the Hopfion field profile $\hat{\textbf{h}}$. (a) The isosurface of $|P_x|^2 = 0.6 \mathcal{D}^4 |\boldsymbol{\mathcal{E}}{}(\textbf{0})|^2/(\hbar \gamma)^2$ (meshed region) shows the confinement of the $x$-polarized electromagnetic vibration of the atoms due to the applied level shifts. The topological charge density $\mathcal{B}(\textbf{r})$ (colored region) is concentrated at the origin and in two rings where the gradients of the atomic polarization density become large, with the corresponding $\textbf{J}$ in (b) exhibiting a flow in the $\pm x$ directions near the origin or radial flow at the charge density rings. In (b, c), a geometry of stacked square arrays is chosen for illustrative purposes, although any geometry can be used. The vector coloring in (c) corresponds to the $h_3$ component of $\hat{\textbf{h}}$. The beam widths $(w_x,w_1,w_2)/\lambda = (2,3,4.5)$. } \label{Fig:ChargeDensity} \end{figure*} \subsection{Knotted solitons}\label{KnottedSolitons} We have shown how 3D particle-like objects can be prepared by going beyond the Stokes vector representation used to describe baby-Skyrmions in Sec.~\ref{baby} and parametrizing the optical excitations on $S^3$. However, we can also construct particle-like 3D objects using the Poincar\'e sphere, instead of the full optical hypersphere. 
The advantages of our choice of representation for the optical hypersphere in Eq.~\eqref{Eq:ndefn} become apparent when we formulate the $S^3\rightarrow S^2$ transformation from the optical hypersphere to the Stokes vector precisely as a Hopf fibration~\cite{Hopf1931,Urbantke2003,Sugic2021}. The Hopf fibration, initially of purely mathematical interest, arises naturally in field theories. In the Skyrme-Faddeev model, 3D topological objects known as Hopfions are classified by an integer-valued Hopf charge~\cite{faddeev_nature_1997,Battye98,Sutcliffe17,HIETARINTA99,Babaev2002}. Considerable interest in these systems was generated by the observations that the stable solutions may exhibit knots. The Hopf map of the vector $\hat{\textbf{n}}$ on $S^3$ to a vector $\hat{\textbf{h}}=(h_1,h_2,h_3)$ on $S^2$ is given by \begin{subequations}\label{Eq:HopfionMapping} \begin{align} &h_1 = 2(n_1n_3+n_2n_4),\\ &h_2 = 2(n_2n_3-n_1n_4),\\ &h_3 = n_1^2+n_2^2-n_3^2-n_4^2, \end{align} \end{subequations} where the mapping falls into distinct topological equivalence classes $\Pi_3(S^2)=\mathbb{Z}$, characterized by the integer Hopf charge, $Q_H$. Upon substituting the expressions for $\hat{\textbf{n}}$ in terms of $\hat{\mathsf{P}}$, the mapping indeed returns the atomic Stokes vector, Eq.~\eqref{Eq:StokesParamAtoms}. Applying the Hopf map of Eq.~\eqref{Eq:HopfionMapping} to the $B=1$ Skyrmion of Fig.~\ref{Fig:ChargeDensity}(a), we obtain a Hopfion with charge $Q_H=1$, shown in Fig.~\ref{Fig:ChargeDensity}(c) by the field profile $\hat{\textbf{h}}$. The particle-like nature of the Hopfion is clearly visible, where the full 3D spin texture is localized around the origin. At large distances in any direction from the center, and along the vortex line where $\hat{P}_x$ vanishes, we have $\hat{\textbf{h}}=(0,0,-1)$, while at the vortex ring with $\hat{P}_y=0$, $\hat{\textbf{h}}=(0,0,1)$. 
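The statement that the Hopf map returns the atomic Stokes vector can be verified componentwise; the short numerical sketch below draws a random normalized spinor (illustrative only) and compares Eq.~\eqref{Eq:HopfionMapping} against Eq.~\eqref{Eq:StokesParamAtoms}:

```python
import numpy as np

rng = np.random.default_rng(7)
P = rng.normal(size=2) + 1j * rng.normal(size=2)
P /= np.linalg.norm(P)          # normalized transverse spinor (P_x, P_y)

# Components of n on S^3 from Eq. (ndefn): P_x = n2 + i*n1, P_y = n4 + i*n3
n1, n2 = P[0].imag, P[0].real
n3, n4 = P[1].imag, P[1].real

# Hopf map, Eq. (HopfionMapping)
h = np.array([2 * (n1 * n3 + n2 * n4),
              2 * (n2 * n3 - n1 * n4),
              n1**2 + n2**2 - n3**2 - n4**2])

# Stokes vector, Eq. (StokesParamAtoms)
sigma = [np.array([[0, 1], [1, 0]]),
         np.array([[0, -1j], [1j, 0]]),
         np.array([[1, 0], [0, -1]])]
S = np.array([(P.conj() @ s @ P).real for s in sigma])
```

The two vectors `h` and `S` coincide to machine precision, and `h` is a unit vector, i.e., a point on the Poincar\'e sphere, as required of a Hopf fibration $S^3\rightarrow S^2$.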
The topological structure of the Hopfion is revealed when considering the reduction in dimensionality of the parameter space under the Hopf map of Eq.~\eqref{Eq:HopfionMapping}, where multiple points on $S^3$ map to the same point on $S^2$. These points form closed curves in real space, known as Hopfion preimages, which interlink an integer number of times, as the preimages of the Hopfion introduced in Fig.~\ref{Fig:ChargeDensity}(c) show in Fig.~\ref{Fig:Hopfions}(a). The linking number is given by the Hopf charge, $Q_H$, which can be shown~\cite{Gudnason2020} to be equal to the 3D Skyrmion charge, Eq.~\eqref{Eq:3DSkyrmionNumber}. Therefore, we can increase the preimage interlinking by increasing the total winding of vortex rings and lines, as discussed in Sec.~\ref{Skyrmions}. Multiply-quantized vortex lines in $P_x$ can easily be prepared by changing the beam orbital angular momentum in Eq.~\eqref{Eq:SkyrmionField}. Choosing $l=2$, we form a $Q_H=2$ Hopfion with real space preimages that interlink twice, as shown in Fig.~\ref{Fig:Hopfions}(b). Here we show how we can even prepare Hopfions that have the highly sought-after knotted structure, provided that the total winding of vortex rings is increased. This is more complicated than preparing higher quantized vortex lines, not least because multiply-quantized optical vortex rings are forbidden in paraxial light beams~\cite{berry_2001}. However, we overcome this limitation by two alternative strategies. The first is to create the Hopfion using the field in Eq.~\eqref{Eq:SkyrmionField}, but where the $y$-polarized light component is now chosen to drive a two-photon transition, with the beam wavelength doubled. Each photon excites a single vortex ring, such that $P_y \propto [\text{U}_{0,0}(w_1)-c\text{U}_{0,0}(w_2)]^2$, therefore forming a doubly-quantized ring~\cite{Ruostekoski2005}, with $P_y \sim[(\rho-\rho_0)+\text{i}\zeta z]^2$ in the ring vicinity. 
Using then $l=3$ for the LG beam in Eq.~\eqref{Eq:SkyrmionField} to prepare a triply-quantized vortex line that threads the vortex ring, we create a Hopfion that has a real space preimage of a trefoil knot, Fig.~\ref{Fig:Hopfions}(c). An alternative method is to choose the $y$-polarized light component in Eq.~\eqref{Eq:SkyrmionField} to be structured according to the techniques of Refs.~\cite{Leach2004,berry_2001}, where using the superposition of LG modes, \begin{equation}\label{Eq:Knot} {\cal E}{}_y({\bf r})\simeq 0.25\text{U}_{0,0}(w_x)-0.6\text{U}_{0,1}(w_x)+0.375\text{U}_{0,2}(w_x), \end{equation} gives a configuration of four coaxial vortex rings (two oppositely winding in the focal plane, one above and one below the focal plane), through which the triply-quantized vortex line threads and creates a Hopfion with a trefoil knot preimage, Fig.~\ref{Fig:Hopfions}(d). \begin{figure} \hspace*{0cm} \includegraphics[width=0.9\columnwidth]{Figure_3_Preimages.pdf} \vspace{-0.2cm} \caption{Links and knots in particle-like Hopfion optical excitations. (a-d) Real space preimages of Hopfions with different Hopf charge, $Q_H$, where each preimage corresponds to a point on the Poincar\'e sphere $S^2$ (e). Hopfion with (a) $Q_H=1$ (b) $Q_H=2$, where preimages interlink once and twice, respectively, and organize around the preimages of the vortex ring (red circle) and vortex line (black line corresponding to a circle of infinite radius). (c,d) $Q_H=6$ Hopfions with trefoil knot preimages. The Hopfions are created using the light field in Eq.~\eqref{Eq:SkyrmionField} with $l$ equal to (a) 1, (b) 2, (c,d) 3, and the $y$-polarized light component replaced in (c) by a two-photon transition and in (d) by the LG beams of Eq.~\eqref{Eq:Knot}. 
The beam widths $(w_x,w_1,w_2)/\lambda = (2,3,4.5)$.} \label{Fig:Hopfions} \end{figure} \section{Singular defects} Until now, we have considered non-singular topological textures where the orientation of the spin is well defined everywhere in space. We now show how it is also possible to form singular defects for which the spinor becomes ill-defined at a finite number of points. Structured light fields that exhibit singularities can create singular defects in atomic optical excitations by analogous principles to non-singular textures. To form 2D optical point defects in oblate atomic ensembles strongly confined along the $z$-direction, we consider a full Poincar\'e beam profile formed by a superposition of two LG beams with opposite orbital angular momenta~\cite{Donati2016}, \begin{equation}\label{vortex} \boldsymbol{\mathcal{E}}{}(\textbf{r})=e^{-\text{i}\varphi/2}\text{U}_{1,0}(w_0)\hat{\textbf{e}}_++e^{\text{i}\varphi/2}\text{U}_{-1,0}(w_0)\hat{\textbf{e}}_-, \end{equation} where $\varphi = 0$ ($\varphi = \pi$) results in an azimuthal (radial) singular vortex in the light field. For a dominant incident field, the singular configuration can be transferred onto the atomic polarization density without the need for any applied level shifts, as in Sec.~\ref{baby}. Such a configuration eventually radiates at the single-atom decay rate. However, remarkably, we find that specific defect structures can be highly robust and stable even in the strongly interacting limit where the incident field is no longer dominant in the atomic ensemble. These structures therefore represent spatially delocalized coherent `superatoms' that extend over the sample. In a cold and dense atomic ensemble, resonant incident light can scatter strongly, mediating dipole-dipole interactions between the atoms. 
In Fig.~\ref{Fig:Vortices2}, we show the real components of the steady-state polarization density for interacting atoms driven by the field in Eq.~\eqref{vortex}, with $w_0/\lambda = 2.77$, in the limit of low light intensity where individual atoms respond to light as classical linear oscillators~\cite{Ruostekoski1997a, Lee16}. For an atom spacing $a/\lambda=0.5$, the intensity of light scattered between nearest-neighbor atoms at the center of the lattice, $I_{\text{scat}}$, is larger than the maximum intensity of the incident field, $I_{\text{inc}}$, with $I_{\text{scat}}/I_{\text{inc}} \simeq 2$. Therefore the atoms no longer emit light independently, but instead exhibit collective optical excitations, together with collective resonance linewidths and line shifts. Despite the presence of strong collective behavior, the optical excitations in Fig.~\ref{Fig:Vortices2} show clear vortex-like structures similar to the incident field. To understand this behavior, we calculate the collective excitation eigenmodes of the interacting system. We find that the system supports several collective eigenmodes with singular defects in the real components of the atomic polarization amplitudes. The resulting stationary excitations in Fig.~\ref{Fig:Vortices2} consist almost solely of a single collective excitation with an azimuthal (radial) defect, with a well-defined resonance linewidth and line shift, where the eigenmode occupation~\cite{Facchinetti18} reaches 99\% at the eigenmode resonance, $\Delta/\gamma = 0.90$ ($\Delta/\gamma = 0.89$). $S^1\rightarrow S^1$ mappings determine the winding number (Poincar\'e index) of a singular topological defect, as the net total change in the orientation of the real components of the polarization density around a closed loop, \begin{equation}\label{Eq:SingularWinding} Q = \oint \frac{d\textbf{r}}{2\pi}\cdot \nabla \arctan\left(\frac{\hat{P}_y}{\hat{P}_x}\right), \end{equation} with $Q=1$ for the azimuthal and radial vortices. 
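The Poincar\'e index of Eq.~\eqref{Eq:SingularWinding} can be evaluated on a closed loop around the defect; the sketch below uses idealized azimuthal and radial textures (assumed forms for illustration, not the computed collective eigenmodes):

```python
import numpy as np

def winding(Px, Py, n=2000):
    """Net rotation of the vector (P_x, P_y) around a closed loop, in units of 2*pi."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    ang = np.unwrap(np.arctan2(Py(t), Px(t)))   # unwrapped orientation angle
    return (ang[-1] - ang[0]) / (2.0 * np.pi)

# Idealized textures: azimuthal (-sin, cos) and radial (cos, sin) point vortices
Q_azimuthal = winding(lambda t: -np.sin(t), lambda t: np.cos(t))
Q_radial = winding(lambda t: np.cos(t), lambda t: np.sin(t))
```

Both textures return $Q=1$, in agreement with the winding number quoted for the azimuthal and radial vortices.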
In an infinite system, the collective eigenmodes are real, and the system exhibits true topological defects. However, for the small atomic ensembles considered here, the imaginary components of the eigenmodes do not entirely vanish, e.g., the azimuthal vortex eigenmode appearing in the stationary excitation of Fig.~\ref{Fig:Vortices2}(a) has a small 3\% contribution to the total polarization density amplitude from the imaginary part. In comparison, the full stationary excitation has a 2\% contribution from the imaginary part. \begin{figure} \hspace*{0cm} \includegraphics[width=\columnwidth]{Figure_4_SingularDefects.pdf} \vspace{-0.2cm} \caption{Collective optical excitations with singular-like defects. (a) Azimuthal and (b) radial point vortex in the atomic polarization, generated in a steady-state response to the incident light field of Eq.~\eqref{vortex} that mediates dipole-dipole interactions between the atoms in an $N=313$ triangular array with circular boundaries and spacing $a/\lambda=0.5$. The beam width $w_0/\lambda = 2.77$, phase $\varphi = 0$, and laser frequency detuning from the atomic resonance $\Delta/\gamma = 0.90$ for (a), or $\varphi = \pi$ and $\Delta/\gamma = 0.89$ for (b). } \label{Fig:Vortices2} \end{figure} \section{Concluding remarks} 3D particle-like topological objects have inspired research across a wide range of different disciplines. The ideas originate from Kelvin, who proposed how vortex strings forming closed loops, links, and knots could explain the structure of atoms~\cite{Thomson1869}. To transfer such universal concepts to light and optical excitations, standard textbook representations of field amplitudes in terms of the Stokes vector on the Poincar\'e sphere fall dramatically short of the goal. This is because particle-like topologies can only be achieved in the complete optical hypersphere description, where variation of the total electromagnetic phase of vibration is retained. 
Here we have constructed a comprehensive platform of topologically non-trivial optical excitations of atoms, induced by light. The resulting amplitudes of electronic vibrations have been shown to exhibit substantially more complex topologies than the incident light creating them. In addition, this allows topological objects to be stored in excitations in highly controllable quantum systems with long lifetimes. The proposed setup potentially paves the way for applications in future quantum simulators. The Skyrme model of 3D particle-like objects~\cite{Skyrme1961} is not only an elegant mathematical construction, but also simulates a low-energy limit of QCD where baryons are described by the quantised states of classical soliton solutions~\cite{ADKINS}. By describing these field configurations using linked Hopf maps, the particle-like objects take the form of links and knots, analogous to knotted solitons of the Skyrme-Faddeev model~\cite{faddeev_nature_1997, Battye98, Sutcliffe17, HIETARINTA99, Babaev2002}, and representing physical realisations of Kelvin’s ideas in optical excitations. \section{Acknowledgements} C.D.P.\ and J.R.\ acknowledge financial support from the UK EPSRC (Grant Nos.\ EP/S002952/1, EP/P026133/1), and M.R.D.\ from the EPSRC Centre for Doctoral Training in Topological Design (EP/S02297X/1).
\section{Introduction} \label{sec:introduction} Quantifying the rate at which one can reliably convey information between two or more parties by means of a quantum communication channel is one of the major goals of quantum information theory~\cite{wilde, GT07}. Although the road to this end has a rich landscape already in the nonrelativistic context, it is only when relativity is taken into account that we can fully enjoy all its hues. The main reason is that relativity opens up the possibility of having non-trivial structures such as black hole event horizons, causal horizons caused by the relativistic relative motion of the parties conveying the information (or the expansion of spacetime itself), or even the presence of Cauchy horizons~\cite{wald84}. This has led several authors to analyze the communication process in relativistic settings with particular attention being paid to Minkowski~\cite{AM03, BHTW10, LT13, MHM12, BHP12, CK10, JMK14, MM15, JRMK18, J17, HLL12, SAKM20,YASKM20}, Schwarzschild~\cite{HBK12, BA15, JACKM20}, or asymptotically flat cosmological spacetimes~\cite{MM, BGMM16,SM17}. In a recent paper~\cite{L16}, one of the authors analyzed a communication model using a bosonic quantum field as a communication channel which is suited to arbitrary observers communicating in any globally hyperbolic curved spacetime. In order to convey the information, both sender and receiver interact with the field, which can be in any quasi-free algebraic quantum state~\cite{wald94}, by means of localized two-level quantum systems (qubits). By analyzing such a quantum communication channel nonperturbatively, it was determined at which rate one can reliably transfer information between the two parties. It was shown that the channel has a nonvanishing classical capacity as well as entanglement-assisted classical and quantum capacities. However, it is not enough to know how much classical or quantum information can be transmitted between two parties. 
A related and equally relevant inquiry is: how much energy is needed to convey the information? This has important consequences not only for practical purposes (such as engineering communication networks) but also because it may shed light on one of the major open problems in semiclassical/quantum gravity nowadays, namely, what is the fate of the information which falls into a black hole. For instance, by studying the energy toll for information transmission one may be able to address if it is possible for all the information to come back at the end of the black hole evaporation process, if it needs to come back during its earlier stages, or even if the information gets destroyed/erased. (For a review of such an issue, see Ref.~\cite{UW} and references therein.) Although some investigations have been performed on the issue of how much energy is needed to convey information~\cite{B81}, they are usually restricted to Minkowski spacetime~\cite{JMK15, HSU15} or 1+1 spacetimes~\cite{J16, Wald19}. In order to gain a broader perspective on the subject, in the present paper we will use the communication channel developed in~\cite{L16} to analyze the energy cost of transmitting information in general globally hyperbolic and asymptotically flat spacetimes (which may contain black holes). By means of the so-called null-surface quantization and its interplay with the usual quantization procedure, we will be able to analyze how the total energy of the system formed by the quantum field, $\phi$, and the two local qubits, $A$ and $B$, changes as it is evolved from the asymptotic past to the asymptotic future. It will be shown that such an energy change can be separated into three terms: {\bf (i)} a contribution coming from the particle creation due to the change of the spacetime metric, {\bf (ii)} a contribution accounting for the energy needed to switch each qubit on and off, and {\bf (iii)} a term which measures the extra energy cost coming from the communication process itself. 
For the communication channel being used, it will be proved that the contribution {\bf (iii)} coming from the conveyance of information vanishes. This shows that, once one has already created the qubits, there is no extra energy cost in reliably transmitting information between the two parties. The paper is organized as follows. In Sec.~\ref{sec:Comm.Chann.} we will review the quantum communication model used here. In Sec.~\ref{sec:nullquant} we will develop the so-called null-surface quantization and relate it to the usual quantization procedure. In Sec.~\ref{sec:energy}, we will study how the energy of the total system (field $\phi$ plus qubits $A$ and $B$) changes when it is evolved from past to future null infinity and analyze the energy cost of communication. Section~\ref{sec:Mink} will be used to illustrate the channel capacities and energy cost in two paradigmatic examples in Minkowski spacetime: two inertial observers, and an inertial observer communicating with a uniformly accelerated one. Section~\ref{sec:finalremarks} is reserved for our final remarks. We assume metric signature $(- + + +)$ and natural units in which $c=\hbar=G=1$ unless stated otherwise. \section{The Communication Channel} \label{sec:Comm.Chann.} The quantum communication model we rely on describes the exchange of information between two arbitrary observers in any globally hyperbolic curved spacetime $\left(\mathcal{M},g\right)$ using a quantum scalar field $\phi$ as a communication channel. Here, $\mathcal{M}$ denotes the four-dimensional spacetime manifold and $g$ its Lorentzian metric. Let us consider a real free scalar field $\phi$ propagating in $\left(\mathcal{M},g\right)$. The spacetime can be foliated by Cauchy surfaces $\Sigma_t$ labeled by the real parameter $t$. 
The field is described by the action \begin{equation} \label{eq:KG-action} S\equiv- \frac{1}{2}\int_\mathcal{M}\epsilon_\mathcal{M}\, ( \nabla_a\phi \nabla^a\phi + m^2 \phi^2 + \xi R \phi^2 ), \end{equation} where $\epsilon_\mathcal{M}=\sqrt{-\mathfrak{g}}dx^0\wedge \cdots \wedge dx^3$ is the spacetime volume 4-form, $m$ is the field mass, $\xi\in\mathbb{R}$, $R$ is the scalar curvature, $\nabla_a$ is the torsion-free covariant derivative compatible with the metric $g$, and $\mathfrak{g}\equiv\text{det}(g_{\mu\nu})$ in some arbitrary coordinate system. The extremization of the action~\eqref{eq:KG-action} gives rise to the Klein-Gordon equation \begin{equation} \label{eq:KG-equation} (-\nabla^a\nabla_a + m^2+\xi R)\phi = 0. \end{equation} In the canonical quantization procedure, we promote the real field $\phi$ to an operator\footnote{Rigorously, an operator-valued distribution.} that satisfies the ``equal-time'' canonical commutation relations (CCR) \begin{eqnarray} \label{eq:CCR-unsmeared-1} \, [\phi (t, {\bf x}), \phi (t, {\bf x}') ]_{\Sigma_t} & = & [\pi (t, {\bf x}), \pi (t, {\bf x}') ]_{\Sigma_t} = 0, \\ \label{eq:CCR-unsmeared-2} \, [ \phi (t, {\bf x}), \pi (t, {\bf x}')]_{\Sigma_t} & = & i \delta^3 ({\bf x}, {\bf x}' ), \end{eqnarray} where $\mathbf{x}\equiv(x^1,x^2,x^3)$ are spatial coordinates on $\Sigma_t$ and $\pi(x)$ is the conjugate momentum defined as \begin{equation} \label{eq:conjugate-momentum-definition} \pi\equiv\frac{\delta S}{\delta\dot{\phi}}\,, \end{equation} where $``\;\dot\;\;"\equiv\partial_t$. 
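As a concrete sanity check (our illustration, not part of the original analysis), one can verify symbolically that in $1+1$ Minkowski spacetime, where $R=0$ and Eq.~\eqref{eq:KG-equation} reduces to $(\partial_t^2-\partial_x^2+m^2)\phi=0$ in the $(-+)$ signature, a plane wave obeying the dispersion relation $\omega^2=k^2+m^2$ is a solution. A minimal sketch using the \texttt{sympy} library:

```python
# Sketch (not from the paper): flat-spacetime check of the Klein-Gordon
# equation.  In 1+1 Minkowski space with R = 0, the operator
# P = -grad^a grad_a + m^2 reduces to d_t^2 - d_x^2 + m^2, so a plane
# wave e^{-i(w t - k x)} solves P phi = 0 iff w^2 = k^2 + m^2.
import sympy as sp

t, x = sp.symbols('t x', real=True)
m, k = sp.symbols('m k', positive=True)

omega = sp.sqrt(k**2 + m**2)          # dispersion relation w^2 = k^2 + m^2
phi = sp.exp(-sp.I*(omega*t - k*x))   # positive-frequency plane wave

# P phi = (d_t^2 - d_x^2 + m^2) phi in 1+1 Minkowski spacetime
Pphi = sp.diff(phi, t, 2) - sp.diff(phi, x, 2) + m**2*phi
```

The symbolic result `sp.simplify(Pphi)` vanishes identically once the dispersion relation is imposed.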
In addition, we may formally write the canonical Hamiltonian of the field as \begin{equation} \label{eq:field-canonical-hamiltonian} H_\phi(t)\equiv\int_{\Sigma_t}d^3{\bf x} \,\left(\pi(t,\mathbf{x})\dot\phi(t,\mathbf{x})-\mathcal{L}[\phi,\nabla_a\phi]\right), \end{equation} with \begin{equation} d^3{\bf x}\equiv dx^1\wedge d x^2 \wedge dx^3 \end{equation} and \begin{equation} \label{eq:field-lagrangian-density} \mathcal{L}[\phi,\nabla_a\phi]\equiv-\frac{1}{2}\sqrt{-\mathfrak{g}} \,( \nabla_a\phi \nabla^a\phi + {m}^2 \phi^2 + \xi R \phi^2) \end{equation} being the Lagrangian density. To find a representation of the CCR, Eqs.~\eqref{eq:CCR-unsmeared-1} and~\eqref{eq:CCR-unsmeared-2}, we define an antisymmetric bilinear map $\sigma$ acting on the space $\mathcal{S}^\mathbb{C}$ of complex solutions of Eq.~\eqref{eq:KG-equation} as \begin{equation} \label{eq:antisymmetric-bilinear-map} \sigma(\psi_1,\psi_2)\equiv\int_{\Sigma_t}\epsilon_\Sigma \,n^a\left[\psi_2\nabla_a\psi_1-\psi_1\nabla_a\psi_2\right], \end{equation} where $\epsilon_\Sigma$ represents the proper-volume 3-form on the Cauchy surface $\Sigma_t$ and $n^a$ its future-directed normal unit vector. 
It allows us to define the Klein-Gordon product as \begin{equation} \label{eq:KG-inner-product} \langle\psi_1,\psi_2\rangle\equiv -i\,\sigma(\overline{\psi}_1,\psi_2), \end{equation} and, although this product is not positive-definite on $\mathcal{S}^\mathbb{C}$, we may choose any subspace $\mathcal{H}\subset\mathcal{S}^\mathbb{C}$ (the so-called \textit{one-particle Hilbert space}) such that~\cite{wald94}: \textbf{(i)}~$\mathcal{S}^\mathbb{C}\simeq\mathcal{H}\oplus\overline{\mathcal{H}}$\footnote{For the sake of mathematical precision, we note that one must first suitably Cauchy-complete $\mathcal{S}^\mathbb{C}$ for this decomposition to be valid.}; \textbf{(ii)}~the Klein-Gordon product is positive definite on $\mathcal{H}$, thus making $(\mathcal{H},\langle,\rangle)$ a Hilbert space\footnote{After its completion with respect to the norm induced by $\langle,\rangle$.}; \textbf{(iii)}~given any $u\in\mathcal{H}$ and $v\in\overline{\mathcal{H}}$, $\langle u,v\rangle=0$. Then, the Hilbert space that comprises the field states is defined as the symmetric Fock space $\mathfrak{F}_s(\mathcal{H})$ and the quantum field operator is formally defined as \begin{equation} \label{eq:unsmeared-field-operator} \phi(t,\mathbf{x})\equiv\sum_j\left[u_j(t,\mathbf{x})a(\overline{u}_j)+\overline{u}_j(t,\mathbf{x})a^\dagger(u_j)\right], \end{equation} where $\{u_j\}$ forms an orthonormal basis for $\mathcal{H}$ and $a(\overline{u})$ and $a^\dagger(v)$ are the usual annihilation and creation operators associated with the modes $u$ and $v$, respectively, which satisfy \begin{equation} \label{eq:commutation-relation-annihilation-and-creation} \left[a(\overline{u}),a^\dagger(v)\right]=\langle u,v\rangle I, \end{equation} with $I$ being the identity operator on $\mathfrak{F}_s(\mathcal{H})$. The vacuum state associated with this representation of the CCR is the normalized vector $|0\rangle$ that satisfies $a(\overline{u})|0\rangle=0$ for every mode $u\in\mathcal{H}$. 
In order to make it mathematically well-defined, the quantum field operator must be defined as an operator-valued distribution. To this end, let $\mathcal{S}\subset\mathcal{S}^\mathbb{C}$ be the space of real solutions of Eq.~\eqref{eq:KG-equation} whose restrictions to Cauchy surfaces have compact support; let $K:\mathcal{S}\rightarrow\mathcal{H}$ be the projection operator that takes the positive-norm part of any $\psi\in\mathcal{S}$; and define the map $E:C^\infty_0(\mathcal{M})\rightarrow\mathcal{S}$ acting on some \textit{test function} $f\in C^\infty_0(\mathcal{M})$, where $C^\infty_0(\mathcal{M})$ is the set of all smooth, compactly supported real functions on $\mathcal{M}$, as \begin{equation} \label{eq:def-causal-propagator} Ef(x)\equiv Af(x)-Rf(x). \end{equation} Here, $Af$ and $Rf$ are the advanced and retarded solutions, respectively, of the Klein-Gordon equation with source $f$. Hence, they satisfy \begin{equation} \label{eq:KG-equation-source} P(Af) = P(Rf) = f, \end{equation} with $P\equiv(-\nabla^a\nabla_a + m^2+\xi R)$ representing the Klein-Gordon differential operator. Then, for each test function $f\in C^\infty_0(\mathcal{M})$, we define a \textit{smeared quantum field operator} by \begin{equation} \label{eq:smeared-quantum-field-definition} \phi(f)\equiv i\left[a(\overline{KEf})-a^\dagger(KEf)\right], \end{equation} which satisfies the covariant version of the CCR, \begin{equation} \label{eq:covariant-CCR} \left[\phi(f_1),\phi(f_2)\right]=-i\Delta(f_1,f_2)I, \end{equation} where \begin{equation} \label{eq:def-nabla} \Delta(f_1,f_2)\equiv \int_\mathcal{M}\epsilon_\mathcal{M} f_1(x)Ef_2(x) \end{equation} and $f_1,f_2\in C^\infty_0(\mathcal{M})$. It is easy to see that Eq.~\eqref{eq:smeared-quantum-field-definition} can be obtained by formally integrating Eq.~\eqref{eq:unsmeared-field-operator} with the test function $f$, i.e., \begin{equation} \phi(f)=\int_\mathcal{M}\epsilon_\mathcal{M}\, \phi(x)f(x). 
\label{intfieldf} \end{equation} The above construction has the downside that there are infinitely many choices of $\mathcal{H}$ satisfying properties \textbf{(i)}-\textbf{(iii)} below Eq.~\eqref{eq:KG-inner-product} which are, in general, unitarily inequivalent. This issue can be avoided through the algebraic approach to quantum field theory (QFT)~\cite{wald94,KM15} in which the field quantization can be seen as a real linear map $\phi:f\in C^\infty_0(\mathcal{M})\rightarrow \phi(f)\in\mathcal{A}(\mathcal{M})$ between the space of test functions and a $*$-algebra $\mathcal{A}(\mathcal{M})$ (called the \textit{algebra of observables}) such that \begin{enumerate} \item $\phi(f)^*=\phi(f)$ for all $f\in C^\infty_0(\mathcal{M})$, i.e., the (smeared) field is Hermitian; \item $\phi(Pf)=0$, for all $f\in C^\infty_0(\mathcal{M})$, i.e., the field satisfies the Klein-Gordon equation; \item $\left[\phi(f_1),\phi(f_2)\right]=-i\Delta(f_1,f_2)I,$ $f_1, f_2\in C^\infty_0(\mathcal{M})$, i.e., the field satisfies the CCR; \item $\mathcal{A}(\mathcal{M})$ is algebraically generated by the identity $I$ and the $\phi(f)$'s, $f\in C^\infty_0(\mathcal{M})$. \end{enumerate} In the algebraic approach, a quantum state is defined as a complex linear functional $\omega:\mathcal{A}(\mathcal{M})\rightarrow\mathbb{C}$ which satisfies $\omega(A^*A)\geq0$ for all $A\in\mathcal{A}(\mathcal{M})$ and $\omega(I)=1$. The so-called Gelfand-Naimark-Segal~(GNS) construction~\cite{wald94,KM15} ensures that every algebraic state $\omega$ can be realized as a vector in a Hilbert space together with a representation of the algebra of observables. In this work, we will focus on a particular class of states: the \textit{quasifree states}, defined as follows. 
Given a real inner product $\mu:\mathcal{S}\times\mathcal{S}\rightarrow\mathbb{R}$ satisfying \begin{equation} \label{eq:quasifree-state-condition} |\sigma(\varphi_1,\varphi_2)|^2\leq4\mu(\varphi_1,\varphi_1) \mu(\varphi_2,\varphi_2), \end{equation} for all $\varphi_1,\varphi_2\in\mathcal{S}$, we define a quasifree state $\omega_\mu$ associated with $\mu$ by the relation \begin{equation} \label{eq:quasifree-state-definition} \omega_\mu\left[e^{i\phi(f)}\right]\equiv e^{-\mu(Ef,Ef)/2}, \end{equation} for all $f\in C^\infty_0(\mathcal{M})$. We can extend $\omega_\mu$ to act on all observables of $\mathcal{A}(\mathcal{M})$ by using \begin{equation} \omega_\mu\left[\phi(f_1)\cdots \phi(f_n)\right]\!\equiv\! \left.(-i)^n\partial^n_{t_1\cdots t_n}\,\omega_\mu\!\left[e^{it_1\phi(f_1)}\cdots e^{it_n\phi(f_n)}\right]\right|_{{\bf t} ={\bf 0}}, \end{equation} where ${\bf t}\equiv (t_1,\cdots, t_n)$, together with linearity and continuity. Now that the field quantization procedure has been introduced, we present the communication scheme studied here. Suppose that two observers, Alice and Bob, want to use the quantum field $\phi$ to communicate with each other. We consider Alice's and Bob's trajectories to be arbitrary and we consider the field to be initially in some quasifree state $\omega_\mu$\footnote{Actually, the results from this section apply to any algebraic state $\omega$ which satisfies $\omega\left[e^{i\phi(f)}\right]\in \mathbb{R}^+$. This condition includes the vacuum states, $n$-particle states, as well as KMS (thermal) states.}. Each observer possesses a two-level gapless quantum system that may interact with the quantum field. The two-dimensional Hilbert spaces associated with Alice's and Bob's qubits are denoted by $\mathcal{H}_A$ and $\mathcal{H}_B$, respectively. 
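The moment formula above can be made concrete in a toy check of our own (not part of the original construction): setting $f_1=\cdots=f_n=f$, the exponentials commute, the characteristic functional collapses to $e^{-(t_1+\cdots+t_n)^2\mu(Ef,Ef)/2}$, and differentiating reproduces the Gaussian (Wick) moments $\omega_\mu[\phi(f)^2]=\mu(Ef,Ef)$ and $\omega_\mu[\phi(f)^4]=3\mu(Ef,Ef)^2$. A symbolic sketch:

```python
# Sketch (our illustration): Wick moments of a quasifree state from its
# Gaussian characteristic functional, with mu shorthand for mu(Ef, Ef).
import sympy as sp

t1, t2, t3, t4 = sp.symbols('t1 t2 t3 t4', real=True)
mu = sp.symbols('mu', positive=True)

# For f1 = ... = fn = f the exponentials commute, so
# w_mu[e^{i t1 phi(f)} ... e^{i tn phi(f)}] = e^{-(t1+...+tn)^2 mu/2}.
char2 = sp.exp(-(t1 + t2)**2 * mu / 2)
char4 = sp.exp(-(t1 + t2 + t3 + t4)**2 * mu / 2)

zero = {t1: 0, t2: 0, t3: 0, t4: 0}
two_point = ((-sp.I)**2 * sp.diff(char2, t1, t2)).subs(zero)
four_point = ((-sp.I)**4 * sp.diff(char4, t1, t2, t3, t4)).subs(zero)
# Gaussian moments: w_mu[phi(f)^2] = mu and w_mu[phi(f)^4] = 3 mu^2
```

The odd moments vanish in the same way, as expected for a Gaussian state.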
The communication setup is as follows: Alice wants to transmit classical or quantum information to Bob and, for that, she prepares her qubit in some initial quantum state $\rho_{-\infty}^A$ and switches on its interaction with the field for a finite time $\Delta t_A$ (measured in the parameter $t$). To measure the information imprinted by Alice on the field's state, Bob initially prepares his qubit in a suitable state~$\rho_{-\infty}^B$. He then switches on its interaction with the field for a finite time interval $\Delta t_B$, but only after Alice has switched off her qubit's interaction. Such a communication setup is implemented by means of the Hamiltonian \begin{equation} \label{eq:total-hamiltonian} H(t)\equiv H_\phi(t) + H_{\mathrm{int}}(t), \end{equation} where $H_\phi$ is the field Hamiltonian in Eq.~\eqref{eq:field-canonical-hamiltonian} and $H_\mathrm{int}$ is the Hamiltonian that describes the interaction between the qubits and the field which, in the interaction picture, is given by \begin{equation} \label{eq:interaction-hamiltonian} H_\mathrm{int}^\mathrm{I} (t)\equiv \sum_{j}\epsilon_j(t)\int_{\Sigma_t}d^3{\bf x} \sqrt{-\mathfrak{g}} \; \psi_j(t,{\bf x}) \phi(t,{\bf x}) \otimes\sigma^{\rm z}_j, \end{equation} where $j=A,B$, with $A$ and $B$ labeling Alice's and Bob's qubits, respectively. Here, $\sigma_j^\mathrm{z}$ is one of the Pauli matrices $\left\{\sigma^\mathrm{x}_j,\sigma^\mathrm{y}_j,\sigma_j^\mathrm{z}\right\}$ associated with qubit $j$, $\psi_j(t,\mathbf{x})$ is a smooth real function satisfying $\psi_j|_{\Sigma_t} \in C_0^\infty\left(\Sigma_t\right)$ for all $t$ which models the range of interaction between qubit $j$ and the field (i.e., the interaction occurs only in some vicinity of each qubit's worldline), and $\epsilon_j(t)$ is a smooth and compactly supported real {\em coupling function} modeling the finite-time coupling of qubit $j$ with the field. 
Each coupling function has support \begin{equation} \label{eq:support-coupling-function} \mathrm{supp}\;\epsilon_j=\left[T_j^\mathrm{i},T_j^\mathrm{f}\right], \end{equation} where $T_j^\mathrm{i}$ and $T_j^\mathrm{f}$ represent the times (with respect to the parameter $t$) at which each qubit's interaction with the field is switched on and off, respectively. Here, we denote $\Delta t_j\equiv {T_j^\mathrm{f}-T_j^\mathrm{i} }$ and assume $T_B^\mathrm{i}\geq T_A^\mathrm{f}$ (i.e., Bob's measurement will be performed after Alice has imprinted her information on the field state). The interaction between each qubit and the field, given by Eq.~\eqref{eq:interaction-hamiltonian}, is very similar to the Unruh-DeWitt model. However, we assume that the two levels of each qubit have the same (zero) energy. This assumption allows us to calculate the evolution operator of the system and trace out the field degrees of freedom in a nonperturbative manner, thus making this model interesting to investigate both the causality of the information exchange process and the communication between parties lying in past and future asymptotic spacetime regions. We note that one could also have given an energy gap $2\,\delta_j$ to each qubit $j$ by adding $H_j=\delta_j\sigma_j^\mathrm{z}$ to the total Hamiltonian in Eq.~\eqref{eq:total-hamiltonian}. This would change it to \begin{equation} \label{eq:total-hamiltonian-with-gaps} H=H_\phi+H_A+H_B+H_\mathrm{int}, \end{equation} but would keep the interaction Hamiltonian in the interaction picture, Eq.~\eqref{eq:interaction-hamiltonian}, unchanged. Hence, all the results we have just described would remain the same. The interaction-picture time-evolution operator, associated with the foliation $\Sigma_t$, can be written as the time-ordered expression \begin{equation} \label{eq:evolution-operator-definition} U\equiv T\exp\left[-i\int_{-\infty}^{\infty}dt\,H^\mathrm{I}_\mathrm{int}(t)\right]. 
\end{equation} As shown in \cite{L16}, Eq.~\eqref{eq:evolution-operator-definition} can be computed exactly and is given by \begin{equation} \label{eq:evolution-operator-closed-form} U=e^{i\Xi}e^{-i\phi(f_A)\otimes \sigma^{\mathrm{z}}_A}e^{-i\phi(f_B)\otimes \sigma^{\mathrm{z}}_B}e^{-i\Delta(f_A,f_B)\sigma^{\mathrm{z}}_A\otimes\sigma^{\mathrm{z}}_B}, \end{equation} where $\Xi$ is the c-number \begin{equation*} \Xi\equiv\frac{1}{2}\sum_j\int_{-\infty}^\infty dt \; \epsilon_j(t)\int_{-\infty}^{t} \; dt' \epsilon_j(t') \Delta_{j}(t,t'), \end{equation*} with \begin{equation*} \Delta_{j}(t,t')\!\equiv \!\!\!\int_{\Sigma_t}\!\!\!\!\!\!d^3{\mathbf{x}}\sqrt{-\mathfrak{g}} \!\!\int_{\Sigma_{t'}}\!\!\!\!\!\!\!d^3{\mathbf{x'}} \sqrt{-\mathfrak{g}'}\psi_j(t,{\mathbf{x}})\Delta(x,x')\psi_j(t',{\mathbf{x'}}), \end{equation*} and we recall that $[\phi(x),\phi(x')]\equiv-i\Delta(x,x')I$ is the unsmeared version of Eq.~\eqref{eq:covariant-CCR}. Additionally, we have defined \begin{equation} \label{eq:qubit-funtion} f_j(t,\mathbf{x})\equiv \epsilon_j(t)\psi_j(t,\mathbf{x}), \end{equation} which is a compactly supported function on $\mathcal{M}$ carrying all information about the interaction of qubit $j$ with the field. The initial state of the two-qubit+field system is ${\rho_{-\infty}\equiv\rho^A_{-\infty}\otimes\rho^B_{-\infty}\otimes\rho_\omega}$, where $\rho_\omega$ is the density operator associated with the initial quasifree state, $\omega_\mu$, of the field. Using the unitary evolution operator in Eq.~\eqref{eq:evolution-operator-closed-form}, we can evolve $\rho_{-\infty}$ to obtain the system's state after the communication process, $\rho_{+\infty}=U\rho_{-\infty}U^\dagger$. 
Additionally, we can trace out the field and Alice's qubit degrees of freedom, obtaining the state of Bob's qubit after the communication process has finished: \begin{equation} \label{eq:qubit-B-final-state-definition} \rho^B\equiv\mathrm{tr}_{A,\phi}\left(U\,\rho^A_{-\infty}\otimes\rho^B_{-\infty}\otimes\rho_\omega\,U^\dagger\right). \end{equation} As shown in \cite{L16}, Eq.~\eqref{eq:qubit-B-final-state-definition} can be written in the form \begin{equation} \label{eq:quantum-communication-map} \rho^B=\mathcal{E}\left(\rho^A_{-\infty}\right), \end{equation} where $\mathcal{E}$ is a linear, completely positive, and trace-preserving (CPTP) quantum map that relates the initial state of Alice's qubit (which carries the information that will be conveyed) to the final state of Bob's qubit (which will be measured by him in order to retrieve the message). In other words, $\mathcal{E}$ is the quantum map that describes the communication channel used between Alice and Bob. It depends on their trajectories, on the spacetime geometry, and on the initial states of both the field and Bob's qubit. The explicit form of Eq.~\eqref{eq:quantum-communication-map} as well as the details of the calculations can be found in~\cite{L16}. It is worth pointing out, however, that the initial state $\rho_{-\infty}^B$ should not be arbitrarily chosen: since $\sigma^\mathrm{z}_B$ commutes with the total Hamiltonian \eqref{eq:total-hamiltonian-with-gaps}, $\sigma^\mathrm{z}_B$ is conserved and thus its eigenvalues cannot be used to recover any information transmitted by Alice. Nevertheless, it can be shown that some states will maximize the signalling amplitudes between Alice and Bob, e.g., $\rho^B_{-\infty}\equiv|y_+\rangle_B{}_B\langle y_+|$, where $|y_+\rangle_B$ satisfies $\sigma_B^\mathrm{y}|y_+\rangle_B=|y_+\rangle_B$. 
With this choice of $\rho^B_{-\infty}$, it can be shown that this quantum channel has a classical capacity (i.e., the maximum rate at which classical bits can be reliably transmitted) given by \begin{equation} \label{eq:classical-channel-capacity} \begin{aligned} C(\mathcal{E}) = &\, H\left(\frac{1}{2}+\frac{\nu_B}{2}|\cos[2\Delta(f_A,f_B)]|\right)\\ - & \,H\left(\frac{1}{2}+\frac{\nu_B}{2}\right), \end{aligned} \end{equation} where $H(x)\equiv-x\log_2{x}-(1-x)\log_2{(1-x)}$ is the binary Shannon entropy and \begin{equation} \label{eq:nu-B-def} \nu_B\equiv\omega_\mu\left[e^{i\phi(2f_B)}\right]=e^{-2\mu(KEf_B,KEf_B)}. \end{equation} On the other hand, since this channel is entanglement-breaking, its quantum channel capacity (i.e., the rate at which qubits can be reliably transmitted) is \begin{equation} \label{eq:quantum-channel-capacity} Q(\mathcal{E})=0. \end{equation} One could also define protocols for sending both classical and quantum information when Alice and Bob initially have access to an unlimited supply of entanglement. In this case, one can define classical (or quantum) entanglement-assisted channel capacities which measure the maximum rate at which classical information (or qubits) can be reliably sent through the channel. As shown in~\cite{L16}, the classical $C_{ea}(\mathcal{E})$ and quantum $Q_{ea}(\mathcal{E})$ entanglement-assisted capacities are related to the classical channel capacity~\eqref{eq:classical-channel-capacity} by \begin{equation} \label{eq:entanglement-assisted-channel-capacities} C_{ea}(\mathcal{E})=2Q_{ea}(\mathcal{E})=C(\mathcal{E}). \end{equation} Thus, it is not worth using entanglement in order to try to increase the classical capacity of this channel. On the other hand, when prior entanglement is shared between Alice and Bob, it is possible to convey qubits through this channel at the maximum rate $Q_{ea}(\mathcal{E})$, in contrast with the unassisted case in Eq.~\eqref{eq:quantum-channel-capacity}. 
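The capacity formula in Eq.~\eqref{eq:classical-channel-capacity} is straightforward to evaluate numerically. The sketch below (our illustration; the function names and the sample values of $\nu_B$ and $\Delta(f_A,f_B)$ are ours, not from~\cite{L16}) shows that $C(\mathcal{E})$ vanishes when $\Delta(f_A,f_B)=0$ (no signalling), is maximized at $\Delta(f_A,f_B)=\pi/4$, and grows with the coherence factor $\nu_B$:

```python
# Sketch (our illustration): numerical evaluation of the classical
# capacity C(E) = H(1/2 + nu_B/2 |cos 2D|) - H(1/2 + nu_B/2),
# with nu_B in (0, 1] and D = Delta(f_A, f_B) taken as bare inputs.
import numpy as np

def binary_entropy(x):
    # H(x) = -x log2 x - (1-x) log2(1-x), with H(0) = H(1) = 0
    x = np.clip(x, 1e-12, 1 - 1e-12)
    return -x*np.log2(x) - (1 - x)*np.log2(1 - x)

def classical_capacity(nu_B, Delta_AB):
    return (binary_entropy(0.5 + 0.5*nu_B*abs(np.cos(2*Delta_AB)))
            - binary_entropy(0.5 + 0.5*nu_B))

# Delta(f_A, f_B) = 0 (no causal contact): capacity is exactly zero.
# Delta(f_A, f_B) = pi/4: C = 1 - H(1/2 + nu_B/2) bits per channel use.
```

For instance, `classical_capacity(0.5, np.pi/4)` gives roughly $0.19$ bits per use, while any $\nu_B$ with $\Delta(f_A,f_B)=0$ gives zero.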
\section{Quantization on Null Surfaces} \label{sec:nullquant} The so-called quantization on null surfaces provides the formulation of a QFT restricted to 3-dimensional null submanifolds such as black hole horizons~\cite{MP03}, asymptotic infinities~\cite{M06,DMP06}, and cosmological horizons~\cite{DMP09}. In this section, we will build a quantum field theory restricted to a special class of null hypersurfaces. Then, under a few assumptions, we show how one can relate the ordinary QFT presented in Sec.~\ref{sec:Comm.Chann.} to the algebra of operators defined in the asymptotic past and future null infinities as well as causal horizons. Let $\mathfrak{h}$ be a 3-dimensional null hypersurface satisfying: \begin{enumerate}[label=\textbf{(\arabic*)}] \item $\mathfrak{h}$ is diffeomorphic to $\mathbb{R}\times\Gamma$, where $\Gamma$ is a two-dimensional spacelike submanifold of $\mathcal{M}$; \item there exist coordinates $(\Omega,\lambda,s^1,s^2)$ on $\mathcal{M}$ such that: \begin{enumerate} \item $s\equiv (s^1,s^2)$ are coordinates of $\Gamma$; \item $\mathfrak{h}=\{p\in\mathcal{M}\,|\,\Omega(p)=0\}$ and $d\Omega\neq0$ at $\mathfrak{h}$; \item the restriction of the metric to $\mathfrak{h}$ takes the form \begin{equation} g\big|_\mathfrak{h} = -\gamma^2\,\left(d\Omega \otimes d\lambda + d\lambda \otimes d\Omega\right)+h_\Gamma \label{eq:metric-restricted-to-horizon} \end{equation} where $h_\Gamma$ is the metric induced by $g$ on $\Gamma$ and $\gamma\in\mathbb{R}$. \end{enumerate} \end{enumerate} It follows from conditions {\bf (1)} and {\bf (2)} above that $(\lambda,s)$ defines a coordinate system for $\mathfrak{h}$ and that the curves $\lambda\rightarrow(\lambda,s)$, defined for fixed $s$, are the null generators of $\mathfrak{h}$. From now on, we will refer to $\mathfrak{h}$ generically as ``the horizon'' (although it can also describe past or future null infinity). 
To parallel the usual QFT construction we have presented in Sec.~\ref{sec:Comm.Chann.}, let us define the ``solution space'' on $\mathfrak{h}$ as \begin{equation} \mathcal{S}^\mathbb{C}_\mathfrak{h}\equiv\left\{\mathrm{smooth}\;\psi:\mathfrak{h}\rightarrow\mathbb{C}\,\big|\,\psi,\partial_\lambda\psi\in L^2(\mathfrak{h},d\lambda\wedge\epsilon_\Gamma)\right\}, \end{equation} where $\epsilon_\Gamma$ is the natural volume element on $\Gamma$ and $L^2(\mathfrak{h},d\lambda\wedge\epsilon_\Gamma)$ is the space of square-integrable functions on $\mathfrak{h}$ with respect to the measure $d\lambda\wedge\epsilon_\Gamma$. Similarly to Eq.~\eqref{eq:antisymmetric-bilinear-map}, we define the symplectic product on $\mathcal{S}^\mathbb{C}_\mathfrak{h}$ as \begin{equation} \sigma_\mathfrak{h}(\psi_1,\psi_2) \equiv \int_\mathfrak{h} d\lambda\wedge \epsilon_\Gamma\left[\psi_2\partial_\lambda \psi_1-\psi_1\partial_\lambda\psi_2\right], \label{eq:symplectic-form-null-surface} \end{equation} which again allows us to define the Klein-Gordon inner product on $\mathcal{S}^\mathbb{C}_\mathfrak{h}$ by \begin{equation} \langle\psi_1,\psi_2\rangle_\mathfrak{h}\equiv-i\,\sigma_\mathfrak{h}(\overline{\psi_1},\psi_2). \label{eq:KG-product-null-surface} \end{equation} Now, let us note that Eq.~\eqref{eq:metric-restricted-to-horizon} is invariant under translations $\lambda\rightarrow\lambda+a$, with $a\in\mathbb{R}$. We can exploit this translation symmetry in $\lambda$ to choose a preferred representation of the CCR. 
For this purpose, let us first define the \textit{positive-frequency projection operator} $K$ acting on $\psi\in\mathcal{S}^\mathbb{C}_\mathfrak{h}$ by \begin{equation} K\psi(\lambda,s)\equiv\frac{1}{\sqrt{2\pi}}\int_{\mathbb{R}^+}dE\,e^{-i E \lambda}\,\widetilde{\psi}(E,s), \label{eq:projection-operator-null-surface} \end{equation} where \begin{equation} \widetilde{\psi}(E,s)\equiv\frac{1}{\sqrt{2\pi}}\int_\mathbb{R}d\lambda\,e^{i E \lambda}\,\psi(\lambda,s) \end{equation} is the Fourier transform of $\psi$ with respect to $\lambda$. Then, the one-particle Hilbert space is defined as \begin{equation} \mathcal{H}_\mathfrak{h}\equiv\overline{\left\{K\psi\,|\,\psi\in\mathcal{S}^\mathbb{C}_\mathfrak{h}\right\}} \label{eq:one-particle-space-null-surface}, \end{equation} where the closure is with respect to the norm induced by the product in Eq.~\eqref{eq:KG-product-null-surface} (which is positive definite on~$\mathcal{H}_\mathfrak{h}$). It is easy to see that the horizon one-particle Hilbert space $\mathcal{H}_\mathfrak{h}$ in Eq.~\eqref{eq:one-particle-space-null-surface} satisfies properties \textbf{(i)}-\textbf{(iii)} below Eq.~\eqref{eq:KG-inner-product}. Having established the one-particle Hilbert space $\mathcal{H}_\mathfrak{h}$, we can now follow the same procedure described in Sec.~\ref{sec:Comm.Chann.} to build the bosonic Fock space $\mathfrak{F}_s(\mathcal{H}_\mathfrak{h})$ and the creation/annihilation operators associated with each mode $u\in\mathcal{H}_\mathfrak{h}$. Then, the \textit{horizon smeared quantum field} can be defined as \begin{equation} \phi^{(\mathfrak{h})}(\psi)\equiv i \left[a(\overline{K\psi})-a^\dagger(K\psi)\right], \label{eq:smeared-quantum-field-null-surface} \end{equation} for every $\psi\in\mathcal{S}_\mathfrak{h}\subset \mathcal{S}_\mathfrak{h}^{\mathbb{C}}$ with \begin{equation} \mathcal{S}_\mathfrak{h} \equiv\{\psi\in\mathcal{S}^\mathbb{C}_\mathfrak{h}\,|\,\psi\mathrm{\;is\;real}\}. 
\end{equation} The horizon algebra of observables, $\mathcal{A}(\mathfrak{h})$, is generated by the identity operator ${I:\mathfrak{F}_s(\mathcal{H}_\mathfrak{h})\rightarrow\mathfrak{F}_s(\mathcal{H}_\mathfrak{h})}$ and the set of field operators ${\{\phi^{(\mathfrak{h})}(\psi)\,|\,\psi\in \mathcal{S}_\mathfrak{h}\}}$. It will be useful to write a basis for the one-particle space $\mathcal{H}_\mathfrak{h}$. To this end, let $\{\varphi_\alpha\}_{\alpha\in \Lambda}\subset L^2(\Gamma,\epsilon_\Gamma)$ be an orthonormal basis for $L^2(\Gamma,\epsilon_\Gamma)$ with respect to some measure $d\mu(\alpha)$ on the set $\Lambda$ of quantum numbers $\alpha$. Hence, every ${\psi\in L^2(\Gamma,\epsilon_\Gamma)}$ can be written as \begin{equation} \psi(s)=\int_{\Lambda} d\mu(\alpha)\,\tilde{\psi}(\alpha) \,\varphi_\alpha(s) \end{equation} for some function $\tilde{\psi}(\alpha)$, with $\varphi_\alpha,\varphi_\beta$ satisfying \begin{equation} \int_\Gamma \epsilon_\Gamma\,\overline{\varphi_\alpha(s)}\,\varphi_\beta(s)=\delta_\mu(\alpha,\beta), \end{equation} where $\delta_\mu$ is the Dirac distribution relative to the measure $d\mu(\alpha)$. Then, define the set of modes $\{u_{E\alpha}\}\subset \mathcal{H}_\mathfrak{h}$ as \begin{equation} u_{E\alpha}(\lambda,s)\equiv \frac{1}{\sqrt{4\pi E}} e^{-i E\lambda}\varphi_\alpha(s),\;\;\;E>0, \label{eq:basis-modes-null-surface} \end{equation} which satisfy \begin{equation} \langle u_{E\alpha},u_{E', \alpha'}\rangle_\mathfrak{h}=\delta(E-E')\delta_\mu(\alpha, \alpha') \end{equation} and thus form an orthonormal basis for the one-particle Hilbert space $\mathcal{H}_\mathfrak{h}$. 
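The action of the projector $K$ in Eq.~\eqref{eq:projection-operator-null-surface} and the decomposition of property \textbf{(i)} can be illustrated numerically in a discretized toy model of a single null generator (our illustration; the real profile $\psi(\lambda)$ and the grid parameters are arbitrary choices, and halving the zero mode is a discretization convention): zeroing the negative-frequency part of the Fourier transform of a real $\psi$ yields $K\psi$, and one recovers $\psi=K\psi+\overline{K\psi}$:

```python
# Sketch (our toy model): discretized positive-frequency projection K
# along one null generator, with an arbitrary real "horizon datum" psi.
import numpy as np

n = 1024
lam = np.linspace(-40.0, 40.0, n, endpoint=False)
psi = np.exp(-lam**2/8) * np.cos(3*lam)   # real, effectively compact support

# K keeps only the positive-frequency (E > 0) part of the transform in lambda
psi_tilde = np.fft.fft(psi)
freqs = np.fft.fftfreq(n)                 # E > 0 corresponds to freqs > 0
mask = np.zeros(n)
mask[freqs > 0] = 1.0
mask[freqs == 0] = 0.5                    # split the zero mode evenly (convention)
Kpsi = np.fft.ifft(mask * psi_tilde)

# For real psi: psi = K psi + conj(K psi), mirroring S^C_h = H_h (+) H_h-bar
```

The reconstruction `Kpsi + np.conj(Kpsi)` matches `psi` to numerical precision, and the spectrum of `Kpsi` indeed has no support at negative frequencies.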
They allow us to define annihilation operators, $a_{E\alpha}\equiv a\left(\overline{u_{E\alpha}}\right)$, and write the \textit{horizon unsmeared quantum field operator} as \begin{equation} \phi^{(\mathfrak{h})}(\lambda,s)\equiv \int_{\mathbb{R}^+}dE\int_\Lambda d\mu(\alpha) \left[\,u_{E\alpha}a_{E\alpha}+\mathrm{H.c.} \,\right], \label{eq:unsmeared-quantum-field-null-surface} \end{equation} which satisfies the commutation relation \begin{equation} \left[\phi^{(\mathfrak{h})}(\lambda,s),\partial_\lambda\phi^{(\mathfrak{h})}(\lambda',s')\right]=\frac{i}{2}\delta(\lambda-\lambda')\delta_\Gamma(s-s'). \label{eq:unsmeared-commutation-relation-horizon} \end{equation} It is worth noting that the smeared and unsmeared quantum fields, Eqs.~\eqref{eq:smeared-quantum-field-null-surface} and~\eqref{eq:unsmeared-quantum-field-null-surface}, are related by \begin{equation} \begin{aligned} \phi^{(\mathfrak{h})}(\psi)&= \sigma_\mathfrak{h}(\psi,\phi^{(\mathfrak{h})}) \\ & =2\int_\mathfrak{h} d\lambda\wedge \epsilon_\Gamma\,\partial_\lambda \psi(\lambda,s)\, \phi^{(\mathfrak{h})}(\lambda,s) \\ & = \int_\mathfrak{h} 2\, d\psi\wedge\epsilon_\Gamma\,\phi^{(\mathfrak{h})}(\lambda,s), \end{aligned} \label{eq:smearing-horizon} \end{equation} from which we see that the correct way to smear this field is with forms. This is because there is no natural volume element on $\mathfrak{h}$ (the induced metric is degenerate). Let us now discuss the application of the null-surface quantization and its relation to the ordinary QFT presented in Sec.~\ref{sec:Comm.Chann.}. Suppose that the spacetime $(\mathcal{M},g)$ is asymptotically flat with future null infinity $\mathcal{I}^+$, possibly containing a future causal horizon $\mathfrak{h}^+$ (e.g., the event horizon of a black hole). 
The future null infinity $\mathcal{I}^+$ is a 3-dimensional null hypersurface which satisfies properties \textbf{(1)}-\textbf{(2)} defined at the beginning of this section with $\Gamma=\mathbb{S}^2$ and $\lambda \equiv u$ being the so-called ``retarded time''. Similarly, the causal horizon $\mathfrak{h}^+$ is a 3-dimensional null hypersurface which satisfies the same properties but with $\lambda\equiv v$ being the so-called ``advanced time''. Thus, we can apply the quantization procedure introduced in this section to both surfaces $\mathcal{I}^+$ and $\mathfrak{h}^+$ (if present) and build the field algebras $\mathcal{A}(\mathcal{I}^+)$ and $\mathcal{A}(\mathfrak{h}^+)$, respectively. Let us now take $\mathfrak{h}\equiv\mathfrak{h}^+\cup\mathcal{I}^+$ as the union of future null infinity and the future causal horizon and let $\widetilde{\mathcal{M}}\equiv I^-(\mathfrak{h})$ be the asymptotically flat region outside the horizon, where $I^-(A)$ indicates the chronological past of a subset $A\subset \mathcal{M}$. For the sake of simplicity, from now on, we will restrict our analysis to minimally coupled massless fields. Suppose we have constructed a quantum field theory in $\widetilde{\mathcal{M}}$ following the steps of Sec.~\ref{sec:Comm.Chann.}, obtaining an algebra of observables $\mathcal{A}(\widetilde{\mathcal{M}})$ (called the \textit{bulk algebra}). Since we are dealing with a massless field, all the information carried by the field will be ``imprinted'' on $\mathfrak{h}$. Thus, we expect that: {\bf (BB1)} every solution $\psi\in \mathcal{S}$ of the Klein-Gordon equation in $\widetilde{\mathcal{M}}$ which has compact support on Cauchy surfaces can be extended by continuity to (unique) functions $\psi^{\mathcal{I}^+}\in \mathcal{S}^{\mathbb{C}}_{\mathcal{I}^+}$ on $\mathcal{I}^+$ and $\psi^{\mathfrak{h}^+}\in \mathcal{S}^{\mathbb{C}}_{\mathfrak{h}^+}$ on $\mathfrak{h}^+$. 
Moreover, $\psi, \psi^{\mathcal{I}^+},$ and $\psi^{\mathfrak{h}^+}$ should satisfy \begin{equation} \mathbf{(BB2)} \;\; \sigma(\psi_1,\psi_2)=\sigma_{\mathcal{I}^+}(\psi_1^{\mathcal{I}^+},\psi_2^{\mathcal{I}^+})+\sigma_{\mathfrak{h}^+}(\psi_1^{\mathfrak{h}^+},\psi_2^{\mathfrak{h}^+}), \end{equation} where the LHS is defined in Eq.~\eqref{eq:antisymmetric-bilinear-map} and the horizon bilinear products in the RHS are defined in Eq.~\eqref{eq:symplectic-form-null-surface}. By using {\bf (BB1)} and {\bf (BB2)} above, each operator in $\mathcal{A}(\widetilde{\mathcal{M}})$ can then be mapped to an operator in ${\mathcal{A}(\mathfrak{h})\equiv\mathcal{A}(\mathcal{I}^+)\otimes\mathcal{A}(\mathfrak{h}^+)}$ by the identification \begin{equation} \phi(f)\rightarrow\phi^{(\mathfrak{h})}(Ef^\mathfrak{h}),\;\;\;\;\forall f\in C^\infty_0\left(\widetilde{\mathcal{M}}\right). \end{equation} An algebraic state ${\omega:\mathcal{A}(\widetilde{\mathcal{M}})\rightarrow\mathbb{C}}$ induces the state ${\omega_\mathfrak{h}:\mathcal{A}(\mathfrak{h})\rightarrow\mathbb{C}}$ on $\mathfrak{h}$ through the identification \begin{equation} \omega_\mathfrak{h}\left[\phi^{(\mathfrak{h})}(Ef^\mathfrak{h})\right]\equiv \omega\left[\phi(f)\right]\;,\;\;\;\;\forall\phi^{(\mathfrak{h})}(Ef^\mathfrak{h})\in\mathcal{A}(\mathfrak{h}). \end{equation} By following a completely analogous procedure, one can also relate the bulk algebra to the algebra defined at past null infinity, $\mathcal{I}^-$. It is important to note that, although we will restrict ourselves to massless and minimally coupled real scalar fields, the above relation between $\mathcal{A}(\widetilde{\mathcal{M}})$ and $\mathcal{A}(\mathfrak{h})$ will always exist provided that the field in question satisfies conditions {\bf (BB1)} and {\bf (BB2)}~\cite{DMP09}. 
\section{Energy Cost for the Transmission of Information} \label{sec:energy} In Sec.~\ref{sec:Comm.Chann.}, we have discussed a communication channel that allows the transmission of information between two arbitrary observers in a globally hyperbolic spacetime $(\mathcal{M}, g)$. Now, we turn our attention to the energy cost involved in this communication process when $(\mathcal{M}, g)$ is asymptotically flat with past and future null infinities given by $\mathcal{I}^-$ and $\mathcal{I}^+,$ respectively. Our goal will be to analyze the total energy variation of the two-qubit+field system between early and late times. We recall that the initial state of the system is given by \begin{equation} \rho_{-\infty}\equiv\rho^A_{-\infty}\otimes\rho^B_{-\infty}\otimes\rho_\omega, \end{equation} where $\rho_\omega$ is the density operator associated with some initial quasi-free field state $\omega_\mu$ and $\rho^j_{-\infty}$ is the initial state of qubit $j=A,B$. When the communication process finishes, the final state of the two-qubit+field system is \begin{equation} \rho_{+\infty}\equiv U\rho_{-\infty}U^\dagger, \label{eq:initial-state} \end{equation} where $U$ is the evolution operator given by Eq.~\eqref{eq:evolution-operator-closed-form}. As a result, the total energy variation of the system is formally written as \begin{equation} \Delta E \equiv \langle H(+\infty) \rangle_{\rho_{+\infty}} - \langle H(-\infty)\rangle_{\rho_{-\infty}}, \label{eq:definition-energy-difference} \end{equation} with $H(t)$ defined in Eq.~\eqref{eq:total-hamiltonian}. As the interaction time of each qubit with the field is finite, the interaction Hamiltonian vanishes for $t\rightarrow\pm\infty$ and, thus, Eq.~\eqref{eq:definition-energy-difference} can be cast as \begin{equation} \begin{aligned} \Delta E = \mathrm{tr}\left(H_\phi(+\infty) U\rho_{-\infty}U^\dagger \right)-\mathrm{tr}\left(H_\phi(-\infty)\rho_{-\infty}\right). 
\end{aligned} \label{eqDeltaE} \end{equation} As in Sec.~\ref{sec:Comm.Chann.}, define $\mathfrak{h}\equiv\mathfrak{h}^+\cup\mathcal{I}^+$ or $\mathfrak{h}\equiv\mathcal{I}^+$, depending on whether there is a future causal horizon $\mathfrak{h}^+$ or not. Now, let us restrict our attention to the globally-hyperbolic region $\widetilde{\mathcal{M}}=I^-\left(\mathfrak{h}\right)$ outside the horizon and let us foliate it with Cauchy surfaces $\Sigma_t$ such that $\Sigma_{t\rightarrow -\infty}=\mathcal{I}^-$ and $\Sigma_{t\rightarrow \infty}=\mathfrak{h}$. By using the identification of the algebra $\mathcal{A}\left(\widetilde{\mathcal{M}}\right)$ with the algebras $\mathcal{A}\left(\mathfrak{h}\right)$ and $\mathcal{A}\left(\mathcal{I}^-\right)$, we can cast Eq.~\eqref{eqDeltaE} as \begin{eqnarray} \Delta E & =& \mathrm{tr}\left(H^{(\mathfrak{h})}_\phi\,U^{(\mathfrak{h})}\rho_{-\infty}^\mathfrak{h}U^{(\mathfrak{h})\,\dagger}\right)-\mathrm{tr}\left(H_\phi^{(\mathcal{I}^-)}\rho_{-\infty}^{(\mathcal{I}^-)}\right) \nonumber \\ & =& \mathrm{tr}\left(U^{(\mathfrak{h})\,\dagger}H^{(\mathfrak{h})}_\phi\,U^{(\mathfrak{h})}\rho_{-\infty}^\mathfrak{h}\right)-\mathrm{tr}\left(H_\phi^{(\mathcal{I}^-)}\rho_{-\infty}^{(\mathcal{I}^-)}\right),\nonumber\\ \label{eq:delta-E-horizon} \end{eqnarray} with $H^{(\mathfrak{h})}_\phi$ $\left(H_\phi^{(\mathcal{I}^-)}\right)$ and $\rho_{-\infty}^{\left(\mathfrak{h}\right)}$ $\left(\rho_{-\infty}^{(\mathcal{I}^-)}\right)$ being the horizon field Hamiltonian and the state induced by $\rho_{-\infty}$ at $\mathfrak{h}$ $\left(\mathcal{I}^-\right)$, respectively. Similarly, $U^{(\mathfrak{h})}$ is the evolution operator~\eqref{eq:evolution-operator-closed-form} written using the algebra $\mathcal{A}\left(\mathfrak{h}\right)$, i.e., we have used the identification $\phi(f_j) \rightarrow \phi^{(\mathfrak{h})}(Ef_j^\mathfrak{h})$. 
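The second equality in Eq.~\eqref{eq:delta-E-horizon} is just cyclicity of the trace, i.e., the passage to the Heisenberg picture. As a quick finite-dimensional illustration (a numpy sketch with randomly generated stand-ins for $H$, $U$, and $\rho$; these are not the actual field operators):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6  # dimension of the toy Hilbert space (illustrative)

# Hermitian stand-in for the field Hamiltonian
A = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
H = A + A.conj().T

# Random unitary stand-in for the evolution operator
G = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
U = np.linalg.qr(G)[0]

# Density matrix with unit trace
p = rng.random(d)
rho = np.diag(p / p.sum())

# Cyclicity of the trace: tr(H U rho U^dagger) = tr(U^dagger H U rho)
lhs = np.trace(H @ U @ rho @ U.conj().T)
rhs = np.trace(U.conj().T @ H @ U @ rho)
assert abs(lhs - rhs) < 1e-10
```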
\begin{comment} \begin{equation} U^{(\mathfrak{h})} =e^{i\Xi}e^{-i\phi^{(\mathfrak{h})}(Ef_A^\mathfrak{h})\otimes \sigma^{\mathrm{z}}_A}e^{-i\phi^{(\mathfrak{h})}(Ef_B^\mathfrak{h})\otimes \sigma^{\mathrm{z}}_B}e^{-i\Delta(f_A,f_B)\sigma^{\mathrm{z}}_A\otimes\sigma^{\mathrm{z}}_B}. \end{equation} \end{comment} The field Hamiltonian at $X=\mathcal{I}^-, \mathfrak{h}$ can be written as \begin{equation} \begin{aligned} H^{X}_\phi & = \int_{X} d\lambda_X \wedge\epsilon_{\Gamma_X}\; T^{X}_{ab}k^ak^b \\ & =\int_{X} d\lambda_X\wedge\epsilon_{\Gamma_X} \left[\partial_{\lambda_X} \phi^{X}\right]^2, \end{aligned} \label{eq:energy-horizon-expression} \end{equation} where $\Gamma_X$ is the spacelike 2-surface transverse to the null generators of $X$, \begin{equation} T^{X}_{ab}\equiv \nabla_{(a}\phi^{X}\nabla_{b)}\phi^{X}-\frac{1}{2}g_{ab}\nabla_c\phi^{X}\nabla^c\phi^{X} \label{TX} \end{equation} is the stress-energy-momentum tensor at $X$ of the massless KG field, and $k^a\equiv (\partial_{\lambda_X})^a$ is the vector field tangent to the affinely-parametrized null generators of $X$, whose affine parameter is given by $\lambda_{\mathcal{I}^-}= v$ or $\lambda_{\mathfrak{h}}=\lambda$ whenever $X=\mathcal{I}^-$ or $X=\mathfrak{h}$, respectively. Let us evaluate Eq.~\eqref{eq:delta-E-horizon} in steps. For this purpose, we first define \begin{equation} U^{(\mathfrak{h})}_j\equiv e^{-i\phi^{(\mathfrak{h})}(Ef_j^\mathfrak{h})\otimes\sigma_j^\mathrm{z}},\;\;j=A,B, \label{Uj} \end{equation} and use Eq.~\eqref{eq:evolution-operator-closed-form}, together with the identification between field algebras in $\mathfrak{h}$ and $\widetilde{\mathcal{M}}$, to write \begin{equation} {U^{(\mathfrak{h})}}^\dagger H_\phi^{(\mathfrak{h})}U^{(\mathfrak{h})}= \\ {U^{(\mathfrak{h})}_B}^\dagger {U^{(\mathfrak{h})}_A}^\dagger\,H_\phi^{(\mathfrak{h})}U^{(\mathfrak{h})}_A U^{(\mathfrak{h})}_B. 
\label{eqHhorizonU} \end{equation} Next, by using Eqs.~(\ref{eq:unsmeared-commutation-relation-horizon}) and~(\ref{Uj}) together with the relation \begin{equation} e^{\mathfrak{a}}\mathfrak{b}e^{-\mathfrak{a}}=\mathfrak{b}+[\mathfrak{a},\mathfrak{b}], \end{equation} valid when $\left[[\mathfrak{a},\mathfrak{b}],\mathfrak{a}\right]=\left[[\mathfrak{a},\mathfrak{b}],\mathfrak{b}\right]= 0$, we can write \begin{equation} {U^{(\mathfrak{h})}_j}^\dagger \,\partial_\lambda \phi^{\mathfrak{h}}\,U_j^{(\mathfrak{h})}=\partial_\lambda \phi^{\mathfrak{h}}- \partial_\lambda Ef_j^\mathfrak{h}\,\sigma^\mathrm{z}_j, \label{UHU} \end{equation} where we recall that $j=A, B$. Now, using Eqs.~(\ref{Uj}) and~(\ref{UHU}) in Eq.~(\ref{eqHhorizonU}), we obtain \begin{equation} {U^{(\mathfrak{h})}}^\dagger \,\partial_\lambda \phi^{\mathfrak{h}}\,U^{(\mathfrak{h})}=\partial_\lambda \phi^{\mathfrak{h}}- \sum_{j=A,B}\partial_\lambda Ef_j^\mathfrak{h}\,\sigma^\mathrm{z}_j. \label{UHU2} \end{equation} By using Eqs.~(\ref{eq:energy-horizon-expression}) and~(\ref{UHU2}) we can cast the evolved Hamiltonian on $\mathfrak{h}$ as \begin{eqnarray} && U^{(\mathfrak{h})\dagger} H_\phi^{(\mathfrak{h})}U^{(\mathfrak{h})} =\; H_\phi^{(\mathfrak{h})}\nonumber \\ &-& 2 \sum_{j=A,B}\int_\mathfrak{h}d\lambda \wedge \epsilon_\Gamma \partial_\lambda Ef_j^\mathfrak{h} \partial_\lambda \phi^\mathfrak{h}\otimes\sigma_j^\mathrm{z} \nonumber \\ &+& \sum_{i,j=A,B}\int_\mathfrak{h}d\lambda \wedge \epsilon_\Gamma \partial_\lambda Ef_i^\mathfrak{h} \partial_\lambda Ef_j^\mathfrak{h}\sigma^\mathrm{z}_i\otimes \sigma^\mathrm{z}_j. 
\label{eq:evolution-hamiltonian-horizon-final} \end{eqnarray} Finally, by substituting Eq.~\eqref{eq:evolution-hamiltonian-horizon-final} in Eq.~\eqref{eq:delta-E-horizon} we can write the energy variation as \begin{equation} \Delta E=W_\phi+W_A+W_B+W_{AB}, \end{equation} where \begin{equation} W_\phi\equiv\mathrm{tr}\left(H^{(\mathfrak{h})}_\phi\,\rho^{(\mathfrak{h})}_{\omega}\right)-\mathrm{tr}\left(H^{(\mathcal{I}^-)}_\phi\,\rho^{(\mathcal{I}^-)}_{\omega}\right), \label{eq:W-phi} \end{equation} \begin{equation} W_j\equiv \int_\mathfrak{h}d\lambda\wedge\epsilon_\Gamma \left(\partial_\lambda Ef^\mathfrak{h}_j\right)^2, \label{eq:W-j} \end{equation} \begin{equation} W_{AB}\equiv 2\left[\int_\mathfrak{h}d\lambda\wedge\epsilon_\Gamma\,\left(\partial_\lambda Ef^\mathfrak{h}_A\right)\left(\partial_\lambda Ef^\mathfrak{h}_B\right)\right]\langle\sigma_A^\mathrm{z}\rangle_{\rho^A_{-\infty}}\langle\sigma_B^\mathrm{z}\rangle_{\rho^B_{-\infty}}, \label{eq:W-AB} \end{equation} and we have used that, for any quasi-free state $\omega$, $$\langle\partial_\lambda \phi^\mathfrak{h} \rangle_\omega\equiv {\rm tr}\left(\rho^{\mathfrak{h}}_\omega \partial_\lambda \phi^\mathfrak{h}\right)=0.$$ Note that we have separated the energy variation into three parts. The first one, $W_\phi$, is the contribution to the energy that arises from the particle creation due to the change in the spacetime metric. It depends only on the field state and spacetime metric and has nothing to do with the presence of Alice and Bob. A difficulty we face now is that some sort of renormalization of the field energy operator $H_\phi$ is needed. For this purpose, we will restrict ourselves to the so-called Hadamard states, for which a general renormalization procedure is possible~\cite{wald94}. By noting that any state that is Hadamard in some open neighborhood of a Cauchy surface is Hadamard everywhere~\cite{FSW78}, we can see that the spacetime evolution preserves the renormalizability of the state. 
As a result, we can see that, for Hadamard states, Eq.~(\ref{eq:W-phi}) (and, thus, Eq.~(\ref{eqDeltaE}), as $W_\phi$ is the only contribution to $\Delta E$ where divergences appear) is well-defined and gives finite results. The second contribution, $W_A+W_B$, depends on each qubit's interaction with the field separately. This contribution is due to the work necessary to switch each qubit on and off, and it depends on their trajectories, coupling constants, as well as on spacetime parameters. The third contribution, $W_{AB}$, measures the extra energy cost arising from the communication process itself. It depends on the initial state of each qubit, on the spacetime metric, and on the relative motion between Alice and Bob. We note that, by integrating by parts and using Eq.~(\ref{eq:symplectic-form-null-surface}), we can write \begin{equation} 2 \int_\mathfrak{h}d\lambda\wedge\epsilon_\Gamma\,\left(\partial_\lambda Ef^\mathfrak{h}_A\right)\left(\partial_\lambda Ef^\mathfrak{h}_B\right)=\sigma_{\mathfrak{h}}\left(Ef^\mathfrak{h}_A, \partial_\lambda Ef^\mathfrak{h}_B\right), \end{equation} which, by using Eqs.~(\ref{eq:unsmeared-commutation-relation-horizon}) and~(\ref{eq:smearing-horizon}), enables us to cast Eq.~(\ref{eq:W-AB}) as \begin{equation} W_{AB}=\left\langle i\left[\phi^{(\mathfrak{h})}(Ef^\mathfrak{h}_A),\phi^{(\mathfrak{h})}(\partial_\lambda Ef^\mathfrak{h}_B)\right] \right\rangle_{\omega}\langle\sigma_A^\mathrm{z}\rangle_{\rho^A_{-\infty}}\langle\sigma_B^\mathrm{z}\rangle_{\rho^B_{-\infty}}. \end{equation} As expected, we can see that $W_{AB}$ vanishes if Alice's and Bob's qubits interact with the field in causally disconnected regions of the spacetime. More interestingly, we can make the $W_{AB}$ contribution vanish identically with a convenient choice of $\rho^B_{-\infty}$. Recall that Alice encodes the information she wants to convey in her qubit's initial state $\rho^{A}_{-\infty}$. On the other hand, we are free to choose the initial state of Bob's qubit. 
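Since $W_{AB}$ in Eq.~(\ref{eq:W-AB}) is proportional to $\langle\sigma_B^\mathrm{z}\rangle_{\rho^B_{-\infty}}$, any initial Bob state with vanishing $\sigma^\mathrm{z}$ polarization makes it vanish. A minimal numerical sketch:

```python
import numpy as np

sz = np.array([[1, 0], [0, -1]], dtype=complex)  # sigma^z

# |y_+>: the +1 eigenstate of sigma^y, lying on the Bloch-sphere equator
y_plus = np.array([1.0, 1.0j]) / np.sqrt(2)
rho_B = np.outer(y_plus, y_plus.conj())

# <sigma^z> = tr(rho_B sigma^z) vanishes, hence so does W_AB in Eq. (eq:W-AB)
assert abs(np.trace(rho_B @ sz)) < 1e-12
```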
The choice $\rho^B_{-\infty}\equiv|y_+\rangle_B\,{}_B\langle y_+|$, for example, leads to $W_{AB}=0$ while it maximizes the channel capacities. This shows that one can convey arbitrary amounts of information through this quantum channel without extra energy costs. \section{Two Paradigmatic Examples} \label{sec:Mink} We now illustrate the results presented in the previous sections with two paradigmatic examples in Minkowski spacetime. Let us begin with the field quantization, following the steps presented in Sec.~\ref{sec:Comm.Chann.}. Consider a free and massless scalar field $\phi$ propagating in the Minkowski spacetime $(\mathbb{R}^4,\eta)$. Let $(t,x,y,z)\in\mathbb{R}^4$ denote global inertial Cartesian coordinates and let us denote the spatial coordinates as $\mathbf{x}\equiv(x,y,z)$. The Klein-Gordon equation is simply \begin{equation} \Box \phi = 0, \label{eq:minkowski-KG-equation} \end{equation} with $\Box\equiv -\sum_{\mu,\nu}\eta^{\mu \nu}\partial_\mu\partial_\nu$, $\eta\equiv \sum_{\mu,\nu}\eta_{\mu \nu}dx^\mu\otimes dx^\nu$, and \begin{equation} \eta=-dt\otimes dt + dx\otimes dx + dy\otimes dy + dz\otimes dz. \label{eta} \end{equation} Let $\mathcal{S}^\mathbb{C}$ be the space of complex solutions of Eq.~\eqref{eq:minkowski-KG-equation} with compact-support initial data and consider the antisymmetric bilinear map~\eqref{eq:antisymmetric-bilinear-map}, which takes the form \begin{equation} \sigma(\psi_1,\psi_2) = \int_{\Sigma_{t=0}} d^3\mathbf{x}\left[\psi_2\partial_t\psi_1-\psi_1\partial_t\psi_2\right], \label{eq:KG-product-minkowski} \end{equation} where \begin{equation} \Sigma_{t={\rm const}}\equiv\left\{(t,{\bf x})\in \mathbb{R}^4\,|\, t={\rm const}\right\}. 
\label{SigmaInert} \end{equation} We choose as the one-particle Hilbert space $\mathcal{H}$ the space spanned by the positive-frequency parts, with respect to the inertial time $t$, of solutions in $\mathcal{S}^\mathbb{C}$ Cauchy-completed with the norm induced by the Klein-Gordon inner product~\eqref{eq:KG-inner-product}. One then builds the bosonic Fock space $\mathfrak{F}_s(\mathcal{H})$ as usual, to represent the space of field states and define the field operators via Eq.~\eqref{eq:smeared-quantum-field-definition}. This is the standard CCR representation in Minkowski spacetime associated with inertial observers and we will refer to its vacuum state, $|0_M\rangle$, as the inertial (or Minkowski) vacuum state. Using the Green functions of the d'Alembertian operator $\Box$, one can show that the map $E:C^\infty_0(\mathcal{M})\rightarrow \mathcal{S}$ defined in Eq.~\eqref{eq:def-causal-propagator} takes the form~\cite{F89} \begin{equation} Ef(x') =\int \epsilon_{\mathcal{M}} \,f(x)\,E(x',x) \label{eq:causal-operator-minkowski} \end{equation} with \begin{multline} E(x,x')\equiv\frac{1}{4\pi |\mathbf{x}-\mathbf{x'}|}\left[\phantom{\big|}\delta\left(t-t'-|\mathbf{x}-\mathbf{x'}|\right)\right. \\ \left. - \,\delta\left(t-t'+ |\mathbf{x}-\mathbf{x'}|\right)\phantom{\big|}\right]. \end{multline} For later use, it will be useful to consider the standard \textit{positive-frequency modes} \begin{equation} u_\mathbf{k}(t,\mathbf{x})\equiv\frac{1}{4\pi^{\frac{3}{2}}|\mathbf{k}|^\frac{1}{2}}\, e^{-i|\mathbf{k}|t}e^{i\mathbf{k\cdot x}}\;,\;\;\;\mathbf{k}\in\mathbb{R}^3, \label{eq:inertial-modes} \end{equation} which comprise a complete basis for the one-particle Hilbert space $\mathcal{H}$. Now that we have chosen a representation for the CCR in Minkowski spacetime, let us analyze the effects of the field state as well as the state of motion of both Alice and Bob in the communication process. 
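As a consistency check, one can verify symbolically that the modes of Eq.~\eqref{eq:inertial-modes} solve the wave equation (a sympy sketch; the constant normalization is omitted since it plays no role here):

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
kx, ky, kz = sp.symbols('k_x k_y k_z', real=True, positive=True)

k = sp.sqrt(kx**2 + ky**2 + kz**2)  # |k|
# Positive-frequency plane wave of Eq. (eq:inertial-modes), unnormalized
u = sp.exp(-sp.I * k * t) * sp.exp(sp.I * (kx * x + ky * y + kz * z))

# Wave operator in signature (-,+,+,+); the overall sign convention
# is irrelevant for the equation Box u = 0
box_u = (-sp.diff(u, t, 2) + sp.diff(u, x, 2)
         + sp.diff(u, y, 2) + sp.diff(u, z, 2))
assert sp.simplify(box_u) == 0
```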
\subsection{Inertial sender and receiver} We consider first the following scenario: suppose Alice is at rest at the origin of our inertial coordinate system and wants to transmit some information to Bob, who is at rest at the spatial position $\mathbf{x}=(L,0,0)$ (thus at rest relative to Alice and separated by a spatial distance $L$). For simplicity, we consider that both are equipped with pointlike detectors. To avoid divergences, we consider that the interactions of each qubit with the field are switched on and off continuously. We have seen that Eq.~\eqref{eq:qubit-funtion} carries all the information about the qubit interaction with the field. Applying it to Alice's qubit+field interaction gives \begin{equation} f_A(t,\mathbf{x})=\epsilon_Ac_A(t)\delta^3(\mathbf{x}), \label{eq:inertial-Alice-qubit} \end{equation} where $\epsilon_A$ is a dimensionless coupling constant and \begin{equation} c_A(t)=\begin{cases} e^{\alpha_A (t-T_A^\mathrm{i})}, & t < T_A^\mathrm{i} \\ 1, & T_A^\mathrm{i} \leq t \leq T_A^\mathrm{f} \\ e^{-\alpha_A (t-T_A^\mathrm{f})}, & t > T_A^\mathrm{f} \end{cases} \label{eq:Alice-switching-function} \end{equation} models the switching function. Similarly, the function modeling Bob's qubit+field interaction is \begin{equation} f_B(t,\mathbf{x})=\epsilon_Bc_B(t)\delta^3(\mathbf{x}-L\mathbf{\hat{x}}), \label{eq:inertial-Bob-qubit} \end{equation} where $\epsilon_B$ is a dimensionless coupling constant and $c_B(t)$ is defined as $c_A(t)$ but replacing the $A$'s by $B$'s in Eq.~\eqref{eq:Alice-switching-function}. Our goal is to explicitly evaluate the classical channel capacity in Eq.~\eqref{eq:classical-channel-capacity} and analyze its dependence on the various parameters involved in this communication process. 
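For concreteness, the switching function of Eq.~\eqref{eq:Alice-switching-function} and the closed form of its Fourier transform (obtained by integrating its three branches separately, and useful later on) can be sketched numerically as follows; the parameter values are illustrative only:

```python
import numpy as np

Ti, Tf, alpha = 0.0, 1.0, 5.0  # illustrative values: DeltaT = Tf - Ti = 1

def c(t):
    """Switching function of Eq. (eq:Alice-switching-function)."""
    t = np.asarray(t, dtype=float)
    return np.where(t < Ti, np.exp(alpha * (t - Ti)),
           np.where(t > Tf, np.exp(-alpha * (t - Tf)), 1.0))

# c is continuous at the switching instants and saturates at 1 in between
assert np.isclose(c(Ti), 1.0) and np.isclose(c(Tf), 1.0)
assert np.isclose(c(0.5 * (Ti + Tf)), 1.0)

def c_tilde(w):
    """Closed form of (1/sqrt(2 pi)) * int e^{i w t} c(t) dt, w != 0,
    obtained branch by branch (ramp-up, plateau, ramp-down)."""
    return (np.exp(1j * w * Ti) / (alpha + 1j * w)
            + (np.exp(1j * w * Tf) - np.exp(1j * w * Ti)) / (1j * w)
            + np.exp(1j * w * Tf) / (alpha - 1j * w)) / np.sqrt(2 * np.pi)

# Cross-check the closed form against direct numerical quadrature
t = np.linspace(Ti - 8.0, Tf + 8.0, 400001)
dt = t[1] - t[0]
for w in (0.7, 2.3, 11.0):
    num = np.sum(np.exp(1j * w * t) * c(t)) * dt / np.sqrt(2 * np.pi)
    assert abs(num - c_tilde(w)) < 1e-5
```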
To this end, we first substitute Eq.~\eqref{eq:inertial-Bob-qubit} in Eq.~\eqref{eq:causal-operator-minkowski} to write \begin{multline} Ef_B(t,\mathbf{x})=\frac{\epsilon_B}{4\pi|\mathbf{x}-L\mathbf{\hat{x}}|}\left[c_B\left(t-|\mathbf{x}-L\mathbf{\hat{x}}|\right)\right. \\ \left. - c_B\left(t+|\mathbf{x}-L\mathbf{\hat{x}}|\right)\right]. \label{eq:inertial-EfB} \end{multline} Then, by using Eqs.~\eqref{eq:inertial-Alice-qubit} and~(\ref{eq:inertial-EfB}) in Eq.~\eqref{eq:def-nabla}, we can cast the smeared propagator as \begin{multline} \Delta(f_A,f_B) = \frac{\epsilon_A\epsilon_B}{4\pi L}\int_\mathbb{R}dt\,c_A(t)\left[c_B(t-L) -c_B(t+L)\right]. \label{eq:inertial-Delta} \end{multline} Now, note that $\nu_B$ defined in Eq.~\eqref{eq:nu-B-def} depends on Bob's state of motion as well as the quantum field state. Let us consider two cases: if the field is initially in the inertial vacuum state $|0_M\rangle$, $\nu_B$ is simply \begin{equation} \nu_B=\exp\left[-2\langle KEf_B,KEf_B\rangle\right], \label{eq:inertial-vacuum-nu-B} \end{equation} where $\langle\;,\;\rangle$ is the Klein-Gordon inner product~\eqref{eq:KG-inner-product} and we recall that $K:\mathcal{S}^\mathbb{C}\rightarrow \mathcal{H}$ takes the positive-frequency part of any solution of Eq.~(\ref{eq:minkowski-KG-equation}). 
Now, if the field is in a KMS (thermal) state at temperature $\Theta$, then~\cite{kay1985a} \begin{equation} \nu_B=\exp\left[-2\left\langle KEf_B,\coth\left(\frac{\beta \hat{h}}{2}\right)KEf_B\right\rangle\right], \label{eq:inertial-thermal-nu-B} \end{equation} where $\beta\equiv\Theta^{-1}$ is the inverse temperature and ${\hat{h}:\mathcal{H}\rightarrow\mathcal{H}}$ is the \textit{one-particle Hamiltonian}, which is given by $\hat{h}=i\partial_t$ and satisfies $${H_\phi=d\Gamma(\hat{h})\equiv 0\oplus \hat{h}\oplus \left(\hat{h}\otimes I + I\otimes \hat{h}\right)\oplus \cdots}\;.$$ We note that, in the zero-temperature limit (i.e., $\beta\rightarrow \infty$), Eq.~(\ref{eq:inertial-thermal-nu-B}) reduces to Eq.~(\ref{eq:inertial-vacuum-nu-B}), as it should. Since the modes $u_\mathbf{k}$ defined in Eq.~\eqref{eq:inertial-modes} form a complete basis for $\mathcal{H}$, we can decompose $KEf_B$ as \begin{equation} KEf_B=\int d^3{\bf k} \; \langle u_{\bf k}, Ef_B\rangle \; u_{\bf k} \end{equation} and thus, as $\hat{h}$ is diagonal in this basis, we can write \begin{multline} \left\langle KEf_B,\coth\left(\frac{\beta \hat{h}}{2}\right)KEf_B\right\rangle \\ = \int d^3\mathbf{k} \;\coth\left(\frac{\beta|\mathbf{k}|}{2}\right) \left|\langle u_\mathbf{k},Ef_B\rangle\right|^2. 
\label{eq:thermal-product} \end{multline} By making use of Eq.~\eqref{eq:KG-inner-product} and Lemma 3.2.1 of~\cite{wald94} we can cast the Klein-Gordon inner product in Eq.~(\ref{eq:thermal-product}) as \begin{equation} \begin{aligned} \langle u_\mathbf{k},Ef_B\rangle & = i\int_\mathcal{M}\epsilon_{\mathcal{M}}\,\overline{u_\mathbf{k}(x)}f_B(x) \end{aligned} \label{eq:product-uk-KEfb0} \end{equation} which, by using Eq.~(\ref{eq:inertial-modes}), can be put in the form \begin{equation} \begin{aligned} \langle u_\mathbf{k},Ef_B\rangle & = \frac{i\epsilon_B}{2^\frac{3}{2}\pi|\mathbf{k}|^\frac{1}{2}}\,\widetilde{c_B}(|\mathbf{k}|)e^{-ik_xL}, \end{aligned} \label{eq:product-uk-KEfb} \end{equation} where \begin{equation} \widetilde{c_B}(\omega)\equiv\frac{1}{\sqrt{2\pi}}\int_\mathbb{R}dt\,e^{i\omega t}c_B(t) \end{equation} is the Fourier transform of $c_B(t)$. Putting together Eqs.~\eqref{eq:inertial-thermal-nu-B},~\eqref{eq:thermal-product}, and~\eqref{eq:product-uk-KEfb} we obtain \begin{equation} \nu_B (\Theta) =\exp\left[-\frac{2\epsilon_B^2}{\pi}\int_0^\infty dk\,k\,\coth{\left(\frac{k}{2\Theta}\right)}|\widetilde{c_B}(k)|^2\right]. \label{eq:inertial-thermal-nu-B-final} \end{equation} We can now use Eqs.~\eqref{eq:classical-channel-capacity},~\eqref{eq:inertial-Delta}, and~\eqref{eq:inertial-thermal-nu-B-final} to investigate the classical channel capacity when sender and receiver are inertial observers at rest relative to each other. Let us consider that Alice and Bob let their qubits interact with the quantum field for the same amount of time $$\Delta T\equiv T_A^\mathrm{f}-T_A^\mathrm{i}=T_B^\mathrm{f}-T_B^\mathrm{i},$$ where, for the sake of simplicity, we have set $T_A^\mathrm{i}=0$. We note that by choosing large values of $\alpha_A,\alpha_B$ (i.e., $\alpha_A,\alpha_B \gg 1/\Delta T$) in the switching functions $c_A(t), c_B(t)$, we can model the case where qubit+field interactions take place at finite time intervals $\Delta T$. 
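With such sharp switchings, the causal support of the smeared propagator of Eq.~\eqref{eq:inertial-Delta} can be checked numerically; a sketch with illustrative parameter values ($\Delta T = 1$ and couplings set to 1):

```python
import numpy as np

def c(t, Ti, Tf, alpha=100.0):
    """Switching function of Eq. (eq:Alice-switching-function), written
    with clipped exponents (same function, no overflow on wide grids)."""
    t = np.asarray(t, dtype=float)
    return np.exp(alpha * np.minimum(t - Ti, 0.0)
                  - alpha * np.maximum(t - Tf, 0.0))

def Delta(TBi, L, dT=1.0, eA=1.0, eB=1.0):
    """Numerical quadrature of Eq. (eq:inertial-Delta); couplings and
    Delta T are illustrative placeholder values. Alice switches on at t=0."""
    t = np.linspace(-20.0, 30.0, 500001)
    dt = t[1] - t[0]
    cA = c(t, 0.0, dT)
    integrand = cA * (c(t - L, TBi, TBi + dT) - c(t + L, TBi, TBi + dT))
    return eA * eB / (4.0 * np.pi * L) * np.sum(integrand) * dt

L = 4.0
# Bob on the light cone of Alice's emission: nonvanishing propagator
assert abs(Delta(TBi=L, L=L)) > 1e-3
# Spacelike- or timelike-separated interaction windows: it vanishes
assert abs(Delta(TBi=0.0, L=L)) < 1e-10
assert abs(Delta(TBi=10.0, L=L)) < 1e-10
```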
In Fig.~\ref{fig:inertial-spacetime-diagram} we plot Alice's and Bob's worldlines for a spatial separation $L$ as well as the regions where the emission and detection events take place. Note that the emission and detection events are spacelike separated whenever $T_B^\mathrm{i}<L-\Delta T$ or timelike separated whenever $T_B^\mathrm{i}>L+\Delta T$. As the field is massless, the channel capacity is expected to be zero in such cases since Bob cannot intercept any signal emitted by Alice. \begin{figure} \centering \begin{tikzpicture}[scale=0.9\columnwidth/7cm,every label/.append style={text=black,font=\large}] \coordinate (O) at (0,0); \coordinate[label=right:$x$] (R) at (6.5,0); \coordinate[label=above:$t$] (T) at (0,6.5); \coordinate (L) at (-0.5,0); \coordinate (B) at (0,-0.5); \coordinate (BobB) at (4,-0.5); \coordinate (AliceTi) at (0,0); \coordinate (AliceTf) at (0,1); \coordinate (FP1) at (6.15,6.15); \coordinate (FP2) at (5.15,6.15); \coordinate[label=left:$T^\mathrm{i}_B$] (TBi) at (0,2); \coordinate[label=left:$T^\mathrm{f}_B$] (TBf) at (0,3); \draw[black,line width=1.25pt,->] (L) -- (R); \draw[black,line width=1.25pt,->] (B) -- (T); \draw[dashed,red,line width=2pt,domain=-0.5:6.15] plot (0,\x); \draw[dashed,blue,line width=2pt,domain=-0.5:6.15] plot (4,\x); \draw[black,line width=0.8pt,dotted,domain=0:4] plot (\x,2); \draw[black,line width=0.8pt,dotted,domain=0:4] plot (\x,3); \draw[dashed,gray,line width=1.25,domain=0:6.15] plot (\x,\x); \draw[dashed,gray,line width=1.25,domain=0:5.15] plot (\x,\x+1); \fill[gray,opacity=0.25] (AliceTi) -- (AliceTf) -- (FP2) -- (FP1) -- cycle; \draw[red,line width=6pt,domain=0:1] plot (0,\x); \draw[blue,line width=6pt,domain=2:3] plot (4,\x); \path[<->] ($(B)!0.05!(BobB)$) edge[line width=1.25pt] node[fill=white,anchor=center,pos=0.5] {\large $L$} ($(B)!0.95!(BobB)$); \end{tikzpicture} \caption{Spacetime diagram representing Alice's worldline (red dashed line) and Bob's worldline (blue dashed line). 
The red and blue rectangles represent the regions where their respective qubits interact with the quantum field. As we are considering a massless field, the gray region represents the region where signals emitted by Alice should be present.} \label{fig:inertial-spacetime-diagram} \end{figure} Let us begin by analyzing how the coupling constants influence the channel capacity $C\left(\mathcal{E}\right)$. For this purpose we consider the case where the quantum field is initially in the inertial vacuum state and, thus, we are considering the $\beta\rightarrow \infty $ limit in Eq.~(\ref{eq:inertial-thermal-nu-B-final}). In Fig.~\ref{fig:inertial-Cxeaxeb}, we plot how $C\left(\mathcal{E}\right)$ varies when one changes the couplings $\epsilon_A$ and $\epsilon_B$. For the sake of illustration, we have considered the case where $T_B^\mathrm{i} = L=4\,\Delta T$. This guarantees that Bob's detection process takes place entirely in the gray region of the plot. We can see that the channel capacity gets very close to 1 (maximum efficiency) for large values of $\epsilon_A$ but decreases rapidly as Bob's coupling constant $\epsilon_B$ increases. This happens because, for a fixed value of $\Delta T$, the stronger Alice's interaction with the field, the more efficiently she imprints the information she wants to convey on the field state. On the other hand, Bob's qubit state is altered when it is allowed to interact with the field. If the interaction is too strong (or if it is switched on for a long period of time), the information encoded in his qubit state can be lost due to quantum decoherence. \begin{figure}[tp] \includegraphics[width=\columnwidth]{Cxebxea.eps} \caption{Classical channel capacity as a function of Alice's and Bob's coupling constants, $\epsilon_A$ and $\epsilon_B$, respectively. Here, $\alpha_A=\alpha_B=100\,\Delta T^{-1}$ and $T_B^\mathrm{i}=L=4\,\Delta T$. 
The channel capacity and coupling constants are dimensionless.} \label{fig:inertial-Cxeaxeb} \end{figure} Having established how one can tune $\epsilon_A$ and $\epsilon_B$ to maximize the channel capacity, let us choose suitable values for the coupling constants and investigate the communication process for different choices of the field initial thermal state as well as different causal relations between Alice's emission and Bob's measurement events. The results are shown in Fig.~\ref{fig:inertial-CxTbi}, where we plot the channel capacity, $C\left(\mathcal{E}\right),$ as a function of the time $T^\mathrm{i}_B$ at which Bob begins his measurement. In view of our previous results, we have chosen $\epsilon_A=800,$ $\epsilon_B=0.05$, and $L= 4\Delta T$. We can see that the channel capacity vanishes if Bob's qubit interacts with the field too soon ($T_B^\mathrm{i}<L-\Delta T=3\,\Delta T$) or too late ($T_B^\mathrm{i}>L+\Delta T=5\,\Delta T$), regardless of the initial field state. One may observe in Fig.~\ref{fig:inertial-spacetime-diagram} that these are the cases where emission and detection events are spacelike and timelike separated, respectively, and thus Bob cannot intercept any signal emitted by Alice. On the other hand, the maximum communication efficiency is reached when $T_B^\mathrm{i}=L=4\,\Delta T$, since now Bob is able to intercept every signal emitted by Alice. Additionally, note how the temperature of the field state limits the maximum channel capacity. The higher the temperature $\Theta$, the greater the noise in the quantum channel. This increases the quantum decoherence in Bob's qubit state (as the decoherence time decreases) and, thus, it becomes impossible to achieve high efficiency in the communication process. \begin{figure}[htp] \includegraphics[width=\columnwidth]{CxTbi-inertial.eps} \caption{Classical channel capacity as a function of the time $T^\mathrm{i}_B$ at which Bob starts the measurement process for different initial thermal states of the field. 
Each curve represents a different temperature $\Theta$ of the initial quantum field state (with $\Theta=0$ representing the inertial vacuum state). Here, $\alpha_A=\alpha_B=100\,\Delta T^{-1}$, $\epsilon_A=800$, $\epsilon_B=0.05$, and $L=4\,\Delta T$.} \label{fig:inertial-CxTbi} \end{figure} \subsection{Inertial sender, accelerated receiver} \begin{figure*}[tp] \subfloat[]{\label{figa} \begin{tikzpicture}[scale=0.9\columnwidth/6cm,every label/.append style={text=black,font=\large}] \coordinate (O) at (0,0); \coordinate[label=right:$x$] (R) at (5.5,0); \coordinate[label=above:$t$] (T) at (0,4.5); \coordinate (L) at (-0.5,0); \coordinate (B) at (0,-1.5); \coordinate (AliceTi) at (0,-1); \coordinate (AliceTf) at (0,0); \coordinate (FP1) at (5.2,4.2); \coordinate (FP2) at (4.2,4.2); \coordinate[label=below left:{$x_0$}] (x0) at (2,0); \draw[black,line width=1.25pt,->] (L) -- (R); \draw[black,line width=1.25pt,->] (B) -- (T); \draw[dashed,red,line width=2pt,domain=-1.5:4.2] plot (0,\x); \draw[dashed,orange,line width=2pt,domain=-1.5:4.2] plot ({2.0+(sqrt(1+(0.1*\x)*(0.1*\x))-1)/0.1},\x); \draw[dashed,gray,,line width=1.25,domain=0:5.2] plot (\x,\x-1); \draw[dashed,gray,,line width=1.25,domain=0:4.2] plot (\x,\x); \fill[gray,opacity=0.25] (AliceTi) -- (AliceTf) -- (FP2) -- (FP1) -- cycle; \draw[red,line width=6pt,domain=-1:0] plot (0,\x); \draw[orange,line width=6pt,domain=1.235:2.25] plot ({2.0+(sqrt(1+(0.1*\x)*(0.1*\x))-1)/0.1},\x); \end{tikzpicture} }\hfill \subfloat[]{\label{figb} \begin{tikzpicture}[scale=0.9\columnwidth/6cm,every label/.append style={text=black,font=\large}] \coordinate (O) at (0,0); \coordinate[label=right:$x$] (R) at (5.5,0); \coordinate[label=above:$t$] (T) at (0,4.5); \coordinate (L) at (-0.5,0); \coordinate (B) at (0,-1.5); \coordinate (AliceTi) at (0,-1); \coordinate (AliceTf) at (0,0); \coordinate (FP1) at (5.2,4.2); \coordinate (FP2) at (4.2,4.2); \coordinate[label=below left:{$x_0$}] (x0) at (2,0); \draw[black,line width=1.25pt,->] (L) -- 
(R); \draw[black,line width=1.25pt,->] (B) -- (T); \draw[dashed,red,line width=2pt,domain=-1.5:4.2] plot (0,\x); \draw[dashed,blue,line width=2pt,domain=-1.5:4.2] plot ({2.0+(sqrt(1+(0.5*\x)*(0.5*\x))-1)/0.5},\x); \draw[dashed,gray,line width=1.25,domain=0:5.2] plot (\x,\x-1); \draw[dashed,gray,line width=1.25,domain=0:4.2] plot (\x,\x); \fill[gray,opacity=0.25] (AliceTi) -- (AliceTf) -- (FP2) -- (FP1) -- cycle; \draw[red,line width=6pt,domain=-1:0] plot (0,\x); \draw[blue,line width=6pt,domain=1.5:3.204] plot ({2.0+(sqrt(1+(0.5*\x)*(0.5*\x))-1)/0.5},\x); \end{tikzpicture} } \caption{Spacetime diagrams, in Cartesian coordinates $(t,x)$, representing Alice's worldline (red dashed line) and Bob's worldlines for two different accelerations (blue and orange dashed lines). The solid regions represent emission and detection events that maximize the channel capacity. The detectors remain switched on for the same proper-time interval.} \label{fig:accelerated-spacetime-diagrams} \end{figure*} Let us consider now the following scenario: suppose Alice is at rest at the origin of some inertial Cartesian coordinate system $(t,x,y,z)$ and wants to transmit information to Bob, who is uniformly accelerated along the worldline \begin{equation} \begin{aligned} t_B(\tau) & =a^{-1}\sinh\left(a\tau\right), \\ x_B(\tau) & = x_0 + a^{-1}\left[\cosh\left(a\tau\right)-1\right], \\ y_B(\tau) & = z_B(\tau) = 0. \end{aligned} \end{equation} Here, $a$ is Bob's proper acceleration, $\tau$ is his proper time (synchronized so that $\tau=0$ when $t=0$), and $x_0$ is the spatial distance between Bob and Alice (as measured by Alice) at the point of closest approach. Both worldlines are shown in Fig.~\ref{fig:accelerated-spacetime-diagrams} for two different values of Bob's proper acceleration $a$. The quantum field is assumed to be initially in the inertial vacuum state $|0_M\rangle$ and we consider again that both observers are equipped with pointlike detectors which are continuously switched on and off. 
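A quick numerical check of this worldline (a numpy sketch with sample values for $a$ and $x_0$) confirms that it is a unit-normalized hyperbola of constant proper acceleration $a$:

```python
import numpy as np

a, x0 = 0.5, 2.0                     # sample values of Bob's parameters
tau = np.linspace(-3.0, 3.0, 121)

# Bob's worldline as given above
t = np.sinh(a * tau) / a
x = x0 + (np.cosh(a * tau) - 1.0) / a

# Proper-time derivatives: 4-velocity and 4-acceleration (-,+,+,+)
ut, ux = np.cosh(a * tau), np.sinh(a * tau)
at, ax = a * np.sinh(a * tau), a * np.cosh(a * tau)

# Normalization, orthogonality, and constant proper acceleration a
assert np.allclose(-ut**2 + ux**2, -1.0)
assert np.allclose(-ut * at + ux * ax, 0.0)
assert np.allclose(-at**2 + ax**2, a**2)

# The orbit is the hyperbola (x - (x0 - 1/a))^2 - t^2 = 1/a^2
assert np.allclose((x - (x0 - 1.0 / a))**2 - t**2, 1.0 / a**2)
```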
The function modeling Alice's qubit+field interaction remains the one in Eq.~\eqref{eq:inertial-Alice-qubit}. To discuss Bob's qubit interaction with the field, let us first introduce Rindler coordinates $(\tau, \xi, y,z)$, with $\tau, \xi \in \mathbb{R}$ implicitly defined by \begin{equation} t=a^{-1}e^{a\xi}\sinh{a\tau}\;,\;\; x=\left(x_0-\frac{1}{a}\right)+a^{-1}e^{a\xi}\cosh{a\tau}. \label{rindlercoord} \end{equation} These coordinates cover the right Rindler wedge (RRW), i.e., the region defined by $\left[x-(x_0-1/a)\right]>|t|$, in which the metric takes the form \begin{equation} g=e^{2a\xi}(-d\tau\otimes d\tau +d\xi\otimes d\xi )+dy\otimes dy +dz\otimes dz. \end{equation} In such coordinates, Bob remains static at $\xi=y=z=0$ and his qubit+field interaction is simply described by the function \begin{equation} f_B(\tau,\xi,y,z)=\epsilon_B\,c_B(\tau)\,\delta(\xi)\,\delta(y)\,\delta(z), \label{eq:accelerated-Bob-qubit} \end{equation} where \begin{equation} c_B(\tau)=\begin{cases} e^{\alpha_B (\tau-\tau_B^\mathrm{i})}, & \tau < \tau_B^\mathrm{i} \\ 1, & \tau_B^\mathrm{i} \leq \tau \leq \tau_B^\mathrm{f} \\ e^{-\alpha_B (\tau-\tau_B^\mathrm{f})}, & \tau > \tau_B^\mathrm{f} \end{cases}. \end{equation} In order to analyze the channel capacity, we need to evaluate $\Delta(f_A,f_B)$ again. By a procedure analogous to the one leading to Eq.~\eqref{eq:inertial-EfB}, we obtain \begin{equation} Ef_A(t,\mathbf{x})=\frac{\epsilon_A}{4\pi|\mathbf{x}|}\left[c_A\left(t-|\mathbf{x}|\right) - c_A\left(t+|\mathbf{x}|\right)\right]. 
\label{eq:inertial-EfA} \end{equation} Using Eq.~\eqref{eq:covariant-CCR}, we have \begin{equation} \begin{aligned} \Delta(f_A,f_B) &= -\Delta(f_B,f_A) \\ &=- \int_\mathcal{M} \epsilon_{\mathcal{M}}\;f_B(x)Ef_A(x) \end{aligned} \end{equation} which, in Rindler coordinates~(\ref{rindlercoord}), can be straightforwardly evaluated, giving \begin{multline} \Delta(f_A,f_B)= \frac{\epsilon_A\epsilon_B}{4\pi} \int_\mathbb{R}d\tau \; \frac{c_B(\tau)}{|x_B(\tau)|}\left\{c_A\left[t_B(\tau)+x_B(\tau)\right]\right. \\ \left.-c_A\left[t_B(\tau)-x_B(\tau)\right]\right\}. \label{eq:accelerated-Delta-fA-fB} \end{multline} Now, we proceed to calculate the quantity $\nu_B$ defined in Eq.~\eqref{eq:nu-B-def}, which depends on Bob's state of motion and the initial state of the quantum field. In order to do so, it will be useful to introduce the so-called right Rindler modes, defined by \begin{equation} v_{\omega \mathbf{k_\perp}}^R\equiv\left[\frac{\sinh(\pi\omega/a)}{4\pi^4a}\right]^{1/2}K_{i\omega/a}\left(\frac{|\mathbf{k_\perp}|e^{a\xi}}{a}\right)e^{i\mathbf{k_\perp\cdot x_\perp}}e^{-i\omega\tau} \label{vR} \end{equation} in the RRW and vanishing in the left Rindler wedge (LRW), which is the region where $\left[x-(x_0-1/a)\right]<-|t|$. Here, $\mathbf{x_\perp}\equiv(y,z)$, $\mathbf{k_\perp}\in\mathbb{R}^2$, $\omega>0$, and $K_{\nu}(x)$ is the modified Bessel function of the second kind. The left Rindler modes, $v^L_{\omega {\bf k}_\perp}$, are defined by $v^L_{\omega {\bf k}_\perp}(t,x,{\bf x}_\perp)\equiv \overline{v^R_{\omega {\bf k}_\perp}}(-t,-x,{\bf x}_\perp)$. Hence, they vanish in the RRW and take the form~(\ref{vR}) in Rindler coordinates covering the LRW.
By using $v^R_{\omega {\bf k}_\perp}$ and $v^L_{\omega {\bf k}_\perp}$ we can define the so-called Unruh modes \begin{equation} w^1_{\omega \mathbf{k_\perp}}\equiv\frac{v_{\omega \mathbf{k_\perp}}^{R}+e^{-\pi\omega/a}\,\overline{v_{\omega -\mathbf{k_\perp}}^{L}}}{\sqrt{1-e^{-2\pi\omega/a}}},\label{w1} \end{equation} \begin{equation} w^2_{\omega \mathbf{k_\perp}}\equiv\frac{v_{\omega \mathbf{k_\perp}}^{L}+e^{-\pi\omega/a}\,\overline{v_{\omega -\mathbf{k_\perp}}^{R}}}{\sqrt{1-e^{-2\pi\omega/a}}}, \end{equation} which have purely positive frequency relative to inertial time and comprise a complete basis for the one-particle space $\mathcal{H}$ defined below Eq.~\eqref{SigmaInert}. Therefore, we can write \begin{eqnarray} \langle KEf_B,KEf_B\rangle =\int_{\mathbb{R}} d\omega\int_{\mathbb{R}^2} d^2\mathbf{k_\perp}\;\left|\langle w^1_{\omega \mathbf{k_\perp}},Ef_B\rangle\right|^2, \nonumber \\ \label{eq:accelerated-KEfb-norm} \end{eqnarray} where we have used that $ w^1_{-\omega \mathbf{k_\perp}}= w^2_{\omega \mathbf{k_\perp}}$. 
Analogously to Eq.~\eqref{eq:product-uk-KEfb}, we can write \begin{equation} \langle w^1_{\omega \mathbf{k_\perp}},Ef_B\rangle = i\int \epsilon_{\mathcal{M}}\; \overline{w^1_{\omega \mathbf{k_\perp}}}(x)f_B(x) \end{equation} which, by using Eq.~(\ref{w1}), gives \begin{eqnarray} \langle w^1_{\omega \mathbf{k_\perp}},Ef_B\rangle &=& \frac{i\epsilon_B}{\sqrt{1-e^{-2\pi\omega/a}}}\left[\frac{\sinh(\pi\omega/a)}{2\pi^3a}\right]^{1/2} \nonumber \\ &\times& e^{-\pi\omega/a}K_{i\omega/a}\left(|\mathbf{k_\perp}|/a\right)\widetilde{c_B}(\omega). \label{eq:w-KEfb-product} \end{eqnarray} Substituting Eqs.~\eqref{eq:w-KEfb-product} and~\eqref{eq:accelerated-KEfb-norm} in Eq.~\eqref{eq:inertial-vacuum-nu-B} and evaluating the transverse, $\mathbf{k_\perp}$, integral by means of the identity \begin{equation} \int_0^\infty dx\, x\, |K_{i\omega/a}(x)|^2= \frac{\omega}{4a \sinh{(\pi \omega/a)}}, \end{equation} we obtain \begin{equation} \nu_B =\exp\left[-\frac{2\epsilon_B^2}{\pi}\int_0^\infty d\omega\,\omega\coth{\left(\frac{\pi\omega}{a}\right)}|\widetilde{c_B}(\omega)|^2\right]. \label{eq:accelerated-nu-B-final} \end{equation} Now, we can use Eqs.~\eqref{eq:classical-channel-capacity},~\eqref{eq:accelerated-Delta-fA-fB},~and~\eqref{eq:accelerated-nu-B-final} to investigate the classical channel capacity $C\left(\mathcal{E}\right)$ for an inertial sender and an accelerated receiver. We consider that Alice and Bob let their qubits interact with the quantum field for the same amount of their respective proper times, hence \begin{equation} \Delta T \equiv T_A^\mathrm{f}-T_A^\mathrm{i}=\tau_B^\mathrm{f}-\tau_B^\mathrm{i}. \end{equation} In Fig.~\ref{fig:accelerated-spacetime-diagrams}, we plot their spacetime trajectories as well as regions where emission and detection events may take place.
\begin{figure}[tp] \includegraphics[width=\columnwidth]{CxTbi-accelerated.eps} \caption{Classical channel capacity as a function of the proper time $\tau_B^\mathrm{i}$ at which Bob starts the measuring process, for different proper accelerations. Each curve represents one of the situations schematized in Fig.~\ref{fig:accelerated-spacetime-diagrams}. Here, $\alpha_A=\alpha_B=100\,\Delta T^{-1}$, $\epsilon_A=420$, $\epsilon_B=0.05$ and $x_0=2\,\Delta T$. The field is initially in the inertial vacuum state.} \label{fig:accelerated-CxTbi} \end{figure} In Fig.~\ref{fig:accelerated-CxTbi} we show the behavior of the classical channel capacity as a function of the (proper) time $\tau_B^\mathrm{i}$ at which Bob begins the measurement process. Let us first consider the case where Bob's worldline is the one in Fig.~\ref{figa}, where $x_0=2\,\Delta T$ and $a=0.1\,\Delta T^{-1}$. In this case, Bob intercepts the first and final signals emitted by Alice at proper times $\tau_1\simeq1.05\,\Delta T$ and $\tau_2\simeq2.23\,\Delta T$, respectively. As can be seen in Fig.~\ref{fig:accelerated-CxTbi}, information exchange is possible only if Bob starts the measurement process at a proper time $\tau_B^\mathrm{i}$ satisfying $\tau_1-\Delta T < \tau_B^\mathrm{i} < \tau_2$. When $\tau_B^\mathrm{i}<\tau_1-\Delta T$ or $\tau_B^\mathrm{i}>\tau_2$, the classical channel capacity vanishes since the emission and detection events will be spacelike or timelike separated, respectively. Let us now consider that Bob's trajectory is the one depicted in Fig.~\ref{figb}, where $x_0=2\,\Delta T$ and $a=0.5\,\Delta T^{-1}$. In this case, Bob's worldline intercepts the first signal emitted by Alice at proper time $\tau_1'\simeq1.37\,\Delta T$. In Fig.~\ref{fig:accelerated-CxTbi}, we see that it is exactly when $\tau_B^\mathrm{i}=\tau_1'$ that the maximum channel capacity is attained.
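The quoted intercept times can be checked in a few lines. The sketch below assumes Alice's emission window is $[T_A^\mathrm{i},T_A^\mathrm{f}]=[-\Delta T,0]$ (an assumption consistent with the quoted values, not fixed by the equations above) and works in units where $\Delta T=1$:

```python
import math

def intercept_time(t_emit, a, x0):
    """Proper time at which Bob's uniformly accelerated worldline meets the
    outgoing light ray emitted by Alice (from x = 0) at inertial time t_emit.
    The null condition t_B(tau) - x_B(tau) = t_emit reduces, using
    sinh(a*tau) - cosh(a*tau) = -exp(-a*tau), to
        exp(-a*tau) = 1 - a*(t_emit + x0),
    which has a solution only when a*(t_emit + x0) < 1."""
    arg = 1.0 - a * (t_emit + x0)
    if arg <= 0.0:
        return math.inf  # ray lies on or behind Bob's causal horizon t = x
    return -math.log(arg) / a

# First case: a = 0.1/DT, x0 = 2*DT (units DT = 1), emission window [-1, 0].
tau1 = intercept_time(-1.0, 0.1, 2.0)  # first signal, ~1.05 DT
tau2 = intercept_time(0.0, 0.1, 2.0)   # final signal, ~2.23 DT
# For a = 0.5/DT the final (t_emit = 0) ray coincides with the horizon
# t = x and never reaches Bob:
tau2_fast = intercept_time(0.0, 0.5, 2.0)  # math.inf
print(tau1, tau2, tau2_fast)
```

With the assumed emission window, this reproduces $\tau_1\simeq1.05\,\Delta T$ and $\tau_2\simeq2.23\,\Delta T$, and the divergence of the $a=0.5\,\Delta T^{-1}$ case anticipates the horizon discussion below.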
However, since the surface $t = x$ is Bob's causal horizon, he never leaves the light-cone region in which the information emitted by Alice is traveling. Thus, the channel capacity never vanishes; it only decreases as Bob accelerates away from Alice. \subsection{Work for switching on/off the detectors} We have shown in Sec.~\ref{sec:energy} that one can use the quantum channel presented here to convey arbitrary amounts of information without any extra energy cost. However, some work is necessary to switch on/off the interaction of each qubit with the quantum field, which is given by Eq.~\eqref{eq:W-j}. Let us now estimate this energy cost by means of the inertial detectors previously discussed. For this purpose, let us first relate the ordinary Minkowski QFT construction presented at the beginning of this section to the null-surface construction introduced in Sec.~\ref{sec:nullquant}. Recalling that $(t,x,y,z)$ are inertial Cartesian coordinates in which the Minkowski metric takes the form given in Eq.~(\ref{eta}), let \begin{equation} u\equiv t-|\mathbf{x}|\; {\rm and} \; v\equiv t+|\mathbf{x}| \label{uvcoord} \end{equation} be the retarded and advanced null coordinates, respectively, where $\mathbf{x}\equiv (x,y,z)$. By defining \begin{equation} \tan{V}\equiv v, \end{equation} one can cast Eq.~(\ref{eta}) in $u$ and $V$ coordinates as \begin{equation} \eta=-\frac{1}{2}\sec^2V\left(du\otimes dV+ dV\otimes du\right)+\frac{(\tan V - u)^2}{4}g_{\mathbb{S}^2}, \end{equation} where $g_{\mathbb{S}^2}$ is the standard metric on the unit 2-sphere.
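This expression follows in one step from the double-null form of the Minkowski metric; as an intermediate step, using only $|\mathbf{x}|=(v-u)/2$, \begin{equation} \eta=-\frac{1}{2}\left(du\otimes dv+ dv\otimes du\right)+\frac{(v - u)^2}{4}g_{\mathbb{S}^2}, \end{equation} which becomes the expression above upon substituting $v=\tan V$, so that $dv=\sec^2 V\, dV$.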
Now, it is easy to see that Minkowski spacetime is future null asymptotically flat in the following sense~\cite{HE}: there exists a second spacetime $(\widehat{\mathcal{M}},\hat{g})$ with metric \begin{equation} \hat{g}=-\frac{2}{(\sin V-u\cos V)^2} \left(du\otimes dV+ dV\otimes du\right) + g_{\mathbb{S}^2} \end{equation} such that: \textbf{(1)} $\mathcal{M}$ is conformally embedded in $\widehat{\mathcal{M}}$ satisfying $\hat{g}=\Omega^2 \eta$ with a conformal factor \begin{equation} \Omega\equiv \frac{2}{\tan V - u}; \label{eq:conformal-factor-minkowski} \end{equation} \textbf{(2)} the future null infinity $\mathcal{I}^+$ is the 3-dimensional null hypersurface defined by the set $$\left\{p\in \widehat{\mathcal{M}}|V=\frac{\pi}{2}\right\}$$ in which the conformal factor satisfies $\left.\Omega\right|_{\mathcal{I}^+}=0$ and $\left.d\Omega\right|_{\mathcal{I}^+}\neq 0$. The conformal metric $\hat{g}$ restricted to $\mathcal{I}^+$ takes the form \begin{equation} \hat{g}|_{\mathcal{I}^+}=-\frac{1}{2}\left(du\otimes d\Omega + d\Omega \otimes du \right) +g_{\mathbb{S}^2}, \end{equation} which has the form given in Eq.~\eqref{eq:metric-restricted-to-horizon}. Thus, we can perform the null quantization procedure at future null infinity $\mathcal{I}^+$ described in Sec.~\ref{sec:nullquant} with $\lambda=u$. Now, let us consider the inertial qubit carried by Alice whose interaction with the field is described by the function $f_A=\epsilon_A c_A(t)\delta^3({\bf x})$ with $c_A$ given in Eq.~\eqref{eq:inertial-Alice-qubit}. To compute the energy cost $W_A$, given in Eq.~\eqref{eq:W-j}, to switch-on/off the qubit, we need to first compute $Ef_A^{\mathfrak{h}}$. To this end, we can use the explicit form of $Ef_A$ given in Eq.~\eqref{eq:inertial-EfA} together with Eq.~(\ref{uvcoord}) to write \begin{equation} Ef_A = \frac{\epsilon_A}{4\pi|\mathbf{x}|}\left[c_A(u)-c_A(v)\right]. 
\end{equation} By using the above equation, the extension $Ef_A^{\mathfrak{h}}$ of $Ef_A$ to $\mathcal{I}^+$ is simply computed using \begin{equation} Ef_A^{\mathfrak{h}} \equiv \lim_{v\rightarrow\infty}\; \Omega^{-1} Ef_A \end{equation} and $$|{\bf x}|=\frac{v-u}{2}=\Omega^{-1},$$ yielding \begin{equation} Ef_A^{\mathfrak{h}} = \frac{\epsilon_A}{4\pi}\,c_A(u). \label{Efh} \end{equation} Using Eqs.~(\ref{Efh}) and~(\ref{eq:inertial-Alice-qubit}) in Eq.~\eqref{eq:W-j} leads to \begin{equation} \begin{aligned} W_A & =\int_{\mathcal{I}^+} du\wedge\epsilon_{\mathbb{S}^2}\,\left(\frac{\epsilon_A}{4\pi}\right)^2\left[\partial_u c_A(u)\right]^2 \\ & =\frac{\epsilon_A^2\alpha_A}{4\pi}. \end{aligned} \end{equation} We can see that the work necessary to switch on/off the detector increases with the coupling strength and is inversely proportional to the time scale $\tau_{\alpha_A}\equiv \alpha_A^{-1}$ characterizing the switching process. This gives the usual trade-off between the energy $W_A$ of the process and its characteristic time $\tau_{\alpha_A}$: \begin{equation} W_A \tau_{\alpha_A} =\frac{\epsilon_A^2}{4\pi}, \end{equation} which shows that the faster the switching on/off of the qubit interaction, the more energy is needed. \section{Conclusions} \label{sec:finalremarks} In the present paper, we have analyzed the energy cost of conveying classical and quantum information between two arbitrary observers in asymptotically flat and globally hyperbolic spacetimes (possibly containing black holes) when they use a quantum scalar field as a communication channel. We have shown that the energy variation of the total 2-qubits+field system, $\Delta E$, can be cast as $\Delta E =W_\phi + W_A + W_B + W_{AB}$.
This shows that such an energy variation has three main contributions: {\bf (1)} $W_\phi$, which accounts for the particle creation due to the change of the spacetime geometry; {\bf (2)} $W_A+ W_B$, which gives the energy needed to switch on/off the qubits $A$ and $B$ used by Alice and Bob, respectively, to communicate; {\bf (3)} $W_{AB}$, which describes the extra energy cost needed for the communication process. We have shown that, by suitably choosing Bob's initial (ready-to-measure) state, the term $W_{AB}$ vanishes. Such a condition is satisfied by the channel $\mathcal{E}$ considered here. We have then illustrated the communication process and analyzed how the classical channel capacity $C\left(\mathcal{E}\right)$ (and, as a result, the entanglement-assisted classical and quantum capacities as well) behaves in two paradigmatic examples in Minkowski spacetime: {\bf (A)} when sender and receiver are inertial observers and {\bf (B)} when the sender is inertial and the receiver is uniformly accelerated. By using example {\bf (A)}, we were able to analyze how the coupling constants as well as the initial field state influence $C\left(\mathcal{E}\right)$. Example {\bf (B)} enabled us to analyze how causal horizons affect the communication process when the field state is the inertial vacuum (which, by the Unruh effect, is perceived as a thermal state with temperature $T_U=a/2\pi$ by the uniformly accelerated receiver). We have ended the paper by using the behaviour of Alice's inertial qubit in Minkowski spacetime to estimate the energy cost of switching on/off the interaction, i.e., $W_A$. We have shown that, as one would expect, $W_A$ satisfies the energy-time relation $W_A \tau_{\alpha_A}=\epsilon_A^2/ 4\pi$, where $\tau_{\alpha_A}$ is the characteristic time of the switching-on/off process and $\epsilon_A$ is the coupling constant describing Alice's qubit interaction with the field $\phi$.
Hence, one would expect to spend an amount $W_X \sim \epsilon_X^2 \tau_{\alpha_X}^{-1}$, $X=A,B$, of energy in order to create the qubits and switch them on/off to perform some task. However, if one has already created the qubits for some purpose (and the energy cost for it is accounted for by $W_A+W_B$), there is no extra energy cost in using them to convey information. \acknowledgments I. B. and A. L. were fully and partially supported by S\~ao Paulo Research Foundation under Grants 2018/23355-2 and 2017/15084-6, respectively.
\section{Introduction} Colorado, 1989. At the Neural Information Processing Systems conference, a game-changing approach to image classification was presented by LeCun {\it et al.}~\cite{lecun1990handwritten}. A breakthrough in the field of computer vision was made when they used Convolutional Neural Networks trained by backpropagation to categorize low-resolution images of handwritten digits. Thenceforward, the resolution of high-level problems scaled expeditiously; the area of computer vision had reached a new and elaborate level. LeNet-1 --- a multi-layer network with convolutions with small-size kernels followed by a squashing function and additional local averaging and subsampling layers, diminishing the resolution of the generated feature map --- was just the beginning, and the evolution continues to this day. In 1998, LeCun~\cite{726791} published a paper reviewing miscellaneous approaches applied to handwritten character recognition and comparing them, showing that CNNs outperform all other techniques. It also proposed LeNet-5, an evolution of the original LeNet-1. Nonetheless, the simplicity of the architectures, directly tied to the hardware of the time, was short-lived. According to Girshick {\it et al.} (2014)~\cite{girshick2014rich}, CNNs saw massive usage in the 1990s~\cite{lecun1990handwritten,726791}, but then fell out of fashion with the rise of Support Vector Machines (SVMs). In 2012, however, Krizhevsky {\it et al.}~\cite{alexnet} revived the interest in CNNs by exhibiting considerably higher image classification accuracy --- $84.7\%$ --- on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC)~\cite{imagenet}, almost halving the error rate for object recognition at the time. Likewise, the increasing hardware processing capability and the popularization of Graphical Processing Units (GPUs) were additional assets to these works.
The impact of the convolutional network called AlexNet and the revival of the CNN topic on images can also be seen in the graph of publications per year since 1985, presented in Figure~\ref{chap1:cnn_occur}, from three of the largest digital research libraries of the Artificial Intelligence field: ACM, IEEE Xplore, and ScienceDirect. Correlated terms such as \textit{convolutional neural network, ConvNet} and \textit{deep learning} were also used to highlight the effects of this event. \begin{figure}[htpb] \centering \includegraphics[scale=0.5]{publication_by_year.pdf} \caption{Occurrences of the terms \textit{CNN, convolutional neural network, ConvNet} and \textit{deep learning} in 3 different repositories (ACM~\cite{acm}, Elsevier (ScienceDirect)~\cite{sciencedirect} and IEEE Xplore~\cite{ieeexplore}).} \label{chap1:cnn_occur} \end{figure} Figure~\ref{chap1:cnn_occur} also shows that brief research is enough to encounter a variety of state-of-the-art winning architectures. But are the early designs so out-of-date and unusable that we have to keep modifying them with such frequency? Besides complex architecture creations, new approaches have appeared that use more than one of the already established structures to solve classification problems, taking advantage of the infusion of more features into the model: Multistream, or Multichannel, Convolutional Neural Networks (MCNN) have been refined and used in many distinct applications~\cite{karpathy, abade, rw_gammulle2017two, feichtenhofer2016convolutional, 8219720, 8614102, DBLP:journals/tcsv/TuXDLY19, 8513556}. Originally derived from traditional CNNs, this kind of model architecture allows one to employ nearly any model available in the literature, conventional or not, such as \textit{LeNet}~\cite{726791}, \textit{AlexNet}~\cite{alexnet}, \textit{ResNet}~\cite{resnet} and many others, by basically adjusting (or modifying) the fusion stage~\cite{karpathy}.
Normally, multistream approaches are an alternative for multimodal tasks, such as video classification or gesture recognition, due to their capability of feeding the network with an extra set of features. In Karpathy \textit{et al.} (2014)~\cite{karpathy}, the fusion issue was taken to a whole new level: two individual single-frame networks were arranged a time delay apart and their outputs were merged in a fusion procedure, improving the accuracy scores on video action recognition applications. Feichtenhofer \textit{et al.} (2016)~\cite{feichtenhofer2016convolutional} went a little further on network fusions, showing that this additional step can be implemented at a convolutional layer without loss of accuracy. A model based on a cross-fusion method applied to multistream image classification approaches, the Lattice-Convolutional Neural Network (LCNN), was presented in Almeida \textit{et al.} (2019)~\cite{lattice}. This general architecture, which can be adapted to any CNN with a ReLU activation output, focuses on the fusion stages of the network. Its efficacy was demonstrated using input distractors; the cross-fusion strategy acts on the observed features, particularly at the ReLU blocks used by many classical MCNN models developed in several popular approaches (e.g. in \cite{karpathy,Velickovic_2016,feichtenhofer2016convolutional}). Since the LCNN model shows that targeting how the fusions are performed can improve the final model accuracy, in this paper we explore this technique further by expanding the tests of the aforementioned strategy, gathering more datasets for a fair comparison, and adapting eight traditional CNN models of different shapes: AlexNet~\cite{alexnet}, Xception~\cite{chollet2017xception}, ResNet-18, ResNet-34, ResNet-50~\cite{resnet}, DenseNet-121, DenseNet-169 and DenseNet-201~\cite{densenet}.
All these architectures were, at a given time, state-of-the-art in the ImageNet competition~\cite{imagenet}, a reference for the image classification task, and they fulfill the LCNN main requirement: ReLU activations. Here, we show that a slight modification can bring these once state-of-the-art models back into the competition. We reaffirm that our intention is not to achieve state-of-the-art accuracy, but rather to indicate that models that are no longer considered the best in performance can receive an extra life. This recycling process can provide power to out-of-date models, reusing a backbone that has already been developed. Using the CIFAR-10 dataset as reference, our lattice strategy outperforms a simple late fusion of the models by at most $30.78\%$ (L-ResNet-50 with the average operation) and at least $0.54\%$ (L-DenseNet-121 with the addition operation). Furthermore, it is adaptable to different types of mathematical operations, providing an auxiliary alternative when results with one operation are unsatisfactory. Thus, this paper is divided as follows: Section~\ref{relatedworks} presents a brief background on multiple-stream networks, our primary basis. Section~\ref{lattice} introduces the L-strategy module, while Sections~\ref{experiment} and~\ref{results} show our experiments and discussions, with positive and negative aspects of the technique. Section~\ref{conclusion} concludes the work with its future projections. \section{Background} \label{relatedworks} \subsection{Convolutional Neural Networks} The late 90's LeNet-5~\cite{726791} is a 7-layer CNN that receives as input an approximately size-normalized and centered 32x32 pixel image. This input size was meant to be larger than the images of the dataset in order to center the potential distinctive features of the data, such as corners. The first convolutional layer, C1, has six 28x28 sized feature maps. Each unit in each feature map is connected to a 5x5 neighborhood in the input.
Afterward, there is a subsampling layer with six 14x14 sized feature maps. These feature maps' units are connected to a 2x2 neighborhood in the corresponding feature map in C1. These connections are sustained until the third convolutional layer. This behavior causes a break of symmetry in the network, so different feature maps are forced to extract different features~\cite{726791}. Following the breakthrough models, the AlexNet architecture~\cite{alexnet} has eight layers. With an input size of $224 \times 224 \times 3$, the first convolutional layer filters the image with 96 kernels of size $11 \times 11 \times 3$ with a distance of 4 pixels between the receptive field centers of neighboring neurons in a kernel map. The output of the first convolutional layer becomes the input of the second convolutional layer. The third, fourth, and fifth convolutional layers are connected without pooling layers. The first two fully-connected layers use a dropout~\cite{dropout} technique to reduce data overfitting. ResNet~\cite{resnet} is also a deep architecture, one that relies on modules to prevent learning degradation with the increase of depth. Instead of learning a direct mapping of $x \rightarrow y$ with a few stacked non-linear layers ($H(x)$), a residual mapping is proposed, where $F(x) := H(x) - x$, which can be reframed into $H(x) = F(x) + x$, where $F(x)$ and $x$ represent the stacked non-linear layers and the identity function, respectively. According to He {\it et al.} (2015)~\cite{resnet}, it is easier to optimize the residual mapping than the unreferenced mapping. Likewise, the formulation $H(x) = F(x) + x$ can be realized by feedforward neural networks with shortcut connections. In the \textit{ResNet Module}, these shortcut connections perform identity mapping, and their outputs are added to the outputs of the stacked layers. To evaluate the strategy explored herein, we used ResNet-18, ResNet-34 and ResNet-50.
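The residual reformulation $H(x)=F(x)+x$ can be sketched in a few lines of plain numpy (fully connected layers stand in for convolutions; all shapes and weights below are illustrative, not those of an actual ResNet):

```python
import numpy as np

def relu(z):
    return np.maximum(0.0, z)

def residual_block(x, w1, w2):
    # The stacked layers learn only the residual F(x) = H(x) - x;
    # the identity shortcut adds x back: H(x) = F(x) + x.
    return relu(x @ w1) @ w2 + x

rng = np.random.default_rng(0)
d = 8
x = rng.standard_normal(d)

# If the stacked layers output zero, the block degenerates gracefully to
# the identity map, which is what keeps very deep stacks optimizable.
y = residual_block(x, np.zeros((d, d)), np.zeros((d, d)))
assert np.allclose(y, x)
```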
The Xception~\cite{chollet2017xception} network is an interpretation of Inception modules~\cite{googlenet} as linear stacks of depthwise separable convolution layers with residual connections. Moreover, in 2017, DenseNets, or Densely Connected Convolutional Networks~\cite{densenet}, were introduced. They are an architecture that connects each layer to every other layer in a feed-forward way, so maximum information flow between layers in the network is guaranteed. In this model, for each layer, the feature maps of all earlier layers are used as inputs, and its own feature maps are used as inputs to all following layers. Here, we modified the architectures DenseNet-121, DenseNet-169 and DenseNet-201. \subsection{Multistream Convolutional Neural Networks} Karpathy {\it et al.} (2014)~\cite{karpathy} proposed an empirical evaluation of CNNs on video classification. Noting that standard video classification approaches~\cite{dollar2005behavior, laptev2005space, laptev2008learning, 5206744} consisted of visual feature extraction, feature combination, and then classification, and that CNNs were achieving state-of-the-art results in different fields of computer vision~\cite{alexnet, seg2, obj1}, they suggested a new technique that includes the local motion information present in the video as connectivity in a CNN architecture. This technique is a multiresolution architecture that extends the connectivity of a CNN and uses the additional information as a way to improve overall performance. Using an AlexNet~\cite{alexnet} as a baseline, Karpathy {\it et al.} explore three different types of network fusion: early, late, and slow. These fusion stages, illustrated in Figure~\ref{karpathy}, are concatenations between layers, and late fusion is the preferred method to this day. \begin{figure*}[!htpb] \centering \includegraphics[width=.9\textwidth]{karpathy-models.png} \caption{Fusion stages presented by Karpathy {\it et al.} (2014)~\cite{karpathy}.
Figure originally found in~\cite{karpathy}.} \label{karpathy} \end{figure*} In the same year, Simonyan \textit{et al.} (2014)~\cite{simonyan2014two} proposed a two-stream architecture for the purpose of receiving the spatial and temporal components of a video. In that work, two fusion methods are considered: averaging, and training a multi-class linear SVM on stacked $L_{2}$-normalized softmax scores as features. Inspired by this proposal, Feichtenhofer {\it et al.} (2016)~\cite{feichtenhofer2016convolutional} propose a fusion evaluation of CNNs, to best take advantage of the additional information available to this kind of network. Using the two-stream architecture previously proposed by Simonyan {\it et al.} (2014)~\cite{simonyan2014two}, the authors consider different types of fusion for spatial --- sum, maximum, concatenation, convolution, and bilinear --- and temporal --- 3D pooling and 3D convolution + pooling --- features. \begin{figure*}[!htpb] \centering \includegraphics[width=.85\textwidth]{simon-model.png} \caption{Two-stream architecture. Figure originally found in~\cite{simonyan2014two}.} \label{simonyan} \end{figure*} It is still unusual to work with multiple streams in an image classification context. Aerial images, for example, not only have spatial and texture features but also contain a large amount of scene semantic information~\cite{rw_yu2018two}. Thus, a good feature representation is needed for positive classification results. Yu {\it et al.} (2018)~\cite{rw_yu2018two} proposed a two-stream CNN based on two pre-trained CNNs used as feature extractors to learn deep features from the original aerial image and from the aerial image processed through saliency detection, respectively. Right before an extreme learning machine classifier, the streams are fused with one of two distinct strategies -- concatenation and sum. Abade {\it et al.} (2019)~\cite{abade} use a multistream approach in order to diagnose plant diseases.
They adapt classical CNN models to train on and evaluate the PlantVillage dataset; the adaptation was made so that the different versions of the dataset could be used. The fusion strategy, however, is a simple late fusion. As previously shown, multiple-stream networks have been developed, applied, and used in many situations and applications nowadays~\cite{rw_gammulle2017two, 8219720, 8614102, DBLP:journals/tcsv/TuXDLY19, 8513556, 8371385}. Nevertheless, strategies for treating the signals between multistream layers are lacking in the literature. Velickovic \textit{et al.} (2016)~\cite{Velickovic_2016} present a cross-modal architecture for image classification, the X-CNN. Designed to deal with sparse data sets, X-CNNs are a typically image-based approach that allows weight-sharing and information exchange between hidden layers of a network by using cross-connections inserted after each pooling layer, as presented in Figure~\ref{xcnn}. Inspired by the biological cross-connections between various sensory networks, this strategy attempts to improve CNN predictions without requiring high input data dimensionality. \begin{figure*}[!htpb] \centering \includegraphics[width=\textwidth]{xcnn.png} \caption{X-CNN architecture. Figure originally found in~\cite{Velickovic_2016}.} \label{xcnn} \end{figure*} Akilan \textit{et al.} (2017)~\cite{8122666} explore late fusion upon different multi-deep CNNs used as feature extractors. Their approach uses a four-rule feature fusion -- product, sum, average and maximum -- in order to merge different CNN features. Based on previous works that ensembled distinct classifiers, such as K-Nearest Neighbors (KNN) and CNNs, they demonstrate once more that even simple fusion processes can improve classification results. Still following the line of ensembles of multiple CNN architectures, Amin-Naji \textit{et al.} (2019)~\cite{AMINNAJI2019201} also use several networks simultaneously trained on a dataset to solve an issue.
This time, however, the proposed methodology also intends to create multiple focuses on the dataset, improving the overall accuracy. For fusing the networks, a concatenation strategy is used. As said before, it is typical to observe multiple streams when treating videos, which have spatial and temporal portions, or when using different data modalities. In~\cite{Velikovi2018}, multimodal time-series data is analyzed using an X-LSTM technique. Each input passes through a separate three-layer LSTM stream, allowing a piece of information to flow using cross-connections between the streams in the second layer, where features from a stream are passed and concatenated with features from another stream. In~\cite{perez}, the authors propose a search space that covers numerous possible fusion architectures, given the nature of the multimodal dataset. The outputs of layers that perform a function, such as convolutions or poolings, are then eligible to be chosen for fusion in this approach. When a fusion point is selected, a concatenation operation is performed. The XFlow network~\cite{8894404} also brings an ensemble of different architectures, but with cross-connections as the fusing strategy. In 2020, Joze \textit{et al.}~\cite{joze2020mmtm} proposed a multimodal transfer module for CNN fusions (MMTM). The MMTM can be added at different levels of the model, and one main advantage is that the input tensors do not have to have the same spatial dimensions, as it performs squeeze and excitation operations. Also, each stream can be initialized with existing pre-trained weights, since minimal changes are made to the main structure. Although the module is not mainly used for image classification, it can be adapted to this task. An illustration of the module can be found in Figure~\ref{mtmm}, where $A$ and $B$ are layer features. \begin{figure}[!htpb] \centering \includegraphics[width=0.4\textwidth]{mtmm.png} \caption{MMTM module.
Figure originally found in~\cite{joze2020mmtm}.} \label{mtmm} \end{figure} \section{Lattice Cross-fusion Strategy} \label{lattice} In system analysis, there is a field that studies the enhancement or restoration of degraded signals~\cite{signals_oppenheim}. Inspired by the fact that there is no signal processing without degradation and that a CNN signal suffers losses along its course, we decided to apply basic signal operations between activation layers, especially those that use the ReLU (Rectified Linear Unit) function, $g(z) = \max\{0, z\}$, a non-linear function used by many classical MCNN models developed in several popular approaches (e.g. in~\cite{karpathy,Velickovic_2016,feichtenhofer2016convolutional}). ReLU outputs zero across half its domain, making the derivatives through it remain significant whenever the unit is active. Nonetheless, units cannot learn with gradients near zero~\cite{deep_learning_goodfellow}. In order to boost near-zero gradients and get hold of important features that may be left out, we combine two different signal streams and, in a crossing fashion, feed the result of the operation to the input of the subsequent layer. Equation~\ref{eq_fusion} presents a fusion $F$ of two signals $s, \bar{s}$, where $\odot$ represents the chosen mathematical operation. \begin{equation} \centering F(s,\bar{s}) = s \odot \bar{s}. \label{eq_fusion} \end{equation} Then, a layer $L$ can be defined as: \begin{equation} \centering \begin{aligned} L(s,\bar{s}) =[F_{a}(C_{a}(s), C_{b}(\bar{s})), \\ F_{b}(C_{b}(\bar{s}), C_{a}(s))]. \end{aligned} \label{lat_eq1} \end{equation} As described in Almeida \textit{et al.} (2019)~\cite{lattice}, the cross-fusion function $F: C_{a}, C_{b},\ldots, C_{k},\ldots, C_{n} \rightarrow y$ combines the $n-1$ convolutional layers, with $C_{k} \in \mathbb{R}^{h \times w \times d}$, where $h, w$ and $d$ are the height, width, and depth (number of channels/streams), respectively.
The cross-fusion general module is defined in Equation~\ref{lat_eq1}. It computes the operation $\odot$ of two convolutional layer inputs, $C_{a}$ and $C_{b}$, connecting the result as an input of the next layers $C_{a'}$ and $C_{b'}$. It is important to point out that any mathematical operation available in the Tensorflow library~\cite{abadi2016tensorflow} can be applied in the $\odot$ stage, such as addition, subtraction, or averaging. These fusion modules are repeated across all of the CNN's \textit{convolution-ReLU} layer sets, finishing with a late fusion right before the fully connected stack. Figure~\ref{generall} presents a visual definition of this proposed cross-fusion strategy. \begin{figure*}[!htpb] \centering \includegraphics[width=0.75\textwidth]{lcnnmodel_vertical_color.png} \caption{A general L-CNN model. Convolutional layers are represented by the color red, followed by a wine-colored ReLU indicator, while the fusion modules are dotted blue. Pooling layers are expressed by the color green. Other layers are included for full architecture disclosure.} \label{generall} \end{figure*} The signal crossing can be seen in the visualization of the L-strategy module in Figure~\ref{lmodule}. \begin{figure}[H] \centering \includegraphics[width=0.45\textwidth]{module.png} \caption{A closer look at the L-strategy module. $OP$ represents a mathematical operation.} \label{lmodule} \end{figure} With this signal enhancement, it is possible to empower CNNs with fewer layers and use simpler structures to achieve results as good as those of state-of-the-art networks that have many convolutional layers and modules and cannot run on modest hardware. The next section presents our methodology and demonstrates that our results, even with low-quality inputs, generally outperform classical late fusion approaches.
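The module above can be sketched on 1-D signals with NumPy stand-ins for the convolution-ReLU layers (the names `conv_relu` and `lattice_layer` are ours, and the identity filter is chosen only to keep the values easy to trace):

```python
import numpy as np

def conv_relu(x, w):
    # Stand-in for one convolution-ReLU layer on a 1-D feature signal
    return np.maximum(0.0, np.convolve(x, w, mode="same"))

def lattice_layer(s, s_bar, w_a, w_b, op=np.add):
    # One L-strategy module: each stream runs through its own
    # convolution-ReLU, then the activations are cross-fused with `op`
    # and fed to the next layer of each stream.
    c_a = conv_relu(s, w_a)
    c_b = conv_relu(s_bar, w_b)
    return op(c_a, c_b), op(c_b, c_a)

# Two toy input streams (e.g. RGB features and edge features)
s = np.array([1.0, -1.0, 2.0])
s_bar = np.array([0.5, 1.0, -0.5])
w = np.array([1.0])                 # identity filter
out_a, out_b = lattice_layer(s, s_bar, w, w, op=np.add)

# Late fusion right before the fully connected stack: concatenate streams
late = np.concatenate([out_a, out_b])
```

In a real model, `op` would be one of the tensor operations from Tensorflow (addition, subtraction, averaging), and the module would be repeated at every convolution-ReLU set.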
\section{Experimental evaluation} \label{experiment} Although adding features from another stream will naturally improve network accuracy, we want to demonstrate that the lattice strategy yields significant improvements even when distractors or low-quality features are included in the network input. Since our cross-fusion strategy can be applied to any network that has convolutions with ReLU activations, we chose to evaluate our method on eight distinct architectures, varying in depth, number of convolutional layers, and overall parameter count. Further, three fusion operations were used. \subsection{Network architectures} For our experimental evaluation, eight different network architectures were used as backbones for new networks using the cross-fusion module, all of them detailed in Section~\ref{relatedworks}. Except for AlexNet, the architectures are very large, so we do not detail the L-strategy blocks for each of them; full models can be found in the appendix. Three mathematical operations were implemented in our fusion module: average, addition, and subtraction. All experiments used a default SGD optimizer with a 0.01 learning rate.
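For clarity, the update rule behind this default configuration is plain stochastic gradient descent (a minimal sketch of the rule only; the experiments use the Keras implementation):

```python
def sgd_step(weights, grads, lr=0.01):
    # Plain SGD with the default learning rate used in our experiments:
    # w <- w - lr * dL/dw, with no momentum or weight decay.
    return [w - lr * g for w, g in zip(weights, grads)]

updated = sgd_step([1.0, -0.5], [10.0, -2.0])   # approximately [0.9, -0.48]
```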
\begin{table*}[!htpb] \centering \begin{tabular}{c|c|c} \textbf{Architecture} & \textbf{Type} & \textbf{Parameters} \\ \hline\hline \multirow{3}{*}{Alexnet} & Single stream & 23981450 \\ \cline{2-3} & Multistream - late fusion & 30087946 \\ \cline{2-3} & Multistream - lattice & 30087946 \\ \hline \multirow{3}{*}{ResNet-18} & Single stream & 11192458 \\ \cline{2-3} & Multistream - late fusion & 22384906 \\ \cline{2-3} & Multistream - lattice & 34937866 \\ \hline \multirow{3}{*}{ResNet-34} & Single stream & 21311754 \\ \cline{2-3} & Multistream - late fusion & 42623498 \\ \cline{2-3} & Multistream - lattice & 65295754 \\ \hline \multirow{3}{*}{ResNet-50} & Single stream & 23592842 \\ \cline{2-3} & Multistream - late fusion & 47185674 \\ \cline{2-3} & Multistream - lattice & 57338634 \\ \hline \multirow{3}{*}{DenseNet-121} & Single stream & 7047754 \\ \cline{2-3} & Multistream - late fusion & 14095498 \\ \cline{2-3} & Multistream - lattice & 14087306 \\ \hline \multirow{3}{*}{DenseNet-169} & Single stream & 12659530 \\ \cline{2-3} & Multistream - late fusion & 25319050 \\ \cline{2-3} & Multistream - lattice & 25305738 \\ \hline \multirow{3}{*}{DenseNet-201} & Single stream & 18341194 \\ \cline{2-3} & Multistream - late fusion & 36682378 \\ \cline{2-3} & Multistream - lattice & 36667018 \\ \hline \multirow{3}{*}{Xception} & Single stream & 20881970 \\ \cline{2-3} & Multistream - late fusion & 41763930 \\ \cline{2-3} & Multistream - lattice & 41728538 \\ \end{tabular} \caption{Network parameters by implementation type.} \label{parameter} \end{table*} \subsection{Data} Three sets of data were used to evaluate our fusion strategy: CIFAR-10~\cite{cifar10}, CIFAR-50, and CIFAR-100~\cite{cifar100}. Three different sizes of datasets were chosen to check our strategy performance with a distinct amount of data samples. 
In order to create images for the second stream input, we generated a mirrored input through edge extraction with a Canny edge detector~\cite{Canny:1986:CAE:11274.11275}. Figure~\ref{fig_2} presents an example of our input streams. It is also important to emphasize that all images were resized to a $224 \times 224$ shape so that they fit the default input shapes of most of our networks. \begin{itemize} \item The CIFAR-10 dataset is composed of 60000 colored images in a $32 \times 32$ resolution, distributed in 10 balanced classes. There are 50000 training images and 10000 test images. \item Instead of 10 classes, like the previously cited CIFAR-10, CIFAR-100 has 100 classes with 600 $32 \times 32$ images in each class. Within the one hundred classes, there are 20 superclasses that generalize certain labels, but these were not considered in this work. \item To produce the CIFAR-50 dataset, we randomly chose fifty classes from the original CIFAR-100 dataset. The final samples consisted of 50 classes with 600 images each, 500 for training and 100 for testing. \end{itemize} \begin{figure}[!htpb] \centering \begin{subfigure}[t]{.35\textwidth} \centering \includegraphics[width=\linewidth]{input_example_a.jpg} \caption{RGB image.} \label{fig_2:sub1} \end{subfigure} \begin{subfigure}[t]{.35\textwidth} \centering \includegraphics[width=\linewidth]{input_example_b.jpg} \caption{Image with edge detection.} \label{fig_2:sub2} \end{subfigure} \caption{A class sample from the CIFAR dataset~\cite{cifar10}. In (a), the input is a $224 \times 224$ RGB image, used as the models' first input stream. In (b), the same image with edge detection is the input of the second stream.} \label{fig_2} \end{figure} \subsection{NORB Object Recognition Dataset} The main goal of this dataset is to recognize 3D objects from shape. It contains pictures of 50 toys belonging to 5 general categories: airplanes, four-legged animals, trucks, human figures, and cars.
The dataset provides a pair of images for every toy: the items were imaged by a pair of cameras under 9 elevations (30 to 70 degrees, every 5 degrees), 6 lighting conditions, and 18 azimuths (0 to 340, every 20 degrees)~\cite{norb}. \begin{figure}[!htpb] \centering \includegraphics[width=.5\textwidth]{norb.png} \caption{Pair of images found in the NORB dataset.} \label{norb} \end{figure} \subsection{Hardware and training time} All networks -- except for AlexNet -- were adapted from their implementations in the Keras library~\cite{chollet2015keras}. Fusion operations were created using tensor operations available in the Tensorflow framework~\cite{abadi2016tensorflow}. The networks were trained on one RTX 3090 GPU, and all experiments, including those on the NORB dataset, took approximately 4 months. \section{Results and discussion} \label{results} All of the CIFAR datasets had their training sets subsampled in a cross-validation procedure with 5 folds. The following accuracy results are presented by fold. No data augmentation or transfer learning was used in the whole process. Due to the large number of experiments, we chose to gather all accuracies in one figure. In all plots, the color \textit{blue} represents the late fusion version of the network, while \textit{orange}, \textit{green}, and \textit{red} represent the L-fusion architectures with the average, addition, and subtraction operations, respectively.
\begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{alexnet-10.png} \caption{Alexnet - CIFAR-10} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{alexnet-50.png} \caption{Alexnet - CIFAR-50} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{alexnet-100.png} \caption{Alexnet - CIFAR-100} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{dense121-10.png} \caption{DenseNet-121 - CIFAR-10} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{dense121-50.png} \caption{DenseNet-121 - CIFAR-50} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{dense121-100.png} \caption{DenseNet-121 - CIFAR-100} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{dense169-10.png} \caption{DenseNet-169 - CIFAR-10} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{dense169-50.png} \caption{DenseNet-169 - CIFAR-50} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{dense169-100.png} \caption{DenseNet-169 - CIFAR-100} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{dense201-10.png} \caption{DenseNet-201 - CIFAR-10} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{dense201-100.png} \caption{DenseNet-201 - CIFAR-50} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{dense201-100.png} \caption{DenseNet-201 - CIFAR-100} \end{subfigure} \caption{Compilation of accuracy comparisons using Alexnet and DenseNet architecture variations.} \label{fig:alexdense} \end{figure} 
\begin{figure} \centering \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{xception-10.png} \caption{Xception - CIFAR-10} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{xception-50.png} \caption{Xception - CIFAR-50} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{xception-100.png} \caption{Xception - CIFAR-100} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{resnet18-10.png} \caption{ResNet-18 - CIFAR-10} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{resnet18-50.png} \caption{ResNet-18 - CIFAR-50} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{resnet18-100.png} \caption{ResNet-18 - CIFAR-100} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{resnet34-10.png} \caption{ResNet-34 - CIFAR-10} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{resnet34-50.png} \caption{ResNet-34 - CIFAR-50} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{resnet34-100.png} \caption{ResNet-34 - CIFAR-100} \end{subfigure} \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{resnet50-10.png} \caption{ResNet-50 - CIFAR-10} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{resnet50-50.png} \caption{ResNet-50 - CIFAR-50} \end{subfigure} \hfill \begin{subfigure}[b]{0.3\textwidth} \centering \includegraphics[width=\textwidth]{resnet50-100.png} \caption{ResNet-50 - CIFAR-100} \end{subfigure} \caption{Compilation of accuracy comparisons using Xception and ResNet architecture variations.} \label{fig:xcepres} \end{figure} \begin{figure} \centering 
\begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{alexnet_loss.png} \caption{AlexNet losses per fold - CIFAR-10 dataset} \end{subfigure} \hfill \begin{subfigure}[b]{0.49\textwidth} \centering \includegraphics[width=\textwidth]{resnet18_loss.png} \caption{ResNet-18 losses per fold - CIFAR-10 dataset} \end{subfigure} \begin{subfigure}[b]{0.5\textwidth} \centering \includegraphics[width=\textwidth]{densenet169_loss.png} \caption{DenseNet-169 losses per fold - CIFAR-100 dataset} \end{subfigure} \caption{Loss comparison per fold showing different architectures and distinct inputs.} \label{fig:losses} \end{figure} Considering the increase of the network parameters presented in Table~\ref{parameter}, but also an accuracy gaining of $5.66\%$ using a default L-strategy average operation over a simple late fusion technique, independently of the given dataset or dataset quality, it is fair to say that the occasional increase in parameters is outweighed by the performance boost. Our losses graph, Figure~\ref{fig:losses}, also shows that the lattice-adapted networks tend to converge faster. The goal of this work is not to beat any kind of state-of-the-art models, as previously noted. Thus, it is remarkable to achieve strong accuracies with very degraded inputs. Our L-Xception with the average operation, for example, reached an $87.02\%$ accuracy on CIFAR-10. Also, using the same dataset as a comparator, a late fusion Alexnet went from $42.69\%$ to $58.66\%$. Alexnet has a very straightforward backbone, being easy to implement and it can run smoothly on plain hardware. Tables~\ref{tb:allacc10},~\ref{tb:allacc50}, and~\ref{tb:allacc100} present the mean accuracy of the folds for all models and datasets tested here, being a summarized version of the graphs presented in Figures~\ref{fig:alexdense}, and~\ref{fig:xcepres}. 
\newpage \KOMAoptions{paper=landscape,pagesize} \newgeometry{top=2cm,textwidth=19cm,textheight=18cm} \begin{table}[!htpb] \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \multicolumn{2}{c|}{Architecture} & Alexnet & DenseNet-121 & DenseNet-169 & DenseNet-201 & Xception & ResNet-18 & ResNet-34 & ResNet-50 \\ \hline\hline \multicolumn{2}{c|}{MCNN} & 0.4269 & 0.77938 & 0.75268 & \textbf{0.83398} & 0.83796 & 0.55552 & 0.44064 & 0.28746 \\ \hline \multirow{3}{*}{LCNN} & \textit{avg} & \textbf{0.58664} & 0.54228 & 0.54044 & 0.57288 & 0.8702 & \textbf{0.66346} & 0.50456 & \textbf{0.59532}\\ \cline{2-10} & \textit{add} & 0.1 & \textbf{0.78486} & 0.76316 & 0.83204 & 0.8689 & 0.5501 & 0.56054 & 0.54876 \\ \cline{2-10} & \textit{sub} & 0.1 & 0.774 & 0.7706 & 0.80192 & \textbf{0.87172} & 0.64964 & \textbf{0.63494} & 0.56318 \end{tabular} \caption{Mean accuracies of all trained models for CIFAR-10.} \label{tb:allacc10} \end{table} \begin{table}[!htpb] \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \multicolumn{2}{c|}{Architecture} & Alexnet & DenseNet-121 & DenseNet-169 & DenseNet-201 & Xception & ResNet-18 & ResNet-34 & ResNet-50 \\ \hline\hline \multicolumn{2}{c|}{MCNN} & 0.18856 & 0.3574 & 0.35958 & 0.54936 & 0.71128 & 0.2506 & 0.27376 & 0.30304 \\ \hline \multirow{3}{*}{LCNN} & \textit{avg} & \textbf{0.336} & 0.47392 & 0.49604 & 0.48392 & 0.69468 & 0.29904 & 0.25228 & \textbf{0.3896}\\ \cline{2-10} & \textit{add} & 0.1 & \textbf{0.55664} & 0.5418 & \textbf{0.60304} & 0.69968 & 0.37636 & 0.37844 & 0.34484\\ \cline{2-10} & \textit{sub} & 0.1 & 0.53472 & \textbf{0.54212} & 0.576 & \textbf{0.7122} & \textbf{0.39008} & \textbf{0.39488} & 0.36508 \end{tabular} \caption{Mean accuracies of all trained models for CIFAR-50.} \label{tb:allacc50} \end{table} \begin{table}[!htpb] \centering \begin{tabular}{c|c|c|c|c|c|c|c|c|c} \multicolumn{2}{c|}{Architecture} & Alexnet & DenseNet-121 & DenseNet-169 & DenseNet-201 & Xception & ResNet-18 & ResNet-34 & ResNet-50 \\ \hline\hline 
\multicolumn{2}{c|}{MCNN} & 0.17308 & \textbf{0.77938} & 0.75268 & \textbf{0.83398} & 0.64986 & 0.25368 & \textbf{0.3718} & \textbf{0.3094}\\ \hline \multirow{3}{*}{LCNN} & \textit{avg} & \textbf{0.3058} & 0.77306 & \textbf{0.7905} & 0.78772 & 0.65028 & 0.27036 & 0.37116 & 0.29326\\ \cline{2-10} & \textit{add} & 0.1 & 0.49234 & 0.5043 & 0.54928 & \textbf{0.66294} & 0.32088 & 0.31384 & 0.2655\\ \cline{2-10} & \textit{sub} & 0.1 & 0.48726 & 0.49514 & 0.53238 & 0.66398 & \textbf{0.3709} & 0.3244 & 0.29738 \end{tabular} \caption{Mean accuracies of all trained models for CIFAR-100.} \label{tb:allacc100} \end{table} \newpage \KOMAoptions{paper=portrait,pagesize} \recalctypearea After the experiments with the variant and degraded CIFAR datasets, we decided to better understand our proposal by applying our method to a real, less commonly used dataset. Table~\ref{tb:norb} presents a comparison between three different-sized architectures in three forms: single stream, multistream with late fusion, and multistream with our lattice cross-connections. The results show that at least one lattice operation outperformed all other configurations, demonstrating the flexibility of the technique when a given operation underperforms.
\begin{table}[!htpb] \centering \begin{tabular}{c|c|c|c|c} \multicolumn{2}{c|}{Architecture} & DenseNet-169 & ResNet-50 & Alexnet \\ \hline\hline \multicolumn{2}{c|}{Single stream - left image} & 0.6021 & 0.2 & 0.2 \\ \hline \multicolumn{2}{c|}{Multistream - late fusion} & 0.3727 & 0.6785 & 0.2 \\ \hline \multirow{3}{*}{Multistream - lattice} & \textit{average} & \textbf{0.7091} & 0.4682 & 0.2 \\ \cline{2-5} & \textit{addition} & 0.5253 & 0.3138 & 0.2 \\ \cline{2-5} & \textit{subtraction} & 0.5156 & \textbf{0.7558} & \textbf{0.8321} \end{tabular} \caption{NORB accuracies on the test set.} \label{tb:norb} \end{table} \subsection{What if the L-strategy doesn't improve my results?} There is a possibility that the described module and its default operators will over-amplify signals in a way that the model simply does not learn. From signal processing, we know that when such peaks occur, a compression can be applied to bring low and high signals closer together. Therefore, we propose the use of a logarithm-based compression function that balances the broad range, reducing the difference between large and small values. This function is described in Equation~\ref{eq:log}, where $s$ and $\bar{s}$ are the incoming signal streams. \begin{equation} \centering F(s,\bar{s}) = s - \log(1 - \bar{s}). \label{eq:log} \end{equation} An example is our L-VGG-16~\cite{vgg} implementation. Our first run, using an \textit{average} function, went well, but not well enough to beat the M-VGG-16 architecture. Further runs with the other operators overamplified both tensor streams, and the model reached a poor local minimum very quickly, leaving no room for optimization. Figure~\ref{fig:vgg} shows a comparison between operations and modeling types. \begin{figure}[H] \centering \includegraphics[width=.5\textwidth]{vgg16-10.png} \caption{VGG-16 results on CIFAR-10.} \label{fig:vgg} \end{figure} Table~\ref{tb:vgg} presents all obtained VGG-16 accuracies for each fold on the test set.
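For concreteness, the compression in Equation~\ref{eq:log} can be sketched numerically (the values are illustrative, `log_fuse` is our name for the function, and the sketch assumes the incoming $\bar{s}$ has been scaled below $1$ so the logarithm is defined):

```python
import numpy as np

def log_fuse(s, s_bar):
    # F(s, s_bar) = s - log(1 - s_bar), the log-based reshaping of
    # s_bar's contribution from Equation (eq:log). Assumes s_bar < 1.
    return s - np.log(1.0 - s_bar)

s = np.array([0.2, 0.2, 0.2])
s_bar = np.array([0.1, 0.5, 0.9])
fused = log_fuse(s, s_bar)   # monotone in s_bar: larger activations still rank higher
```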
We can clearly see that our log-compression function improved the overall results. \begin{table*}[!htpb] \centering \begin{tabular}{c|ccccc} \multicolumn{1}{c|}{} & \multicolumn{5}{c}{Architecture} \\ \cline{2-6} \multicolumn{1}{c|}{\multirow{-2}{*}{Fold}} & M-VGG-16 & L-VGG-16-avg & L-VGG-16-log & L-VGG-16-add & L-VGG-16-sub \\\hline\hline 1 & 0.6454 & 0.6021 & \textbf{0.7265} & 0.1 & 0.1 \\ 2 & 0.672 & 0.628 & \textbf{0.7426} & 0.1 & 0.1 \\ 3 & 0.6847 & 0.638 & \textbf{0.7562} & 0.1 & 0.1 \\ 4 & 0.6963 & 0.6397 & \textbf{0.7594} & 0.1 & 0.1 \\ 5 & 0.6966 & 0.6377 & \textbf{0.7555} & 0.1 & 0.1 \\\hline Average & 0.679 & 0.6291 & \textbf{0.74804} & 0.1 & 0.1 \end{tabular} \caption{VGG-16 accuracies on the test set for each fold.} \label{tb:vgg} \end{table*} Notice that the aforementioned technique is meant to be used with all default hyperparameters; still, it does not work in every situation. A VGG-19 with standard initialization will continue to have its signal overamplified, converging too quickly. However, basic optimization adjustments are sufficient to train the network smoothly: the learning rate, for example, directly controls the size of the weight updates, so when faced with the large amplifications caused by the L-strategy, it is usually enough to decrease the starting learning rate or apply a scheduler. To illustrate this, we present in Table~\ref{tb:vgg19} a comparison between a multistream late fusion approach and the L-strategy approach with a non-default learning rate of $10^{-4}$, since the regular parameterization generated unusable results.
\begin{table*}[!htpb] \centering \begin{tabular}{c|cccc} \multicolumn{1}{c|}{} & \multicolumn{4}{c}{Architecture} \\ \cline{2-5} \multicolumn{1}{c|}{\multirow{-2}{*}{Fold}} & M-VGG-19 & L-VGG-19-avg & L-VGG-19-add & L-VGG-19-sub \\\hline\hline 1 & 0.6663 & 0.5643 & 0.6511 & \textbf{0.6671} \\ 2 & 0.6625 & 0.5064 & 0.6952 & \textbf{0.7052} \\ 3 & 0.6932 & 0.5252 & 0.7121 & \textbf{0.7184} \\ 4 & 0.6911 & 0.5485 & \textbf{0.7289} & 0.7245 \\ 5 & 0.6455 & 0.5553 & 0.7338 & \textbf{0.7349} \\ \hline Average & 0.67172 & 0.53994 & 0.70422 & \textbf{0.71002} \end{tabular} \caption{VGG-19 accuracies on the test set for each fold.} \label{tb:vgg19} \end{table*} Finally, we observe that the results presented in this section are not uniform: there is no single best fusion operation. Although we recommend starting with \textit{average}, we arrived at our results through trial and error. \section{Conclusions and future works} \label{conclusion} In this work, we presented a novel fusion technique to improve multistream image classification. Using different backbones and 4 datasets, experimental results show that the proposed strategy converges faster and offers the flexibility of switching operations. Likewise, the proposed fusion demonstrated robustness and stability, even when distractors are used as inputs. Still, there is no definitive technique: we showed that accuracy can increase by up to $63.21\%$, but also that our strategy can overamplify the signal to the point where the default hyperparameters are not enough to train a good model. Our goal of reusing previous state-of-the-art architectures with few modifications, keeping them in the game, was successfully achieved. In other words: the L-strategy module can make older models usable for brand-new challenges.
Future work will focus on improving a range of other models, following an exploratory line of research into new techniques that extend the life of models becoming obsolete. Additionally, we intend to explore not only CNNs but every kind of structure that includes a ReLU activation, such as Long Short-Term Memory (LSTM) networks and Generative Adversarial Networks (GANs), evaluating the proposed technique on other data modalities. \bibliographystyle{elsarticle-num}
\section{Introduction} The origins of epidemiological modeling extend as far back as the 18th century \cite{bookModelingInfectiousDiseases}, with Bernoulli's 1760 treatise on smallpox considered one of its foundational documents~\cite{BernoulliRevisited}. In the 20th century, the field experienced significant growth, including the introduction of approaches that employed dynamic systems theory \cite{bookModelingInfectiousDiseases}. In particular, Kermack and McKendrick's 1932 \textit{Contributions to the Mathematical Theory of Epidemics} introduced compartmental models, which are heavily used to this day \cite{kermack1932contributions,PARE2020Overview}. With this method, members of the population are compartmentalized based on their current state of health. The most common models are the SIS, SIR, and SEIR models, in which people are grouped as: \textit{susceptible (S), exposed (E), infected (I),} or \textit{recovered (R)} \cite{PARE2020Overview}. Expansions of the aforementioned models have also been developed. For instance, in light of the COVID-19 pandemic, Giordano \textit{et al.} constructed a SIDARTHE-V model that subdivides the infected population according to the detection and severity of their symptoms and includes vaccination levels \cite{ giordano2021sidarthe-vaccines}. We include a vaccinated compartment in our model because vaccines are a powerful tool in mitigating epidemic spread and saving lives \cite{bookModelingInfectiousDiseases}. The modern practice of vaccination is rooted in Edward Jenner's 1796 experiments on smallpox \cite{jenner1800inquirysmall_cow_pox,hilleman2000historyofvaccines}. Since then, incredible breakthroughs have been made in vaccinations, including the use of vaccines to eradicate smallpox \cite{henderson2011eradicationofsmallpox}. Today, vaccines are widely used around the world as early as infancy to prevent infections \cite{WHO_ImmunizationCoverage}. 
Due to the power of vaccines, the drastic death rates caused by the spread of the novel SARS-CoV-2 virus have spurred worldwide vaccination efforts to combat the virus \cite{dong2020JohnHopkinsDataInteractiveDashboard,OurWorldinData_vaccines2021global}. In the U.S., three vaccines are currently available: Pfizer-BioNTech, Moderna, and Johnson \& Johnson’s Janssen \cite{CDC_vaccine_types}. Nevertheless, vaccine hesitancy remains a real, growing concern for public health \cite{dube2013vaccine_Hesitancyoverview}. An individual's vaccine hesitancy is affected by context, interpersonal experiences, and disease-specific metrics \cite{SAGEGroup2015vaccinehesitancydef}. In the United States, vaccine hesitancy is especially apparent regarding the reception of the COVID-19 vaccine. As of September 11, 2021, 53\% of Americans have been fully vaccinated against COVID-19, with an additional 9.2\% partially vaccinated \cite{OurWorldinData_vaccines2021global}. However, these levels are lower than those predicted by a February 2021 survey by Pew Research Center. According to the survey, 69\% of Americans reported they would probably, definitely, or had already been at least partially vaccinated \cite{Pew_AmericanVaccineOpinions}. In fact, the number of vaccines administered daily in the U.S. in September 2021 is less than half of those administered daily in mid-April 2021 \cite{OurWorldinData_vaccines2021global}. There have been dozens of studies on COVID-19 vaccine hesitancy around the world \cite{sowa2021VaccHesitancyPoland,oliveira2021Covid-19VaccineHesitancyBrazil,aw2021VaccineHes_High_Income_Countries}. There are also studies on how vaccination levels affect the dynamics of disease spread. Pires and Crokidakis explore the effect of vaccine opinions on the spread of a disease \cite{pires2017Epidemic_dynamicsw_Vaccination}. Similarly, \cite{pires2018sudden_ContinousOpinions} considers the dynamics when opinions are continuous.
However, to the best of the authors' knowledge, the correlation between a maximum vaccination capacity $\kappa$ and the equilibria of the system dynamics \cite{dhondt1988carrying} has not yet been established. This work analytically illustrates the effect of $\kappa$ on the dynamics of COVID-19 spread. Specifically, $\kappa$ is shown to determine the endemic and disease-free equilibria of the SIRS-V$_\kappa$ system. \section{SIRS-V$_{\kappa}$ Model} In this paper, we take an SIRS model and add a vaccinated compartment with a vaccine confidence $\kappa$ as the upper bound of the vaccination level. The result is an \textit{SIRS-V$_\kappa$} model: \textit{susceptible (S), infected (I), recovered (R), vaccinated (V)}. Figure \ref{fig:SIHRVDcompartments} gives a graphical representation of the compartmental flow. \begin{definition}[Vaccine Hesitance]\label{def:vaccine_hesitance} We define the vaccine hesitance as $\kappa^{-1}$, where $\kappa$ is the vaccine confidence. \end{definition} Each compartment in the grouped SIRS-V$_\kappa$ model is a scalar between $0$ and $1$ modeling the fraction of the population in that epidemic compartment at time $t$. The following equations describe the flow from one compartment to another in the grouped SIRS-V$_\kappa$ model: \begin{subequations}\label{eq:sirsv_group} \begin{align} \dot{S} & = -\beta S I - \rho \left( 1 - \frac{V}{\kappa}\right) S + \omega R \label{eq:sirsv_group_S} \\ \dot{I} & = \beta S I -\gamma I \label{eq:sirsv_group_I} \\ \dot{R} & = \gamma I - \omega R -\rho\left(1 - \frac{V}{\kappa}\right) R \label{eq:sirsv_group_R} \\ \dot{V} & = \rho\left(1 - \frac{V}{\kappa}\right) (S + R).
\label{eq:sirsv_group_V} \end{align} \end{subequations} The parameters are defined as follows: \begin{itemize} \item $\beta$ is the frequency-dependent transmission rate \item $\gamma$ is the recovery rate \item $\rho$ is the rate of vaccination roll-out \item $\kappa$ is the vaccine confidence of the population \item $\omega$ is the rate of waning natural immunity. \end{itemize} All of the above parameters are defined in the range $(0, \infty)$. \begin{figure}[!htpb] \centering \vspace{1ex} \includegraphics[width=\columnwidth]{images/SIRV_square.PNG} \caption{SIRS-V$_\kappa$ Compartments.} \label{fig:SIHRVDcompartments} \end{figure} \begin{assumption}[Frequency dependent and population preserving system]\label{asm:00} We assume $S(t_0), I(t_0), R(t_0)\in [0, 1]$, $V(t_0) \in [0, \underline{\kappa}]$, where $\underline{\kappa} := \min\{1, \kappa\}$, and $S(t_0) + I(t_0) + R(t_0) + V(t_0) = 1$. Furthermore, all parameters of the system are strictly positive. \end{assumption} \begin{lemma}[Permissible range of $V$]\label{lem:kappa_range} Let Assumption~\ref{asm:00} hold. Then $0 \leq V(t) \leq \underline{\kappa}$ $\forall t \geq t_0$. \end{lemma} \begin{proof} Suppose Assumption \ref{asm:00} holds. We first note that $\dot{V}(t_0) = \rho(S(t_0) + R(t_0)) \geq 0$ when $V(t_0) = 0$. Furthermore, $\lim_{V(t_0)\downarrow 0} \dot{V}(t_0) = \rho(S(t_0) + R(t_0)) \geq 0$. Next, consider the upper bound of $V(t)$: $\dot{V}(t_0) = 0$ when $V(t_0) = \underline{\kappa}$. Moreover, for any $V(t_0) \leq \underline{\kappa}$, $\lim_{V(t_0)\uparrow \underline{\kappa}} \dot{V}(t_0) \geq 0$. Therefore, $V(t) \in [0, \underline{\kappa}]$ $\forall t \geq t_0$ for all permissible initial values in Assumption \ref{asm:00}. \end{proof} The following lemma shows that the bounds on the initial state apply to all states for all $t\geq t_0$.
\begin{lemma}[Admissible range of state values] \label{lem:00} Let Assumption~\ref{asm:00} hold. Then $S(t), I(t), R(t), V(t) \in [0, 1]$ and $S(t) + I(t) + R(t) + V(t) = 1$ $\forall t \geq t_0$. \end{lemma} \begin{proof} We first observe that $\dot{S}(t) + \dot{I}(t) + \dot{R}(t) + \dot{V}(t) = 0$ $\forall t \geq t_0$ by summing \eqref{eq:sirsv_group_S}-\eqref{eq:sirsv_group_V}; then, integrating: \begin{align*} \int_{t_0}^t \dot{S}(\tau) + \dot{I}(\tau) + \dot{R}(\tau) + \dot{V}(\tau)\, d\tau & = 0 \\ S(t) + I(t) + R(t) + V(t) & = 1\ \ \forall t \geq t_0, \end{align*} since the initial condition is given as $S(t_0) + I(t_0) + R(t_0) + V(t_0) = 1$. To show that $S(t), I(t), R(t), V(t) \in [0, 1]$ $\forall t \geq t_0$, we rewrite the state vector as $z(t) \in \mathbb{R}^{4}$. We note that for any $j \in \{1, \hdots, 4\}$, $\dot{z}_j = -z_jf_j(z_{k\neq j}) + g_j(z)$, where $f_j(z_{k\neq j}), g_j(z) \geq 0$ $\forall z \geq 0$ and $z_{k\neq j}$ is the state vector $z$ without its $j^{th}$ entry. This applies to $\dot{z}_j = \dot{V}$ as well, as demonstrated in Lemma~\ref{lem:kappa_range}. Then, we observe that $\lim_{z_j\downarrow 0}\dot{z}_j(t) = g_j(z) \geq 0\ \ \forall z \geq 0$. Therefore, $z(t) \geq 0$ when $z(t_0) \geq 0$, and $z_j(t) \leq 1$ follows directly from $z_j(t) \geq 0$ and $S(t) + I(t) + R(t) + V(t) = 1$. \end{proof} We denote the set of admissible states of \eqref{eq:sirsv_group} by $\mathcal{S} = \{(S(t), I(t), R(t), V(t)):\ S(t) + I(t) + R(t) + V(t) = 1\ \wedge\ S(t), I(t), R(t) \in [0, 1]\ \wedge\ V(t) \in [0, \underline{\kappa}]\ \forall t \geq t_0\}$. \section{Stability Analysis} This section establishes the existence and uniqueness of the two equilibria of the SIRS-V$_\kappa$ model and their stability conditions.
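Before the formal analysis, the conservation and invariance properties above can be checked numerically. Below is a minimal forward-Euler sketch of system~\eqref{eq:sirsv_group}; the parameter values are illustrative choices of ours, not fitted to data:

```python
def simulate_sirsv(beta, gamma, rho, kappa, omega,
                   S0, I0, R0, V0, dt=0.01, steps=20000):
    # Forward-Euler integration of the SIRS-V_kappa system.
    S, I, R, V = S0, I0, R0, V0
    for _ in range(steps):
        vac = rho * (1.0 - V / kappa)          # vaccination pressure
        dS = -beta * S * I - vac * S + omega * R
        dI = beta * S * I - gamma * I
        dR = gamma * I - omega * R - vac * R
        dV = vac * (S + R)
        S, I, R, V = S + dt * dS, I + dt * dI, R + dt * dR, V + dt * dV
    return S, I, R, V

# Illustrative parameters: R_0 = beta/gamma = 2, vaccine confidence kappa < 1
S, I, R, V = simulate_sirsv(beta=0.4, gamma=0.2, rho=0.05,
                            kappa=0.7, omega=0.1,
                            S0=0.99, I0=0.01, R0=0.0, V0=0.0)
```

Since the four derivatives sum to zero term by term, the Euler iterates preserve $S+I+R+V=1$ up to floating-point error, and $V$ stays in $[0, \underline{\kappa}]$, consistent with Lemmas~\ref{lem:kappa_range} and~\ref{lem:00}.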
We denote an equilibrium of the SIRS-V$_\kappa$ model as $(S^*,I^*,R^*,V^*)$, where $S^*$, $I^*$, $R^*$, and $V^*$ are the steady-state values of \eqref{eq:sirsv_group} as $t \to \infty$. \begin{definition}[Disease-Free Equilibrium]\label{def:DFE} A disease-free equilibrium (DFE) is an equilibrium with the steady-state infected level $I^{(d)} = 0$, where $\dot{S}^{(d)} = \dot{I}^{(d)} = \dot{R}^{(d)} = \dot{V}^{(d)} = 0$. \end{definition} Similarly, we define the endemic equilibrium as follows: \begin{definition}[Endemic Equilibrium Point]\label{def:eep} An endemic equilibrium point (EEP) is an equilibrium with the steady-state infected level $I^{(e)} > 0$, where $\dot{S}^{(e)} = \dot{I}^{(e)} = \dot{R}^{(e)} = \dot{V}^{(e)} = 0$. \end{definition} \begin{lemma}[Strictly Positive EEP]\label{lemma:positive_eep} If there exists an endemic equilibrium $(S^{(e)}, I^{(e)}, R^{(e)}, V^{(e)})$, then it is strictly positive, i.e., $z^{(e)} > 0$ $\forall z \in \{S, I, R, V\}$. Furthermore, $V^{(e)} = \underline{\kappa}$. \end{lemma} \begin{proof} Assume, by way of contradiction, that $S^{(e)} = 0$. Then, from \eqref{eq:sirsv_group_I}, $\dot{I}^{(e)} = -\gamma I^{(e)} \neq 0$ since $I^{(e)} > 0$, which contradicts the definition of EEP. Therefore, $S^{(e)} > 0$. A similar argument, substituting $R^{(e)} = 0$ into \eqref{eq:sirsv_group_R}, gives $R^{(e)} > 0$. Therefore, from \eqref{eq:sirsv_group_V}, we have \begin{equation}\label{eq:eep_sirs_network_v} 0 = \rho \left(1 - \frac{V^{(e)}}{\kappa}\right) (S^{(e)} + R^{(e)}), \end{equation} and since $S^{(e)} + R^{(e)} > 0$, the only way for the R.H.S. to equal zero is $V^{(e)} = \kappa$. Notice that $V^{(e)} = \kappa$ is incompatible with Lemma~\ref{lem:kappa_range} when $\kappa > 1$, and with $S^{(e)} > 0$ when $\kappa = 1$; therefore, the EEP only exists when $\kappa < 1$, in which case $V^{(e)} = \kappa = \underline{\kappa}$, concluding the proof. 
\end{proof} Lastly, we reiterate the definition of the basic reproduction number through the next generation matrix method \cite{diekmann1990definition, van2002reproduction}: \begin{definition}[Basic and Effective Reproduction Number] Let $\dot{z}_i = f_i(z) - g_i(z)$ be the differential equation of the $i^{th}$ infected compartment, where $f_i(z)$ is the function governing the rate of appearance, $g_i(z)$ is the function governing the rate of transferring into other compartments, and $i \in \{1, \hdots, m\}$ for $m$ infected compartments out of $n$ total compartments. Let $F=\left[\frac{\partial f_i(z_0)}{\partial z_j}\right]$ and $G=\left[\frac{\partial g_i(z_0)}{\partial z_j}\right]$ be the Jacobians of $[f_i]$ and $[g_i]$, $\forall j \in \{1, \hdots, m\}$. Then $FG^{-1}$ is the next generation matrix, and its spectral radius $\sigma(FG^{-1})$ is the basic reproduction number $\mathcal{R}_0$ of the system. Here, $z_0$ is the initial condition, where full susceptibility of the population is assumed ($S(t_0) \approx 1$). The effective reproduction number is defined as $\mathcal{R}_t :=S(t)\mathcal{R}_0$. \end{definition} The $ij^{th}$ entry of $FG^{-1}$ is the expected number of secondary cases in the $i^{th}$ compartment produced by an infected individual in the $j^{th}$ compartment \cite{van2017reproduction}. In the case of a single infected compartment, it is simply the average number of secondary infected cases introduced by one infected individual. \begin{proposition} The basic reproduction number $\mathcal{R}_0$ of \eqref{eq:sirsv_group} is $\frac{\beta}{\gamma}$, and the effective reproduction number $\mathcal{R}_t$ is upper bounded by $(1 - V(t))\frac{\beta}{\gamma}$. \end{proposition} \begin{proof} From \eqref{eq:sirsv_group_I}, $f(I) = \beta SI$ and $g(I) = \gamma I$. By computing the Jacobians, we get $F = \frac{\partial (\beta S I)}{\partial I}\big|_{z_0} = \beta S(t_0)$ and $G = \frac{\partial (\gamma I)}{\partial I} = \gamma$. 
Therefore, $\mathcal{R}_0 = \sigma(FG^{-1}) = \frac{\beta}{\gamma}$, since $\mathcal{R}_0$ is defined for $S(t_0) \approx 1$. Further, $\mathcal{R}_t = S(t)\frac{\beta}{\gamma} = (1 - I(t) - R(t) - V(t))\frac{\beta}{\gamma} \leq (1 - V(t))\frac{\beta}{\gamma}$. \end{proof} Notice that we use the word uniqueness in the following text not to refer to the uniqueness of the set of all equilibria of \eqref{eq:sirsv_group} but of a particular class of equilibria, i.e., the DFE and the EEP are sets of cardinality at most one. As we will see in the following, the EEP may be an empty set under certain conditions. \begin{proposition}[Existence and uniqueness of DFE] \label{prop:dfe} There always exists a unique disease-free equilibrium $(S^{(d)}, I^{(d)}, R^{(d)}, V^{(d)})$ = $(1 - \underline{\kappa}, 0, 0, \underline{\kappa})$. \end{proposition} \begin{proof} By the definition of a disease-free equilibrium and from \eqref{eq:sirsv_group_I}, any candidate DFE, indexed by $j$, is characterized by $I^{(d_j)}=0$ and $\dot{S}^{(d_j)}$ $=$ $\dot{I}^{(d_j)}$ $=$ $\dot{R}^{(d_j)} $ $=$ $\dot{V}^{(d_j)}$ $=$ $0$. Equation~\eqref{eq:sirsv_group_R} can be evaluated as $ 0 = \left(\omega + \rho\left(1 - \frac{V^{(d_j)}}{\kappa}\right)\right)R^{(d_j)} $ by substitution, which implies either: \begin{numcases}{} V^{(d_1)} = \kappa\left(1 + \frac{\omega}{\rho}\right) & or \label{eq:grouped_dfe_case1}\\ R^{(d_2)} = 0. \label{eq:grouped_dfe_case2} \end{numcases} In the first case \eqref{eq:grouped_dfe_case1}, we can substitute $V^{(d_1)}$ into \eqref{eq:sirsv_group_V} and obtain $S^{(d_1)} + R^{(d_1)} = 0$. By Assumption \ref{asm:00} and Lemma \ref{lem:00}, $S^{(d_1)} = R^{(d_1)} = 0$ is the only solution to this equation. 
Together with $V^{(d_1)} = \kappa\left(1 + \frac{\omega}{\rho}\right)$ and $I^{(d_1)} = 0$, this yields $S^{(d_1)} + I^{(d_1)} + R^{(d_1)} + V^{(d_1)} \neq 1$, which is not in the permissible domain of states by Assumption~\ref{asm:00} and Lemma~\ref{lem:00}. Therefore, DFE $(d_1)$ is not a feasible equilibrium. Now, by \eqref{eq:grouped_dfe_case2}, we can substitute $R^{(d_2)} = 0$ into \eqref{eq:sirsv_group_S}, obtaining $0 = \rho\left(1 - \frac{V^{(d_2)}}{\kappa}\right)S^{(d_2)}$, which leads to two subcases: \addtocounter{equation}{-1} \begin{subequations} \begin{numcases}{} S^{(d_{21})} = 0 & or \label{eq:grouped_dfe_case2a}\\ V^{(d_{22})} = \kappa. \label{eq:grouped_dfe_case2b} \end{numcases} \end{subequations} In the first subcase \eqref{eq:grouped_dfe_case2a}, since $I^{(d_{21})} = 0$ by the definition of the DFE and $R^{(d_{21})} = 0$ by \eqref{eq:grouped_dfe_case2}, we have $(S^{(d_{21})}, I^{(d_{21})}, R^{(d_{21})}, V^{(d_{21})})$ $=$ $(0, 0, 0, 1)$, which is admissible only when $\kappa \geq 1$ by Lemma \ref{lem:kappa_range} and $\underline{\kappa} = \min\{1, \kappa\}$. The second subcase \eqref{eq:grouped_dfe_case2b}, combined with $I^{(d_{22})} = 0$, \eqref{eq:grouped_dfe_case2}, and Lemma \ref{lem:00}, gives $(S^{(d_{22})}, I^{(d_{22})}, R^{(d_{22})}, V^{(d_{22})}) = (1 - \kappa, 0, 0, \kappa)$, which is admissible when $\kappa \in (0, 1]$. DFE $(d_{21})$ and $(d_{22})$ can be combined as $(1 - \underline{\kappa}, 0, 0, \underline{\kappa})$ to cover the whole range of $\kappa \in (0, \infty)$. Since $(1 - \underline{\kappa}, 0, 0, \underline{\kappa})$ is the only DFE satisfying Definition~\ref{def:DFE} and \eqref{eq:sirsv_group}, it is unique under Assumption~\ref{asm:00}. Since $(1 - \underline{\kappa}, 0, 0, \underline{\kappa})$ is a valid state under Assumption~\ref{asm:00} for all admissible values of $\kappa$, the DFE always exists. 
\end{proof} \begin{remark} Note that, when $\kappa$ is greater than $1$, $\kappa$ ceases to act as an upper bound on the vaccination state while still affecting the effective rate of vaccination $\rho \left(1 - \frac{V}{\kappa}\right)$. Moreover, $\rho \left(1 - \frac{V}{\kappa}\right) \to \rho$ as $\kappa \to \infty$. \end{remark} \begin{proposition}[Uniqueness of EEP] \label{prop:eep} The endemic equilibrium point of \eqref{eq:sirsv_group}: \small \begin{equation*} \left(S^{(e)}, I^{(e)}, R^{(e)}, V^{(e)}\right) = \left(\frac{\gamma}{\beta}, \frac{1 - \kappa - \gamma / \beta}{1 + \gamma/\omega}, \frac{1 - \kappa - \gamma / \beta}{1 + \omega/\gamma}, {\kappa}\right) \end{equation*} \normalsize \vspace{-1.5ex} \noindent is unique. Moreover, $$\frac{I^{(e)}}{R^{(e)}} = \frac{\omega}{\gamma}.$$ \end{proposition} \begin{proof} If an endemic equilibrium exists, then $I^{(e)}>0$ by Definition \ref{def:eep}. Applying this fact and setting \eqref{eq:sirsv_group_I} to zero gives $S^{(e)}=\frac{\gamma}{\beta}$. Lemma \ref{lemma:positive_eep} then implies $V^{(e)}=\kappa$. Substituting $V^{(e)}=\kappa$ into \eqref{eq:sirsv_group_R} yields $ \frac{I^{(e)}}{R^{(e)}} = \frac{\omega}{\gamma}$. To solve for $I^{(e)}$ and $R^{(e)}$, recall the population-preservation result of Lemma~\ref{lem:00}, which gives $I^{(e)} + R^{(e)} = 1 - \kappa - \frac{\gamma}{\beta}$. Combining this equation with $\frac{I^{(e)}}{R^{(e)}} = \frac{\omega}{\gamma}$ and solving the resulting system of two equations in two unknowns gives the desired result. Therefore, the permissible EEP is unique under Assumption~\ref{asm:00}. \end{proof} Notice that, since $I^*$ is either $0$ or in $(0, 1]$, the DFE and the EEP are the only two possible admissible equilibria of \eqref{eq:sirsv_group} under Assumption~\ref{asm:00}. 
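The closed-form expressions in Proposition~\ref{prop:eep} can be double-checked by direct substitution. A minimal sketch (Python; the right-hand sides are reconstructed from the proofs above, and the choice $\kappa = 0.3 < 1 - \gamma/\beta$ is an illustrative assumption so that the EEP exists):

```python
import numpy as np

beta, gamma, rho, kappa, omega = 1.6, 0.8, 0.12, 0.3, 0.2

def rhs(z):
    """SIRS-V_kappa right-hand side, reconstructed from the text (illustrative)."""
    S, I, R, V = z
    vac = rho * (1.0 - V / kappa)
    return np.array([-beta * S * I - vac * S + omega * R,
                     beta * S * I - gamma * I,
                     gamma * I - omega * R - vac * R,
                     vac * (S + R)])

# Closed-form endemic equilibrium; requires kappa < 1 - gamma/beta.
c = 1.0 - kappa - gamma / beta
eep = np.array([gamma / beta,                # S^(e)
                c / (1.0 + gamma / omega),   # I^(e)
                c / (1.0 + omega / gamma),   # R^(e)
                kappa])                      # V^(e)
```

Substituting `eep` into `rhs` returns the zero vector up to round-off, the components sum to one, and $I^{(e)}/R^{(e)} = \omega/\gamma$, as stated in the proposition.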
\begin{lemma}[Necessary and sufficient condition for existence of EEP]\label{lem:nns_eep_exists} The endemic equilibrium of \eqref{eq:sirsv_group} exists if and only if $\kappa < 1 - \frac{\gamma}{\beta}$, which is equivalent to $(1 - \kappa)\frac{\beta}{\gamma} > 1$. \end{lemma} \begin{proof} From Assumption~\ref{asm:00} and Lemma~\ref{lem:00}, the EEP in Proposition~\ref{prop:eep} exists if and only if the parameters satisfy the four inequalities: \begin{equation}\label{eq:eep_param_range} \begin{cases} 0 < \frac{\gamma}{\beta}< 1 \\ 0 < \frac{1 - \kappa - \gamma/\beta}{1 + \gamma / \omega} < 1 \\ 0 < \frac{1 - \kappa - \gamma / \beta}{1 + \omega / \gamma} < 1 \\ 0 < \kappa < 1. \end{cases} \end{equation} The first and fourth constraints are absorbed by Assumption \ref{asm:00} and the second constraint. The second and third constraints reduce to $0 < 1 - \kappa - \gamma / \beta$, which is equivalent to $\kappa < 1 - \gamma/\beta$ and to $(1 - \kappa)\frac{\beta}{\gamma} > 1$. \end{proof} Fig.~\ref{fig:state_space_with_EEP} illustrates the set of admissible states $\mathcal{S}$, a truncated 3-simplex with six vertices $S$, $I$, $R$, $D$, $F$, and $G$ at coordinates $(1, 0, 0 ,0)$, $(0, 1, 0 ,0)$, $(0, 0, 1 ,0)$, $(1-\kappa, 0, 0 ,\kappa)$, $(0, 1-\kappa, 0 ,\kappa)$, and $(0, 0, 1-\kappa,\kappa)$, respectively. The enclosing simplex is a tetrahedron because of Lemma~\ref{lem:00}, which states that $S(t) + I(t) + R(t) + V(t) = 1$ and $(S(t), I(t), R(t), V(t))\in[0,1]^4$. To see this clearly, consider an SIR compartmental model with three states, $(S, I, R)$. If $S + I + R = 1$, then $(S, I, R)$ lies on the unit sphere of the 1-norm. Furthermore, if $(S, I, R) \in [0, 1]^3$, then $(S, I, R)$ lies only on the portion of that surface belonging to the first octant of the 3-dimensional space, where $S$, $I$, and $R$ are non-negative. Notice that this surface is an equilateral triangle with vertices $S = (1, 0, 0)$, $I = (0, 1, 0)$, and $R = (0, 0, 1)$. 
Generalizing this notion to the 4-dimensional case gives us Fig.~\ref{fig:state_space_with_EEP}. \begin{theorem}[DFE GAS Conditions]\label{thm:DFEGAS_condtion} The DFE of \eqref{eq:sirsv_group} is globally asymptotically stable (GAS) if and only if $(1 - \kappa)\frac{\beta}{\gamma} \leq 1$. \end{theorem} \begin{proof} Our goal is to show by exhaustion of cases that all the states in $\mathcal{S}$ are either the DFE or will approach the DFE as $t \to \infty$ under the condition $(1 - \kappa)\frac{\beta}{\gamma} \leq 1$. Fig.~\ref{fig:state_space_with_EEP} visualizes the simplex of states, i.e. the range of admissible states $\mathcal{S}$, of which we want to show GAS around the DFE. Suppose $(1 - \kappa)\frac{\beta}{\gamma}\leq 1$. If $S(t)+R(t) = 0$, which is equivalent to $S(t)=R(t)=0$, then either $(S(t)=R(t)=I(t)=0 \wedge V(t)=1)$, which is a special case of DFE when $\kappa \geq 1$, or $(S(t)=R(t)=0 \wedge I(t)>0)$. If $(S(t)=R(t)=0 \wedge I(t)>0)$, then by \eqref{eq:sirsv_group_R}: \begin{align*} \dot{R}(t) & = \gamma I(t) > 0. \end{align*} Hence, every admissible state in \eqref{eq:sirsv_group} which satisfies $S(t)+R(t) = 0$ will have $\dot{R}(t) > 0$, hence evolving to another state such that $S(t)+R(t) \neq 0$. In other words, states on the $IV$ edge in Fig.~\ref{fig:state_space_with_EEP} are not stable and will move toward vertex $R$ as time unfolds. The only state on $IV$ which violates this tendency of evolving towards $R$ is the vertex $V$, which is already the DFE if it lies within $\mathcal{S}$. For every state that satisfies $(S(t) + R(t) > 0 \wedge V(t) < \kappa)$, \begin{align*} \dot{V}(t) & = \rho\left(1 - \frac{V(t)}{\kappa}\right)(S(t)+R(t)) \\ & > 0 \end{align*} by substituting $S(t) + R(t) > 0$ into \eqref{eq:sirsv_group_V}. Therefore, every state with $V(t) < \kappa$ will satisfy $V(t) = \kappa$ as $t \to \infty$ or it is already the DFE. 
If $(I(t) = R(t) = 0 \wedge V(t) = \kappa)$, then $S(t) = 1 - \kappa$ by Assumption~\ref{asm:00} and Lemma~\ref{lem:00}, which state that $S(t) + I(t) + R(t) + V(t) = 1$ $\forall t \geq t_0$; this is the DFE. Otherwise, $(V(t) = \kappa \wedge I(t) + R(t) > 0)$. Thus, by substituting $V(t) = \kappa$ into $S(t)+I(t)+R(t)+V(t)=1$, we have \begin{align} S(t) & = 1 - \kappa - I(t) - R(t)\nonumber \\ & \leq \frac{\gamma}{\beta} - I(t) - R(t) \label{eq:ineq1} \\ & < \frac{\gamma}{\beta}, \label{eq:ineq2} \end{align} where \eqref{eq:ineq1} holds because the condition $(1 - \kappa)\frac{\beta}{\gamma}\leq 1$ is equivalent to $\frac{\gamma}{\beta} \geq 1 - \kappa$ and \eqref{eq:ineq2} holds because $I(t) + R(t) > 0$. Therefore, from \eqref{eq:sirsv_group_I} and \eqref{eq:ineq2}, we have \begin{align*} \ \ \ \dot{I}(t) & = \beta S(t)I(t) - \gamma I(t) \\ & < \beta \frac{\gamma}{\beta}I(t) - \gamma I(t) \\ & = 0 \end{align*} whenever $I(t) > 0$, while $\dot{I}(t) = 0$ if $I(t) = 0$. Therefore, every state which satisfies $V(t) = \kappa$ will satisfy $(V(t) = \kappa \wedge I(t) = 0)$ as $t\to \infty$ or it is already the DFE. Lastly, for any state that satisfies $(V(t) = \kappa \wedge I(t) = 0 \wedge R(t) > 0)$, from \eqref{eq:sirsv_group_R}, we have \begin{align*} \dot{R}(t) = - \omega R(t) < 0. \end{align*} Thus, every state that satisfies $(V(t) = \kappa \wedge I(t) = 0 \wedge R(t) > 0)$ will satisfy $(V(t) = \kappa \wedge I(t) = 0 \wedge R(t) = 0)$ as $t \to \infty$ or it is already the DFE. Therefore, the DFE is GAS if $(1 - \kappa)\frac{\beta}{\gamma} \leq 1$. Conversely, if $(1 - \kappa)\frac{\beta}{\gamma} > 1$, then there exists an EEP in $\mathcal{S}$ by Lemma~\ref{lem:nns_eep_exists}, so the DFE cannot be GAS, which concludes the other direction of the proof. \end{proof} \begin{figure} \centering \includegraphics[width=\columnwidth]{images/fig1_with_EEP.png} \caption{Admissible range of state values of \eqref{eq:sirsv_group} when the EEP exists. $D$ is the DFE, and $E$ is the EEP. 
The plane $ABC$ represents all states with $S(t)=\frac{\gamma}{\beta}$, and the plane $DFG$ represents all states with $V(t)=\kappa$. The ratio between the segments is $\frac{\|EQ\|}{\|PE\|} = \frac{\omega}{\gamma}$.} \label{fig:state_space_with_EEP} \end{figure} Referring to the points in Fig.~\ref{fig:state_space_with_EEP}, $(1 - \kappa)\frac{\beta}{\gamma} \leq 1$ is achieved when the plane $ABC$ slides up or $DFG$ slides back, such that the segment $PQ$ vanishes. One can show that all states above the $ABC$ plane have $\dot{I} > 0$ and all states below it have $\dot{I} < 0$. An intuitive way to interpret Theorem~\ref{thm:DFEGAS_condtion} is that when $ABC$ is strictly above $DFG$, all states are pushed away from $I$ and pulled toward $V$, until they reach the DFE. \begin{remark} Notice that the necessary and sufficient GAS condition of the disease-free equilibrium is not $\mathcal{R}_0 < 1$ for the SIRS-V$_\kappa$ model, because $\mathcal{R}_0 < 1$ only guarantees local asymptotic stability. Therefore, it might at times over- or under-estimate the threshold condition of the spread process. \end{remark} \section{Simulations} In this section, we investigate the impact of the vaccine confidence $\kappa$ on the behavior of the SIRS-V$_{\kappa}$ model. Fig.~\ref{fig:kappa_v} compares the trajectories of the vaccination level $V(t)$ for different values of $\kappa$, ranging from $0.1$ to $\infty$. Note that, consistent with the analysis, decreasing $\kappa$ slows the rate of convergence and, if $\kappa<1$, lowers the limit of $V(t)$. Fig.~\ref{fig:kappa_v} and Fig.~\ref{fig:kappa_I} were plotted with initial condition $(S(t_0), I(t_0), R(t_0), V(t_0)) = (0.54,0.41,0.05,0)$. Fig.~\ref{fig:models_compare} and Fig.~\ref{fig:peak_inf_peak_time} were plotted with initial condition $(S(t_0), I(t_0), R(t_0), V(t_0)) = (0.99,0.01,0,0)$, and the initial condition of Fig.~\ref{fig:threshold} is $(S(t_0), I(t_0), R(t_0), V(t_0)) = (0.7,0.3,0,0)$. 
All plots use the parameter set $(\beta = 1.6, \gamma = 0.8, \rho = 0.12, \kappa = 0.8, \omega = 0.2)$, except for Fig.~\ref{fig:kappa_I} and Fig.~\ref{fig:kappa_v}, where we have used $\omega = 3$. \begin{figure} \centering \includegraphics[width=\columnwidth]{images/fig_kappa_V.png} \caption{The impact of varying $\kappa$ on $V(t)$.} \label{fig:kappa_v} \end{figure} Fig.~\ref{fig:kappa_I} illustrates how varying $\kappa$ affects the infection level $I(t)$. Since $\frac{\beta}{\gamma} = 2$ in this case, $\kappa = 0.5$ barely satisfies the GAS condition of Theorem~\ref{thm:DFEGAS_condtion}, so the system reaches the DFE as time goes to infinity. We also notice that a smaller $\kappa$ slows the rate of convergence of $I(t)$ when comparing the cases $\kappa = (0.5, 0.6, 1.2, \infty)$. \begin{figure} \centering \includegraphics[width=\columnwidth]{images/fig_kappa_I.png} \caption{The impact of varying $\kappa$ on $I(t)$.} \label{fig:kappa_I} \end{figure} Fig.~\ref{fig:models_compare} compares $I(t)$ and $R(t)$ between the SIRS, SIRSV, and SIRS-V$_{\kappa}$ models. The key difference between the three models is that SIRSV and SIRS-V$_{\kappa}$ have $\rho > 0$ while SIRS has $\rho = 0$, and SIRS-V$_{\kappa}$ has $\kappa = 0.8 < \infty$ while SIRSV assumes $\kappa \to \infty$. In this particular setup, both the SIRSV and SIRS-V$_{\kappa}$ models reach the DFE before $t = 40$, while the SIRS model settles at the EEP. Furthermore, we observe a higher infection peak with a later peak time in the SIRS-V$_{\kappa}$ model. \begin{figure} \centering \includegraphics[width=\columnwidth]{images/fig_models.png} \caption{Comparison between the SIRS, SIRSV, and SIRS-V$_{\kappa}$ models.} \label{fig:models_compare} \end{figure} To investigate how the vaccine confidence affects the maximum peak infection value and time, we plot both versus $\kappa$ in Fig.~\ref{fig:peak_inf_peak_time}. Note that the maximum peak infection value decreases monotonically with $\kappa$. 
On the other hand, the peak infection time reaches its maximum at $\kappa = 0.2$ before decaying, which is not expected and will need further investigation. \begin{figure} \centering \includegraphics[width=\columnwidth]{images/fig_maxT_maxI.png} \caption{Maximum infection value and peak time versus $\kappa$.} \label{fig:peak_inf_peak_time} \end{figure} Fig.~\ref{fig:threshold} plots the earliest time at which $I(t) < 0.001$ against $(1 - \kappa)\frac{\beta}{\gamma}$, obtained by varying $\kappa$. We use the condition $I(t) < 0.001$ as a proxy for the asymptotic behavior of the system under different parameter settings. The program returns $-10$ if $I(t)$ fails to converge close enough to $0$ before $t = 100$. The sharp drop at $(1 - \kappa)\frac{\beta}{\gamma} \approx 1$ aligns with our finding in Lemma~\ref{lem:nns_eep_exists} and Theorem~\ref{thm:DFEGAS_condtion} that the DFE ceases to be GAS when $(1 - \kappa)\frac{\beta}{\gamma}$ exceeds $1$. Note that the drop occurs slightly before $1$ because we cap the simulations at $t = 100$, well before the asymptotic regime. Finally, all the simulations indicate that the EEP has a large region of convergence whenever it exists in $\mathcal{S}$. More investigation is required to establish these findings analytically. \begin{figure}[h] \centering \includegraphics[width=\columnwidth]{images/fig_threshold.png} \caption{Threshold behavior with respect to $(1 - \kappa)\frac{\beta}{\gamma}$.} \label{fig:threshold} \end{figure} \section{Conclusion and Future Work} In this paper, we have proposed the SIRS-V$_\kappa$ model, where $\kappa$ is the vaccine confidence level. We have characterized the unique endemic and disease-free equilibria, both of which depend on $\kappa$. Furthermore, $\kappa$ is a key component of the necessary and sufficient condition for global asymptotic stability of the disease-free equilibrium, namely $(1 - \kappa)\frac{\beta}{\gamma} \leq 1$. 
From the perspective of control, manipulating the transmission rate $\beta$, the recovery rate $\gamma$, and the vaccine confidence $\kappa$ are all viable ways of mitigating epidemic spread. From the disease-modeling perspective, we have shown through analytical and numerical methods that ignoring vaccine confidence will introduce significant biases when determining the system's stability condition, peak infection time, and threshold behaviors. The dependence of COVID-19 prevalence on vaccine hesitancy should encourage Americans to become vaccinated if they have not done so already. For future work, we are interested in extending our findings from the scalar case to the networked case to further inspect the impact of varying vaccine confidence across different geographic/demographic populations. Moreover, we would like to analyze the regions of convergence of the EEP and the DFE when $(1 - \kappa)\frac{\beta}{\gamma} > 1$, as well as the effect of vaccine confidence on the peak time of $I(t)$. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} Energy flows induced into magnetically dominated relativistic magnetospheres of compact objects are commonly modeled by numerical simulations in the force-free electrodynamics (FFE) limit. Fueled by the track record of observations in the era of multi-messenger astrophysics, current targets for such simulations include the magnetospheres of rapidly spinning black holes, spiraling neutron stars, magnetars, and pulsars. The tenuous, magnetically dominated atmosphere (magnetosphere) of pulsars is an active field of scientific interest. It fascinates both observers \citep[e.g.][]{Lorimer_1995MNRAS.273..411,Ransom_2005Sci...307..892,Abdo_2013ApJS..208...17,Jankowski_2018MNRAS.473.4436} and theorists \citep[e.g.][]{Kennel1984,Lyubarskii1996,Contopoulos1999,Contopoulos_2019MNRAS.482L..50,Goodwin_2004MNRAS.349..213,Timokhin2006,Timokhin_2013MNRAS.429...20, Petri_2020Univ....6...15}. With the remarkable progress in scientific computing, the rotating pulsar magnetosphere has captivated designers of numerical methods that integrate FFE and magnetohydrodynamics (MHD) with ever-improving accuracy \citep[e.g.][]{Komissarov_2006MNRAS.367...19,Spitkovsky2006,Tchekhovskoy2013,Parfrey2017,Carrasco_2020PhRvD.101f3017}. Recently, particle-in-cell (PIC) simulations were able to resolve a broad range of scale separations and allow for unprecedented insight into the microphysics of pulsar magnetospheres across the global scale \citep[][]{Cerutti_2015MNRAS.448..606,Philippov2015,Guepin2020,Kalapotharakos_2018ApJ...857...44, Philippov_2018ApJ...855...94}. In this fascinating flurry of outcomes, only a few references have scrutinised whether the results from \emph{ideal} plasma simulations are the best possible model for the pulsar magnetosphere, which contains an inherently \emph{non-ideal} region, namely the equatorial current sheet (ECS) beyond the closed zone \citep{Contopoulos2016,Contopoulos_2019MNRAS.482L..50, Contopoulos2020}. 
Here, we study with rigorous technical depth how this non-ideal region can affect the global dynamics of the force-free aligned rotator magnetosphere, effectively serving as a blueprint for force-free magnetospheres of other compact objects. \emph{Ideal} FFE evolves Maxwell's equations for the electromagnetic fields $\mathbf{E}$ and $\mathbf{B}$ while rigorously maintaining the force-free conditions $\mathbf{E}\cdot\mathbf{B}=0$ (equivalent to the no-Ohmic-heating condition $\mathbf{E}\cdot \mathbf{j}=0$, where $\mathbf{j}$ is the electric current) and $\mathbf{E}^2-\mathbf{B}^2<0$ (magnetic dominance). \emph{Non-ideal} force-free fields are those fields that allow for perturbations to the condition $\mathbf{E}\cdot\mathbf{B}=0$ by non-negligible electric fields $E_\parallel$ along the magnetic field $\mathbf{B}$ \citep[e.g.][]{Lyutikov_2003MNRAS.346..540}. Such fields were used in the literature to introduce the concept of resistivity into FFE, either by specifically designed driving currents \citep[e.g.,][]{Alic2012} or by self-consistently modelled alterations to the current of ideal FFE \citep{Komissarov2004,Parfrey2017}. In numerical models of FFE, it is common to enforce the preservation of the $\mathbf{E}\cdot\mathbf{B}=0$ and $\mathbf{E}^2-\mathbf{B}^2<0$ conditions algebraically by resetting the electric field \emph{instantaneously} wherever they are not fulfilled \citep[e.g.][]{Palenzuela2010,Mahlmann2020}. Exploiting the specifics of our numerical methodology \citep{Mahlmann2020b}, which combines the instantaneous algebraic extraction of all non-ideal electric fields with the evolution of the charge continuity equation, we identified another mechanism that adds diffusivity to an ideal FFE scheme \citep{Mahlmann2020c}. Namely, the misalignment of the electric field $\mathbf{E}$ and the charge density $\rho$ can significantly alter an FFE evolution. 
In this numerical survey, we extend the findings obtained with an idealised setup that triggers tearing modes in force-free current sheets \citep{Mahlmann2020c} to the astrophysically relevant scenario of pulsar magnetospheres. In the not uncontroversial realm of magnetospheric simulations in the force-free limit, we noted in \citet{Mahlmann2020b} that ambiguities in the standard reference setup of the force-free aligned rotator required further attention. In fact, \citet{Contopoulos2016} pointed out these ambiguities (\emph{trade secrets}) that arise when simulating magnetospheres with non-ideal regions, such as the pulsar and Wald magnetospheres, in the ideal plasma limit, as one finds in ideal MHD and FFE. The dazzling amount of calibration that time-dependent numerical simulations require is rarely conveyed along with the visually appealing results themselves. This manuscript is an effort to bring transparency to the modeling of one astrophysical scenario that crosses the constraints set by FFE. It aims at enabling the reader to ask crucial questions when evaluating results from simulations of magnetospheres and intends to place some landmarks for the development of future hybrid methods that are not restricted to the ideal regime. This manuscript is organised as follows. In Sect.~\ref{sec:methodology}, we review the employed numerical methodology as well as the pulsar magnetosphere initial data (Sect.~\ref{sec:simulationsetup}). Sect.~\ref{sec:forcefreealigned} presents the outcome of the conducted simulations of a force-free aligned rotator. The results are grouped by topic. First, we examine the dependency of the luminosity at the light cylinder (LC) on the method employed to preserve the consistency between the charge distribution and the currents (Sect.~\ref{sec:lcluminosity}). Section~\ref{sec:econservation} analyses the conservation of energy beyond the light cylinder. 
An array of ancillary high-resolution models yields additional insights into the subtleties of ideal FFE simulations in Sect.~\ref{sec:tradesecrets}. Specifically, we assess the role of the magnetic dominance condition (Sect.~\ref{sec:focuseddominance}), compare algebraic corrections of force-free violations to driving currents (Sect.~\ref{sec:drivingfocus}), and study the effect of resistivity models beyond the light cylinder (Sect.~\ref{sec:diffusivityfocus}). The discussion of Sect.~\ref{sec:discussion} includes views on the propagation of force-free violations (Sect.~\ref{sec:nonidealFF}), the diffusive time scales set by the employed hyperbolic/parabolic cleaning (Sect.~\ref{sec:cleaningscales}), and a general picture of diffusivity in force-free magnetospheres (Sect.~\ref{sec:diffusivitydiscuss}). We conclude this survey by summarizing the main takeaways of the presented results in Sect.~\ref{sec:conclusion}. \section{Methodology} \label{sec:methodology} The aligned rotator problem has been studied extensively throughout the last 20 years and now appears to be a well-established test case for FFE codes. Even if it is likely a simplification of the more complex problem of a magnetic dipole that is misaligned with respect to the rotational axis of the pulsar, it still contains an element of special relevance for the overall problem, namely, the ECS. We approach this investigation by means of time-dependent FFE simulations performed with the numerical code presented in \cite{Mahlmann2020b,Mahlmann2020c}. Our method is an enhanced high-order conservative realization of FFE as introduced by \citet{Komissarov2004} and benefits greatly from the \textsc{Carpet} driver \citep{Goodale2002a,Schnetter2004} and its extension to spherical coordinates \citep{Mewes2018,Mewes2020} supported by the infrastructure of the \textsc{Einstein Toolkit}\footnote{\url{http://www.einsteintoolkit.org}}. 
The scheme that we introduced in \cite{Mahlmann2020b} has since been compared to another force-free MHD method \citep[\textsc{BHAC},][]{Ripperda2019,Ripperda2019a} in the context of Alfvén wave interactions in the highly magnetised limit \citep{Ripperda2021}, achieving a striking convergence of results across different methods. Also in the context of the aligned force-free rotator \citep[cf. Sect. 5.2 in][]{Mahlmann2020b}, our FFE method reproduced the main characteristics of the pulsar magnetosphere: co-rotating magnetic field lines, a force-free closed zone in the wake of the Y-point, and an ECS. However, \cite{Mahlmann2020b} observed a shift of the Y-point away from the light cylinder and a Poynting flux above the value that is treated as an established reference throughout the literature \citep[e.g.,][]{Spitkovsky2006, Tchekhovskoy2013, Etienne2017}. In the following subsections we briefly introduce the governing equations for the problem at hand and the methods used to integrate them. We also provide the detailed numerical setup in Sect.~\ref{sec:numericalmethod}. \subsection{Numerical method} \label{sec:numericalmethod} We employ the force-free scheme presented in \citet{Mahlmann2020b} to conduct 2D simulations of the pulsar magnetosphere on a flat background without spacetime curvature \citep[cf. Sect. 5.2 in][]{Mahlmann2020b}. Ignoring general relativistic effects \citep[negligible for the global dynamics of the pulsar magnetosphere, especially far away from the neutron star surface, though very relevant, especially frame-dragging, for driving efficient pair production;][]{Philippov_2015ApJ...815L..19, Belyaev_2016ApJ...830..119, Gralla_2016ApJ...833..258, Philippov_2018ApJ...855...94}, the equations of FFE are the set of partial differential equations formed by the Maxwell equations together with the corresponding solenoidal constraint ($\nabla\cdot\mathbf{B}=0$) and the charge density $\rho$, expressed as $\rho=\nabla\cdot\mathbf{E}$. 
These two equations are integrated in our code by employing the so-called hyperbolic/parabolic cleaning method \citep{Dedner2002,Komissarov2007MNRAS.382..995}, so that the formerly elliptic constraints become hyperbolic equations and form an \emph{augmented} system of equations: \begin{align} \partial_t \mathbf{B} &= -\nabla \times \mathbf{E}-c_\Psi^2\nabla\Psi\\ \partial_t \mathbf{E} &= \nabla \times \mathbf{B}+c_\Phi^2\nabla\Phi-\mathbf{j} \label{eq:Efield}\\ \partial_t \Psi &= -\nabla\cdot \mathbf{B}-\kappa_\Psi \Psi \label{eq:Psi}\\ \partial_t \Phi &= \nabla\cdot \mathbf{E}- \rho-\kappa_\Phi \Phi \label{eq:Phi}\\ \partial_t \rho &= -\nabla\cdot \mathbf{j} \label{eq:chargeconservation} \end{align} Our base numerical scheme employs the scalar potentials $\Phi$ and $\Psi$ to handle (numerical) errors in the constraints $\nabla\cdot\mathbf{E}=\rho$ and $\nabla \cdot\mathbf{B}=0$, respectively \citep[cf.][]{Komissarov_2006MNRAS.367...19,Mignone2010}. The action of the scalar potentials is controlled by a combination of damping constants, $\kappa_\Psi$ and $\kappa_\Phi$, as well as a set of advection speeds, $c_\Psi$ and $c_\Phi$. We will refer to our default scheme as the charge conservative (CC) method, since it involves the charge conservation equation \eqref{eq:chargeconservation} and guarantees that the charge distribution is consistent with the currents in the domain \citep{Komissarov_etal_2007MNRAS.374..415}. A principal ingredient of our methodology is the current density as observed by the normal observer \citep[cf.][]{Mahlmann2020b,Mahlmann2020c}.
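To build intuition for the damping terms in Eqs.~(\ref{eq:Psi}) and (\ref{eq:Phi}), the following minimal sketch (illustrative only, not the production scheme) integrates a zero-dimensional analogue of the $\Phi$ equation with a static constraint error:

```python
# Zero-dimensional analogue of the Phi cleaning equation (illustrative only,
# not the production scheme): d(Phi)/dt = err - kappa_Phi * Phi, where 'err'
# stands for a static constraint violation (div E - rho). Phi relaxes towards
# err/kappa_Phi on a timescale 1/kappa_Phi: larger kappa_Phi damps faster.
def evolve_phi(err, kappa_phi, dt, n_steps, phi0=0.0):
    phi = phi0
    for _ in range(n_steps):
        phi += dt * (err - kappa_phi * phi)  # forward-Euler update
    return phi

phi_end = evolve_phi(err=1.0, kappa_phi=10.0, dt=1e-3, n_steps=10_000)
# phi_end saturates at err/kappa_phi = 0.1
```

In the full scheme, the advection terms $c_\Psi^2\nabla\Psi$ and $c_\Phi^2\nabla\Phi$ additionally transport the buffered errors away from their source.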
It naturally splits into components perpendicular and parallel to the magnetic field three-vector ($\mathbf{j}=\mathbf{j}_\perp + \mathbf{j}_\parallel$): \begin{align} \mathbf{j}_{\perp} &= \rho \frac{\mathbf{E} \times \mathbf{B}}{\mathbf{B}^2}, \label{eq:FFResCurrentPerpendicular1}\\ \mathbf{j}_\parallel &= \frac{ \mathbf{B} \cdot(\nabla\times\mathbf{B}) - \mathbf{E}\cdot(\nabla\times\mathbf{E}) + \kappa_I \: \mathbf{B}\cdot \mathbf{E}}{(1 + \kappa_I\eta)\,\mathbf{B}^2} \:\mathbf{B} \label{eq:FFResCurrentPerpendicular}. \end{align} Here, $\kappa_I$ is the decay rate driving the electric field toward its target value $\mathbf{E}\rightarrow \eta\mathbf{j}$, and $\eta$ is a dissipation coefficient for the electric field that is parallel to the current. In the following sections, we specify whenever we extend the current of \emph{ideal} FFE ($\eta=\kappa_I=0$). In a variation of the default numerical scheme, we ignore the charge conservation equation \eqref{eq:chargeconservation} and compute the charge density appearing in the source terms by imposing $\rho=\nabla\cdot\mathbf{E}$. Specifically, the divergence of the electric field is computed from the cell-centered (volume averaged) electric field values in Eqs.~(\ref{eq:Efield}) and~(\ref{eq:FFResCurrentPerpendicular1}). We maintain the hyperbolic equation for the scalar potential $\Phi$ \eqref{eq:Phi} in order to dissipate and transport away any misalignment between charges and currents and preserve the charge conservation equation (\ref{eq:chargeconservation}) up to truncation error. We note that the $\nabla\cdot\mathbf{E}$ term appearing in Eq.~\eqref{eq:Phi} is computed as a numerical flux for the temporal update of $\Phi$ and, as such, it is obtained from the inter-cell values of the electric field. These interface values are monotonically reconstructed from cell-centred volume averaged values of the electric field. 
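As an illustration of the current split above, the following sketch evaluates $\mathbf{j}_\perp$ and $\mathbf{j}_\parallel$ pointwise in the ideal limit ($\eta=\kappa_I=0$); the curls are supplied as precomputed inputs rather than from the finite-difference stencils of the actual scheme:

```python
import numpy as np

# Illustrative pointwise evaluation of the current split in the ideal limit
# (eta = kappa_I = 0): j_perp = rho (E x B)/B^2 and
# j_par = [B.(curl B) - E.(curl E)]/B^2 * B. The curls are precomputed inputs
# here; in the actual scheme they come from finite-difference stencils.
def ffe_current(E, B, curl_E, curl_B, rho):
    B2 = np.dot(B, B)
    j_perp = rho * np.cross(E, B) / B2
    j_par = (np.dot(B, curl_B) - np.dot(E, curl_E)) / B2 * B
    return j_perp, j_par

E = np.array([0.0, 0.2, 0.0])      # satisfies E.B = 0 (force-free)
B = np.array([0.0, 0.0, 1.0])
curl_E = np.zeros(3)
curl_B = np.array([0.0, 0.0, 0.5])
j_perp, j_par = ffe_current(E, B, curl_E, curl_B, rho=1.0)
# j_perp = (0.2, 0, 0) is the drift-current part; j_par = (0, 0, 0.5) along B
```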
Hereafter, we will refer to this variation of the base scheme as the local charge reconstruction (LCR) method. We note that we employ a finite volume method where none of the variables is staggered off the cell centers \citep[differently from, e.g.][]{Spitkovsky2006, Mignone_2019MNRAS.486.4252}. Thus, the evaluation of $\nabla\cdot\mathbf{E}$ at each numerical cell is performed by employing a fourth order finite difference approximation based on a suitable number of neighboring cell values of $\mathbf{E}$ \citep[with a stencil similar to that used for the evaluation of $\mathbf{j}_\parallel$,][cf. Sect.~3.4]{Mahlmann2020}. Finally, we also consider a second variation of the default numerical scheme that combines the two previous strategies into a \emph{hybrid} method (HCC hereafter). In the HCC algorithm we restrict the use of the LCR method to places where the magnetic dominance condition is violated during any sub-step of the time integration, and the CC method is applied elsewhere. In practice, for the specific context of the aligned rotator magnetosphere, this limits the application of the LCR method to numerical cells affected by the ECS. \subsection{Simulation setup} \label{sec:simulationsetup} As is common practice in the field, we employ a non-rotating dipole magnetosphere as initial data \citep[see Sect.~5.1 in][]{Mahlmann2020b}. The initially purely poloidal magnetic field in the magnetosphere is then \begin{align} \mathbf{B}&=\mu\left(\frac{2\cos\theta}{r^3},\frac{\sin\theta}{r^4},0\right),\\ \left|\mathbf{B}\right|&\equiv B_{\rm d}\left(r,\theta\right)=\frac{\mu}{r^3}\left[3\cos^2\theta+1\right]^{1/2}, \label{eq:Bdipole} \end{align} where we scale the magnetic moment by the stellar radius, $\mu = r_*^{3/2}$, and the vector components are expressed in the orthogonal spherical basis for the coordinates $\{r,\theta,\phi\}$.
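A pointwise sketch of the dipole initial data, written in orthonormal (physical) components (the coordinate-basis $B^\theta$ above carries an extra factor of $1/r$ relative to the physical component used here), verifies the stated magnitude $B_{\rm d}$:

```python
import numpy as np

# Sketch of the non-rotating dipole initial data in orthonormal (physical)
# spherical components; the coordinate-basis B^theta of the equation above
# carries an extra factor 1/r relative to the physical component used here.
def dipole_B(r, theta, mu=1.0):
    return np.array([2.0 * mu * np.cos(theta) / r**3,
                     mu * np.sin(theta) / r**3,
                     0.0])

r, theta = 2.0, 0.3
B = dipole_B(r, theta)
B_d = (1.0 / r**3) * np.sqrt(3.0 * np.cos(theta)**2 + 1.0)  # analytic magnitude
# |B| reproduces B_d(r, theta)
```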
An axisymmetric rotation is instantaneously switched on across the stellar surface, so that there is a transient period during which a torsional Alfvén wave propagates outwards throughout the magnetosphere. After this transient period, an \emph{approximately} steady state is reached in the domain of extent $r\times\theta=\left[r_*,751r_*\right]\times\left[0,\pi\right]$. Here, we use the stellar radius $r_*=13.67\,$km \citep[cf.][]{Mahlmann2019}. We evolve the magnetosphere in time for $t=8.35t_{\rm p}$, and $t=30.24t_{\rm p}$ for selected cases, where $t_{\rm p}=2\pi/\Omega_{\rm p}$ is the time of one pulsar revolution, and we choose $\Omega_{\rm p}\approx 646\,$Hz (equivalent to $\Omega_{\rm p}=0.02$ in the units of our numerical code). With this choice, the LC is located at distance $r_{\rm LC}\equiv c/\Omega_{\rm p}=5r_*$. The study presented here carefully evaluates numerical convergence with increasing resolution, using combinations of the radial spacing $\Delta r=r_*/N_r$, $N_r\in\left[32,64,128\right]$, and angular spacing $\Delta \theta=\pi/N_\theta$, $N_\theta\in\left[100,200,400\right]$, respectively. The light-cylinder region is, thus, covered by a minimum of $N_{\rm LC}\in\left[160,320,640\right]$ radial mesh cells, while the total number of radial grid points exceeds this number by a large factor. We note that the outer boundary is sufficiently far away from the central object to avoid any feedback on the star itself. The inner boundary, located at the stellar surface, requires careful attention. To first order, we follow the treatment suggested by \citet{Parfrey2012}. We set the value of $B^r$ not only at the surface, but also within a thin layer $r_{\rm in} \le r\le r_*$. The remaining magnetic field components are treated differently. Namely, they are driven to their dipole values (Eq.~\ref{eq:Bdipole}) at some distance from the surface.
In a thin boundary layer consisting of at least as many cells as are needed to cover the stencil of the chosen spatial reconstruction twice, $B^\theta$ and $B^\phi$ are evolved with the Maxwell equations. Deep inside the interior boundary, all deviations from the initial dipole fields are exponentially damped. Within the whole boundary layer, the electric field is obtained by assuming that it is purely inductive, i.e. \citep[cf.][]{Parfrey2012}, \begin{align} \mathbf{E}=-\left[\mathbf{\Omega}_{\rm p}\times \mathbf{r}\right] \times\mathbf{B} . \end{align} This rather complicated boundary is needed to prevent spurious numerical behaviors at the stellar surface, which appear if all the magnetic field components are held fixed at $r_{\rm in} \le r \le r_*$. Likewise, this boundary acts equivalently to a conducting boundary inasmuch as it preserves the continuity of the electric field components parallel to the surface and the magnetic field perpendicular to it (i.e., no jumps in $B^r$, $E^\theta$ or $E^\phi$ develop at the stellar surface). It allows the initial values of the electromagnetic field at the stellar surface to relax to the approximate equilibrium values attained after a few rotational periods. The electric charge inside the pulsar boundary layer is calculated in every sub-step of the time integration via $\rho=\nabla\cdot\mathbf{E}$. To ensure consistency at the inner boundary, we set $\Phi=0$ for $r<r_*$, while evolving $\Psi$ freely to evacuate errors in the solenoidal constraint on the magnetic field $\mathbf{B}$. The evaluation of results presented throughout the following sections relies on the comparison of dimensionless quantities. Therefore, we normalise magnetic fields by the polar magnetic field $B_0=B_{\rm d}\left(r_*,0\right)=2\mu/r_*^3$.
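The inductive field prescription of the boundary layer can be sketched pointwise (in Cartesian components, with illustrative values); the resulting $\mathbf{E}$ is perpendicular to $\mathbf{B}$ by construction:

```python
import numpy as np

# Pointwise sketch of the inductive boundary field E = -(Omega_p x r) x B,
# in Cartesian components with illustrative values (Omega_p = 0.02 as in the
# code units quoted above).
def corotation_E(omega_vec, r_vec, B_vec):
    v = np.cross(omega_vec, r_vec)  # rigid-rotation velocity Omega x r
    return -np.cross(v, B_vec)

omega = np.array([0.0, 0.0, 0.02])  # rotation axis along z
r_vec = np.array([1.0, 0.0, 0.0])   # equatorial point at r = r_*
B_vec = np.array([0.0, 0.0, 2.0])   # illustrative field value
E = corotation_E(omega, r_vec, B_vec)
# E is perpendicular to B by construction (purely inductive field)
```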
The charge density is normalised to the Goldreich-Julian charge density \citep{Goldreich_Julian_1969ApJ...157..869} at the polar cap, namely, \begin{align} \rho_0=\rho_{\rm GJ}\left(r_*,0\right)=-\frac{2\Omega_{\rm p} B_0}{c}, \end{align} where $c$ is the speed of light ($c=1$ in the units of our code). Equally, electromagnetic currents will be normalised by the reference current density $j_0=|\rho_0c|$. \begin{table} \centering \caption{Properties of the simulations corresponding to models described in Sect.~\ref{sec:forcefreealigned}. We provide the label of the respective model, the number of zones per stellar radius (in total, we have $\ge 750\times N_r$ radial zones) and in the interval $[0,\pi]$, the strategies to model the electric charge, the $\alpha$ parameter, the Y-point location, the luminosity at the LC, its relative decay up to a distance of $5r_{\rm LC}$, and the colatitude of the closed zone separatrix at the stellar surface. All models of this section evacuate force-free constraint violations by algebraic cropping of the electric fields.} \label{tab:mainmodels} \begin{tabular}{P{0.2cm}P{1.2cm}P{0.4cm}P{0.5cm}P{0.5cm}P{0.7cm}P{0.8cm}P{0.4cm}} \hline \hline & $N_r\times N_\theta$ & $\rho$ & $\alpha$ & $x_0$ & $L_{\rm Y}/L_0$ & $\Delta L/L_{\rm Y}$ & $\theta_{\rm c}$\\ \hline \textbf{La} & $32\times 100$ & LCR & $72$ & $0.94$ & $0.97$ & $0.36$ & $33.2^\circ$\\%$0.58$ \\ \textbf{Lb} & $32\times 100$ & CC & 0.03 & $0.92$ & $1.03$ & $0.30$ & $33.8^\circ$\\%$0.59$ \\ \textbf{Lc} & $32\times 100$ & CC & $0.3$ & $0.88$ & $1.24$ & $0.08$ & $35.0^\circ$\\%$0.61$ \\ \textbf{Ld} & $32\times 100$ & CC & $2.9$ & $0.83$ & $1.67$ & $0.03$ & $36.7^\circ$\\%$0.64$ \\ \textbf{Le} & $32\times 100$ & CC & $9.3$ & $0.78$ & $1.95$ & $0.03$ & $37.8^\circ$\\%$0.66$ \\ \textbf{Lf} & $32\times 100$ & CC & $18$ & $0.71$ & $2.06$ & $0.05$ & $38.4^\circ$\\%$0.67$ \\ \textbf{Lg} & $32\times 100$ & CC & $37$ & $0.78$ & $2.12$ & $0.05$ & $38.4^\circ$\\%$0.67$ \\ \textbf{Lh} & 
$32\times 100$ & CC & $72$ & $0.77$ & $2.11$ & $0.04$ & $37.8^\circ$\\%$0.66$ \\ \textbf{Li} & $32\times 100$ & CC & $0.2$ & $0.89$ & $1.15$ & $0.10$ & $35.0^\circ$\\%$0.61$ \\ \textbf{Lj} & $32\times 100$ & CC & $0.6$ & $0.86$ & $1.45$ & $0.04$ & $36.1^\circ$\\%$0.63$ \\ \textbf{Lk} & $32\times 100$ & CC & $4.6$ & $0.79$ & $1.65$ & $0.04$ & $36.7^\circ$\\%$0.64$ \\ \textbf{Ll} & $32\times 100$ & CC & $19$ & $0.70$ & $2.21$ & $0.05$ & $38.4^\circ$\\%$0.67$ \\ \textbf{Lm} & $32\times 100$ & CC & $9.3$ & $0.78$ & $1.80$ & $0.06$ & $37.8^\circ$\\%$0.66$ \\ \textbf{Ln} & $32\times 100$ & CC & $37$ & $0.71$ & $2.28$ & $0.08$ & $39.0^\circ$\\%$0.68$ \\ \hline \textbf{Ma} & $64\times 200$ & LCR & $36$ & $0.96$ & $1.01$ & $0.36$ & $32.7^\circ$\\%$0.57$ \\ \textbf{Mb} & $64\times 200$ & CC & 0.02 & $0.95$ & $1.06$ & $0.35$ & $33.2^\circ$\\%$0.58$ \\ \textbf{Mc} & $64\times 200$ & CC & $0.2$ & $0.91$ & $1.21$ & $0.06$ & $33.8^\circ$\\%$0.59$ \\ \textbf{Md} & $64\times 200$ & CC & $1.5$ & $0.86$ & $1.58$ & $0.03$ & $36.1^\circ$\\%$0.63$ \\ \textbf{Me} & $64\times 200$ & CC & $4.6$ & $0.80$ & $1.87$ & $0.04$ & $36.7^\circ$\\%$0.64$ \\ \textbf{Mf} & $64\times 200$ & CC & $9.3$ & $0.74$ & $1.95$ & $0.05$ & $37.2^\circ$\\%$0.65$ \\ \textbf{Mg} & $64\times 200$ & CC & $18$ & $0.76$ & $2.07$ & $0.04$ & $37.2^\circ$\\%$0.65$ \\ \textbf{Mh} & $64\times 200$ & CC & $36$ & $0.71$ & $2.20$ & $0.06$ & $37.8^\circ$\\%$0.66$ \\ \textbf{Mi} & $64\times 200$ & CC & $0.1$ & $0.92$ & $1.19$ & $0.07$ & $34.4^\circ$\\%$0.60$ \\ \textbf{Mj} & $64\times 200$ & CC & $0.3$ & $0.91$ & $1.43$ & $0.03$ & $35.0^\circ$\\%$0.61$ \\ \textbf{Mk} & $64\times 200$ & CC & $2.3$ & $0.87$ & $1.56$ & $0.02$ & $35.5^\circ$\\%$0.62$ \\ \textbf{Ml} & $64\times 200$ & CC & $9.2$ & $0.77$ & $2.17$ & $0.04$ & $37.8^\circ$\\%$0.66$ \\ \textbf{Mm} & $64\times 200$ & CC & $4.6$ & $0.83$ & $1.72$ & $0.04$ & $36.1^\circ$\\%$0.63$ \\ \textbf{Mn} & $64\times 200$ & CC & $19$ & $0.65$ & $2.37$ & $0.07$ & $39.0^\circ$\\%$0.68$ 
\\ \hline \textbf{Ha} & $128\times 400$ & LCR & $18$ & $0.99$ & $0.96$ & $0.32$ & $31.5^\circ$\\%$0.55$ \\ \textbf{Hb} & $128\times 400$ & CC & $0.1$ & $0.62$ & $1.17$ & $0.10$ & $33.8^\circ$\\%$0.59$ \\ \textbf{Hc} & $128\times 400$ & CC & $0.7$ & $0.81$ & $1.53$ & $0.01$ & $34.4^\circ$\\%$0.60$ \\ \textbf{Hd} & $128\times 400$ & CC & $2.3$ & $0.72$ & $1.83$ & $0.03$ & $35.5^\circ$\\%$0.62$ \\ \textbf{He} & $128\times 400$ & CC & $4.6$ & $0.68$ & $2.04$ & $0.04$ & $36.7^\circ$\\%$0.64$ \\ \textbf{Hf} & $128\times 400$ & CC & $9.2$ & $0.51$ & $2.24$ & $0.23$ & $36.7^\circ$\\%$0.64$ \\ \textbf{Hg} & $128\times 400$ & CC & $18$ & $0.41$ & $2.07$ & $0.21$ & $37.8^\circ$\\%$0.66$ \\ \textbf{Hh} & $128\times 400$ & CC & $0.4$ & $0.90$ & $1.37$ & $<0.01$ & $34.4^\circ$\\%$0.60$ \\ \textbf{Hi} & $128\times 400$ & CC & $1.5$ & $0.64$ & $1.91$ & $0.06$ & $36.1^\circ$\\%$0.63$ \\ \textbf{Hj} & $128\times 400$ & CC & $1.2$ & $0.84$ & $1.50$ & $0.01$ & $34.4^\circ$\\%$0.60$ \\ \textbf{Hk} & $128\times 400$ & CC & $4.6$ & $0.52$ & $2.20$ & $0.18$ & $37.2^\circ$\\%$0.65$ \\ \textbf{Hl} & $128\times 400$ & CC & $2.3$ & $0.85$ & $1.59$ & $0.02$ & $34.4^\circ$\\%$0.60$ \\ \textbf{Hm} & $128\times 400$ & CC & $9.3$ & $0.36$ & $2.42$ & $0.35$ & $40.7^\circ$\\%$0.71$ \\ \hline \hline \end{tabular} \end{table} \section{Force-free aligned rotator} \label{sec:forcefreealigned} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figures/figure1.jpg} \vspace{-15pt} \caption{Magnetic field component $B^\phi$ (normalised to $B_0$) and parallel force-free current $\mathbf{j}_\parallel/j_0$ for different values of the diffusivity parameter $\alpha\in\left[0.015,0.145,1.447,4.630,9.260,18.52,36.17\right]$ in the CC and in the LCR methods for a resolution of $N_r = 64$. In all cases, we employ $c_\Phi=1$, and for the run with the LCR method, we choose the extreme value $\alpha=36.17$. 
Black field lines are seeded at the same latitude on the stellar surface and may serve as reference points for comparison. The light cylinder position is indicated by a solid black line, and an estimate of the Y-point location by a dashed black line. } \label{fig:FLDSJDENS} \end{figure*} In this section, we evaluate and interpret results from an extensive array of simulations (Tab.~\ref{tab:mainmodels}). Specifically, Sects.~\ref{sec:econservation} and \ref{sec:lcluminosity} present results from 41 simulations in the \emph{ideal} FFE limit, spanning different resolutions as well as a parameter range for $\kappa_\Phi$ and $c_\Phi$. Fig.~\ref{fig:FLDSJDENS} marks the starting point, and the motivation, of our exploration: the FF equilibrium aligned rotator magnetosphere is vastly different when the LCR method is employed (top left panels), as opposed to a conservative evolution of the charge continuity equation in our CC scheme. A useful dimensionless parameter to classify our results is \begin{align} \alpha=\frac{\kappa_\Phi}{c_\Phi}\Delta h, \label{eq:alphadimensionless} \end{align} where $\Delta h=\text{min}\left[\Delta r, r \Delta\theta, r\sin\theta\Delta\phi\right]$. We provide a detailed interpretation of $\alpha$ throughout Sect.~\ref{sec:cleaningscales}. With an increasing damping coefficient $\kappa_\Phi$ (i.e. with increasing $\alpha$) of the hyperbolic/parabolic cleaning, the Y-point separating the closed zone from the equatorial current sheet moves closer to the central object and, thus, away from the LC. Equally, the amount of reconnecting field lines through the ECS notably decreases as $\alpha$ increases. The LCR method emerges as the \emph{most diffusive} limit of our parameter exploration. In comparable force-free simulations, \citet{Komissarov_2006MNRAS.367...19} finds results that are very similar to this limit.
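For reference, a minimal evaluation of the dimensionless parameter of Eq.~(\ref{eq:alphadimensionless}) near the stellar surface of the coarsest (L-resolution) grid; in axisymmetry, the $\phi$ extent drops out of $\Delta h$:

```python
import numpy as np

# Evaluation of alpha = (kappa_Phi / c_Phi) * dh near the stellar surface for
# the L-resolution grid; in axisymmetry the phi extent drops out of dh.
def alpha_param(kappa_phi, c_phi, dr, r, dtheta):
    dh = min(dr, r * dtheta)
    return kappa_phi / c_phi * dh

r_star = 1.0                 # code units
dr = r_star / 32.0           # L-resolution radial spacing
dtheta = np.pi / 100.0       # L-resolution angular spacing
a = alpha_param(kappa_phi=1.0, c_phi=1.0, dr=dr, r=r_star, dtheta=dtheta)
# at r = r_*, dh = dr = 1/32, so alpha ~ 0.03 for kappa_Phi = c_Phi = 1
```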
Observing significant reconnection beyond the Y-point, that work concludes that FFE has a tendency to facilitate the \textit{development of strongly dissipative current sheets}. Such solutions \textit{could only be relevant for magnetospheres with effective radiative cooling}. At the same time, one finds results in the literature where, without employing an explicit conservative evolution, the LCR method captures the equatorial current sheet of the pulsar magnetosphere rather well \citep{Spitkovsky2006,Etienne2017}. However, these works employ different techniques to preserve the $\nabla\cdot\mathbf{E}=\rho$ and $\nabla\cdot\mathbf{B}=0$ constraints on the electromagnetic fields, based on either staggering the electromagnetic fields or employing the four-vector potential together with the energy-flux formulation of FFE \citep{McKinney2006}. We believe that scrutinizing this discrepancy down to its finest technical detail is crucial to understand the limits of FFE, how these limits affect astrophysical modeling, and how they can be overcome. \subsection{Luminosity at the light cylinder} \label{sec:lcluminosity} The Poynting flux at the LC is the most commonly cited reference value for the modeling of FF magnetospheres of aligned rotators. \citet{Timokhin2006} presents a thorough review of the FF steady pulsar magnetosphere, and we refer to the same reference luminosity \citep[see also][]{Gruzinov_2005PhRvL..94b1101}, \begin{align} L_0 \approx \frac{\mu^2\Omega_p^4}{c^3}. \end{align} Previous results for time-dependent models show that the pulsar luminosity reached in the FF magnetosphere of an aligned pulsar \citep[with a Y-point located at the LC; note that equilibrium solutions with an inward-shifted Y-point exist;][]{Goodwin_2004MNRAS.349..213,Contopoulos_2005A&A...442..579,Timokhin2006} is $L_{\rm LC}\equiv L(r=r_{\rm LC})=\left(1.0\pm 0.1\right)L_0$ \citep[e.g.][]{Komissarov_2006MNRAS.367...19,Spitkovsky2006,Tchekhovskoy2013}.
In contrast, we shall argue that both the luminosity at the LC and the Y-point location depend on the (numerical) resistivity of the employed algorithm. Ultimately, this resistivity drives a slippage of the magnetic field lines at the Y-point as well as in the region of the ECS trailing it, and triggers their differential rotation in the magnetosphere. \cite{Contopoulos_2005A&A...442..579} explore the possibility of a differential rotational velocity of the open magnetic field lines to build solutions where the Y-point can be anywhere inside the LC. Figures~\ref{fig:Poynting} and~\ref{fig:PoyntingCSPEED} display the integrated Poynting flux through concentric spheres as a function of distance from the central object. These plots show very different behavior beyond the LC (as we will discuss in Sect.~\ref{sec:econservation}). Indeed, we observe a transition towards the Poynting flux of the LCR method, and its dependence on distance from the star, for decreasing values of $\alpha$ in the set of data represented in Fig.~\ref{fig:Poynting}. The results shown for the CC method in Figs.~\ref{fig:Poynting} and~\ref{fig:PoyntingCSPEED} are obtained for different combinations of the cleaning parameters $\kappa_\Phi$ and $c_\Phi$ (the rest of the algorithmic elements in our numerical code are fixed). The broad range of luminosities spans between $\sim L_0$ (for the smallest value of $\kappa_\Phi=0.1$) and $\sim 2.3 L_0$ (for $\kappa_\Phi\ge 128$). There is a notable difference regarding the pulsar luminosity among different methods and within the CC method with distinct numerical parameters. As we argue below, the differences arise from the change in the location of the Y-point. Increasing $\kappa_\Phi$ or decreasing $c_\Phi$ (both yielding an increase of $\alpha$) limits the spread of numerical errors buffered in the cleaning potential $\Phi$ and, thus, increases the luminosity at the LC.
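The integrated-flux diagnostic can be sketched as an axisymmetric quadrature of the radial Poynting flux over a sphere; the angular profile below is synthetic and only serves to check the quadrature against an analytic value:

```python
import numpy as np

# Axisymmetric quadrature of the radial Poynting flux over a sphere:
# L(r) = 2*pi*r^2 * Integral S_r(theta) sin(theta) d(theta).
# The angular profile S_r is synthetic, chosen to have an analytic integral.
def luminosity(r, theta, S_r):
    integrand = S_r * np.sin(theta)
    dtheta = theta[1] - theta[0]
    integral = 0.5 * np.sum(integrand[1:] + integrand[:-1]) * dtheta  # trapezoid
    return 2.0 * np.pi * r**2 * integral

theta = np.linspace(0.0, np.pi, 2001)
r = 5.0
S_r = np.sin(theta)**2 / r**2        # synthetic angular profile
L = luminosity(r, theta, S_r)
# analytic result: 2*pi * Integral_0^pi sin^3(theta) d(theta) = 8*pi/3
```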
We observe a small growth of the total luminosity for $\kappa_\Phi>64$ when the numerical resolution is doubled. We will argue in Sects.~\ref{sec:tradesecrets} and \ref{sec:discussion} that this is because very large values of $\alpha$ also require very large numerical resolution to properly resolve the very fast damping of divergence cleaning errors in time. The LCR method corresponds to the limit in which errors in charge conservation spread through the domain without constraint. Hence, changes in the electric field on the ECS performed to restore the magnetic dominance condition are communicated \textit{instantaneously} all over the stencil of the discretization of the $\nabla$ operator in a single time iteration of the method (coupling as many as 12 zones around a given numerical cell, if a fourth order finite differences formula is employed). Figure~\ref{fig:FLDSJDENS} lucidly illustrates that the Y-point moves away from the LC with increasing values of $\alpha$. A Y-point closer to the stellar surface boosts the electromagnetic luminosity of the pulsar \citep{Timokhin2006}. The amount of open magnetic field lines increases for a decreasing dimensionless distance $x_0\equiv r_{\rm Y}/r_{\rm LC}$ of the Y-point from the central object. This is directly related to the angular extension of the polar cap, which can be quantified by measuring the colatitude of the closed zone region (see below). In Fig.~\ref{fig:YpointLuminosity}, we present a large selection of simulated Y-point luminosities ($L_{\rm Y}$) vs. their estimated Y-point positions, excluding results that did not yield equilibrium magnetospheres for very small or very large values of $\alpha$ (see discussion in Sect.~\ref{sec:discussion}). To approximate the Y-point location, we evaluate the drift velocity along the equator and assign $x_0$ to the position where the velocity is comparable to a small parameter, which equals the grid spacing $\Delta r$ in magnitude.
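The Y-point locator described above can be sketched as follows; the equatorial drift profile below is synthetic, whereas in the simulations it comes from the measured drift velocity and the threshold equals $\Delta r$ in magnitude:

```python
import numpy as np

# Sketch of the Y-point locator: along the equator, pick the first radius
# (scanning outwards) where the drift speed falls below a small threshold.
# The profile below is synthetic; in the simulations the threshold equals
# the grid spacing Delta r in magnitude.
def locate_y_point(r, v_eq, eps):
    idx = np.argmax(np.abs(v_eq) <= eps)  # first index satisfying the cut
    return r[idx]

r = np.linspace(0.2, 2.0, 181)             # radius in units of r_LC
v_eq = np.clip(1.0 - r / 0.85, 0.0, None)  # synthetic profile, zero at r = 0.85
x0 = locate_y_point(r, v_eq, eps=0.01)
# recovers x0 = 0.85
```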
We employ a similar approximation technique for the evaluation of the closed zone colatitude $\theta_{\rm c}$ (Tabs.~\ref{tab:mainmodels} and~\ref{tab:ancilmodels}), with a suitably strong decay at the transition to the co-rotation region. We find that, approximately, $\sin\theta_{\rm c}\sim (r_*/r_{\rm Y})^{1/2} \sim 0.5x_0^{-1/2}$. Roughly in line with the findings of \citet{Timokhin2006}, we measure a correlation between the Y-point location and luminosity. For large values of $\alpha$, associated with smaller values of $x_0$, the errors in these measurements (averaged over several pulsar revolutions) become larger. These errors are likely linked to degrading numerical accuracy induced by the increased stiffness of the augmentation equations (\ref{eq:Psi}) and (\ref{eq:Phi}) for large values of $\alpha$ (see Sect.~\ref{sec:discussion}). We note that the correlation between $x_0$ and $L_{\rm Y}/L_0$ found numerically approaches the theoretical relation of \citet{Timokhin2006}, in which $L_{\rm Y}\propto x_0^{\lambda}$, with $\lambda = -2.065$, more so as the numerical resolution increases. Yet another interesting correlation derived from our results is that $L_{\rm Y}/L_0\approx 0.57(\sin\theta_{\rm c})^{0.19}$, which supports the claim that a larger luminosity at the Y-point correlates with a larger polar cap angle. Interpreting our findings, we cautiously suggest the possibility of obtaining quasi-stationary magnetospheres in which the Y-point is not located at the LC but inside it. The location of the Y-point depends on the diffusivity at the ECS. Smaller diffusivity moves the Y-point inward and increases the outgoing Poynting flux, as we show in Fig.~\ref{fig:YpointLuminosity}. The root of this dependency is the (magnetic field line) coupling between the field line footpoints at the stellar surface and part of the ECS (precisely, the part adjacent to the Y-point).
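The power-law fits behind these correlations amount to linear least squares in log-log space; a sketch with noiseless synthetic data at the exponent $\lambda=-2.065$ of the theoretical relation illustrates the recovery:

```python
import numpy as np

# The power-law fits L_Y/L_0 ~ C * x0^lambda reduce to linear least squares in
# log-log space. Noiseless synthetic data with lambda = -2.065 (the exponent
# of the theoretical relation quoted above) checks the recovery.
def fit_powerlaw(x, y):
    slope, intercept = np.polyfit(np.log(x), np.log(y), 1)
    return slope, np.exp(intercept)

x0 = np.linspace(0.5, 1.0, 20)
L = 1.0 * x0 ** (-2.065)
lam, C = fit_powerlaw(x0, L)
# recovers lam = -2.065 and C = 1
```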
The existence of such a region of coupling between the stellar surface and the ECS was suggested, e.g., by \citet{Contopoulos_2019MNRAS.482L..50} for stationary hybrid solutions combining FFE and a non-ideal ECS. Our time-dependent models reproduce many aspects of these equilibrium states. In the right panel of Fig.~\ref{fig:YpointLuminosity} we evaluate the specific dependency of the luminosity on the diffusion parameter $\alpha$. A power law of the form $L_{\rm Y}/L_0\propto \alpha^{0.1}$ is found from a fit to our computed models. The parameter $\alpha$ controls the numerical diffusion of the electric field, if we assume that numerical diffusion mimics the physical one to some extent \citep[i.e. $\alpha\propto 1/\eta$, where $\eta$ is the resistivity, see discussion in Sect.~\ref{sec:discussion} and][]{Mahlmann2020c}. Thus, our results suggest that the luminosity of the aligned rotator is inversely proportional to the resistivity at the ECS. Nevertheless, because of the small power-law index in the aforementioned $\alpha$-luminosity relation, large changes in the resistivity are required to produce significant variations of the luminosity at the LC. \subsection{Energy dissipation beyond the Y-point} \label{sec:econservation} \citet{Tchekhovskoy2013} provide a reliable benchmark for the convergence of pulsar magnetosphere modeling and its comparison between FFE and MHD. We emphasise the following property of their results: MHD models show a decay of the Poynting flux beyond the light cylinder that diminishes with higher resolution. However, their FFE models of the aligned rotator magnetosphere show $43\%-50\%$ dissipation, converging to a stable value ($43\%$) at the highest resolution. We observe such behavior for the case of LCR, as marked by the thick red lines in Fig.~\ref{fig:Poynting}. In all other cases using the CC method, the dissipation of Poynting flux along the radial direction is significantly smaller.
This observation is also supported by the field line images of Fig.~\ref{fig:FLDSJDENS}. There, field lines notably reconnect beyond the LC in the limit of low $\alpha$, while the cleaning of numerical errors shapes the ECS over longer distances. Stationary solutions of the force-free aligned rotator magnetosphere involving an ECS (where the FFE approximation does not hold) have been obtained \citep[e.g.][]{Contopoulos_2007A&A...466..301, Contopoulos_2014ApJ...781...46, Contopoulos2020}. The configurations built by \citet{Contopoulos2020} show that there is a region of the ECS that may extend beyond the LC, where magnetic field lines can still be closed. Our time-dependent models certainly reveal such a region, and its properties depend on the model parameters, which ultimately determine the level of (numerical) dissipation of the method. Small-scale structures in the equatorial plane (such as plasmoid-like formations) do not automatically appear by increasing resolution. Rather, it seems that an efficient damping of charge conservation errors allows them to emerge. Remarkably, the LCR method attains not only the smallest luminosity $L_{\rm LC}$, but, most importantly, also the largest relative decrease of the luminosity, $\Delta L/L_{\rm LC} = \left[L(r=5r_{\rm LC})- L_{\rm LC}\right]/L_{\rm LC}$, between the light cylinder and $r=5r_{\rm LC}$. Taking $\Delta L/L_{\rm LC}$ as a measure of the diffusivity of the algorithm, we conclude that the LCR method is more diffusive than the CC method. This conclusion is numerically robust, since doubling the spatial resolution does not notably change the dissipation beyond the Y-point observed in the individual models. Figs.~\ref{fig:Poynting} and~\ref{fig:PoyntingCSPEED} show a significant variability of the Poynting flux beyond the light cylinder for small values of $\kappa_\Phi$.
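The dissipation diagnostic just defined can be sketched as an interpolation of a radial luminosity profile (the linear profile below is synthetic; the quantity is negative for decaying profiles):

```python
import numpy as np

# Sketch of the dissipation diagnostic
# Delta L / L_LC = [L(5 r_LC) - L_LC] / L_LC (negative for decaying profiles),
# interpolated from a radial luminosity profile. The profile is synthetic.
def relative_decay(r, L, r_lc):
    L_lc = np.interp(r_lc, r, L)
    L_far = np.interp(5.0 * r_lc, r, L)
    return (L_far - L_lc) / L_lc

r = np.linspace(1.0, 10.0, 200)   # radius in units of r_LC
L = 1.0 - 0.075 * (r - 1.0)       # synthetic profile: L(1) = 1, L(5) = 0.7
decay = relative_decay(r, L, r_lc=1.0)
# decay = (0.7 - 1.0)/1.0 = -0.3
```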
For most values of $\kappa_\Phi$, the Poynting flux beyond the LC is not monotonically decaying, but shows spikes associated with the ejection of plasmoid-like structures that move \emph{outwards} along the ECS (see Fig.~\ref{fig:FLDSJDENS}). The fact that these blobs of strong currents move outwards even if they are produced inside the LC is relevant: they do not contribute to the growth of the closed magnetospheric region. This finding is in contrast to \citet{Komissarov_2006MNRAS.367...19}, who claims that part of the plasmoids will move inwards, increasing the size of the closed magnetosphere with time and, hence, pushing the Y-point towards the LC. This assertion has also led \cite{Spitkovsky2005} and \cite{McKinney2005} to suggest that all configurations with $x_0<1$ would be unstable or transitory \citep[see also][]{Kalapotharakos_2009A&A...496..495,Yu2011,Parfrey_2012MNRAS.423.1416}. We do not notice such behavior here and, thus, our results suggest that closed magnetospheric configurations with a Y-point inside the LC may survive for many dynamical times. In order to compute $\Delta L/L_{\rm LC}$ when it is spatially (and temporally) variable (i.e., using the CC method with small values of $\alpha$), it is necessary to smooth out the data by taking a suitable moving average. Larger values of $\kappa_\Phi$ or smaller advection speeds $c_\Phi$ (i.e., larger values of $\alpha$) yield magnetospheres with smaller Poynting flux dissipation beyond the LC. We note that the dissipation takes place at the ECS, where the force-free approximation is not strictly valid, specifically because the electric field strength becomes larger than the magnetic one and non-ideal electric fields with $\mathbf{E}\cdot\mathbf{B}\neq 0$ become dynamically important. Hence, the different amounts of relative dissipation are closely connected with the numerical handling of the force-free constraints and the propagation of errors from the regions where FFE is breached.
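The smoothing step mentioned above can be sketched as a box-kernel convolution (the window size is an illustrative choice):

```python
import numpy as np

# Box-kernel moving average used to smooth variable Poynting-flux profiles
# before measuring Delta L; the window size is an illustrative choice.
def moving_average(y, window):
    kernel = np.ones(window) / window
    return np.convolve(y, kernel, mode="valid")

y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sm = moving_average(y, window=3)
# sm = [2.0, 3.0, 4.0]
```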
Extreme values of $\alpha$ degrade the performance of the hyperbolic/parabolic cleaning in different ways: weak damping lets the charge and the corresponding electric field divergence become misaligned over time, whereas very strong damping cleans errors very rapidly but effectively decouples the evolution equation of $\Phi$ from the underlying system of physical balance laws (in this case, Maxwell's equations) and renders the scalar potential dynamically negligible. As cases of intermediate $\alpha$ do not show this variability, we empirically demonstrate that there is an optimal range of values of $\kappa_\Phi$, corresponding to $\alpha\sim 1$. In this range, the CC method works very efficiently, effectively minimizing the dissipation at the ECS. We find strong evidence for a much better conservation of energy flux beyond the LC than what is quoted throughout the literature, where results seem to correspond to the most diffusive case of our parameter exploration, that is, the LCR method. As a matter of fact, for the intermediate values of $\alpha\in\left[1.45, 4.63\right]$, the current $\mathbf{j}_\parallel$ is very efficiently suppressed in the equatorial region beyond the Y-point (Fig.~\ref{fig:FLDSJDENS}). Indeed, this explains the significantly reduced dissipation beyond the LC in the models using the CC method and optimal values of $\alpha\sim 1$. For the smallest values of $\alpha$, the variability timescales are of the order of the polar cap light-crossing time, namely $\sim r_*\theta_{\rm c}/c\sim r_*/2c$ (see also the discussion in Sect.~\ref{sec:diffusivitydiscuss}). \begin{figure} \centering \includegraphics[width=0.47\textwidth]{figures/figure2.pdf} \vspace{-6pt} \caption{Poynting flux as a function of the distance from the central rotator. We present results after $16.7$ rotation periods for different damping constants $\kappa_\Phi$ (and a constant advection parameter $c_\Phi=1$) and different resolutions.
The tests employing the LCR method are indicated by thick red lines.} \label{fig:Poynting} \end{figure} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{figures/figure3.pdf} \vspace{-6pt} \caption{As Fig.~\ref{fig:FLDSJDENS} but for selected damping constants $\kappa_\Phi$ and different choices for the advection parameter $c_\Phi$.} \label{fig:PoyntingCSPEED} \end{figure} \begin{figure*} \centering \includegraphics[width=0.85\textwidth]{figures/figure4.pdf} \vspace{-6pt} \caption{Dependence of the luminosity on the Y-point location and diffusion parameter $\alpha$. For the Y-point location (left panel), measurements from simulations at five different moments in time are averaged (standard deviation indicated in error bars). Fit functions are derived for all models that find a reliable equilibrium (solid dots) and are represented by colored dashed lines. As a reference, we compare these results to the corresponding relation in \citet[][black line]{Timokhin2006}. Errors of these fit functions are represented by the lightly shaded regions of the respective color. Outliers that do not find stable equilibria are denoted by open colored circles. The right panel shows the normalised luminosity measured at the light cylinder, evaluated against the diffusion parameter $\alpha$. A fit function to the data is shown with a thick solid line. Uncertainties of the fit function are given as a shaded region around the central line.} \label{fig:YpointLuminosity} \end{figure*} \section{Sources of dissipation} \label{sec:tradesecrets} In this part of the manuscript, we present an array of ancillary high-resolution simulations that will be key for assessing the role of numerical and/or phenomenological diffusivity in shaping the overall magnetospheric structure, and for disclosing several \emph{trade secrets} \citep[cf.][]{Contopoulos2016} in the dynamical modeling of aligned rotator magnetospheres.
In order to speed up the calculations at the higher numerical resolution (i.e. with a grid spacing $\Delta r=r_*/128$, $\Delta\theta=\pi/400$), we employ a radially re-scaled coordinate system for the remainder of this section. Specifically, for $r\gtrsim 3r_{\rm LC}$, the grid spacing increases by a factor of $a=1.001$ from one radial grid point to the next. This helps us to reduce the number of points enclosed by the simulation domain while keeping sufficient resolution around the central object. Besides this change introduced for numerical convenience, we further replace the boundary layer described in Sect.~\ref{sec:simulationsetup} by a perfect-conductor boundary condition within the Riemann solver \citep[cf.][]{Munz2000}. Specifically, at the inter-cell face where the stellar surface is located, we set the following 'left' (L) state of the Riemann problem (corresponding to the interior of the star): \begin{align} \Psi_L &= \Psi_R\\ \Phi_L &= -\Phi_R\\ \mathbf{E}_L &= -\mathbf{E}_R + 2(\mathbf{E}_R \cdot \mathbf{\hat r}) \mathbf{\hat r}\\ \mathbf{B}_L &= +\mathbf{B}_R - 2(\mathbf{B}_R \cdot \mathbf{\hat r}) \mathbf{\hat r} , \label{eq:BCs} \end{align} where $\mathbf{\hat r}$ is the unit radial vector (normal to the stellar surface) and R denotes the respective 'right' state. The reason for considering different boundary conditions is that we seek stationary (or nearly stationary) magnetospheric configurations, in which case the boundary conditions almost completely determine the structure of the magnetosphere. Hence, using a variety of boundary conditions allows us to gauge their influence on the most salient features of our solutions, namely, the location of the Y-point and the closely related dissipation at the ECS. As we shall see, the boundary strategies used throughout this work produce consistent results. Table~\ref{tab:ancilmodels} shows a list of all the simulations that we describe in the following subsections.
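The mirror conditions above can be sketched as follows (the function name and the flat 3-vector representation of the fields are our own illustrative choices, not part of the production code):

```python
import numpy as np

def perfect_conductor_left_state(psi_R, phi_R, E_R, B_R, n):
    """Mirror the 'right' state across the stellar surface with unit normal n:
    the tangential electric field and the normal magnetic field change sign,
    so they cancel at the inter-cell face of the Riemann problem."""
    psi_L = psi_R
    phi_L = -phi_R
    E_L = -E_R + 2.0 * np.dot(E_R, n) * n  # keeps normal E, flips tangential E
    B_L = B_R - 2.0 * np.dot(B_R, n) * n   # flips normal B, keeps tangential B
    return psi_L, phi_L, E_L, B_L
```

Averaging the L and R states at the face then leaves only the normal electric field and the tangential magnetic field, as expected for a perfectly conducting surface.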
\begin{table} \centering \caption{Properties of the set of high-resolution simulations corresponding to models described in Sect.~\ref{sec:tradesecrets}. We provide the label of the respective model, the strategies used to deal with violations of the force-free conditions, the strategies to model the electric charge, the phenomenological resistivity induced by a suitable current, the Y-point location, the luminosity at the LC, its relative decay up to a distance of $5r_{\rm LC}$, and the colatitude of the closed zone separatrix at the stellar surface. Models $\mathbf{Bg}$ to $\mathbf{Br}$ employ $\kappa_I=8$ in the parametrization of Eq.~(\ref{eq:FFResCurrentPerpendicular}).} \label{tab:ancilmodels} \begin{tabular}{P{0.2cm}P{1.2cm}P{0.4cm}P{0.5cm}P{0.5cm}P{0.7cm}P{0.8cm}P{0.4cm}} \hline \hline & $\mathbf{E}\cdot\mathbf{B}$ & $\rho$ & $\alpha$ & $x_0$ & $L_{\rm Y}/L_0$ & $\Delta L/L_{\rm Y}$ & $\theta_{\rm pc}$\\ \hline \textbf{Ba} & algebraic & LCR & $4.6$ & $1.04$ & $0.96$ & $0.29$ & $31.5^\circ$\\%$0.55$ \\ \textbf{Bb} & algebraic & CC & $4.6$ & $0.69$ & $2.10$ & $0.03$ & $37.2^\circ$\\%$0.65$ \\ \textbf{Bc} & algebraic & HCC & $4.6$ & $0.68$ & $2.04$ &$<0.01$ & $36.7^\circ$\\%$0.64$ \\ \textbf{Bd} & algebraic & LCR & $0.7$ & $1.02$ & $0.96$ & $0.32$ & $31.5^\circ$\\%$0.55$ \\ \textbf{Be} & algebraic & CC & $0.7$ & $0.83$ & $1.52$ & $0.02$ & $34.4^\circ$\\%$0.60$ \\ \textbf{Bf} & algebraic & HCC & $0.7$ & $0.74$ & $1.49$ & $<0.01$ & $35.0^\circ$\\%$0.61$ \\ \hline \textbf{Bg} & $\eta = 0.0$ & LCR & $4.6$ & $1.04$ & $0.95$ & $0.31$ & $31.5^\circ$\\%$0.55$ \\ \textbf{Bh} & $\eta = 0.0$ & HCC & $4.6$ & $0.81$ & $1.68$ & $<0.01$ & $34.4^\circ$\\%$0.60$ \\ \textbf{Bi} & $\eta = 0.0$ & LCR & $0.7$ & $1.04$ & $0.95$ & $0.30$ & $31.5^\circ$\\%$0.55$ \\ \textbf{Bj} & $\eta = 0.0$ & HCC & $0.7$ & $0.94$ & $1.11$ & $<0.01$ & $31.5^\circ$\\%$0.55$ \\ \hline \textbf{Bk} & $\eta = 1.0$ & HCC & $0.7$ & $0.91$ & $1.15$ & $0.02$ & $31.5^\circ$\\%$0.55$ \\ \textbf{Bl} & $\eta = 
10^{-1}$ & HCC & $0.7$ & $0.93$ & $1.10$ & $0.02$ & $31.5^\circ$\\%$0.55$ \\ \textbf{Bm} & $\eta = 10^{-2}$ & HCC & $0.7$ & $0.95$ & $1.10$ & $0.01$ & $30.9^\circ$\\%$0.54$ \\ \textbf{Bn} & $\eta = 10^{-3}$ & HCC & $0.7$ & $0.97$ & $1.10$ & $0.01$ & $30.9^\circ$\\%$0.54$ \\ \textbf{Bo} & $\eta = 1.0$ & CC & $0.7$ & $0.90$ & $1.30$ & $<0.01$ & $32.7^\circ$\\%$0.57$ \\ \textbf{Bp} & $\eta = 10^{-1}$ & CC & $0.7$ & $0.91$ & $1.29$ & $<0.01$ & $32.7^\circ$\\%$0.57$ \\ \textbf{Bq} & $\eta = 10^{-2}$ & CC & $0.7$ & $0.91$ & $1.28$ & $0.01$ & $32.7^\circ$\\%$0.57$ \\ \textbf{Br} & $\eta = 10^{-3}$ & CC & $0.7$ & $0.90$ & $1.29$ & $0.02$ & $32.7^\circ$\\%$0.57$ \\ \hline \textbf{Bs} & algebraic & LCR & $0.0$ & $0.47$ & $1.34$ & $0.42$ & $31.5^\circ$\\%$0.55$ \\ \textbf{Bt} & algebraic & LCR & $0.1$ & $0.52$ & $1.03$ & $0.32$ & $31.5^\circ$\\%$0.55$ \\ \textbf{Bu} & algebraic & LCR & $2.3$ & $1.02$ & $0.96$ & $0.32$ & $31.5^\circ$\\%$0.55$ \\ \textbf{Bv} & algebraic & LCR & $9.3$ & $1.02$ & $0.96$ & $0.32$ & $31.5^\circ$\\%$0.55$ \\ \textbf{Bw} & algebraic & LCR & $18$ & $1.02$ & $0.96$ & $0.31$ & $31.5^\circ$\\%$0.55$ \\ \hline \hline \end{tabular} \end{table} \subsection{The role of the violations of FFE constraints} \label{sec:focuseddominance} \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{figures/figure5.jpg} \caption{Comparison of different treatments of the charge $\rho$ for some configurations specified in Tab.~\ref{tab:ancilmodels}. We display the toroidal magnetic field in the left panel of each respective model. The corresponding right panels visualise the force-free violations accumulating during one full time-step of the FFE integration. We show the growth of non-ideal electric fields (absolute value) emerging transiently due to the violation of the $\mathbf{E}\cdot\mathbf{B}=0$ constraint by blue colors, and locations where the magnetic dominance condition is breached by yellow contours. 
The vertical solid and dashed black lines denote the position of the LC and of the Y-point, respectively. } \label{fig:FFCOND_COMPARE} \end{figure*} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{figures/figure6.pdf} \vspace{-6pt} \caption{Dissipation via different channels of numerical diffusion as a function of the dimensionless parameter $\alpha$ (Eq.~\ref{eq:alphadimensionless}) in a quasi-equilibrium state at $t\approx 7.5t_p$. An estimate of the favorable range for $\alpha$ that minimises the dissipation by force-free violations is indicated as a light blue region. Different colors of the markers indicate different resolutions and treatment of charge density evolution. The shapes of the markers vary with different $c_\Phi$. We provide reference values of the respective dissipation channel for the LCR method of highest $\alpha$ as star-shaped data points.} \label{fig:QDissipation} \end{figure} We compare the final (quasi-)equilibrium of three different numerical treatments of the electric charge in the magnetosphere in models $(\mathbf{Ba})-(\mathbf{Bf})$. Since we have introduced changes in the boundary conditions and in the numerical grid, we first verify that the results of the previous section are recovered once the aforementioned modifications with respect to the models of Sect.~\ref{sec:forcefreealigned} are incorporated. First, we use the LCR method throughout the entire magnetosphere for two distinct values of $\alpha$ (models $\mathbf{Ba}$ and $\mathbf{Bd}$). Second, we use the CC method in models $\mathbf{Bb}$ and $\mathbf{Be}$. Finally, we apply the HCC method in models $\mathbf{Bc}$ and $\mathbf{Bf}$ using different values of $\alpha$. Figure~\ref{fig:FFCOND_COMPARE} demonstrates that the cases where charge is not conservatively evolved in the entire domain (models $\mathbf{Ba}$ and $\mathbf{Bd}$) produce congruent results independently of the cleaning parameter $\alpha$.
Such solutions have a large number of reconnecting field lines beyond the light cylinder and a luminosity at $r_{\rm LC}$ that approaches $L_{\rm LC}/L_0\sim 1$. The small relative differences observed in the position of the Y-point ($\lesssim 2\%$) and the insignificant change in $L_{\rm LC}/L_0$, compared with the $\simeq 10\%$ relative difference in $\Delta L/L_{\rm LC}$, highlight the fact that the variation of $\alpha$ (by a factor $\approx 6.6$) mostly affects the dissipation dynamics of the ECS and its neighborhood. When charge is evolved by a separate continuity equation, any misalignment of the electric field divergence and the charge density is cleaned out by the scalar potential $\Phi$. As we presented above (Sect.~\ref{sec:forcefreealigned}), altering the cleaning parameter $\alpha$ shifts the position of the Y-point and changes the Poynting flux at $r_{\rm LC}$. While most of the cases employing the CC method have a well-maintained current sheet beyond the Y-point (contrasting with the reconnection in models $\mathbf{Ba}$ and $\mathbf{Bd}$), excessive cleaning (driven by large values of $\alpha$) can induce variability of the current sheet along the vertical direction. This is apparent when comparing models $\mathbf{Bb}$ and $\mathbf{Be}$, but can also be observed for large values of $\alpha$ in Fig.~\ref{fig:FLDSJDENS}. As we see when comparing models $\mathbf{Bb}$ and $\mathbf{Bc}$, using the HCC scheme stabilises the current sheet on the equator, damping vertical displacements in the ECS. Inside the LC, models $\mathbf{Bc}$ and $\mathbf{Bf}$ only slightly differ from their CC counterparts. The location of the Y-point and the luminosity differ by less than $\simeq 2\%$ between HCC and CC models for $\alpha=0.7$, while for $\alpha=4.6$, HCC models show values of $x_0$ and $L_{\rm LC}/L_0$ between those of the CC and LCR models (Tab.~\ref{tab:ancilmodels}).
Beyond the LC, the differences between HCC and CC models are driven by insufficient damping, especially in the model with larger $\alpha$ ($\mathbf{Bb}$), where the ECS is distorted and the violations of the magnetic dominance condition are patchier and intermittent along it. The fact that the HCC models $\mathbf{Bc}/\mathbf{Bf}$ look more like $\mathbf{Bb}/\mathbf{Be}$ than like $\mathbf{Ba}/\mathbf{Bd}$, respectively, results from a subtle combination of two effects. On the one hand, the corrections to the electric fields after violations of the magnetic dominance condition are larger when using a local reconstruction of charge, either globally (LCR) or locally (HCC), than in CC models. This seems natural as the largest charges are created as a result of the sharp discontinuities of the electric field across the ECS, where LCR and HCC models undergo the same corrections in the charge density. On the other hand, the commonality in the (hyperbolic) evacuation of the errors triggered by violations of the FFE constraints at the ECS makes HCC and CC models more similar to each other than to LCR models (the feedback of $\nabla\cdot\mathbf{E}$ on $\Psi$ is significantly reduced because of the explicit form in which $\rho$ is constrained in the LCR method; Sect.~\ref{sec:numericalmethod}). We can, thus, conclude that the large diffusivity in the case of the LCR method is primarily induced by the corrections to the electric fields after violations of the magnetic dominance condition. The (algebraic) correction of violations of the force-free conditions has different consequences for each of the constraints. Deviations from the $\mathbf{E}\cdot\mathbf{B}=0$ condition build up continuously and are distributed throughout the domain, though they are larger close to current sheets. We visualise this in the panels of Fig.~\ref{fig:FFCOND_COMPARE} that show the $\mathbf{E}\cdot\mathbf{B}$ errors normalised to the local magnetic field strength in a blue gradient.
Violations include both very small inaccuracies that result from truncation errors of the algorithm, and strong non-ideal electric fields emerging, for example, at current sheets. In models employing the LCR method ($\mathbf{Ba}$ and $\mathbf{Bd}$), the hyperbolic part of the cleaning still operates to drive violations of the $\mathbf{E}\cdot\mathbf{B}$ constraint away from the ECS (we note a \emph{wave} pattern, i.e. concentric structures, apparently emerging from the equatorial plane at about $r\approx 1.2 r_{\rm LC}$ in the respective panels of Fig.~\ref{fig:FFCOND_COMPARE}). That pattern persists in models $\mathbf{Bb}$ and $\mathbf{Be}$, as well as in $\mathbf{Bc}$ and $\mathbf{Bf}$, where we employed the CC and the HCC method. However, in these cases, the violations of $\mathbf{E}\cdot\mathbf{B}$ can affect a larger region, especially for small values of $\alpha$. The mid-panel of Fig.~\ref{fig:QDissipation} shows that models employing the CC method systematically dissipate less energy by Ohmic processes (here approximated by the amount of current parallel to the electric field added up over the whole domain in a single timestep) than models using the LCR method for $\alpha\lesssim 5$. The condition $\mathbf{E}^2-\mathbf{B}^2\leq 0$ is only relevant when significant non-ideal electric fields have built up, in the setup at hand, at the ECS, as lucidly illustrated by the yellow contours in Fig.~\ref{fig:FFCOND_COMPARE}. It is tempting to associate the high diffusivity observed for the setups using the LCR method with the violations of the magnetic dominance. Looking at the time evolution of the violation of the magnetic dominance constraint, we observe that, in models implementing the LCR method everywhere, regions of $\mathbf{E}^2>\mathbf{B}^2$ consistently cover the ECS in the range $x_0\lesssim r/r_{\rm LC}\lesssim 1.7$ (and likely beyond).
In contrast, models employing the CC and HCC methods display an intermittent set of spots of smaller extension, where $\mathbf{E}^2>\mathbf{B}^2$ in the equatorial region beyond the Y-point. The top panel of Fig.~\ref{fig:QDissipation} displays the electric energy subtracted from the whole domain during one iteration of the time-integrator. Contributions to this channel of diffusion result from mesh cells where the magnetic dominance constraint is breached. In practice, we calculate averages from several snapshots of the results to ensure that the quoted dissipation estimates have (more or less) stabilised. LCR models computed at the highest resolution ($N_r=128$; magenta squares) systematically dissipate more (or at least about the same) energy by restoring the magnetic dominance condition than models computed with the CC method and the same resolution (dark green symbols) for $\alpha\lesssim 3$. Similarly to the electric energy dissipation in the correction of the $\mathbf{E}\cdot\mathbf{B}=0$ constraint, there is an interval $0.5 \lesssim \alpha\lesssim 3$, where dissipative losses induced by the restoration of the magnetic dominance condition are minimised. In contrast to the trend found for the CC method, the dissipation triggered by violations of the FFE constraints is quite insensitive to $\alpha$ when evaluated for the LCR method (see magenta symbols in the upper and mid panels of Fig.~\ref{fig:QDissipation}). In order to understand this behavior, we have computed the electric energy dissipated by the hyperbolic/parabolic cleaning algorithm, given by $|c_\Phi^2\mathbf{E}\cdot\nabla \Phi|$ and displayed in the lower panel of Fig.~\ref{fig:QDissipation}. As long as the gradients of the cleaning potential $\Phi$ are large enough, increasing $\alpha$ (i.e. damping the divergence errors faster than shifting them away) reduces the diffusion through this channel, independently of the method used to evolve or reconstruct the charge density (LCR or CC).
The dissipation through this channel is significantly larger than that driven by violations of the FFE constraints for $\alpha\lesssim 5$ and dominates the overall diffusion of electromagnetic energy in the magnetosphere. Above that value of $\alpha$, the total dissipated energy in the $|c_\Phi^2\mathbf{E}\cdot\nabla \Phi|$ channel is smaller than in the other two channels (for this, we compare the numerical values in the top and mid panels of Fig.~\ref{fig:QDissipation} to those in the bottom panel). From that point on, the cleaning algorithm in the CC method cannot efficiently evacuate and damp the errors induced in $\nabla\cdot\mathbf{E}$ by the restoration of the FFE constraints. Thus, the energy dissipation in models employing the CC method becomes dominated by the violation of the FFE constraints above a certain value of $\alpha$. This reasoning substantiates our claim about the existence of an optimal interval $0.7\lesssim\alpha\lesssim 3$, where the overall dissipation in the magnetosphere is smaller using the CC method than the LCR one. The insensitivity to $\alpha$ of the dissipation due to violations of the FFE constraints in LCR models (for a fixed numerical resolution) can be explained by the limited evacuation of FFE violations, e.g., from current sheets. As $\rho\approx\nabla\cdot\mathbf{E}$ (up to discretization errors), the right-hand side of Eq.~\eqref{eq:Phi} is significantly smaller than in the models equipped with the CC method; the two mechanisms (cleaning of divergence errors and correcting FFE violations) operate independently. \subsubsection{Algebraic corrections versus driver currents} \label{sec:drivingfocus} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figures/figure7.jpg} \vspace{-18pt} \caption{Comparison of magnetospheric dynamics for different values of $\eta$ in the current given by Eq.~(\ref{eq:FFResCurrentPerpendicular}).
From \textit{left} to \textit{right}, the phenomenological resistivity is increasing as $\eta\in\left\{10^{-3},10^{-2},10^{-1},1.0\right\}$. The background color denotes the (drift) velocity component $v^y$, that is, the velocity component perpendicular to the ECS. The separatrices where $v^y=0$ are highlighted by magenta contours. The top row comprises models implementing the HCC method, the bottom row those with fully conservative evolution of the charge density $\rho$ (CC models).} \label{fig:ETA_COMPARE} \end{figure*} We have seen in the previous section that violations of the FFE constraints are fundamentally connected to the magnetospheric structure. It is, therefore, natural to assess whether different methods to restore the FFE constraints impact the overall dissipation and topology of the magnetosphere. Throughout the literature, various techniques are used to correct any deviations from the $\mathbf{E}\cdot\mathbf{B}=0$ condition. These techniques split into those that apply algebraic resets of the electric field at each violation (as done in the models shown so far), and those that modify the Ohm's law encoded in a suitable current to drive the system into a force-free state \citep[cf.][and references therein]{Mahlmann2020b}. One may justifiably suspect that all the variability in the luminosity identified in Sect.~\ref{sec:forcefreealigned} stems from this critical ingredient of force-free codes, rather than from their connection to charge conservation. To dissect this subtle issue, we conduct simulations of the models $(\mathbf{Bg})-(\mathbf{Bj})$. Without algebraic resets of the $\mathbf{E}\cdot\mathbf{B}=0$ condition, we employ $\kappa_I=8$ in the force-free current presented in Eq.~(\ref{eq:FFResCurrentPerpendicular}).
In this way, our method is comparable to the ones that employ driving currents, namely, a formulation of Ohm's law with a finite resistivity $\sigma_\parallel$ that acts along the direction of the magnetic field \citep[e.g.,][]{Alic2012,Komissarov2004}. In Fig.~\ref{fig:FFCOND_COMPARE}, we contrast the magnetospheric equilibrium found when combining a suitable current to drive $\mathbf{E}\cdot\mathbf{B}\rightarrow 0$ with the LCR method (models $\mathbf{Bg}$ and $\mathbf{Bi}$), and with the HCC method (models $\mathbf{Bh}$ and $\mathbf{Bj}$). One straightforward observation is that a local reconstruction of charge remains the most diffusive limit in view of the much larger decrease of luminosity with distance measured by $\Delta L/L_{\rm LC}$ (Tab.~\ref{tab:ancilmodels}). For the HCC models $\mathbf{Bh}$ and $\mathbf{Bj}$, the luminosity decrease beyond the LC is as small as in the case in which algebraic cutbacks of the electric field enforce $\mathbf{E}\cdot\mathbf{B}=0$ (models $\mathbf{Bc}$ and $\mathbf{Bf}$). The luminosity of the HCC models that implement a driving current to enforce $\mathbf{E}\cdot\mathbf{B}\rightarrow 0$ decreases by $\sim 18\%-25\%$ with respect to models using algebraic resets of the electric field (models $\mathbf{Bc}$ and $\mathbf{Bf}$). This is a direct consequence of the larger value of $x_0$ in models $\mathbf{Bh}$ and $\mathbf{Bj}$, which place their Y-points closer to the LC. In contrast, models using the LCR method display only a small change of the luminosity ($\lesssim 1\%$) when comparing models with different strategies to enforce $\mathbf{E}\cdot\mathbf{B}=0$ (i.e. models $\mathbf{Ba}/\mathbf{Bd}$ vs models $\mathbf{Bg}/\mathbf{Bi}$). Thus, the smaller luminosity of models using the LCR method is not fully accounted for by the rather harsh algebraic corrections we apply to non-ideal electric fields.
\subsubsection{Diffusivity models beyond the light cylinder} \label{sec:diffusivityfocus} One technique that is often associated with the capacity of an FFE scheme to resolve current sheets is a finite resistivity induced by a suitably chosen Ohm's law \citep{Alic2012,Parfrey2017}. In \citet{Mahlmann2020c}, we explore the action of the current prescribed by Eq.~(\ref{eq:FFResCurrentPerpendicular}) during the development of 2D tearing modes. In this section, we review the impact of such phenomenological resistivities on the current sheet of the aligned rotator magnetosphere. To this purpose, we prescribe the following phenomenological resistivity: \begin{align} \eta=\eta_{\rm bg}+(\eta_{\rm d}-\eta_{\rm bg})\, \frac{1+\tanh(r_{\rm cyl}-r_{\rm LC})}{2} . \end{align} This resistivity approaches the driving value $\eta_{\rm d}$ for $r_{\rm cyl}>r_{\rm LC}$, where $r_{\rm cyl}=r\sin\theta$ is the cylindrical radius. In other words, inside the LC, all tests have the same (small) background resistivity $\eta_{\rm bg}$, where we use $\eta_{\rm bg}=10^{-5}\ll \eta_{\rm d}$. In Fig.~\ref{fig:ETA_COMPARE} we compare the magnetospheric evolution for different values of $\eta_{\rm d}$ for the HCC and CC models (\textbf{Bk})-(\textbf{Bn}) and (\textbf{Bo})-(\textbf{Br}), respectively. As also listed in Tab.~\ref{tab:ancilmodels}, we observe, very much as in the previous section, that the Y-point is closer to the LC, and the luminosity at the LC is lower, for models using the HCC method. Regarding the dissipation of Poynting flux beyond the LC, the whole series of models (\textbf{Bk})-(\textbf{Br}) displays similar and relatively low values of $\Delta L/L_{\rm LC}$, especially if we compare these values to the analogous ones obtained with the LCR method and $\eta=0$ (models \textbf{Bg} and \textbf{Bi}).
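For reference, the tanh switch introduced above can be evaluated with a minimal sketch (the numerical values of $r_{\rm LC}$ and $\eta_{\rm d}$, and the implicit transition width of one code length unit, are illustrative assumptions):

```python
import numpy as np

R_LC = 10.0      # light-cylinder radius in code length units (illustrative)
ETA_BG = 1.0e-5  # background resistivity inside the LC
ETA_D = 1.0e-1   # driving resistivity beyond the LC (illustrative choice)

def eta_profile(r, theta):
    """Phenomenological resistivity: eta_bg inside the light cylinder,
    smoothly switching to eta_d beyond it (in cylindrical radius)."""
    r_cyl = r * np.sin(theta)
    return ETA_BG + (ETA_D - ETA_BG) * 0.5 * (1.0 + np.tanh(r_cyl - R_LC))
```

Deep inside the LC the profile saturates at $\eta_{\rm bg}$, far outside at $\eta_{\rm d}$, and it takes the arithmetic mean exactly at $r_{\rm cyl}=r_{\rm LC}$.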
For most of the phenomenological resistivities considered here (say $\eta\le 10^{-2}$), HCC models are more diffusive than CC models, and this global measurement (of luminosity) manifests itself at the local level in a smoother velocity profile along the equatorial current sheet. By contrast, local reconnection events with large drift velocities (normal to the ECS) become visible in Fig.~\ref{fig:ETA_COMPARE} (bottom row of panels) for the cases using the CC method (especially in models \textbf{Bq} and \textbf{Br}). At the same time, the width of the resistive layer, in which we can identify an inflow (drift) velocity into the ECS, increases with larger $\eta_{\rm d}$. \section{Discussion} \label{sec:discussion} \subsection{Action of non-ideal electric fields in FFE} \label{sec:nonidealFF} Our method removes deviations from the ideal FFE condition $\mathbf{E}\cdot\mathbf{B}=0$ by the algebraic reset \begin{align} E^i\rightarrow E^k\left(\delta^i_{\hspace{4pt}k}-B_k\frac{B^i}{\mathbf{B}^2}\right) \label{eq:DBcutback} \end{align} in each sub-step of the time-integrator. This surgical intervention instantaneously achieves (ideal) perpendicularity of electric and magnetic fields. At this point, the results presented in Sect.~\ref{sec:tradesecrets} suggest two different dynamical readjustments. First, a local reconstruction of charge adds an amount of charge to the domain that can be computed by taking the divergence of Eq.~\eqref{eq:DBcutback}. Assuming a pointwise correction as well as $\nabla\cdot\mathbf{B}=0$, one obtains \begin{align} \rho\rightarrow\rho-\nabla\left[\frac{\mathbf{E}\cdot\mathbf{B}}{|\mathbf{B}|^2}\right]\cdot \mathbf{B}. \label{eq:misalignedrecon} \end{align} As there is no discrepancy between the charge density $\rho$ and $\nabla\cdot\mathbf{E}$, the cleaning potential $\Phi$ reduces its role to a true scalar correction of (very small) numerical truncation errors.
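The algebraic reset above is a plain projection that removes the component of $\mathbf{E}$ along $\mathbf{B}$; as a sketch (assuming a flat 3-vector representation of the fields):

```python
import numpy as np

def remove_parallel_E(E, B):
    """Project out the component of E along B, so that E.B = 0 holds
    exactly after the reset; the implied change in the divergence of E
    is what either the local charge reconstruction or the cleaning
    potential must subsequently absorb."""
    return E - (np.dot(E, B) / np.dot(B, B)) * B
```

Applying the projection twice changes nothing: it is idempotent, which is why the reset can be applied safely in every sub-step of the integrator.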
Non-ideal fields still alter the system of conservation laws through the loss of both energy conservation and charge conservation. The charge density is one constituent of the current that acts as a source of Ampère's law. A source or sink of charge, arising from the addition of charge when fixing the violation of the $\mathbf{E}\cdot\mathbf{B}=0$ constraint, is dynamically relevant and may have a global impact, as it can be transported away from the numerical cells where it is initially generated. Second, in a system that transports charge density in a fully conservative way, the correction in Eq.~\eqref{eq:DBcutback} does not induce local alterations of $\rho$. However, it induces a discrepancy $\mathcal{R}$ between $\rho$ and $\nabla\cdot\mathbf{E}$ that corresponds to the same amount identified above: \begin{align} \mathcal{R}\equiv\nabla\cdot\mathbf{E}-\rho\rightarrow\nabla\left[\frac{\mathbf{E}\cdot\mathbf{B}}{|\mathbf{B}|^2}\right]\cdot \mathbf{B}. \label{eq:misalignedcons} \end{align} Such a mismatch between the divergence of the electric field and the charge density will cause a change of the cleaning potential $\Phi$ that is not necessarily a small correction. Like the current density, $\Phi$ acts as a source of Ampère's law. Furthermore, $\Phi$ is a correction to its conservative flux. With the same logic as above, it is, thus, dynamically relevant and with a potential impact beyond the places where charge corrections are induced by the violation of the $\mathbf{E}\cdot\mathbf{B}=0$ constraint. Given the similarity of Eqs.~(\ref{eq:misalignedrecon}) and~(\ref{eq:misalignedcons}), it is not surprising that the LCR method may be regarded, in some aspects, as a low-$\alpha$ limit of the CC method (with only weak action of the scalar cleaning potential $\Phi$).
Strikingly, the action of non-ideal electric fields on a global scale remains relevant even when replacing the algebraic corrections of force-free violations (Eq.~\ref{eq:DBcutback}) by the continuous action of a driving current as introduced in Eq.~(\ref{eq:FFResCurrentPerpendicular}). In this case, the non-ideal term \begin{align} \mathcal{S}_{\rm ni}=\kappa_I\frac{\mathbf{E}\cdot\mathbf{B}}{|\mathbf{B}|^2}\mathbf{B}=\kappa_I\mathbf{E}_\parallel \label{eq:nonidealsource} \end{align} emerges as a source of heating in the energy evolution equation \citep[][Eq.~2.73]{MahlmannPhD}. Its purpose is to continuously drive the electromagnetic field to a force-free state. The results presented in Sect.~\ref{sec:drivingfocus} allow two notable interpretations: i) the diffusion induced by Eq.~(\ref{eq:nonidealsource}) does not significantly change the dissipation of Poynting flux along the ECS (measured by $\Delta L/L_{\rm LC}$) with respect to that obtained by applying Eq.~\ref{eq:DBcutback}; and ii) explicit driving currents on cell-centered meshes are not able to overcome the need for charge conservation in FFE. All the conducted simulations (including those employing driving currents) always reduce to the most diffusive limit whenever the LCR method is applied. \subsection{Time-scales of the hyperbolic/parabolic cleaning} \label{sec:cleaningscales} Equations \eqref{eq:Efield} and \eqref{eq:Phi} can be manipulated to show that both the scalar potential $\Phi$ and the mismatch $\mathcal{R}$ obey the telegraph equation \citep{Komissarov2007MNRAS.382..995} \begin{align} &\partial_{tt}^2 \Phi + \kappa_\Phi \partial_t \Phi - c_\Phi^2\nabla^2 \Phi=0 \label{eq:telegraph1}\\ &\partial_{tt}^2 \mathcal{R} + \kappa_\Phi \partial_t \mathcal{R} - c_\Phi^2\nabla^2 \mathcal{R}=0. \end{align} Let $\tau$ and $l$ be the characteristic time and length scales of change of $\Phi$ (or $\mathcal{R}$), respectively.
From Eq.~\eqref{eq:telegraph1}, approximating $\partial_t\Phi\approx \Phi/\tau$ and $\nabla\Phi\approx \Phi/l$, we obtain \begin{align} &\frac{\Phi}{\tau^2} + \kappa_\Phi \frac{\Phi}{\tau} - c_\Phi^2\frac{\Phi}{l^2}\approx 0 . \label{eq:telegraph2} \end{align} In the limit $\tau\ll 1/\kappa_\Phi$, Eq.~\eqref{eq:telegraph2} yields $\tau\approx l/c_\Phi\equiv\tau_a$, where $\tau_a$ has the meaning of an advection timescale for cleaning errors. In the complementary limit $\tau\gg 1/\kappa_\Phi$, one obtains $\tau\approx l^2 \kappa_\Phi/c_\Phi^2\equiv\tau_d$, where $\tau_d$ can be interpreted as the diffusion timescale for the cleaning of errors. The ratio between both time scales is precisely the parameter $\alpha$ defined in Eq.\,\eqref{eq:alphadimensionless}, i.e. $\alpha = \tau_d/\tau_a$. As has been noted throughout the literature \citep[e.g.][]{Mignone2010,Mahlmann2019,Mahlmann2020c}, mostly in the context of cleaning errors in the $\nabla\cdot\mathbf{B}=0$ constraint, a careful calibration of the parameters controlling the hyperbolic/parabolic cleaning is necessary. The same holds true for the cleaning of errors in the $\rho=\nabla\cdot\mathbf{E}$ constraint, and we conducted a thorough calibration for the simulations shown in this paper. One way of optimizing the cleaning parameter $\alpha$ is to evaluate the dissipation induced by the source terms of the energy evolution equation, as presented in \citet[][Eq.~2.73]{MahlmannPhD}. The relevant channels are dissipation by cleaning ($\propto c_\Phi^2\mathbf{E}\cdot\nabla\Phi$), as well as Ohmic heating fueled by non-ideal electric fields ($\propto\mathbf{E}\cdot\mathbf{J}$). With the results in Fig.~\ref{fig:QDissipation} we find that employing the $\Phi$ cleaning with $\alpha\approx 1$ yields optimal results.
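The two limiting timescales derived above can be summarised in a short sketch (the characteristic length $l$ is a free input here; the function name is our own illustrative choice):

```python
def cleaning_timescales(l, c_phi, kappa_phi):
    """Order-of-magnitude timescales of the telegraph equation for the
    cleaning potential: tau_a in the limit tau << 1/kappa_phi (advection),
    tau_d in the limit tau >> 1/kappa_phi (diffusion)."""
    tau_a = l / c_phi                     # hyperbolic (advection) timescale
    tau_d = l**2 * kappa_phi / c_phi**2   # parabolic (diffusion) timescale
    return tau_a, tau_d
```

Their ratio, $\tau_d/\tau_a = l\kappa_\Phi/c_\Phi$, is dimensionless and plays the role of the calibration parameter $\alpha$ discussed in the text.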
This means that an optimal regime is reached when the timescales of dissipation ($\tau_d$) and advection ($\tau_a$) of the numerical errors induced by the violation of FFE constraints (in the aligned force-free rotator, predominantly at the ECS) are of the same order. First, the dissipation by corrections of non-ideal electric fields is minimised across all models in this regime (light blue stripe). Second, the dissipation by the cleaning potential stabilises at a steady value across resolutions. This plateau roughly coincides with the blue shaded region and can be interpreted as a trade-off between excessively weak cleaning (inducing larger dissipation for small values of $\alpha$ because of the oscillatory behavior of the electric field after applying corrections to enforce the force-free regime) and the extreme case of over-damping, which drives the scalar function $\Phi$ to zero very rapidly. \subsection{Magnetospheric structure} \label{sec:structure} In Fig.~\ref{fig:CURRENT_ZOOM}, we display the structure of currents and charges in two selected regions of the magnetosphere. Specifically, we examine a location close to the polar cap and another one around the Y-point for models using the LCR method (model \textbf{Bd}) and the CC method (model \textbf{Hc}). Both models have the same value of $\alpha=0.7$ and the same numerical resolution. The overall charge distribution is qualitatively similar to that obtained by previous force-free models \citep[e.g.][in axial symmetry or \citealt{Kalapotharakos_2009A&A...496..495, Kalapotharakos_2012ApJ...749....2} in three dimensions]{Parfrey_2012MNRAS.423.1416} and by a number of PIC simulations of aligned rotators \citep[e.g.,][]{Chen_2014ApJ...795L..22,Philippov2014,Cerutti2016,Brambilla_2018ApJ...858...81}. Negative charges fill the regions above the polar caps, while positive ones fill the closed magnetosphere up to the Y-point.
The structure of the Y-point and of the ECS adjacent to it consists of a set of charge layers of alternating sign stacked vertically in model \textbf{Bd} (Fig.~\ref{fig:CURRENT_ZOOM}, bottom left panel). This layered structure is also observed in, e.g. \citet[][cf. their Fig.~16]{Parfrey_2012MNRAS.423.1416}. Along the equatorial region, a positively charged layer with a thickness $\sim 0.027r_{\rm LC}$ emerges, reproducing the positive surface charge density along the ECS found by \cite{Timokhin2006}. In model \textbf{Hc}, the layered charge structure is destroyed by time-variable episodes of reconnection along the ECS. Still, the positively charged central layer is intertwined with regions of negative charge. The (more) variable structure of the ECS in models using the CC method arises because restoring the stationary condition $\nabla\cdot\mathbf{E}=\rho$ takes a finite time. This finite time is set by the (finite) propagation speed ($c_\Phi\sim c$) of the hyperbolic part of the equation controlling $\Phi$, and modulated by the (finite) diffusion timescale $\tau_d$ (see Sect.~\ref{sec:cleaningscales}). In the LCR method, this time is zero, that is to say, the restoration of $\nabla\cdot\mathbf{E}=\rho$ is instantaneous. However, the current does not follow the change in the charge density distribution instantaneously. The direction of the poloidal current (displayed by the colored arrows in Fig.~\ref{fig:CURRENT_ZOOM}) shows a return current flowing along the ECS and continuing over the closed zone separatrix, converging on the Y-point and extending up to the stellar surface. However, the structure of the current is not simple. Among currents flowing towards the stellar surface, we also find anti-parallel currents flowing away from the surface. We point out the interesting observation that above the polar caps a region with super Goldreich-Julian charge density, i.e.
$\rho>\rho_0$ (enclosed by the cyan dashed line in the upper panels of Fig.~\ref{fig:CURRENT_ZOOM}), emerges. Under the assumption of stationarity, in a small layer around the ECS with a thickness $2h\ll r$, \cite{Contopoulos_2014ApJ...781...46} constructed a simple analytic model for the electromagnetic structure in which the toroidal component of the electric field is zero. However, in our models, we find a time-dependent $|E_\phi(r,z,t)|\ne 0$ in places along the ECS (and $x>1$) where the magnetic dominance condition is breached. Indeed, the model using the CC method shows a quasi-periodic pattern of the form $E_\phi(r,z,t)\sim E_\phi(r,h)\sin\left[8\pi(x-t)\right]$. For the model using the LCR method, the spatial frequency is about two times larger. The other two components of the electric field roughly follow the analytic model of \cite{Contopoulos_2014ApJ...781...46}, namely, they obey the relations $E_r(x,h)\approx -xB_z(x,z)$ and $E_z(x,h)\approx xB_r(x,z)$. Because of the presence of displacement currents and the (comparatively much smaller) contributions of the term $c_\Phi^2\nabla\Phi$ in Eq.~\eqref{eq:Efield}, the current density in the ECS is not given by $\mathbf{j}_{\rm steady}=\nabla\times\mathbf{B}$ as in the stationary analytic model of \cite{Contopoulos_2014ApJ...781...46}. This is clearly seen in the bottom panels of Fig.~\ref{fig:CURRENT_ZOOM}, where the ratio $\Delta J/j_0=(|\mathbf{j}|-|\nabla\times\mathbf{B}|)/j_{0}$ differs from zero (indicating that the current density includes contributions other than $\mathbf{j}_{\rm steady}$) mostly in the vicinity of the ECS and also along the current sheets surrounding the closed magnetospheric region. At this point, there is an interesting difference between models using the LCR and CC methods, namely, models with a local charge reconstruction have a current smaller than $\mathbf{j}_{\rm steady}$ ($\Delta J/j_0<0$) along most of the current sheet.
In contrast, model \textbf{Hc} shows smaller average deviations between $\mathbf{j}$ and $\mathbf{j}_{\rm steady}$ along the ECS and in intermediate patches surrounding it beyond the LC. That finding is unexpected given the larger variability along the ECS encountered, in general, for CC models. \subsection{Diffusivity in force-free magnetospheres} \label{sec:diffusivitydiscuss} \begin{figure} \centering \includegraphics[width=0.47\textwidth]{figures/figure8a.jpg}\\ \includegraphics[width=0.47\textwidth]{figures/figure8b.jpg}\\ \includegraphics[width=0.47\textwidth]{figures/figure8c.jpg} \vspace{-6pt} \caption{Charge and current structure in time-dependent aligned rotator models. Upper and mid panels: Charge ($\rho$; colored background, normalised to the absolute value of the Goldreich-Julian charge density, $|\rho_0|$) and current ($\mathbf{j}_\parallel$; arrows, normalised to $\rho_0 c$) structure of selected models (see panel legends). We visualise two different zooms into the magnetosphere, namely the polar cap region (top panels) and the Y-point vicinity (mid panels). The direction of the current flow is indicated by arrows for regions where $j_\parallel/j_0>0.01$. The cyan dashed line denotes the location where $\rho=\rho_0$ (specifically, in between that line and the stellar surface, the charge density is larger than the Goldreich-Julian charge density). Bottom panels: Difference between the current $\mathbf{j}$ in our time-dependent models and the current in the steady state case $\mathbf{j}_{\rm steady}=\nabla\times\mathbf{B}$, normalised to the Goldreich-Julian current density $j_0$.
We display the poloidal magnetic field lines; the vertical black line denotes the position of the LC.} \label{fig:CURRENT_ZOOM} \end{figure} \begin{figure*} \centering \includegraphics[width=1.0\textwidth]{figures/figure9.jpg} \vspace{-18pt} \caption{Averaging of magnetospheric fields and Poynting fluxes during a time of $\Delta t=1.3t_{\rm p}$ for selected models ($\mathbf{Ba}$, $\mathbf{Be}$, and $\mathbf{Bb}$). Small variations beyond the light cylinder, especially those observed in Fig.~\ref{fig:Poynting} (shown again as thick, transparent lines in the background of the right panel), are smoothed out over timescales $\Delta t\gtrsim t_{\rm p}$.} \label{fig:Averaging} \end{figure*} Resetting the charge density $\rho$ to the instantaneous value of $\nabla\cdot\mathbf{E}$ ensures the compatibility of the electric fields and the corresponding charge distribution. However, removing significant fractions of the electric field from the domain to enforce the FFE conditions alters the charge distribution in the domain by an amount that is not necessarily of the order of the truncation error. In our default (CC) scheme \citep{Mahlmann2020b}, the charge is evolved with a separate continuity equation. Changes to the electric field by the algebraic reset of force-free errors introduce a misalignment between the charge density and the divergence of the electric fields. The degree of localization of this misalignment to inherently non-force-free regions, such as current sheets, can be interpreted as a diffusive length scale. We control this localization in our hyperbolic/parabolic cleaning procedure with the parameter $\alpha$, which expresses the ratio of diffusion to advection timescales for the cleaning of errors. The LCR method turns out to be the most diffusive one, taking the dissipation of Poynting flux beyond the LC as a measure of the diffusivity.
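The interplay of advection and damping of cleaning errors described above can be reproduced qualitatively with a toy one-dimensional integration of the telegraph equation for $\Phi$ (Eq.~\ref{eq:telegraph1}); the grid, damping rate, and initial error bump below are arbitrary assumptions chosen for illustration only, not values from our simulations:

```python
import math

def damp_cleaning_error(n=200, steps=400, c_phi=1.0, kappa_phi=5.0, dx=0.01):
    """Explicitly integrate  phi_tt + kappa_phi*phi_t - c_phi^2*phi_xx = 0
    in 1D, starting from a Gaussian 'error' bump with zero initial velocity.
    dt = 0.5*dx/c_phi keeps the scheme CFL-stable.  Returns the L2 norms of
    the initial and final profiles; the damping term removes the error."""
    dt = 0.5 * dx / c_phi
    x0 = (n // 2) * dx
    phi_old = [math.exp(-((i * dx - x0) / (5 * dx)) ** 2) for i in range(n)]
    phi, phi0 = phi_old[:], phi_old[:]
    for _ in range(steps):
        phi_new = [0.0] * n  # fixed (zero) boundaries
        for i in range(1, n - 1):
            lap = (phi[i + 1] - 2.0 * phi[i] + phi[i - 1]) / dx ** 2
            # centered-in-time discretisation of the telegraph equation
            phi_new[i] = (2.0 * phi[i]
                          - (1.0 - 0.5 * kappa_phi * dt) * phi_old[i]
                          + dt ** 2 * c_phi ** 2 * lap) / (1.0 + 0.5 * kappa_phi * dt)
        phi_old, phi = phi, phi_new
    norm = lambda v: math.sqrt(sum(p * p for p in v))
    return norm(phi0), norm(phi)

n0, nT = damp_cleaning_error()
print(nT / n0)  # substantially below one: the error is advected and damped
```

Varying `kappa_phi` (and hence the effective $\alpha$) in this toy changes how quickly the bump is removed, mirroring the trade-off between weak cleaning and over-damping discussed in Sect.~\ref{sec:cleaningscales}.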
In the models resorting to the LCR method, the effect of violations of ideal FFE conditions spreads throughout the domain, with notable consequences for the global energetics. While the Y-point relaxes to a location close to the LC (cf. Fig.~\ref{fig:FFCOND_COMPARE}), field lines reconnect across the ECS and induce a notable dissipation of the outward transported energy ($\sim 30\%$). When augmented by a conservative evolution of the charge density with a suitable calibration of the divergence cleaning parameter $\alpha$, reconnection beyond the LC is greatly reduced, as is the dissipation of energy. At the same time, the Y-point is pushed away from the light cylinder (cf. Fig.~\ref{fig:FFCOND_COMPARE}), and the overall luminosity increases. Though the action of the cleaning potential $\Phi$ can become excessively large for $\alpha\gg 1$, the increase of the luminosity with the corresponding change of the Y-point location roughly follows the trend that is theoretically expected for small and intermediate values of $\alpha$ (cf. Fig.~\ref{fig:YpointLuminosity}). We explore an interpretation of our results in light of the parameterization of the diffusivity beyond the LC in terms of the pair formation multiplicity $\kappa$ suggested by \cite{Contopoulos2020}. These authors show that smaller values of $\kappa$ result in more dissipation at the ECS and, conversely, $\kappa\gg 1$ yields very little or no dissipation at all. When drawing a direct comparison to the results presented throughout this paper, the models endowed with the LCR method would correspond to values of $\kappa\sim 1$. Models employing the CC scheme (our default) would compare to values $\kappa\gg 1$. The physical conditions in a typical pulsar magnetosphere tend to produce many pairs per Goldreich-Julian charge particle in the polar cap \citep[e.g.][and references therein]{Timokhin_2015ApJ...810..144,Contopoulos_2019MNRAS.482L..50}.
Hence, in this regard, our CC models potentially reproduce the usual conditions met in actual rotating pulsars more closely, though the limitations set by the force-free regime naturally persist. The panels of Fig.~\ref{fig:FFCOND_COMPARE} that show the location of violations of the FFE constraints illustrate two notable aspects: i) deviations from $\mathbf{E}\cdot\mathbf{B}=0$ are not a localised phenomenon. Emerging from genuinely non-ideal regions, such as current sheets, non-ideal fields can spread throughout the domain. ii) A change in the treatment of charge conservation not only minimises the dissipation induced by corrections of the $\mathbf{E}\cdot\mathbf{B}=0$ condition, but also changes the electromagnetic structure of the current sheet itself. Cases $\mathbf{Ba}/\mathbf{Bd}$ show a region with $\mathbf{E}^2-\mathbf{B}^2>0$ along the length of the current sheet. Contrasting this, such violations only occur at X-points in the current sheet of models $\mathbf{Bb}/\mathbf{Be}$. The acceleration of particles and the production of radiation demand the existence of an electric field component parallel to the magnetic field, namely that $\mathbf{E}\cdot\mathbf{B}\ne 0$. Hence, the structure of the regions of the magnetosphere where the strongest violations of the $\mathbf{E}\cdot\mathbf{B}=0$ condition occur is likely closely related to the production of pulsar radiation \citep[e.g.][]{Timokhin_2013MNRAS.429...20}. Looking at Fig.~\ref{fig:FFCOND_COMPARE}, these violations, although extended in the magnetosphere as stated above, are maximised in the vicinity of the Y-point, suggesting that the Y-point is the most important site for particle acceleration in the magnetosphere. This result is backed up by PIC simulations of, e.g. \cite{Chen_2014ApJ...795L..22} and gives support to the theoretical ``ring-of-fire'' model of \cite{Contopoulos_Stefanou_2019MNRAS.487..952}.
Violations of the ideal force-free conditions, in combination with the transport of charge conservation errors originating during their correction (Sect.~\ref{sec:nonidealFF}), are the main driver of diffusivity in force-free aligned pulsar magnetospheres. The relative importance of the dissipation triggered by the enforcement of either the $\mathbf{E}\cdot\mathbf{B}=0$ or the $\mathbf{E}^2-\mathbf{B}^2<0$ conditions is similar across these two particular channels, as we can observe in the magnitudes of the electric energy lost within a given time step in Fig.~\ref{fig:QDissipation}. However, the magnitude of the dissipation triggered by each of them may depend on the order and frequency with which they are applied, as well as on the mesh and time-integrator \citep[cf.][]{Spitkovsky2006}. Nevertheless, the dominant contribution to the dissipation along these channels stems from the ECS, where the magnetic dominance condition is chronically breached. We suggest that the magnetic dominance condition is the true origin of the differences between the various charge treatments presented in this paper. In the HCC models, which use local charge reconstruction only in grid zones where $\mathbf{E}^2-\mathbf{B}^2>0$, the dissipation through this channel is minimised (see light green colored squares in the upper panel of Fig.~\ref{fig:QDissipation}), and the Ohmic dissipation is maximised (mid-panel of Fig.~\ref{fig:QDissipation}). Since $\mathbf{E}^2-\mathbf{B}^2>0$ is only reached at the ECS, it stands to reason that the specific restoration of the magnetic dominance constraint may induce global changes in the magnetospheric structure, its luminosity, and, certainly, the amount of dissipation of Poynting flux beyond the LC. The resistivity models used beyond the LC in Sect.~\ref{sec:diffusivityfocus} provide twofold insight. First, they allow us to estimate the numerically induced diffusivity $\eta_0$ across the ECS.
The effect of $\eta_d$ will only become noticeable when the phenomenological resistivity is larger than the numerical diffusivity of the method. From Fig.~\ref{fig:ETA_COMPARE} we can estimate that $\eta_0\lesssim 10^{-1}$. Second, and in line with the results presented in \citet{Mahlmann2020c}, a choice of $\eta_d\gtrsim 10^{-1}$ in the current presented in Eq.~(\ref{eq:FFResCurrentPerpendicular}) allows us to properly model the dynamics of the resistive layer of the ECS. Increasing $\eta_d$ gradually drives relatively large-scale inflows into the ECS (as traced by the drift velocity component perpendicular to it), effectively mimicking physical dynamics in the non-ideal region. Since we established very competitive convergence of our high-order FFE method \citep{Mahlmann2020c}, we suggest that this relatively large value of $\eta_0$ is, indeed, induced by the non-ideal fields emerging in the ECS. In this context it becomes clear why several FFE methods need to employ special treatments of the ECS in the pulsar magnetosphere \citep{McKinney2006,Etienne2017}, namely, to reduce the extent of the diffusive regions by an ad-hoc prescription. As stated in, e.g. \cite{Timokhin_2013MNRAS.429...20}, a necessary criterion imposed by observational constraints on the pulsar magnetosphere is \emph{stationarity} in a statistical sense. In other words, any local fluctuations on timescales smaller than the LC light-crossing time $\tau_{\rm LC}=r_{\rm LC}/c$ (or, more likely, the rotational period of the pulsar $t_{\rm p}$) that average to a stationary state may also account for the stability of the pulsar mean profiles and the sharpness of the peaks in the spectra of gamma-ray pulsars. A very salient feature related to the treatment of the charge conservation equation in FFE is the obvious time-dependence of the magnetosphere, driven by episodes of magnetic reconnection along the ECS.
However, the magnetospheres resulting from the CC method (including a suitable conservative treatment of the charge) are stationary when averaged over timescales comparable to $\tau_{\rm LC}$. In order to support this statement, we show the time-averaged map of the toroidal magnetic field over an interval $\Delta t$ slightly larger than one pulsar rotational period in Fig.~\ref{fig:Averaging} (left panels). The evidence is even stronger when evaluating the Poynting flux as a function of distance for the averaged models: it displays a radial dependence that radically smooths the spatial variability (Fig.~\ref{fig:Averaging}, right panel). We further probed the long-term stability of the representative models $\mathbf{Ba}$, $\mathbf{Be}$, and $\mathbf{Bb}$ by tracking their evolution during $\gtrsim 30$ rotational periods \citep{SupplementaryMediaA}. The models are stable over such timescales in all the characteristic properties discussed in the previous section, especially regarding the Y-point location and the pulsar luminosity. \section{Conclusions} \label{sec:conclusion} In a deep exploration with our recently developed force-free code \citep{Mahlmann2020b,Mahlmann2020c}, we exploit the diffusion timescale induced by the hyperbolic/parabolic cleaning of charge conservation errors (Sect.~\ref{sec:cleaningscales}) to quantify an aspect that has not been systematically assessed so far in many FFE simulations, including our own: namely, the \emph{global} imprint of \emph{local} violations of the force-free constraints. Using the force-free aligned rotator magnetosphere as an example, we demonstrate that balancing the amount of damping and advection of charge conservation errors (encoded in the parameter $\alpha$) can alter the global structure of the simulated magnetosphere.
Specifically, by decreasing the amount of numerical diffusion arising from violations of the FFE constraints, the Y-point moves away from the light cylinder while the outgoing Poynting flux increases by a factor of a few (Sect.~\ref{sec:forcefreealigned}). In summary, our exploration clarifies several \emph{technical} aspects that should become central for the assessment of (global) FFE simulations. First, the localization of force-free violations to small regions in resistive layers, such as current sheets, is key to reducing the diffusivity induced in FFE by non-ideal electric fields or breaches of magnetic dominance (Sect.~\ref{sec:nonidealFF}). In our method, we achieve and control such a localization by combining a conservative evolution of the charge density with a hyperbolic/parabolic cleaning of errors in Gauss' law. We suggest that $\alpha\lesssim 1$ is an optimal parameter for the minimization of the combined channels of numerical diffusion (Sect.~\ref{sec:focuseddominance}). Second, in the inherently non-ideal aligned rotator magnetosphere, the ECS is the main source of numerical diffusion by inducing strong field gradients and violations of the FFE constraints. We identify a strong dependence of the dynamics on the specific treatment of violations of the magnetic dominance condition (Sect.~\ref{sec:diffusivitydiscuss}). Third, the extreme nature of algebraic corrections to the force-free conditions is not the main reason for the luminosity dependence of the global magnetosphere; driving currents yield very similar results (Sect.~\ref{sec:drivingfocus}). Finally, different treatments of force-free violations, especially at Y-points and current sheets, are likely to change the resistive timescales of the evolution and to have a notable impact on the equilibrium magnetospheres.
We extend our analysis to so-called phenomenological resistivity models, where adapted driving currents mimic the development of resistive layers around current sheets (Sect.~\ref{sec:diffusivityfocus}). \textit{FFE is a robust way to model energy flows in highly magnetised plasma}, such as that found in the magnetospheres of many astrophysical objects. This statement can safely be extended to situations in which the global dynamics of field lines drives the transient appearance of inherently non-ideal regions, such as current sheets. Specifically, we have argued this in the context of the accretion of magnetic loops onto rapidly spinning BHs \citep{Mahlmann2020}, and the shearing of fields driven by interacting Alfvén waves \citep{Ripperda2021}. However, in situations where areas of genuine (physical) resistivity drive the global field line dynamics, employing FFE methods has to be carefully benchmarked. This work is, to our knowledge, the first extensive calibration of an FFE method for the specific application of astrophysical magnetospheres. We find the global flows of energy to be extremely sensitive to the treatment of FFE violations. Our results agree with comparable numerical surveys throughout the literature only in the limit of strong numerical diffusion induced by the ECS. Finally, we suggest that scenarios like the aligned rotator should be handled with care when used as a standard test for FFE methods. The stability and magnitude of the Poynting flux beyond the light cylinder can be used as a probe of the diffusivity of the respective method. It could be argued that operating in the well-established, but ultimately limited, regimes of \emph{ideal} fluid approximations for the modeling of global scenarios that are affected or even driven by genuinely resistive effects will require more and more care in the future.
The desire to overcome many orders of scale separation has to go hand-in-hand with a deep understanding of the diffusive properties of the employed numerical methods. Until we can transition into an era of multi-regime astro-plasma codes, we find it reassuring to have the limits of our FFE method laid out transparently. \section*{Acknowledgements} This work has been supported by the Spanish Ministry of Science, Education and Universities (PGC2018-095984-B-I00) and the Valencian Community (PROMETEU/2019/071). We furthermore acknowledge support from the COST Actions PHAROS CA16214 and GWverse CA16104. This manuscript relies heavily on high performance computing resources. They were provided by the \textit{LluisVives} machine at the Servei d’Informàtica of the \textit{Universitat de València} (financed by the FEDER funds for Scientific Infrastructures; IDIFEDER-2018-063) and extensively supplemented by allocations on the \textit{MareNostrum} and \textit{Tirant} supercomputers of the \textit{Spanish Supercomputing Network} (AECT-2021-1-0006, AECT-2021-1-0007). \section*{Data Availability} The data underlying this article will be shared on reasonable request to the corresponding author. \bibliographystyle{mnras}
\section{Introduction} \label{intro} Text simplification is the task of modifying the structure of a text to make it easier to read and comprehend while preserving the content and approximating the original meaning. Linguistically, simple sentences are defined as having only one subject and one verb or predicate. Therefore, a complex sentence can be rewritten into multiple simpler sentences while retaining the same meaning. When simplifying texts, a myriad of rewriting transformations have been explored, ranging from replacing complex words or phrases with simpler synonyms to changing the syntactic structure of the sentence. Moreover, modern automated text simplification approaches are data-driven, attempting to simplify sentences using parallel corpora of aligned complex-simplified sentences. Text simplification allows humans to read texts more efficiently and faster, empowering them to take part in society and become more active in their daily actions and healthcare \cite{Dannells}. Another vital application of text simplification is the reading assistance it provides, especially for people with reading disabilities \cite{Carroll1999,Inui2003}, low-literacy readers \cite{Watanabe2009}, and non-native speakers \cite{Siddhartha2002}. \begin{table*}[h] \centering \resizebox{0.85\textwidth}{!}{% \begin{tabular}{|l|l|} \hline \textbf{Original SQuAD Sentence} & \textbf{Transferred Simple-SQuAD Sentence} \\ \hline \begin{tabular}[c]{@{}l@{}}Clark also claimed that Abdul gave him preferential \\ treatment on the show \underline{due} to their affair.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Clark also claimed that Abdul gave him preferential\\ treatment on the show\underline{. 
This was due} to their affair.\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}In his Prices and Production (1931), Hayek argued \\ that the business cycle resulted from the central bank's\\ inflationary credit expansion and its transmission over \\ time\underline{, leading} to a capital misallocation caused by the \\ artificially low interest rates.\end{tabular} & \begin{tabular}[c]{@{}l@{}}In his Prices and Production (1931), Hayek argued\\ that the business cycle resulted from the central bank's\\ inflationary credit expansion and its transmission over \\ time\underline{. This led} to a capital misallocation caused by the \\ artificially low interest rates.\end{tabular} \\ \hline \end{tabular}} \caption{Example sentences from the \textit{SQuAD} dataset and their transferred counterparts in our proposed \textit{Simple-SQuAD} dataset. Underlined parts denote the splitting points.} \label{tab:simple-introduction} \end{table*} Considerable work has been done on NLP applications that exploit the benefits of text simplification. Long sentences with complex syntax or those laden with long-distance dependencies often pose difficulties for various downstream tasks. Text simplification has been used to improve performance on such tasks, including machine translation \cite{Hasler2017}, summarization \cite{Silveira2012}, semantic role labeling \cite{Vickrey2008,Evans2019}, and information extraction \cite{Evans2019}. This opens up the avenue for exploring simplification for question-answering. To the best of our knowledge, text simplification for comprehension-based question-answering has not been explored yet. In this work, we make the following contributions: \begin{itemize} \item We propose a transformers-based text-simplification pipeline that splits and rephrases a complex sentence into its simpler constituent sentences.
We outline each step in the dataset creation pipeline, including data preprocessing, performing text simplification, thresholding the simplified sentences based on the quality of the transfer, and finding the offsets of answers for each question-answer pair present in the dataset. \item We propose a new dataset, \textit{Simple-SQuAD}, created by converting each context in the \textit{SQuAD} dataset using the proposed text-simplification pipeline.\footnote{Made available at the following Github repository: \url{https://github.com/kartikeypant/text-simplification-qa-www2021}.} \item We perform automated and human evaluation to determine the quality of the text simplification model. \item We perform event-based analysis and sentence-length-based transfer analysis to give deeper insights into the transfer process. \item We then benchmark both \textit{Simple-SQuAD} and \textit{SQuAD} for predictive performance in the \textit{Simple-SQuAD} question-answering task. \end{itemize} \section{Related Works} \subsection{Text Simplification} Text simplification has attracted a great deal of attention due to its potential impact on society. Prior explorations of text simplification cover a myriad of approaches, ranging from hand-crafted syntactic rules \cite{Carroll1999,Vickrey2008}, statistical simplification models \cite{Zhu2010}, and quasi-synchronous grammar \cite{Woodsend2011} to the semantic hierarchy of simplified sentences for recursively splitting and rephrasing complex sentences \cite{Niklaus2019}. Recently, Machine Translation, both statistical and neural, has also been used for the task of text simplification \cite{Narayan2014,Nisioi2017}. Text simplification has also been observed to improve the performance of multiple downstream tasks.
Silveira et al.~\cite{Silveira2012} explored the use of a sentence simplification module in summarization systems, concluding that the simplification module removes expendable information, which helps accommodate relevant data in a summary. Hasler et al.~\cite{Hasler2017} showed that source simplification could improve translation quality in machine translation systems. Evans et al.~\cite{Evans2019} demonstrated the efficacy of integrating a text simplification step for improving the predictive performance of semantic role labeling and information extraction methodologies. Narayan et al.~\cite{Narayan2017} introduced a new text simplification task, named Split-and-Rephrase, on their proposed WebSplit dataset, containing $1,066,115$ parallel instances pairing complex sentences with sequences of simple sentences of similar meaning. The goal of the task is to split a complex input sentence into shorter sentences while preserving the meaning. In this task, the emphasis is on sentence splitting and rephrasing, with no deletion and no lexical or phrasal simplification. They further proposed five models, ranging from vanilla sequence-to-sequence to semantically-motivated models, to benchmark the proposed task. Aharoni and Goldberg~\cite{Aharoni2018} and Botha et al.~\cite{Botha2018} extended the work by introducing new datasets with a more extensive vocabulary and more split examples to improve the efficacy of the prior benchmarks. We use WikiSplit, introduced by Botha et al.~\cite{Botha2018}, for training our sentence simplification module. \subsection{Style Transfer} In recent works, textual style transfer has been shown to produce grammatically fluent, information-preserving texts with fairly accurate target attributes.
For the task in a semi-supervised setting, various methodologies have been exploited, including back-translation~\cite{Prabhumoye2018}, back-translation with attribute conditioning~\cite{Pant2020}, and specialized transfer methodologies such as Delete, Retrieve, Generate~\cite{Li2018}. However, in the presence of a large parallel corpus, sequence-to-sequence models perform competitively. Aharoni and Goldberg~\cite{Aharoni2018} exploited a copy-mechanism based sequence-to-sequence model with attention~\cite{Bahdanau2015} for the text simplification transfer. Transformers~\cite{Vaswani2017} have been shown to perform robust language modeling given enough data, helping in various downstream tasks. Due to the parallelized nature of the architecture, it is possible to train using much larger datasets than with recurrent neural networks. However, it becomes necessary to optimize hyperparameters carefully while ensuring scalability~\cite{Popel2018}. Transformer-based methodologies for the task of sentence simplification have been explored by Maruyama and Yamamoto~\cite{Maruyama2019} for Japanese and by Zhao et al.~\cite{Zhao2018} for a smaller Wikipedia-based dataset. However, to the best of our knowledge, there has been no work on supervised style transfer exploiting transformer-based models for the significantly larger \textit{WikiSplit} dataset. \section{Corpus Creation} \subsection{Preliminaries} \subsubsection{\textbf{SQuAD}} For exploring the effect of text simplification on the question-answering downstream task, we use the Stanford Question Answering Dataset (\textit{SQuAD}), a reading comprehension dataset released by Rajpurkar et al.~\cite{Rajpurkar2016}. It consists of $100,000+$ questions posed by crowdworkers on a set of $536$ Wikipedia articles, where the answer to each question is a segment of text from the corresponding reading passage.
Unlike other datasets, \textit{SQuAD} does not provide a list of answer choices for each question; instead, systems must select the answer from all possible spans in the passage. Though the system must deal with a large number of candidate answers, span-based answers are easier to evaluate than free-form answers. Further, the dataset is diverse in terms of answer types, containing a significant percentage of dates, numeric data, adjective phrases, verb phrases, and clauses as answers. The predictive performance of models on the \textit{SQuAD} dataset is evaluated using \textit{Exact-match} and \textit{F1-score}. \subsubsection{\textbf{WikiSplit}} For the task of text simplification, we use the \textit{WikiSplit} corpus~\cite{Botha2018}, a parallel corpus for the Split-and-Rephrase task that pairs complex sentences with corresponding sequences of simple sentences of similar meaning. It contains a set of one million naturally occurring sentence rewrites mined from English Wikipedia, providing $60$ times more examples and $90$ times more vocabulary than the \textit{WebSplit} corpus introduced by Narayan et al.~\cite{Narayan2017}. Using a larger dataset for the task of text simplification increases the efficacy of the model and the quality of the transferred data~\cite{Botha2018,Popel2018}. \begin{figure*}[h] \centering \includegraphics[width=1\textwidth]{architecture-diagram.png} \caption{Style Transfer Architecture} \label{fig:architecture-diagram} \end{figure*} \subsection{Approach} \subsubsection{\textbf{Style Transfer}} This subsection outlines the process of textual style transfer using transformers for converting the complex contexts in the \textit{SQuAD} dataset into their simpler counterparts. Figure~\ref{fig:architecture-diagram} illustrates the process for a sample context and question-answer pair from the \textit{SQuAD} dataset.
We use spaCy's Sentencizer\footnote{\href{https://spacy.io/api/sentencizer}{https://spacy.io/api/sentencizer}} to tokenize the contexts into their respective constituent sentences. This enables us to perform the transfer at the sentence level, ensuring a higher degree of overall transfer quality. We tokenize each sentence using a SentencePiece tokenizer trained on $3.24$~GB of English Wikipedia. While training the SentencePiece tokenizer, we mark the numerical tokens as custom-defined ones, ensuring that no tokenization happens for such tokens. This step helps preserve the numerical tokens during the transfer process. We transfer each tokenized sentence using a transformer-based machine translation model, implemented with the OpenNMT-py~\cite{OpenNMT_Paper} toolkit. The model consists of a $6$-layer transformer architecture with $8$ self-attention heads and a feed-forward hidden layer of size $1028$. We trained the model for $20000$ training steps with a dropout of $0.1$ and a batch size of $2048$. For optimization, we used the Adam optimizer with a \textit{beta2} value of $0.998$, accumulating the gradient twice. We use an initial learning rate of $2$ with the \textit{noam} decay method and $800$ ($4\%$ of total) warmup steps. \subsubsection{\textbf{Thresholding}} In this section, we outline the thresholding techniques applied to the generated sequences of simple sentences to maintain their quality. We execute the following post-editing steps to reduce noise in the dataset: \begin{enumerate} \item \textbf{Perplexity}: We use \textit{GPT-2}~\cite{Radford2018} as the language model to assign a perplexity to each generated sequence. We retain the transferred sequences of sentences whose perplexity lies between $50$ and $600$ and discard all the remaining sequences. This thresholding ensures that the sequences of sentences are fluent, as measured by the language model.
Table~\ref{tab:thresholding-split} shows that $84951$ of $91757$ sentences fall within the fluency threshold, comprising ${\sim}93\%$ of the total sentences. \item \textbf{Length of the original sentence}: We weed out all sentences whose original length was shorter than five tokens. This heuristic is based on the observation that sentences of fewer than five tokens are highly likely to be simple already, efficiently reducing the number of false positives to a significant extent. Table~\ref{tab:thresholding-split} shows that this thresholding removes $15.16\%$ of the already-thresholded sentences, leaving ${\sim}78.5\%$ of the total sentences. \item \textbf{Redundancy in the sequence of transferred sentences}: We remove all transferred sequences that display redundancy, defined as two or more sentences in the sequence being identical. We introduced this step after a careful analysis of the transferred sentences. Table~\ref{tab:thresholding-split} illustrates that this thresholding removes $1.57\%$ of the already-thresholded sentences, leaving ${\sim}77.31\%$ of the total sentences.
\end{enumerate} \begin{table}[h] \centering \resizebox{0.48\textwidth}{!}{% \begin{tabular}{|l|r|r|} \hline \begin{tabular}[c]{@{}l@{}}\textbf{Thresholding} \\ \textbf{Step}\end{tabular} & \begin{tabular}[c]{@{}r@{}}\textbf{Percentage of} \\ \textbf{Total Sentences}\end{tabular} & \begin{tabular}[c]{@{}r@{}}\textbf{Thresholded} \\ \textbf{Sentences}\end{tabular} \\ \hline \textbf{Step 1}: Perplexity & 92.58\% & 84,951 \\ \hline \begin{tabular}[c]{@{}l@{}}\textbf{Step 2}: Original \\ Length Thresholding\end{tabular} & 78.5\% & 72,075 \\ \hline \begin{tabular}[c]{@{}l@{}}\textbf{Step 3}: Redundancy \\ Thresholding\end{tabular} & 77.31\% & 70,944 \\ \hline \end{tabular}% } \caption{Sentences remaining after each thresholding step} \label{tab:thresholding-split} \end{table} \subsubsection{\textbf{Automated Evaluation}} We perform an automated sentence-level analysis of the style transfer model. We determine the extent of content preservation, lexical simplicity, and reading ease of the generated sentences using the following three metrics: \begin{enumerate} \item \textbf{BLEU} (Papineni et al.~\cite{Papineni2002}): We use the self-BLEU score, taking the original sentence as the reference, to measure the extent of content preservation. Table~\ref{tab:metric-distribution} shows that our model achieves a high mean self-BLEU of $75.66$, implying that the transferred sentences display a high level of content-based similarity with the original sentences. \item \textbf{SARI} (Xu et al.~\cite{Xu2016}): We use the SARI metric to measure the quality of lexical simplicity in the transferred sentences. It analyzes the words added, deleted, and retained by a simplification model. Our version of SARI compares the model's output to the original sentence. The SARI metric correlates highly with human judgments of simplicity gain~\cite{Xu2016}.
In Table~\ref{tab:metric-distribution}, we observe a mean SARI value of $30.08$ with a low standard deviation of $2.21$, indicating that most sentences have a high level of lexical simplicity. \item \textbf{Flesch–Kincaid Grade Level} (Kincaid et al.~\cite{Kincaid1975}): The Flesch–Kincaid Grade Level (FKGL) is a widely-used metric for text readability. It represents the U.S. school grade level whose education is appropriate for understanding the text. As illustrated in Table~\ref{tab:metric-distribution}, the Flesch–Kincaid Grade Level of the sentences decreases by $5.43$ grade levels on average. Thus, the transferred sentences have much higher readability than the original sentences. \end{enumerate} \begin{table}[h] \centering \resizebox{0.46\textwidth}{!}{% \begin{tabular}{|l|r|r|} \hline \textbf{Metric} & \textbf{Mean} & \textbf{Standard Deviation} \\ \hline \textbf{BLEU} & 75.66 & 13.68 \\ \hline \textbf{SARI} & 30.08 & 2.21 \\ \hline \textbf{FKGL(Original)} & 13.29 & 5.74 \\ \hline \textbf{FKGL(Transferred)} & 7.86 & 3.94 \\ \hline \end{tabular}% } \caption{Automated analysis of transferred sentences} \label{tab:metric-distribution} \end{table} \subsubsection{\textbf{Human Evaluation}} Although we use commonly adopted metrics for the evaluation, automated evaluation of generative models of text is still an open research problem~\cite{Hu2017}. We therefore perform a human evaluation to analyze the quality of the transferred sentences accurately. For the human evaluation, we randomly sampled $50$ sentence pairs from the thresholded dataset, each consisting of a transferred sentence and its original variant. The human evaluators were asked to rate each sentence pair on a $1$-$5$ Likert scale on the following metrics: \begin{enumerate} \item \textbf{Fluency}: A high \textit{Fluency} score, $4$ or $5$, denotes that the transferred sentence is well constructed.
A medium score of $3$ denotes that the sentence contains lexical errors, while a low score of $2$ or $1$ denotes major errors or extremely poor construction, respectively. \item \textbf{Relative Simplicity}: A sentence pair is given a high \textit{Relative Simplicity} score, $4$ or $5$, if the transferred sentence is significantly simplified compared to the original sentence. If the simplicity remains the same, the pair is given a medium score of $3$. A low score indicates that the transferred sentence is more complex than the original sentence. \item \textbf{Content Preservation}: A high \textit{Content Preservation} score of $4$ or $5$ indicates that the content of the original sentence was well preserved in the transferred sentence. A medium score of $3$ denotes minor differences between the transferred sentence and the original sentence. A low score of $2$ or $1$ denotes major violations of content preservation. \end{enumerate} To verify inter-rater agreement, we perform a Krippendorff's alpha analysis across all three metrics. We obtain an average Krippendorff's alpha inter-rater agreement of $0.63$ over all metrics, denoting reasonable agreement between the raters. \begin{table}[h] \centering \begin{tabular}{|l|r|} \hline \textbf{Metric} & \textbf{Average Value} \\ \hline \textbf{Fluency} & 4.26 \\ \hline \textbf{Relative Simplicity} & 3.91 \\ \hline \textbf{Content Preservation} & 4.10 \\ \hline \end{tabular} \caption{Human Evaluation Results} \label{tab:human-evaluation} \end{table} Table~\ref{tab:human-evaluation} illustrates the results of the human evaluation. We observe that the transferred sentences were judged to be fluent, significantly simplified, and content-preserving compared to their original counterparts.
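The three thresholding filters described in the Thresholding subsection can be combined into a single predicate. A minimal sketch follows; the function name is hypothetical, and the perplexity value is assumed to be supplied by an external language model (the pipeline uses \textit{GPT-2}), so it appears here as a plain number:

```python
def passes_thresholds(original, simplified, perplexity):
    """Sketch of the three post-editing filters (hypothetical helper).

    `original` is the source sentence, `simplified` the list of simple
    sentences produced by the transfer model, and `perplexity` the
    sequence perplexity assigned by an external language model.
    """
    # Step 1: fluency -- keep sequences whose perplexity lies in [50, 600].
    if not (50 <= perplexity <= 600):
        return False
    # Step 2: discard originals shorter than five tokens
    # (such sentences are highly likely to be simple already).
    if len(original.split()) < 5:
        return False
    # Step 3: discard redundant outputs in which two or more
    # sentences of the sequence are identical.
    if len(set(simplified)) < len(simplified):
        return False
    return True
```

A sequence survives only if it clears all three filters, mirroring the cumulative percentages reported in Table~\ref{tab:thresholding-split}.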
\subsubsection{\textbf{Offset Finding and Dataset Finalization}} In this subsection, we outline the process of finding the offsets for each question-answer pair and reconstructing our proposed simplified version of \textit{SQuAD}, \textit{Simple-SQuAD}, from the thresholded sentences. Firstly, we preprocess the sentences, which involves removing the split sentence indicator and reconstructing context texts from both thresholded and original sentences. If a sentence is not simplified, we use the original sentence itself, ensuring minimal loss of information due to the transfer process. Secondly, for every question-answer pair, we calculate the answer’s character offset by using exact matching in the reconstructed context. If we do not find an exact match, we use case-insensitive pattern matching to calculate the offset. We found the offsets for $88,690$ questions in total. Finally, we create the following two datasets: \textit{Simple-SQuAD} and \textit{Original}. The \textit{Simple-SQuAD} dataset contains the reconstructed context with the calculated character offsets for each answer. The \textit{Original} dataset contains the original context itself but with only those questions whose answers are present in the \textit{Simple-SQuAD} dataset. We create the new \textit{Original} dataset to ensure an equal number of question-answer pairs, leading to a fair comparison. This \textit{Original} dataset represents the \textit{SQuAD} candidate for the benchmarking process. \section{Benchmarking Experiments} This section highlights the models and experiments performed for benchmarking the \textit{Simple-SQuAD}. We compare the results obtained for \textit{Simple-SQuAD} against the \textit{SQuAD} dataset. \subsection{Model Used} For benchmarking \textit{Simple-SQuAD}, we use two different variations of RoBERTa as introduced by Liu et al.~\cite{Liu2019RoBERTaAR}. 
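The two-pass offset-recovery heuristic described above (exact match first, then case-insensitive pattern matching) can be sketched as follows; the function name is hypothetical:

```python
import re

def find_answer_offset(context, answer):
    """Locate the character offset of `answer` in the reconstructed
    context: exact substring match first, then case-insensitive
    pattern matching. Returns -1 if the answer is absent."""
    # First pass: exact substring match.
    offset = context.find(answer)
    if offset != -1:
        return offset
    # Second pass: case-insensitive search on the escaped answer string.
    match = re.search(re.escape(answer), context, flags=re.IGNORECASE)
    return match.start() if match else -1
```

Question-answer pairs for which both passes fail are dropped, which is how the final count of $88,690$ recovered questions arises.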
RoBERTa is a replication study of BERT pretraining, trained on more extensive data with bigger batches, longer sequences, and dynamically changing masking patterns. Consequently, RoBERTa achieves better results than BERT and attains state-of-the-art results on \textit{GLUE}, \textit{RACE}, and \textit{SQuAD}. \subsection{Experimental Settings} We perform a dataset-based ablation study, experimenting with multiple variants of input datasets for each model. Firstly, we finetune the model on the \textit{SQuAD} and the \textit{Simple-SQuAD} datasets separately for $2$ epochs. We then finetune the \textit{Simple-SQuAD}-trained model on the \textit{SQuAD} dataset and the \textit{SQuAD}-trained model on the \textit{Simple-SQuAD} dataset for $2$ epochs each. We benchmark the results for each of these input-dataset combinations to better infer the effect of simplifying sentences in the original dataset. For benchmarking, we use $442$ training articles containing $78,810$ questions and $48$ development articles containing $9880$ questions. Thus, we have a $90$:$10$ and an approximately $89$:$11$ train-test split based on the number of articles and the number of questions, respectively. We use $10\%$ of the training examples as a validation set for both our models. For both $RoBERTa_{Base}$ and $RoBERTa_{Large}$, we use a maximum sequence length of $380$, a stride of $128$, a maximum query length of $64$, and a maximum answer length of $30$. We use a learning rate of $1\times10^{-5}$ with a weight decay of $0.01$. \subsection{Results} This section outlines the results of the ablation study to determine the effect of text simplification on the question answering downstream task in the \textit{SQuAD} dataset. We observe that text simplification improves the predictive performance of both $RoBERTa_{Base}$ and $RoBERTa_{Large}$.
\begin{table}[h] \centering \resizebox{0.47\textwidth}{!}{% \begin{tabular}{|l|l|r|r|} \hline \textbf{Model} & \textbf{Input} & \textbf{Exact} & \textbf{F1} \\ \hline $RoBERTa_{Base}$ & SQuAD & 0.787 & 0.863 \\ \hline $RoBERTa_{Base}$ & Simple-SQuAD & 0.786 & 0.866 \\ \hline $RoBERTa_{Base}$ & Simple-SQuAD $\to$ SQuAD & 0.799 & 0.876 \\ \hline $RoBERTa_{Base}$ & SQuAD $\to$ Simple-SQuAD & \textbf{0.803} & \textbf{0.878} \\ \hline \end{tabular}% } \caption{Benchmarking Results for $RoBERTa_{Base}$.} \label{tab:results-RoBERTa-base} \end{table} Table \ref{tab:results-RoBERTa-base} illustrates the results for the $RoBERTa_{Base}$ model. We observe an increase of $2.03\%$ in \textit{Exact Match} and $1.74\%$ in \textit{F1} when fine-tuning with \textit{SQuAD} followed by \textit{Simple-SQuAD}, in contrast with the model trained only on \textit{SQuAD}. \begin{table}[h] \centering \resizebox{0.47\textwidth}{!}{% \begin{tabular}{|l|l|r|r|} \hline \textbf{Model} & \textbf{Input} & \textbf{Exact} & \textbf{F1} \\ \hline $RoBERTa_{Large}$ & SQuAD & 0.835 & 0.905 \\ \hline $RoBERTa_{Large}$ & Simple-SQuAD & \textbf{0.852} & \textbf{0.917} \\ \hline $RoBERTa_{Large}$ & Simple-SQuAD $\to$ SQuAD & 0.838 & 0.907 \\ \hline $RoBERTa_{Large}$ & SQuAD $\to$ Simple-SQuAD & 0.836 & 0.908 \\ \hline \end{tabular}% } \caption{Benchmarking Results for $RoBERTa_{Large}$.} \label{tab:results-RoBERTa-large} \end{table} Table \ref{tab:results-RoBERTa-large} illustrates the results for the $RoBERTa_{Large}$ model. Similar to $RoBERTa_{Base}$, we observe an increase of $2.04\%$ in \textit{Exact Match} and $1.33\%$ in \textit{F1} when fine-tuning with \textit{Simple-SQuAD}, compared to the model trained on \textit{SQuAD}.
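The percentage gains quoted above are relative improvements over the respective \textit{SQuAD}-only rows of Tables \ref{tab:results-RoBERTa-base} and \ref{tab:results-RoBERTa-large}; a short illustrative computation (not part of the original pipeline) reproduces them:

```python
def relative_gain(baseline, improved):
    """Relative improvement in percent over the baseline score."""
    return round((improved - baseline) / baseline * 100, 2)

# RoBERTa_Base, SQuAD -> Simple-SQuAD versus the SQuAD-only row.
print(relative_gain(0.787, 0.803))  # Exact Match: 2.03
print(relative_gain(0.863, 0.878))  # F1: 1.74
# RoBERTa_Large, Simple-SQuAD versus the SQuAD-only row.
print(relative_gain(0.835, 0.852))  # Exact Match: 2.04
print(relative_gain(0.905, 0.917))  # F1: 1.33
```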
\begin{table*}[h] \centering \resizebox{\textwidth}{!}{% \begin{tabular}{|l|l|l|} \hline {\textbf{Category}} & \textbf{Original} & \textbf{Transferred} \\ \hline {\textbf{\begin{tabular}[c]{@{}l@{}}Inter-Event \\ Splitting\end{tabular}}} & \begin{tabular}[c]{@{}l@{}}Although his administrative abilities \underline{had been noticed},\\ on the eve of the U.S. entry into World War II he had\\ never held an active command above a battalion and\\ was far from \underline{being considered} by many as a potential\\ commander of major operations.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Although his administrative abilities \underline{had been noticed}\\ on the eve of the U.S. entry into World War II he had\\ never held an active command above a battalion.\\ He was far from \underline{being considered} by many as a\\ potential commander of major operations.\end{tabular} \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}Intra-Event \\ Splitting\end{tabular}} & \begin{tabular}[c]{@{}l@{}}Clark also \underline{claimed} that Abdul gave him preferential\\ treatment on the show due to their affair.\end{tabular} & \begin{tabular}[c]{@{}l@{}}Clark also \underline{claimed} that Abdul gave him preferential\\ treatment on the show. 
This was due to their affair.\end{tabular} \\ \hline \end{tabular}% } \caption{Examples of event-based splitting.} \label{tab:event-edit-analysis} \end{table*} \begin{figure*}[ht] \begin{subfigure}{.49\textwidth} \centering \includegraphics[width=1\linewidth]{bleu_plot.png} \caption{BLEU} \label{fig:sub-first} \end{subfigure} \begin{subfigure}{.49\textwidth} \centering \includegraphics[width=1\linewidth]{sari_plot.png} \caption{SARI} \label{fig:sub-second} \end{subfigure} \newline \begin{subfigure}{1\textwidth} \centering \includegraphics[width=.5\linewidth]{fkgl_plot.png} \caption{FKGL} \label{fig:sub-third} \end{subfigure} \caption{Plots illustrating sentence-length based transfer analysis} \label{fig:sentence-length-distribution} \end{figure*} \section{Discussion} \subsection{Edit Analysis} We conduct an event-based analysis of the edits performed by the transfer model to convert the originally complex sentences into simpler forms. We adopt the definition of a linguistic event given in Pustejovsky~\cite{Pustejovsky1991}. We perform the analysis on $50$ parallel sentences in their pre- and post-simplification forms, classifying each edit into one of two classes: \textit{Inter-Event Splitting} and \textit{Intra-Event Splitting}. \textit{Inter-Event Splitting} denotes the type of edit in which the model splits two different events. As illustrated in Table~\ref{tab:event-edit-analysis}, the events of \textit{notice} and \textit{consider} are split into two different sentences, thus simplifying the complex sentence containing the two events. On the other hand, \textit{Intra-Event Splitting} is the type of edit in which all the simplified sentences contain the same event as the original sentence, as is illustrated for the event \textit{claim} in Table~\ref{tab:event-edit-analysis}. In our analysis, we found $32\%$ of the instances to show \textit{Inter-Event Splitting}, showing that our model can capture event boundaries.
On the other hand, $60\%$ of the total instances show successful \textit{Intra-Event Splitting}, illustrating that the model can capture intra-event detailing boundaries. Interestingly, $8\%$ of the total instances displayed unsuccessful attempts at \textit{Intra-Event Splitting}, which can be improved in future work. \subsection{Transfer Analysis} For transfer analysis, we divide all the sentences in \textit{Simple-SQuAD} into four equally sized buckets based on the original sentence length in terms of word-level tokenization ($0$-$20$ tokens, $20$-$27$ tokens, $27$-$36$ tokens, and $36$-$432$ tokens). We then compute the following three metrics at the sentence level: BLEU, SARI, and FKGL. In Figure~\ref{fig:sentence-length-distribution}, we observe that the performance of the model varies with sentence length. \begin{enumerate} \item \textbf{BLEU}: Sentence preservation, measured through the average BLEU score, is directly proportional to sentence length. However, the standard deviation of the BLEU scores first decreases and then increases with sentence length. \item \textbf{SARI}: Lexical simplicity, measured through the average SARI score, first increases and then decreases with sentence length. However, the standard deviation of the SARI scores first decreases and then increases with sentence length. \item \textbf{FKGL}: Text readability, measured via the sentence-level FKGL score, was computed for both transferred and original sentences. We observe that the sentence-level FKGL score of the transferred sentences is directly proportional to sentence length, whereas the sentence-level FKGL score of the original sentences first decreases and then increases with sentence length. Moreover, the mean sentence-level FKGL score of the transferred sentences was always lower than that of the original sentences, regardless of sentence length.
\end{enumerate} \section{Conclusion} In this work, we study the effect of text simplification on the comprehension-based question-answering downstream task using the \textit{SQuAD} dataset. For the \textit{Simple-SQuAD} corpus creation, we use a transformer-based style transfer model to convert complex sentences into sequences of simple sentences while retaining the original meaning. We further use post-editing techniques to reduce noise in the dataset, followed by heuristics to find the required offsets for the answer in each question-answer pair. We demonstrate the efficacy of our model using both automated and human evaluation. We then benchmark \textit{Simple-SQuAD} using two different variants of RoBERTa and perform an ablation study to investigate the effects of text simplification using four different variations of input. We show that text simplification in the question-answering downstream task increases the predictive performance of the models. We further conduct edit-type and sentence-length analyses to give insights into the transfer process. Future work may include improving style transfer performance using a more extensive corpus for text simplification, and exploring the effects of text simplification on other downstream tasks such as text summarization and sentiment analysis.
\section{Introduction} Digitalisation leads to a transformation of internal business processes, but also, very notably, of customer-facing services. While most attention is paid to services in the B2C domain, there is also a rising interest in digitalising knowledge-intensive services in the B2B domain, such as consultancy in general \cite{nissen2018digital} and IT consultancy in particular \cite{werth2016self}. Such transformation implies that a digital service (partially) takes over the role of a human consultant and that companies can use that service to help themselves to the required advice. Obviously, such digital services will be able to give advice only for restricted domains -- often, advice will consist in recommending items from a predefined set of solution components. Thus, digital consulting services can be thought of as recommender systems. As we have laid out in our previous work \cite{Witschel2018RandomWO}, a recommender that suggests solution components to companies differs in several respects from the typical B2C recommenders that help users find e.g. books, movies or music fitting their preferences (see also \cite{felfernig2008constraint}): \begin{itemize} \item \textbf{Requirement-driven:} A consultancy recommender needs to consider business requirements, not personal preferences. \item \textbf{Interdependent items:} The recommended items are not simple, atomic and independent products (such as books, movies etc.), but interdependent and sometimes complex components of a larger solution. \item \textbf{No profiles:} While typical B2C recommenders are used repeatedly by the same person, a digital consultancy service has no chance to build up customer profiles through repeated interactions -- companies will usually access the service only once.
Thus, a profile of the company needs to be acquired within a single session by the recommender -- one can regard it as forming a \emph{query} that describes the situation of the company seeking advice. \end{itemize} Despite some of these differences, one can establish a ``digital consultancy process'' that makes it possible to apply traditional recommender techniques designed for classical preference-based B2C scenarios. Such a process is based on the following considerations (see also Figure \ref{fig:process}): \begin{enumerate} \item Many companies share the same requirements, just like many persons share preferences. The similarity of requirements often depends on the companies' demographics (e.g. size, industry etc.). Thus, a first step in the digital consultancy process may be to capture company demographics and regard them as an initial company profile or \textbf{initial query}. This allows a certain similarity between companies to be established from the beginning. \item Later, the similarity of context and requirements manifests itself in accepting similar suggestions from the recommender. Since solutions will be complex, one may construct a repeated interaction with the recommender in the form of iterations: after entering the company demographics (step 1), the business user receives a first set of recommendations and selects from those some first elements of a solution. These elements are added to the initial company profile to form an \textbf{extended query}, and the recommender is invoked again. This process is repeated, each time with a more verbose query (we will later use the term ``query verbosity'' to refer to the growing amount of information that the query contains).
\end{enumerate} \begin{figure}[!h] \begin{center} \includegraphics[width=0.9\textwidth]{process} \caption{Iterative process for business consultancy recommenders} \label{fig:process} \end{center} \end{figure} Following this iterative process allows us to assess the \emph{similarity} of company contexts by comparing the queries of a company to those of previous users of the service -- with an increasing degree of accuracy as the query is iteratively extended. Since similarity is at the heart of both content-based and collaborative filtering approaches \cite{bobadilla2013recommender}, being able to assess similarities is an important prerequisite for applying these approaches. In addition, we build up company profiles during the process, which makes it possible to apply content-based filtering. The iterative refinement also makes it possible to take the interdependence of solution elements into account by identifying, in each new step, new elements that fit the already selected elements. Although collaborative and content-based filtering become applicable through our iterative process, they may not be the best choice because a) collaborative filtering does not lend itself readily to incorporating company demographics (or other forms of general context) and b) neither approach foresees the use of human-provided knowledge about the business domain, which might be helpful. In fact, previous research has argued for the use of case-based reasoning (CBR) in business recommenders \cite{bridge2005case} because CBR is a proven way of re-using solutions to business problems. Constraint-based recommenders \cite{felfernig2008constraint} are another family of algorithms that have been put forward as a good way of satisfying business requirements.
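The iterative query-extension process of Figure \ref{fig:process} can be sketched as a simple loop. All names below are hypothetical illustrations: \texttt{recommend} stands in for an arbitrary recommender invocation and \texttt{select} for the business user's choice among the suggestions:

```python
def iterative_consultation(demographics, recommend, select, rounds=3):
    """Sketch of the iterative consultancy loop: the query starts as
    the company demographics (initial query) and grows with each
    accepted recommendation (extended query), so query verbosity
    increases from round to round."""
    query = list(demographics)          # initial query
    solution = []
    for _ in range(rounds):
        suggestions = recommend(query)  # invoke the recommender
        chosen = select(suggestions)    # user picks solution elements
        if not chosen:
            break                       # user is satisfied or stuck
        solution.extend(chosen)
        query.extend(chosen)            # extended query for next round
    return solution
```

In each round the recommender sees a more verbose query, which is what lets similarity assessments become increasingly accurate.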
In our previous work \cite{Witschel2018RandomWO}, we used a graph as a simple, flexible and easily extensible means of representing both historic user choices and explicit human knowledge about a business domain, together with a random walk approach to generate recommendations. We found that especially explicit knowledge about associations between solution elements improves recommender performance. On the other hand, taxonomic knowledge, e.g. about relationships between industries, did not help. While our previous work was able to benefit from a graph's flexibility and ease of incorporating new domain knowledge \cite{Witschel2018RandomWO,minkov2017graph}, it is not perfectly suited to accommodate and make use of all possibly relevant attributes of a company's context. For instance, it does not easily allow representing and comparing numeric attributes (such as company size) or simple string attributes containing longer passages of text. Thus, the goal of our extended research was to explore options for combining graph-based random walks with other forms of recommenders, above all CBR-based ones. \subsection{Application scenario} \label{sec:scenario} We performed our recommender experiments in the domain of IT consultancy, more precisely business intelligence (BI) consultancy. Typically, companies using a BI consultancy service -- before being able to tackle the ``technical'' elements of a BI solution -- initially seek advice regarding \begin{itemize} \item Suitable \emph{key performance indicators (KPIs)} that can be used to monitor and measure the company's success in achieving its goals. A typical KPI might be ``sales revenue''. \item Adequate \emph{dimensions} to describe the values of KPIs, e.g. to characterise sales by the product that was sold, the channel through which it was sold and/or the date when it was sold \item Suitable \emph{representations}, e.g. charts or tables that help to analyse KPI values along dimensions (e.g.
a chart showing temporal evolution of sales revenue for different products) \end{itemize} Here, we focus on the first two types of solution elements. Obviously, the question of which dimensions should be chosen depends on the KPIs to be monitored. Choice of KPIs, in turn, is often determined by the (type of) industry of a company -- e.g. companies who produce energy tend to have KPIs that differ substantially from those of, say, architects. KPIs are also usually determined by the business process (e.g. sales) that should be analysed. \subsection{Contribution} \label{sec:contribution} Given the application scenario that we just sketched, our main research goal in this work is to construct a new hybrid recommender that optimally supports the requirements of a B2B consultancy service. We investigate hybridisation because \begin{enumerate} \item despite the existence of some previous work, we do not yet have reliable knowledge about which type of recommender is best suited for the task and \item we do know that different recommenders have different strengths and weaknesses in general that affect their ability to represent and accommodate certain types of knowledge and/or inputs and their ability to deal with lack of such knowledge (``cold-start problems''). \end{enumerate} We will first investigate the performance of algorithms individually and then -- by a more detailed analysis of their strengths and weaknesses on the data -- propose and evaluate some hybridisation strategies that will lead to superior performance by joining the strengths of the best-suited recommenders. \section{Related Work} \label{sec:related} Digitalising consultancy services has been discussed recently for the domain of IT consulting. In \cite{werth2016self}, a ``computer-executed consulting (CEC) service'' is proposed, which replaces, most notably, the two steps of a) interviewing client representatives and b) creating a report that summarises the interview results. 
The digital service is designed by human consultants and consists of a) a series of questionnaires (replacing the interviews) and b) an automated report creation module. Obviously, there is a rough correspondence between these components and the step of a) formulating a query and b) getting recommendations for that query in Figure \ref{fig:process}. The proposed CEC service is general-purpose. Therefore, although it mentions the need for more intelligence in the report creation module and the option of using recommender systems, it does not discuss any details of how to use recommenders.\\ Application of recommender systems has been discussed for more specific consultancy tasks such as optimisation of product assortments \cite{witschel15c}, selection of cloud or web services \cite{zhang2012declarative,kritikos2017towards,yao2015unified} or adaptation of conditions in agriculture \cite{laliwala2006semantic}. In all these cases, the set of possible items that can be recommended is known and well-defined and the task consists in selecting and possibly orchestrating the items. In its simplest interpretation, the term ``orchestration'' means simply that the selected services should be well aligned with each other, e.g. for optimal cross-selling opportunities \cite{witschel15c} or for obtaining a consistent complex cloud service configuration \cite{yao2015unified}. This is also the case for our BI consultancy service, see Section \ref{sec:scenario} and \cite{Witschel2018RandomWO}.\\ In terms of algorithms, business-oriented recommender systems have to deal with \textbf{complexity} in terms of company contexts (input) and solutions (output). Attempts to deal with such complexity can be divided into several categories: \begin{itemize} \item Augmentations of content-based filtering (CBF): approaches in this category model both the input and the output complexities and establish the degree to which both of them match. 
For instance, constraint-based recommenders \cite{felfernig2008constraint,felfernig2015constraint} help to model product features and constraints to be expressed about them and then ensure constraint satisfaction. Other approaches use tree-like structures to model items and user preferences \cite{wu2015fuzzy} or use multiple levels on which queries and items are matched (such as recommending first providers and then actual services in a service recommender, \cite{mohamed2016multi}). In CBF, additional knowledge can be incorporated, e.g. into the function that determines the similarity between an item and the user profile. Often, this is knowledge about user context, item features and/or domain-specific constraints. For instance, \cite{carrer2012social} and \cite{blanco2008flexible} use ontologies to represent and reason about item features and to apply this knowledge in a sophisticated similarity measure that takes into account ``hidden relationships'' \cite{blanco2008flexible}. Middleton et al. \cite{middleton2004ontological} use an ontology to represent user profiles and engage users in correcting the profiles before assessing profile-item similarities. \item Augmentations of collaborative filtering: Case-based recommenders \cite{bridge2005case,bousbahi2015mooc} can be seen as a special form of collaborative filtering since they recommend items used in solutions of companies that are similar to the current company. However, instead of only considering already chosen items, case-based recommenders' similarity measures take into account context variables that describe e.g. company demographics and other relevant aspects of the company's problem and/or initial situation. \item Graph-based recommenders \cite{bogers2010movie,zhang2013random,minkov2017graph} have been put forward because of their ability to accommodate a wide variety of forms of contexts in a flexible way without much effort. 
Random walks \cite{Fouss2007,Huang2002} are a predominant type of algorithm to provide recommendations based on graph structures. Because of their simplicity, graphs also have limitations, e.g. in modeling and matching simple string-valued attributes of input cases or in modeling certain forms of complex solution structures. The possibility of using graph-based recommenders to ``mimic'' traditional recommender approaches, such as collaborative or content-based filtering, has been explored in \cite{lee2013pathrank}. For this, one needs to assign different weights to different types of graph relations. \end{itemize} Obviously, all of these approaches employ and model various types of knowledge. An overview of the different kinds of knowledge that recommenders may use can be found in \cite{felfernig2008constraint,Witschel2018RandomWO}. What distinguishes the business recommenders from most others is the use of \emph{domain knowledge}. Often, this knowledge is obtained from human experts as discussed in \cite{felfernig2008constraint,tarus2017knowledge,Witschel2018RandomWO}.\\ Finally, forming hybrid recommenders \cite{burke2002hybrid,burke2007hybrid} is an active field of research since combinations of different approaches can often help to combine the strengths and/or avoid the weaknesses of the combined approaches. For instance, content-based filtering can be combined with collaborative filtering (CF), e.g. to mitigate the so-called cold-start problems associated with CF, i.e. problems with recommending newly introduced items or serving new users: new items can be recommended immediately by content-based techniques as long as they have a meaningful description that can be matched against user profiles. Besides cold-start problems, hybridisation can be used e.g. to augment similarity in collaborative filtering with the reasons behind user preferences and thus give it a stronger CBR flavour \cite{burke2000case}.
Further possibly complementary strengths and weaknesses of knowledge-based and knowledge-weak recommenders are discussed in \cite{burke2000knowledge}.\\ Overall, there is a rather large number of suggestions for enriching recommenders with contextual knowledge. However, as outlined in Section \ref{sec:contribution}, we see a gap in exploring which of these suggestions is best suited to support scenarios of business consultancy. We furthermore see a need to gain a deeper understanding of the (complementary) strengths and weaknesses of the mentioned approaches that will lead to successful hybridisation strategies. \section{Methodology} As mentioned in Section \ref{sec:contribution}, the main goal of our research is to find a recommender form that optimally supports B2B consultancy services. To study such services, we worked together with a company that provides business intelligence (BI) consultancy, as described in Section \ref{sec:scenario}. \subsection{Awareness of current consultancy practice} \label{sec:method-awareness} As described in our previous work \cite{Witschel2018RandomWO}, our research started by interviewing two consultants to understand how they work and which knowledge they require to make the necessary recommendations to their customers. We also obtained some documents that were used to document the outcomes of meetings and workshops with customers. This was the basis for us to define the structure of consultancy cases: it gave us an insight into the demographic and contextual variables (attributes) that consultants need to know about each company. It also allowed us to grasp roughly the kind of reasoning that they employed to transfer their experiences to new cases. 
The corresponding findings are summarised in Section \ref{sec:interviews}.\\ We then constructed a case base out of the past experience of the consultancy and identified cases that represent the business context of customers; each business process that a company wanted to analyse resulted in a separate case, yielding a case base with 82 entries overall.\\ To support our extended research, we performed a second round of interviews to gain further awareness of how consultants currently assess (implicitly or explicitly) the similarity between customer cases. More precisely, we asked them to what degree they take into account each attribute in the case (e.g. the industry, the core business processes to be analysed, the target group, the goal of the consultation), i.e. we elicited the importance they assign to each attribute while deriving recommendations for their customers. \subsection{Recommender selection and configuration} Next, we used the gathered knowledge to configure a selection of recommender algorithms that we wanted to compare: \begin{itemize} \item Collaborative filtering, using both item-based and user-based k-nearest neighbour algorithms, as provided by the LibRec library \cite{guo2015librec}. \item A random walk algorithm based on a ``case graph'' as described in \cite{Witschel2018RandomWO}. \item A CBR-based recommender that applies similarity-weighted scoring to the elements contained in similar cases. The weights mentioned above were used here to define the contribution of the local similarities within the global similarity function in CBR. \end{itemize} A precise description of recommender configurations can be found in Section \ref{sec:config}. \subsection{Experiments} We then designed an experimental setup \cite{mastersthesis} to compare the initial recommender configurations, as well as our new hybrid recommender strategies.
This setup consists of a \textit{leave-one-case-out} evaluation: for each case $C$, we used the case base as the training data by \textit{omitting} $C$. Out of $C$, we constructed queries $Q_C$ at different verbosity levels: simple queries with no input elements and gradually more verbose queries containing an increasing number of randomly chosen KPIs from the case $C$. The random selection of the input elements is not realistic as this information is usually provided by the customer. However, we did not have any information about the order in which customers added elements to their solution in the past and thus had to resort to this strategy. For the evaluation of recommender outputs, we used the knowledge of originally chosen elements in $C$ as a definition of relevance: for a query $Q_C$, we observed whether a recommender was able to retrieve (and rank highly) the elements in the original case $C$. That is, we computed the mean average precision \cite{voorheesBook} over the \emph{rankings} of recommended items that a recommender produced in response to the queries $Q_C$, treating all elements originally contained in $C$ as relevant and all others as irrelevant. As mentioned above, this experimental setup was used first to evaluate each recommender in isolation. We then analysed the strengths and weaknesses of each recommender (see Section \ref{sec:strengths}) and formed new hybrid recommender strategies (see Section \ref{sec:hybrid}) that we evaluated with the same experimental environment to see whether the hybridisation could bring about an improvement (see Section \ref{sec:experiment2}). \section{Interview findings: case structure and similarity measure} \label{sec:interviews} As mentioned in Section \ref{sec:method-awareness}, we performed two rounds of interviews with consultants to understand their current work and knowledge processing procedures.
Here, we summarise the findings from both rounds of interviews (see also \cite{Witschel2018RandomWO} for more details on the first round): \begin{itemize} \item Customers often come to the meetings with some important KPIs and dimensions (i.e. solution elements) already in mind. However, the degree to which customers have initial ideas can vary greatly. We have reflected this variance by creating queries at different verbosity levels. \item In terms of company demographics, consultants consider the industry of a customer as the main criterion for finding similar past cases. Further relevant variables that we elicited were the target group of the solution (e.g. only management or all employees) and the goal of the BI project (expressed in natural language). Finally, consultants of course use all customer preferences known from initial meetings (see above), i.e. any already known solution elements, to recall past cases with similar elements. The business process was also mentioned by consultants as an important variable. Because of its importance, we chose not to use it simply as a ranking criterion for the retrieval of similar cases, but as a filter: for a given company, we created separate cases for each business process the company wanted to analyse and retrieved only cases with the same business process (analogously, we built separate case base graphs for the graph recommender, see below). \item In the second round of interviews, we asked the consultants to quantify the relative importance of these types of attributes. Although quantifying something as abstract as a variable's contribution to a similarity score is a hard task, we were able to verify in some preliminary experiments that the chosen weights gave quite good results as compared to other potential weight configurations. The resulting weights are shown in Table \ref{tab:weights}.
\item When talking to a customer from a yet unknown industry, consultants tried to remember cases of customers from \emph{similar} industries. Since our attempts to use an industry taxonomy for improved similarity assessment in a graph-based recommender were not particularly successful, we did not consider this kind of reasoning in this work. However, we did use the industry taxonomy to define a local similarity measure for industries within a CBR-based recommender (see Section \ref{sec:cbr-rec}). \end{itemize} \begin{table} \caption{Local similarities and weights for CBR recommender} \begin{center} \begin{tabular}{ p{3cm} | p{3cm} | p{2cm} } \textbf{Case Attribute} & \textbf{Local similarity measure} & \textbf{Weight} \\ \hline Industry & Taxonomy & 0.24 \\ \hline Goal & TF-IDF & 0.06 \\ \hline Target Group & Jaccard & 0.1 \\ \hline KPIs and dimensions & TF-IDF & 0.6 \\ \end{tabular} \end{center} \label{tab:weights} \end{table} \section{Recommender configurations} \label{sec:config} Based on the interview findings, we created suitable configurations of the recommenders to be used in the experiments \cite{mastersthesis}, as described in the following subsections. \subsection{Collaborative filtering} Since the association between solution elements (which we will call items for simplicity) and cases is binary -- an item is either part of the case's solution or not -- we can describe this situation as one of ``implicit feedback recommendation'' \cite{zhang2018social}. This means that the user-item matrix does not contain true ratings, but binary entries -- in our case, we replaced users with customers. However, this does not require changing the way in which collaborative filtering algorithms work on the matrix.
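As an illustration of item-based scoring on such a binary case-item matrix, consider the following minimal NumPy sketch (the toy matrix, the function names and the parameter choices are ours for illustration; this is not the LibRec implementation used in our experiments):

```python
import numpy as np

# Toy binary case-item matrix: rows = customer cases, columns = solution
# elements (1 = element was part of the case's solution). Invented data.
R = np.array([
    [1, 1, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 0, 1, 0],
], dtype=float)

def item_cosine_sim(R):
    """Cosine similarity between the item columns of a binary matrix."""
    norms = np.linalg.norm(R, axis=0)
    S = (R.T @ R) / np.outer(norms, norms)
    np.fill_diagonal(S, 0.0)  # ignore self-similarity
    return S

def recommend(R, user, k=2):
    """Score each unseen item by its summed similarity to the user's items,
    restricted to the item's k most similar neighbours."""
    S = item_cosine_sim(R)
    owned = R[user] > 0
    scores = np.zeros(R.shape[1])
    for i in np.where(~owned)[0]:
        neighbours = np.argsort(S[i])[::-1][:k]
        scores[i] = sum(S[i, j] for j in neighbours if owned[j])
    return scores

scores = recommend(R, user=3)  # case 3 already contains items 0 and 2
```

Already-owned items keep a score of zero, so only new elements are ranked; this mirrors how an item-based k-nearest-neighbour recommender operates on implicit feedback.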
In our experiment, we used the user-based \emph{userknn} and the item-based \emph{itemknn} implementations from the LibRec package \cite{guo2015librec}.\\ Since \emph{userknn} and \emph{itemknn} do not allow us to make use of the additional attributes listed in Table \ref{tab:weights}, ``simple'' queries that do not contain any items (verbosity level 0) could not be evaluated for these algorithms. We also expect the collaborative filtering algorithms to have inferior results for low-verbosity queries. \subsection{Random walk recommender} The configuration for the graph-based recommender was re-used from \cite{Witschel2018RandomWO}, where the case graph incorporated the explicit knowledge acquired from the consultants. In this technique, the case graph was built by creating a node for each case and connecting it to a node representing the industry as well as to nodes representing solution elements. As mentioned in Section \ref{sec:interviews}, we built a separate graph for each business process to be analysed. Target group and goal were not represented in this approach: since there are only three possible target groups, the corresponding nodes would have had a very high degree, thus diluting the PageRank scores. Since goals are string attributes, a node representation was not straightforward for them (although future work might consider extracting salient terms and representing them as nodes). The recommended elements were scored using the PageRank with Priors algorithm \cite{White2003} on that graph. The scores represent the probability of reaching a node in the case graph (e.g. the elements to be recommended) through a random walk that is biased towards the input elements in the query. For verbosity level 0, the random walk-based recommender uses only the industry node as a query -- we also expect suboptimal results here.
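The biased random walk underlying this scoring can be sketched on a tiny case graph as follows (node names and graph are invented; this is an illustrative restart-walk power iteration, not the exact PageRank with Priors implementation of \cite{White2003}):

```python
import numpy as np

# Tiny undirected case graph: case nodes linked to an industry node and to
# solution-element nodes. All names are illustrative only.
nodes = ["case1", "case2", "ind:retail", "kpi:revenue", "kpi:margin", "dim:region"]
edges = [("case1", "ind:retail"), ("case2", "ind:retail"),
         ("case1", "kpi:revenue"), ("case1", "dim:region"),
         ("case2", "kpi:revenue"), ("case2", "kpi:margin")]

idx = {name: i for i, name in enumerate(nodes)}
n = len(nodes)
A = np.zeros((n, n))
for u, v in edges:
    A[idx[u], idx[v]] = A[idx[v], idx[u]] = 1.0
P = A / A.sum(axis=1, keepdims=True)  # row-stochastic transition matrix

def pagerank_with_priors(P, prior, beta=0.85, iters=200):
    """Random walk that restarts at the prior (query) nodes with prob. 1-beta."""
    r = prior.copy()
    for _ in range(iters):
        r = beta * (P.T @ r) + (1 - beta) * prior
    return r

prior = np.zeros(n)
prior[idx["kpi:revenue"]] = 1.0  # query biased towards one known element
r = pagerank_with_priors(P, prior)
```

The resulting vector `r` is a probability distribution over nodes; elements not in the query (here `kpi:margin`) receive positive mass through the case nodes, which is exactly the effect used for recommendation.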
\subsection{CBR-based recommender} \label{sec:cbr-rec} In the case of the CBR recommender, primarily three factors were considered in the configuration: \begin{itemize} \item Similarity measures depending on attribute type: based on the taxonomy-tree approach proposed by \cite{bergmann1998use}, the industry attribute uses the industry taxonomy derived by \cite{Witschel2018RandomWO} that categorizes the customers of the consultancy based on their similarities (e.g. customers that are likely to share KPIs and dimensions). For the attributes goal (free text) and KPI, we could apply the TF-IDF \cite{huang2008similarity} similarity measure by creating a corpus of goals and KPIs respectively from the case base for the computation of inverse document frequencies (IDF). Although KPIs are not free text, applying TF-IDF is appropriate to down-weight frequently repeated terms like ``Number of'' or ``Amount'', since they do not add significant value to the recommendations. Lastly, for the attribute target group, we calculated the Jaccard coefficient \cite{huang2008similarity} as a case may have more than one target audience from the possible values ``employees''/``middle management''/``top management''. \item The number $n$ of the most relevant (top) cases retrieved: The number of the retrieved cases played a significant role in calculating the scores of the recommended elements, which in turn determine the ranking. For an element appearing in any of the retrieved cases $R(Q_C)$ for a query $Q_C$, the score of that element is the sum of the scores of all the retrieved cases in which the element occurs: \begin{equation} score(i) = \sum_{C_j \in R(Q_C):i \in C_j}{sim(C,C_j)} \end{equation} Obviously, the larger the case base, the larger we can choose $n$, i.e. the maximum size of $R(Q_C)$. For a rather small case base like ours, we expect that smaller values of $n$ will work better since larger values will likely imply a ``topic drift'' by including rather dissimilar cases.
The score of the case $sim(C,C_j)$ was generated by the CBR recommender using the global similarity function, which is the weighted average of the local similarity measures \cite{Richter1998}: $sim(C,C_j) = \sum_k w_k \cdot sim_k(C,C_j)$. \item For that weighted average, we used the weights assigned to the local similarity measures $sim_k$ shown in Table \ref{tab:weights}. \end{itemize} The retrieved ranking of matching cases was first filtered by business process so as to return only cases with a matching process before applying the local similarity measures. \section{Experiment 1: Strengths and weaknesses of recommenders} \label{sec:strengths} The goal of our first experiment was to identify a recommendation technique that performs well for different query verbosity values \cite{mastersthesis}. The results of Experiment 1 are shown in Table 2. Note that the verbosity refers to the absolute number of solution elements that the query contained. From the results, we can see clearly that the collaborative filtering algorithms suffer from their inability to accommodate contextual knowledge. Their performance is substantially worse than that of the other recommenders. Regarding those, we observed that the performance of the CBR recommender is better than that of the graph-based recommender; however, there is no improvement in the performance of the CBR recommender above a certain query verbosity. Thus, one can see that retrieving a single case is restrictive for the recommendations since only a limited number of elements are available, which in turn creates a recall problem. The performance of the graph-based recommender, on the other hand, steadily improves as more elements are added to the query. In order to enable the CBR recommender to stretch its (better) performance to any size of the query, we repeated the leave-one-case-out evaluation by retrieving a larger number of most relevant cases.
With the top two retrieved cases, the performance of the CBR recommender improved further, however, again only up to a certain query verbosity. By retrieving more and more cases, it was possible to overcome the recall problem and achieve a steady improvement in the performance of the CBR recommender, similar to the graph-based recommender. Nonetheless, one can observe that retrieving more cases also introduces more noise, consequently decreasing the overall performance of the CBR recommender. Thus, increasing the number of retrieved cases seems to be neither the optimal nor a generic solution to the recall problem of the CBR recommender because of its severe precision-degrading effect. \begin{table} \caption{Experiment 1: MAP values for individual recommendation techniques for different configurations} \begin{center} \begin{tabular}{ p{1.6cm} | p{1.1cm} | p{1.1cm} | p{1.5cm} | p{1.5cm} | p{1.5cm} | p{1.5cm}| p{1.5cm}} \multirow{2}{10em}{\textbf{Query\\verbosity}} & \textbf{user-knn} & \textbf{item-knn} & \textbf{Graph-based} & \multicolumn{4}{c}{\textbf{CBR}}\\ {} & {} & {} & {} & {top 1} & {top 2} & {top 3} & {top 5} \\ \hline 0 & - & - & 0.408 & 0.773 & 0.783 & 0.773 & 0.747 \\ 5 & 0.487 & 0.420 & 0.566 & 0.777 & 0.805 & 0.774 & 0.714 \\ 10 & 0.497 & 0.416 & 0.646 & 0.785 & 0.805 & 0.772 & 0.719 \\ 15 & 0.498 & 0.413 & 0.689 & 0.787 & 0.807 & 0.766 & 0.709 \\ 20 & 0.501 & 0.411 & 0.713 & 0.787 & 0.807 & 0.776 & 0.713 \\ 30 & 0.498 & 0.411 & 0.733 & 0.787 & 0.812 & 0.780 & 0.716 \\ 40 & 0.499 & 0.411 & 0.742 & 0.787 & 0.812 & 0.783 & 0.719 \\ 100 & 0.499 & 0.409 & 0.746 & 0.787 & 0.812 & 0.781 & 0.718 \\ \end{tabular} \end{center} \end{table} Overall, to achieve an optimum performance, the CBR recommender needs to be configured to retrieve a low number of most relevant cases. Yet, if a customer needs a solution with more elements than are available in the (small number of) retrieved cases, the CBR recommender fails to expand its range of recommendations. 
The graph-based recommender, on the other hand, can leverage the whole range of elements available in the case base and hence seems to be a better solution for increasing recall without adding too much noise. We, therefore, see a benefit in combining the graph-based and CBR recommendation techniques using a hybrid strategy. \section{A new hybrid recommender design} \label{sec:hybrid} In Section \ref{sec:related}, we saw that hybrid recommender systems are commonly used to overcome the weaknesses of individual recommendation techniques. Of the seven hybrid recommender strategies described by \cite{burke2007hybrid}, strategies like \textit{switching}, \textit{cascade} or \textit{mixed} are not ideal (and the others are not applicable), as the results show that the CBR recommender is clearly the better performer. Since we would like the graph-based recommender to contribute by adding more relevant elements where CBR is limited, we adopted the weighted combination method because it allows us to ``overrule'' the decisions of the CBR recommender by adjusting the importance (weight) given to either the CBR or the graph-based recommender. A representation of the weighted hybrid strategy adopted by us is shown in Figure \ref{fig:Hybrid}. \begin{figure}[ht] \centering \includegraphics[scale=0.3]{Weighted_Hybrid_Adapted.png} \caption{Weighted Hybrid design, adapted from \cite{burke2007hybrid}} \label{fig:Hybrid} \end{figure} For designing the hybrid strategy, we built upon the CBR configuration that retrieves the two most relevant cases, as this configuration achieved the best performance in the previous experiment. We now explore whether the recall issue of CBR can be resolved by adding some component of the graph-based recommendations. We first normalised the scores of the individual recommendation techniques using min-max normalisation, since the graph-based and CBR recommenders have their own (different) scoring mechanisms, as described in Section \ref{sec:config}.
We then combined the normalised scores of both recommenders and calculated the hybrid weighted score using Equation \ref{eq:hybridscore}. \begin{equation} \label{eq:hybridscore} hybrid\_score(item)=\alpha \cdot |CBR(item)| + (1-\alpha) \cdot |PR(item)| \end{equation} where $|\cdot|$ refers to min-max score normalisation.\\ Because of the CBR recommender's strength in dealing with sparse, i.e. low-verbosity, queries and the relative strength of the graph-based recommender in handling high-verbosity queries, we made the mixture parameter $\alpha$ dependent on the query verbosity, i.e. the number of referred elements $|q|$ in the query $q$: \begin{equation} \label{eq:alpha} \alpha = \left\{ \begin{array}{ll} 1-\frac{(1-\beta)\cdot |q|}{\bar c} & \textrm{if $|q| \leq \bar c$}\\ \beta & \textrm{otherwise}\end{array} \right. \qquad 0 < \beta < 1 \end{equation} Here, $\bar c$ refers to half the average size of all cases in the case base in terms of their number of referred elements (KPIs) and serves as the ``verbosity threshold''. \begin{figure}[ht] \centering \includegraphics[width=0.6\textwidth]{alpha-graph} \caption{Mixture parameter $\alpha$ as a function of query verbosity $|q|$} \label{fig:alpha} \end{figure} Since CBR was the better performer of the two recommendation techniques, we designed Equation \ref{eq:alpha} such that the weight of the CBR recommender ($\alpha$) is never 0. On the other hand, we do not set $\beta$ to 1 as this would give full weight to CBR, which we already know has limitations performing as a ``pure'' recommender. Additionally, from the results of Experiment 1, we concluded that a CBR-heavy hybrid recommender would perform better for queries below the verbosity threshold and vice versa; this is also accounted for in Equation \ref{eq:alpha}. Figure \ref{fig:alpha} shows the dependency between $\alpha$ and query verbosity $|q|$ graphically, for $\beta=0.3$ and $\bar c=14$.
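The weighted combination defined by Equations \ref{eq:hybridscore} and \ref{eq:alpha} can be sketched compactly as follows (function names and the example score dictionaries are ours for illustration):

```python
def minmax(scores):
    """Min-max normalise a dict of item scores to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0  # guard against all-equal scores
    return {i: (s - lo) / span for i, s in scores.items()}

def alpha(q_size, c_bar, beta):
    """CBR weight: decays linearly from 1 down to beta at the verbosity
    threshold c_bar, and stays at beta beyond it (Equation eq:alpha)."""
    if q_size <= c_bar:
        return 1 - (1 - beta) * q_size / c_bar
    return beta

def hybrid_scores(cbr, pr, q_size, c_bar=14, beta=0.3):
    """Weighted hybrid of normalised CBR and PageRank scores (eq:hybridscore)."""
    a = alpha(q_size, c_bar, beta)
    cbr_n, pr_n = minmax(cbr), minmax(pr)
    items = set(cbr) | set(pr)
    return {i: a * cbr_n.get(i, 0.0) + (1 - a) * pr_n.get(i, 0.0) for i in items}

h = hybrid_scores({"a": 2.0, "b": 0.0}, {"a": 0.1, "b": 0.9}, q_size=0)
```

For an empty query ($|q|=0$) the mixture degenerates to the pure (normalised) CBR ranking, which matches the intended behaviour of giving CBR full weight when no solution elements are known yet.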
We can see how $\beta$ acts as the ``minimum CBR contribution'' and that below the verbosity threshold, less and less weight is given to CBR as verbosity increases. \begin{table} \caption{Experiment 2: MAP values for individual recommendation techniques and hybrid strategy} \begin{center} \begin{tabular}{ p{2cm} | p{1.5cm} | p{1.5cm} | p{1.5cm} | p{1.5cm}| p{1.5cm} | p{1.5cm}} \multirow{2}{10em}{\textbf{Query size (verbosity)}} & \textbf{Graph-based} & \textbf{CBR} & \textbf{Hybrid} & \textbf{Hybrid} & \textbf{Hybrid} & \textbf{Hybrid} \\ {} & {} & {top 2 cases} & {$\beta$=0.1} & {$\beta$=0.3} & {$\beta$=0.5} & {$\beta$=0.9} \\ \hline 0 & 0.408 & 0.783 & 0.801 & 0.801 & 0.801 & 0.801 \\ 5 & 0.566 & 0.805 & 0.836 & 0.835 & 0.836 & 0.835 \\ 10 & 0.646 & 0.805 & 0.843 & 0.845 & 0.839 & 0.840 \\ 15 & 0.689 & 0.807 & 0.804 & 0.856 & 0.849 & 0.849 \\ 20 & 0.713 & 0.807 & 0.817 & 0.862 & 0.853 & 0.851 \\ 30 & 0.733 & 0.812 & 0.821 & 0.864 & 0.856 & 0.852 \\ 40 & 0.742 & 0.812 & 0.825 & 0.866 & 0.858 & 0.854 \\ 100 & 0.746 & 0.812 & 0.827 & 0.866 & 0.858 & 0.854 \\ \end{tabular} \end{center} \end{table} With this setup, we carried out the second experiment -- a leave-one-case-out evaluation for different query verbosities, using the hybrid strategy. Our goal now was to find the appropriate combination of weights that could overcome the recall issue of CBR without impacting its performance. After every run, we compared the mean average precision for each recommendation technique with that of the hybrid strategy, as seen in Table 3. \section{Experiment 2: Performance of hybrid recommender} \label{sec:experiment2} To find the right combination of the graph-based and CBR recommender, we experimented with different values of $\beta$, starting with a very low value. Lower values of $\beta$ give a higher weight to the graph-based recommender.
The performance of the hybrid recommender appears to be better than either of the individual recommendation techniques; however, the precision issue of the graph recommender still shows its negative impact for very low values of $\beta$. The verbosity threshold for our experiments was at 14, and it can be observed that the performance suddenly dips at 15 input elements for $\beta$=0.1 (where MAP = 0.843 for a verbosity of 10 and MAP = 0.804 for verbosity 15).\\ On the other hand, although a high $\beta$ resolves the precision problem, the performance is not optimal because, e.g. for $\beta=0.9$, the graph recommender's ability to provide more recall is not sufficiently leveraged. From the results for $\beta$=0.3, one can conclude that the right value of $\beta$ can cure both the recall problem of the CBR recommender and the precision problem of the graph recommender, and thus yields the best performance among the individual recommendation techniques and the various configurations of the hybrid strategy taken together. \section{Conclusions} In this work, we considered the application of recommender systems to business consultancy. We have argued how certain consultancy tasks can be formulated as recommendation problems, especially in the domain of IT consultancy -- e.g. selection and orchestration of web services or selection of Key Performance Indicators and dimensions for Business Intelligence (BI) solutions. Since such problems are in several respects different from the typical, purely preference-based B2C recommenders, we have addressed the question of which (combinations of) recommendation techniques are most suitable for these new B2B scenarios. We worked with data from the BI consultancy domain and performed experiments with a range of known recommender techniques. These techniques offer a varying degree of possibility to feed -- besides the item choices that a company makes -- contextual knowledge, such as company demographics, into the algorithm.
This ranges from none (collaborative filtering) through limited (graph-based random walks) to full coverage (CBR-based recommender). Our initial comparison showed that -- as one might expect -- the CBR-based recommendation benefits from its ability to accommodate more contextual knowledge and provides the best results. However, we also recognised a limitation: CBR-based recommenders have a free parameter, namely $n$, the number of most similar cases to use for the identification of possible solution elements. We found that, for the rather small case base in our experiments, small values of $n$ performed better. Obviously, a larger $n$ implies more noise coming from more dissimilar cases. In our previous work \cite{Witschel2018RandomWO}, we already observed that including cases e.g. from different, but similar industries can be dangerous. On the other hand, limiting $n$ also limits the potential recall of the recommender, i.e. some useful items from less similar cases are excluded. Obviously, a graph-based approach -- although less precise -- offers a natural way to include more items, also from the more dissimilar cases.\\ We, therefore, explored the combination of CBR-based recommendation with a graph-based recommender in order to combine the former's strength in terms of precision with the latter's strength in providing more relevant items in the lower ranks. We followed a weighted hybridisation strategy. The weight was dynamic, giving more and more importance to the graph recommender with the growing size of the query. This makes sense since contextual knowledge becomes less important as we know more and more about already chosen items. Because of the superior performance of the CBR recommender, we also designed the weighting so as to ensure that there is always a certain minimum weight given to it.
It turned out that indeed this minimum weight should not be 0.\\ We found that the weighted hybrid performed -- at all levels of query verbosity -- better than any of the individual recommenders. Although we have only tested the hybrid on one particular data set, we believe we can cautiously conclude from this that a CBR recommender's problems in balancing precision and recall can be overcome by combining it with another recommender that is less limited by case boundaries and can contribute better recall at lower ranks. The graph-based recommender was able to achieve that in our experiments. In future work, we plan to apply our approach also to different domains and data sets. In that context, it will also be important to study more closely the relationship between the size and characteristics of the case base and the optimal choice of the parameter $n$ of the case-based recommender. \bibliographystyle{splncs04}
\section{Algorithm for Commutative Case} \label{sec:commutative_alg} In this section, we consider an easier case in which $O$ is unitary and commutes with the Hamiltonian $H$, and give a two-step quantum-classical hybrid algorithm for Problem~\ref{prob:gs_prop_est}. More specifically, suppose the initial state can be expanded in the eigenbasis as follows: $\ket{\phi_0}=\sum_{k} c_k \ket{\psi_k}$ with $p_k:=|c_k|^2$. We note that $\{\ket{\psi_k}\}$ is also an eigenbasis of $O$ since $O$ and $H$ commute. In Step 1, we run the algorithm of \cite{lt21} to estimate the ground state energy $\lambda_0$ and the overlap $p_0$ between the initial state and the ground state. In Step 2, we construct a similar CDF function for the density $\sum_k O_k p_k \delta(x-\lambda_k)$, where $O_k := \bra{\psi_k} O\ket{\psi_k}$. If we evaluate this CDF at $\lambda_0$, we can obtain an estimate of $O_0$. \subsection{Step 1: estimate the initial overlap}\label{sec:est_overlap} We first run the procedure $\textsc{EstimateGSE}$ (Algorithm~\ref{alg:gs_energy}) to estimate the ground state energy $\lambda_0$ with an additive error $\epsilon$. Let $x^\star$ be the output. We remark that $x^\star$ satisfies $C(x^\star + \tau \epsilon)\geq p_0$ and $C(x^\star - \tau \epsilon) = 0$. However, we can only extract $p_0$ from the ACDF $\widetilde{C}(x)$, which satisfies: \begin{align} C(x-\tau\epsilon) -\eta/8 \leq \widetilde{C}(x)\leq C(x+\tau\epsilon) + \eta/8~~~\forall x\in [-\pi/3, \pi/3]. \end{align} If $[x-\tau \epsilon, x+\tau\epsilon]$ contains a ``jump'' of $C(x)$, i.e., an eigenvalue $\lambda_k$, then the approximation error of $\widetilde{C}(x)$ will be large. Hence, we say a point $x$ is ``good'' for $\lambda_k$ if $[x-\tau \epsilon, x+\tau\epsilon]$ is contained in $[\tau \lambda_k, \tau \lambda_{k+1})$. It is easy to see that $\widetilde{C}(x)$ will be an $\eta/8$-additive approximation of $\sum_{j\leq k}p_j$ if $x$ is good.
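For concreteness, the two cumulative functions involved in Steps 1 and 2 can be written (following the $\tau$-rescaled convention used above; this is a sketch of the construction, not a verbatim restatement of \cite{lt21}) as
\begin{align}
C(x) = \sum_{k} p_k\, \Theta(x - \tau\lambda_k), \qquad C_O(x) = \sum_{k} O_k\, p_k\, \Theta(x - \tau\lambda_k),
\end{align}
where $\Theta$ denotes the Heaviside step function. Evaluating $C_O$ at a point that is good for $\lambda_0$ yields $O_0\, p_0$, from which $O_0$ is obtained by dividing by the estimate of $p_0$ from Step 1.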
Our goal is to find a point $x_{\mathsf{good}}$ that is good for $\lambda_0$; estimating $\widetilde{C}(x_{\mathsf{good}})$ then gives the overlap $p_0$. The following claim gives a way to construct $x_{\mathsf{good}}$ using the spectral gap of $H$. \begin{claim}[Construct $x_{\mathsf{good}}$]\label{clm:good_x} Let $\gamma$ be the spectral gap of the Hamiltonian $H$. For any $\epsilon\in (0, \gamma /4)$, $x^\star + \tau \gamma /2$ is good for $\lambda_0$, where $x^\star$ is the output of $\textsc{EstimateGSE}(\epsilon, \eta)$ (Algorithm~\ref{alg:gs_energy}). \end{claim} \begin{proof} We know that $x^\star$ satisfies: \begin{align} x^\star - \tau \epsilon < \tau\lambda_0 \leq x^\star + \tau \epsilon. \end{align} Then, we have \begin{align} x^\star + \tau \gamma/2 > \tau\lambda_0 - \tau \epsilon + \tau \gamma/2 > \tau \lambda_0+\tau \epsilon. \end{align} We also have \begin{align} x^\star + \tau \gamma/2 < &~ \tau \lambda_0 + \tau \epsilon + \tau \gamma/2 \\ \leq &~ \tau (\lambda_1 - \gamma) + \tau \epsilon + \tau \gamma /2\notag\\ = &~ \tau \lambda_1 + \tau (\epsilon - \gamma/2)\notag\\ < &~ \tau \lambda_1-\tau \epsilon. \end{align} Therefore, $x^\star + \tau \gamma/2$ is good for $\lambda_0$. \end{proof} We note that in \cite{lt21}, the ACDF's approximation error is chosen to be $\eta/8$. We may directly change it to $\epsilon\eta/8$ without significantly increasing the circuit depth, since by Lemma~\ref{lem:approx_Heaviside} the degree of $F$ blows up by at most a logarithmic factor in $1/\epsilon$. \begin{lemma}[Estimating the overlap]\label{lem:est_overlap} For any $\epsilon_0,\nu\in (0, 1)$, the overlap $p_0:=|\langle \phi_0|\psi_0\rangle|^2$ can be estimated with multiplicative error $1\pm O(\epsilon_0)$ with probability $1-\nu$ by running the quantum circuit (Figure~\ref{fig:hadamard_test}) $\widetilde{O}(\epsilon_0^{-2}\eta^{-2})$ times with expected total evolution time $\widetilde{O}(\gamma^{-1}\epsilon_0^{-2}\eta^{-2})$ and maximal evolution time $\widetilde{O}(\gamma^{-1})$. 
\end{lemma} \begin{proof} By Claim~\ref{clm:good_x}, if we set the additive error of the ground state energy estimate to be $O(\gamma)$, then we can construct an $x_{\mathsf{good}}$ that is good for $\lambda_0$. By Theorem~\ref{thm:lt21_main}, this can be done with maximal quantum evolution time $\widetilde{O}(\gamma^{-1})$ and expected total quantum evolution time $\widetilde{O}(\gamma^{-1}\eta^{-2})$. Notice that we need to take $d=O(\delta^{-1}\log(\delta^{-1}\epsilon_0^{-1}\eta^{-1}))$ (Line~\ref{ln:set_d} in Algorithm~\ref{alg:gs_energy}) to make $\widetilde{C}(x_{\mathsf{good}})$ an $O(\epsilon_0 \eta)$-approximation of $p_0$, where $\delta = \tau \gamma$. Next, we estimate $\widetilde{C}(x_{\mathsf{good}})$ with additive error $O(\eta \epsilon_0)$ with probability $1-\nu$. We have an unbiased estimator \begin{align} \overline{G}(x; \mathbf{Z},\mathbf{J})={\cal F}\cdot \mathbf{Z}\, e^{i(\theta_{\mathbf{J}}+\mathbf{J}x)} \end{align} for $\widetilde{C}(x)$, where ${\bf Z}:=X+iY$ is measured from the Hadamard test, and ${\bf J}$ is a random variable for the Hamiltonian evolution time sampled proportionally to the Fourier weights of $F$, i.e., $\Pr[{\bf J}=j]=|\hat{F}_j|/{\cal F}$ for $-d\leq j\leq d$ and ${\cal F}:=\sum_{|j|\leq d}|\hat{F}_j|$. We can show that $\overline{G}(x; \mathbf{Z},\mathbf{J})$ has variance $O(\log^2(d))$, and one sample can be obtained with evolution time $\widetilde{O}(\tau d/\log (d))$ in expectation. If we repeatedly sample $\overline{G}(x; \mathbf{Z},\mathbf{J})$ and take the mean, then by Chebyshev's inequality, the sample complexity is $\widetilde{O}(\epsilon_0^{-2}\eta^{-2} \nu^{-1})$ to achieve additive error $O(\epsilon_0\eta)$ with probability $1-\nu$. Instead, we can use the so-called ``median-of-means'' trick to reduce the sample complexity. More specifically, let $N_g=O(\log(1/\nu))$ and $K=O(\epsilon_0^{-2})$. We first partition $m=N_gK$ samples $(Z_1, J_1),\dots,(Z_{m},J_m)$ into $N_g$ groups of size $K$. 
Then, for any $i\in [N_g]$, the $i$-th group mean is \begin{align} \overline{G}_i := \frac{1}{K}\sum_{j=1}^{K} \overline{G}(x; Z_{(i-1)K + j}, J_{(i-1)K + j}). \end{align} The final estimator is given by the median of these group means, i.e., \begin{align} \overline{G}(x):=\mathrm{median}(\overline{G}_1,\dots,\overline{G}_{N_g}). \end{align} By the Chernoff bound, it is easy to see that $\overline{G}(x)$ has additive error at most $O(\eta\epsilon_0)$ with probability $1-\nu$. This implies a multiplicative error of at most $1\pm O(\epsilon_0)$, since $p_0\geq \eta$. The sample complexity of $\overline{G}(x)$ is $\widetilde{O}(\epsilon_0^{-2}\eta^{-2})$. Hence, the expected total evolution time is $\widetilde{O}(\gamma^{-1}\epsilon_0^{-2}\eta^{-2})$. Since we run the same quantum circuit to estimate $\overline{G}(x)$, the maximal evolution time is still $\widetilde{O}(\gamma^{-1})$. \end{proof} \subsection{Step 2: estimate the $O$-weighted CDF} To estimate the expectation value of $O$, consider the following quantum circuit: \begin{figure}[ht] \centering \begin{displaymath} \Qcircuit @C=1.0em @R=1.2em { & & & &\\ \lstick{\ket{0}} &\gate{\mathrm{H}} &\ctrl{1} & \ctrl{1} & \gate{\mathrm{W}} & \gate{\mathrm{H}} &\meter\\ \lstick{\ket{\phi_0}} & \qw & \gate{e^{-ij\tau H}} & \gate{O} &\qw &\qw &\qw } \end{displaymath} \caption{Quantum circuit parameterized by $j$. $\mathrm{H}$ is the Hadamard gate and $\mathrm{W}$ is either $I$ or a phase gate $S$.} \label{fig:hadamard_test_o} \end{figure} Define the random variables $X_j, Y_j$ as follows: for $W=I$, $X_j:=1$ if the outcome is 0, and $X_j:=-1$ if the outcome is 1. For $W=S$, $Y_j:=-1$ if the outcome is 0, and $Y_j:=1$ if the outcome is 1. 
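As a classical sanity check of these outcome conventions, the commuting case can be simulated directly; the spectrum, observable diagonal, and amplitudes below are hypothetical, and the outcome probabilities follow the standard Hadamard-test analysis:

```python
import numpy as np

# Commuting toy instance: H and O diagonal in the same basis (hypothetical data).
lam = np.array([-0.5, 0.1, 0.7])                         # eigenvalues of H
O_diag = np.array([0.9, -0.3, 0.5])                      # O_k = <psi_k|O|psi_k>
c = np.array([0.8, 0.5, np.sqrt(1 - 0.8**2 - 0.5**2)])   # amplitudes of |phi_0>
tau, j = 1.0, 3

# Amplitude the Hadamard test targets: <phi_0| O e^{-i j tau H} |phi_0>.
amp = np.sum(np.abs(c)**2 * O_diag * np.exp(-1j * j * tau * lam))

# Measurement statistics of the circuit:
# with W = I, Pr[outcome 0] = (1 + Re amp)/2, so E[X_j] = Re amp;
# with W = S, the analogous probabilities give E[Y_j] = Im amp.
E_X = (1 + amp.real) / 2 - (1 - amp.real) / 2
E_Y = (1 + amp.imag) / 2 - (1 - amp.imag) / 2

assert np.isclose(E_X + 1j * E_Y, amp)   # X_j + iY_j is unbiased for amp
```

This is exactly the unbiasedness property formalized in the claim that follows.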
Then, we have the following claim on the expectation of the random variables $X_j,Y_j$: \begin{claim}[A variant of the Hadamard test]\label{clm:estimator_expectation_observable} For any $j\in \Z$, the random variable $X_j + i Y_j$ is an unbiased estimator for $\bra{\phi_0} O e^{-ij\tau H}\ket{\phi_0}$. \end{claim} The proof is deferred to Appendix~\ref{sec:quantum_hadamard}. We can expand $\bra{\phi_0}O e^{-ij\tau H} \ket{\phi_0}$ in the eigenbasis of $H$ (which is also an eigenbasis of $O$): \begin{align} \bra{\phi_0}O e^{-ij\tau H} \ket{\phi_0} = &~ \sum_{k,k'} c_k^* c_{k'}e^{-ij\tau \lambda_{k'}} \bra{\psi_k} O \ket{\psi_{k'}}\notag\\ =&~ \sum_k p_k O_k e^{-ij\tau \lambda_k}, \end{align} where the last step follows from the simultaneous diagonalization of $O$ and $H$, and $O_k:=\bra{\psi_k} O \ket{\psi_k}$. We may assume that $|O_k|\leq 1$ for any $k\in \mathbb{N}$. Inspired by the ground state energy estimation algorithm in \cite{lt21}, we define the $O$-weighted ``density function'' for the observable as follows: \begin{align} p_O(x) := \sum_k p_k O_k \delta(x - \tau \lambda_k). \end{align} Note that $p_O(x)$ can be negative at some points. Suppose the eigenvalues of $\tau H$ are within $[-\pi/3, \pi/3]$. Then, we define the $O$-weighted CDF and ACDF for $p_O(x)$ similarly to \cite{lt21}: \begin{align} C_O(x):=(H * p_O)(x), ~~~\widetilde{C_O}(x) := (F * p_O)(x), \end{align} where $H$ is the $2\pi$-periodic Heaviside function and $F=F_{d,\delta}$ is the Fourier approximation of $H$ constructed by Lemma~\ref{lem:approx_Heaviside}. It is easy to verify that $C_O(x)$ equals $\sum_{k}p_kO_k\mathbf{1}_{x\geq \tau\lambda_k}$ for any $x\in [-\pi/3, \pi/3]$. The following lemma gives an unbiased estimator for the $O$-weighted ACDF. \begin{lemma}[Estimating the $O$-weighted ACDF]\label{lem:est_acdf_observable} For any $x\in [-\pi, \pi]$, there exists an unbiased estimator $\overline{G_O}(x)$ for the $O$-weighted ACDF $\widetilde{C_O}(x)$ with variance $\widetilde{O}(1)$. 
Furthermore, $\overline{G_O}(x)$ runs the quantum circuit (Figure~\ref{fig:hadamard_test_o}) with expected total evolution time $O(\tau d/\log(d))$, where $d$ is the Fourier degree of $F$. \end{lemma} \begin{proof} $\widetilde{C_O}(x)$ can be expanded in the following way: \begin{align} \widetilde{C_O}(x) = &~ (F * p_O)(x)\\ = &~ \int_{-\pi}^\pi F(x-y) p_O(y) \d y\notag\\ = &~ \sum_{|j|\leq d} \int_{-\pi}^\pi \hat{F}_j e^{ij(x-y)} p_O(y)\d y\notag\\ = &~ \sum_{|j|\leq d} \hat{F}_j e^{ijx} \int_{-\pi}^\pi p_O(y) e^{-ijy}\d y\notag\\ = &~ \sum_{|j|\leq d} \hat{F}_j e^{ijx} \sum_k p_k O_ke^{-ij\tau \lambda_k}\notag\\ = &~ \sum_{|j|\leq d} \hat{F}_j e^{ijx} \cdot \bra{\phi_0} O e^{-ij\tau H} \ket{\phi_0}, \end{align} where the third step follows from the Fourier expansion of $F(x-y)$, the fifth step follows from the property of the Dirac delta function, and the last step follows from the definition of $p_k$ and the eigenvalues of the matrix exponential. Define an estimator $G(x; {\bf J}, {\bf Z})$ as follows: \begin{align} G(x; {\bf J}, {\bf Z}):={\cal F}\cdot {\bf Z} e^{i(\theta_{\bf J} + {\bf J}x)}, \end{align} where $\theta_j$ is defined by $\hat{F}_j = |\hat{F}_j|e^{i\theta_j}$, ${\bf Z}=X+iY$ is measured from the quantum circuit (Figure~\ref{fig:hadamard_test_o}) with parameter $j={\bf J}$, and ${\cal F}=\sum_{|j|\leq d}|\hat{F}_j|$. Then, we show that $G(x; {\bf J}, {\bf Z})$ is unbiased: \begin{align} \E[G(x; {\bf J}, {\bf Z})] = &~ \sum_{|j|\leq d} \E\left[(X_j + iY_j)e^{i(\theta_j + jx)}|\hat{F}_j|\right]\\ = &~ \sum_{|j|\leq d} \hat{F}_j e^{ijx} \cdot \E\left[X_j + iY_j\right]\notag\\ = &~ \sum_{|j|\leq d} \hat{F}_j e^{ijx} \cdot \bra{\phi_0} O e^{-ij\tau H} \ket{\phi_0}\notag\\ = &~ \widetilde{C_O}(x), \end{align} where the third step follows from Claim~\ref{clm:estimator_expectation_observable}. 
Moreover, the variance of $G$ can be upper-bounded by: \begin{align} \Var[G(x; {\bf J}, {\bf Z})]= &~ \E[|G(x; {\bf J}, {\bf Z})|^2] - |\E[G(x; {\bf J}, {\bf Z})]|^2\\ \leq &~ \E[|G(x; {\bf J}, {\bf Z})|^2]\notag\\ \leq &~ 2{\cal F}^2, \end{align} where the last step follows from $|e^{i(\theta_{\bf J}+{\bf J}x)}|=1$ and $X_j, Y_j\in \{\pm 1\}$. By Lemma~\ref{lem:approx_Heaviside}, we know that $|\hat{F}_j|=O(1/|j|)$. Hence, we have ${\cal F} = \sum_{|j|\leq d}O(1/|j|) = O(\log d)$. Thus, $\Var[G(x; {\bf J}, {\bf Z})]=O(\log^2(d))$. The expected total evolution time is \begin{align} {\cal T}_{\mathsf{tot}} := \tau\E[|{\bf J}|]= \tau \sum_{|j|\leq d}|j|\cdot \frac{|\hat{F}_j| }{{\cal F}}= O(\tau d / \log(d)). \end{align} The lemma is then proved. 
\end{proof} The following lemma shows that the $O$-weighted CDF $C_O(x)$ can be approximated by the $O$-weighted ACDF $\widetilde{C_O}(x)$: \begin{lemma}[Approximating the $O$-weighted CDF]\label{lem:approx_o_cdf} For any $\epsilon>0$, $0<\delta < \pi/6$, let $F(x) := F_{d,\delta}(x)$ be constructed by Lemma~\ref{lem:approx_Heaviside} with approximation error $\eta \epsilon/8$. Then, for any $x\in [-\pi/3, \pi/3]$, it holds that: \begin{align} C_O(x-\delta) - \eta\epsilon/8 \leq \widetilde{C_O}(x) \leq C_O(x + \delta) + \eta \epsilon/8. \end{align} \end{lemma} The proof is very similar to that of Lemma~\ref{lem:approx_acdf}, so we omit it here. We can take $\delta:= \tau \gamma/5$ and let $x_{\mathsf{good}}:=x^\star + \tau \gamma/2$. Then, by Claim~\ref{clm:good_x}, we know that $x_{\mathsf{good}}$ is good for $\lambda_0$, i.e., $[x_{\mathsf{good}}-\delta, x_{\mathsf{good}} + \delta]\subset (\tau \lambda_0, \tau \lambda_1)$. Hence, $\widetilde{C_O}(x_{\mathsf{good}})$ satisfies \begin{align} \left|\widetilde{C_O}(x_{\mathsf{good}})-p_0 O_0\right|\leq \eta \epsilon /8. \end{align} The following lemma, which is very similar to Lemma~\ref{lem:est_overlap}, shows how to estimate $\widetilde{C_O}(x_{\mathsf{good}})$. \begin{lemma}[Estimating $p_0O_0$]\label{lem:est_p0O0} For any $\epsilon_1, \nu\in (0, 1)$, $p_0O_0$ can be estimated with multiplicative error $1\pm O(\epsilon_1)$ with probability $1-\nu$ by running the quantum circuit (Figure~\ref{fig:hadamard_test_o}) $\widetilde{O}(\epsilon_1^{-2}\eta^{-2})$ times with expected total evolution time $\widetilde{O}(\gamma^{-1} \epsilon_1^{-2}\eta^{-2})$ and maximal evolution time $\widetilde{O}(\gamma^{-1})$. \end{lemma} \subsection{Putting it all together} In this section, we put the components together and prove the following main theorem, which gives an algorithm for ground state property estimation. 
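The final estimate is the ratio $\overline{p_0O_0}/\overline{p_0}$; a toy numerical check of how additive errors of size $O(\eta\epsilon)$ in the numerator and denominator propagate into an $O(\epsilon)$ error in the ratio (all values below are hypothetical):

```python
# Hypothetical ground truth and worst-case perturbed estimates
# (illustration of the error-propagation argument only).
eta = 0.1
p0, O0 = 0.12, 0.4                 # satisfies p0 >= eta
eps0 = eps1 = 0.01

p0_hat = p0 + eta * eps0           # |p0_hat - p0| <= O(eta * eps0)
p0O0_hat = p0 * O0 - eta * eps1    # |p0O0_hat - p0*O0| <= O(eta * eps1)

O0_hat = p0O0_hat / p0_hat         # final estimate of <psi_0|O|psi_0>
# Additive error is O(eps0 + eps1), matching the theorem's derivation.
assert abs(O0_hat - O0) <= 2 * (eps0 + eps1)
```

The division is stable precisely because $p_0 \geq \eta$ keeps the denominator bounded away from zero relative to its additive error.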
\begin{theorem}[Ground state property estimation with commutative observable, restated]\label{thm:app_sim_com} Suppose $p_0\geq \eta$ for some known $\eta$, and let $\gamma>0$ be the spectral gap of the Hamiltonian. Then, for any $\epsilon, \nu\in (0, 1)$, the ground state property $\bra{\psi_0}O\ket{\psi_0}$ can be estimated within additive error at most $\epsilon$ with probability $1-\nu$, such that: \begin{enumerate} \item the number of times running the quantum circuits (Figures~\ref{fig:hadamard_test} and \ref{fig:hadamard_test_o}) is $\widetilde{O}(\epsilon^{-2}\eta^{-2})$, \item the expected total evolution time is $\widetilde{O}(\gamma^{-1}\epsilon^{-2}\eta^{-2})$, \item the maximal evolution time is $\widetilde{O}(\gamma^{-1})$. \end{enumerate} \end{theorem} \begin{proof} By Lemma~\ref{lem:est_overlap}, we obtain an estimate $\overline{p_0}$ for $p_0$ with the guarantee that \begin{align}\label{eq:approx_p_0} \big|\overline{p_0}-p_0\big|\leq O(\eta \epsilon_0), \end{align} where $\epsilon_0$ will be chosen shortly. By Lemma~\ref{lem:est_p0O0}, we obtain an estimate $\overline{p_0O_0}$ for $p_0O_0$ with the guarantee that \begin{align}\label{eq:approx_p0O0} \big|\overline{p_0O_0}-p_0O_0\big|\leq O(\eta\epsilon_1), \end{align} where $\epsilon_1$ will be chosen shortly. 
Then, we have \begin{align} \left|\frac{\overline{p_0O_0}}{\overline{p_0}}-O_0\right|= &~ \left|\frac{\overline{p_0O_0}}{\overline{p_0}} - \frac{p_0O_0}{\overline{p_0}}+\frac{p_0O_0}{\overline{p_0}}-\frac{p_0O_0}{p_0}\right|\\ \leq &~ \frac{|\overline{p_0O_0}-p_0O_0|}{\overline{p_0}} + |p_0O_0|\left|\frac{1}{\overline{p_0}}-\frac{1}{p_0}\right|\notag\\ \leq &~ \frac{O(\eta\epsilon_1)}{p_0-O(\eta\epsilon_0)} + |p_0O_0|\left|\frac{1}{p_0-O(\eta\epsilon_0)}-\frac{1}{p_0}\right|\notag\\ \leq &~ \frac{O(\eta\epsilon_1)}{\eta-O(\eta\epsilon_0)}+|p_0O_0|\left|\frac{1}{p_0-p_0O(\epsilon_0)}-\frac{1}{p_0}\right|\notag\\ \leq &~ O(\epsilon_1)(1+O(\epsilon_0)) + |O_0|\cdot O(\epsilon_0)\notag\\ \leq &~ O(\epsilon_0+\epsilon_1), \end{align} where the second step follows from the triangle inequality, the third step follows from Eqs.~\eqref{eq:approx_p_0} and \eqref{eq:approx_p0O0}, the fourth step follows from $p_0\geq \eta$, and the fifth step follows from $\frac{1}{1-x}\leq 1+O(x)$ for $x\in (0,1)$. Hence, if we take $\epsilon_0=\epsilon_1=O(\epsilon)$, we achieve additive error at most $\epsilon$. For the success probability, we can make Eq.~\eqref{eq:approx_p_0} hold with probability $1-\nu/2$ in Lemma~\ref{lem:est_overlap} and Eq.~\eqref{eq:approx_p0O0} hold with probability $1-\nu/2$ in Lemma~\ref{lem:est_p0O0}. Then, by the union bound, we get a good estimate with probability at least $1-\nu$. The computation costs follow directly from Lemma~\ref{lem:est_overlap} and Lemma~\ref{lem:est_p0O0}. This completes the proof of the theorem. 
\end{proof} \begin{algorithm}[ht] \caption{Ground State Property Estimation (Commutative Case)} \label{alg:gs_prop_com} \begin{algorithmic}[1] \algrenewcommand\algorithmicprocedure{\textbf{procedure}} \Procedure{EstimateGSProp}{$\epsilon,\tau, \eta, \gamma, \nu$} \State $\delta \gets O(\tau \gamma)$, $d\gets O(\delta^{-1}\log(\delta^{-1}\epsilon^{-1}\eta^{-1}))$ \For{$j\gets -d,\dots,d$} \State Compute $\hat{F}_j:=\hat{F}_{d,\delta,j}$ and $\theta_j$ \EndFor \State \Comment{Estimate the ground state energy} \State $x^\star\gets \textsc{EstimateGSE}(\gamma/8, \tau, \eta, \nu/10)$ \State $x_{\mathsf{good}}\gets x^\star + \tau \gamma / 2$ \State \Comment{Generate samples from the Hadamard test circuits} \State $N_g\gets O(\log(1/\nu))$, $K\gets O(\epsilon^{-2})$ \For{$k\gets 1,\dots,N_g K$} \State Sample $(Z_k, J_k)$ from the quantum circuit (Figure~\ref{fig:hadamard_test}) \State Sample $(Z_k', J_k')$ from the quantum circuit (Figure~\ref{fig:hadamard_test_o}) \EndFor \State \Comment{Estimate $p_0$} \For{$i\gets 1,\dots, N_g$} \State $\overline{G}_i\gets \frac{1}{K}\sum_{j=1}^K\overline{G}(x_{\mathsf{good}}; Z_{(i-1)K+j}, J_{(i-1)K+j})$ \EndFor \State $\overline{p_0}\gets \mathrm{median}(\overline{G}_1,\dots,\overline{G}_{N_g})$ \State \Comment{Estimate $p_0O_0$} \For{$i\gets 1,\dots, N_g$} \State $\overline{G}_i'\gets \frac{1}{K}\sum_{j=1}^K\overline{G}(x_{\mathsf{good}}; Z_{(i-1)K+j}', J_{(i-1)K+j}')$ \EndFor \State $\overline{p_0O_0}\gets \mathrm{median}(\overline{G}_1',\dots,\overline{G}_{N_g}')$ \State \Return $\overline{p_0O_0}/\overline{p_0}$ \EndProcedure \end{algorithmic} \end{algorithm} \section{Algorithm for General Unitary Observables} \label{sec:unitary_alg} In this section, we will prove the following theorem for unitary observables in the general case: \begin{theorem}[Ground state property estimation with general unitary observable]\label{thm:app_sim} Suppose $p_0\geq \eta$ for some known $\eta$ and the spectral gap of the Hamiltonian $H$ is at least 
$\gamma$. For any $\epsilon,\nu\in (0, 1)$, there exists an algorithm for estimating the ground state property $\bra{\psi_0}O\ket{\psi_0}$ within additive error at most $\epsilon$ with probability at least $1-\nu$, such that: \begin{enumerate} \item the expected total evolution time is $\widetilde{O}(\gamma^{-1}\epsilon^{-2}\eta^{-2})$, \item the maximal evolution time is $\widetilde{O}(\gamma^{-1})$. \end{enumerate} \end{theorem} In the following parts, we first introduce the 2-d $O$-weighted density function and CDF, which extend the constructions for commuting observables to the general case. Then, we show how to combine them with the overlap estimation in Section~\ref{sec:est_overlap} to prove Theorem~\ref{thm:app_sim}. \subsection{2-d $O$-weighted density function and CDF} Let $\ket{\phi_0} = \sum_k c_k \ket{\psi_k}$, where $|c_k|^2 = p_k$. In general, $O$ and $H$ may not commute. Hence, we consider a more symmetric form of the expectation: $\bra{\phi_0}e^{-ij\tau H} Oe^{-ij'\tau H}\ket{\phi_0}$, which can be expanded in the eigenbasis of $H$ as follows: \begin{align} \bra{\phi_0}e^{-ij\tau H} O e^{-ij'\tau H}\ket{\phi_0} = &~ \sum_{k,k'} c_k^* c_{k'} e^{-ij\tau \lambda_k} e^{-ij'\tau \lambda_{k'}}\bra{\psi_k} O \ket{\psi_{k'}}\notag\\ =&~ \sum_{k,k'} c_k^* c_{k'} e^{-i(j\lambda_k + j'\lambda_{k'})\tau} O_{k,k'}, \end{align} where $O_{k,k'}:=\bra{\psi_k}O\ket{\psi_{k'}}$. Similar to the commutative case, we define a 2-d $O$-weighted density function: \begin{align} p_{O,2}(x,y) := \sum_{k,k'}c_k^*c_{k'} O_{k,k'}\delta(x-\tau\lambda_k) \delta(y-\tau\lambda_{k'}). \end{align} Then, we define the corresponding 2-d $O$-weighted CDF function as follows: \begin{align} C_{O,2}(x,y):=(H_2 * p_{O,2})(x,y), \end{align} where $H_2(x,y):=H(x)\cdot H(y)$ is the 2-d $2\pi$-periodic Heaviside function. 
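A small numerical sketch can verify that $C_{O,2}$ behaves as claimed; the eigenvalues, amplitudes, and matrix elements $O_{k,k'}$ below are made up for illustration:

```python
import numpy as np

# Toy non-commuting data: arbitrary O matrix elements in H's eigenbasis.
lam = np.array([-0.6, -0.1, 0.5])                  # eigenvalues of H
c = np.array([0.7, 0.5, np.sqrt(1 - 0.74)])        # amplitudes <psi_k|phi_0>
O = np.array([[0.2, 0.1, 0.0],
              [0.1, -0.4, 0.3],
              [0.0, 0.3, 0.6]])                    # O_{k,k'}, Hermitian toy
tau = 1.0

def C_O2(x, y):
    """2-d O-weighted CDF: sum of c_k^* c_k' O_{k,k'} over the indices
    with tau*lambda_k <= x and tau*lambda_{k'} <= y."""
    mx = tau * lam <= x
    my = tau * lam <= y
    return np.conj(c[mx]) @ O[np.ix_(mx, my)] @ c[my]

# On [tau*lam_0, tau*lam_1)^2 the CDF is constant and equals p_0 O_{0,0}.
x = tau * (lam[0] + lam[1]) / 2
assert np.isclose(C_O2(x, x), abs(c[0])**2 * O[0, 0])

# At the far corner it accumulates the full expectation <phi_0|O|phi_0>.
assert np.isclose(C_O2(1.0, 1.0), np.conj(c) @ O @ c)
```

This constancy on the square below the first excited level is exactly what a good point for $(\lambda_0,\lambda_0)$ exploits.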
We first justify that $C_{O,2}$ is indeed a CDF of $p_{O,2}$ in $[-\pi/3, \pi/3]$: \begin{align} C_{O,2}(x,y) = &~ \int_{-\pi}^\pi \int_{-\pi}^\pi H_2(x - u, y-v) p_{O,2}(u,v) \d u \d v\\ = &~ \sum_{k,k'} c_k^*c_{k'} O_{k,k'}\cdot \int_{-\pi}^\pi \int_{-\pi}^\pi H_2(x- u,y-v) \delta(u-\tau\lambda_k) \delta(v-\tau\lambda_{k'})\d u\d v\notag\\ = &~ \sum_{k,k'} c_k^*c_{k'} O_{k,k'}\cdot H(x - \tau \lambda_k)H(y - \tau \lambda_{k'})\notag\\ = &~ \sum_{k,k'} c_k^*c_{k'} O_{k,k'} \cdot \mathbf{1}_{x \geq \tau \lambda_k,y\geq \tau\lambda_{k'}}\notag\\ = &~ \sum_{\substack{k: \tau\lambda_k \leq x,\\k': \tau \lambda_{k'}\leq y}} c_k^*c_{k'} O_{k,k'}. \end{align} Hence, the definition of $C_{O,2}$ is reasonable. Then, we show that $C_{O,2}$ can be approximated similarly to the 1-d case. Let $F_2(x,y)$ be the 2-d approximated Heaviside function, i.e., \begin{align} F_2(x,y):=F(x) \cdot F(y). \end{align} The 2-d $O$-weighted approximated CDF (ACDF) is defined to be \begin{align} \widetilde{C_{O,2}}(x,y) := (F_2 * p_{O,2})(x,y). \end{align} The following lemma shows that $\widetilde{C_{O,2}}(x,y)$ is close to $C_{O,2}(x',y')$ for some $(x',y')$ close to $(x,y)$. \begin{lemma}[Approximation ratio of the 2-d $O$-weighted ACDF]\label{lem:approx_acdf_2d} For any $\epsilon>0$, $0<\delta < \pi/6$, let $F_2(x,y) := F_{d,\delta}(x) \cdot F_{d,\delta}(y)$, where $F_{d,\delta}$ is constructed by Lemma~\ref{lem:approx_Heaviside} with approximation error $\epsilon$. Then, for any $x,y\in [-\pi/3, \pi/3]$, the 2-d $O$-weighted ACDF $\widetilde{C_{O,2}}(x,y) = (F_2*p_{O,2})(x,y)$ satisfies: \begin{align} C_{O,2}(x-\delta, y-\delta) -2\epsilon \leq \widetilde{C_{O,2}}(x,y) \leq C_{O,2}(x + \delta, y+\delta) + 2\epsilon. 
\end{align} \end{lemma} \begin{proof} By (2) in Lemma~\ref{lem:approx_Heaviside}, we have \begin{align} |F(x) - H(x)|\leq \epsilon ~~~\forall x\in [-\pi + \delta, -\delta]\cup [\delta, \pi - \delta], \end{align} which implies that for all $x,y\in [-\pi + \delta, -\delta]\cup [\delta, \pi - \delta]$, \begin{align} |F_2(x,y) - H_2(x,y)| = &~ |F(x)F(y)-H(x)H(y)|\\ = &~ |F(x)F(y)- F(x) H(y) + F(x) H(y) - H(x)H(y)|\notag\\ \leq &~ F(x) |F(y)-H(y)| + H(y)|F(x)-H(x)|\notag\\ \leq &~ (F(x) + H(y))\epsilon\notag\\ \leq &~ 2\epsilon, \end{align} where the last step follows from $F(x)\in [0,1]$ by (1) in Lemma~\ref{lem:approx_Heaviside}. Furthermore, for $x\in [-\delta, \delta]$ and $y\in [-\pi + \delta, -\delta]$, we also have \begin{align} |F_2(x,y)-H_2(x,y)| = &~ |F(x)F(y)-H(x)H(y)|\\ = &~ |F(x)F(y)|\tag{$H(y)=0$}\\ \leq &~ F(y)\notag\\ \leq &~ \epsilon. \end{align} Similarly, for $x\in [-\pi+\delta, -\delta]$, $y\in [-\delta, \delta]$, \begin{align} |F_2(x,y)-H_2(x,y)| \leq \epsilon. \end{align} Define $F_{L,2}(x,y) := F_2(x - \delta,y-\delta)$ such that \begin{align}\label{eq:F_L} |F_{L,2}(x,y) - H_2(x,y)|\leq 2\epsilon ~~~\forall (x,y)\in &~ [-\pi + 2\delta, 0]\times [-\pi+2\delta, \pi]\\ \cup &~ [-\pi+2\delta, \pi]\times [-\pi+2\delta, 0]\notag\\ \cup &~ [2\delta, \pi]\times [2\delta, \pi].\notag \end{align} For $\widetilde{C_{L,2}}(x,y) := (F_{L,2} * p_{O,2})(x,y)$, we have $\widetilde{C_{L,2}}(x,y) = \widetilde{C_{O,2}}(x - \delta,y-\delta)$. Let $p_2:=p_{O,2}$. 
Then, for any $x,y \in [-\pi/3, \pi/3]$, we have \begin{align}\label{eq:approx_C_2} &\left|C_{O,2}(x,y) - \widetilde{C_{L,2}}(x,y)\right| =~ \left|\int_{-\pi}^{\pi}\int_{-\pi}^{\pi} p_2(x-u,y-v) (H_2(u,v) - F_{L,2}(u,v)) \d u \d v\right|\\ \leq &~ \int_{-\pi}^{\pi}\int_{-\pi}^{\pi} p_2(x-u,y-v) |H_2(u,v) - F_{L,2}(u,v)| \d u \d v\notag\\ = &~ \left(\int_{-\pi}^0\int_{-\pi}^{\pi} + \int_{0}^{\pi}\int_{-\pi}^0+ \int_{2\delta}^\pi\int_{2\delta}^\pi\right) p_2(x-u,y-v) |H_2(u,v) - F_{L,2}(u,v)| \d u \d v\notag\\ & + \left(\int_0^{2\delta}\int_0^{\pi} + \int_0^{\pi}\int_0^{2\delta} - \int_0^{2\delta}\int_{0}^{2\delta}\right) p_2(x-u,y-v) |H_2(u,v) - F_{L,2}(u,v)| \d u \d v\notag\\ \leq &~ 2\epsilon\cdot \left(\int_{-\pi}^0\int_{-\pi}^{\pi} + \int_{0}^{\pi}\int_{-\pi}^0+ \int_{2\delta}^\pi\int_{2\delta}^\pi\right) p_2(x-u,y-v)\d u\d v\notag\\ & + \left(\int_0^{2\delta}\int_0^{\pi} + \int_0^{\pi}\int_0^{2\delta} - \int_0^{2\delta}\int_{0}^{2\delta}\right) p_2(x-u,y-v) |H_2(u,v) - F_{L,2}(u,v)| \d u \d v\notag\\ \leq &~ 2\epsilon + \left(\int_0^{2\delta}\int_0^{\pi} + \int_0^{\pi}\int_0^{2\delta} - \int_0^{2\delta}\int_{0}^{2\delta}\right) p_2(x-u,y-v) |H_2(u,v) - F_{L,2}(u,v)| \d u \d v\notag\\ \leq &~ 2\epsilon + \left(\int_0^{2\delta}\int_0^{\pi} + \int_0^{\pi}\int_0^{2\delta} - \int_0^{2\delta}\int_{0}^{2\delta}\right) p_2(x-u,y-v) \d u\d v\notag\\ = &~ 2\epsilon + \left(\int_{x-2\delta}^{x}\int_{y-\pi}^y + \int_{x-\pi}^x\int_{y-2\delta}^y -\int_{x-2\delta}^x\int_{y-2\delta}^y\right) p_2(u,v) \d u\d v\label{eq:integral_last}\\ = &~ 2\epsilon + C_{O,2}(x,y) - C_{O,2}(x-2\delta,y-2\delta),\notag \end{align} where the second step follows from the triangle inequality for integrals, the third step follows from partitioning the integration region, the fourth step follows from Eq.~\eqref{eq:F_L} and the fact that $p_2(x,y)$ is supported in $[-\pi/3,\pi/3]\times [-\pi/3,\pi/3]$ and $\delta<\pi/6$ (see Figure~\ref{fig:integral} (a)), the fifth step follows from the fact that $p_2$ is a density 
function, and the last step follows from the fact that $C_{O,2}$ is the CDF of $p_2$ in $[-\pi, \pi]\times [-\pi,\pi]$ and $x,y\in [-\pi/3,\pi/3]$ (see Figure~\ref{fig:integral} (b)). \begin{figure}[ht] \centering \subfigure[] {\includegraphics{fig/integral.pdf}} \subfigure[] {\includegraphics[scale=0.95]{fig/integral_2.pdf}}\label{fig:integral_2} \caption{(a) is the integral region for Eq.~\eqref{eq:approx_C_2}, where the integral in regions 1-6 can be upper bounded by Eq.~\eqref{eq:F_L}. (b) is the integral region for Eq.~\eqref{eq:integral_last}.} \label{fig:integral} \end{figure} Hence, we have \begin{align} \widetilde{C_{L,2}}(x,y) \geq &~ C_{O,2}(x,y) - (2\epsilon + C_{O,2}(x,y) - C_{O,2}(x - 2\delta,y-2\delta))\notag\\ = &~ C_{O,2}(x - 2\delta,y-2\delta) - 2\epsilon, \end{align} which proves the first inequality: \begin{align} \widetilde{C_{O,2}}(x - \delta,y-\delta) \geq C_{O,2}(x-2\delta,y-2\delta) - 2\epsilon. \end{align} Similarly, we can define $F_{R,2}(x,y) := F_2(x + \delta,y+\delta)$ and $\widetilde{C_{R,2}}(x,y) := (F_{R,2} * p_2)(x,y)$. We can show that \begin{align} \left|C_{O,2}(x,y) - \widetilde{C_{R,2}}(x,y)\right| \leq 2\epsilon + C_{O,2}(x+2\delta,y+2\delta) - C_{O,2}(x,y), \end{align} which gives \begin{align} \widetilde{C_{O,2}}(x + \delta,y+\delta) \leq C_{O,2}(x + 2\delta,y+2\delta) +2\epsilon. \end{align} The lemma is then proved. \end{proof} \subsection{Estimating the 2-d ACDF} We use the following parameterized quantum circuit to estimate the 2-d $O$-weighted ACDF $\widetilde{C_{O,2}}(x,y)$. \begin{figure}[H] \centering \begin{displaymath} \Qcircuit @C=1.0em @R=1.2em { & & & &\\ \lstick{\ket{0}} &\gate{\mathrm{H}} &\ctrl{1} & \ctrl{1} & \ctrl{1} & \gate{\mathrm{W}} & \gate{\mathrm{H}} &\meter\\ \lstick{\ket{\phi_0}} & \qw & \gate{e^{-it_1 H}} & \gate{O} & \gate{e^{-it_2H}} &\qw &\qw &\qw } \end{displaymath} \caption{Quantum circuit parameterized by $t_1,t_2$. $\mathrm{H}$ is the Hadamard gate and $\mathrm{W}$ is either $I$ or a phase gate $S$. 
} \label{fig:hadamard_test_o_2d} \end{figure} \begin{lemma}[Estimating the 2-d $O$-weighted ACDF]\label{lem:est_2d_acdf} For any $x,y\in [-\pi/3, \pi/3]$ and any $\epsilon, \nu\in (0,1)$, we can estimate $\widetilde{C_{O,2}}(x,y)$ with additive error $\eta\epsilon$ with probability $1-\nu$ by running the quantum circuit (Figure~\ref{fig:hadamard_test_o_2d}) $O(\epsilon^{-2}\eta^{-2}\log(1/\nu))$ times with maximal evolution time $\widetilde{O}(\gamma^{-1})$ and total expected evolution time $\widetilde{O}(\gamma^{-1}\epsilon^{-2}\eta^{-2})$. \end{lemma} \begin{proof} $\widetilde{C_{O,2}}(x,y)$ can be expanded in the following way: \begin{align} \widetilde{C_{O,2}}(x,y) = &~ (F_2 * p_2)(x,y)\\ = &~ \int_{-\pi}^\pi \int_{-\pi}^\pi F_2(x-u,y-v) p_2(u,v) \d u \d v\notag\\ = &~ \sum_{|j|\leq d,|j'|\leq d} \int_{-\pi}^\pi\int_{-\pi}^\pi \hat{F}_j \hat{F}_{j'}e^{ij(x-u)} e^{ij'(y-v)} p_2(u,v)\d u\d v\notag\\ = &~ \sum_{|j|\leq d,|j'|\leq d} \hat{F}_j\hat{F}_{j'}e^{i(jx+j'y)}\int_{-\pi}^\pi\int_{-\pi}^\pi p_2(u,v) e^{-iju}e^{-ij'v}\d u \d v\notag\\ = &~ \sum_{|j|\leq d,|j'|\leq d} \hat{F}_j\hat{F}_{j'}e^{i(jx+j'y)} \sum_{k,k'} c_k^*c_{k'} O_{k,k'} e^{-ij\tau\lambda_k}e^{-ij'\tau\lambda_{k'}}\notag\\ = &~ \sum_{|j|\leq d,|j'|\leq d} \hat{F}_j\hat{F}_{j'}e^{i(jx+j'y)} \cdot \bra{\phi_0} e^{-ij\tau H} O e^{-ij'\tau H}\ket{\phi_0}. \end{align} To estimate $\bra{\phi_0} e^{-ij\tau H} O e^{-ij'\tau H}\ket{\phi_0}$, we use a Monte Carlo method. Define random variables $J,J'$ with support $\{-d, \dots, d\}$ such that \begin{align}\label{eq:def_J2} \Pr[J=j,J'=j']=\frac{|\hat{F}_j| |\hat{F}_{j'}|}{{\cal F}^2}, \end{align} where ${\cal F}:=\sum_{|j|\leq d}|\hat{F}_j|$. Then, let $Z:=X_{J,J'} + i Y_{J,J'}\in \{\pm 1\pm i\}$. 
Define an estimator $\overline{G_2}(x,y; J, J', Z)$ as follows: \begin{align}\label{eq:def_g2} \overline{G_2}(x,y; J, J', Z):={\cal F}^2\cdot Z e^{i(\theta_J + Jx)}e^{i(\theta_{J'} + J'y)}, \end{align} where $\theta_j$ is defined by $\hat{F}_j = |\hat{F}_j|e^{i\theta_j}$, and $\theta_{j'}$ is defined similarly. Then, we show that $\overline{G_2}(x,y; J, J', Z)$ is unbiased: \begin{align} \E[\overline{G_2}(x,y; J, J', Z)] = &~ \sum_{|j|\leq d, |j'|\leq d} \E\left[(X_{j,j'} + iY_{j,j'})e^{i(\theta_j + jx)}e^{i(\theta_{j'} + j'y)}|\hat{F}_j||\hat{F}_{j'}|\right]\\ = &~ \sum_{|j|\leq d, |j'|\leq d} \hat{F}_j\hat{F}_{j'} e^{ijx}e^{ij'y} \cdot \E\left[X_{j,j'} + iY_{j,j'}\right]\notag\\ = &~ \sum_{|j|\leq d, |j'|\leq d} \hat{F}_j\hat{F}_{j'} e^{ijx}e^{ij'y} \cdot \bra{\phi_0} e^{-ij\tau H} O e^{-ij'\tau H}\ket{\phi_0}\notag\\ = &~ \widetilde{C_{O,2}}(x,y), \end{align} where the third step follows from Claim~\ref{clm:estimator_expectation}. Moreover, the variance of $\overline{G_2}$ can be upper-bounded by: \begin{align} \Var[\overline{G_2}(x,y; J, J', Z)]= &~ \E[|\overline{G_2}(x,y; J,J', Z)|^2] - |\E[\overline{G_2}(x,y; J,J', Z)]|^2\\ \leq &~ \E[|\overline{G_2}(x,y; J,J', Z)|^2]\notag\\ = &~ {\cal F}^4 \cdot \E[|X_{J,J'} + i Y_{J,J'}|^2]\notag\\ = &~ 2{\cal F}^4, \end{align} where the third step follows from $|e^{i(\theta_J+Jx)}|=|e^{i(\theta_{J'}+J'y)}|=1$, and the last step follows from $X_{j,j'}, Y_{j,j'}\in \{\pm 1\}$. By Lemma~\ref{lem:approx_Heaviside}, we know that ${\cal F} = \widetilde{O}(1)$. Hence, for all $x,y\in [-\pi/3, \pi/3]$, \begin{align} \E[\overline{G_2}(x,y)]=\widetilde{C_{O,2}}(x,y),~~~\text{and}~~~\Var[\overline{G_2}(x,y)]=\widetilde{O}(1). \end{align} Then, using the median-of-means estimator, we can obtain an $O(\eta\epsilon)$-additive error estimate of $\widetilde{C_{O,2}}(x,y)$ with probability $1-\nu$ using $O(\epsilon^{-2}\eta^{-2}\log(1/\nu))$ samples. The maximal evolution time is $2\tau d=\widetilde{O}(\gamma^{-1})$. 
The expected evolution time for one trial is \begin{align} \tau \sum_{|j|,|j'|\leq d}(|j|+|j'|)\frac{|\hat{F}_j||\hat{F}_{j'}|}{{\cal F}^2}=2\tau \sum_{|j|\leq d}|j|\frac{|\hat{F}_j|}{{\cal F}}=O(\tau d / \log(d)). \end{align} Hence, the total expected evolution time is $\widetilde{O}(\gamma^{-1}\epsilon^{-2}\eta^{-2})$. The lemma is then proved. \end{proof} \begin{figure}[ht!] \centering \subfigure[] {\includegraphics[scale=2]{fig/interval_1d.pdf}} \subfigure[] {\includegraphics[scale=1.5]{fig/interval_2d.pdf}} \caption{(a) shows a point that is \emph{good} for $\lambda_0$, where the blue interval is the approximation region such that $\widetilde{C_O}(x_{\mathsf{good}})$ is close to $C(x)$ for some $x$ in this interval. (b) shows a good point in the 2-d case, where in the green square, the 2-d $O$-weighted CDF $C_{O,2}$ takes the same value $C_{O,2}(\lambda_0, \lambda_0)$. And the blue square is the approximation region of $(x_{\mathsf{good}}, y_{\mathsf{good}})$ such that $\widetilde{C_{O,2}}(x_{\mathsf{good}}, y_{\mathsf{good}})$ is close to some $C_{O,2}(x,y)$ in this region.} \label{fig:my_label} \end{figure} Similar to the 1-d case, we can construct a ``good'' point for $(\lambda_0, \lambda_0)$ via the following claim. \begin{claim}[Construct a 2-d good point]\label{clm:2d_good} Let $\gamma$ be the spectral gap of the Hamiltonian $H$. Let $x_{\mathsf{good}}:=x^\star + \tau\gamma/2$, where $x^\star$ is the output of $\textsc{EstimateGSE}(\gamma/8, \tau, \eta, \nu/10)$ (Algorithm~\ref{alg:gs_energy}). Then, $(x_{\mathsf{good}}, x_{\mathsf{good}})$ is good for $(\lambda_0, \lambda_0)$. In particular, for any $\epsilon\in (0, 1)$, if the approximation error of $F(x)$ is set to be $\epsilon\eta$, then \begin{align} \left|\widetilde{C_{O,2}}(x_{\mathsf{good}}, x_{\mathsf{good}})-C_{O,2}(\tau\lambda_0, \tau\lambda_0)\right|\leq 2\epsilon \eta. 
\end{align} \end{claim} \begin{proof} By Claim~\ref{clm:good_x}, we know that $x_{\mathsf{good}}$ is good for $\lambda_0$, i.e., $[x_{\mathsf{good}}-\delta, x_{\mathsf{good}} + \delta]$ is contained in $[\lambda_0, \lambda_1)$. Hence, $(x_{\mathsf{good}}, x_{\mathsf{good}})$ is good for $(\lambda_0, \lambda_0)$ in the 2-d case. Then, by Lemma~\ref{lem:est_2d_acdf}, we have \begin{align} C_{O,2}(x_{\mathsf{good}}-\delta, x_{\mathsf{good}}-\delta) -2\epsilon \eta \leq \widetilde{C_{O,2}}(x_{\mathsf{good}},x_{\mathsf{good}}) \leq C_{O,2}(x_{\mathsf{good}} + \delta, x_{\mathsf{good}}+\delta) + 2\epsilon \eta. \end{align} The claim then follows from $C_{O,2}(x,y)=C_{O,2}(\lambda_0, \lambda_0)$ for any $(x,y)\in [\lambda_0, \lambda_1)\times [\lambda_0, \lambda_1)$. \end{proof} \subsection{Putting it all together} The main algorithm for ground state property estimation first estimates the ground state energy $\lambda_0$ and the overlap $p_0$, as described in Section~\ref{sec:est_overlap}. Then, by Lemma~\ref{lem:est_2d_acdf} and Claim~\ref{clm:2d_good}, the weighted expectation $p_0O_0$ can also be estimated. Taking the ratio of these two estimates yields an estimate of $O_0=\bra{\psi_0}O\ket{\psi_0}$.
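The identity behind this two-estimate strategy can be checked classically on a small dense instance: the jump of the CDF at $\lambda_0$ gives $p_0$, the jump of the $O$-weighted 2-d CDF at $(\lambda_0,\lambda_0)$ gives $p_0O_0$, and their ratio recovers $O_0$. The numpy sketch below uses a random Hamiltonian and observable as placeholders (a classical emulation by diagonalization, not the quantum algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# Random Hermitian "Hamiltonian" H and observable O (placeholders).
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
O = (B + B.conj().T) / 2

evals, evecs = np.linalg.eigh(H)
psi0 = evecs[:, 0]                        # ground state |psi_0>
phi0 = psi0 + 0.5 * evecs[:, 1]           # initial state with large overlap
phi0 /= np.linalg.norm(phi0)

Pi0 = np.outer(psi0, psi0.conj())         # ground state projector
p0 = abs(psi0.conj() @ phi0) ** 2                     # overlap p_0
p0_O0 = (phi0.conj() @ Pi0 @ O @ Pi0 @ phi0).real     # jump value p_0 * O_0
O0 = (psi0.conj() @ O @ psi0).real        # target ground state property

ratio = p0_O0 / p0                        # recovers O_0
```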
\begin{algorithm}[ht] \caption{Ground State Property Estimation (General Case)} \label{alg:gs_prop} \begin{algorithmic}[1] \algrenewcommand\algorithmicprocedure{\textbf{procedure}} \Procedure{EstimateGSProp}{$\epsilon,\tau, \eta, \gamma, \nu$} \State $\delta \gets O(\tau \gamma)$, $d\gets O(\delta^{-1}\log(\delta^{-1}\epsilon^{-1}\eta^{-1}))$ \For{$j\gets -d,\dots,d$} \State Compute $\hat{F}_j:=\hat{F}_{d,\delta,j}$ and $\theta_j$ \EndFor \State \Comment{Estimate the ground state energy} \State $x^\star\gets \textsc{EstimateGSE}(\gamma/8, \tau, \eta, \nu/10)$ \State $x_{\mathsf{good}}\gets x^\star + \tau \gamma / 2$ \State \Comment{Generate samples from the Hadamard test circuits} \State $B\gets O(\log(1/\nu))$, $K\gets \widetilde{O}(\epsilon^{-2})$ \For{$k\gets 1,\dots,B K$} \State Sample $(Z_k, J_k)$ from the quantum circuit (Figure~\ref{fig:hadamard_test}) \State Sample $(Z_k'', J_{k,1}'', J_{k,2}'')$ from the quantum circuit (Figure~\ref{fig:hadamard_test_o_2d}) \EndFor \State \Comment{Estimate $p_0$} \For{$i\gets 1,\dots, B$} \State $\overline{G}_i\gets \frac{1}{K}\sum_{j=1}^K\overline{G}(x_{\mathsf{good}}; Z_{(i-1)K+j}, J_{(i-1)K+j})$ \EndFor \State $\overline{p_0}\gets \mathrm{median}(\overline{G}_1,\dots,\overline{G}_{B})$\label{ln:prop_p0} \State \Comment{Estimate $p_0O_0$} \For{$i\gets 1,\dots, B$} \State $\overline{G}_i''\gets \frac{1}{K}\sum_{j=1}^K\overline{G_2}(x_{\mathsf{good}},x_{\mathsf{good}}; Z_{(i-1)K+j}'', J_{(i-1)K+j, 1}'', J_{(i-1)K+j, 2}'')$\Comment{Eq.~\eqref{eq:def_g2}} \EndFor \State $\overline{p_0O_0}\gets \mathrm{median}(\overline{G}_1'',\dots,\overline{G}_{B}'')$\label{ln:prop_p0O0} \State \Return $\overline{p_0O_0}/\overline{p_0}$ \EndProcedure \end{algorithmic} \end{algorithm} \begin{proof}[Proof of Theorem~\ref{thm:app_sim}] We first analyze the estimation error of Algorithm~\ref{alg:gs_prop}. By Lemma~\ref{lem:est_overlap}, $\overline{p_0}$ (Line~\ref{ln:prop_p0}) has additive error at most $O(\eta\epsilon)$. 
By Lemma~\ref{lem:est_2d_acdf} and Claim~\ref{clm:2d_good}, $\overline{p_0O_0}$ (Line~\ref{ln:prop_p0O0}) has additive error at most $O(\eta\epsilon)$. Then, by an error propagation analysis similar to that in Theorem~\ref{thm:app_sim_com}, we get that \begin{align} \left|\frac{\overline{p_0O_0}}{\overline{p_0}}-O_0\right|\leq O(\epsilon). \end{align} For the success probability, Algorithm~\ref{alg:gs_prop} has three components: estimating the ground state energy, estimating $p_0$, and estimating $p_0O_0$. By our choice of parameters, each of them fails with probability at most $\nu/3$. Hence, Algorithm~\ref{alg:gs_prop} succeeds with probability at least $1-\nu$. The maximal evolution time and the total expected evolution time follow from Theorem~\ref{thm:lt21_main}, Lemma~\ref{lem:est_overlap}, and Lemma~\ref{lem:est_2d_acdf}. \end{proof} \section{Handling non-unitary observables} \label{sec:general_alg} One may notice that Algorithm \ref{alg:gs_prop} works only for unitary observables, because it needs to use the circuit in Figure \ref{fig:hadamard_test_o_2d} to estimate $\bra{\phi_0}e^{-it_2 H} Oe^{-it_1 H}\ket{\phi_0}$ for certain $t_1, t_2 \in \mathbb{R}$, in which controlled-$O$ must be a unitary operation. In this section, we show that under reasonable assumptions this algorithm can be modified to estimate the ground state property $\bra{\psi_0} O \ket{\psi_0}$ for a general observable $O$. Before we present this result, one may wonder why it is necessary. After all, we can always decompose $O$ into a linear combination of Pauli strings $O=\sum_{\vec s} w_{\vec s} P_{\vec s}$, use Algorithm \ref{alg:gs_prop} to estimate each term $\mu_{\vec s} := \bra{\psi_0} P_{\vec s} \ket{\psi_0}$ individually, and return $\sum_{\vec s} w_{\vec s} \mu_{\vec s}$ as the result. While this strategy works in principle, it might not be efficient enough to be practical, depending on the weights $w_{\vec s}$ of the Pauli strings in the linear expansion of $O$.
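To make the dependence on the weights concrete, the following numpy sketch expands a (randomly chosen, placeholder) 2-qubit Hermitian observable in the Pauli basis and computes the 1-norm $\sum_{\vec s}|w_{\vec s}|$, which governs the overhead of the term-by-term strategy:

```python
import numpy as np
from itertools import product

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])
paulis = {'I': I2, 'X': X, 'Y': Y, 'Z': Z}

def pauli_weights(O, n):
    """Weights w_s such that O = sum_s w_s P_s over n-qubit Pauli strings,
    via the Hilbert-Schmidt inner product w_s = Tr(P_s O) / 2^n."""
    w = {}
    for labels in product('IXYZ', repeat=n):
        P = np.array([[1.0]])
        for l in labels:
            P = np.kron(P, paulis[l])
        w[''.join(labels)] = np.trace(P @ O).real / 2 ** n
    return w

rng = np.random.default_rng(2)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
O = (B + B.conj().T) / 2                  # random 2-qubit Hermitian observable
w = pauli_weights(O, 2)
O_rec = sum(v * np.kron(paulis[s[0]], paulis[s[1]]) for s, v in w.items())
one_norm = sum(abs(v) for v in w.values())  # cost factor of the Pauli-by-Pauli strategy
```

Since each $P_{\vec s}$ has unit norm, $\|O\|\le\sum_{\vec s}|w_{\vec s}|$, and this 1-norm can be much larger than $\|O\|$ for observables with many Pauli terms.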
Alternatively, one can fix the issue of Algorithm \ref{alg:gs_prop} by designing a procedure for estimating $\bra{\phi_0}e^{-it_2 H} Oe^{-i t_1 H}\ket{\phi_0}$ for arbitrary non-unitary $O$. These quantities are then utilized in the same way as before. We follow this approach and show that it is feasible whenever a block-encoding of $O$ is available. Namely, suppose $O$ is an $n$-qubit observable with $\|O\| \le 1$ and $U$ is an $(n+m)$-qubit unitary operator such that \begin{align} (\bra{0^m}\otimes I) U (\ket{0^m} \otimes I) = \alpha^{-1} O \end{align} for some $\alpha \ge \|O\|$. More details about the block-encoding model can be found in \cite{cgj18,lc19,gslw19,ral20}. Then we can still perform the Hadamard test with $U$ to estimate $\bra{\phi_0}e^{-it_2 H} Oe^{-it_1 H}\ket{\phi_0}$ for arbitrary $t_1, t_2 \in \mathbb{R}$. The main theorem of this section is stated below: \begin{theorem}[Ground state property estimation with block-encoded observable]\label{thm:app_sim_block} Suppose $p_0\geq \eta$ for some known $\eta$ and the spectral gap of the Hamiltonian $H$ is at least $\gamma$. Suppose we have access to the $\alpha$-block-encoding of the observable $O$. For any $\epsilon,\nu\in (0, 1)$, there exists an algorithm for estimating the ground state property $\bra{\psi_0}O\ket{\psi_0}$ within additive error at most $\epsilon$ with probability at least $1-\nu$, such that: \begin{enumerate} \item the expected total evolution time is $\widetilde{O}(\gamma^{-1}\epsilon^{-2}\eta^{-2}\alpha^2)$, \item the maximal evolution time is $\widetilde{O}(\gamma^{-1})$.
\end{enumerate} \label{thm:gspe_complexity_general_case} \end{theorem} \begin{proof}[Proof sketch of Theorem \ref{thm:app_sim_block}] The algorithm for handling non-unitary block-encoded observables is quite similar to Algorithm \ref{alg:gs_prop} for handling unitary observables, except that it relies on a different procedure to estimate $\bra{\phi_0}e^{-it_2 H} Oe^{-it_1 H}\ket{\phi_0}$ for arbitrary $t_1, t_2 \in \mathbb{R}$. Here we briefly describe this procedure and defer the detailed analysis to Appendix \ref{sec:hadamard_test_block_encoding}. Let $C \mhyphen V := \ket{0}\bra{0} \otimes I + \ket{1}\bra{1} \otimes V$ be the controlled-$V$ operation for an arbitrary unitary operator $V$. Let $\ket{\phi_0}$ be an arbitrary $n$-qubit state. Consider the following procedure (as illustrated in Figure \ref{fig:hadamard_test_block_encoding}): \begin{figure}[ht] \centering \begin{displaymath} \Qcircuit @C=1.0em @R=1.2em { & & & &\\ \lstick{\ket{0}} &\gate{\mathrm{H}} &\ctrl{2} & \ctrl{1} & \ctrl{2} & \gate{\mathrm{W}} & \gate{\mathrm{H}} &\meter\\ \lstick{\ket{0^m}} & \qw & \qw & \multigate{1}{U} & \qw &\meter\\ \lstick{\ket{\phi_0}} & \qw & \gate{e^{-it_1 H}} & \ghost{U} & \gate{e^{-it_2 H}} & \qw & \qw &\qw } \end{displaymath} \caption{Quantum circuit parameterized by $t_1,t_2$. $\mathrm{H}$ is the Hadamard gate and $\mathrm{W}$ is either $I$ or a phase gate $S$. $U$ is the block-encoding of the non-unitary observable $O$. } \label{fig:hadamard_test_block_encoding} \end{figure} \begin{enumerate} \item Prepare the state $\ket{0} \ket{0^m} \ket{\phi_0}$. \item Apply a Hadamard gate on the first register. \item Apply a $C \mhyphen e^{-i H t_1}$ on the first and third registers. \item Apply $C \mhyphen U$ on the current state, obtaining \begin{align} \dfrac{1}{\sqrt{2}} \left ( \ket{0} \ket{0^m} \ket{\phi_0} + \ket{1} U \ket{0^m} e^{-i H t_1}\ket{\phi_0} \right). \end{align} \item Measure the second register in the standard basis.
If the outcome is not $0^m$, then this procedure fails; otherwise, continue. The probability of this step succeeding is \begin{align} p_{succ} = \dfrac{1+ \alpha^{-2} \bra{\phi_0} e^{i H t_1} O^2 e^{-i H t_1} \ket{\phi_0}} {2}, \end{align} and when this event happens, the state becomes \begin{align} \dfrac{1}{\sqrt{2 p_{succ}}} \left [ \ket{0} \ket{\phi_0} + \alpha^{-1} \ket{1} O e^{-i H t_1} \ket{\phi_0} \right ]. \end{align} \item Apply a $C \mhyphen e^{-i H t_2}$ on the first and third registers. The state becomes \begin{align} \dfrac{1}{\sqrt{2 p_{succ}}} \left [ \ket{0} \ket{\phi_0} + \alpha^{-1} \ket{1} e^{-i H t_2} O e^{-i H t_1} \ket{\phi_0} \right ]. \end{align} \item Apply $W=I$ or phase gate $S$ on the first register. \item Apply a Hadamard gate on the first register. \item Measure the first register in the standard basis. Then if $W=I$, the (conditional) probability of getting outcome $0$ is \begin{align} \mathbb{P}[0 | succ] = \dfrac{p_{succ} + \alpha^{-1} \operatorname{Re}[ \bra{\phi_0} e^{-i H t_2} O e^{-i H t_1} \ket{\phi_0}] }{2 p_{succ}}; \end{align} if $W=S$, this probability is \begin{align} \mathbb{P}[0 | succ] = \dfrac{p_{succ} - \alpha^{-1} \operatorname{Im}[ \bra{\phi_0} e^{-i H t_2} O e^{-i H t_1} \ket{\phi_0}] }{2 p_{succ}}. \end{align} \end{enumerate} Now we define two random variables $X$ and $Y$ as follows. First, we run the above procedure with $W=I$ in step 7. If step 5 fails, $X=0$; otherwise, if the measurement outcome is $0$ or $1$ in step 9, then $X=\alpha$ or $-\alpha$, respectively. One can show that $X$ is an unbiased estimator of $\operatorname{Re}[ \bra{\phi_0} e^{-i H t_2} O e^{-i H t_1} \ket{\phi_0}]$, i.e. \begin{align} \mathbb{E}[X]=\operatorname{Re}[\bra{\phi_0} e^{-i H t_2} O e^{-i H t_1} \ket{\phi_0}]. \end{align} $Y$ is defined similarly. We run the above procedure with $W=S$ in step 7. If step 5 fails, $Y=0$; otherwise, if the measurement outcome is $1$ or $0$ in step 9, then $Y=\alpha$ or $-\alpha$, respectively. 
Then $Y$ is an unbiased estimator of $\operatorname{Im}[\bra{\phi_0} e^{-i H t_2} O e^{-i H t_1} \ket{\phi_0}]$, i.e., \begin{align} \mathbb{E}[Y]=\operatorname{Im}[\bra{\phi_0} e^{-i H t_2} O e^{-i H t_1} \ket{\phi_0}]. \end{align} It follows that $Z:=X+iY$ is an unbiased estimator of $\bra{\phi_0} e^{-i H t_2} O e^{-i H t_1} \ket{\phi_0}$, i.e., \begin{align} \mathbb{E}[Z]=\bra{\phi_0} e^{-i H t_2} O e^{-i H t_1} \ket{\phi_0}. \end{align} Note that $|Z|^2 = |X|^2 + |Y|^2 \le 2\alpha^2$ with certainty. Equipped with the above method for estimating $\bra{\phi_0} e^{-i H t_2} O e^{-i H t_1} \ket{\phi_0}$ for arbitrary $t_1, t_2 \in \mathbb{R}$, we can now use the same strategy as in Lemma \ref{lem:est_2d_acdf} to estimate $\widetilde{C_{O,2}}(x,y)$. The other components of Algorithm \ref{alg:gs_prop} remain intact. The analysis of this modified algorithm is almost the same as before, except that now we have \begin{align} \Var[\overline{G_2}(x,y)]=\widetilde{O}(\alpha^2). \end{align} As a consequence, compared to Theorem \ref{thm:app_sim}, the total evolution time of this modified algorithm is larger by a factor of $O(\alpha^2)$, while its maximal evolution time is of the same order. \end{proof} \section{Applications} \label{sec:apps} In this section, we discuss some applications of our ground state property estimation algorithm. To define an application of the ground state property estimation algorithm, we must specify a Hamiltonian of interest $H$ and an observable of interest $O$. An example application used in quantum chemistry and materials is the Green's function (see, e.g., \cite{tong2021fast}), where $O=a_i(z-(H-E_0))^{-1}a_j^\dagger$. In the following two subsections we describe another example from quantum chemistry and materials as well as an example of a linear algebraic subroutine. \subsection{Charge density} The primary application of the technique is the estimation of ground state properties of physical systems.
Here we describe how to compute the charge density of a molecule, which can be used to compute properties such as the electric dipole moment \cite{rice2021quantum}. From a second-quantized representation of the electronic system (assuming fixed nuclear positions), the charge density is determined from the one-particle reduced density matrix as \begin{align} \rho(\vec{r}) = -e \sum_{p,q} D_{p,q} \phi_p^{*}(\vec{r})\phi_q(\vec{r}), \end{align} where $e$ is the elementary charge, $D_{p,q}$ are the entries of the one-electron reduced density matrix (1RDM) of the ground state, and $\phi_q(\vec{r})$ are the basis wave functions chosen for the second-quantized representation of the electronic system \cite{helgaker2014molecular}. The 1RDM of the ground state is a matrix of ground state properties with each entry defined as \begin{align} D_{p,q} = \bra{\psi_0}a_p^{\dagger}a_q\ket{\psi_0}, \end{align} where the $a_p$ are fermionic annihilation operators. The operators involved in the 1RDM can each be expressed as a linear combination of unitary operators using the Majorana representation $a_p=\frac{1}{2}(\gamma_{2p}+i\gamma_{2p+1})$, where the $\gamma_k$ are Hermitian and unitary\footnote{To implement this application on a quantum computer we must represent the unitaries as operations on qubits. For an $n$-electron system, using the Jordan-Wigner or Bravyi-Kitaev transformation \cite{seeley2012bravyi}, each Majorana operator, and products thereof, can be represented as a Pauli string. }, so that \begin{align} D_{p,q} = \frac{1}{4}\left(\bra{\psi_0}\gamma_{2p}\gamma_{2q}\ket{\psi_0}-i\bra{\psi_0}\gamma_{2p+1}\gamma_{2q}\ket{\psi_0}+i\bra{\psi_0}\gamma_{2p}\gamma_{2q+1}\ket{\psi_0}+\bra{\psi_0}\gamma_{2p+1}\gamma_{2q+1}\ket{\psi_0}\right). \end{align} Accordingly, we may use the method of Section \ref{sec:unitary_alg} to estimate each entry of the 1RDM and then obtain the charge density function of the ground state.
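The Majorana expansion of $D_{p,q}$ can be checked numerically; the sketch below constructs Jordan-Wigner Majorana operators on two modes with numpy and verifies the four-term identity (a self-contained consistency check, not part of the estimation algorithm):

```python
import numpy as np

I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]])
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0])

def majorana(k, n):
    """Jordan-Wigner Majorana operator gamma_k on n fermionic modes:
    gamma_{2p} = Z^{(x p)} X I..., gamma_{2p+1} = Z^{(x p)} Y I..."""
    p, parity = divmod(k, 2)
    ops = [Z] * p + [X if parity == 0 else Y] + [I2] * (n - p - 1)
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

n = 2
g = [majorana(k, n) for k in range(2 * n)]
a = [(g[2 * p] + 1j * g[2 * p + 1]) / 2 for p in range(n)]   # annihilation operators

p, q = 0, 1
lhs = a[p].conj().T @ a[q]                # a_p^dagger a_q
rhs = (g[2*p] @ g[2*q] - 1j * g[2*p+1] @ g[2*q]
       + 1j * g[2*p] @ g[2*q+1] + g[2*p+1] @ g[2*q+1]) / 4
```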
As a point of comparison, we could alternatively use the variational quantum eigensolver algorithm to prepare an approximation to the ground state and then directly estimate each of the Pauli expectation values. However, there is no guarantee that a target accuracy for the ground state approximation can be achieved. Remarkably, the methods introduced in this paper can be used to ensure a target accuracy in the estimation regardless of the quality of the ground state approximation, though possibly at the cost of an increase in runtime. \subsection{Quantum linear system solver} In the seminal \cite{hhl09} paper, a quantum algorithm is proposed to generate a quantum state approximately proportional to the solution of a linear system of equations. Namely, given a linear system $A \vec{x}=\vec{b}$, the algorithm produces a quantum state close to $\ket{x} := \frac{\sum_j x_j \ket{j}}{\sqrt{\sum_j |x_j|^2}}$, where the $x_j$ are the entries of $\vec{x}=A^{-1}\vec{b}$. In fact, in many cases, we only need to know $\bra{x}M \ket{x}$, where $M$ is a linear operator. For example, in quantum mechanics, many features of $\ket{x}$ can be extracted in this way, including normalization, moments, etc. One approach to this problem is to first solve the linear system using any quantum linear system solver \cite{hhl09, cks17, cgj18, gslw19} to obtain the state $\ket{x}$ and then measure $M$. However, a shortcoming of this method is that most quantum linear system solvers require deep quantum circuits, so the needed quantum resources may not be accessible in the near future. Recently, a few quantum algorithms \cite{blrm19, hbr19, syso19} were developed to solve linear systems of equations by encoding such a system into an effective Hamiltonian \begin{align} H_G:=A^\dagger (I-\ket{b}\bra{b})A, \end{align} whose ground state corresponds to the solution vector $\ket{x}$.
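That the ground state of $H_G$ encodes the solution can be verified directly on a small instance; in the numpy sketch below, the matrix $A$ and vector $b$ are arbitrary placeholders:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 5
A = rng.normal(size=(n, n)) + 2 * np.eye(n)   # placeholder invertible matrix
b = rng.normal(size=n)
b /= np.linalg.norm(b)

# Effective Hamiltonian H_G = A^dag (I - |b><b|) A (A is real, so A^dag = A^T).
P = np.eye(n) - np.outer(b, b)
H_G = A.T @ P @ A

evals, evecs = np.linalg.eigh(H_G)
ground = evecs[:, 0]                          # ground state of H_G

x = np.linalg.solve(A, b)
x /= np.linalg.norm(x)                        # normalized solution |x>
overlap = abs(ground @ x)                     # should be 1: ground state = |x>
```

Indeed, $H_G v = 0$ exactly when $Av \propto \ket{b}$, i.e., when $v \propto A^{-1}\vec{b}$, so the (unique, for invertible $A$) zero-energy ground state is $\ket{x}$.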
We can combine this idea with our ground state property estimation algorithm to get a low-depth algorithm for estimating properties of the linear system solution. More specifically, suppose we can simulate the Hamiltonian $H_G$ for some specified time and we know the normalization factor $\tau$ such that the eigenvalues of $\tau H_G$ are in $[-\pi/3, \pi/3]$. For the operator $M$, we can assume that $M$ can be decomposed into a linear combination of Pauli operators $M=\sum_{\ell=1}^L c_\ell \sigma_\ell$, or we assume that $M$ is given in the block-encoding form. The estimation algorithm has two steps: \begin{enumerate} \item Run a quantum linear system algorithm (e.g. \cite{syso19}, \cite{an2019quantum}, or \cite{lin2020optimal}) with constant precision to prepare an initial state $\ket{\phi_0}$ such that $|\bra{\phi_0}x\rangle|^2$ is $\Omega(1)$. \item Using $\ket{\phi_0}$ from step 1 as the initial state, run Algorithm~\ref{alg:gs_prop} to estimate $\bra{x}M\ket{x}$ within $\epsilon$-additive error for any $\epsilon\in (0,1)$. \end{enumerate} Step 1 takes $\tilde{O}(\kappa)$ time, where $\kappa$ is the condition number of $A$. To analyze the computation cost of the second step, we need a lower bound on the spectral gap of $H_G$. Since $A\ket{x}\propto \ket{b}$, we have $\bra{x}A^\dagger (I-\ket{b}\bra{b})A\ket{x}=0$, and hence $\lambda_0(H_G)=0$. For the second smallest eigenvalue, since $H_G=A^\dagger A - A^\dagger \ket{b}\bra{b}A$, by Weyl's inequality, we have \begin{align} \lambda_1(H_G) \geq &~ \lambda_0(A^\dagger A) - \lambda_1(A^\dagger \ket{b}\bra{b}A)\notag\\ = &~ \lambda_0(A^\dagger A), \end{align} where the second step follows from the fact that $A^\dagger \ket{b}\bra{b}A$ is rank-1. Due to the normalization, the smallest (normalized) singular value of $A$ is $\Omega(\kappa^{-1})$. Hence, we have $\gamma= \Omega(\kappa^{-2})$. By Theorem~\ref{thm:app_sim}, the maximal evolution time of the Hamiltonian will be $\widetilde{O}(\kappa^2)$.
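The spectral-gap bound $\lambda_1(H_G)\geq \lambda_0(A^\dagger A)=\sigma_{\min}(A)^2$, together with $\lambda_0(H_G)=0$, can be sanity-checked on random instances; the numpy sketch below is a numerical illustration, not a substitute for the Weyl-inequality argument:

```python
import numpy as np

rng = np.random.default_rng(5)

def gap_bound_holds(n=4):
    """One random instance: check lambda_0(H_G) = 0 and
    lambda_1(H_G) >= sigma_min(A)^2, up to numerical tolerance."""
    A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    b = rng.normal(size=n) + 1j * rng.normal(size=n)
    b /= np.linalg.norm(b)
    P = np.eye(n) - np.outer(b, b.conj())
    H_G = A.conj().T @ P @ A
    lam = np.sort(np.linalg.eigvalsh(H_G))
    sigma_min = np.linalg.svd(A, compute_uv=False).min()
    return lam[0] > -1e-8 and lam[1] >= sigma_min ** 2 - 1e-8

ok = all(gap_bound_holds() for _ in range(20))
```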
To further improve the circuit depth, we may apply the gap amplification technique \cite{sb13, syso19} to quadratically increase the spectral gap of $H_G$. Specifically, consider the following family of Hamiltonians: \begin{align}\label{eq:lin_hamiltonian} \bar{H}'_G(s):=\sigma^+\otimes \bar{A}^\dagger(s) (I-\ket{\bar{b}}\bra{\bar{b}}) + \sigma^-\otimes (I-\ket{\bar{b}}\bra{\bar{b}}) \bar{A}(s), \end{align} where $\sigma^{\pm}=(X\pm iY)/2$, $\bar{A}(s):=(1-s)Z \otimes I + s X \otimes A$, $\ket{\bar{b}}:=\ket{+}\ket{b}$ and $s \in [0, 1]$. Note that these Hamiltonians act on the original system and two ancilla qubits. Then we have \begin{align} (\bar{H}'_G(s))^2=\begin{bmatrix} \bar{H}_G(s) & 0\\ 0 & (I-\ket{\bar{b}}\bra{\bar{b}}) \bar{A}(s) \bar{A}^\dagger(s) (I-\ket{\bar{b}}\bra{\bar{b}}) \end{bmatrix}, \end{align} where \begin{align} \bar{H}_G(s):=\bar{A}^\dagger(s)(I-\ket{\bar{b}}\bra{\bar{b}}) \bar{A}(s). \end{align} As shown in \cite{syso19}, the eigenvalues of $\bar{H}'_G(s)$ are \begin{align} \left\{0, 0, \pm\sqrt{\lambda_1(s)}, \pm\sqrt{\lambda_2(s)}, \dots\right\}, \end{align} where the $\lambda_j(s)$ are the nonzero eigenvalues of $\bar{H}_G(s)$. Furthermore, let $\ket{x(s)}$ be the unique ground state of $\bar{H}_G(s)$. Note that $\ket{x(0)}=\ket{-}\ket{b}$ and $\ket{x(1)}=\ket{+}\ket{x}$. Then the ground space of $\bar{H}'_G(s)$ is spanned by $\{\ket{0}\ket{x(s)}, \ket{1}\ket{\bar{b}}\}$. In addition, for $s=1$, one can use Weyl's inequality to show that $\lambda_1(1) \ge \kappa^{-2}$, which implies that the smallest nonzero eigenvalue of $\bar{H}_G'(1)$ is $\Omega(\kappa^{-1})$, as desired. We can use the algorithm in \cite{syso19} to prepare a state that has $\Omega(1)$ overlap with $\ket{0}\ket{x(1)}=\ket{0}\ket{+}\ket{x}$ in $\tilde{O}(\kappa)$ time.
Specifically, this algorithm starts with the state $\ket{0}\ket{x(0)}=\ket{0}\ket{-}\ket{b}$, performs a sequence of unitary operations of the form $e^{-i t_k \bar{H}_G'(s_k)}$ on it, and outputs a state $\epsilon$-close to $\ket{0}\ket{x(1)}$ in $\tilde{O}(\kappa \epsilon^{-1})$ time. Here we set $\epsilon=\Theta(1)$, so the time cost of this procedure is $\tilde{O}(\kappa)$. After obtaining a state $\ket{\phi_0}$ that has $\Omega(1)$ overlap with $\ket{0}\ket{+}\ket{x}$, we run Algorithm~\ref{alg:gs_prop} on $\ket{\phi_0}$, $\bar{H}'_G(1)$ and $\tilde{M}:=\ket{0}\bra{0}\otimes \ket{+}\bra{+}\otimes M$ to estimate $\bra{0,+,x}\tilde{M}\ket{0,+,x}=\bra{x}M\ket{x}$. Notice that since we know that the ground state energy of $\bar{H}'_G(1)$ is zero, we do not need to first estimate the ground state energy using Algorithm~\ref{alg:gs_energy}. Instead, we directly evaluate the $O$-weighted CDF at zero. Therefore, by Theorem~\ref{thm:app_sim_block}, we get the following result: \begin{corollary}[Quantum linear system solution property estimation] \label{cor:qlss} For a linear system $A\vec{x}=\vec{b}$, suppose the singular values of $A$ lie in $[1/\kappa, 1]$ for $\kappa>1$, and the eigenvalues of $\bar{H}'_G(1)$ (Eq.~\eqref{eq:lin_hamiltonian}) are in $[-\pi/3, \pi/3]$. Furthermore, suppose we can implement $e^{-i t \bar{H}'_G(s)}$ (Eq.~\eqref{eq:lin_hamiltonian}) in $\tilde{O}(t)$ time for all $s \in [0, 1]$. Then, for any linear operator $M$ given by its $\alpha$-block encoding unitary $U_M$, and for any $\epsilon\in (0,1)$, the expectation value $\bra{x}M\ket{x}$ can be estimated with $\epsilon$-additive error with high probability such that: \begin{itemize} \item the depth of each circuit is $\widetilde{O}(\kappa)$. \item the expected total runtime is $\widetilde{O}(\kappa\epsilon^{-2}\alpha^2)$.
\end{itemize} \end{corollary} For comparison, the algorithm in \cite{syso19} needs $\widetilde{O}(\kappa\epsilon^{-1})$ circuit depth to obtain a state that is $\epsilon$-close to $\ket{x}$, which is larger than ours. Moreover, to estimate $\bra{x}M\ket{x}$, even with amplitude estimation, it still needs $\Omega(\epsilon^{-1})$ copies of the state to achieve $\epsilon$-additive error. Hence, its total runtime will be $\widetilde{O}(\kappa\epsilon^{-2})$, nearly matching our result (ignoring the dependence on the $\alpha$ factor). \section{Discussion and Outlook}\label{sec:discuss} We have presented a quantum-classical hybrid algorithm for estimating properties of the ground state of a Hamiltonian, such that the quantum circuit depth is relatively small and depends only poly-logarithmically on $\epsilon^{-1}$. Therefore, the algorithm has a significant advantage in high-accuracy estimation, and it can potentially be implemented on early fault-tolerant devices. In practice, our algorithm can solve many important tasks when combined with initial state preparation methods (e.g., VQE or QAOA). In this paper, we provide two examples, one in quantum chemistry and another in solving linear systems, and we believe more applications will be explored in the future. Another important direction is to improve the total evolution time of our algorithm, which depends quadratically on $\epsilon^{-1}$. The blowup comes from evaluating the $O$-weighted CDF in high precision and a trade-off between maximal evolution time and total evolution time. As a result, our algorithm does not meet the Heisenberg limit of linear dependence on $\epsilon^{-1}$ for generic Hamiltonians \cite{aa17}. In our main result (Theorem~\ref{thm:app_sim}), the $\epsilon^{-2}\eta^{-2}$ factor comes from the number of samples needed to reduce the estimator's error to $O(\epsilon\eta)$. Amplitude estimation can be used to reduce this number of samples and the total evolution time.
However, this comes at the cost of significantly increasing the maximal evolution time, which could require large fault-tolerant overheads for reliable implementation. A strategy for achieving improved performance that is more amenable to early fault-tolerant quantum computers is to use the recently introduced ``enhanced sampling'' techniques \cite{wang2021minimizing}. If $\lambda$ characterizes the fidelity decay rate of the circuit as deeper circuits are used, then we would expect to need a maximal evolution time of $O(\lambda^{-1}\gamma^{-1})$ and a total evolution time of $O(\lambda\gamma^{-1}\epsilon^{-2}\eta^{-2})$. Note that because this approach incorporates the impact of error into the algorithm, the maximal evolution time is of no concern. Rather than being a cost that needs monitoring, the maximal evolution time is chosen by the algorithm to minimize the total evolution time. With this, we expect that as the quality of devices improves, the performance of the algorithm improves proportionally. We note that a similar approach can also be applied to improve the total evolution time in \cite{lt21} from $\widetilde{O}(\epsilon^{-1}\eta^{-2})$ to $\widetilde{O}(\lambda\epsilon^{-1}\eta^{-2})$. This work fits into the paradigm of ``beyond the ground state energy'' and studies more general properties of the ground state. Can we go further beyond the ground state? Some prior works have explored the estimation of such properties of Hamiltonians. For example, Brown, Flammia, and Schuch \cite{bfs11} studied the density of states. Jordan, Gosset, and Love \cite{jgl10} focused on the energy of excited states. Gharibian and Sikora \cite{gs15} identified the energy barriers. Watson and Bausch \cite{watson2021complexity} explored detecting phase transitions via order parameters. In general, for an unknown Hamiltonian, these estimation problems will be hard.
An interesting open problem is: given some prior knowledge of the Hamiltonian, can we design efficient or low-depth quantum algorithms for estimating Hamiltonian properties beyond the ground state? \section{Ground State Property Estimation Problem} \label{sec:gspe} In this section, we formally define the ground state property estimation problem. This problem was initially studied by Ambainis \cite{amb14} as the approximate simulation problem (\textsf{APX-SIM}), and he proved that \textsf{APX-SIM} is $\mathsf{P^{QMA[\log]}}$-complete\footnote{$\mathsf{P^{QMA[\log]}}$ contains the problems solvable by polynomial-time classical algorithms that are allowed to make $O(\log n)$ queries to an oracle solving a promise problem in \textsf{QMA}.}. \begin{problem}[Approximate simulation (\textsf{APX-SIM}), \cite{amb14}]\label{prob:app_sim} Given a $k$-local Hamiltonian $H$, an $\ell$-local observable $O$, and real numbers $a,b,\epsilon$ such that $b-a\geq 1/\poly(n)$ and $\epsilon\geq 1/\poly(n)$, for $n$ the number of qubits the Hamiltonian $H$ acts on, decide: \begin{itemize} \item \textbf{Yes case:} $H$ has a ground state $\ket{\psi_0}$ such that $\bra{\psi_0}O\ket{\psi_0}\leq a$, \item \textbf{No case:} for any state $\ket{\psi}$ with $\bra{\psi}H\ket{\psi}\leq \lambda_0 + \epsilon$, where $\lambda_0$ is the ground state energy of $H$, it holds that $\bra{\psi}O\ket{\psi}\geq b$. \end{itemize} \end{problem} In follow-up works, \textsf{APX-SIM} was shown to be $\mathsf{P^{QMA[\log]}}$-complete even for 5-local Hamiltonians and 1-local observables \cite{gy19}, and also for physically motivated models such as the 2D Heisenberg model and the 1D nearest-neighbor, translationally invariant model \cite{gpy20,wbg20}. However, these previous studies only focused on the decision version of this problem.
For the purpose of designing efficient algorithms, we first define the ``search version'' of \textsf{APX-SIM} as follows: \begin{problem}[Search version of \textsf{APX-SIM}]\label{prob:apx_sim_search} Given a Hamiltonian $H$, a (local) observable $O$, and $\epsilon\in (0, 1)$, with $\Omega(1)$ probability, estimate $\bra{\psi_0} O \ket{\psi_0}$ with an additive/multiplicative error at most $\epsilon$. \end{problem} In general, Problem~\ref{prob:apx_sim_search} will not be more tractable than Problem~\ref{prob:app_sim}. Thus, we may need some prior information about the Hamiltonian $H$ and its ground state. As suggested by the widely used variational quantum eigensolver (VQE) \cite{pmsy14,mrba16} and the Hartree-Fock method \cite{so12} in quantum chemistry, for many real-world Hamiltonians we are able to efficiently prepare an initial state $\ket{\phi_0}$ that has a nontrivial overlap with the ground state. Moreover, we assume that the Hamiltonian $H$ has a nontrivial spectral gap, a condition satisfied by a large family of Hamiltonians in practice. With these assumptions, we formally define the ground state property estimation problem as follows: \begin{problem}[Ground state property estimation (GSPE)]\label{prob:gs_prop_est} Given a Hamiltonian $H$ with spectral gap $\gamma$ and ground state $\ket{\psi_0}$, an observable $O$, a unitary $U_I$ that prepares an initial state $\ket{\phi_0}$ with $|\langle \phi_0|\psi_0\rangle|^2\geq \eta$, and $\epsilon\in (0,1)$, estimate $\bra{\psi_0} O\ket{\psi_0}$ with an additive/multiplicative error at most $\epsilon$ with $\Omega(1)$ probability. \end{problem} \begin{remark} We notice that when $O=H$, Problem~\ref{prob:gs_prop_est} becomes the ground state energy estimation problem. Moreover, prior knowledge of a large overlap for the initial state is required by all quantum algorithms with provable performance guarantees (e.g. \cite{gtc19,lt20,lt21}).
It is also worth noting that even with these assumptions, a purely classical algorithm is unlikely to be able to estimate the ground state energy or properties to high precision (unless $\mathsf{P}=\mathsf{BQP}$)~\cite{gg21}. \end{remark} We propose a high-accuracy, early fault-tolerant quantum algorithm for GSPE that satisfies the following properties: \begin{itemize} \item The maximal evolution time depends \emph{logarithmically} on the accuracy $\epsilon$ and the overlap $\eta$. \item In addition to the Hamiltonian evolution and observable implementation, it only uses one additional ancilla qubit. \end{itemize} \section{Technical details of the Hadamard test of block-encoded observable} \label{sec:hadamard_test_block_encoding} In this section, we give a detailed analysis of the Hadamard test for block-encodings, which plays a crucial role in the proof of Theorem \ref{thm:gspe_complexity_general_case}. We first note that the quantum state before the final measurements is as follows: \begin{align} \ket{\phi_1}=\begin{cases} \frac{1}{\sqrt{2}}\left(\ket{+}\ket{0^m}\ket{\phi_0} + \ket{-}(I\otimes e^{-iHt_2})U(I\otimes e^{-iH t_1})\ket{0^m}\ket{\phi_0}\right)& \text{if}~W=I,\\ \frac{1}{\sqrt{2}}\left(\ket{+}\ket{0^m}\ket{\phi_0} + i\ket{-}(I\otimes e^{-iHt_2})U(I\otimes e^{-iH t_1})\ket{0^m}\ket{\phi_0}\right)& \text{if}~W=S. \end{cases} \end{align} \paragraph{Case 1: $W=I$} We measure the first two registers.
If the outcome is $(0, 0^m)$, the (un-normalized) remaining state is: \begin{align} &(\bra{0}\bra{0^m}\otimes I)\frac{1}{\sqrt{2}}\left(\ket{+}\ket{0^m}\ket{\phi_0} + \ket{-}(I\otimes e^{-iHt_2})U(I\otimes e^{-iH t_1})\ket{0^m}\ket{\phi_0}\right)\notag\\ = &~ \frac{1}{2}\ket{\phi_0}+\frac{1}{2\alpha}e^{-iHt_2}Oe^{-iHt_1}\ket{\phi_0} \end{align} Hence, this event happens with the following probability: \begin{align} &\Pr[\text{the outcome is }(0,0^m)|W=I]\\ =&~ \bra{\phi_0}\left(\frac{1}{2}I+\frac{1}{2\alpha}e^{iHt_1}O^\dagger e^{iHt_2}\right)\left(\frac{1}{2}I+\frac{1}{2\alpha}e^{-iHt_2}Oe^{-iHt_1}\right)\ket{\phi_0}\notag\\ =&~ \frac{1}{4}\left(1+\frac{1}{\alpha}\bra{\phi_0}e^{iHt_1}O^\dagger e^{iHt_2}\ket{\phi_0}+\frac{1}{\alpha}\bra{\phi_0}e^{-iHt_2}O e^{-iHt_1}\ket{\phi_0} + \frac{1}{\alpha^2}\bra{\phi_0}e^{iHt_1}O^\dagger Oe^{-iHt_1}\ket{\phi_0}\right). \end{align} Similarly, if the outcome is $(1, 0^m)$, the remaining (un-normalized) state is \begin{align} \frac{1}{2}\ket{\phi_0}-\frac{1}{2\alpha}e^{-iHt_2}Oe^{-iHt_1}\ket{\phi_0}, \end{align} and the probability is \begin{align} &\Pr[\text{the outcome is }(1,0^m)|W=I]\notag\\ = &~ \frac{1}{4}\left(1-\frac{1}{\alpha}\bra{\phi_0}e^{iHt_1}O^\dagger e^{iHt_2}\ket{\phi_0}-\frac{1}{\alpha}\bra{\phi_0}e^{-iHt_2}O e^{-iHt_1}\ket{\phi_0} + \frac{1}{\alpha^2}\bra{\phi_0}e^{iHt_1}O^\dagger Oe^{-iHt_1}\ket{\phi_0}\right). \end{align} Hence, the expectation of $X$ is \begin{align} \E[X]=&~ \alpha \cdot (\Pr[\text{the outcome is }(0,0^m)|W=I] - \Pr[\text{the outcome is }(1,0^m)|W=I])\\ = &~ \frac{1}{2}(\bra{\phi_0}e^{-iHt_2}O e^{-iHt_1}\ket{\phi_0} + \bra{\phi_0}e^{iHt_1}O^\dagger e^{iHt_2}\ket{\phi_0})\notag\\ = &~ \frac{1}{2}(\bra{\phi_0}e^{-iHt_2}O e^{-iHt_1}\ket{\phi_0} + \overline{\bra{\phi_0}e^{-iHt_2}O e^{-iHt_1}\ket{\phi_0}})\notag\\ = &~ \Re \bra{\phi_0}e^{-iHt_2}O e^{-iHt_1}\ket{\phi_0}. 
\end{align} \paragraph{Case 2: $W=S$} Similarly to Case 1, we have \begin{align} & \Pr[\text{the outcome is }(0,0^m)|W=S]\\ =&~ \bra{\phi_0}\left(\frac{1}{2}I-\frac{i}{2\alpha}e^{iHt_1}O^\dagger e^{iHt_2}\right)\left(\frac{1}{2}I+\frac{i}{2\alpha}e^{-iHt_2}Oe^{-iHt_1}\right)\ket{\phi_0}\notag\\ =&~ \frac{1}{4}\left(1-\frac{i}{\alpha}\bra{\phi_0}e^{iHt_1}O^\dagger e^{iHt_2}\ket{\phi_0}+\frac{i}{\alpha}\bra{\phi_0}e^{-iHt_2}O e^{-iHt_1}\ket{\phi_0} + \frac{1}{\alpha^2}\bra{\phi_0}e^{iHt_1}O^\dagger Oe^{-iHt_1}\ket{\phi_0}\right). \end{align} Similarly, \begin{align} &\Pr[\text{the outcome is }(1,0^m)|W=S]\notag\\ = &~ \frac{1}{4}\left(1+\frac{i}{\alpha}\bra{\phi_0}e^{iHt_1}O^\dagger e^{iHt_2}\ket{\phi_0}-\frac{i}{\alpha}\bra{\phi_0}e^{-iHt_2}O e^{-iHt_1}\ket{\phi_0} + \frac{1}{\alpha^2}\bra{\phi_0}e^{iHt_1}O^\dagger Oe^{-iHt_1}\ket{\phi_0}\right). \end{align} Hence, \begin{align} \E[Y]=&~ \alpha \cdot (\Pr[\text{the outcome is }(1,0^m)|W=S] - \Pr[\text{the outcome is }(0,0^m)|W=S])\\ = &~ \frac{i}{2}(-\bra{\phi_0}e^{-iHt_2}O e^{-iHt_1}\ket{\phi_0} + \overline{\bra{\phi_0}e^{-iHt_2}O e^{-iHt_1}\ket{\phi_0}})\notag\\ = &~ \Im \bra{\phi_0}e^{-iHt_2}O e^{-iHt_1}\ket{\phi_0}. \end{align} Therefore, \begin{align} \E[X+iY] = \bra{\phi_0}e^{-iHt_2}O e^{-iHt_1}\ket{\phi_0}. \end{align} \subsection{Generalized Hadamard test} In this subsection, we study the generalized Hadamard test for block-encodings and show that the estimator's variance can be reduced by replacing the first Hadamard gate with an $\alpha$-dependent single-qubit gate. Suppose $W=I$ and we replace the first Hadamard gate with the following single-qubit gate: \begin{align} G(a, b, \theta):=\begin{bmatrix} a & b\\ -e^{i\theta}\overline{b} & e^{i\theta}\overline{a} \end{bmatrix}, \end{align} where $\theta\in \mathbb{R}$ and $a,b\in \C$ with $|a|^2 + |b|^2=1$.
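As a numerical sanity check of the two cases analyzed above, the following sketch simulates the test on a toy example and verifies $\E[X]=\Re\bra{\phi_0}e^{-iHt_2}Oe^{-iHt_1}\ket{\phi_0}$ and $\E[Y]=\Im\bra{\phi_0}e^{-iHt_2}Oe^{-iHt_1}\ket{\phi_0}$. The diagonal two-level $H$, the unitary observable $O$ (a Pauli $X$, so $\alpha=1$ and $m=0$), and the state $\ket{\phi_0}$ are illustrative assumptions, not part of the algorithm:

```python
import cmath

# Toy check of the Hadamard-test identities E[X] = Re A and E[Y] = Im A,
# where A = <phi0| e^{-i H t2} O e^{-i H t1} |phi0>.
# Assumptions (illustrative only): H = diag(0.3, 1.1), O = Pauli-X
# (unitary, so alpha = 1 and no block-encoding ancillas are needed).

t1, t2 = 0.7, 1.3
lam = [0.3, 1.1]                      # eigenvalues of the toy H
phi0 = [0.6, 0.8]                     # normalized initial state

def evolve(v, t):
    """Apply e^{-iHt} to v for the diagonal toy H."""
    return [cmath.exp(-1j * lam[k] * t) * v[k] for k in range(2)]

def pauli_x(v):
    return [v[1], v[0]]

def inner(u, v):
    return sum(u[k].conjugate() * v[k] for k in range(2))

def norm_sq(v):
    return sum(abs(x) ** 2 for x in v)

Mphi = evolve(pauli_x(evolve(phi0, t1)), t2)   # e^{-iHt2} O e^{-iHt1}|phi0>
A = inner([complex(x) for x in phi0], Mphi)

# W = I branch: post-measurement (un-normalized) states (phi0 +/- M phi0)/2
pr0_I = norm_sq([(phi0[k] + Mphi[k]) / 2 for k in range(2)])
pr1_I = norm_sq([(phi0[k] - Mphi[k]) / 2 for k in range(2)])
EX = pr0_I - pr1_I

# W = S branch: (phi0 + i M phi0)/2 and (phi0 - i M phi0)/2
pr0_S = norm_sq([(phi0[k] + 1j * Mphi[k]) / 2 for k in range(2)])
pr1_S = norm_sq([(phi0[k] - 1j * Mphi[k]) / 2 for k in range(2)])
EY = pr1_S - pr0_S

assert abs(EX - A.real) < 1e-12 and abs(EY - A.imag) < 1e-12
```

For a unitary observable these identities hold exactly; for a general block-encoded $O$ the same computation applies after the $\alpha$-rescaling of the outcomes, as in the analysis above.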
Then, we have \begin{align*} &\ket{0}\ket{0^m}\ket{\phi_0}\\ \xrightarrow{G(a, b, \theta)}&~ a\ket{0}\ket{0^m}\ket{\phi_0} -e^{i\theta}\overline{b} \ket{1}\ket{0^m}\ket{\phi_0}\\ \xrightarrow{C\mhyphen e^{-iHt_1}}&~ a\ket{0}\ket{0^m}\ket{\phi_0} -e^{i\theta}\overline{b} \ket{1}(I\otimes e^{-iHt_1})\ket{0^m}\ket{\phi_0}\\ \xrightarrow{C\mhyphen U}&~ a\ket{0}\ket{0^m}\ket{\phi_0} -e^{i\theta}\overline{b} \ket{1}U(I\otimes e^{-iHt_1})\ket{0^m}\ket{\phi_0}\\ \xrightarrow{C\mhyphen e^{-iHt_2}}&~ a\ket{0}\ket{0^m}\ket{\phi_0} -e^{i\theta}\overline{b} \ket{1}(I\otimes e^{-iHt_2})U(I\otimes e^{-iHt_1})\ket{0^m}\ket{\phi_0}\\ \xrightarrow{G(p,q,\rho)}&~ a(p\ket{0} - e^{i\rho}\overline{q}\ket{1})\ket{0^m}\ket{\phi_0} -e^{i\theta}\overline{b} (q\ket{0} + e^{i\rho}\overline{p}\ket{1})(I\otimes e^{-iHt_2})U(I\otimes e^{-iHt_1})\ket{0^m}\ket{\phi_0}\\ =: &~ \ket{\phi_1}. \end{align*} Hence, the un-normalized remaining state after the measurement with outcome $(0, 0^m)$ is: \begin{align} (\bra{0}\bra{0^m}\otimes I)\ket{\phi_1} = ap \ket{\phi_0} - \frac{e^{i\theta}\overline{b}q}{\alpha} e^{-iHt_2}Oe^{-iHt_1}\ket{\phi_0}. \end{align} It implies that \begin{align} &\Pr[\text{the outcome is }(0, 0^m)|W=I]\\ = &~ \bra{\phi_0} \left(\overline{a}\overline{p} I - \frac{e^{-i\theta}b\overline{q}}{\alpha} e^{iHt_1}O^\dagger e^{iHt_2}\right) \left(ap I - \frac{e^{i\theta}\overline{b}q}{\alpha} e^{-iHt_2} O e^{-iHt_1}\right)\ket{\phi_0}\notag\\ = &~ |a|^2|p|^2 + \frac{|b|^2|q|^2}{\alpha^2}\bra{\phi_0}e^{iHt_1} O^\dagger O e^{-iHt_1}\ket{\phi_0}\notag\\ -& \frac{e^{i\theta}\overline{a}\overline{b}\overline{p}q}{\alpha} \bra{\phi_0}e^{-iHt_2} O e^{-iHt_1}\ket{\phi_0} - \frac{e^{-i\theta}abp\overline{q}}{\alpha} \overline{\bra{\phi_0}e^{-iHt_2} O e^{-iHt_1}\ket{\phi_0}}. 
\end{align} On the other hand, the un-normalized state for the outcome $(1, 0^m)$ is \begin{align} (\bra{1}\bra{0^m}\otimes I)\ket{\phi_1} = -e^{i\rho}a\overline{q} \ket{\phi_0} - \frac{e^{i(\theta+\rho)}\overline{b}\overline{p}}{\alpha} e^{-iHt_2}Oe^{-iHt_1}\ket{\phi_0}, \end{align} and the probability is \begin{align} &\Pr[\text{the outcome is }(1, 0^m)|W=I]\\ = &~ \bra{\phi_0} \left(-e^{-i\rho}\overline{a}q I - \frac{e^{-i(\theta+\rho)}bp}{\alpha} e^{iHt_1}O^\dagger e^{iHt_2}\right) \left(-e^{i\rho}a\overline{q} I - \frac{e^{i(\theta+\rho)}\overline{b}\overline{p}}{\alpha} e^{-iHt_2} O e^{-iHt_1}\right)\ket{\phi_0}\notag\\ = &~ |a|^2|q|^2 + \frac{|b|^2|p|^2}{\alpha^2}\bra{\phi_0}e^{iHt_1} O^\dagger O e^{-iHt_1}\ket{\phi_0}\notag\\ +&\frac{e^{i\theta}\overline{a}\overline{b}\overline{p}q}{\alpha}\bra{\phi_0}e^{-iHt_2} O e^{-iHt_1}\ket{\phi_0} + \frac{e^{-i\theta}abp\overline{q}}{\alpha}\overline{\bra{\phi_0}e^{-iHt_2} O e^{-iHt_1}\ket{\phi_0}}. \end{align} If we choose $|p|=|q|=\frac{1}{\sqrt{2}}$, then we have \begin{align} &\Pr[\text{the outcome is }(1, 0^m)|W=I] - \Pr[\text{the outcome is }(0, 0^m)|W=I]\notag\\ = &~ \Re \frac{4e^{i\theta} \overline{a}\overline{b}\overline{p}q}{\alpha}\bra{\phi_0}e^{-iHt_2} O e^{-iHt_1}\ket{\phi_0}. \end{align} Notice that to make the Hadamard test work, we need the coefficient $\frac{4e^{i\theta} \overline{a}\overline{b}\overline{p}q}{\alpha}$ to be either real or purely imaginary. \iffalse Moreover, to reduce the variance, we should choose the parameters $a, b, p, q,\theta$ to maximize the amplitude. However, \begin{align*} \left|e^{i\theta} \overline{a}\overline{b}\overline{p}q\right|= \frac{1}{2}|a||b|=\frac{1}{2}|a|\sqrt{1-|a|^2}\leq \frac{1}{4}. \end{align*} It means that if the first gate is the Hadamard gate, the variance is minimized by a rough estimate. \fi Now, we show how to choose the parameters to minimize the variance.
Without loss of generality, we may assume $a,b\in (0,1)$ with $a^2+b^2=1$ and use $p,q$ to cancel the phase factor, i.e., $e^{i\theta} \overline{a}\overline{b}\overline{p}q=\frac{1}{2}ab$. This gives: \begin{align} &\Pr[\text{the outcome is }(1, 0^m)|W=I] - \Pr[\text{the outcome is }(0, 0^m)|W=I]\notag\\ =&~ \frac{2ab}{\alpha}\Re \bra{\phi_0}e^{-iHt_2} O e^{-iHt_1}\ket{\phi_0}, \end{align} and \begin{align} &\Pr[\text{the outcome is }(1, 0^m)|W=I] + \Pr[\text{the outcome is }(0, 0^m)|W=I]\notag\\ = &~ a^2 + \frac{b^2}{\alpha^2}\bra{\phi_0}e^{iHt_1}O^\dagger Oe^{-iHt_1}\ket{\phi_0}. \end{align} Now, define the random variable as follows: \begin{align} X:=\begin{cases} \frac{\alpha}{2ab} & \text{if the outcome is }(1, 0^m),\\ -\frac{\alpha}{2ab} & \text{if the outcome is }(0, 0^m),\\ 0 & \text{otherwise}. \end{cases} \end{align} Then, we have \begin{align} \E[X]=\Re \bra{\phi_0}e^{-iHt_2} O e^{-iHt_1}\ket{\phi_0}, \end{align} and \begin{align} \Var[X]=&~ \E[X^2]-\E[X]^2\notag\\ =&~ \frac{\alpha^2}{4a^2b^2}\left(a^2 + \frac{b^2}{\alpha^2}\bra{\phi_0}e^{iHt_1}O^\dagger Oe^{-iHt_1}\ket{\phi_0} \right) - \left(\Re \bra{\phi_0}e^{-iHt_2} O e^{-iHt_1}\ket{\phi_0}\right)^2. \end{align} The second term does not depend on the parameters. For the first term, we have \begin{align} \frac{\alpha^2}{4a^2b^2}\left(a^2 + \frac{b^2}{\alpha^2}\bra{\phi_0}e^{iHt_1}O^\dagger Oe^{-iHt_1}\ket{\phi_0} \right) = &~ \frac{\alpha^2}{4b^2} + \frac{1}{4a^2}\bra{\phi_0}e^{iHt_1}O^\dagger Oe^{-iHt_1}\ket{\phi_0}\\ = &~ \frac{\alpha^2}{4(1-a^2)} + \frac{\|Oe^{-iHt_1}\ket{\phi_0}\| ^2}{4a^2}\notag\\ \geq &~ \frac{1}{4}(\alpha + \|Oe^{-iHt_1}\ket{\phi_0}\|)^2, \end{align} where the minimum is attained at $a:=\sqrt{\frac{\|Oe^{-iHt_1}\ket{\phi_0}\|}{\alpha + \|Oe^{-iHt_1}\ket{\phi_0}\|}}$.
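The minimization above, as well as the comparison between the Hadamard choice $a=1/\sqrt{2}$ and the $\alpha$-dependent choice $a=1/\sqrt{\alpha+1}$, can be checked numerically. In the sketch below, $f(a)=\frac{\alpha^2}{4(1-a^2)}+\frac{r^2}{4a^2}$ denotes the parameter-dependent part of $\Var[X]$, with $r=\|Oe^{-iHt_1}\ket{\phi_0}\|$; the values of $\alpha$ and $r$ are illustrative:

```python
import math

# Numeric check of the variance analysis: the a-dependent part of Var[X] is
# f(a) = alpha^2 / (4 (1 - a^2)) + r^2 / (4 a^2),
# where r = ||O e^{-iHt1}|phi0>||.  We verify the lower bound
# f(a) >= (alpha + r)^2 / 4, that the stated minimizer attains it, and that
# a = 1/sqrt(alpha+1) never does worse than the Hadamard choice a = 1/sqrt(2)
# whenever alpha >= 1 and r <= 1.  (alpha, r values below are illustrative.)

def f(a, alpha, r):
    return alpha**2 / (4 * (1 - a**2)) + r**2 / (4 * a**2)

for alpha in [1.0, 2.0, 5.0, 20.0]:
    for r in [0.1, 0.5, 1.0]:
        lower = (alpha + r) ** 2 / 4
        a_opt = math.sqrt(r / (alpha + r))            # stated minimizer
        assert f(a_opt, alpha, r) <= lower + 1e-9     # bound is attained
        for a in [0.2, 0.5, 0.9]:
            assert f(a, alpha, r) >= lower - 1e-9     # bound holds everywhere
        # the alpha-dependent choice never does worse than the Hadamard gate
        assert f(1 / math.sqrt(alpha + 1), alpha, r) \
               <= f(1 / math.sqrt(2), alpha, r) + 1e-9
```

The last assertion mirrors the algebraic identity $f(1/\sqrt{2}) - f(1/\sqrt{\alpha+1}) = \frac{1}{4}(\alpha-1)(\alpha-r^2) \geq 0$ for $\alpha \geq 1$ and $r \leq 1$.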
However, since we do not know the value of $\|Oe^{-iHt_1}\ket{\phi_0}\|$, there are two approaches to resolve this issue: (1) use another quantum circuit to estimate $\|Oe^{-iHt_1}\ket{\phi_0}\|$ and then set the parameters; (2) simply take $a:=\sqrt{\frac{1}{\alpha+1}}$. Notice that when the first gate is the Hadamard gate, i.e., $a=\frac{1}{\sqrt{2}}$, we have \begin{align} \Var\left[X~\Big|~a=\frac{1}{\sqrt{2}}\right] = \frac{1}{2}(\alpha^2 + \|Oe^{-iHt_1}\ket{\phi_0}\|^2). \end{align} When $a=\sqrt{\frac{1}{\alpha+1}}$, we have \begin{align} \Var\left[X~\Big|~a=\frac{1}{\sqrt{\alpha+1}}\right]=&~ \frac{1}{4}\alpha(\alpha+1) + \frac{1}{4}\|Oe^{-iHt_1}\ket{\phi_0}\|^2(\alpha+1)\\ = &~ \frac{1}{2}(\alpha^2 + \|Oe^{-iHt_1}\ket{\phi_0}\|^2) -\frac{1}{4}(\alpha-1)(\alpha- \|Oe^{-iHt_1}\ket{\phi_0}\|^2)\notag\\ \leq &~ \Var\left[X~\Big|~a=\frac{1}{\sqrt{2}}\right], \end{align} where the last step follows from $\alpha\geq 1$ and $\|Oe^{-iHt_1}\ket{\phi_0}\|^2\leq 1$. Therefore, we can reduce the estimator's variance by choosing $a=\sqrt{\frac{1}{\alpha+1}}$. Moreover, if $\alpha$ is large, the new variance is about half of the variance obtained with the Hadamard gate. A similar strategy can also be used to reduce the variance of the random variable $Y$. \section{Introduction} One of the primary applications of quantum computing is the simulation of materials and molecules, which are inherently quantum mechanical. It is hoped that future powerful quantum computers will be used in the development of new materials and in drug discovery \cite{cao2018potential}. Although they have yet to realize commercial application, quantum computers have been improving at a rapid rate, increasing the demand for quantum algorithms with high-impact use cases. To date, the main focus of quantum algorithm development for quantum chemistry and materials has been on ground state energy estimation \cite{cao2019quantum}.
This problem is mathematically formulated as estimating the lowest eigenvalue of the Hamiltonian matrix that characterizes the physical system. One of the first quantum chemistry applications of quantum computers was to use quantum phase estimation for estimating the ground state energy of small molecules \cite{aspuru2005simulated}. More recently, the variational quantum eigensolver (VQE) algorithm \cite{peruzzo2014variational} was developed to use noisy intermediate-scale quantum (NISQ) computers to solve the ground state energy estimation problem. However, in characterizing materials or analyzing small molecules for drug discovery, one often needs to estimate properties of the ground state beyond just the energy. These include transport properties \cite{meir1992landauer}, electric dipole moments \cite{jensen2017introduction}, and molecular forces \cite{o2019calculating}. Such properties depend on expectation values of observables $O$ with respect to the ground state of a Hamiltonian $H$. The problem of estimating such quantities was studied in \cite{amb14, gy19, gpy20}, which showed that in general it is, in a complexity-theoretic sense, even harder than the ground state energy estimation problem. A straightforward approach to estimating ground state properties is to first (approximately) prepare the ground state, from which properties can then be estimated. Many algorithms (e.g. \cite{pw09,gtc19,lt20}) have been developed for ground state preparation. However, these algorithms only work on idealized quantum computers, and the quantum circuit depths involved in these methods are too deep to be implemented even on early fault-tolerant quantum computers. Another approach to preparing ground states that is more amenable to near-term quantum computers is to use the variational quantum eigensolver algorithm \cite{mcardle2019digital, o2019calculating}.
However, recent work has suggested that VQE alone is not practical for solving problems of industrial relevance \cite{gonthier2020identifying}; estimation methods that are more efficient (e.g. \cite{wang2021minimizing}) than the prepare-and-measure estimation used in VQE seem necessary for quantum computers to compete with state-of-the-art methods in quantum chemistry and materials. Further issues with the variational quantum eigensolver and its variants are that there are no guarantees on the quality of the output ground state and that heuristic optimization methods struggle to prepare high-fidelity ground states. This motivates the development of quantum algorithms for ground state property estimation (GSPE) that are both reliable \emph{and} able to be run on near-term quantum computers (e.g. early fault-tolerant quantum devices) with the following characteristics: (1) the circuit depth (or the maximal Hamiltonian evolution time) is small, even at the price of increasing the total circuit size (or total evolution time); (2) the number of logical qubits is limited. The early fault-tolerant model captures the challenges of building a large-scale, long-time-coherent quantum device, while still being able to solve many important problems with provable performance guarantees \cite{bmn21,bom21,cam21,lt21,lay22}. The central question that this paper addresses is then: \begin{center} \emph{Is it possible to estimate ground state properties of a Hamiltonian reliably using early fault-tolerant quantum computers?} \end{center} In this paper, we provide an affirmative answer to this question. Furthermore, we propose an algorithm for ground state property estimation using low-depth quantum circuits. The main theorem is stated as follows: \begin{theorem}[Main theorem, informal]\label{thm:intro_app_sim} Given a Hamiltonian $H$ and an observable $O$.
Suppose we have access to a unitary $U_I$ that prepares a state $\ket{\phi_0}$ that has non-trivial overlap with the ground state $\ket{\psi_0}$ of $H$. Then, there exists an algorithm to estimate $\bra{\psi_0}O\ket{\psi_0}$ with high accuracy and low depth: the maximal Hamiltonian evolution time is $\widetilde{O}(\gamma^{-1})$, where $\gamma$ is the spectral gap of $H$. \end{theorem} We make a few remarks about our main result. First, we note that the maximal evolution time, which is the maximal length of time we need to perform coherent time evolution, roughly determines the depth of the quantum circuit. Our result achieves a nearly-linear dependence on $\gamma^{-1}$ and depends only poly-logarithmically on the inverse accuracy $\epsilon^{-1}$, which improves on the $\widetilde{O}(\epsilon^{-1})$ maximal evolution time of the ground state energy estimation algorithms \cite{som19,lt21,cbk21,ral21}. Second, our result does not violate the Heisenberg limit because the total evolution time still depends on $\poly(\epsilon^{-1})$. Third, similar to almost all prior works on ground state preparation and energy estimation (e.g. \cite{som19,lt20,lt21}), we need the assumption that the initial state has some \emph{nontrivial overlap} with the ground state, as otherwise the problem becomes computationally intractable. Last, we treat the Hamiltonian as a black box, which is a common model in this field. To implement our algorithm for sparse local Hamiltonians, we can use the current state-of-the-art Hamiltonian simulation methods \cite{bcc15,lc17,cmn18,lc19}, whose gate complexity depends linearly on the evolution time and logarithmically on the accuracy. \paragraph{Comparison to the straightforward method.} We can compare our algorithm with the straightforward approach to GSPE that first prepares the ground state and then applies quantum phase estimation (QPE) to estimate the ground state property.
\begin{itemize} \item In the first step, to achieve an $\epsilon$-accurate estimation, the ground state needs to be prepared with fidelity at least $1-\epsilon$ using the methods in \cite{gtc19,lt20}, which have circuit depth $\widetilde{O}(\gamma^{-1}\eta^{-1})$, where $\eta$ is the overlap between the initial state and the ground state. \item In the second step, QPE \cite{kos07,ral21} requires circuit depth $\widetilde{O}(\epsilon^{-1})$ for an $\epsilon$-accurate estimation of the ground state property. \end{itemize} Therefore, this straightforward approach has circuit depth $\widetilde{O}(\gamma^{-1}\eta^{-1}+\epsilon^{-1})$, while our algorithm has circuit depth $\widetilde{O}(\gamma^{-1})$. Furthermore, the straightforward approach also needs many (i.e., $\omega(1)$) additional ancilla qubits for preparing the ground state, while we use only one ancilla qubit. Our algorithm thus has a great advantage when the Hamiltonian's spectral gap is much larger than the estimation accuracy, making it easier to implement on early fault-tolerant devices. \paragraph{Organization.} In Section \ref{sec:gspe} we formally state the problem of ground state property estimation. In Section \ref{sec:gsee} we review the method developed in \cite{lt21} for estimating ground state energies. In the next three sections we explain our main algorithms and analyze their performance, starting from the simplest case and building to the most involved, general case. Section \ref{sec:commutative_alg} presents the case of a unitary observable that commutes with the Hamiltonian. Section \ref{sec:unitary_alg} presents the case of a unitary observable that does not necessarily commute with the Hamiltonian. Section \ref{sec:general_alg} describes the case of a general observable. Then, Section \ref{sec:apps} gives two applications of the ground state property estimation algorithm. Section~\ref{sec:discuss} gives a discussion of the results and presents some open questions.
\section{Ground State Energy Estimation}\label{sec:lt21_details} In this section, we review the techniques in \cite{lt21}, which proposed a hybrid quantum/classical algorithm for estimating the ground state energy of a Hamiltonian. Compared with the algorithms in previous works, the algorithm in \cite{lt21} uses fewer quantum resources and does not need access to a block-encoding of the Hamiltonian. First of all, they assumed that the given initial state $\ket{\phi_0}$\footnote{In \cite{lt21}, they allowed the initial state to be a mixed state. For simplicity, we still denote it as $\ket{\phi_0}$.} has a nontrivial overlap with the ground state of $H$. \subsection{Quantum part of the algorithm}\label{sec:quantum_hadamard} Fix $j\in \mathbb{Z}$. Suppose we want to estimate $\Re(\bra{\phi_0}e^{-ij\tau H}\ket{\phi_0})$. Then, we set $W=I$ and define a random variable $X_j$ as follows: \begin{align*} X_j:=\begin{cases}1 & \text{if the outcome is}~0\\ -1 & \text{if the outcome is}~1 \end{cases}. \end{align*} Since the state before the measurement is \begin{align} \frac{1}{2} (\ket{0}\otimes (I + e^{-ij\tau H})\ket{\phi_0} + \ket{1}\otimes (I - e^{-ij\tau H})\ket{\phi_0}), \end{align} we have \begin{align}\label{eq:expect_real_part} \E[X_j]=&~ \Pr[\text{the outcome is}~0]-\Pr[\text{the outcome is}~1]\notag\\ = &~ \frac{1}{4}\bra{\phi_0} (I + e^{ij\tau H})(I + e^{-ij\tau H}) \ket{\phi_0} - \frac{1}{4}\bra{\phi_0} (I - e^{ij\tau H})(I - e^{-ij\tau H}) \ket{\phi_0}\notag\\ = &~ \frac{1}{2}\bra{\phi_0} (e^{ij\tau H} + e^{-ij\tau H})\ket{\phi_0}\notag\\ = &~ \Re (\bra{\phi_0}e^{-ij\tau H}\ket{\phi_0}). \end{align} For the imaginary part $\Im(\bra{\phi_0}e^{-ij\tau H}\ket{\phi_0})$, we can set $W$ to be the phase gate $\begin{bmatrix} 1 & 0\\0 & -i \end{bmatrix}$ and define the random variable $Y_j$ similarly. Then, we have \begin{align}\label{eq:expect_img_part} \E[Y_j] = \Im (\bra{\phi_0}e^{-ij\tau H}\ket{\phi_0}).
\end{align} Therefore, Eqs.~\eqref{eq:expect_real_part} and \eqref{eq:expect_img_part} imply the following claim: \begin{claim}[Estimator of the Hamiltonian expectation]\label{clm:estimator_expectation} For any $j\in \Z$, the random variable $X_j + i Y_j$ is an un-biased estimator for $\bra{\phi_0}e^{-ij\tau H}\ket{\phi_0}$. \end{claim} \subsection{Classical part of the algorithm} Let $\tau$ be a normalization factor such that $\|\tau H\|\leq \pi/3$. Suppose the initial state $\ket{\phi_0}$ can be decomposed in the eigenbasis of $H$ as $\ket{\phi_0} = \sum_{k} \sqrt{p_k} \ket{\psi_k}$. Let $p(x)$ be the following density function (spectral measure): \begin{align} p(x) := \sum_k p_k \delta(x - \tau \lambda_k)~~~\forall x\in [-\pi, \pi]. \end{align} That is, $p(x)$ is the distribution of the state energy with respect to $\tau H$ after we measure $\ket{\phi_0}$ in the eigenbasis of $H$. Define the $2\pi$-periodic Heaviside function by \begin{align}\label{eq:def_heaviside} H(x) = \begin{cases} 1 & x\in [2k\pi, (2k+1)\pi)\\ 0 & x \in [(2k-1)\pi, 2k\pi) \end{cases}~~~\forall k\in \mathbb{Z}. \end{align} Then, we define the $2\pi$-periodic CDF of $p$ as the convolution of $H$ and $p$: \begin{align} C(x) := (H * p)(x). \end{align} For any $x\in [-\pi/3, \pi/3]$ and any $w\in \mathbb{Z}$, we have \begin{align} C(x + 2w\pi) = &~ \int_{-\pi}^\pi H(x+2w\pi - t) p(t) \d t\\ = &~ \sum_{k} p_k\cdot \int_{-\pi}^\pi H(x+2w\pi - t) \delta(t - \tau \lambda_k)\d t\notag\\ = &~ \sum_{k} p_k\cdot H(x + 2w \pi - \tau \lambda_k)\notag\\ = &~ \sum_{k} p_k \cdot \mathbf{1}_{x \geq \tau \lambda_k}\notag\\ = &~ \sum_{k: \tau\lambda_k \leq x} p_k, \end{align} where the first step follows from the definition of convolution, the second step follows from the definition of $p$, the third step follows from the sifting property of the Dirac delta function, and the fourth step follows from the $2\pi$-periodicity of $H$ together with $|x - \tau\lambda_k| < \pi$. We note that $C(x)$ is right-continuous and non-decreasing on $[-\pi/3, \pi/3]$. We cannot evaluate $C(x)$ directly, but we can approximate it.
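The collapse of the periodic convolution to a cumulative sum can be verified directly on a toy spectral measure; the eigenvalues and weights in the sketch below are illustrative:

```python
import math

# Toy check that the 2*pi-periodic CDF C(x) = (H * p)(x) of a discrete
# spectral measure reduces, for x in [-pi/3, pi/3], to the cumulative sum
# sum_{k : tau*lambda_k <= x} p_k.  The spectrum and weights are illustrative.

tau_lam = [-0.8, -0.2, 0.4, 0.9]          # tau * lambda_k, inside [-pi/3, pi/3]-ish
p = [0.4, 0.3, 0.2, 0.1]                  # overlaps p_k, summing to 1

def heaviside(x):
    """2*pi-periodic Heaviside: 1 on [2k*pi, (2k+1)*pi), else 0."""
    return 1.0 if (x % (2 * math.pi)) < math.pi else 0.0

def C(x):
    # convolution with a sum of Dirac deltas collapses to a weighted sum
    return sum(p[k] * heaviside(x - tau_lam[k]) for k in range(len(p)))

for x in [-1.0, -0.5, 0.0, 0.3, 0.6, 1.0]:
    direct = sum(p[k] for k in range(len(p)) if tau_lam[k] <= x)
    assert abs(C(x) - direct) < 1e-12
```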
Define the approximate CDF (ACDF) as \begin{align}\label{eq:def_acdf} \widetilde{C}(x) := (F * p) (x), \end{align} where $F(x) = \sum_{|j|\leq d} \hat{F}_j e^{ijx}$ is a low Fourier-degree approximation of the Heaviside function $H(x)$ such that \begin{align} |F(x) - H(x)|\leq \epsilon~~~\forall x\in [-\pi+\delta, -\delta]\cup [\delta, \pi - \delta]. \end{align} The construction of $F$ is given by Lemma~\ref{lem:approx_Heaviside}. Furthermore, the approximation error of $\widetilde{C}(x)$ is bounded by \begin{align} C(x-\delta) - \epsilon \leq \widetilde{C}(x) \leq C(x + \delta) + \epsilon, \end{align} for any $x\in [-\pi/3, \pi/3]$, $\delta \in (0, \pi/6)$ and $\epsilon>0$. \subsubsection{Estimating the ACDF} The goal of this section is to prove Lemma~\ref{lem:est_acdf}, which constructs an estimator for $\widetilde{C}(x)$ (defined by Eq.~\eqref{eq:def_acdf}). \begin{lemma}[Estimating the ACDF]\label{lem:est_acdf} For any $\sigma>0$, for any $x\in [-\pi, \pi]$, there exists an un-biased estimator $\overline{G}(x)$ for the ACDF $\widetilde{C}(x)$ with variance at most $\sigma^2$. Furthermore, $\overline{G}(x)$ runs the quantum circuit (Figure~\ref{fig:hadamard_test}) $O(\frac{\log^2 d}{\sigma^2})$ times with expected total evolution time $O(\frac{\tau d\log d}{\sigma^2})$. 
\end{lemma} \begin{proof} $\widetilde{C}(x)$ can be expanded in the following way: \begin{align} \widetilde{C}(x) = &~ (F * p)(x)\\ = &~ \int_{-\pi}^\pi F(x-y) p(y) \d y\notag\\ = &~ \sum_{|j|\leq d} \int_{-\pi}^\pi \hat{F}_j e^{ij(x-y)} p(y)\d y\notag\\ = &~ \sum_{|j|\leq d} \hat{F}_j e^{ijx} \int_{-\pi}^\pi p(y) e^{-ijy}\d y\notag\\ = &~ \sum_{|j|\leq d} \hat{F}_j e^{ijx} \sum_k p_k e^{-ij\tau \lambda_k}\notag\\ = &~ \sum_{|j|\leq d} \hat{F}_j e^{ijx} \cdot \bra{\phi_0} e^{-ij\tau H} \ket{\phi_0}, \end{align} where the third step follows from the Fourier expansion of $F(x-y)$, the fifth step follows from the property of Dirac's delta function, and the last step follows from the definition of $p_k$ and the eigenvalues of matrix exponential. To estimate $\bra{\phi_0} e^{-ij\tau H} \ket{\phi_0}$, we use the multi-level Monte Carlo method. Define a random variable $J$ with support $\{-d, \cdots, d\}$ such that \begin{align}\label{eq:def_J} \Pr[J=j]=\left|\hat{F}_j\right|/{\cal F}, \end{align} where ${\cal F}:=\sum_{|j|\leq d}|\hat{F}_j|$. Then, let $Z:=X_J + i Y_J\in \{\pm 1\pm i\}$. Define an estimator $G(x; J, Z)$ as follows: \begin{align*} G(x; J, Z):={\cal F}\cdot Z e^{i(\theta_J + Jx)}, \end{align*} where $\theta_j$ is defined by $\hat{F}_j = |\hat{F}_j|e^{i\theta_j}$. Then, we show that $G(x; J, Z)$ is un-biased: \begin{align*} \E[G(x; J, Z)] = &~ \sum_{|j|\leq d} \E\left[(X_j + iY_j)e^{i(\theta_j + jx)}|\hat{F}_j|\right]\\ = &~ \sum_{|j|\leq d} \hat{F}_j e^{ijx} \cdot \E\left[X_j + iY_j\right]\\ = &~ \sum_{|j|\leq d} \hat{F}_j e^{ijx} \cdot \bra{\phi_0} e^{-ij\tau H} \ket{\phi_0}\\ = &~ \widetilde{C}(x), \end{align*} where the third step follows from Claim~\ref{clm:estimator_expectation}. 
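The unbiasedness computation above can be mirrored numerically by averaging $G(x; J, Z)$ exactly over the sampling distribution of $J$, replacing $\E[Z \mid J=j]$ with its exact value $\bra{\phi_0}e^{-ij\tau H}\ket{\phi_0}$ per Claim~\ref{clm:estimator_expectation}. The Fourier coefficients and spectral measure in the sketch below are illustrative toy values, not the $\hat{F}_j$ of Lemma~\ref{lem:approx_Heaviside}:

```python
import cmath

# Exact-expectation check (with toy coefficients) that the importance-sampled
# estimator G(x; J, Z) = calF * Z * e^{i(theta_J + J x)} is unbiased:
# summing E[G | J=j] * Pr[J=j] over j recovers
# sum_j Fhat_j e^{ijx} <phi0| e^{-ij tau H} |phi0>.

d = 3
Fhat = {j: (0.5 / max(abs(j), 1)) * cmath.exp(0.3j * j) for j in range(-d, d + 1)}
tau_lam, p = [-0.6, 0.2, 0.8], [0.5, 0.3, 0.2]     # toy spectral measure

def z(j):
    # <phi0| e^{-ij tau H} |phi0> = sum_k p_k e^{-ij tau lambda_k}
    return sum(p[k] * cmath.exp(-1j * j * tau_lam[k]) for k in range(3))

calF = sum(abs(Fhat[j]) for j in Fhat)
x = 0.4

# E[G] computed exactly over the sampling distribution Pr[J=j] = |Fhat_j|/calF,
# substituting the exact value z(j) for E[Z | J=j]
EG = 0
for j in Fhat:
    theta_j = cmath.phase(Fhat[j])
    EG += (abs(Fhat[j]) / calF) * calF * z(j) * cmath.exp(1j * (theta_j + j * x))

target = sum(Fhat[j] * cmath.exp(1j * j * x) * z(j) for j in Fhat)
assert abs(EG - target) < 1e-12
```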
Moreover, the variance of $G$ can be upper-bounded by: \begin{align*} \mathrm{Var}[G(x; J, Z)]= &~ \E[|G(x; J, Z)|^2] - |\E[G(x; J, Z)]|^2\\ \leq &~ \E[|G(x; J, Z)|^2]\\ = &~ {\cal F}^2 \cdot \E[|X_J + i Y_J|^2]\\ = &~ 2{\cal F}^2, \end{align*} where the third step follows from $|e^{i(\theta_J+Jx)}|=1$, and the last step follows from $X_j, Y_j\in \{\pm 1\}$. Hence, we can take $N_s:=\frac{2{\cal F}^2}{\sigma^2}$ independent samples of $(J, Z)$, denoted by $\{(J_k, Z_k)\}_{k\in [N_s]}$ and compute \begin{align*} \overline{G}(x) := \frac{1}{N_s}\sum_{k=1}^{N_s} G(x; J_k, Z_k). \end{align*} Then, we have \begin{align*} \E[\overline{G}(x)] = \widetilde{C}(x),~~\text{and}~~\mathrm{Var}[\overline{G}(x)]\leq \sigma^2. \end{align*} The expected total evolution time is \begin{align*} {\cal T}_{\mathsf{tot}} := N_s \tau \E[|J|]= \frac{2{\cal F}^2}{\sigma^2} \tau \sum_{|j|\leq d}|j|\cdot \frac{|\hat{F}_j| }{{\cal F}}= \frac{2{\cal F}\tau}{\sigma^2}\sum_{|j|\leq d}|j||\hat{F}_j|. \end{align*} By Lemma~\ref{lem:approx_Heaviside}, we know that $|\hat{F}_j|=O(1/|j|)$. Hence, we have ${\cal F} = \sum_{|j|\leq d}O(1/|j|) = O(\log d)$. Thus, the number of samples is \begin{align*} N_s = O\left(\frac{\log^2 d}{\sigma^2}\right). \end{align*} And the expected total evolution time is \begin{align*} {\cal T}_{\mathsf{tot}} = O\left(\frac{\tau d\log d}{\sigma^2}\right). \end{align*} The lemma is then proved. \end{proof} \iffalse \begin{remark} Suppose we directly estimate each term in \begin{align*} \widetilde{C}(x) = \sum_{|j|\leq d} \hat{F}_j e^{ijx} \cdot \bra{\phi_0} e^{-ij\tau H} \ket{\phi_0}. \end{align*} Let $\overline{Z_j}$ be the estimator of $\bra{\phi_0} e^{-ij\tau H} \ket{\phi_0}$. We want $\overline{Z_j}$ to satisfy: \begin{itemize} \item $\E[\overline{Z_j}]=\bra{\phi_0} e^{-ij\tau H} \ket{\phi_0}$, \item $\mathrm{Var}[\overline{Z_j}]=\sigma_j^2$ such that $\sum_{|j|\leq d}|\widehat{F}_j|^2 \sigma_j^2\leq \sigma^2$. 
\end{itemize} Let $\{X_j^k, Y_j^k\}_{k\in [N_j]}$ denote $N_j$ independent samples from the quantum circuit (Figure~\ref{fig:hadamard_test}) with parameter $j$. Then, define \begin{align*} \overline{Z_j}:=\frac{1}{N_j}\sum_{k=1}^{N_j} X_j^k + iY_j^k. \end{align*} \begin{align*} \mathrm{Var}[X_j+iY_j] =&~ \E[|X_j+iY_j|^2] - |\E[X_j+iY_j]|^2\\ = &~ 2 - |\bra{\phi_0}e^{-ij\tau H}\ket{\phi_0}|^2 \end{align*} We have \begin{align*} \mathrm{Var}[\overline{Z_j}]\leq \frac{2}{N_j}. \end{align*} Thus, we need \begin{align} \sum_{|j|\leq d,j\ne 0}\frac{2}{N_j}|\widehat{F}_j|^2\leq \sigma^2. \end{align} One choice for $N_j$ is \begin{align*} N_j : = \max\left\{\frac{4d}{\sigma^2}|\widehat{F}_j|^2, 1\right\}. \end{align*} Then, we may bound the expected total evolution time by \begin{align*} \sum_{|j|\leq d} N_j \cdot \tau |j|=&~ \tau \sum_{|j|\leq d}\frac{4d}{\sigma^2}|\widehat{F}_j|^2 |j|\\ \leq &~ \frac{\tau d}{\sigma^2}\sum_{|j|\leq d}\Theta(|j|^{-2}) |j|\\ = &~ \Theta\left(\frac{\tau d \log(d)}{\sigma^2}\right), \end{align*} which is the same as using the multilevel Monte Carlo. However, the above estimate is not correct because for $|j|\geq 2\sqrt{d}/\sigma$, $4d\sigma^{-2}|\widehat{F}_j|^2\leq 1$. Hence, the correct estimation is \begin{align*} \sum_{|j|\leq d} N_j \cdot \tau |j| = &~ \tau \sum_{|j|\leq 2\sqrt{d}/\sigma}\frac{4d}{\sigma^2}|\widehat{F}_j|^2 |j| + \tau \sum_{2\sqrt{d}/\sigma \leq |j|\leq d} 1\cdot |j|\\ = &~ \Theta\left(\frac{\tau d \log(d)}{\sigma^2}+ \tau d^2 \right). \end{align*} Therefore, we could lose a $d^2$ factor if we directly estimate each term. \end{remark} \fi \subsubsection{Inverting the CDF} We first define the CDF inversion problem: \begin{definition}[The CDF inversion problem]\label{def:inv_cdf} For $0 < \delta < \pi/6$, $0 < \eta < 1$, find $x^\star\in (-\pi/3, \pi/3)$ such that \begin{align*} C(x^\star + \delta) > \eta /2, \quad C(x^\star-\delta) < \eta. 
\end{align*} \end{definition} \begin{remark} The condition in Definition~\ref{def:inv_cdf} is weaker than $\eta/2<C(x)<\eta$ due to the discontinuity of $C(x)$. For any CDF $C(x)$, such an $x^\star$ must exist: let $a := \sup~\{x\in (-\pi/3, \pi/3): C(x) \leq \eta/2\}$ and $b:=\inf~\{x\in (-\pi/3, \pi/3): C(x) \geq \eta\}$. Since $C(x)$ is non-decreasing, we have $a\leq b$. And any $x\in (a-\delta, b+\delta)$ satisfies the condition in Definition~\ref{def:inv_cdf}. \end{remark} Then, we give an algorithm that solves the CDF inversion problem. \begin{lemma}[Inverting the CDF, Theorem 2 in \cite{lt21}]\label{lem:invert_cdf} There exists an algorithm that solves the CDF inversion problem (Definition~\ref{def:inv_cdf}) with probability at least $1-\nu$ such that: \begin{enumerate} \item the number of independent samples of $(J,Z)$ is \begin{align*} O\left(\eta^{-2}\cdot (\log(\nu^{-1})+\log\log(\delta^{-1}))\cdot (\log(\delta^{-1})+\log \log (\delta^{-1}\eta^{-1}))^2\right) \end{align*} \item the expected total evolution time is \begin{align*} O\left(\tau \eta^{-2}\cdot \delta^{-1}\log(\delta^{-1}\eta^{-1})\cdot (\log(\delta^{-1})+\log\log(\delta^{-1}\eta^{-1}))\cdot (\log(\nu^{-1})+\log\log(\delta^{-1})) \right) \end{align*} \item the maximal evolution time is \begin{align*} O\left(\tau \delta^{-1}\log(\delta^{-1}\eta^{-1})\right) \end{align*} \item the classical running time is \begin{align*} O\left(\eta^{-2}\log(\delta^{-1})\cdot (\log(\nu^{-1})+\log\log(\delta^{-1}))\cdot (\log(\delta^{-1})+\log \log (\delta^{-1}\eta^{-1}))^2\right). \end{align*} \end{enumerate} \end{lemma} \begin{proof} For any $x\in [-\pi/3, \pi/3]$, at least one of the following conditions will hold: \begin{align}\label{eq:cond_dec_cdf_inv} C(x + \delta) > \eta /2, ~~\text{or}~~C(x-\delta) < \eta. \end{align} Suppose we have a sub-routine $\textsc{Certify}(x, \delta, \eta, \{J_k, Z_k\})$ such that if $C(x + \delta) > \eta /2$, it returns 0; otherwise, it returns 1. 
Then, we can solve the CDF inversion problem via binary search (Algorithm~\ref{alg:cdf_inv}). \begin{algorithm}[t] \caption{Inverting the CDF} \label{alg:cdf_inv} \begin{algorithmic}[1] \algrenewcommand\algorithmicprocedure{\textbf{procedure}} \Procedure{InvertCDF}{$\eta, \delta, \{J_k, Z_k\}$} \State $x_{L}\gets -\pi/3$, $x_{R}\gets \pi/3$ \While{$x_R - x_L > 2\delta$}\label{ln:binary_while} \State $x_M\gets (x_L + x_R)/2$ \State $u\gets \textsc{Certify}(x_M, (2/3)\delta, \eta, \{J_k, Z_k\})$ \If{$u=0$} \State $x_R\gets x_M + (2/3)\delta$ \Else \State $x_L\gets x_M - (2/3)\delta$ \EndIf \EndWhile \State \Return $(x_L + x_R)/2$ \EndProcedure \end{algorithmic} \end{algorithm} In Line~\ref{ln:binary_while}, $x_L$ and $x_R$ always satisfy the following invariant: \begin{align*} C(x_L)<\eta, \quad C(x_R)>\eta/2, \end{align*} which is guaranteed by $\textsc{Certify}(x_M, (2/3)\delta, \eta, \{J_k, Z_k\})$. When the while-loop ends, we have $x_R-x_L\leq 2\delta$. Let $x^\star := (x_L + x_R)/2$ be the output of Algorithm~\ref{alg:cdf_inv}. Then, we get that \begin{align*} C(x^\star +\delta) \geq &~ C(x_R) > \eta/2,\\ C(x^\star -\delta) \leq &~ C(x_L) < \eta. \end{align*} It is easy to see that Algorithm~\ref{alg:cdf_inv} calls $\textsc{Certify}$ $L:=O(\log(1/\delta))$ times. Then, by Lemma~\ref{lem:certify_two_cases} and the union bound, Algorithm~\ref{alg:cdf_inv} is correct with probability at least $1-\nu$. We note that different runs of \textsc{Certify} can share the same set of samples $\{J_k,Z_k\}$, which does not affect the union bound. Hence, the number of samples and the total evolution time follow directly from Lemma~\ref{lem:certify_two_cases} and $d=O(\delta^{-1}\log(\delta^{-1}\eta^{-1}))$.
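A pure-Python sketch of \textsc{InvertCDF} with an idealized \textsc{Certify} (exact access to $C$, in place of the statistical estimator) illustrates the loop invariant and the guarantee on the output; the toy CDF below is an assumption of this example:

```python
import math

# Sketch of the binary search with an idealized Certify oracle that has exact
# access to C.  The toy spectral measure is illustrative.  The loop maintains
# the invariant C(x_L) < eta and C(x_R) > eta/2.

tau_lam, p = [-0.7, 0.1, 0.5], [0.3, 0.4, 0.3]
eta, delta = 0.25, 0.01

def C(x):
    return sum(p[k] for k in range(3) if tau_lam[k] <= x)

def certify(x, d):
    # returns 0 if C(x + d) > eta/2 (move the right endpoint),
    # else 1 (then necessarily C(x - d) <= C(x + d) <= eta/2 < eta)
    return 0 if C(x + d) > eta / 2 else 1

xL, xR = -math.pi / 3, math.pi / 3
while xR - xL > 2 * delta:
    xM = (xL + xR) / 2
    if certify(xM, (2 / 3) * delta) == 0:
        xR = xM + (2 / 3) * delta
    else:
        xL = xM - (2 / 3) * delta

x_star = (xL + xR) / 2
assert C(x_star + delta) > eta / 2 and C(x_star - delta) < eta
```

Since the interval shrinks by a constant factor per iteration while its width exceeds $2\delta$, the loop terminates after $O(\log(1/\delta))$ calls to the oracle, matching the count of \textsc{Certify} calls above.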
\end{proof} \begin{lemma}[\textsc{Certify} sub-routine]\label{lem:certify_two_cases} For any $\nu>0$, there exists an algorithm that distinguishes the two cases in Eq.~\eqref{eq:cond_dec_cdf_inv} for any $x\in [-\pi/3, \pi/3]$ with probability at least $1-O(\nu/L)$ using \begin{align*} O\left(\eta^{-2}\log^2(d)(\log(1/\nu)+\log\log(1/\delta))\right) \end{align*} independent samples of $(J,Z)$, and total evolution time \begin{align*} O\left(\eta^{-2}\tau d \log(d)(\log(1/\nu)+\log\log(1/\delta)) \right) \end{align*} in expectation. \end{lemma} \begin{proof} To decide which one of the conditions holds for $x$, we can estimate the ACDF $\widetilde{C}(x)$. If we take $\epsilon=\eta/8$ in Lemma~\ref{lem:approx_Heaviside}, then the constructed ACDF satisfies \begin{align*} C(x-\delta) - \eta/8 \leq \widetilde{C}(x)\leq C(x+\delta) + \eta/8. \end{align*} Thus, \begin{align*} &\widetilde{C}(x) > (5/8)\eta ~~\Rightarrow~~C(x+\delta)>\eta/2,\\ &\widetilde{C}(x) < (7/8)\eta ~~\Rightarrow~~C(x-\delta)<\eta. \end{align*} Then, we can distinguish $\widetilde{C}(x) > (5/8)\eta$ or $\widetilde{C}(x) < (7/8)\eta$ by the estimator in Lemma~\ref{lem:est_acdf}. \begin{algorithm}[t] \caption{Distinguish the two cases in Eq.~\eqref{eq:cond_dec_cdf_inv}} \label{alg:cdf_dis} \begin{algorithmic}[1] \algrenewcommand\algorithmicprocedure{\textbf{procedure}} \Procedure{Certify}{$x,\eta, \delta, \{J_k, Z_k\}$} \State $c\gets 0$, $N_b\gets \Omega(\log(1/\nu)+\log\log(1/\delta))$ \For{$1\leq r\leq N_b$} \State Compute $\overline{G}(x)$ using $\{J_k, Z_k\}_{k\in [(r-1)N_s+1, rN_s]}$\Comment{Lemma~\ref{lem:est_acdf}} \If{$\overline{G}(x)\geq (3/4)\eta$} \State $c\gets c+1$ \EndIf \EndFor \State \Return $\mathbf{1}_{c\leq N_b/2}$ \EndProcedure \end{algorithmic} \end{algorithm} In Algorithm~\ref{alg:cdf_dis}, we compute the estimator $\overline{G}(x)$ $N_b$ times independently, where each time we use $N_s$ samples of $(J, Z)$. 
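The effect of this repetition can be quantified with an exact binomial computation: assuming each repetition errs with probability at most $1/4$ (which the proof establishes via Chebyshev's inequality), the probability that the majority vote errs decays exponentially in $N_b$. A small sketch with illustrative values of $N_b$:

```python
import math

# Exact-binomial check of the boosting step: if each repetition errs with
# probability at most 1/4, the probability that a majority of N_b independent
# repetitions errs decays exponentially in N_b.

def majority_error(nb, perr):
    """P[Binomial(nb, perr) > nb/2], computed exactly."""
    return sum(math.comb(nb, k) * perr**k * (1 - perr)**(nb - k)
               for k in range(nb // 2 + 1, nb + 1))

prev = 1.0
for nb in [11, 21, 41, 81]:
    e = majority_error(nb, 0.25)
    assert e < prev          # strictly improving with more repetitions
    prev = e
# consistent with the Chernoff bound exp(-Omega(N_b))
assert majority_error(81, 0.25) < 1e-4
```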
We note that an error occurs when $\widetilde{C}(x)>(7/8)\eta$ but $\overline{G}(x) < (3/4)\eta$, or $\widetilde{C}(x)<(5/8)\eta$ but $\overline{G}(x) > (3/4)\eta$ (when $(5/8)\eta \leq \widetilde{C}(x)\leq (7/8)\eta$, any output is correct). By Chebyshev's inequality, we have \begin{align*} \Pr[\overline{G}(x)~\text{has an error}]\leq &~\Pr\left[\overline{G}(x) < \frac{3}{4}\eta ~\Big|~ \widetilde{C}(x)>\frac{7}{8}\eta\right] + \Pr\left[\overline{G}(x) > \frac{3}{4}\eta ~\Big|~ \widetilde{C}(x)<\frac{5}{8}\eta\right]\\ \leq &~ 2\cdot \frac{\sigma^2}{\eta^2/64}\\ \leq &~ \frac{1}{4}, \end{align*} if we take $\sigma^2=O(\eta^2)$ in Lemma~\ref{lem:est_acdf}. Then, by the Chernoff bound, we have \begin{align*} \Pr[\textsc{Certify}~\text{makes an error}]\leq \exp(-\Omega(N_b)) \leq \nu/L, \end{align*} if we take $N_b := \Omega(\log (L/\nu))=\Omega(\log(1/\nu)+\log\log(1/\delta))$. Thus, the total number of samples is \begin{align*} N_b N_s = O\left(\eta^{-2}\log^2(d)(\log(1/\nu)+\log\log(1/\delta))\right), \end{align*} and the expected total evolution time is \begin{align*} O\left(\eta^{-2}\tau d \log(d)(\log(1/\nu)+\log\log(1/\delta)) \right), \end{align*} which completes the proof of the lemma. \end{proof} \subsubsection{Estimating the ground state energy} \begin{corollary}[Ground state energy estimation, Corollary 3 in \cite{lt21}] If $p_0\geq \eta$ for some known $\eta$, then with probability at least $1-\nu$, the ground state energy $\lambda_0$ can be estimated within additive error $\epsilon$, such that: \begin{enumerate} \item the number of times running the quantum circuit (Figure~\ref{fig:hadamard_test}) is $\widetilde{O}(\eta^{-2})$. \item the expected total evolution time is $\widetilde{O}(\epsilon^{-1}\eta^{-2})$. \item the maximal evolution time is $\widetilde{O}(\epsilon^{-1})$. \item the classical running time is $\widetilde{O}(\eta^{-2})$.
\end{enumerate} \end{corollary} \begin{proof} Suppose we can solve the CDF inversion problem (Definition~\ref{def:inv_cdf}) for $\delta = \tau \epsilon$ and $\eta$, i.e., we find an $x^\star$ such that \begin{align*} C(x^\star + \tau \epsilon) > \eta/2 > 0, ~~~C(x^\star - \tau \epsilon) < \eta \leq p_0. \end{align*} Since $C(x)$ cannot take a value strictly between $0$ and $p_0$, we have \begin{align*} x^\star + \tau \epsilon \geq \tau \lambda_0, ~~~ x^\star - \tau \epsilon < \tau \lambda_0, \end{align*} that is, \begin{align*} |x^\star/\tau - \lambda_0|\leq \epsilon. \end{align*} The complexity bounds of this algorithm follow from Lemma~\ref{lem:invert_cdf}. \end{proof} \subsection{Low Fourier degree approximation of the Heaviside function} We construct the low degree approximation of the Heaviside function in this section.\footnote{The construction in \cite{lt21} is not enough to prove Lemma~\ref{lem:approx_acdf} because the range of $F_{d,\delta}$ is $[-\epsilon/2, 1+\epsilon]$ while Lemma~\ref{lem:approx_acdf} requires the range to be $[0,1]$. We fix this issue in Lemma~\ref{lem:approx_Heaviside}.} \begin{lemma}[Constructing low degree approximation of $H$]\label{lem:approx_Heaviside} Let $H(x)$ be the $2\pi$-periodic Heaviside function (Eq.~\eqref{eq:def_heaviside}). For any $\delta \in (0, \pi/2)$ such that $\tan(\delta/2)\leq 1-1/\sqrt{2}$, there exists a $d=O(\delta^{-1}\log(\delta^{-1}\epsilon^{-1}))$ and a $2\pi$-periodic function $F_{d,\delta}(x)$ of the form: \begin{align} F_{d,\delta}(x) = \frac{1}{\sqrt{2\pi}}\sum_{j=-d}^d \widehat{F}_{d,\delta,j} \cdot e^{ijx} \end{align} such that \begin{enumerate} \item $F_{d,\delta}(x)\in [0, 1]$ for all $x\in \mathbb{R}$. \item $|F_{d,\delta}(x) - H(x)|\leq \epsilon$ for $x\in [-\pi + \delta, -\delta]\cup [\delta, \pi - \delta]$. \item $|\widehat{F}_{d,\delta,j}|=\Theta(1/|j|)$ for $j\ne 0$.
\end{enumerate} \end{lemma} \begin{proof} We first construct $F_{d,\delta}'(x)$ by mollifying the Heaviside function with $M_{d,\delta}(x)$ in Lemma~\ref{lem:mollifier}: \begin{align} F'_{d,\delta}(x) := (M_{d,\delta} * H)(x) = \int_{-\pi}^{\pi} M_{d,\delta}(y) H(x-y) \d y. \end{align} We can verify that $F'_{d,\delta}$ has Fourier degree at most $d$. This follows from the fact that the Chebyshev polynomial $T_d(x)$ has degree $d$. Hence, the Fourier coefficients of $M_{d,\delta}(x)$: \begin{align} \widehat{M}_{d,\delta, j} = \frac{1}{\sqrt{2\pi}}\int_{-\pi}^\pi M_{d,\delta}(x)e^{-ijx}\d x \ne 0 \end{align} only if $j\in \{-d,\dots, d\}$. Since $F'_{d,\delta}$ is a convolution of $M_{d,\delta}$ and $H$, we have \begin{align} \widehat{F'}_{d,\delta,j}=\sqrt{2\pi} \widehat{M}_{d,\delta, j} \widehat{H}_{j}~~~\forall |j|\leq d. \end{align} Then, we define \begin{align} F_{d,\delta}(x) := \frac{1}{\sqrt{2\pi}}\sum_{j=-d}^d \widehat{F}_{d,\delta,j} \cdot e^{ijx}, \end{align} where \begin{align} \widehat{F}_{d,\delta,j} = \begin{cases} \frac{1}{1+(5/4)\epsilon}\left(\widehat{F'}_{d,\delta, j} + \sqrt{2\pi}\epsilon/4\right) & \text{if}~j=0,\\ \frac{1}{1+(5/4)\epsilon}\widehat{F'}_{d,\delta, j} & \text{otherwise}. \end{cases} \end{align} It is easy to see that \begin{align} F_{d,\delta}(x) = \frac{F'_{d,\delta}(x) + \epsilon/4}{1+(5/4)\epsilon}~~~\forall x\in \mathbb{R}. \end{align} We now show that taking $d=O(\delta^{-1}\log(\delta^{-1}\epsilon^{-1}))$ is enough to satisfy (1)-(3). \paragraph{Part (1):} We first compute the range of $F'_{d,\delta}(x)$: \begin{align} F'_{d,\delta}(x)\leq \int_{-\pi}^{\pi} |M_{d,\delta}(y)|\d y\leq 1+\frac{4\pi}{{\cal N}_{d,\delta}}, \end{align} where the second step follows from (2) in Lemma~\ref{lem:mollifier}. \iffalse For the lower bound, first consider $x\in [0, \delta]$.
Then, \begin{align*} F_{d,\delta}(x)= &~ \int_{-\pi}^{\pi} M_{d,\delta}(y) H(x - y) \d y\\ = &~ \int_{x-\pi}^{x} M_{d,\delta}(y) \d y\\ = &~ \int_{x-\pi}^{-\delta} M_{d, \delta}(y) \d y + \int_{-\delta}^{x} M_{d, \delta}(y) \d y\\ \geq &~ \int_{x-\pi}^{-\delta} -\frac{1}{{\cal N}_{d,\delta}} \d y + \int_{-\delta}^{x} \frac{1}{{\cal N}_{d,\delta}} \d y\\ = &~ \frac{-(\pi -x-\delta)+(x + \delta)}{{\cal N}_{d,\delta}}\\ = &~ \frac{2(x+\delta) - \pi}{{\cal N}_{d,\delta}} \end{align*} For $x\in [-\delta, 0]$, we have \begin{align*} F_{d,\delta}(x)= &~ \int_{-\pi}^{\pi} M_{d,\delta}(y) H(x - y) \d y\\ = &~ \int_{-\pi}^{x} M_{d,\delta}(y) \d y + \int_{x+\pi}^{\pi} M_{d,\delta}(y) \d y\\ = &~ \int_{-\pi}^{-\delta} M_{d, \delta}(y) \d y + \int_{-\delta}^{x} M_{d, \delta}(y) + \d y+ \int_{x+\pi}^{\pi} M_{d,\delta}(y) \d y\\ \geq &~ \frac{-(\pi-\delta -x)+x+\delta}{{\cal N}_{d,\delta}}\\ = &~ \frac{2(x + \delta)-\pi}{{\cal N}_{d,\delta}} \end{align*} In general, \fi On the other hand, \begin{align*} F'_{d,\delta}(x)\geq -\frac{1}{{\cal N}_{d,\delta}} \int_{-\pi}^\pi H(y)\d y = \frac{-\pi}{{\cal N}_{d,\delta}}. \end{align*} Hence, if we take $d=O(\delta^{-1}\log(\delta^{-1}\epsilon^{-1}))$ such that \begin{align}\label{eq:d_N} {\cal N}_{d,\delta}\geq C_1 e^{d\delta/\sqrt{2}}\sqrt{\frac{\delta}{d}}\cdot \mathrm{erf}(C_2\sqrt{d}\delta)\geq \frac{4\pi}{\epsilon} \end{align} holds, we will have \begin{align}\label{eq:F_upper_bound} -\epsilon/4 \leq F'_{d,\delta} \leq 1+\epsilon. \end{align} Therefore, for all $x\in \mathbb{R}$, \begin{align} F_{d,\delta}(x) = \frac{F'_{d,\delta}(x) + \epsilon/4}{1+(5/4)\epsilon}\in [0,1]. 
\end{align} \paragraph{Part (2):} The approximation error of $F'_{d,\delta}$ is \begin{align} |F'_{d,\delta}(x) - H(x)| \leq &~ \left|\int_{-\pi}^{\pi} M_{d,\delta}(y) (H(x-y)-H(x))\d y\right|\notag\\ \leq &~ \int_{-\pi}^{\pi} |M_{d,\delta}(y)| |H(x-y)-H(x)|\d y, \end{align} where the first step follows from (2) in Lemma~\ref{lem:mollifier}, and the second step follows from the triangle inequality. Fix $x\in [-\pi+\delta, -\delta]\cup [\delta, \pi-\delta]$. If $y\in (-\delta, \delta)$, then $H(x-y) = H(x)$ and \begin{align} \int_{-\delta}^{\delta} |M_{d,\delta}(y)| |H(x-y)-H(x)|\d y = 0. \end{align} If $|y|\geq \delta$, by (1) in Lemma~\ref{lem:mollifier}, we have $|M_{d,\delta}|\leq \frac{1}{{\cal N}_{d,\delta}}$. Since $|H(x-y)-H(x)|\leq 1$, we have \begin{align} \left(\int_{-\pi}^{-\delta} + \int_{\delta}^{\pi}\right) |M_{d,\delta}(y)| |H(x-y)-H(x)|\d y \leq \frac{2\pi}{{\cal N}_{d,\delta}}\leq \epsilon/2, \end{align} where the last step follows from Eq.~\eqref{eq:d_N}. Therefore, \begin{align} |F'_{d,\delta}(x) - H(x)|\leq \epsilon/2~~~\forall |x|\in [\delta, \pi-\delta]. \end{align} Thus, \begin{align} |F_{d,\delta}(x) - H(x)| = &~ \left|\frac{F'_{d,\delta}(x) + \epsilon/4}{1+(5/4)\epsilon} - H(x)\right|\\ \leq &~ |F'_{d,\delta}(x) - H(x)| + \frac{(5/4)\epsilon}{1+(5/4)\epsilon} |F'_{d,\delta}(x)| + \frac{\epsilon/4}{1+(5/4)\epsilon}\notag\\ \leq &~ \epsilon/2 + \frac{(5/4)\epsilon}{1+(5/4)\epsilon}(1+\epsilon) + \frac{\epsilon/4}{1+(5/4)\epsilon}\notag\\ \leq &~ 2\epsilon, \end{align} where the second step follows from the triangle inequality, and the third step follows from Eq.~\eqref{eq:F_upper_bound}. Rescaling $\epsilon$ by a constant factor then makes the approximation error at most $\epsilon$.
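Parts (1)-(2) can be checked numerically. The Python sketch below uses illustrative parameters ($d=40$, $\delta=0.3$, $\epsilon=0.05$, chosen for speed rather than to match the asymptotic bound), builds the mollifier from the Chebyshev polynomial, forms $F'_{d,\delta}=M_{d,\delta}*H$ by a Riemann sum, and applies the shift-and-rescale step; the checks hold up to quadrature error.

```python
import math

d, delta, eps = 40, 0.3, 0.05   # illustrative parameters
n = 1000                        # grid size for the Riemann sums
xs = [-math.pi + 2 * math.pi * i / n for i in range(n)]
dx = 2 * math.pi / n

def cheb(k, t):
    # Chebyshev polynomial T_k, evaluated stably inside and outside [-1, 1]
    t = max(t, -1.0)            # guard against float rounding at x = +-pi
    if t <= 1.0:
        return math.cos(k * math.acos(t))
    return math.cosh(k * math.acosh(t))

raw = [cheb(d, 1 + 2 * (math.cos(x) - math.cos(delta)) / (1 + math.cos(delta)))
       for x in xs]
norm = sum(raw) * dx            # numerical version of N_{d,delta}
M = [r / norm for r in raw]     # mollifier M_{d,delta}

def H(t):                       # 2*pi-periodic Heaviside function
    return 1.0 if (t % (2 * math.pi)) < math.pi else 0.0

def F_prime(x):                 # (M * H)(x) by a Riemann sum
    return sum(M[j] * H(x - xs[j]) for j in range(n)) * dx

F = [(F_prime(x) + eps / 4) / (1 + (5 / 4) * eps) for x in xs]

# property (2): F is close to H away from the jumps at 0 and +-pi
good = [i for i, x in enumerate(xs)
        if delta + 0.05 <= abs(x) <= math.pi - delta - 0.05]
err = max(abs(F[i] - H(xs[i])) for i in good)
```

Note that normalizing $M$ by the same Riemann sum makes $\sum_j M_j\,\Delta x = 1$ hold exactly, which is what keeps the numerical range of $F$ inside $[0,1]$.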
\paragraph{Part (3):} Since $|\widehat{F'}_{d,\delta,j}|=\sqrt{2\pi} |\widehat{M}_{d,\delta, j}| |\widehat{H}_{j}|$, we first bound $|\widehat{M}_{d,\delta, j}|$: \begin{align} \left|\widehat{M}_{d,\delta, j}\right| \leq \frac{1}{\sqrt{2\pi}}\int_{-\pi}^\pi |M_{d,\delta}(x)|\d x\leq \frac{1}{\sqrt{2\pi}}\left(1+\frac{4\pi}{{\cal N}_{d,\delta}}\right)\leq \frac{1+\epsilon}{\sqrt{2\pi}}, \end{align} where the second step follows from (2) in Lemma~\ref{lem:mollifier} and the last step follows from Eq.~\eqref{eq:d_N}. For $|\widehat{H}_j|$, if $j\ne 0$, we have \begin{align} \widehat{H}_j = \frac{1}{\sqrt{2\pi}}\int_{-\pi}^{\pi} H(x) e^{-ijx}\d x = \frac{1}{\sqrt{2\pi}}\int_{0}^{\pi} e^{-ijx}\d x = \begin{cases} \frac{\sqrt{2}}{i\sqrt{\pi}j} & \text{if }j~\text{is odd},\\ 0 & \text{if }j~\text{is even}. \end{cases} \end{align} Hence, for $j\ne 0$, \begin{align} |\widehat{F'}_{d,\delta, j}|\leq \sqrt{2\pi} \cdot \frac{1+\epsilon}{\sqrt{2\pi}}\cdot \sqrt{\frac{2}{\pi}}\frac{1}{|j|}=\frac{1+\epsilon}{\sqrt{\pi/2}|j|}. \end{align} Then, by definition, we get that \begin{align} |\widehat{F}_{d,\delta, j}|\leq \frac{1+\epsilon}{\sqrt{\pi/2}(1+(5/4)\epsilon)|j|}= \Theta(1/|j|). \end{align} The proof of the lemma is completed. \end{proof} The following lemma shows the approximation ratio of the ACDF $\widetilde{C}(x)$ constructed from the low degree approximated Heaviside function $F(x)$ by Lemma~\ref{lem:approx_Heaviside}. \begin{lemma}[Approximation ratio of the ACDF]\label{lem:approx_acdf} For any $\epsilon>0$, $0<\delta < \pi/6$, let $F(x) := F_{d,\delta}(x)$ constructed by Lemma~\ref{lem:approx_Heaviside}. Then, for any $x\in [-\pi/3, \pi/3]$, the ACDF $\widetilde{C}(x) = (F*p)(x)$ satisfies: \begin{align*} C(x-\delta) -\epsilon \leq \widetilde{C}(x) \leq C(x + \delta) + \epsilon. 
\end{align*} \end{lemma} \begin{proof} By (2) in Lemma~\ref{lem:approx_Heaviside}, we have \begin{align} |F(x) - H(x)|\leq \epsilon ~~~\forall x\in [-\pi + \delta, -\delta]\cup [\delta, \pi - \delta]. \end{align} Define $F_L(x) := F(x - \delta)$, so that \begin{align}\label{eq:F_L_1d} |F_L(x) - H(x)|\leq \epsilon ~~~\forall x\in [-\pi + 2\delta, 0]\cup [2\delta, \pi]. \end{align} For $\widetilde{C}_L(x) := (F_L * p)(x)$, we have $\widetilde{C}_L(x) = \widetilde{C}(x - \delta)$, and for $x \in [-\pi/3, \pi/3]$, \begin{align} |C(x) - \widetilde{C}_L(x)| =&~ \left|\int_{-\pi}^{\pi} p(x-y) (H(y) - F_L(y)) \d y\right|\\ \leq &~ \int_{-\pi}^{\pi} p(x-y) |H(y) - F_L(y)| \d y\notag\\ = &~ \left(\int_{-\pi}^0 + \int_{2\delta}^\pi\right) p(x-y)|H(y)-F_L(y)|\d y+ \int_0^{2\delta} p(x-y)|H(y)-F_L(y)|\d y\notag\\ \leq &~ \epsilon\cdot \left(\int_{-\pi}^0 + \int_{2\delta}^\pi\right) p(x-y)\d y + \int_0^{2\delta} p(x-y)|H(y)-F_L(y)|\d y\notag\\ \leq &~ \epsilon + \int_0^{2\delta} p(x-y)|H(y)-F_L(y)|\d y\notag\\ \leq &~ \epsilon + \int_0^{2\delta} p(x-y) \d y\notag\\ = &~ \epsilon + \int_{x-2\delta}^{x} p(y) \d y\notag\\ = &~ \epsilon + C(x) - C(x-2\delta), \end{align} where the second step follows from the triangle inequality, the fourth step follows from Eq.~\eqref{eq:F_L_1d}, the fifth step follows from the fact that $p(x)$ is a density function, the sixth step follows from $H(y) = 1$ and $F_L(y)\in [0,1]$ for $y\in [0, 2\delta]$, and the last step follows from the fact that $C(x)$ is the CDF of $p(x)$ in $[-\pi, \pi]$. Hence, we have \begin{align} \widetilde{C}_L(x) \geq C(x) - (\epsilon + C(x) - C(x - 2\delta)) = C(x - 2\delta) - \epsilon, \end{align} which proves the first inequality: \begin{align} \widetilde{C}(x - \delta) \geq C(x-2\delta) - \epsilon. \end{align} Similarly, we can define $F_R(x) := F(x + \delta)$ and $\widetilde{C}_R(x) := (F_R * p)(x)$.
We can show that \begin{align} |C(x) - \widetilde{C}_R(x)| \leq \epsilon + C(x+2\delta) - C(x), \end{align} which gives \begin{align} \widetilde{C}(x + \delta) \leq C(x + 2\delta) +\epsilon. \end{align} The lemma is then proved. \end{proof} \subsubsection{Technical lemma} \begin{lemma}[Mollifier, Lemma 5 in \cite{lt21}]\label{lem:mollifier} Define $M_{d,\delta}(x)$ to be \begin{align} M_{d,\delta}:=\frac{1}{{\cal N}_{d,\delta}}T_d\left( 1 + 2\frac{\cos(x) - \cos (\delta)}{1+\cos (\delta)}\right) \end{align} where $T_d(x)$ is the $d$-th Chebyshev polynomial of the first kind, and \begin{align} {\cal N}_{d,\delta}:=\int_{-\pi}^{\pi} T_d\left( 1 + 2\frac{\cos(x) - \cos (\delta)}{1+\cos (\delta)}\right) \d x. \end{align} Then \begin{enumerate} \item $|M_{d,\delta}(x)|\leq \frac{1}{{\cal N}_{d,\delta}}$ for $x\in [-\pi, -\delta]\cup [\delta, \pi]$, and $M_{d,\delta}(x)\geq \frac{1}{{\cal N}_{d,\delta}}$ for $x\in [-\delta, \delta]$. \item $\int_{-\pi}^{\pi} M_{d,\delta}(x)\d x = 1$, $1\leq \int_{-\pi}^{\pi} |M_{d,\delta}(x)| \d x \leq 1+\frac{4\pi}{{\cal N}_{d,\delta}}$. \item When $\tan (\delta/2)\leq 1-1/\sqrt{2}$, we have \begin{align} {\cal N}_{d,\delta} \geq C_1 e^{d\delta/\sqrt{2}}\sqrt{\frac{\delta}{d}}\cdot \mathrm{erf}(C_2\sqrt{d}\delta), \end{align} for some universal constant $C_1,C_2$. \end{enumerate} \end{lemma} The proof can be found in Appendix A in \cite{lt21}, and we omit it here. \iffalse \begin{proof} For (1), note that \begin{align*} 1 + 2\frac{\cos(x) - \cos (\delta)}{1+\cos (\delta)} \in \begin{cases} \big[1-\frac{2\cos(\delta)}{1+\cos(\delta)}, 1\big]\subset [0,1]&\text{if }x\in [-\pi,-\delta]\cup [\delta, \pi]\\ \big[1, 1+2\frac{1-\cos(\delta)}{1+\cos(\delta)}\big]\subset [1,3]& \text{if }x\in [-\delta, \delta] \end{cases}. \end{align*} Since $|T_d(y)|\leq 1$ if $y\in [0, 1]$ and $T_d(y)\geq 1$ if $y\geq 1$, (1) is then proved. 
\end{proof} \fi \section{An Overview of the Low-Depth Ground State Energy Estimation} \label{sec:gsee} In this section, we provide a brief overview of the low-depth ground state energy estimation algorithm proposed by Lin and Tong \cite{lt21}. Our algorithms are inspired by this algorithm and use it as a subroutine. More specifically, they showed that: \begin{theorem}[\cite{lt21}]\label{thm:lt21_main} Let $H$ be a Hamiltonian with eigenvalues in the interval $[-\pi/3, \pi/3]$ whose ground state $\ket{\psi_0}$ has energy $\lambda_0$, and suppose we can prepare an initial state $\ket{\phi_0}$ such that $p_0\geq \eta$ for some known $\eta$, where $p_0:=|\langle \phi_0 | \psi_0\rangle|^2$. Then, for any $\epsilon, \nu\in (0, 1)$, there exists an algorithm that estimates $\lambda_0$ within additive error $\epsilon$ with probability at least $1-\nu$, by running a parameterized quantum circuit with maximum quantum evolution time $\widetilde{O}(\epsilon^{-1})$ and expected total quantum evolution time $\widetilde{O}(\epsilon^{-1}\eta^{-2})$. \end{theorem} The pseudo-code of their algorithm is given in Algorithm~\ref{alg:gs_energy}.
\begin{algorithm}[ht] \caption{Ground State Energy Estimation} \label{alg:gs_energy} \begin{algorithmic}[1] \algrenewcommand\algorithmicprocedure{\textbf{procedure}} \Procedure{EstimateGSE}{$\epsilon,\tau, \eta, \nu$} \State \Comment{Initialization} \State $\delta \gets \tau \epsilon$, $d\gets O(\delta^{-1}\log(\delta^{-1}\eta^{-1}))$\label{ln:set_d} \For{$i\gets -d,\dots,d$} \State $\hat{F}_i\gets \hat{F}_{d,\delta,i}$ \State Compute $\theta_i$, the phase angle of $\hat{F}_i$ \EndFor \State ${\cal F}\gets \sum_{|i|\leq d}|\hat{F}_i|$ \State $N_b\gets \Omega(\log(1/\nu)+\log\log(1/\delta))$, $N_s\gets O(\eta^{-2}\log^2(d))$ \State \Comment{Sampling from the quantum circuit} \For{$k\gets 1,\dots,N_bN_s$} \State Independently sample $J_k\sim [-d,d]$ with $\Pr[J_k=j]\propto |\hat{F}_j|$ \State Measure $(X_k,Y_k)$ by running the quantum circuit (Figure~\ref{fig:hadamard_test}) with parameter $J_k$ \State $Z_k\gets X_k + iY_k$ \EndFor \State \Comment{Classical post-processing} \State $x_{L}\gets -\pi/3$, $x_{R}\gets \pi/3$ \While{$x_R - x_L > 2\delta$}\label{ln:inv_cdf_while}\Comment{Invert CDF} \State $x_M\gets (x_L + x_R)/2$ \For{$r\gets 1,\dots, N_b$} \State $\overline{G}_r\gets \frac{{\cal F}}{N_s}\sum_{k=(r-1)N_s + 1}^{rN_s} Z_k e^{i(\theta_{J_k} + J_k x_M)}$\Comment{Multi-level Monte Carlo method}\label{ln:mlmc} \EndFor \If{$|\{r:\overline{G}_r\geq (3/4)\eta\}|\leq N_b / 2$} \State $x_R\gets x_M + (2/3)\delta$ \Else \State $x_L\gets x_M - (2/3)\delta$ \EndIf \EndWhile \State \Return $(x_L + x_R)/2$ \EndProcedure \end{algorithmic} \end{algorithm} The main technique of their algorithm is a classical post-processing procedure that extracts information from the following Hadamard test circuit (Figure~\ref{fig:hadamard_test}).
\begin{figure}[H] \centering \begin{displaymath} \Qcircuit @C=1.0em @R=1.2em { & & & &\\ \lstick{\ket{0}} &\gate{\mathrm{H}} &\ctrl{1} & \gate{\mathrm{W}} & \gate{\mathrm{H}} &\meter\\ \lstick{\ket{\phi_0}} & \qw & \gate{e^{-ij\tau H}} &\qw &\qw &\qw } \end{displaymath} \caption{Quantum circuit parameterized by $j$. $\mathrm{H}$ is the Hadamard gate and $\mathrm{W}$ is either $I$ or a phase gate. A detailed analysis of this circuit is given in Appendix~\ref{sec:quantum_hadamard}.} \label{fig:hadamard_test} \end{figure} Let the initial state $\ket{\phi_0}$ be expanded as $\ket{\phi_0}=\sum_k \alpha_k \ket{\psi_k}$ in the eigen-basis of $H$ and let $p_k:=|\alpha_k|^2$ be the overlap with the $k$-th eigenstate. They considered the overlaps $p_0,p_1,\dots$ as a density function: \begin{align} p(x):=\sum_k p_k \delta(x- \tau\lambda_k)~~~\forall x\in [-\pi,\pi]. \end{align} Then, the cumulative distribution function (CDF) $C(x):=\int_{-\pi}^x p(t)\d t$ can be expressed as a convolution of $p(x)$ and the $2\pi$-periodic Heaviside function $H(x)$, which is 0 in $[(2k-1)\pi, 2k\pi)$ and 1 in $[2k\pi, (2k+1)\pi)$ for any $k\in \Z$. Thus, $C(x)$ is also a periodic function, which makes it convenient to apply the Fourier approximation. They showed that $H(x)$ can be approximated by a low-Fourier degree function $F(x)$ in the intervals $[-\pi+\delta, -\delta]$ and $[\delta, \pi-\delta]$. Then, they defined the approximated cumulative distribution function (ACDF) as $\widetilde{C}(x) := (F * p)(x)$ and proved that \begin{align} C(x-\delta) -\eta/8 \leq \widetilde{C}(x) \leq C(x+\delta) + \eta/8~~~\forall x\in [-\pi/3, \pi/3]. \end{align} Moreover, for each $x$, we have \begin{align} \widetilde{C}(x) = \sum_{|j|\leq d} \hat{F}_j e^{ijx} \cdot \bra{\phi_0}e^{-ij\tau H}\ket{\phi_0}, \end{align} where $\hat{F}_j$ is the Fourier coefficient of $F(x)$. Note that $\bra{\phi_0} e^{-ij\tau H}\ket{\phi_0}$ can be estimated via the parameterized quantum circuit (Figure~\ref{fig:hadamard_test}).
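The importance-sampling step behind this estimate can be sketched classically. In the snippet below, the coefficients \texttt{c} are arbitrary illustrative stand-ins for $\hat{F}_j$, and \texttt{z(j)} returns the exact expectation $\bra{\phi_0}e^{-ij\tau H}\ket{\phi_0}$ for a toy spectrum, a noiseless stand-in for single-shot Hadamard-test outcomes $Z_k$.

```python
import cmath
import random

random.seed(7)
d, tau = 20, 1.0
lam = [0.2, 0.5, 0.8]      # toy tau-scaled eigenvalues
p = [0.4, 0.35, 0.25]      # overlaps p_k

def z(j):
    # noiseless stand-in for the Hadamard-test outcome:
    # exactly <phi_0| e^{-ij tau H} |phi_0> for the toy spectrum above
    return sum(pk * cmath.exp(-1j * j * tau * lk) for pk, lk in zip(p, lam))

# toy Fourier coefficients (hypothetical, standing in for F-hat_j)
c = {j: cmath.exp(1j * 0.3 * j) / (1 + abs(j)) for j in range(-d, d + 1)}
js = sorted(c)
calF = sum(abs(c[j]) for j in js)     # the normalization constant "F"

x = 0.3
exact = sum(c[j] * cmath.exp(1j * j * x) * z(j) for j in js)

# sample J with Pr[J = j] proportional to |c_j|, then reweight:
# E[calF * z(J) * e^{i(phase(c_J) + J x)}] equals the exact sum above
n_s = 20000
samples = random.choices(js, weights=[abs(c[j]) for j in js], k=n_s)
est = calF / n_s * sum(z(j) * cmath.exp(1j * (cmath.phase(c[j]) + j * x))
                       for j in samples)
```

Since each sampled term has modulus at most ${\cal F}$, the variance of the estimator is controlled by ${\cal F}^2/N_s$, which is the mechanism behind the sample-count bound.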
Hence, we can estimate the ACDF at every point in $[-\pi/3, \pi/3]$. Moreover, they showed that the multi-level Monte Carlo method can be applied here to reduce the number of samples needed to achieve a high-accuracy estimation (Line~\ref{ln:mlmc}). Therefore, we can estimate the ground state energy $\lambda_0$ by locating the first non-zero point of the CDF $C(x)$, which is $\eta/8$-approximated by the ACDF $\widetilde{C}(x)$. Since we assume that $p_0\geq \eta$, the approximation error and the estimation error of $\widetilde{C}(x)$ can be tolerated, and we can find $\lambda_0$ via a robust binary search (Line~\ref{ln:inv_cdf_while}). We note that the maximal evolution time of this algorithm corresponds to the Fourier degree of $F(x)$, which is $\widetilde{O}(\epsilon^{-1})$ by construction, making their algorithm suitable for early fault-tolerant quantum devices. More details of this algorithm and the proofs are given in Appendix~\ref{sec:lt21_details}.
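The full pipeline can be mimicked classically on a toy diagonal Hamiltonian. The sketch below substitutes textbook Fourier coefficients of the $2\pi$-periodic step, damped by Lanczos $\sigma$-factors, for the Chebyshev-mollifier construction, and noiseless expectations for circuit samples (with $\tau=1$ and illustrative eigenvalues and overlaps); it then scans for the first point where the ACDF exceeds $\eta/2$.

```python
import cmath
import math

d = 200
lam = [0.2, 0.6, 0.9]    # toy (tau-scaled) eigenvalues; lam[0] is the ground state
p = [0.4, 0.35, 0.25]    # overlaps; p_0 = 0.4 >= eta
eta = 0.3

def c_hat(j):
    # Fourier coefficients of the 2*pi-periodic Heaviside step,
    # damped by Lanczos sigma factors to tame the Gibbs oscillations
    if j == 0:
        return 0.5 + 0j
    if j % 2 == 0:
        return 0j
    sigma = math.sin(math.pi * j / (d + 1)) / (math.pi * j / (d + 1))
    return sigma / (math.pi * 1j * j)

def z(j):
    # noiseless stand-in for the Hadamard-test signal <e^{-ij tau H}>
    return sum(pk * cmath.exp(-1j * j * lk) for pk, lk in zip(p, lam))

coef = [(j, c_hat(j) * z(j)) for j in range(-d, d + 1)]

def acdf(x):
    # the ACDF C-tilde(x), evaluated classically
    return sum(cj * cmath.exp(1j * j * x) for j, cj in coef).real

# locate the first point where the ACDF exceeds eta/2: estimates lam[0]
x_star = next(x / 1000 for x in range(-300, 1100) if acdf(x / 1000) > eta / 2)
```

In the actual algorithm the scan is replaced by the robust binary search, and each ACDF evaluation by the boosted Monte Carlo estimate; the classical logic is otherwise the same.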
\section{Introduction} This paper concerns the reconstruction of a narrow laser beam propagating in a turbulent atmosphere; see \cite{BG18,C-SPIE-03,HBDH,RR-OE-08} for references and applications. We assume that the correlation length of the turbulence and the central wavelength of the laser are both very small compared to the overall distance of propagation. In such a setting, light propagation is accurately modeled by a deterministic radiative transfer (transport) equation \cite{BKR,BKR-liouv}, as used in \cite{C-SPIE-03}; see \eqref{eq:tr}-\eqref{eq:Q} below. Although inverse transport theory is well developed (see, e.g., \cite{B} and references there), forward models based on kinetic transport involve a high-dimensional reconstruction of their constitutive parameters. In the presence of limited available data, it is advisable to look for more macroscopic models to describe beam propagation. In the highly forward-peaked regime, when light interacts often with the underlying turbulence (small mean free path) but at each scattering event with small variation in its direction (larger transport mean free path), two types of equations are known to emerge: Fokker-Planck and fractional Fokker-Planck models; see \eqref{eq:FP} and \eqref{eq:fFP} below. The derivation of such Fokker-Planck models from radiative transfer equations in the highly forward-peaked regime is done, e.g., in \cite{AS,BL1,BL2,P}. The corresponding inverse theory (i.e., the reconstruction of their constitutive coefficients from boundary measurements) is mathematically open. It turns out that in the regime of large transport mean free path (small diffusion coefficient) another accurate approximation is possible. It is based on neglecting back-scattering and is thus valid as long as beams remain sufficiently narrow. The Fermi pencil beam and fractional Fermi pencil beam models were derived recently from Fokker-Planck models in \cite{BP-SIMA-20,BP-preprint-20}.
We present the models in detail in section \ref{sec:fpb}. Inverse problems based on the Fermi pencil beam models are significantly simpler than those based on radiative transfer or Fokker-Planck equations. Moreover, they offer a parametric (the `fraction' coefficient) set of models for beam spreading based on the possibly unknown statistics of turbulence and thus provide reasonable macroscopic descriptions for the reconstruction of laser beams from off-axis measurements. \medskip Section \ref{sec:kmodels} recalls the aforementioned kinetic models (radiative transfer, Fokker-Planck) and presents the setting for off-axis measurements: a decomposition of light scattering into two components, $Q_F$ modeling forward-peaked beam scattering, and $Q_S$ modeling small, large-angle scattering that generates the signals captured by off-axis detectors. As mentioned above, the Fermi pencil beam models are presented in section \ref{sec:fpb} in detail. We also show their accuracy in a suitable metric when compared to (fractional) Fokker-Planck solutions in the small diffusion regime. The settings for the off-axis measurements are presented in section \ref{sec:rec}. We provide a sufficient set of measurements such that the main axis of propagation of the laser may be estimated. Assuming a cylindrical symmetry of the laser beam, we propose an inversion based on an inverse Radon transform that explicitly provides reconstruction for the turbulence diffusion (including the `fraction' coefficient) as well as the location of its source from a minimal set of off-axis measurements. \section{Kinetic model}\label{sec:kmodels} We model light propagation in scattering media with a kinetic equation of the form \begin{equation}\label{eq:tr} \theta\cdot\nabla_x w + \lambda w = Q(w) + f, \end{equation} where $x\in X\subset \mathbb R^n$ is spatial position (with $n=3$ in most applications), $\theta\in\mathbb S^{n-1}$ the angular direction and $w(x,\theta)$ the particle density. 
We assume a light speed normalized to $1$ and a constant index of refraction. The parameter $\lambda(x)\geq0$ models intrinsic absorption while $Q(w)$ is a scattering kernel. In the radiative transfer model, the scattering kernel is of the form \begin{equation}\label{eq:Q} Q(w) = \displaystyle\int_{\mathbb S^{n-1}} k(x,\theta,\theta') (w(x,\theta')-w(x,\theta)) d\theta', \end{equation} with $d\theta$ the standard Lebesgue measure on the sphere and $k(x,\theta,\theta')$ quantifying scattering from $\theta'$ to $\theta$ at position $x$. Our objective is to model the propagation of narrow beams of light and to address their reconstruction from off-axis measurements, i.e., measurements performed away from the physical location of the main beam. Beams preserve their structure in environments with large transport mean free path, heuristically defined as a distance over which light direction changes significantly. Thus, for a distance between $\theta$ and $\theta'$ sufficiently large, we expect the scattering kernel $k$ to be small. In many regimes of interest, the mean free path, defined as the (average) distance between successive interactions of light with the underlying medium, may still be quite small. This is the regime of forward-peaked scattering, where $k$ is significant for $\theta$ close to $\theta'$. We thus arbitrarily separate $k$ into a component with $\theta$ close to $\theta'$ and a component $k_S$ where this is not the case. We correspondingly decompose \[ Q = Q_F + Q_S \] with $Q_F$ corresponding to forward-peaked scattering while $Q_S$ models small, large-angle scattering. Since $Q_S$ is small, we perform an expansion in that parameter and define \[ \theta\cdot\nabla_x u + \lambda u= Q_F(u) + f,\qquad \theta\cdot\nabla_x w + \lambda w = Q_F(w) + Q_S(u) + f. \] The first equation for $u$ is the main object of interest to analyze beam spreading. The second equation for $w$ is an approximation of the original $w$ to second order in $Q_S$. 
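A scalar caricature of this expansion confirms the second-order accuracy in $Q_S$; the numbers \texttt{a}, \texttt{k\_f}, \texttt{k\_s}, \texttt{f} below are arbitrary illustrative values, with \texttt{a} standing in for the transport-plus-absorption operator and \texttt{t} scaling the small large-angle scattering.

```python
# scalar caricature: transport-plus-absorption 'a', forward-peaked
# scattering k_f, and small large-angle scattering t * k_s
a, k_f, k_s, f = 2.0, 0.5, 1.0, 1.0

def w_full(t):
    # full model: (a - k_f - t * k_s) w = f
    return f / (a - k_f - t * k_s)

def w_approx(t):
    # the two-step scheme: solve for u first, then feed t * k_s * u as a source
    u = f / (a - k_f)
    return (t * k_s * u + f) / (a - k_f)

# halving t should divide the truncation error by ~4 if it is O(t^2)
errs = [abs(w_full(t) - w_approx(t)) for t in (0.1, 0.05, 0.025)]
ratios = [errs[i] / errs[i + 1] for i in range(2)]
```

The error ratios near $4$ under halving of $t$ are the numerical signature of a second-order truncation, mirroring the claim for the operator-valued expansion.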
Off-axis scattering and measurements are then modeled by $u_S=w-u$. To simplify reconstructions, we neglect spreading $Q_F(u_S)$ and absorption $\lambda u_S$ in the scattered contribution, which is justified for instance when the distance between the beam and the detector arrays is reasonably short. Then $u_S$ is the solution of \begin{equation}\label{offaxis_dist} \theta\cdot\nabla_x u_S = Q_S(u). \end{equation} We now focus on the different models for $Q_F(u)$. The scattering kernel \eqref{eq:Q} may be derived from models of wave propagation in heterogeneous media when the wavelength of the wave packets is comparable to the correlation length of the random medium \cite{BKR}. When the correlation length is significantly larger than the wavelength, the limiting equation is instead of Fokker-Planck form \cite{P,BKR}, with \[ Q_F= Q_{FP} := \nabla_\theta\cdot D(x,\theta) \nabla_\theta \] where $D(x,\theta)$ is a positive tensor. To simplify, we assume that $D$ is isotropic (independent of $\theta$) and $Q_F$ then takes the standard form of a Laplace (Beltrami) operator on the unit sphere: \begin{equation} \label{eq:FP} Q_F= D(x) \Delta_\theta. \end{equation} While the Fokker-Planck equation may be formally derived from the radiative transfer model, it was shown in \cite{P} and derived rigorously in \cite{AS,GPR} that fractional Fokker-Planck models may be better macroscopic approximations in highly forward-peaked regimes. More precisely and following \cite{AS}, consider scattering kernels of the (generalized) Henyey-Greenstein form \begin{equation*}\label{HG_kernels} k_g(x,\theta,\theta') = \frac{D(x)}{\big(\frac12(1-g)^2+g(1-\theta'\cdot\theta)\big)^{\frac{n-1}{2}+s}}. \end{equation*} The scattering kernel is isotropic (it depends only on $\theta\cdot\theta'$) and is forward-peaked when $g<1$ is close to $1$. The standard Henyey-Greenstein phase function is obtained for $n=3$ and $s=\frac12$. We consider a parameter $0<s<1$.
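The forward-peakedness of these kernels is easy to quantify numerically. The sketch below evaluates the generalized Henyey-Greenstein kernel for $n=3$, $s=1/2$ (the standard case) and computes the mean scattering cosine $\langle\theta\cdot\theta'\rangle$, which approaches $1$ as $g\to1$; the quadrature parameters are illustrative.

```python
import math

def k_g(mu, g, s=0.5, n=3):
    # generalized Henyey-Greenstein kernel as a function of mu = theta . theta'
    # (n = 3, s = 1/2 recovers the standard Henyey-Greenstein phase function)
    return 1.0 / (0.5 * (1 - g) ** 2 + g * (1 - mu)) ** ((n - 1) / 2 + s)

def mean_cosine(g, m=20000):
    # mean scattering cosine under the normalized kernel on S^2,
    # computed with a midpoint rule in mu over [-1, 1]
    mus = [-1 + 2 * (i + 0.5) / m for i in range(m)]
    w = [k_g(mu, g) for mu in mus]
    return sum(mu * wi for mu, wi in zip(mus, w)) / sum(w)
```

A mean cosine close to $1$ corresponds to a small change of direction per scattering event, i.e., a transport mean free path much larger than the mean free path.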
The limiting scattering kernel as $g\to1$ formally takes the form \[ k_F(x,\theta,\theta') = \frac{D(x)}{\big(1-\theta'\cdot\theta\big)^{\frac{n-1}{2}+s}}. \] We then verify that the mean free path vanishes since $\int_{\mathbb S^{n-1}} k_F(x,\theta,\theta')d\theta'=+\infty$. However, $k_F(x,\theta,\theta')\big(u(\theta')-u(\theta)\big)$ {\em is} integrable for $u$ sufficiently regular and the transport mean free path is finite. Beam structures then appear when the latter is actually large. We will call a fractional Fokker-Planck equation the limiting kinetic model with scattering operator \begin{equation}\label{eq:fFP} Q_F u (\theta)=Q_{fFP} u(\theta) := \displaystyle\int_{\mathbb S^{n-1}} \frac{D(x)}{\big(1-\theta'\cdot\theta\big)^{\frac{n-1}{2}+s}} \big(u(\theta')-u(\theta)\big)d\theta'. \end{equation} We refer to \cite{AS,GPR} for a rigorous convergence of radiative transfer solutions to the above fractional Fokker-Planck model. We thus obtain a family of fractional Fokker-Planck equations for $0<s<1$ and a standard Fokker-Planck equation formally corresponding to the case $s=1$. It remains to consider the regime of propagation where beam structures may be observed. We obviously need a source term $f$ concentrated in phase space in the vicinity of a point $(x_0,\theta_0)$. We then need to ensure that scattering does not significantly alter the initial directional information. This imposes that the transport mean free path (the mean free path vanishes in all Fokker-Planck models) be large compared to the distance of propagation of interest. Consider again a transport model of the form \[ \theta\cdot\nabla_X u + Qu + \lambda u=0. \] The short-distance problem consists of defining $X=\eps^{2s} x$ (for some $s\in(0,1]$ depending on the physics of the problem) and then recasting the above as \[ \theta\cdot\nabla_x u + \eps^{2s} Q u + \eps^{2s} \lambda_\eps u=0. \] We then assume that $\lambda=\eps^{2s} \lambda_\eps>0$ is a small but leading order term.
This ensures that dissipation is large enough to prevent larger-scale phenomena from perturbing the analysis. This regime of large transport mean free path is the right one to preserve beam structures and leads (following the decomposition presented above) to the small diffusion (fractional) Fokker-Planck problem (see Fig. \ref{fig:beam}). \begin{figure}[ht] \centerline{ \includegraphics[scale=0.2]{Beam.png} } \caption{Spreading of the Fokker--Planck solution} \label{fig:beam} \end{figure} We observe that scattering is a perturbation of ballistic transport formally obtained when $\eps=0$. The latter is a reasonable approximation of the beam location but fails to account for any dispersion (beam spreading). It turns out that a higher-order expansion in $\eps$ yields the (fractional) Fermi pencil beam models and that these models accurately capture beam dispersion when $\eps$ is small. Summarizing the above derivations, the solutions $u$ describing the propagation of a beam generated by the source term $f$ and $u_S$ describing off-axis measurements satisfy the following coupled system: \begin{equation}\label{FP_and_offaxis} \left\{\begin{array}{l} \theta\cdot\nabla_x u + \eps^{2s} Q_{F} u + \lambda u=f,\\[2mm] \quad\theta\cdot\nabla_x u_S= Q_S(u). \end{array} \right. \end{equation} \section{Fermi pencil beam approximation} \label{sec:fpb} Neglecting nonlinear effects (which may be important), (fractional) Fokker-Planck models are appropriate models to describe laser beam propagation in turbulent atmospheres, as we recalled in the preceding section. Solving Fokker-Planck equations remains however challenging, both theoretically and computationally. As mentioned above, the simplest approximation is the free (ballistic) transport solution of \[ \theta\cdot \nabla_x u + \lambda u = 0. \] Ballistic models have been used previously to tackle the off-axis laser detection problem; see, for instance, \cite{HBDH,HB}.
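The ballistic model is explicit along characteristics: with a source $f$, the solution of $\theta\cdot\nabla_x u+\lambda u=f$ is $u(x,\theta)=\int_0^\infty e^{-\lambda t}f(x-t\theta)\,dt$. The sketch below (with a toy Gaussian source; all parameter values are illustrative) verifies the expected exponential decay along the beam axis and the smallness of the ballistic solution off axis.

```python
import math

lam = 0.4                       # absorption coefficient
theta = (0.0, 0.0, 1.0)         # beam direction e_n

def source(x):
    # narrow Gaussian source near the origin (toy stand-in for f)
    r2 = sum(c * c for c in x)
    return math.exp(-r2 / (2 * 0.05 ** 2))

def u(x, n_quad=4000, t_max=20.0):
    # ballistic solution by midpoint quadrature along the characteristic:
    # u(x, theta) = int_0^inf e^{-lam t} f(x - t theta) dt
    dt = t_max / n_quad
    tot = 0.0
    for i in range(n_quad):
        t = (i + 0.5) * dt
        y = tuple(c - t * d for c, d in zip(x, theta))
        tot += math.exp(-lam * t) * source(y) * dt
    return tot
```

Downstream of the source, the intensity along the axis decays exactly like $e^{-\lambda s}$ per unit distance $s$, while any transverse offset is suppressed at the rate of the source profile: the ballistic model carries no mechanism for beam spreading.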
However, they ignore important information related to the broadening of the beam, which is crucial, for instance, to determine parameters such as the source location. We thus propose to use a more accurate approximation of the Fokker-Planck equation, which takes the form of a Fermi pencil beam model. Our starting point for the beam model is a Fokker-Planck (FP) or fractional Fokker-Planck (fFP) equation with small diffusion given by \begin{equation}\label{u_fFP} \theta\cdot\nabla_x u + \lambda u + \epsilon^{2s}D(x)\mathcal{I}^s_\theta[u] = f,\quad (x,\theta)\in\mathbb R^n\times\mathbb S^{n-1}, \end{equation} for a narrow source term $f$ concentrated in the vicinity of a phase space point $(0,\vec{e}_n)\in\mathbb R^n\times\mathbb S^{n-1}$. To cover the local (FP) and non-local (fFP) cases simultaneously, we introduce the notation $\mathcal{I}^s_\theta[u]$ to represent the Laplace-Beltrami operator on the unit sphere $-\Delta_\theta u$ when $s=1$, while for $s\in(0,1)$ we set $\mathcal{I}^s_\theta[u] = (-\widetilde{\Delta}_\theta)^su - cu$ with constant $c=\frac{\Gamma(\frac{n-1}{2}+s)}{\Gamma(\frac{n-1}{2}-s)}>0$ (here $\Gamma$ stands for the Gamma function), and $(-\widetilde{\Delta}_\theta)^s$ defined in stereographic coordinates $v=\mathcal{S}(\theta)$ as the following version of the Laplacian \begin{equation}\label{fracLapB} [(-\widetilde{\Delta}_\theta)^su]_{\mathcal{S}} := \frac{1}{2^{2s}}\langle v\rangle^{n-1+2s}(-\Delta_v)^s\left(\frac{[u]_\mathcal{S}}{\langle \cdot\rangle^{n-1-2s}}\right). \end{equation} We now define the terms that appear in \eqref{fracLapB}. The stereographic coordinates and the associated surface measure are defined as (see \cite[p. 
35]{Lee}) $\mathcal{S}:\mathbb S^{n-1}\backslash\{(0,\dots,0,-1)\}\to \mathbb R^{n-1}$ where \[ \begin{aligned} &v = \mathcal{S}(\theta):= \frac{1}{(1+\theta_n)}(\theta_1,\dots,\theta_{n-1}), \quad \text{and}\quad d\theta=\frac{2^{n-1}}{\langle v\rangle^{2(n-1)}}dv,\quad\text{with}\quad \langle v\rangle = (1+|v|^2)^{1/2}; \end{aligned} \] while the inverse stereographic transformation is defined as \[ \theta = \mathcal{S}^{-1}(v) := \left(\frac{2v}{\langle v\rangle^2}, \frac{1-|v|^2}{\langle v\rangle^2}\right). \] The term $[u]_\mathcal{S}$ corresponds to the particle density $u$ in stereographic coordinates. Moreover, $(-\Delta_v)^s$ stands for the standard (Euclidean) fractional Laplacian given by the singular integral \[ (-\Delta_v)^sg(v) := c_{n-1,s}\;\text{p.v.}\int_{\mathbb R^{n-1}}\frac{g(v) - g(v+z)}{|z|^{n-1+2s}}dz,\quad \text{for}\quad s\in(0,1), \] for a constant $c_{n-1,s}^{-1}=\int_{\mathbb R^{n-1}}\frac{1-e^{i\hat{\xi}\cdot z}}{|z|^{n-1+2s}}dz>0$. We refer the reader to \cite{AS} for details on this version of the Laplace-Beltrami operator. \begin{remark} The above definition of the Laplacian differs by a factor $2^{-2s}$ from the one used by the authors in \cite{BP-preprint-20}. It allows for a consistent normalization of the diffusion coefficient when passing to stretched coordinates and deducing the Fermi pencil-beam equation. \end{remark} The above diffusion coefficient is scaled such that diffusion away from the main axis of the beam $\{t(0,\dots,0,1),\ t\geq0\}$ occurs at the scale $\eps$ in phase space. The pencil-beam approximation is based on neglecting backscattering, which is justified in narrow beams with small diffusion $\eps\ll1$. Such diffusion is naturally captured by the following {\em pencil-beam coordinates} (or {\em stretched} coordinates): \begin{equation}\label{pb_coordinates} X = ((2\epsilon)^{-1}x',x^n) \quad\text{and}\quad V = \epsilon^{-1}\mathcal{S}(\theta),\quad (x,\theta)\in\mathbb R^n\times\mathbb S^{n-1}. 
\end{equation} See Figure \ref{fig:coordinates} for the geometry of the pencil-beam coordinates. \begin{figure} \centerline{ \includegraphics[scale=0.25]{Coordinates.png} } \caption{Narrow beam in pencil-beam (or stretched) coordinates.} \label{fig:coordinates} \end{figure} \begin{definition}\label{def:Fpb} We say $U(X,V)$ is a (fractional) pencil-beam if it solves the (fractional) Fermi pencil-beam equation (FPB, respectively fFPB) \begin{equation}\label{fFPB} \partial_{X^n}U + V\cdot \nabla_{X'}U + \widetilde{\lambda} U + \widetilde{D} (-\Delta_V)^sU = 0,\quad (X,V)\in\mathbb R^n_+\times\mathbb R^{n-1} \end{equation} for $s=1$ (respectively, $s\in(0,1)$), and with a singular boundary source \begin{equation}\label{fFPB_source} U(X',0,V) = F_0\delta(X')\delta(V). \end{equation} For any $s\in(0,1]$, it takes the explicit form (in the Fourier domain) \begin{equation}\label{eq:solFPB} \mathcal{F}_{X',V}[U](\xi,X^n,\eta) = F_0 e^{-\int^{X^n}_0\widetilde{\lambda}(r)dr} e^{-\int^{X^n}_0|\eta+(X^n-t)\xi|^{2s}\widetilde{D}(t)dt}. \end{equation} \end{definition} The coefficients $\widetilde{\lambda}$ and $\widetilde{D}$ are functions of $X^n$ only and determine the attenuation and broadening of the pencil-beam along the main axis. They are related to the coefficients $\lambda$ and $D$ of the Fokker-Planck model as follows: with $\vec{e}_n = (0,\dots,0,1)\in \mathbb R^n$ and the map $X\mapsto\widetilde{X}:=X^n\vec{e}_n$, we set \begin{equation}\label{eq:tilde} \widetilde{\lambda}(X) := \lambda(\widetilde{X})\quad \text{and}\quad \widetilde{D}(X):= \frac{1}{2^{2s}}D(\widetilde{X}) \quad\text{for}\quad s\in(0,1]. \end{equation} Note that we use $\;\widetilde{\;}\;$ to restrict coordinates or coefficients to the $X^n$-axis (the beam's axis). 
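The explicit expression \eqref{eq:solFPB} can be verified directly: taking Fourier transforms in $(X',V)$ turns \eqref{fFPB} into $\partial_{X^n}\widehat U-\xi\cdot\nabla_\eta\widehat U+\widetilde\lambda\widehat U+\widetilde D|\eta|^{2s}\widehat U=0$. The following sketch (ours, with scalar frequencies, constant coefficients, and illustrative parameter values) checks this identity by finite differences:

```python
import math

# Finite-difference check of the Fourier-domain formula for U with scalar
# frequencies (xi, eta). Coefficient values below are illustrative only.
s, lam, D, F0 = 0.6, 0.2, 0.5, 1.0

def exponent(Xn, eta, xi, n=4000):
    """For constant D, int_0^{Xn} |eta + (Xn - t)*xi|^{2s} D dt equals
    int_0^{Xn} |eta + tau*xi|^{2s} D d tau (midpoint rule)."""
    h = Xn / n
    return D * h * sum(abs(eta + (i + 0.5) * h * xi) ** (2.0 * s)
                       for i in range(n))

def U_hat(Xn, eta, xi):
    return F0 * math.exp(-lam * Xn - exponent(Xn, eta, xi))

# Residual of  d/dXn U - xi d/deta U + lam U + D|eta|^{2s} U = 0.
Xn, eta, xi, h = 1.3, 0.8, -0.4, 1e-4
dXn = (U_hat(Xn + h, eta, xi) - U_hat(Xn - h, eta, xi)) / (2.0 * h)
deta = (U_hat(Xn, eta + h, xi) - U_hat(Xn, eta - h, xi)) / (2.0 * h)
residual = dXn - xi * deta + (lam + D * abs(eta) ** (2.0 * s)) * U_hat(Xn, eta, xi)
print(residual)
```

At $\xi=\eta=0$ the formula also reproduces the expected total mass $F_0e^{-\int\widetilde\lambda}$, consistent with attenuation being the only mass-loss mechanism.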
A pencil-beam solution may be written in the following self-similar form: \begin{equation}\label{Uself-sim} U(X,V) = c_{n-1}\frac{F_0}{(X^n)^{n-1+\frac{n-1}{s}}}\mathfrak{J}\left(\frac{X'}{(X^n)^{1+\frac{1}{2s}}},X^n,\frac{V}{(X^n)^{\frac{1}{2s}}}\right)\text{exp}\left(-\int^{X^n}_0\widetilde{\lambda}(t)dt\right) \end{equation} for some appropriate constant $c_{n-1}>0$ and with $\mathfrak{J}$ defined in the Fourier domain by \[ \mathcal{F}_{X',V}[\mathfrak{J}](\xi,X^n,\eta):=\text{exp}\left(-\int^1_0|\eta+t\xi|^{2s}\widetilde{D}(X^n(1-t))dt\right). \] For more details on this and the properties of $\mathfrak{J}$, see \cite{BP-preprint-20}. To quantify the accuracy of the ballistic and FPB approximations, we consider a metric that penalizes by {\em how far} particles are in the approximate model compared to where they should be in an FP model. More precisely, we have: \begin{definition} Given two positive Radon measures $f,g$, their {\em $(1,\kappa)$-Wasserstein distance} is given by $$ \mathcal{W}^1_{\kappa}(f,g):=\sup\left\{\int_{\mathbb R^n\times \mathbb S^{n-1}}\psi(f-g) : \psi \text{ Lipschitz with }\|\psi\|_\infty\leq 1,\; \text{Lip}(\psi)\leq \kappa\right\}. $$ \end{definition} The gauge of how far particles are from where they should be is given in units of $\kappa^{-1}$ in the sense that $\mathcal{W}^1_{\kappa}(\delta_x,\delta_y)=\kappa|y-x|$ (when $\kappa|y-x|\leq2$). The metric captures beam broadening in the sense that $\mathcal{W}^1_{\kappa}(\varphi_{\epsilon_1},\varphi_{\epsilon_2})$ is proportional to $\kappa|\epsilon_1-\epsilon_2|$, for $\varphi_{\epsilon_i}(x) = \frac{1}{\epsilon_i^{n}}\varphi\left(\frac{x}{\epsilon_i}\right)$, $i=1,2$, two approximations to the delta function. 
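The identity $\mathcal{W}^1_\kappa(\delta_x,\delta_y)=\kappa|y-x|$, valid as long as the bound $\|\psi\|_\infty\leq1$ is not active, can be illustrated with the explicit maximizer, a clipped linear function. A one-dimensional sketch of ours:

```python
# For two point masses delta_x and delta_y on the line, the supremum in the
# dual definition is attained by a clipped linear test function
#   psi(z) = clamp(kappa * (z - (x + y)/2), -1, 1),
# which is kappa-Lipschitz and bounded by 1, giving
#   psi(x) - psi(y) = min(kappa * |x - y|, 2).
def psi(z, x, y, kappa):
    return max(-1.0, min(1.0, kappa * (z - 0.5 * (x + y))))

def w1_kappa_deltas(x, y, kappa):
    """Value of the dual objective at the maximizer psi."""
    return abs(psi(x, x, y, kappa) - psi(y, x, y, kappa))

val_small = w1_kappa_deltas(0.3, 0.1, 4.0)   # kappa*|x-y| = 0.8 < 2
val_clip = w1_kappa_deltas(5.0, 0.0, 4.0)    # kappa*|x-y| = 20, clipped at 2
print(val_small, val_clip)
```

The second evaluation shows the saturation at $2$ imposed by $\|\psi\|_\infty\leq1$: points farther apart than $2\kappa^{-1}$ are all "maximally far" for this metric.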
We assume that the source term $f\in L^\infty_{x,\theta}\cap L^1_{x,\theta}$ is highly concentrated around $(0,\vec{e}_n)\in\mathbb R^n\times\mathbb S^{n-1}$ and more precisely that \begin{itemize} \item[(a)] $f\in L^\infty_{x,\theta}\cap L^1_{x,\theta}$ is compactly supported and for some $\delta>0$ small, \[\left| \int f\varphi dxd\theta - F_0\varphi(0,\vec{e}_n)\right|\lesssim \delta \qquad \mbox{for all $\varphi \in C(\mathbb R^n\times\mathbb S^{n-1})$}. \] \end{itemize} In what follows, we denote by $u$ the solution to the FP or fFP equation in \eqref{u_fFP} with a source as above. We denote by $v$ the solution to the ballistic transport equation \begin{equation}\label{eq:ballistic} \theta\cdot\nabla_x v + \lambda v = f,\quad (x,\theta)\in\mathbb R^n\times\mathbb S^{n-1}; \end{equation} while the pencil-beam approximation is defined as \begin{equation}\label{Fpb} \mathfrak{u}(x,\theta) = \frac{H(x^n)}{(2\epsilon)^{2(n-1)}}U((2\epsilon)^{-1}x',x^n,\epsilon^{-1}\mathcal{S}(\theta)), \end{equation} with $H$ the Heaviside step function and $U$ the solution to the FPB equation when $s=1$, or the fFPB equation otherwise. This is the pullback of $U$ with respect to the coordinate transformation \eqref{pb_coordinates}, extended by zero to $x^n<0$, and amplified by the factor $(2\epsilon)^{-2(n-1)}$ to preserve the $L^1$-norm to leading order. The next result (see \cite[Theorem 1.1]{BP-SIMA-20} and \cite[Theorem 1.1]{BP-preprint-20}) summarizes the approximation errors between the various beam models in terms of the small diffusion magnitude $\epsilon>0$ and the resolution parameter $\kappa$. \begin{theorem}\label{thm:W_est} Assume that $f$ satisfies (a) above with $\delta\lesssim \kappa \epsilon^{2s}$. 
For a fractional exponent $s\in(0,1)$, we have that for any $s'\in (0,s)$ in dimension $n\geq 3$, or any $s'\in(2s-1,s)$ in dimension $n=2$, there exist positive constants $A(n,s),B(n,s,s')$ and $C(n,s,s')$ such that \begin{equation}\label{ineq_thm} A\min\{\kappa\epsilon,1\}\leq\mathcal{W}^1_\kappa(u,v)\leq B (\kappa\epsilon)^{\min\{2s',1\}} \quad\text{and}\quad \mathcal{W}^1_\kappa(u,\mathfrak{u})\leq C \kappa^{s'}\epsilon^{2s'}, \end{equation} where $B\to\infty$ as $s'\to s$ when $s\leq 1/2$ (otherwise $B$ is independent of $s'$), and $C\to\infty$ as $s'\to s$. For the local case $s=1$, there are positive constants $A(n),B(n),C(n)$ such that \[ \mathcal{W}^1_\kappa(u,v)\leq B\kappa\epsilon \quad\text{and}\quad \mathcal{W}^1_\kappa(u,\mathfrak{u})\leq C \kappa\epsilon^2, \] and for $\kappa \gtrsim \epsilon^{-1}$ we also have $A\kappa\epsilon \leq \mathcal{W}^1_\kappa(u,v)$. \end{theorem} The above result states that the Fermi pencil-beam model $\mathfrak{u}$ is always a more accurate approximation of the Fokker-Planck solution $u$ than the ballistic model $v$. Moreover, when $\kappa=\eps^{-1}$, i.e., when errors in the location of the particles are gauged in the natural scale $\eps$ of beam spreading, the ballistic transport is inaccurate, as is obvious physically, while the Fermi pencil-beam model retains a reasonable accuracy (of order $\eps$ when $s=1$, for instance). From an inversion perspective, the fractional Fermi pencil-beam model has several advantages: it accurately models beam spreading, which ballistic approximations do not (and hence cannot possibly be used to reconstruct the source location), and at the same time it has a reasonably explicit expression, as recalled in \eqref{eq:solFPB}, which is not the case for the more accurate (fractional) Fokker-Planck model. Leaving the fraction $s$ unknown also provides an additional parameter to model the statistical properties of the (unknown) turbulence. 
The next section on the off-axis reconstruction problem uses the Fermi pencil-beam model to describe beam spreading. \section{Beam parameter reconstructions} \label{sec:rec} \subsection{Off-axis measurements} Off-axis measurements are modeled following the decomposition $w=u+u_S$ introduced in section \ref{sec:kmodels}, with $u$ and $u_S$ solutions to \eqref{FP_and_offaxis} with $f\geq0$. In this decomposition, $u$ models the beam's particle density, which we can approximate, for instance, by letting $u=\mathfrak{u}$ be the pencil-beam approximation \eqref{Fpb}, while $u_S$ represents the off-axis contribution, thus satisfying \begin{equation}\label{off_axis_light} \theta\cdot\nabla_x u_S(x,\theta) = Q_S(u):=\int_{\mathbb S^2} \sigma(x,\zeta,\theta)u(x,\zeta)d\zeta. \end{equation} For the rest of the section, we assume that $\sigma=\sigma(x)$ is isotropic to simplify the analysis. We use a system of coordinates with origin at the source of the laser and with $\vec{e}_3=(0,0,1)$ the main direction of the beam. We define $x=(x^1,x^2,x^3)\in\mathbb R^3$ with $x'=(x^1,x^2)\in \mathbb R^2$, and similarly $\theta=(\theta^1,\theta^2,\theta^3)\in \mathbb S^2$. We then deduce that at $(x,\theta)$ the density of photons takes the explicit form \begin{equation*} u_S(x,\theta)=\int^\infty_0 \int_{\mathbb S^2} \sigma(x-t\theta)u(x-t\theta,\zeta)d\zeta dt. \end{equation*} A flat screen (array) optical camera is modeled as follows. For $x_0$ the center of the array, $L_0$ its fixed radius (with arrays assumed circular for concreteness), and $\theta_0\in \mathbb S^2$ its direction, we assume that a camera measures light intensity on the set $\mathcal{C}_{x_0,\theta_0}=\{(x,\theta_0):(x-x_0)\cdot\theta_0=0,\; |x-x_0|<L_0\}$, i.e., a disk orthogonal to $\theta_0$ of radius $L_0$ centered at $x_0$. In this idealized model, an array measures light that comes (exactly) orthogonally to the screen. We assume that the camera can be arbitrarily rotated around the point $x_0$. 
We also assume available measurements for camera centers $x_0\in X\subset\mathbb R^3$. We therefore assume measurements known for \begin{equation}\label{eq:measdata} \mathbb R^3\times \mathbb S^2 \supset \Sigma:= \cup_{x_0\in X} \{(x,\theta)\in \mathbb R^3\times \mathbb S^2; \ (x-x_0)\cdot\theta=0,\ |x-x_0|< L_0\}. \end{equation} The off-axis measurements are therefore characterized by $m(x,\theta)=u_S(x,-\theta)$ for $(x,\theta)\in\Sigma$. We define the measurement operator $\mathcal{M}$, mapping the unknown parameters to the available information, as \begin{equation*} \mathcal{M}:(f,\lambda,D,s,\sigma) \longmapsto m(x,\theta) := u_S(x,-\theta)|_{\Sigma}, \end{equation*} for measurements given explicitly by the integrals \begin{equation}\label{meas1} m(x,\theta)= \int^\infty_0 \int_{\mathbb S^2} \sigma(x+t\theta)u(x+t\theta,\zeta)d\zeta dt\quad\forall (x,\theta)\in \Sigma. \end{equation} Note that the system of coordinates used above, and in particular its origin at the source location, remains unknown. Our capability to recover any of the parameters $(f,\lambda,D,s,\sigma)$ depends on the available measurement set $\Sigma$ and on any prior knowledge we may have about the parameters. We now present analytical reconstructions of some parameters of the problem. We start with the determination of the beam's axis and then address the inverse problem of determining the location of the laser's source under suitable assumptions. \subsection{Determining the beam's axis} A triangulation procedure is first used to determine an approximation of the central axis of the beam. The beam has a spatial width of order $\eps\ll1$. 
At this level of approximation, we may model the measurements using the ballistic model $v$ solution of $\theta\cdot\nabla v =f_\eps$ with $f_\eps$ a source term of the form $f_\eps(x,\theta) = \varphi_{\epsilon}(x)\delta_{\vec{e}_3}(\theta)$, with $\varphi_{\epsilon}(x)=\frac{1}{\epsilon^3}\varphi(\epsilon^{-1}x)$, $\varphi$ a smooth nonnegative and compactly supported function near $x=0$, and where $\delta_{\vec{e}_3}$ stands for the Dirac delta on the unit sphere with support $\{\vec{e}_3\}$. Since $v$ is given explicitly by \[ v(x,\theta)=\int^\infty_0 f_\eps(x-t\theta,\theta)dt = \int^\infty_0 \varphi_{\epsilon}(x-t\theta)\delta_{\vec{e}_3}(\theta)dt, \] the observations (which follow by replacing $u$ with $v$ in \eqref{meas1}) take the form \[ \begin{aligned} m(x,\theta) &= \int^\infty_0 \int_{\mathbb S^2} \int^\infty_0\sigma(x+t\theta) \varphi_{\epsilon}(x+t\theta-\tau\zeta)\delta_{\vec{e}_3}(\zeta)d\tau d\zeta dt \\ &= \int^\infty_0 \int^\infty_0\sigma(x+t\theta) \varphi_{\epsilon}(x+t\theta-\tau \vec{e}_3)d\tau dt. \end{aligned} \] Consider now a camera centered at $x_0$. We orient the camera so as to maximize the intensity measured at $(x_0,\theta_0)$ for $\theta=\theta_0$. For this choice of $\theta$, we observe that the measurements provide another direction $\phi_0\in \mathbb S^2$ such that the measurements $m(x_0+t\phi_0,\theta_0)$ are maximal in the sense that $m(x_0+t\phi_0+\delta \theta_0\times \phi_0,\theta_0)$ decay rapidly in $|\delta|$. This requires that the detector array be sufficiently large, $L_0\gg\eps$; see Fig. \ref{fig:geom} for an illustration. With the above information, we obtain that the main direction of the beam belongs to the plane defined by $(x_0,\theta_0,\phi_0)$. Let us now assume the existence of other detectors at $x_j$ for $1\leq j\leq J$ and therefore other planes $(x_j,\theta_j,\phi_j)$ constructed as above. Then, the main direction of the beam belongs to the intersection of these planes. 
We thus require a minimum of two detectors at $x_0$ and $x_1$ such that the corresponding planes are different and hence intersect along the main direction of the beam; see Fig. \ref{fig:geom}. \begin{figure} \begin{center} \tdplotsetmaincoords{65}{110} \begin{tikzpicture}[tdplot_main_coords,font=\sffamily] \fill[red] (3,0,0) circle (1.5pt); \fill[red] (2,0,0) circle (1.5pt); \fill[red] (1,0,0) circle (2pt); \fill[red] (0,0,0) circle (2.5pt); \fill[red] (-1,0,0) circle (3pt);\fill[red] (-2,0,0) circle (3.7pt); \fill[red](-3,0,0) circle (5pt); \draw[thick] (3,0,0) -- (-3,0,0); \draw[fill=red,opacity=0.3] (3,-3,-2) -- (3,3,2) -- (-3,3,2) -- (-3,-3,-2) -- cycle; \draw[thick] (3,0,0) -- (-3,0,0); \draw[fill=blue,opacity=0.3] (-3,-3,0) -- (-3,3,0) -- (3,3,0) -- (3,-3,0) -- cycle; \draw[fill=green,opacity=0.5] (-.5+1,-3,-.5) -- (-.5+1,-3,.5) -- (.5+1,-3,.5) -- (.5+1,-3,-.5) --cycle; \draw[thick,red] (.5,-3,0) -- (1.5,-3,0); \draw[fill=green,opacity=0.5] (-.5-2,-3,-.5) -- (-.5-2,-3,.5) -- (.5-2,-3,.5) -- (.5-2,-3,-.5) --cycle; \draw[ultra thick,red] (-2.5,-3,0) -- (-1.5,-3,0); \draw[fill=green,opacity=0.2] (.35,-3.1,-1.8) -- (-.35,-3.1,-1.8) -- (-.35,-2.9,-2.2) -- (.35,-2.9,-2.2) --cycle; \draw[very thick,red] (.35,-3,-2) -- (-.35,-3,-2); \node[anchor=south west,align=center] (line) at (8,7,2.5) {Source}; \draw[-latex] (line) to[out=180,in=-75] (3.05,0.05,-0.05); \node[anchor=south west,align=center] (line) at (0,4,1.9) {Beam spreading}; \draw[-latex] (line) to[out=180,in=75] (-3.05,0.05,0.15); \end{tikzpicture} \end{center} \caption{Geometry of measurements: Red: beam spreading; green: three detectors used for triangulation and beam spreading along (now approximately known) axis of propagation.} \label{fig:geom} \end{figure} The above simple triangulation procedure provides an approximate location of the beam axis with an error proportional to the beam width in the absence of additional information on its structure. 
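The plane-intersection step of this triangulation can be sketched numerically as follows (a hypothetical illustration of ours, with synthetic camera positions; the true axis is taken to be the $x^3$-axis through the origin):

```python
# Each camera j yields a plane through x_j spanned by theta_j (viewing
# direction) and phi_j (apparent beam direction); the beam axis is the
# intersection line of two such planes. Synthetic setup, names ours.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

def normalize(a):
    n = dot(a, a) ** 0.5
    return tuple(p / n for p in a)

def plane(x0, theta, phi):
    """Plane through x0 spanned by theta and phi, as (unit normal, offset)."""
    n = normalize(cross(theta, phi))
    return n, dot(n, x0)

def intersect(p0, p1):
    """Direction and one point of the line common to two non-parallel planes."""
    (n0, d0), (n1, d1) = p0, p1
    direction = normalize(cross(n0, n1))
    g = dot(n0, n1)
    a = (d0 - d1 * g) / (1.0 - g * g)   # point = a*n0 + b*n1 lies in both
    b = (d1 - d0 * g) / (1.0 - g * g)
    point = tuple(a * p + b * q for p, q in zip(n0, n1))
    return direction, point

# Two cameras looking at a beam along the x^3-axis through the origin.
x0, x1 = (5.0, 0.0, 3.0), (0.0, 4.0, 1.0)
theta0 = normalize(tuple(-c for c in x0))   # each camera points at the beam
theta1 = normalize(tuple(-c for c in x1))
phi = (0.0, 0.0, 1.0)                       # apparent beam direction
axis_dir, axis_pt = intersect(plane(x0, theta0, phi), plane(x1, theta1, phi))
print(axis_dir, axis_pt)                    # recovers the x^3-axis
```

Two non-parallel planes suffice, as stated above; additional detectors only improve robustness to the $O(\eps)$ error in each plane.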
As indicated above, it relies on (i) a proper orientation of the camera; and (ii) a sufficiently large array (or aperture) $L_0$ compared to the beam's width to identify the direction $\phi_j$. For a determination of the beam's axis based on stochastic properties of the light measurements, which are entirely neglected in the deterministic models considered in this paper, see \cite{BG18}. \subsection{Beam structure measurements}\label{sec:beam_meas} We obtained in the preceding section an approximation of the main direction of the beam. The objective of this section is to use the (fractional) Fermi pencil-beam models to reconstruct additional features of the beam including its source location. We therefore assume measurements given by \eqref{meas1} with $u=\mathfrak{u}$ a Fermi pencil-beam solution. We assume now that $\lambda$, $D$ and $\sigma$ are {\em constant}. In particular, $\widetilde{\lambda}=\lambda$ and $\widetilde D=\frac{1}{2^{2s}}D$ in \eqref{eq:tilde}; see definition \ref{def:Fpb}. However, we do not assume them to be known. We also assume here the source term to be a delta function at $x=0$ and direction $\theta=\vec{e}_3$. To relate the measurements $m(x,\theta)$ in \eqref{meas1} for some $(x,\theta)\in \Sigma$, to the unknown coefficients, we first notice that \[ \begin{aligned} m(x,\theta) &= \sigma\int^\infty_0 \int_{\mathbb S^2} \mathfrak{u}(x+t\theta,\zeta)d\zeta dt\\ &= \sigma\int^\infty_0 \int_{\mathbb S^2}\frac{1}{(2\epsilon)^{4}}U((2\epsilon)^{-1}(x'+t\theta'),x^3+t\theta^3,\epsilon^{-1}\mathcal{S}(\zeta))d\zeta dt\\ &= \sigma\int^\infty_0 \int_{\mathbb R^2}\frac{1}{(2\epsilon)^{2}}U((2\epsilon)^{-1}(x'+t\theta'),x^3+t\theta^3,V)\langle \epsilon V\rangle^{-2(3-1)}dV dt. \end{aligned} \] For measurements taken at a direction perpendicular to the beam (i.e., $\theta\cdot \vec{e}_3=0$), and approximating $\langle \epsilon V\rangle^{-4}$ by $1$, we obtain that \[ \begin{aligned} m(x,\theta) & = \sigma\int^\infty_0 \int_{\mathbb R^2}\frac{1}{(2\epsilon)^2} U((2\epsilon)^{-1}(x'+t\theta'),x^3,V)dVdt + E(\epsilon), \\ E(\epsilon) = E(\epsilon;x,\theta) &:= \sigma\int^\infty_0 \int_{\mathbb R^2}\frac{1}{(2\epsilon)^2}U((2\epsilon)^{-1}(x'+t\theta'),x^3,V)\left(\langle \epsilon V\rangle^{-4}-1\right)dV dt. 
\end{aligned} \] Following \eqref{eq:solFPB} with $n=3$, we compute \[ \begin{aligned} m(x,\theta)-E(\epsilon)&=\sigma\int^\infty_0 \int_{\mathbb R^2}\frac{1}{(2\epsilon)^2} U((2\epsilon)^{-1}(x'+t\theta'),x^3,V)dVdt\\ &=\sigma\int^\infty_0\frac{ 1}{(2\epsilon)^{2}} \mathcal{F}^{-1}_{X'}\left[\mathcal{F}_{X',V}[U](\xi,x^3,0)\right] \left((2\epsilon)^{-1}(x'+t\theta')\right)dt\\ &= \sigma F_0\int^\infty_0\frac{e^{-\lambda x^3}}{(2\epsilon)^{2}} \mathcal{F}^{-1}_{X'}\left[e^{-2^{-2s}D|\xi|^{2s}\int^{x^3}_0|x^3-z|^{2s}dz}\right] \left((2\epsilon)^{-1}(x'+t\theta')\right) dt\\ &= \sigma F_0\int^\infty_0 e^{-\lambda x^3} \mathcal{F}^{-1}_{X'}\left[e^{-\epsilon^{2s}D|\xi|^{2s}\int^{x^3}_0|x^3-z|^{2s}dz}\right] \left(x'+t\theta'\right) dt, \end{aligned} \] where the last line follows from the scaling properties of the Fourier transform: $\mathcal{F}[f(\delta x)](\xi) = \delta^{-2}\mathcal{F}[f(x)](\delta^{-1}\xi)$. Therefore, \begin{equation}\label{meas3} m(x,\theta)= \int^\infty_0 g(x+t\theta) dt + E(\epsilon) \end{equation} with \begin{equation}\label{meas32} g(x):= \sigma F_0e^{-\lambda x^3}\mathcal{F}^{-1}_{X'}\left[e^{-A_{s}(x^3)|\xi|^{2s}}\right] \left(x'\right), \qquad A_{s}(x^3) = \epsilon^{2s}D\int^{x^3}_0|x^3-z|^{2s} dz. \end{equation} We now estimate $E(\epsilon)$. After a change of variables $t\to t/(2\eps)$ in the above expression, we observe that \[ \begin{aligned} m(x,\theta) & = \frac{\sigma}{2\epsilon}\int^\infty_0 \int_{\mathbb R^2}U((2\epsilon)^{-1}x'+t\theta',x^3,V)\langle \epsilon V\rangle^{-4}dV dt. \end{aligned} \] Let us decompose $x=(x\cdot\theta)\theta+(x\cdot\theta^\perp)\theta^\perp+x^3\vec{e}_3$ with $\theta^\perp=\vec{e}_3\times\theta$. A measurement array ${\mathcal C}$ is parametrized by $(x\cdot\theta^\perp,x^3)$ with a support of order $O(\epsilon)$ in the first variable. Since the integral of $m(\cdot,\theta)$ over ${\mathcal C}$ is independent of $\epsilon$, we observe that $\|m(\cdot,\theta)\|_{L^\infty({\mathcal C})}=O(\epsilon^{-1})$. 
The relative error between our measurements and the line integrals $\int^\infty_0 g(x+t\theta) dt$ is thus given by \[ \frac{\|m(\cdot,\theta) - \int^\infty_0 g(\cdot+t\theta) dt\|_{L^\infty(\mathcal{C})}}{\|m(\cdot,\theta)\|_{L^\infty(\mathcal{C})}} \lesssim \|\epsilon E(\epsilon)\|_{L^\infty(\mathcal{C})}. \] From the definition of $E(\epsilon)$ and a Taylor expansion of $\langle \epsilon V\rangle^{-4}-1$, we find that \[ |2\eps E(\epsilon)|\leq C_{s'} \epsilon^{2s'} \sigma \int^\infty_0 \int U((2\epsilon)^{-1}x'+t\theta',x^3,V)|V|^{2s'}dVdt, \] for some constant $C_{s'}>0$. We now recall \eqref{Uself-sim}. From similar estimates on $\mathcal{F}^{-1}[e^{-t|\xi|^{2s}}](x)$ (see \cite{BG}), we deduce the decay properties of the fundamental solution $\mathfrak{J}$: for a fixed $X^n>0$ there are constants $C>c>0$ such that \[ \frac{c}{(1+|X'|^2+|V|^2)^{\frac{1}{2}(n-1+2s)}}\leq \mathfrak{J}(X',V;X^n)\leq \frac{C}{(1+|X'|^2+|V|^2)^{\frac{1}{2}(n-1+2s)}}. \] Then \[ |2\eps E(\epsilon)|\leq C_{x^3,\sigma,s'}\epsilon^{2s'}\int^\infty_0\int_{\mathbb R^2}\frac{ |V|^{2s'}}{(1+|(2\epsilon)^{-1}x'+t\theta'|^2+|V|^2)^{\frac{1}{2}(n-1+2s)}}dVdt, \] while for the local case $s=1$ (Gaussian beam), the decay of $\mathfrak{J}$ is exponential. Consequently, since $|x'|\gg\epsilon$ (measurements are taken far away from the beam), the integrals above are bounded independently of $\epsilon$ and we conclude that \[ \frac{\|m(\cdot,\theta) - \int^\infty_0 g(\cdot+t\theta) dt\|_{L^\infty(\mathcal{C})}}{\|m(\cdot,\theta)\|_{L^\infty(\mathcal{C})}} = O(\epsilon^{2s'}) \] for any $0<s'<s<1$ (and in fact $s'=1$ if $s=1$, as one may verify). The constant in the error estimate blows up as $s'$ approaches $s\in(0,1)$. 
Notice that $g(x',x^3)=g(|x'|,x^3)$ is radial in the transverse variable. Since measurements are obtained away from the beam, we observe that $m(x,\theta)$ gives, to leading order in $\epsilon$, the integral $\int_{\ell} g(y)d\ell(y)$ with $\ell$ the line passing through $x$ in the direction $\theta$. 
The previous analysis then shows that the available measurements provide information on the constitutive coefficients of the (f)FPB models up to an error that is consistent with the approximation of (f)FP by (f)FPB. \medskip We now describe a measurement setting allowing us to reconstruct $g(x)$ for suitable values of $x$. In the above simplified setting, the beam has a rotational symmetry in $x'$. Consider a camera centered at $x_0$. We choose the orientation of the camera $\theta_0$ such that the measurements at $x_0+r\phi_0+t\psi_0$ for $\psi_0=\phi_0\times\theta_0$ are symmetric in $t\to-t$ and such that $\theta_0$ is orthogonal to the estimated direction of the beam. Up to an error proportional to $\eps$, the camera is oriented as depicted in Fig.~\ref{fig:geom} and $\phi_0=\vec{e}_3$ is the main direction of the beam. We therefore have access to the measurement $m(x_0+t\phi_0+\tau \psi_0,\theta_0)$ for $t^2+\tau^2<L_0^2$. Let us fix $t$ with $|t|$ sufficiently small. Then $\tau\mapsto m(x_0+t\phi_0+\tau \psi_0,\theta_0)$ provides (an $\eps$-approximation of) the line integral \[ \tau\mapsto R_1g(\tau)= \int_{\mathbb R} g(x_0+t\phi_0+\tau\psi_0+\mu\theta_0)d\mu. \] Since $L_0\gg\epsilon$, we may assume that $R_1g(\tau)=0$ for large values of $\tau$. 
For a fixed value of $t$ corresponding to a fixed value of the coordinate $x^3=x_0^3+t\phi_0^3$, we therefore obtain the line integrals (for the two-dimensional set of lines orthogonal to $\phi_0=\vec{e}_3$) of the function $x'\mapsto g(x',x^3)=g(|x'|,x^3)$ since line integrals in other directions $\theta$ such that $\theta\cdot \phi_0=0$ are obtained by cylindrical symmetry. We may then apply a standard inverse Radon transform (see, for instance, \cite{Nat}) to $R_1g(\tau)$ to recover $g(r,z_0)$ for all $r=|x'|$ and $z_0=x_0^3+t$. Here, $(0,0,z_0)$ corresponds to the intersection point between the beam axis $\mu\mapsto \mu\vec{e}_3$ and the orthogonal (observation) line $\mu\mapsto x_0+t\vec{e}_3 + \mu\theta_0$. The inverse Radon transform of rotationally symmetric functions in fact admits the following explicit expression: \begin{equation}\label{eq:invR} g(r,z_0) = -\frac{1}{\pi}\int^\infty_r\frac{\frac{d}{d\tau}R_1g(\tau)}{\sqrt{\tau^2-r^2}}d\tau. \end{equation} In the presence of several detector arrays modeled by several choices of $x_j\in X$, we obtain from the previous formula the reconstruction of $g(r,x^3)$ for values of $x^3$ sufficiently close to $x^3_j$, $0\leq j\leq J$, and for all $r\geq 0$. See Fig.\ref{fig:geom} for a case with $j=3$ allowing us to reconstruct the beam profile at several positions along the beam axis. \subsection{Determining the beam's main features.}\label{sec:det_beam_feat} Let us assume that we have access to $g(x',x^3)$ as described above for several values along the profile $z=x^3$. We now propose a reconstruction of the source location (where the beam's source is modeled as a delta function) for different measurement scenarios and prior constraints. We assume that the source is $F_0>0$ times a delta function at $x=0$ and $\theta=\vec{e}_3$. From \eqref{meas32}, we thus obtain that $g(x',z)=C_0 e^{-\lambda z} \mathcal{F}^{-1}_{X'}\left[e^{-A_{s}(z)|\xi|^{2s}}\right] \left(x'\right)$ for $z>0$, where $C_0=F_0 \sigma$. Integrating the above expression in $x'\in\mathbb R^2$ amounts to evaluating the Fourier transform at $\xi=0$ so that $\int_{\mathbb R^2} g(x',z)dx' = C_0 e^{-\lambda z}$. By computing this quantity for two values $z_0<z_1$ and taking the ratio, we reconstruct $e^{-\lambda(z_1-z_0)}$ and hence $\lambda$ since $z_1-z_0$ is known (as the distance between the two detector arrays). Thus, $C_0e^{-\lambda z_0}$ and $\lambda$ are known at this stage, while $C_0$ and $z_0$ remain unknown. We now use the reconstructed profile $g(x)$ only at the beam's center $g(z):=g(0,z)$. We assume $g(z)$ known for a number of values of $z\in Z$ as described in the preceding paragraph. The simplest setting is when the set $Z$ is finite.
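As an illustration not contained in the original analysis, the Abel-type inversion formula \eqref{eq:invR} may be implemented numerically; the substitution $\tau=\sqrt{r^2+u^2}$ removes the inverse square-root singularity at $\tau=r$. The Gaussian test profile below is an assumption chosen purely for validation, since its line integrals are known in closed form.

```python
import math

def abel_invert(R1g, r, h=1e-3, umax=10.0, n=4000):
    """Invert line-integral data R1g (a function of tau) at radius r via
    g(r) = -(1/pi) * int_r^inf R1g'(tau)/sqrt(tau^2 - r^2) dtau,
    using the substitution tau = sqrt(r^2 + u^2), which turns the measure
    dtau/sqrt(tau^2-r^2) into du/sqrt(r^2+u^2) and removes the singularity."""
    dR = lambda tau: (R1g(tau + h) - R1g(tau - h)) / (2 * h)  # central difference
    total, du = 0.0, umax / n
    for k in range(n):
        u = (k + 0.5) * du                       # midpoint rule in u
        tau = math.sqrt(r * r + u * u)
        total += dR(tau) / tau * du
    return -total / math.pi

# Synthetic data: g(r) = exp(-r^2) has line integrals R1g(tau) = sqrt(pi) exp(-tau^2).
R1g = lambda tau: math.sqrt(math.pi) * math.exp(-tau * tau)
print(abel_invert(R1g, 0.5))   # ≈ exp(-0.25) ≈ 0.7788
```

The recovered value agrees with the exact profile $g(r)=e^{-r^2}$ at $r=1/2$, confirming the normalization of \eqref{eq:invR}.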
From the above considerations, we thus obtain \begin{equation}\label{eq:g0} g(z) = C_0 e^{-\lambda z}\mathcal{F}^{-1}_{X'}\left[e^{-A_{s}(z)|\xi|^{2s}}\right] \left(0\right) = \frac{C_0 e^{-\lambda z}}{4\pi^2}\int e^{-A_{s}(z)|\xi|^{2s}} d\xi. \end{equation} Unlike the ballistic model, the Fermi pencil beam model accounts for beam dispersion, which is represented by the last integral above. Moreover, following a computation summarized in the appendix, we observe that \begin{equation}\label{eq:A0} A_s(z) = \Big( \dfrac{C_0 e^{-\lambda z}\Gamma(1/s)}{4\pi sg(z)}\Big)^s. \end{equation} Since $z$ takes only discrete values, as we do not expect to be able to monitor the beam width for all values of $z=x^3$, we assume a constant diffusion coefficient $D$, hence $\widetilde D=\frac{1}{2^{2s}}D$ (see \eqref{eq:tilde}). We then verify from the definition in \eqref{meas32} that \begin{equation}\label{eq:As_Dctt} A_s(z) = \dfrac{\eps^{2s}D}{2s+1} z^{2s+1}, \end{equation} so that after inversion: \begin{equation}\label{eq:form1} z = \left(\frac{2s+1}{\eps^{2s}D}\right)^{\frac{1}{2s+1}}\left(\dfrac{C_0 e^{-\lambda z}\Gamma(1/s)}{4\pi sg(z)}\right)^{\frac{s}{2s+1}}. \end{equation} Therefore, if $\eps^{2s}D$ and $s$ are {\em known a priori}, the measurement of $g(z_0)$ for any $z_0$ allows us to reconstruct $z_0$ itself, i.e., the distance from the measured point along the axis to the source location (since $C_0e^{-\lambda z_0}$ is known). Note that $\eps^{2s}D$ is the natural diffusion coefficient appearing in \eqref{u_fFP}. We now consider a more general setting where $\eps^{2s}D$ and $s$ are also unknown. We may recast the above relations as \begin{equation}\label{eq:formg} g(z) = C_0 e^{-\lambda z}\dfrac{\Gamma(1/s)}{4\pi s} \Big(\dfrac{2s+1}{\eps^{2s}D}\Big)^{\frac1s} z^{-2-\frac1s}. \end{equation} Knowledge of $g(z)$ at three or more values of $z$ therefore allows us to reconstruct all coefficients $(\eps^{2s}D,s,z)$ in principle.
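When $\eps^{2s}D$ and $s$ are known a priori, \eqref{eq:form1} recovers the distance $z$ from a single on-axis value $g(z)$ together with the measured quantity $C_0e^{-\lambda z}$. The following minimal numerical sketch, not part of the original derivation and using arbitrarily assumed parameter values, generates synthetic data from \eqref{eq:formg} and inverts it with \eqref{eq:form1}.

```python
import math

# Hypothetical parameters for illustration only; D_eff stands for eps^{2s} D.
s, D_eff, lam, C0 = 0.7, 0.05, 0.1, 2.0

def g_center(z):
    """On-axis intensity g(z), eq. (formg)."""
    return (C0 * math.exp(-lam * z) * math.gamma(1 / s) / (4 * math.pi * s)
            * ((2 * s + 1) / D_eff) ** (1 / s) * z ** (-2 - 1 / s))

def recover_z(gz, attenuated_C0):
    """Distance to the source, eq. (form1); attenuated_C0 = C0*exp(-lam*z)
    is measured as the integral of g(x', z) over x'."""
    return (((2 * s + 1) / D_eff) ** (1 / (2 * s + 1))
            * (attenuated_C0 * math.gamma(1 / s)
               / (4 * math.pi * s * gz)) ** (s / (2 * s + 1)))

z_true = 3.0
z_rec = recover_z(g_center(z_true), C0 * math.exp(-lam * z_true))
print(z_rec)   # ≈ 3.0
```

The recovery is exact up to floating-point error, since \eqref{eq:form1} is the algebraic inverse of \eqref{eq:formg} for fixed $(\eps^{2s}D,s)$.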
We define $G(t):=g(z_0)/g(z_0+t)$ and find that \[ G(t) = e^{\lambda t}\Big(1+\frac t{z_0}\Big)^{2+\frac1s}. \] Knowledge of $G(t_1)$ and $G(t_2)$ for $0<t_1<t_2$ provides a unique reconstruction of the source location $z_0$ and the fractional parameter $s$. Indeed, define $\alpha=z_0^{-1}$ and $\mu=2+\frac1s$ so that $\ln G(t)-\lambda t =\mu\ln(1+\alpha t)$. We compute \[ \partial_\alpha \frac{\ln G(t_1)-\lambda t_1}{\ln G(t_2)-\lambda t_2} = \frac{t_1t_2 [H(t_2)-H(t_1)]}{(1+\alpha t_1)(1+\alpha t_2)\ln^2(1+\alpha t_2)} ,\quad H(t):=(\frac1t+\alpha)\ln(1+\alpha t). \] We then obtain $H'(t)=t^{-2}(\alpha t-\ln (1+\alpha t))>0$ so that $\alpha\mapsto \frac{\ln G(t_1)-\lambda t_1}{\ln G(t_2)-\lambda t_2}$ is a strictly increasing function of $\alpha$ when $t_2>t_1>0$. This uniquely determines $\alpha>0$ and hence $\mu$ since $\ln(1+\alpha t_1)$ is now known. Thus $z_0$ and $s$ are uniquely characterized by $g(z_0+t_j)$ for $j=0,1,2$ and $t_0=0<t_1<t_2$. It is then straightforward to reconstruct $\eps^{2s}D$ from $g(z)$ in \eqref{eq:formg} once $s$ and $z=z_0$ are known, and to reconstruct $C_0$ from $C_0e^{-\lambda z_0}$ and $\lambda$. \medskip To {\em summarize} the above derivation, we observe that when the beam parameters are constant in $z$, then a finite number of (at least three) measurements of $g(z_j)$ combined with $F_0\sigma e^{-\lambda z_j}=\int_{\mathbb R^2} g(x',z_j)dx'$ uniquely determine $(\sigma F_0,\eps^{2s}D,\lambda,s,z_0)$. The values of $g(z_j)$ are obtained from the off-axis measurements by an explicit inverse Radon transform \eqref{eq:invR}. The parameters $(\eps^{2s}D,\lambda,s)$ characterize the turbulent atmosphere; $z_0$ is a property of the beam, while $C_0=\sigma F_0$ quantifies the strength of the off-axis measurements as a combination of source strength and wide-angle scattering. Only the product $\sigma F_0$ may be reconstructed unambiguously without additional prior information.
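The monotonicity argument above translates directly into a numerical procedure: since $\alpha\mapsto(\ln G(t_1)-\lambda t_1)/(\ln G(t_2)-\lambda t_2)=\ln(1+\alpha t_1)/\ln(1+\alpha t_2)$ is strictly increasing, a bisection recovers $\alpha=z_0^{-1}$ and then $\mu=2+1/s$. The sketch below is not part of the original analysis; the values $z_0=3$, $s=0.7$, $\lambda=0.1$ are illustrative assumptions used to synthesize the data $G(t_1),G(t_2)$.

```python
import math

lam = 0.1                                   # attenuation, known from the ratio of integrals

def G(t, z0=3.0, s=0.7):
    """Synthetic measurement ratio G(t) = e^{lam t} (1 + t/z0)^{2 + 1/s}."""
    return math.exp(lam * t) * (1 + t / z0) ** (2 + 1 / s)

t1, t2 = 1.0, 2.0
target = (math.log(G(t1)) - lam * t1) / (math.log(G(t2)) - lam * t2)

# F(alpha) = ln(1 + alpha t1)/ln(1 + alpha t2) is strictly increasing in alpha,
# so bisection recovers alpha = 1/z0 uniquely.
lo, hi = 1e-8, 1e6
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if math.log(1 + mid * t1) / math.log(1 + mid * t2) < target:
        lo = mid
    else:
        hi = mid
alpha = 0.5 * (lo + hi)
mu = (math.log(G(t1)) - lam * t1) / math.log(1 + alpha * t1)   # mu = 2 + 1/s
print(1 / alpha, 1 / (mu - 2))   # ≈ (z0, s) = (3.0, 0.7)
```

In practice $\lambda$ is obtained first from the integrated measurements, as described above, before this step is applied.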
\section{Generalizations and remarks} \subsection{Errors in line integral measurements} The inversion procedure presented in the previous section relies on the explicit form of the pencil-beam approximations. The error associated with using such approximate models for the laser beam instead of the more accurate Fokker-Planck model may be estimated as follows. Let $u_S^1$ and $u_S^2$ be the off-axis particle densities solving \eqref{off_axis_light} with respective source terms $\int \sigma(x,\zeta,\theta)u(x,\zeta)d\zeta$ and $\int \sigma(x,\zeta,\theta)\mathfrak{u}(x,\zeta)d\zeta$. Here, $u$ denotes the Fokker-Planck solution while $\mathfrak{u}$ corresponds to its pencil-beam approximation. Explicitly, we have \begin{align*} u_S^1(x,\theta) & = \int^\infty_0 \int_{\mathbb S^2}\sigma(x-t\theta,\zeta,\theta)u(x-t\theta,\zeta)d\zeta dt \\ u_S^2(x,\theta) &= \int^\infty_0 \int_{\mathbb S^2}\sigma(x-t\theta,\zeta,\theta)\mathfrak{u}(x-t\theta,\zeta)d\zeta dt. \end{align*} Given a Lipschitz function $\psi$ with $\|\psi\|_\infty\leq 1$ and $\text{Lip}(\psi)\leq \kappa$, with support contained in a compact set $\omega\subset\mathbb R^3$ containing our off-axis measurements, we consider the solution $\varphi(x,\theta)=\int^\infty_0\psi(x+t\theta,\theta)dt$ of the equation $ -\theta\cdot\nabla_x\varphi(x,\theta)=\psi $, which satisfies $\|\varphi\|_\infty\leq 1$ and $\text{Lip}(\varphi)\leq \kappa$. Then, \[ \begin{aligned} \int \psi(u_S^1-u_S^2)dxd\theta &= \int \varphi(\theta\cdot\nabla_x u_S^1- \theta\cdot\nabla_x u_S^2)dxd\theta \\ &= \int \sigma(x,\zeta,\theta)\varphi(x,\theta)(u(x,\zeta) - \mathfrak{u}(x,\zeta))d\zeta dxd\theta, \end{aligned} \] which yields $ \int \psi(u_S^1-u_S^2)dxd\theta \lesssim \mathcal{W}^1_\kappa(u,\mathfrak{u}) $ for all $\psi$ as above.
Taking the supremum over all such $\psi$ and recalling the results in Theorem \ref{thm:W_est}, we obtain the estimate \begin{equation}\label{eq:errormeas} \mathcal{W}^1_{\kappa,\omega}(u_S^1,u_S^2)\leq C\kappa^{s'} \epsilon^{2s'} \end{equation} for $s'$ and the constant $C$ as in the theorem (where $s'=1$ if $s=1$) and where $\mathcal{W}^1_{\kappa,\omega}$ is the $(1,\kappa)$-Wasserstein distance restricted to $\omega$. We thus obtain that the measurement errors generated by replacing Fokker-Planck models by their Fermi pencil beam approximations are small in the above sense when $\eps$ is small. We also refer to \cite{BJ} for the effect of such measurement errors on the reconstruction of the function $g(z)$ from its line integrals. \subsection{The local case $s=1$} Recalling definition \ref{def:Fpb} in the local case $s=1$, the Fermi pencil-beam takes the form of the following Gaussian beam: \[ \begin{aligned} U(X,V) &= e^{-\int^{X^3}_0\widetilde{\lambda}(r)dr}\mathcal{F}^{-1}_{X',V}\left[ e^{-\int^{X^3}_0|\eta+(X^3-t)\xi|^{2}D(t)dt}\right]\\ &=\frac{e^{-\int^{X^3}_0\widetilde{\lambda}(r)dr}}{(4\pi)^2(E_2E_0-E_1^2)}\exp\left\{-\frac{E_0|X'-X^3V|^2+2E_1(X'-X^3V)\cdot V+E_2|V|^2}{4(E_2E_0-E_1^2)}\right\} \end{aligned} \] with $E_k(X^3):= \widetilde D\int^{X^3}_0t^k dt$ and $2^{2}\widetilde D=D$ here. We use this exponentially decaying function to define the pencil-beam approximation in \eqref{Fpb}. The reconstruction methodology employed in section \ref{sec:rec} applies to any $s\in(0,1]$. However, the determination of $(\sigma F_0,\epsilon^2D,z,\lambda)$ simplifies when $s=1$. In particular, \eqref{eq:g0} becomes \begin{equation}\label{g_s1_rad} g(z)=\frac{C_0e^{-\lambda z}}{4\pi A_{1}(z)}, \end{equation} in a neighborhood of some $z_0>0$. The factor $C_0e^{-\lambda z}$ as well as $\lambda$ are obtained from the integral $\int_{\mathbb R^2}g(x',z)dx'$ as in section \ref{sec:det_beam_feat}. From \eqref{eq:As_Dctt} we have $A_1(z)=\frac{\epsilon^{2} D}{3}z^{3}$. For a given $t>0$, we compute $A_1(z+t)/A_1(z)=(1+t/z)^3$ and then obtain the distance $z$ from the relation \[ z = \frac{t}{\left(\frac{A_1(z+t)}{A_1(z)}\right)^{1/3}-1}. \] Finally, from knowledge of $z$ and $A_1(z)$ we easily determine the factor $\epsilon^2D$. \medskip \subsection{Broader source terms} Several generalizations of the above reconstructions may be considered in the presence of additional measurements.
For instance, if measurements are available on a continuum of values of $z_0$ (with $\Sigma$ involving a continuum of values of $x_0$), then the parameters $(\lambda(z), D(z))$ and possibly $s$ as well may be allowed to vary in the $z$-variable; we do not consider this particular setting any further. Here, we consider a generalization with $D$ and $\lambda$ still constant but with a laser source that is spatially broad but still narrow in its direction of emission. The laser beam is modeled as a solution to \eqref{fFPB} with an unknown source \[ U(X',0,V) = G(X',V):=h(X')\delta(V), \] for a nonnegative and integrable function $h$, supported inside the ball of radius $\rho$ with $\rho+\epsilon\ll L_0$ (the latter parameter represents the size of the camera's screen). In the Fourier domain this solution takes the form \[ \mathcal{F}_{X',V}[U](\xi,X^3,\eta) = e^{-\lambda X^3-\frac{D}{2^{2s}}\int^{X^3}_0|\eta+(X^3-t)\xi|^{2s}dt}\mathcal{F}_{X',V}[G](\xi,\eta+X^3\xi). \] The available measurements thus satisfy \[ m(x,\theta) = \int^\infty_0g(x+t\theta)dt + O(\epsilon^{2s'}), \] for $s'\in(0,s)$ close to $s$ (with the error understood in a relative sense), and with $g(x)$ given by \[ g(x)=\sigma e^{-\lambda x^3}\mathcal{F}^{-1}_{X'}\left[e^{-A_s(x^3)|\xi|^{2s}}\mathcal{F}_{X',V}[G](2\epsilon\xi,2\epsilon \xi x^3)\right](x'). \] Taking into account the explicit form of $G$, the above reduces to \[ g(x)=\sigma e^{-\lambda x^3}\mathcal{F}^{-1}_{X'}\left[e^{-A_s(x^3)|\xi|^{2s}}\mathcal{F}_{X'}[h](2\epsilon\xi)\right](x'). \] In the simpler case of a cylindrical beam, that is, with $h(X')=h(|X'|)$, the radial symmetry of $h$ is inherited by $g$. Therefore, for each $t\ll 1$, measurements of the form $m(x_0+t\phi_0+\tau \psi_0,\theta_0)$, with $\phi_0,\psi_0$ and $\theta_0$ as in section \ref{sec:beam_meas} and $t^2+\tau^2<L_0^2$, give us all the line integrals on the plane passing through $x_0+t\phi_0$ and perpendicular to $\phi_0=\vec{e}_3$.
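For a concrete sanity check not contained in the original text, consider the local case $s=1$ with a Gaussian cylindrical source $h(X')=e^{-|X'|^2}$, for which $\mathcal{F}_{X'}[h](\xi)=\pi e^{-|\xi|^2/4}$ under the Fourier convention used above; the on-axis value then admits the closed form $g(0,z)=\sigma e^{-\lambda z}/(4(A_1(z)+\epsilon^2))$. All parameter values below are illustrative assumptions.

```python
import math

# Assumed illustrative parameters.
sigma, lam, eps, D = 1.5, 0.1, 0.01, 0.05

def A1(z):
    """A_1(z) = eps^2 D z^3 / 3 for constant D (local case s = 1)."""
    return eps ** 2 * D * z ** 3 / 3.0

def g_center(z, n=200000, rho_max=None):
    """On-axis value g(0,z) = sigma e^{-lam z}/(4 pi^2) * int e^{-A_1|xi|^2} F[h](2 eps xi) dxi,
    evaluated by radial midpoint quadrature for the Gaussian source h(X') = exp(-|X'|^2)."""
    a = A1(z) + eps ** 2                 # combined Gaussian width in |xi|^2
    rho_max = rho_max or 10.0 / math.sqrt(a)
    drho = rho_max / n
    total = 0.0
    for k in range(n):
        rho = (k + 0.5) * drho
        total += rho * math.exp(-a * rho * rho) * drho
    integral = 2 * math.pi * math.pi * total   # angular factor 2*pi times F[h] prefactor pi
    return sigma * math.exp(-lam * z) / (4 * math.pi ** 2) * integral

z = 3.0
print(g_center(z), sigma * math.exp(-lam * z) / (4 * (A1(z) + eps ** 2)))  # the two values should agree
```

The agreement of the quadrature with the closed form checks the normalization of the broad-source formula in this special case.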
In the more general case of $h$ not necessarily radially symmetric, we are forced to acquire a larger set of measurements. The tomographic procedure introduced above involving the inversion of a Radon transform applies in this case provided we observe the beam from an array of multiple cameras totally or partially surrounding the beam. An array of multiple cameras placed on a plane perpendicular to the beam's axis can in principle measure all the line integrals passing near the intersection point of the axis and the plane (say $x_0=(0,0,z_0)$). Therefore, a Radon transform inversion leads to the determination of $g(x',z_0)$ for all $x'\in\mathbb R^2$. See figure \ref{fig:noncylidrical} for a schematic of this measurement geometry. \begin{figure} \centerline{ \includegraphics[scale=0.4]{Noncylindrical_case_v2.png} } \caption{Tomographic measuring geometry for non-cylindrical beams. In the presence of cylindrical symmetry one detector is enough.} \label{fig:noncylidrical} \end{figure} In addition, by Fourier transforming $g(x)$ with respect to $x'$ we have access to \begin{equation}\label{Fg} \mathcal{F}_{x'}[g](\xi,z_0)=\sigma e^{-\lambda z_0}e^{-A_s(z_0)|\xi|^{2s}} \mathcal{F}_{X'}[h](2\epsilon\xi),\quad\forall \xi\in\mathbb R^2, \end{equation} from which we realize that \[ \begin{aligned} \mathcal{F}_{x'}[g](0,z_0)=\int_{\mathbb R^2} g(x',z_0)dx' &=\sigma e^{-\lambda z_0}\int_{\mathbb R^2}h(x')dx'. \end{aligned} \] Consequently, and denoting $C_0=\sigma \int_{\mathbb R^2}h(x')dx'$, we obtain the quantity $C_0e^{-\lambda z_0}$. Computing this quantity for two values $z_0<z_1$, we obtain $\lambda$ (as done previously). For a fixed (and known) $t>0$, let us now define $\mathcal{G}(\xi,t)= \mathcal{F}_{x'}[g](\xi,z_0)/\mathcal{F}_{x'}[g](\xi,z_0+t)$ and observe that \[ \ln\left(\mathcal{G}(\xi,t)\right) - \lambda t = |\xi|^{2s}\left(A_s(z_0+t)-A_s(z_0)\right). \] Note that while $\mathcal{F}_{x'}[g](\xi,z_0)$ depends on $\xi$, the ratio $\mathcal{G}(\xi,t)$ depends only on $|\xi|$.
Evaluating the above at $|\xi|=1$ provides the difference $A_s(z_0+t)-A_s(z_0)$. Subsequently choosing $|\xi|=e$ and taking the logarithm of the previous expression, we deduce that \[ s = \frac{1}{2}\ln\left(\frac{\ln\mathcal{G}(\xi,t)-\lambda t}{A_s(z_0+t)-A_s(z_0)}\right). \] In homogeneous media (i.e., with constant $D$ and $\lambda$), we use the explicit form of $A_s(z_0)$ in \eqref{eq:As_Dctt} to get \begin{equation}\label{logG} \ln\left(\mathcal{G}(\xi,t)\right) - \lambda t = \frac{|\xi|^{2s}\epsilon^{2s}D\,z_0^{2s+1}}{2s+1}\left(\left(1 +\frac{t}{z_0}\right)^{2s+1}-1\right). \end{equation} Then, using the above with $t_1<t_2$ and denoting $\alpha=z_0^{-1}$, we have \[ H(\alpha)=\frac{\ln\left(\mathcal{G}(\xi,t_1)\right) - \lambda t_1}{\ln\left(\mathcal{G}(\xi,t_2)\right) - \lambda t_2} = \frac{\left(1 +\alpha t_1\right)^{2s+1}-1}{\left(1 +\alpha t_2 \right)^{2s+1}-1}. \] This is a strictly decreasing function of $\alpha$, as we can verify by computing its derivative. Indeed, \[ \begin{aligned} \partial_\alpha H&=(2s+1)\frac{t_1(1+\alpha t_1)^{2s}(\left(1 +\alpha t_2 \right)^{2s+1}-1)-t_2(1+\alpha t_2)^{2s}(\left(1 +\alpha t_1 \right)^{2s+1}-1)}{(\left(1 +\alpha t_2 \right)^{2s+1}-1)^2}\\ &=\frac{(2s+1)^2t_1t_2}{(\left(1 +\alpha t_2 \right)^{2s+1}-1)^2}\int^\alpha_0\left((1+\alpha t_1)^{2s}(1 +r t_2 )^{2s}-(1+\alpha t_2)^{2s}(1 +r t_1 )^{2s}\right)dr,\\ \end{aligned} \] with a negative integrand since the function $r\mapsto\frac{1+r t_1}{1+r t_2}$ is strictly decreasing. We then conclude that $\alpha = z_0^{-1}$ is uniquely determined by the quantity $H(\alpha)$ and hence so is $s$. Subsequently, we can determine $C_0=\sigma\left(\int_{\mathbb R^2}h(x')dx'\right)$, and $\epsilon^{2s}D$ (and then $A_s(z_0)$) for instance from \eqref{logG}. Lastly, going back to \eqref{Fg}, we are able to determine a rescaled version of $h$ up to a constant factor given by $\left(\int_{\mathbb R^2}h(x')dx'\right)^{-1}$.
This follows from the relation \[ \left(\int_{\mathbb R^2}h(x')dx'\right)^{-1}\mathcal{F}_{x'}[h](2\epsilon \xi) = \frac{\mathcal{F}_{x'}[g](\xi,z_0)}{C_0 e^{-\lambda z_0}e^{-A_s(z_0)|\xi|^{2s}}}, \] so that Fourier transforming both sides yields \[ \left(\int_{\mathbb R^2}h(x')dx'\right)^{-1}\frac{1}{(2\epsilon)^2}h\left(\frac{x'}{2\epsilon}\right) = \frac{\mathcal{F}_{x'}^{-1}\left[\mathcal{F}_{x'}[g](\xi,z_0)e^{A_s(z_0)|\xi|^{2s}}\right](x')}{C_0 e^{-\lambda z_0}}. \] Recalling the relation between the stretched and macroscopic variables: $X'=x'/2\epsilon$, we determine from the above (up to a constant factor), the source function $h$ in its natural (stretched) coordinates, namely, \[ \frac{h_\epsilon(X')}{\int_{\mathbb R^2} h_\epsilon(Y)dY} = \frac{\mathcal{F}_{x'}^{-1}\left[\mathcal{F}_{x'}[g](\xi,z_0)e^{A_s(z_0)|\xi|^{2s}}\right](2\epsilon X')}{C_0 e^{-\lambda z_0}}. \] \section{Conclusions}\label{sec:conclu} The reconstruction of the profile of a laser beam propagating through a turbulent atmosphere from limited off-axis measurements is a difficult task. We propose here a linear macroscopic description of the beam spreading based on a (possibly fractional) Fermi pencil beam equation.
Such approximations are accurate in the regime of a small mean free path and a large transport mean free path, consistent with the narrow laser beam hypothesis. Moreover, by neglecting back-scattering, they admit sufficiently explicit expressions that are amenable to parameter inversions. The off-axis measurement assumptions are as follows. We assume the presence of detector arrays away from the path of the beam. Light detection is modeled as wide-angle single scattering off of the laser beam. The scattering amplitude and the laser source amplitude remain unknown, but it is assumed that their combined effect is large enough to be detected. At least two detector arrays are necessary to triangulate the line segment along which the beam propagates. Then, one sufficiently large detector, or several detectors along the path, allow us to evaluate the whole beam structure under suitable assumptions. The explicit influence of the parameters of the model on beam spreading enables us to reconstruct them. Such macroscopic parameters include the source location and the main features of the turbulence through which the beam propagates (diffusion coefficient and fractional power). Note that a model based on expansions of radiative transfer solutions into successive scattering events as in \cite{C-SPIE-03} would model the beam intensity as a ballistic component, which cannot provide information on, say, the source location, since beam spreading is absent from the model. Our explicit reconstructions are based on the inversion of the laser intensity at the center of a radially symmetric beam at several locations along its path. The latter intensity may be estimated by applying a standard inverse Radon transform to the available detector array measurements. Other, possibly more stable, reconstruction procedures from the same available data are certainly possible.
Our results provide a proof of concept that (fractional) Fermi pencil beam models allow for the reconstruction of macroscopic laser beam features, including what was our main motivation here: their source location. \section*{Acknowledgment} This research was partially supported by the Office of Naval Research, Grant N00014-17-1-2096, and by the National Science Foundation, Grant DMS-1908736. Most of the work presented in this article was done while BP was a W. H. Kruskal Instructor at the University of Chicago. BP would like to thank the University of Chicago and in particular the Department of Statistics for their hospitality and generosity throughout those years. \begin{appendix} \section{} To compute $ \int_{\mathbb R^2} e^{-A_{s}|\xi|^{2s}} d\xi$ we consider polar coordinates and obtain \[ \int_{\mathbb R^2} e^{-A_{s}|\xi|^{2s}} d\xi= 2\pi\int_0^\infty \rho e^{-A_{s}\rho^{2s}} d\rho. \] Recalling the definition of the Gamma function, $\Gamma(z)=\int^\infty_0 x^{z-1}e^{-x}dx$, defined for $\text{Re}(z)>0$, we are able to recast the integral as \[ \int_{\mathbb R^2} e^{-A_{s}|\xi|^{2s}} d\xi= 2\pi A_{s}^{-1/s}\int^\infty_0 te^{-t^{2s}}dt = \frac{\pi }{sA_{s}^{1/s} }\int^\infty_0 t^{1/s-1}e^{-t}dt = \frac{\pi }{sA_{s}^{1/s}}\Gamma\left(\frac{1}{s}\right). \] \end{appendix}
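The closed form $\frac{\pi}{sA_{s}^{1/s}}\Gamma\left(\frac{1}{s}\right)$ can be checked numerically against the radial integral $2\pi\int_0^\infty \rho\, e^{-A_{s}\rho^{2s}}d\rho$. The short script below is our illustrative sanity check (composite Simpson quadrature on a truncated domain), not part of the paper:

```python
import math

def radial_integral(A, s, R=60.0, steps=200_001):
    """Composite-Simpson approximation of 2*pi * int_0^R rho*exp(-A*rho**(2s)) d rho.

    R is chosen large enough that the truncated tail is negligible for the
    parameter ranges tested below (an assumption of this sketch).
    """
    h = R / (steps - 1)
    total = 0.0
    for i in range(steps):
        rho = i * h
        # Simpson weights: 1 at the endpoints, alternating 4/2 inside.
        w = 1 if i in (0, steps - 1) else (4 if i % 2 == 1 else 2)
        total += w * rho * math.exp(-A * rho ** (2 * s))
    return 2.0 * math.pi * total * h / 3.0

def closed_form(A, s):
    """pi / (s * A**(1/s)) * Gamma(1/s), the value derived in the appendix."""
    return math.pi / (s * A ** (1.0 / s)) * math.gamma(1.0 / s)
```

For $s=1$ this reduces to the familiar Gaussian integral, $2\pi\int_0^\infty\rho e^{-A\rho^2}d\rho = \pi/A$, which the closed form reproduces since $\Gamma(1)=1$.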
\section{Introduction} Deep eutectic solvents\cite{Smith2014, Hansen2020} (DESs) are an interesting class of materials resembling ionic liquids\cite{Dong2017, Singh2020} (ILs), with great potential for applications in synthesis\cite{Carriazo2012}, electrochemistry\cite{Wu2021}, extraction processes\cite{Cunha2018}, and biomass transformation\cite{Chen2019}. Like ILs, most DESs are composed of a molecular cation and an inorganic anion. However, DESs also contain a neutral component, such as a glycol, that forms a specific stoichiometric mixture\cite{Alizadeh2020, Martins2018}. ILs, on the other hand, are pure molten salts at (or near) room temperature. The components of DESs are typically classified as hydrogen bond donors (HBD) or hydrogen bond acceptors (HBA). The combined action\cite{Kaur2020} of all hydrogen bonds present in DESs contributes to a decrease in the eutectic point of the system relative to what would be predicted for an ideal mixture\cite{Smith2014, Hansen2020, Martins2018}. In terms of molecular dynamics (MD) simulations, ILs and DESs present common characteristics and similar problems\cite{Kaur2020, GonzlezdeCastilla2019, Bedrov2019}. Owing to the importance of electrostatic interactions in these systems, assigning partial charges is not straightforward. Several charge-assignment methods exist, and it is well known that the choice impacts the properties and behavior of the simulated systems\cite{Garca2015, Kohagen2011, Dommert2010, Schmidt2010, Schrder2008}. In addition, the high concentration of ions in these materials results in non-negligible local electric fields that polarize the voluminous ions. For this reason, polarization should be included for the simulations to be predictive and accurate. This is usually accomplished either by explicitly including polarization in force fields or by an implicit inclusion through a mean-field approach\cite{Bedrov2019, Cieplak2009,Lemkul2016}.
Polarization can be implicitly accounted for, e.g., by modifying the parameters of the Lennard-Jones (LJ) potential. This approach is based on the fact that the interaction between two induced dipoles depends on the interparticle distance ($r$) as $\sim \! \! r^{-6}$, the same dependence that is present in the dispersion term of the LJ potential. This approach has been used for both DESs\cite{Chaumont2020} and ILs\cite{Kddermann2007}. Its problems include the computational effort needed to recalibrate the LJ parameters, poor transferability, the absence of directionality, and difficulties in accurately describing transport properties\cite{Bedrov2019}. A more popular and easier way to implicitly include polarization is re-scaling the fixed partial charges of the atoms, which has been extensively done for both ILs and DESs across different modeling resolutions\cite{Zhang2020, Perkins2014, deSouza2021, Sapir2020, Celebi2019, deSouza2019, Salanne2015}. The theoretical framework for reducing charges is the electronic continuum correction (ECC) proposed by Leontyev and Stuchebrukhov\cite{Leontyev2009, Leontyev2011}. They showed that charges should be scaled by a multiplicative factor $\gamma = 1/n$, where $n$ is the refractive index of the medium; it is connected to the optical dielectric constant, $\epsilon_{\infty}$, via $\epsilon_{\infty} = n^{2}$. The scaling factor is typically in the range of \num{0.7} - \num{0.9}. It provides an approach consistent with quantum-mechanical charge evaluations and serves as an adjustable parameter for calibrating force fields\cite{Sapir2020}; it does not, however, come without problems\cite{Tolmachev2020-rw}. Explicit inclusion of polarization is currently a very active field of research\cite{Bedrov2019, Lemkul2016}. The methods include induced point dipoles, fluctuating charge models, and the classical Drude oscillator.
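The ECC prescription above reduces to a pair of one-liners. A minimal sketch follows; the refractive index in the usage note is an illustrative value, not one taken from the cited works:

```python
import math

def ecc_factor_from_n(n):
    """ECC charge-scaling factor gamma = 1/n, with n the refractive index."""
    return 1.0 / n

def ecc_factor_from_eps_inf(eps_inf):
    """Equivalent form gamma = 1/sqrt(eps_inf), since eps_inf = n**2."""
    return 1.0 / math.sqrt(eps_inf)

def scale_charges(charges, gamma):
    """Uniformly scale a list of fixed partial charges (in units of e)."""
    return [gamma * q for q in charges]
```

For instance, a (hypothetical) refractive index of $n = 1.25$ gives $\gamma = 0.8$, inside the typical \num{0.7} - \num{0.9} window quoted above.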
The main drawback of polarizable force fields is the large computational cost, making their development and calibration a challenging task. Despite the challenges, several general polarizable force fields exist for ILs, such as APPLE$\&$P\cite{Borodin2009}, CL\&Pol\cite{Goloviznina2019} and AMOEBA\cite{Starovoytov2014}. For DESs, the only available option is the CL\&Pol force field by Goloviznina \textit{et al.}\cite{Goloviznina2019, Goloviznina2021}, based on Drude-induced dipoles. \begin{figure}[b!] \centering \includegraphics[width=1.00\linewidth]{Figures/Estruturas_CPK.eps} \caption{Chemical structures of DES components and the naming conventions for the atom types of the hydroxyl groups. Oxygens are depicted in red, hydrogens in white, carbons in cyan, nitrogen in blue, and chloride in green.} \label{fig:structures} \end{figure} So far, only general validation has been completed with the CL\&Pol force field for DESs. This includes properties such as density, viscosity and diffusion coefficients\cite{Goloviznina2021}. More complex evaluations of the liquid structure and dynamic behavior of common DESs have been done with traditional non-polarizable force fields, such as the OPLS\cite{Doherty2018, Bittner2021, CeaKlapp2021}, CHARMM/CGenFF\cite{Kaur2019, JahanbakhshBonab2021, Bittner2021} and AMBER/GAFF\cite{Zhang2020, Mainberger2017, Bittner2021} families. Of these force fields, only OPLS has been specifically tuned for DESs\cite{Doherty2018}. Other works that have addressed calibration or adjustment of parameters include those of Perkins \textit{et al.}\cite{Perkins2013, Perkins2014} and Ferreira \textit{et al.}\cite{Ferreira2016} In the present study, we explored the application of the CL\&Pol force field\cite{Goloviznina2021} and noted two important issues that lead to unphysical results: phase separation and overpolarization of the chlorides. Here, we present, discuss, and correct both problems, which allows a safer application of the force field.
In addition, we also compared the results with its reduced-charge, non-polarizable counterpart, the CL$\&$P force field\cite{CanongiaLopes2012}. The chosen DES was composed of choline chloride, \ce{[CHO]^+}\ce{[Cl]^-}, and ethylene glycol, \ce{[EG]}, at a molar ratio of \num{1}:\num{2}, called ethaline, which has been the subject of several experimental and theoretical studies\cite{Alizadeh2020, Kaur2020, Kaur2019, Zhang2020, Wagle2016, Ferreira2016}. Figure{~}\ref{fig:structures} shows the chemical structures and nomenclature of atom types used in this work. \section{Computational Details} \subsection{CL{\&}P and CL{\&}Pol Force Fields}\label{sec:dev} The CL{\&}P force field\cite{CanongiaLopes2012} was originally developed using fixed charges in a non-polarizable fashion. It was further extended to include explicit polarization via Drude-induced dipoles and named CL{\&}Pol\cite{Goloviznina2019, Goloviznina2021}. CL{\&}Pol shares the functional form of the CL{\&}P force field, which is composed of Coulomb and Lennard-Jones (LJ) non-bonded interactions, and bond, angle and torsional potentials for intramolecular interactions.
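To make this functional form concrete, the sketch below evaluates the non-bonded part of such a force field (Coulomb plus 12-6 LJ with Lorentz-Berthelot combining rules) for a single atom pair. The per-pair $\sigma$ override mirrors the kind of cross-interaction adjustment discussed later for the chloride-hydroxyl pairs; the function, its parameter names, and the Coulomb constant in MD units (kJ mol$^{-1}$ \AA\ e$^{-2}$) are our illustrative choices, not CL{\&}P/CL{\&}Pol code:

```python
import math

K_E = 1389.35  # e^2/(4*pi*eps0) in kJ mol^-1 Angstrom e^-2 (MD unit convention)

def pair_energy(r, qi, qj, sigma_i, sigma_j, eps_i, eps_j, sigma_override=None):
    """Coulomb + 12-6 LJ energy of one atom pair at separation r (Angstrom).

    Lorentz-Berthelot mixing: sigma = (sigma_i + sigma_j)/2, eps = sqrt(eps_i*eps_j).
    sigma_override lets specific cross pairs deviate from the mixing rule.
    """
    sigma = sigma_override if sigma_override is not None else 0.5 * (sigma_i + sigma_j)
    eps = math.sqrt(eps_i * eps_j)
    lj = 4.0 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)
    coulomb = K_E * qi * qj / r
    return lj + coulomb
```

With zero charges, the LJ minimum sits at $r = 2^{1/6}\sigma$ with depth $-\epsilon$, which is a convenient check of the implementation.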
In addition, for modeling DESs, the CL{\&}Pol force field also has the Tang-Toennies\cite{Tang1984} (TT) and Thole\cite{Thole1981} damping functions, \begin{center} \begingroup \small \thinmuskip=\muexpr\thinmuskip*1/9\relax \medmuskip=\muexpr\medmuskip*1/9\relax \begin{equation}\label{eqn:TT} \begin{split} f_{n}(r_{ij}) = \num{1} - ce^{-br_{ij}} \sum_{k=0}^{n}\frac{(br_{ij})^{k}}{k!}, \\ \end{split} \end{equation} \endgroup\\ \end{center} \begin{center} \begingroup \small \thinmuskip=\muexpr\thinmuskip*1/9\relax \medmuskip=\muexpr\medmuskip*1/9\relax \begin{equation}\label{eqn:Thole} \begin{split} T(r_{ij}) = \num{1} - \left(\num{1} + \frac{pr_{ij}}{\num{2}(\alpha_{i}\alpha_{j})^{\frac{1}{6}}} \right) e^{-pr_{ij}/{(\alpha_{i}\alpha_{j})^{\frac{1}{6}}}}, \\ \end{split} \end{equation} \endgroup\\ \end{center} where the parameter $b$ determines the spatial extension of the damping (the developers set its value to \num{4.5}), $k$ is the summation index and the sum runs to order $n$=\num{4}, $c$=\num{1}, $r_{ij}$ is the distance between the sites, $p$=\num{2.6} is the Thole parameter and $\alpha_{i}$ is the polarizability of atom $i$. The Thole function dampens the Coulomb interactions at short distances that originate from the induced dipoles. The TT function was adopted in the CL{\&}Pol force field to avoid instabilities and to dampen the short-range charge-dipole interactions that occur due to the presence of ``naked'' hydrogen atoms in the force field formulation\cite{Goloviznina2021}. That is, some hydrogen atoms do not have LJ parameters, just charges. The ``naked'' hydrogens give rise to an important issue: the atomic diameters (the $\sigma$-parameter of the LJ potential) must be modified for the cross-interactions between chlorides and hydroxyls of the \ce{[CHO]^+} and \ce{[EG]} species.
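Both damping expressions are simple scalar factors that vanish at contact and approach unity at large separation. A minimal sketch, using the parameter values quoted in the text (ours, for illustration only; not the LAMMPS implementation):

```python
import math

def tang_toennies(r, b=4.5, c=1.0, n=4):
    """Tang-Toennies damping f_n(r) = 1 - c*exp(-b*r) * sum_{k=0}^{n} (b*r)**k / k!."""
    partial = sum((b * r) ** k / math.factorial(k) for k in range(n + 1))
    return 1.0 - c * math.exp(-b * r) * partial

def thole(r, alpha_i, alpha_j, p=2.6):
    """Thole damping T(r) = 1 - (1 + p*r/(2*a)) * exp(-p*r/a), a = (alpha_i*alpha_j)**(1/6)."""
    a = (alpha_i * alpha_j) ** (1.0 / 6.0)
    x = p * r / a
    return 1.0 - (1.0 + 0.5 * x) * math.exp(-x)
```

Both functions interpolate monotonically from \num{0} at $r_{ij}=0$ (full damping) to \num{1} at large $r_{ij}$ (no damping), which is the behavior the verification below checks.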
The motivation is that the repulsive potentials of the oxygen atoms (\ce{OH} and \ce{OHG}, Figure{~}\ref{fig:structures}), to which the hydrogens from the hydroxyls are bound, are insufficient to compensate for the augmented polarization effect, which ``freezes'' the system and impacts the local structure. The developers proposed\cite{Goloviznina2021} the value of $\sigma$=\SI{0.37}{\nano\meter} for both $\ce{[Cl]^-}-\ce{OH}$ and $\ce{[Cl]^-}-\ce{OHG}$ interactions. As we will see in Section~\ref{sec:finetune}, that leads to an unphysical phase separation. The fine-tuned values we propose are $\sigma$=\SI{0.345}{\nano\meter} and $\sigma$=\SI{0.356}{\nano\meter} for the $\ce{[Cl]^-}-\ce{OH}$ and $\ce{[Cl]^-}-\ce{OHG}$ interactions, respectively. The CL{\&}Pol force field assigns all the force constants of the harmonic bonds between the Drude cores (DC) and the Drude particles (DP) to $k_{D}$=\SI{4184}{\kilo\joule\per\mol\per\angstrom\squared}, and the masses of all DPs to $m_{DP}$=\num{0.4}\,u. In addition, the CL{\&}Pol force field treats only the heavy atoms as polarizable and adds the polarizabilities of hydrogen atoms to the heavy atom to which they are bonded. Polarizabilities are used to generate the partial charges ($q_D$) of the DPs via $\alpha$=$q_D^{2}$/$k_{D}$; the DP charges are negative, and opposite charges ($-q_D$) are added onto the initial charges of the heavy atoms. These \textit{starting} charges come from the non-polarizable CL{\&}P force field, as do the \textit{starting} LJ parameters. The LJ parameters are subsequently scaled to avoid double counting of polarization effects\cite{Goloviznina2021}. This process is based on a predictive scheme proposed by the developers of the original force field\cite{Goloviznina2019, Pdua2017} using key fragments of the molecules of interest to avoid costly symmetry-adapted perturbation theory (SAPT) calculations\cite{Szalewicz2012-pf}. They split the \ce{[EG]} into two methanol units, while the cholinium cation was treated as a single fragment.
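The relation $\alpha$=$q_D^{2}$/$k_{D}$ is written in implicit units; restoring the Coulomb factor $k_e = e^{2}/(4\pi\varepsilon_{0}) \approx 1389.35$\,kJ\,\AA\,mol$^{-1}$\,e$^{-2}$ makes it dimensionally consistent when $\alpha$ is given in \AA$^{3}$ and $k_D$ in kJ\,mol$^{-1}$\,\AA$^{-2}$. A hedged sketch under that unit assumption (our helper, not the \textit{polarizer} tool):

```python
import math

K_E = 1389.35  # e^2/(4*pi*eps0) in kJ Angstrom mol^-1 e^-2 (assumed unit system)

def drude_charge(alpha, k_d=4184.0):
    """Magnitude of the Drude-particle charge (in e) from the polarizability.

    alpha : polarizability in Angstrom^3
    k_d   : core-Drude spring constant in kJ mol^-1 Angstrom^-2
    Dimensionally consistent form of alpha = q_D**2 / k_D: alpha = K_E*q_D**2/k_D.
    The sign convention in the text makes the DP charge -drude_charge(alpha).
    """
    return math.sqrt(alpha * k_d / K_E)
```

For the chloride polarizability quoted later in the text ($\alpha_{Cl}$=\num{4.4}\,\AA$^{3}$), this gives a DP charge magnitude of about \num{3.64}\,e under the stated assumptions.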
The scaling coefficients and the final set of LJ parameters (as well as the partial charges and intramolecular parameters) they obtained were used in our work and can be consulted in their publication\cite{Goloviznina2021}, except for the $\sigma$ parameter of the $\ce{[Cl]^-}-\ce{OHG}$ and $\ce{[Cl]^-}-\ce{OH}$ interactions, as mentioned above. In our work, we also performed MD simulations with the non-polarizable CL{\&}P force field. In this case, the partial charges of the \ce{[CHO]^+}\ce{[Cl]^-} were multiplied by a scaling factor of \num{0.9} to implicitly account for polarization. The simulation with this scaling factor was validated by the reproduction of physicochemical properties of ethaline, as will be seen in Section~\ref{sec:finetune} and Table~\ref{tbl:FF}. \subsection{Molecular Dynamics Simulations} All the MD simulations were executed using LAMMPS\cite{Plimpton1995} with periodic boundary conditions and with the USER-DRUDE package\cite{Dequidt2015} enabled to allow the use of Drude-induced dipoles. To integrate the equations of motion, time steps of \SI{1}{\femto\second} and \SI{2}{\femto\second} were employed in the polarizable and non-polarizable simulations, respectively. A cut-off radius of \SI{1.2}{\nano\metre} was used for the Lennard-Jones and the real space part of the electrostatic interactions. The particle-particle particle-mesh (P3M) method\cite{Hockney1973, Eastwood1975} was applied for the long-range part of the electrostatic interactions. Bonds connected to hydrogen atoms were constrained using the SHAKE algorithm\cite{Ryckaert1977}. The ethaline simulations were conducted using \num{150} \ce{[CHO]^+}\ce{[Cl]^-} ionic pairs and \num{300} \ce{[EG]} molecules (\num{1}:\num{2} molar ratio). Initial configurations and force field parameter assignment were performed with the \textit{fftool} and Packmol\cite{Martnez2009} utilities.
In addition, for the polarizable model, the \textit{polarizer} and \textit{scaleLJ} tools were applied to add the Drude particles and to scale the Lennard-Jones parameters following the CL{\&}Pol force field protocol\cite{Goloviznina2021}. The steepest descent algorithm was used for energy minimization of the initial configurations. Then, the systems were equilibrated for \SI{10}{\nano\second} in the NPT (P=\SI{1}{atm} and T=\SI{323.15}{\kelvin} or \SI{373.15}{\kelvin}) ensemble and subsequently simulated in the NVT ensemble for \SI{70}{\nano\second}. In the non-polarizable simulation, temperature and pressure were controlled using the Nosé-Hoover thermostat and barostat\cite{Nos1983, Hoover1985} with time constants of \SI{0.2}{\pico\second} and \SI{2}{\pico\second}, respectively. In the simulations with the polarizable force field, temperature and pressure were controlled using the temperature-grouped Nosé-Hoover thermostat\cite{Son2019} to provide better kinetic energy equipartitioning. The time constants were set to \SI{0.1}{\pico\second} and \SI{1}{\pico\second} for temperature and pressure, respectively. In addition, the temperature of the Drude particles was maintained at \SI{1}{\kelvin} with a time constant of \SI{0.02}{\pico\second}. To obtain the X-ray structure factors, $S(q)$, the last frame from each of the production runs was taken and replicated twice in the $x$, $y$, and $z$-directions. The final box consisted of \num{1200} \ce{[CHO]^+}\ce{[Cl]^-} ionic pairs and \num{2400} \ce{[EG]} molecules. It was equilibrated for \SI{2}{\nano\second} in the NPT ensemble and simulated for \SI{5}{\nano\second} in the NVT ensemble. The larger simulation box was needed to achieve low $q$ values in the $S(q)$ calculations. \begin{figure*}[t!]
\centering \includegraphics[width=1.00\linewidth]{Figures/ambos_seta.eps} \caption{Images showing the unphysical phase separation (left) and the behavior after fine-tuning (right) of the CL{\&}Pol force field, both at \SI{323.15}{\kelvin}. Choline ions are shown in blue, ethylene glycol molecules in black, and chlorides in green. Oxygens are highlighted in red and hydrogens in white.} \label{fig:phase} \end{figure*} \section{Results and Discussion} We start by demonstrating and discussing the unphysical phase separation that occurs when using the original setup of the CL{\&}Pol force field. This is followed by a proposed correction to the problem. Then, we present evidence and discuss the overpolarization of chloride anions, followed by our proposal to correct it, namely the application of the TT damping function to the induced dipoles of the chlorides. In both sections, we also provide a comparison with the non-polarizable CL{\&}P force field. \subsection{Artificial Phase Separation and How to Correct it \label{sec:finetune}} \begin{figure}[b!] \centering \includegraphics[width=1.00\linewidth]{Figures/gr_triplo_line3font30_Cl_HO.eps} \includegraphics[width=1.00\linewidth]{Figures/gr_triplo_line3font30_Cl_HOG.eps} \caption{Radial distribution functions between the \ce{[Cl]^-} anions and the hydrogen atoms (see Figure{~}\ref{fig:structures} for nomenclature) of the hydroxyl groups at \SI{373}{\kelvin}. The \textit{ab initio} curves are from Ref.~\cite{Alizadeh2020}.} \label{fig:grs} \end{figure} In the original publication of the CL{\&}Pol force field, the authors increased the repulsion (the $\sigma$ parameter of the LJ potential) of the $\ce{[Cl]^-}-\ce{OH}$ and $\ce{[Cl]^-}-\ce{OHG}$ pairs to avoid the corresponding intense peaks in the radial distribution functions (RDF) due to the strong interactions that ``froze'' the system\cite{Goloviznina2021}.
However, as we show in Figure{~}\ref{fig:phase} (left), using the original parameters, the ethaline system phase separates, in an unphysical and unrealistic manner, into an ethylene glycol phase and a choline chloride phase. This phase separation occurred in approximately \SI{20}{\nano\second} at \SI{323.15}{\kelvin}. It is plausible that the original authors of the CL{\&}Pol force field did not observe this, since their simulations ran up to \SI{10}{\nano\second} and at \SI{298}{\kelvin}. However, their RDF of the $\ce{[Cl]^-}-\ce{OHG}$ atom pair already provided a clue (see Figure~\num{3} in Goloviznina \textit{et al.}\cite{Goloviznina2021}): the peak in the $g(r)$ was almost entirely absent, indicating too-weak interactions. The straightforward approach to correct the above issue would be to rescale the $\sigma$ parameters of the $\ce{[Cl]^-}-\ce{OH}$ and $\ce{[Cl]^-}-\ce{OHG}$ atom pairs to balance the interactions between the chlorides and the hydroxyls. Goloviznina \textit{et al.}\cite{Goloviznina2021} proposed $\sigma$=\SI{0.37}{\nano\meter} for both pairs, the value that drove the system to separate phases; they reported that with the unmodified value, $\sigma$=\SI{0.337}{\nano\meter}, the interactions were too strong and the ethaline simulation froze\cite{Goloviznina2021}. The natural choice would be some value of $\sigma$ between \SI{0.337}{\nano\meter} and \SI{0.37}{\nano\meter}. In principle, there is no reason to adopt equal values for both $\ce{[Cl]^-}-\ce{OH}$ and $\ce{[Cl]^-}-\ce{OHG}$ atom pairs, since the LJ parameters were scaled differently to avoid double counting the effects of polarization when transforming the CL{\&}P force field into the CL{\&}Pol force field. \begin{table}[t!] \centering \caption{Experimental\cite{Yadav2015, HarifiMood2017, DAgostino2011, Chen2019_T} (Exp) and calculated properties (at \SI{323.15}{\kelvin}) of ethaline using the polarizable CL{\&}Pol and the non-polarizable CL{\&}P force fields.
Densities ($\rho$) are given in units of \SI{}{\gram\per\cubic\centi\metre}, viscosities ($\eta$) in \SI{}{cP}, diffusion coefficients ($D^{+}$, $D^{0}$, $D^{-}$) in \SI{e-11}{\meter\squared\per\second} and surface tension ($\gamma$) in \SI{}{mN.m^{-1}}.} \label{tbl:FF} \setlength{\tabcolsep}{2.3pt} \begin{tabular}{cccc} \toprule & Exp & CL{\&}Pol & CL{\&}P \\ \midrule $\rho$ & \tablenum{1.10} & \tablenum{1.114} ($\pm\num{0.002}$) & \tablenum{1.076} ($\pm\num{0.002}$) \\ $\eta$ & \tablenum{18.8} & \tablenum{18.5} ($\pm\num{1.2}$) & \tablenum{19.7} ($\pm\num{1.2}$) \\ $D^{+}$ & \tablenum{7.5} & \tablenum{4.6} ($\pm\num{0.5}$) & \tablenum{5.3} ($\pm\num{0.3}$) \\ $D^{0}$ & \tablenum{13.2} & \tablenum{13.3} ($\pm\num{0.6}$) & \tablenum{10.4} ($\pm\num{0.4}$) \\ $D^{-}$ & {-} & \tablenum{11.7} ($\pm\num{0.7}$) & \tablenum{7.9} ($\pm\num{0.5}$) \\ $\gamma$ & \tablenum{47.6} & \tablenum{48.1} ($\pm\num{3.1}$) & \tablenum{48.0} ($\pm\num{2.9}$) \\ \bottomrule \end{tabular} \end{table} To determine the appropriate $\sigma$ values for the $\ce{[Cl]^-}-\ce{OH}$ and $\ce{[Cl]^-}-\ce{OHG}$ pairs, our strategy was to reproduce the \textit{ab initio} reference RDFs at \SI{373.15}{\kelvin} available from Alizadeh \textit{et al.}\cite{Alizadeh2020}. We determined the values to be $\sigma$=\SI{0.345}{\nano\meter} and $\sigma$=\SI{0.356}{\nano\meter} for the $\ce{[Cl]^-}-\ce{OH}$ and $\ce{[Cl]^-}-\ce{OHG}$ pairs, respectively. The radial distribution functions are shown in Figure{~}\ref{fig:grs}. As the figure shows, the polarizable model is in excellent agreement with the \textit{ab initio} curves. The local structures of the $\ce{[Cl]^-}-\ce{OH}$ and $\ce{[Cl]^-}-\ce{OHG}$ correlations have maxima of \num{9.9} at \SI{2.05}{Å} and \num{7.7} at \SI{2.0}{Å}. The corresponding \textit{ab initio} maxima are \num{9.8} at \SI{2.10}{Å} and \num{7.6} at \SI{2.12}{Å}. For comparison, the RDFs from the reduced-charge CL{\&}P force field are also shown in Figure{~}\ref{fig:grs}.
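The RDFs compared here are ordinary pair-distance histograms normalized by the ideal-gas expectation. As a minimal, self-contained sketch for a cubic periodic box (illustrative only; not the analysis code used in this work):

```python
import math
import random

def rdf(positions, box, r_max, nbins):
    """Radial distribution function g(r) for one particle type in a cubic PBC box."""
    n = len(positions)
    dr = r_max / nbins
    hist = [0] * nbins
    for i in range(n):
        for j in range(i + 1, n):
            d2 = 0.0
            for a in range(3):
                dx = positions[i][a] - positions[j][a]
                dx -= box * round(dx / box)  # minimum-image convention
                d2 += dx * dx
            d = math.sqrt(d2)
            if d < r_max:
                hist[int(d / dr)] += 1
    rho = n / box ** 3
    g = []
    for k in range(nbins):
        shell = 4.0 / 3.0 * math.pi * (((k + 1) * dr) ** 3 - (k * dr) ** 3)
        ideal = rho * shell * n / 2.0  # expected pair count in an ideal gas
        g.append(hist[k] / ideal)
    return g

def random_points(n, box, seed=7):
    """Uniform random coordinates, a stand-in for trajectory frames."""
    rng = random.Random(seed)
    return [(rng.uniform(0, box), rng.uniform(0, box), rng.uniform(0, box))
            for _ in range(n)]
```

For an ideal (uniform) configuration, $g(r)$ fluctuates around \num{1}, while a structured liquid shows the kind of contact peaks discussed above.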
While the positions of the maxima are well reproduced, the peak intensities are suboptimal; the CL{\&}P force field predicts more intense peaks for the $\ce{[Cl]^-}-\ce{OHG}$ correlation than for the $\ce{[Cl]^-}-\ce{OH}$ correlation, while the CL{\&}Pol force field predicts the opposite, in agreement with the \textit{ab initio} data\cite{Alizadeh2020}. Importantly, the new $\sigma$ parameters correct the problem of the unphysical phase separation, as illustrated in Figure{~}\ref{fig:phase} (right). The system remained stable (mixed) during the whole \SI{70}{\nano\second} simulation. To check the robustness of this result, some basic properties (density, self-diffusion coefficients, viscosities and surface tension) of ethaline were calculated with both the polarizable and non-polarizable models and compared with available experimental data. We found good agreement with experimental data\cite{Yadav2015, HarifiMood2017, DAgostino2011, Chen2019_T}. The results are summarized in Table~\ref{tbl:FF} and the details of the computations are given in the Supporting Information. \subsection{Correcting Chlorides' Overpolarization} The \ce{[Cl]^-}-\ce{[Cl]^-} radial distribution functions (RDF) are shown in Figure{~}\ref{fig:grs_cl}. For comparison, the curve for the non-polarizable CL{\&}P force field is also included. The black curve shows the result with direct application of the fine-tuned LJ parameters of the CL{\&}Pol force field from the previous section. Two peaks, a principal one at \num{5.5}\,Å and a secondary one at \num{8.3}\,Å, are present. In contrast, the non-polarizable force field presents only one peak of relatively large width with the maximum at \num{7.2}\,Å. The blue curve shows the result for the CL{\&}Pol force field with both LJ parameters and overpolarization (using the TT damping function) corrected, and we discuss it next. \begin{figure}[t!]
\centering \includegraphics[width=1.00\linewidth]{Figures/gr_cl_cl_line3font30_MOD.eps} \caption{Radial distribution functions between the \ce{[Cl]^-}-\ce{[Cl]^-} anions at \SI{373.15}{\kelvin}. } \label{fig:grs_cl} \end{figure} We use the \textit{ab initio} RDF data of Alizadeh \textit{et al.}\cite{Alizadeh2020} as a reference (Figure~2 in their article). Their data are very similar to the non-polarizable CL{\&}P data, the red curve in Figure{~}\ref{fig:grs_cl}. This comparison indicates that the CL{\&}P force field correctly reproduces the RDF, while the polarizable CL{\&}Pol force field does not do so even after the LJ parameter correction (black curve in Figure{~}\ref{fig:grs_cl}). At first sight, that is somewhat surprising. Szabadi \textit{et al.}\cite{Szabadi2021} compared \textit{ab initio} and polarizable MD simulations of aqueous chloride-based ionic liquids and found that all of the investigated polarizable force fields (including CL{\&}Pol) overestimate the induced dipoles of the chlorides due to their high polarizability, $\alpha_{Cl}$=\num{4.4}\,{\AA}$^{3}$. The strong induced dipoles counteract the Coulomb repulsion between the chlorides. Consequently, Szabadi \textit{et al.} observed\cite{Szabadi2021} an alignment of chlorides with water molecules and formation of aggregates that would be unfavorable in the presence of properly balanced interactions. They attempted to rectify this overpolarization by reducing the chlorides' polarizability\cite{Szabadi2021}. Although the results improved, there were still inconsistencies, and the use of the TT damping function was suggested\cite{Szabadi2021}. To correct, or at least to attenuate, the above issue, we applied the TT damping function\cite{Tang1984} as suggested by Szabadi \textit{et al.}\cite{Szabadi2021}. This approach is simple and takes advantage of the fact that the TT potential is already included in the CL{\&}Pol force field definition\cite{Goloviznina2021} and implemented in LAMMPS.
The RDF after applying this correction is shown as the blue curve in Figure{~}\ref{fig:grs_cl}. The agreement with the non-polarizable curve (red), and hence the \textit{ab initio} reference data\cite{Alizadeh2020}, is much better: with the correction, the \ce{[Cl]^-}-\ce{[Cl]^-} RDF has only one peak at the expected position, although the peak is slightly wider and less sharp. \begin{figure}[b!] \centering \includegraphics[width=1.00\linewidth]{Figures/SK_Total_323K.eps} \caption{Total X-ray structure factors, $S(q)$, for bulk ethaline at \SI{323.15}{\kelvin}. } \label{fig:SK_Total} \end{figure} \begin{figure*}[t!] \centering \begin{minipage}[!]{0.49\linewidth} \includegraphics[width=1.00\linewidth]{Figures/SK_CHO-CHO_323K.eps} \end{minipage} \begin{minipage}[!]{0.49\linewidth} \includegraphics[width=1.00\linewidth]{Figures/SK_EG-EG_323K.eps} \end{minipage} \begin{minipage}[!]{0.49\linewidth} \includegraphics[width=1.00\linewidth]{Figures/SK_CHO-Cl_323K.eps} \end{minipage} \begin{minipage}[!]{0.49\linewidth} \includegraphics[width=1.00\linewidth]{Figures/SK_EG-Cl_323K.eps} \end{minipage} \begin{minipage}[!]{0.49\linewidth} \includegraphics[width=1.00\linewidth]{Figures/SK_CHO-EG_323K.eps} \end{minipage} \begin{minipage}[!]{0.49\linewidth} \includegraphics[width=1.00\linewidth]{Figures/SK_Cl-Cl_323K.eps} \end{minipage} \caption{Partial X-ray structure factors, $S(q)$, for bulk ethaline correlation components at \SI{323.15}{\kelvin} computed using Equation~\ref{eqn:SK_Partial}.
} \label{fig:SKs} \end{figure*} A deeper structural analysis was done by computing the total and the partial components of the X-ray structure factor\cite{Sharma2021}, $S(q)$, with the Travis\cite{Brehm2011, Brehm2020} software using \begin{center} \begingroup \small \thinmuskip=\muexpr\thinmuskip*1/9\relax \medmuskip=\muexpr\medmuskip*1/9\relax \begin{equation}\label{eqn:SK_Total} \begin{split} S(q) = \frac{\rho_{0}}{\left[\sum_{i=1}^{n}x_{i}f_{i}(q)\right]^{2}} \sum_{i=1}^{n}\sum_{j=1}^{n}x_{i}x_{j}f_{i}(q)f_{j}(q) \\ \times \int_{0}^{L/2}\num{4}\pi r^{2}[g_{ij}(r) - \num{1}] \frac{\sin{qr}}{qr}W(r)\,dr, \end{split} \end{equation} \endgroup\\ \end{center} where $\rho_{0}$ is the total number density, $x_{i}$ and $x_{j}$ are the molar fractions of atoms $i$ and $j$, $f_{i}(q)$ and $f_{j}(q)$ are tabulated X-ray atomic form factors, $L$ is the simulation box length, $g_{ij}(r)$ is the radial distribution function between atomic species including both intra- and intermolecular terms, and $W(r) = \sin(2\pi r/L)/(2\pi r/L)$ is a Lorch function, sometimes used to reduce the effects of the finite truncation of $g_{ij}(r)$ at large values of $r$. The partial components of $S(q)$ were computed using\cite{Kaur2019, Kaur2020} \begin{center} \begingroup \small \thinmuskip=\muexpr\thinmuskip*1/9\relax \medmuskip=\muexpr\medmuskip*1/9\relax \begin{equation}\label{eqn:SK_Partial} \begin{split} S(q) = S^{[CHO]^+-[CHO]^+}(q) + S^{[Cl]^--[Cl]^-}(q) \\ + S^{[EG]-[EG]}(q) + \num{2}S^{[CHO]^+-[Cl]^-}(q) \\ + \num{2}S^{[CHO]^+-[EG]}(q) + \num{2}S^{[Cl]^--[EG]}(q). \\ \end{split} \end{equation} \endgroup\\ \end{center} The total X-ray structure factors for ethaline are shown in Figure{~}\ref{fig:SK_Total}. There is a principal peak around \SI{14}{\per\nano\meter}, which corresponds to a distance of approximately $2\pi/14 \! \approx \! \SI{0.45}{\nano\meter}$.
Besides that, the secondary peak at \SI{26}{\per\nano\meter} and the valley at \SI{35}{\per\nano\meter} are more prominent with the polarizable model, both with and without the overpolarization correction. In turn, the non-polarizable CL{\&}P model shows total X-ray structure factors slightly shifted toward the lower $q$-vector region (higher real-space values). In general, the main contributions to the principal peak are from the \ce{[EG]}-\ce{[EG]} and \ce{[CHO]^+}-\ce{[EG]} correlations, as the partial components of $S(q)$ in Figure{~}\ref{fig:SKs} show. Another interesting observation is the presence of peaks and antipeaks in the different self- and cross-correlations around \SI{10}{\per\nano\meter}, suggesting a pseudo-charge-ordering similar to what has been found for ILs\cite{Sharma2021, Hettige2012} and also previously reported for DESs\cite{Kaur2019, Kaur2020}. Those peaks are due to the \ce{[CHO]^+}-\ce{[CHO]^+}, \ce{[EG]}-\ce{[Cl]^-}, and \ce{[Cl]^-}-\ce{[Cl]^-} correlations, while the antipeaks are mainly due to the \ce{[CHO]^+}-\ce{[Cl]^-} correlation. \begin{figure*}[t!] \centering \includegraphics[width=1.00\linewidth]{Figures/Tripla3.eps} \caption{Representative snapshot of ethaline simulated with (A) CL{\&}Pol, (B) CL{\&}Pol + LJ + TT, and (C) CL{\&}P + LJ. The images illustrate the degree of long-range ordering in the system. Choline ions and ethylene glycol molecules are depicted in blue and red, respectively. } \label{fig:nanohetero} \end{figure*} At first sight, the overpolarization correction does not seem to have a considerable impact on the liquid structure in Figure{~}\ref{fig:SK_Total}. However, analysis of the partial components of $S(q)$ shows remarkable differences, as seen in Figure{~}\ref{fig:SKs}. The first one concerns the \ce{[Cl]^-}-\ce{[Cl]^-} correlation, as may be expected based on the \ce{[Cl]^-}-\ce{[Cl]^-} RDFs. Without the overpolarization correction, there is more than one peak in the range between \num{10}-\SI{15}{\per\nano\meter}.
In addition, the peaks are considerably smaller than the single peak around \SI{10}{\per\nano\meter} for the CL{\&}P force field and the corrected CL{\&}Pol force field. When overpolarization is present, the notable difference is the presence of prepeaks and preantipeaks at \SI{2.4}{\per\nano\meter} in all self- and cross-correlations. This corresponds to a distance of $2\pi/2.4 \!\! \approx \! \! \SI{2.62}{\nano\meter}$, indicating long-range structural ordering or nano-heterogeneities. In contrast, when overpolarization is corrected, there are less intense prepeaks and preantipeaks and they appear only in the \ce{[CHO]^+}-\ce{[CHO]^+}, \ce{[EG]}-\ce{[EG]}, and \ce{[CHO]^+}-\ce{[EG]} correlations. In addition, the position of the prepeaks and preantipeaks changes to \SI{4.3}{\per\nano\meter}, corresponding to a distance of $2\pi/4.3 \! \approx \! \SI{1.46}{\nano\meter}$. This is roughly half of the distance observed when the overpolarization is present. The non-polarizable model does not show any prepeaks or preantipeaks at low $q$ values, that is, no long-range ordering. Aiming to understand the differences in long-range ordering between the models, representative snapshots of the systems (with some periodic copies included) are shown in Figure{~}\ref{fig:nanohetero}. In the case of the CL{\&}Pol force field without overpolarization correction (Fig.~\ref{fig:nanohetero}A), the presence of choline-rich (blue) and ethylene glycol-rich (red) nano-heterogeneities is evident. In the case of the non-polarizable model (Fig.~\ref{fig:nanohetero}C), the choline ions and ethylene glycol molecules are uniformly spread throughout the simulation box, without any noticeable domains. In turn, when overpolarization is corrected in the CL{\&}Pol force field (Fig.~\ref{fig:nanohetero}B), the situation is intermediate to the other two cases.
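All of the peak-position-to-distance conversions in this discussion follow from $d = 2\pi/q$; as a one-line helper (ours, for illustration):

```python
import math

def q_to_distance(q):
    """Real-space correlation length d = 2*pi/q for a structure-factor
    peak position q given in nm^-1 (result in nm)."""
    return 2.0 * math.pi / q
```

For example, the principal peak at \SI{14}{\per\nano\meter} maps to roughly \SI{0.45}{\nano\meter}, and the prepeak at \SI{4.3}{\per\nano\meter} to roughly \SI{1.46}{\nano\meter}.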
In summary, Figure{~}\ref{fig:nanohetero} provides a visual counterpart to the above discussion of the prepeaks and preantipeaks in the very low $q$ region. \begin{table*}[t!] \centering \caption{Average surface area (\si{Å^2}) shared between components of ethaline at \SI{323.15}{\kelvin} obtained through Voronoi tessellation. The entries show the average area that a \textit{single} ion/molecule shares with the other component of the contact pair. The standard deviation is equal to \SI{0.1}{Å^2}.} \label{tbl:Voro} \begin{tabular}{cccc} \toprule Contact Pair & CL{\&}Pol + LJ + TT & CL{\&}Pol + LJ & CL{\&}P \\ \midrule $\ce{[Cl]^-}-\ce{[Cl]^-}$ & \tablenum{0} & \tablenum{0} & \tablenum{0} \\ $\ce{[Cl]^-}-\ce{[CHO]^+}$ & \tablenum{19.9} & \tablenum{20.7} & \tablenum{19.5} \\ $\ce{[Cl]^-}-\ce{[EG]}$ & \tablenum{16.9} & \tablenum{14.9} & \tablenum{17.2} \\ $\ce{[CHO]^+}-\ce{[CHO]^+}$ & \tablenum{409.1} & \tablenum{420.5} & \tablenum{404.6} \\ $\ce{[CHO]^+}-\ce{[EG]}$ & \tablenum{108.6} & \tablenum{95.1} & \tablenum{113.7} \\ $\ce{[EG]}-\ce{[EG]}$ & \tablenum{211.4} & \tablenum{219.5} & \tablenum{208.5} \\ \bottomrule \end{tabular} \end{table*} A few MD simulations of ethaline have been performed before\cite{Alizadeh2020, Ferreira2016, Zhang2020, Kaur2019}. However, the X-ray scattering structure factors were computed only by Kaur \textit{et al.}\cite{Kaur2019}, who also found peaks and antipeaks in the low $q$ region, around \SI{5}{\per\nano\meter}, corresponding to a distance of $2\pi/5 \approx \SI{1.25}{\nano\meter}$. This indicates some degree of nano-heterogeneity similar to what we have obtained after the overpolarization correction with the TT damping function. Nano-heterogeneity was also found by Zhang \textit{et al.}\cite{Zhang2020}, who computed protiated and deuterated neutron scattering structure factors using the Generalized Amber Force Field\cite{Wang2004, Sprenger2015} (GAFF) with scaled charges and compared them with experimental data. 
Although the match between simulation and experiment was not perfect, it was still good and showed specific structural correlations at all length scales, including the low $q$ region. Another interesting work was performed by Alizadeh \textit{et al.}\cite{Alizadeh2019}, who simulated choline chloride and some of its derivatives with elongated alkyl chains in the presence of ethylene glycol molecules. They also found noticeable heterogeneity across the systems. In addition, the larger the alkyl chain, the stronger the heterogeneity. In general, as reviewed by Kaur \textit{et al.}\cite{Kaur2020}, DESs should exhibit some heterogeneity at the nanoscale, similar to ILs. Considering the results in Figures~\ref{fig:SKs} and \ref{fig:nanohetero} and the above discussion, MD simulations of ethaline should show some, albeit limited, degree of nano-heterogeneity. However, nano-heterogeneity is overestimated by the CL{\&}Pol force field, as seen in Figure{~}\ref{fig:nanohetero}A. As will be discussed below, \ce{[CHO]^+}-rich complexes are preferentially formed due to the over-strong induced chloride dipoles. When overpolarization is corrected, \ce{[EG]}-rich complexes are preferentially formed. Both are, however, mediated by chlorides. To quantify the above, the Voronoi tessellation\cite{Voronoi1908} technique was applied using the Voro++ package\cite{Rycroft2009} implemented in LAMMPS; a Voronoi cell defines the region (volume) that is closer to a given particle than to any other particle. Since the Voronoi cell of each molecule shares facets with those of other molecules, an average contact area between components can be computed. For this calculation, we removed all the Drude particles to allow for a direct comparison with the non-polarizable model, as otherwise there would be artificial facets due to the Drude particles and hence systematically higher areas. The results are presented in Table~\ref{tbl:Voro}. \begin{figure}[b!] 
\centering \includegraphics[width=1.00\linewidth]{Figures/Probabilidade_x_Complexos_Triplo_UPDATED.eps} \caption{Probability of simultaneous coordination of different numbers of hydroxyls from choline and from ethylene glycol around a single chloride anion. } \label{fig:prob_clusters} \end{figure} \begin{table*}[t!] \centering \caption{Probabilities (\textbf{P}) of different coordination numbers (\textbf{CN}) of hydroxyls simultaneously coordinated with a single \ce{[Cl]^-} anion.} \label{tbl:ProbCN} \begin{tabular}{ccccc} \toprule \textbf{CN} & \textbf{CN} & \textbf{P(\%)} & \textbf{P(\%)} & \textbf{P(\%)} \\ \midrule $\ce{[Cl]^-}-\ce{HO}$ & $\ce{[Cl]^-}-\ce{HOG}$ & CL{\&}Pol + LJ + TT & CL{\&}Pol + LJ & CL{\&}P \\ \midrule \tablenum{0} & \tablenum{0} & \tablenum{2.8} & \tablenum{8.2} & \tablenum{6.4} \\ \tablenum{0} & \tablenum{1} & \tablenum{4.9} & \tablenum{12.2} & \tablenum{7.2} \\ \tablenum{0} & \tablenum{2} & \tablenum{10.1} & \tablenum{15.8} & \tablenum{14.7} \\ \tablenum{0} & \tablenum{3} & \tablenum{11.5} & \tablenum{12.2} & \tablenum{12.0} \\ \tablenum{0} & \tablenum{4} & \tablenum{13.2} & \tablenum{1.9} & \tablenum{10.9} \\ \tablenum{0} & \tablenum{5} & \tablenum{7.0} & \tablenum{2.5} & \tablenum{5.6} \\ \tablenum{0} & \tablenum{6} & \tablenum{1.8} & \tablenum{0.3} & \tablenum{1.0} \\ \tablenum{1} & \tablenum{0} & \tablenum{3.1} & \tablenum{9.5} & \tablenum{5.0} \\ \tablenum{1} & \tablenum{1} & \tablenum{4.1} & \tablenum{9.5} & \tablenum{5.0} \\ \tablenum{1} & \tablenum{2} & \tablenum{10.9} & \tablenum{9.2} & \tablenum{11.0} \\ \tablenum{1} & \tablenum{3} & \tablenum{7.7} & \tablenum{1.9} & \tablenum{5.3} \\ \tablenum{1} & \tablenum{4} & \tablenum{6.0} & \tablenum{1.3} & \tablenum{5.2} \\ \tablenum{1} & \tablenum{5} & \tablenum{4.5} & \tablenum{0.2} & \tablenum{3.5} \\ \tablenum{1} & \tablenum{6} & \tablenum{0.1} & \tablenum{0.0} & \tablenum{0.0} \\ \tablenum{2} & \tablenum{0} & \tablenum{2.6} & \tablenum{5.9} & \tablenum{2.0} \\ \tablenum{2} & 
\tablenum{1} & \tablenum{2.2} & \tablenum{3.9} & \tablenum{1.5} \\ \tablenum{2} & \tablenum{2} & \tablenum{3.5} & \tablenum{1.9} & \tablenum{2.2} \\ \tablenum{2} & \tablenum{3} & \tablenum{1.8} & \tablenum{0.4} & \tablenum{0.7} \\ \tablenum{2} & \tablenum{4} & \tablenum{0.7} & \tablenum{0.0} & \tablenum{0.3} \\ \tablenum{2} & \tablenum{5} & \tablenum{0.0} & \tablenum{0.0} & \tablenum{0.0} \\ \tablenum{2} & \tablenum{6} & \tablenum{0.0} & \tablenum{0.0} & \tablenum{0.0} \\ \tablenum{3} & \tablenum{0} & \tablenum{1.0} & \tablenum{1.5} & \tablenum{0.0} \\ \tablenum{3} & \tablenum{1} & \tablenum{0.2} & \tablenum{1.2} & \tablenum{0.2} \\ \tablenum{3} & \tablenum{2} & \tablenum{0.3} & \tablenum{0.5} & \tablenum{0.3} \\ \tablenum{3} & 3--6 & \tablenum{0.0} & \tablenum{0.0} & \tablenum{0.0} \\ \tablenum{4} & 0--6 & \tablenum{0.0} & \tablenum{0.0} & \tablenum{0.0} \\ \bottomrule \end{tabular} \end{table*} The average surface areas of the \ce{[CHO]^+}-\ce{[CHO]^+} and \ce{[EG]}-\ce{[EG]} contact pairs are \SI{420.5}{Å^2} and \SI{219.5}{Å^2}, respectively, for the CL{\&}Pol force field without the overpolarization correction. These values are larger than the corresponding values of \SI{409.1}{Å^2} and \SI{211.4}{Å^2} from the CL{\&}Pol force field with the overpolarization corrected, due to the overestimated segregation of the long-range ordering, as seen in Figure{~}\ref{fig:nanohetero}. Moreover, without the overpolarization correction, the chlorides share more area with the choline cations (\SI{20.7}{Å^2}) and less area with the ethylene glycol molecules (\SI{14.9}{Å^2}) when compared to the CL{\&}P (\SI{19.5}{Å^2}; \SI{17.2}{Å^2}) and the CL{\&}Pol with the overpolarization corrected (\SI{19.9}{Å^2}; \SI{16.9}{Å^2}). 
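The contact areas in Table~\ref{tbl:Voro} were obtained with the Voro++ package in LAMMPS; the underlying idea can be illustrated with a two-dimensional SciPy analog, in which the shared facet area between two Voronoi cells becomes a shared ridge length. The points and species labels below are synthetic stand-ins, not simulation data:

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(1)
points = rng.uniform(0.0, 10.0, size=(200, 2))          # hypothetical positions
species = rng.choice(["CHO", "EG", "Cl"], size=200, p=[0.4, 0.4, 0.2])

vor = Voronoi(points)

# Accumulate the total ridge length shared between each pair of species;
# in 3D this would be the facet area between neighboring Voronoi cells.
contact: dict[tuple, float] = {}
for (i, j), ridge in zip(vor.ridge_points, vor.ridge_vertices):
    if -1 in ridge:                  # skip ridges extending to infinity
        continue
    v0, v1 = vor.vertices[ridge]
    length = float(np.linalg.norm(v1 - v0))
    pair = tuple(sorted((species[i], species[j])))
    contact[pair] = contact.get(pair, 0.0) + length

for pair, total in sorted(contact.items()):
    print(pair, round(total, 1))
```

In the production calculation a periodic 3D tessellation is used and the totals are normalized per ion/molecule, as in Table~\ref{tbl:Voro}.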
As suggested by these values, when overpolarization is present, chloride-mediated \ce{[CHO]^+} complexes are favored due to larger \ce{[Cl]^-}-\ce{[CHO]^+} and \ce{[CHO]^+}-\ce{[CHO]^+} areas, while when overpolarization is corrected, chloride-mediated \ce{[EG]} complexes are favored. To confirm the above, we calculated the probabilities of different numbers of hydroxyls from the \ce{[CHO]^+} and \ce{[EG]} species being simultaneously coordinated to a single \ce{[Cl]^-} anion. Each \ce{[CHO]^+} cation contributes up to one hydroxyl group and each \ce{[EG]} molecule contributes up to two hydroxyls. The criterion used to assign coordination is the same as that defining a standard hydrogen bond, with default values adopted from the Visual Molecular Dynamics (VMD) software\cite{Humphrey1996}. The results are presented graphically in Figure{~}\ref{fig:prob_clusters} and each probability is given in Table~\ref{tbl:ProbCN}. Most of the points in Figure{~}\ref{fig:prob_clusters} have very low or even zero probabilities. These points correspond to cases where a \ce{[Cl]^-} ion would be simultaneously coordinated by many hydroxyls; in such cases, the hydroxyls cannot all satisfy, at the same time, the distance and directionality conditions required to coordinate around the same chloride. In turn, many of the points with higher probabilities for the CL{\&}Pol force field with and without the overpolarization correction are located in distinct regions of Figure{~}\ref{fig:prob_clusters}: middle top to right (black points) and middle top to left (blue points). These regions reflect the following: as the number of \ce{HOG} groups coordinated to a \ce{[Cl]^-} anion increases, the probabilities become higher for the CL{\&}Pol force field with the overpolarization corrected than for the CL{\&}Pol without the correction. 
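The probabilities in Table~\ref{tbl:ProbCN} amount to a joint histogram over the per-chloride coordination numbers. A minimal sketch of that bookkeeping, with synthetic Poisson-distributed counts standing in for the real per-frame neighbor counts (which come from the geometric hydrogen-bond criterion described above):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-chloride coordination numbers (NOT simulation data):
# column 0 = hydroxyls from choline (HO), column 1 = from EG (HOG).
counts = np.column_stack([
    rng.poisson(0.8, size=10_000),   # hypothetical HO coordination numbers
    rng.poisson(2.5, size=10_000),   # hypothetical HOG coordination numbers
])

# Joint probability P(CN_HO = i, CN_HOG = j), as tabulated in the text.
max_ho, max_hog = 4, 6
joint = np.zeros((max_ho + 1, max_hog + 1))
for ho, hog in counts:
    if ho <= max_ho and hog <= max_hog:
        joint[ho, hog] += 1
joint /= len(counts)

print((100 * joint).round(1))   # percentages; rows = CN_HO, columns = CN_HOG
```

Summing selected rows or columns of such a table gives the aggregated probabilities of \ce{[EG]}-rich versus \ce{[CHO]^+}-rich single-chloride complexes discussed below.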
In contrast, when the number of \ce{HOG} groups decreases or when the number of \ce{HO} groups increases, the probabilities become lower for the CL{\&}Pol force field with the overpolarization corrected and higher without the correction. For instance, consulting Table~\ref{tbl:ProbCN}, when $\ce{[Cl]^-}-\ce{HO}$ $= 1$ and $\ce{[Cl]^-}-\ce{HOG}$ $= 3$, the probabilities are \num{7.7}$\%$ and \num{1.3}$\%$ with and without the overpolarization correction, respectively. On the other hand, when $\ce{[Cl]^-}-\ce{HO}$ $= 1$ and $\ce{[Cl]^-}-\ce{HOG}$ $= 1$, the probabilities are \num{4.1}$\%$ and \num{9.5}$\%$, respectively. In the end, considering that there are many possible combinations of single-chloride-mediated complexes, the impact on the summed probabilities is not negligible: \ce{[EG]}-rich complexes are preferentially found when the overpolarization is corrected and \ce{[CHO]^+}-rich complexes are favored when it is not, corroborating the discussion of the average surface areas in Table~\ref{tbl:Voro} and the overestimated nano-heterogeneity found in the low $q$ region of Figure{~}\ref{fig:SKs}. \section{Conclusion} In this work, we focused on correcting two problems present in the polarizable CL{\&}Pol force field for the deep eutectic solvent ethaline\cite{Goloviznina2019,Goloviznina2021}. Our simulations showed (1) unphysical phase separation in long simulations and (2) the appearance of artificial nanoscale heterogeneities. Two different corrections were needed, the first involving the Lennard-Jones parameters and the second the inclusion of the Tang--Toennies damping function\cite{Tang1984} to correct for overpolarization. The first problem has its origin in the so-called ``naked'' hydrogens, that is, hydrogen atoms that do not have LJ parameters. This leads to unrealistically strong interactions after the addition of the Drude particles. 
To correct for this, we balanced the interactions of the \ce{[Cl]^-} anions with the hydroxyls from the \ce{[CHO]^+} cations and \ce{[EG]} molecules. The values we propose here are $\sigma$=\SI{0.345}{\nano\meter} and $\sigma$=\SI{0.356}{\nano\meter} for the $\ce{[Cl]^-}-\ce{OH}$ and $\ce{[Cl]^-}-\ce{OHG}$ pairs, respectively. With these, the \textit{ab initio} reference RDFs\cite{Alizadeh2020} were reproduced and no artificial phase separation occurred. We would like to add a word of caution, however: when simulating other DESs, it is not clear that the $\sigma$ parameters are directly transferable. The second issue, overpolarization of chlorides, has its physical origin in the high polarizability of chloride, $\alpha_{Cl}$=\SI{4.4}{Å^3}. When not properly corrected, overpolarization leads to the appearance of unphysical nanoscale heterogeneities manifested as prepeaks and preantipeaks at \SI{2.4}{\per\nano\meter} in all self- and cross-correlations of the partial structure factors. We corrected the overpolarization by applying the Tang--Toennies damping function\cite{Tang1984} to the interactions of the induced chloride dipoles. This was originally suggested by Szabadi \textit{et al.}\cite{Szabadi2021} but not applied. The correction removed the artificial structural heterogeneity, and the \ce{[Cl]^-}-\ce{[Cl]^-} RDF became similar to the \textit{ab initio} reference curve. \section{Data and Software Availability} All the force field parameters for ethaline are available on GitHub from the developers of the CL{\&}Pol force field (\url{https://github.com/kateryna-goloviznina/desff/tree/master/example_pol-des}). The updated values for the $\sigma$ parameter of the $\ce{[Cl]^-}-\ce{OH}$ and $\ce{[Cl]^-}-\ce{OHG}$ interactions are $\sigma$=\SI{0.345}{\nano\meter} and $\sigma$=\SI{0.356}{\nano\meter}, respectively. All software used in this work (LAMMPS, VMD, Travis, and Packmol) is freely available on the internet. 
Structures of the systems or raw MD data are available upon request. \begin{acknowledgement} R.M.d.S. and M.C.C.R. thank FAPESP (The S\~{a}o Paulo Research Foundation) grants Process 2020/06766-9 and 2016/21070-5. M.K. thanks the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Canada Research Chairs Program. Computational resources were provided by USP High Performance Computing (USP-HPC), SDumont (https://sdumont.lncc.br) from the "Laboratório Nacional de Computação Científica (LNCC/MCTI, Brazil)", and Compute Canada (www.computecanada.ca). This research was funded by the Ministry of Education and Science of the Russian Federation (contract RF----225121X0043). R.M.d.S. also thanks Vahideh Alizadeh and Barbara Kirchner for sharing the \textit{ab initio} data from Figure{~}\ref{fig:grs}. \end{acknowledgement} \begin{suppinfo} Details of how density, viscosity, self-diffusion coefficients and surface tension were computed. \end{suppinfo} \providecommand{\latin}[1]{#1} \makeatletter \providecommand{\doi} {\begingroup\let\do\@makeother\dospecials \catcode`\{=1 \catcode`\}=2 \doi@aux} \providecommand{\doi@aux}[1]{\endgroup\texttt{#1}} \makeatother \providecommand*\mcitethebibliography{\thebibliography} \csname @ifundefined\endcsname{endmcitethebibliography} {\let\endmcitethebibliography\endthebibliography}{} \begin{mcitethebibliography}{75} \providecommand*\natexlab[1]{#1} \providecommand*\mciteSetBstSublistMode[1]{} \providecommand*\mciteSetBstMaxWidthForm[2]{} \providecommand*\mciteBstWouldAddEndPuncttrue {\def\unskip.}{\unskip.}} \providecommand*\mciteBstWouldAddEndPunctfalse {\let\unskip.}\relax} \providecommand*\mciteSetBstMidEndSepPunct[3]{} \providecommand*\mciteSetBstSublistLabelBeginEnd[3]{} \providecommand*\unskip.}{} \mciteSetBstSublistMode{f} \mciteSetBstMaxWidthForm{subitem}{(\alph{mcitesubitemcount})} \mciteSetBstSublistLabelBeginEnd {\mcitemaxwidthsubitemform\space} {\relax} {\relax} \bibitem[Smith \latin{et~al.}(2014)Smith, 
Abbott, and Ryder]{Smith2014} Smith,~E.~L.; Abbott,~A.~P.; Ryder,~K.~S. Deep Eutectic Solvents ({DESs}) and Their Applications. \emph{Chemical Reviews} \textbf{2014}, \emph{114}, 11060--11082, DOI: \doi{10.1021/cr300162p}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Hansen \latin{et~al.}(2020)Hansen, Spittle, Chen, Poe, Zhang, Klein, Horton, Adhikari, Zelovich, Doherty, Gurkan, Maginn, Ragauskas, Dadmun, Zawodzinski, Baker, Tuckerman, Savinell, and Sangoro]{Hansen2020} Hansen,~B.~B.; Spittle,~S.; Chen,~B.; Poe,~D.; Zhang,~Y.; Klein,~J.~M.; Horton,~A.; Adhikari,~L.; Zelovich,~T.; Doherty,~B.~W.; Gurkan,~B.; Maginn,~E.~J.; Ragauskas,~A.; Dadmun,~M.; Zawodzinski,~T.~A.; Baker,~G.~A.; Tuckerman,~M.~E.; Savinell,~R.~F.; Sangoro,~J.~R. Deep Eutectic Solvents: A Review of Fundamentals and Applications. \emph{Chemical Reviews} \textbf{2020}, \emph{121}, 1232--1285, DOI: \doi{10.1021/acs.chemrev.0c00385}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Dong \latin{et~al.}(2017)Dong, Liu, Dong, Zhang, and Zhang]{Dong2017} Dong,~K.; Liu,~X.; Dong,~H.; Zhang,~X.; Zhang,~S. Multiscale Studies on Ionic Liquids. \emph{Chemical Reviews} \textbf{2017}, \emph{117}, 6636--6695, DOI: \doi{10.1021/acs.chemrev.6b00776}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Singh and Savoy(2020)Singh, and Savoy]{Singh2020} Singh,~S.~K.; Savoy,~A.~W. Ionic liquids synthesis and applications: An overview. 
\emph{Journal of Molecular Liquids} \textbf{2020}, \emph{297}, 112038, DOI: \doi{10.1016/j.molliq.2019.112038}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Carriazo \latin{et~al.}(2012)Carriazo, Serrano, Guti{\'{e}}rrez, Ferrer, and del Monte]{Carriazo2012} Carriazo,~D.; Serrano,~M.~C.; Guti{\'{e}}rrez,~M.~C.; Ferrer,~M.~L.; del Monte,~F. Deep-eutectic solvents playing multiple roles in the synthesis of polymers and related materials. \emph{Chemical Society Reviews} \textbf{2012}, \emph{41}, 4996, DOI: \doi{10.1039/c2cs15353j}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Wu \latin{et~al.}(2021)Wu, Liang, Yu, L\"{u}, Ma, Qin, Chen, and Li]{Wu2021} Wu,~J.; Liang,~Q.; Yu,~X.; L\"{u},~Q.-F.; Ma,~L.; Qin,~X.; Chen,~G.; Li,~B. Deep Eutectic Solvents for Boosting Electrochemical Energy Storage and Conversion: A Review and Perspective. \emph{Advanced Functional Materials} \textbf{2021}, \emph{31}, 2011102, DOI: \doi{10.1002/adfm.202011102}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Cunha and Fernandes(2018)Cunha, and Fernandes]{Cunha2018} Cunha,~S.~C.; Fernandes,~J.~O. Extraction techniques with deep eutectic solvents. \emph{{TrAC} Trends in Analytical Chemistry} \textbf{2018}, \emph{105}, 225--239, DOI: \doi{10.1016/j.trac.2018.05.001}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Chen and Mu(2019)Chen, and Mu]{Chen2019} Chen,~Y.; Mu,~T. Application of deep eutectic solvents in biomass pretreatment and conversion. 
\emph{Green Energy {\&} Environment} \textbf{2019}, \emph{4}, 95--115, DOI: \doi{10.1016/j.gee.2019.01.012}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Alizadeh \latin{et~al.}(2020)Alizadeh, Malberg, P{\'{a}}dua, and Kirchner]{Alizadeh2020} Alizadeh,~V.; Malberg,~F.; P{\'{a}}dua,~A. A.~H.; Kirchner,~B. Are There Magic Compositions in Deep Eutectic Solvents? Effects of Composition and Water Content in Choline Chloride/Ethylene Glycol from Ab Initio Molecular Dynamics. \emph{The Journal of Physical Chemistry B} \textbf{2020}, \emph{124}, 7433--7443, DOI: \doi{10.1021/acs.jpcb.0c04844}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Martins \latin{et~al.}(2018)Martins, Pinho, and Coutinho]{Martins2018} Martins,~M. A.~R.; Pinho,~S.~P.; Coutinho,~J. A.~P. Insights into the Nature of Eutectic and Deep Eutectic Mixtures. \emph{Journal of Solution Chemistry} \textbf{2018}, \emph{48}, 962--982, DOI: \doi{10.1007/s10953-018-0793-1}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kaur \latin{et~al.}(2020)Kaur, Kumari, and Kashyap]{Kaur2020} Kaur,~S.; Kumari,~M.; Kashyap,~H.~K. Microstructure of Deep Eutectic Solvents: Current Understanding and Challenges. \emph{The Journal of Physical Chemistry B} \textbf{2020}, \emph{124}, 10601--10616, DOI: \doi{10.1021/acs.jpcb.0c07934}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[de~Castilla \latin{et~al.}(2019)de~Castilla, Bittner, M\"{u}ller, Jakobtorweihen, and Smirnova]{GonzlezdeCastilla2019} de~Castilla,~A.~G.; Bittner,~J.~P.; M\"{u}ller,~S.; Jakobtorweihen,~S.; Smirnova,~I. 
Thermodynamic and Transport Properties Modeling of Deep Eutectic Solvents: A Review on {gE}-Models, Equations of State, and Molecular Dynamics. \emph{Journal of Chemical {\&} Engineering Data} \textbf{2019}, \emph{65}, 943--967, DOI: \doi{10.1021/acs.jced.9b00548}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Bedrov \latin{et~al.}(2019)Bedrov, Piquemal, Borodin, MacKerell, Roux, and Schr\"{o}der]{Bedrov2019} Bedrov,~D.; Piquemal,~J.-P.; Borodin,~O.; MacKerell,~A.~D.; Roux,~B.; Schr\"{o}der,~C. Molecular Dynamics Simulations of Ionic Liquids and Electrolytes Using Polarizable Force Fields. \emph{Chemical Reviews} \textbf{2019}, \emph{119}, 7940--7995, DOI: \doi{10.1021/acs.chemrev.8b00763}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Garc{\'{\i}}a \latin{et~al.}(2015)Garc{\'{\i}}a, Atilhan, and Aparicio]{Garca2015} Garc{\'{\i}}a,~G.; Atilhan,~M.; Aparicio,~S. The impact of charges in force field parameterization for molecular dynamics simulations of deep eutectic solvents. \emph{Journal of Molecular Liquids} \textbf{2015}, \emph{211}, 506--514, DOI: \doi{10.1016/j.molliq.2015.07.070}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Kohagen \latin{et~al.}(2011)Kohagen, Brehm, Thar, Zhao, M\"uller-Plathe, and Kirchner]{Kohagen2011} Kohagen,~M.; Brehm,~M.; Thar,~J.; Zhao,~W.; M\"uller-Plathe,~F.; Kirchner,~B. Performance of Quantum Chemically Derived Charges and Persistence of Ion Cages in Ionic Liquids. A Molecular Dynamics Simulations Study of 1-n-Butyl-3-methylimidazolium Bromide. 
\emph{The Journal of Physical Chemistry B} \textbf{2011}, \emph{115}, 693--702, DOI: \doi{10.1021/jp109612k}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Dommert \latin{et~al.}(2010)Dommert, Schmidt, Krekeler, Zhao, Berger, Site, and Holm]{Dommert2010} Dommert,~F.; Schmidt,~J.; Krekeler,~C.; Zhao,~Y.~Y.; Berger,~R.; Site,~L.~D.; Holm,~C. Towards multiscale modeling of ionic liquids: From electronic structure to bulk properties. \emph{Journal of Molecular Liquids} \textbf{2010}, \emph{152}, 2--8, DOI: \doi{10.1016/j.molliq.2009.06.014}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Schmidt \latin{et~al.}(2010)Schmidt, Krekeler, Dommert, Zhao, Berger, Site, and Holm]{Schmidt2010} Schmidt,~J.; Krekeler,~C.; Dommert,~F.; Zhao,~Y.; Berger,~R.; Site,~L.~D.; Holm,~C. Ionic Charge Reduction and Atomic Partial Charges from First-Principles Calculations of 1, 3-Dimethylimidazolium Chloride. \emph{The Journal of Physical Chemistry B} \textbf{2010}, \emph{114}, 6150--6155, DOI: \doi{10.1021/jp910771q}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Schr\"{o}der and Steinhauser(2008)Schr\"{o}der, and Steinhauser]{Schrder2008} Schr\"{o}der,~C.; Steinhauser,~O. The influence of electrostatic forces on the structure and dynamics of molecular ionic liquids. \emph{The Journal of Chemical Physics} \textbf{2008}, \emph{128}, 224503, DOI: \doi{10.1063/1.2929848}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Cieplak \latin{et~al.}(2009)Cieplak, Dupradeau, Duan, and Wang]{Cieplak2009} Cieplak,~P.; Dupradeau,~F.-Y.; Duan,~Y.; Wang,~J. 
Polarization effects in molecular mechanical force fields. \emph{Journal of Physics: Condensed Matter} \textbf{2009}, \emph{21}, 333102, DOI: \doi{10.1088/0953-8984/21/33/333102}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Lemkul \latin{et~al.}(2016)Lemkul, Huang, Roux, and MacKerell]{Lemkul2016} Lemkul,~J.~A.; Huang,~J.; Roux,~B.; MacKerell,~A.~D. An Empirical Polarizable Force Field Based on the Classical Drude Oscillator Model: Development History and Recent Applications. \emph{Chemical Reviews} \textbf{2016}, \emph{116}, 4983--5013, DOI: \doi{10.1021/acs.chemrev.5b00505}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Chaumont \latin{et~al.}(2020)Chaumont, Engler, and Schurhammer]{Chaumont2020} Chaumont,~A.; Engler,~E.; Schurhammer,~R. Is Charge Scaling Really Mandatory when Developing Fixed-Charge Atomistic Force Fields for Deep Eutectic Solvents? \emph{The Journal of Physical Chemistry B} \textbf{2020}, \emph{124}, 7239--7250, DOI: \doi{10.1021/acs.jpcb.0c04907}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[K\"{o}ddermann \latin{et~al.}(2007)K\"{o}ddermann, Paschek, and Ludwig]{Kddermann2007} K\"{o}ddermann,~T.; Paschek,~D.; Ludwig,~R. Molecular Dynamic Simulations of Ionic Liquids: A Reliable Description of Structure, Thermodynamics and Dynamics. 
\emph{{ChemPhysChem}} \textbf{2007}, \emph{8}, 2464--2470, DOI: \doi{10.1002/cphc.200700552}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Zhang \latin{et~al.}(2020)Zhang, Poe, Heroux, Squire, Doherty, Long, Dadmun, Gurkan, Tuckerman, and Maginn]{Zhang2020} Zhang,~Y.; Poe,~D.; Heroux,~L.; Squire,~H.; Doherty,~B.~W.; Long,~Z.; Dadmun,~M.; Gurkan,~B.; Tuckerman,~M.~E.; Maginn,~E.~J. Liquid Structure and Transport Properties of the Deep Eutectic Solvent Ethaline. \emph{The Journal of Physical Chemistry B} \textbf{2020}, \emph{124}, 5251--5264, DOI: \doi{10.1021/acs.jpcb.0c04058}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[Perkins \latin{et~al.}(2014)Perkins, Painter, and Colina]{Perkins2014} Perkins,~S.~L.; Painter,~P.; Colina,~C.~M. Experimental and Computational Studies of Choline Chloride-Based Deep Eutectic Solvents. \emph{Journal of Chemical {\&} Engineering Data} \textbf{2014}, \emph{59}, 3652--3662, DOI: \doi{10.1021/je500520h}\relax \mciteBstWouldAddEndPuncttrue \mciteSetBstMidEndSepPunct{\mcitedefaultmidpunct} {\mcitedefaultendpunct}{\mcitedefaultseppunct}\relax \unskip.} \bibitem[de~Souza \latin{et~al.}(2021)de~Souza, Louren{\c{c}}o, de~Siqueira, Karttunen, Silva, and Dias]{deSouza2021} de~Souza,~R.~M.; Louren{\c{c}}o,~T.~C.; de~Siqueira,~L. J.~A.; Karttunen,~M.; Silva,~J. L.~D.; Dias,~L.~G. Development of coarse-grained force field to investigate sodium-ion transport mechanisms in cyanoborate-based ionic liquid. 
\end{mcitethebibliography} \end{document}
\section{Introduction} In \cite{KS01}, M.\:Kashiwara and P.\:Schapira introduced the notions of ind-sheaves and subanalytic sheaves to treat ``sheaves" of functions with tempered growth conditions. Ind-sheaves are defined as ind-objects of the category of sheaves of vector spaces with compact support. Subanalytic sheaves are defined as sheaves on subanalytic sites. Moreover, the authors proved that there exists a fully faithful functor from the category of subanalytic sheaves to the category of ind-sheaves, and that its essential image is equal to the category of ind-objects of $\mathbb{R}$-constructible sheaves with compact support. After a groundbreaking development in the theory of irregular meromorphic connections by K.\:S.\:Kedlaya \cite{Ked10, Ked11} and T.\:Mochizuki \cite{Mochi09, Mochi11}, A.\:D'Agnolo and M.\:Kashiwara introduced the notion of enhanced ind-sheaves, extending the notion of ind-sheaves, and established the Riemann--Hilbert correspondence for analytic irregular holonomic $\mathcal{D}$-modules in \cite{DK16}, as below (see \cite{Ito21} for the algebraic case). Let $X$ be a complex manifold. Then there exists a fully faithful functor ${\rm Sol}^{\rm E}_X$, called the enhanced solution functor (see \cite[Def.\:9.1.1]{DK16}, and also Definition \ref{def3.33}), from the full triangulated subcategory ${\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)$ of the derived category of $\mathcal{D}_X$-modules consisting of objects with holonomic cohomologies to the triangulated category ${\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\mathbb{C}_X)$ of $\mathbb{R}$-constructible enhanced ind-sheaves: \begin{align}\label{1} {\rm Sol}_X^{{\rm E}} : {\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)^{{\mbox{\scriptsize op}}}\hookrightarrow{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\mathbb{C}_X).
\end{align} Moreover, T.\:Mochizuki characterized its essential image by the curve test \cite[Thm.\:12.1]{Mochi16}. In \cite{Ito20}, the author defined $\mathbb{C}$-constructibility for enhanced ind-sheaves and proved that the $\mathbb{C}$-constructible enhanced ind-sheaves are nothing but the objects of this essential image. Namely, we obtain an equivalence of categories between the triangulated category ${\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)$ and the triangulated category ${\mathbf{E}}^{\mathrm{b}}_{\mathbb{C}-c}({\rm I}\mathbb{C}_X)$ of $\mathbb{C}$-constructible enhanced ind-sheaves: \[{\rm Sol}_X^{{\rm E}} \colon {\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)^{{\mbox{\scriptsize op}}}\overset{\sim}{\longrightarrow} {\mathbf{E}}^{\mathrm{b}}_{\mathbb{C}-c}({\rm I}\mathbb{C}_X).\] At the 16th Takagi Lectures\footnote{ The 16th Takagi Lectures took place at the Graduate School of Mathematical Sciences, The University of Tokyo, on November 28 and 29, 2015.}, M.\:Kashiwara explained a result similar to (\ref{1}), using ``enhanced subanalytic sheaves" instead of enhanced ind-sheaves, as below. We denote by ${\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{X\times\mathbb{R}_\infty}^{{\rm sub}})$ the derived category of subanalytic sheaves on a bordered space $X\times\mathbb{R}_\infty$; see \S \ref{subsec3.1} for the definition. Then there exists a fully faithful functor ${\rm Sol}_X^{{\mathsf{T}}}$ (see \cite[\S 5.4]{Kas16}, and also Definition \ref{def3.36}) from ${\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)$ to ${\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{X\times\mathbb{R}_\infty}^{{\rm sub}})$: \begin{align}\label{2} {\rm Sol}_X^{{\mathsf{T}}} : {\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)^{{\mbox{\scriptsize op}}}\hookrightarrow{\mathbf{D}}^{\mathrm{b}}(\mathbb{C}^{\rm sub}_{X\times \mathbb{R}_\infty}). \end{align} In this paper, we explain a relation between (\ref{1}) and (\ref{2}).
For this purpose, we will prove that there exists a fully faithful functor from the triangulated category of enhanced subanalytic sheaves to the one of enhanced ind-sheaves. Although this result may be known to experts, it does not appear in the literature to our knowledge. The main results of this paper are Theorems \ref{main1}, \ref{main2}, \ref{main3} and \ref{main4}. One can summarize these results in the following commutative diagram: \[\xymatrix@M=7pt@R=35pt@C=60pt{ {}&{}&{\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{X\times\mathbb{R}_\infty}^{\rm sub}) & {}\\ {\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)^{\mbox{\scriptsize op}}\ar@{^{(}->}[r]_-{{\rm Sol}_X^{{\rm E}, {\rm sub}}} \ar@{^{(}->}[rru]^-{{\rm Sol}_X^{{\mathsf{T}}, {\rm sub}}(\cdot)[1]}\ar@{^{(}->}[rd]_-{{\rm Sol}_X^{{\rm E}}} & {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_X^{\rm sub})\ar@{}[r]|-{\text{\large $\subset$}} \ar@<-1.0ex>@{->}[d]_-{I_X^{\rm E}}\ar@{}[d]|-\wr & {\mathbf{E}}^{\mathrm{b}}(\mathbb{C}_X^{\rm sub})\ar@{^{(}->}[u]_-{\mathbf{R}_X^{{\rm E}, {\rm sub}}} \ar@<-1.0ex>@{^{(}->}[rd]_-{I_X^{\rm E}} \ar@<-1.0ex>@{->}[d]_-{I_X^{\rm E}}\ar@{}[d]|-\wr\\ {}&{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\mathbb{C}_X)\ar@{}[r]|-{\text{\large $\subset$}} \ar@<-1.0ex>@{->}[u]_-{J_X^{\rm E}} &{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\mathbb{C}_X)\ar@{}[r]|-{\text{\large $\subset$}} \ar@<-1.0ex>@{->}[u]_-{J_X^{\rm E}} & {\mathbf{E}}^{\mathrm{b}}({\rm I}\mathbb{C}_X).\ar@<-1.0ex>@{->}[lu]_-{J_X^{\rm E}} }\] See \S\ref{subsec2.7} for the definition of ${\mathbf{E}}^{\mathrm{b}}({\rm I}\mathbb{C}_X)$, \S\ref{subsec3.1} for the definition of ${\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{X\times\mathbb{R}_\infty}^{\rm sub})$, \S\ref{subsec3.3} for the definitions of ${\mathbf{E}}^{\mathrm{b}}(\mathbb{C}_X^{\rm sub}), \mathbf{R}_X^{{\rm E}, {\rm sub}}$, \S\ref{subsec3.4} for the definitions of ${\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}(\mathbb{C}_X^{\rm sub}), I_X^{\rm E}, J_X^{\rm E}, {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\mathbb{C}_X)$, Definition \ref{def3.21} for ${\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_X^{\rm sub})$, Definition \ref{def3.33} for ${\rm Sol}_X^{\rm E}$, Definition \ref{def3.36} for ${\rm Sol}_X^{{\mathsf{T}}, {\rm sub}}$, and Definition \ref{def3.37} for ${\rm Sol}_X^{{\rm E}, {\rm sub}}$. \section*{Acknowledgement} I would like to thank Dr. Tauchi of Kyushu University for many helpful discussions and comments. This work was supported by Grant-in-Aid for Research Activity Start-up (No. 21K20335), Japan Society for the Promotion of Science. \newpage \section{Preliminary Notions and Results}\label{sec-2} In this section, we briefly recall some basic notions and results which will be used in this paper. \subsection{Subanalytic Subsets} The theory of semi-analytic and subanalytic sets originates in the work of S.\:{\L}ojasiewicz \cite{Loj59, Loj64a, Loj64b} and was elaborated by A.\:M.\:Gabrielov \cite{Gab68}, H.\:Hironaka \cite{Hiro73a, Hiro73b} and R.\:M.\:Hardt \cite{Har75, Har77} for subanalytic sets. See also \cite{BM88}. In this subsection, we briefly recall the definitions of semi-analytic and subanalytic subsets, together with some of their properties. Let $M$ be a real analytic manifold and denote by $\mathcal{A}_M$ the sheaf of rings of real analytic functions. For an open subset $U$ of $M$, we denote by $\mathscr{S}(\mathcal{A}_M(U))$ the family of subsets of $M$ of the form $$\bigcup_{i=1}^p\bigcap_{j=1}^qX_{ij},$$ where each $X_{ij}$ is either $\{x\in U\ | f_{ij}(x) = 0, \ f_{ij}\in\mathcal{A}_M(U)\} \text{ or } \{x\in U\ | f_{ij}(x) > 0,\ f_{ij}\in\mathcal{A}_M(U)\}.$ \begin{definition} A subset $A$ of $M$ is called semi-analytic if for any $x\in M$ there exists an open neighborhood $U$ of $x$ such that $A\cap U \in \mathscr{S}(\mathcal{A}_M(U))$. \end{definition} Note that for any open subset $U$ of $M$, $\mathscr{S}(\mathcal{A}_M(U))$ is stable under finite intersection, finite union and complement.
Hence, a finite union and a finite intersection of semi-analytic subsets are semi-analytic, and the complement of a semi-analytic subset is also semi-analytic. Furthermore, the closure and the interior of a semi-analytic subset are semi-analytic; see \cite[Cor.\:2.8]{BM88} for the details. Although the inverse image of a semi-analytic subset by a morphism of real analytic manifolds is semi-analytic, the operation of direct image by a proper morphism of real analytic manifolds does not in general preserve semi-analyticity. However, the class of subanalytic subsets is closed under these operations, as described below. Subanalytic subsets are ``locally proper projections of relatively compact semi-analytic sets". \begin{definition} A subset $S$ of $M$ is called subanalytic if for any $x\in M$ there exist an open neighborhood $U$ of $x$, a real analytic manifold $N$ and a relatively compact semi-analytic subset $A$ of $M\times N$ such that $S\cap U = {\rm pr}_1(A)$, where ${\rm pr}_1\colon M\times N\to M$ is the first projection. \end{definition} From the basic properties of semi-analytic subsets, a finite union and a finite intersection of subanalytic subsets are subanalytic, and the closure of a subanalytic subset is subanalytic. Furthermore, the complement (and thus the interior) of a subanalytic subset is also subanalytic; see \cite[Thm.\:3.10]{BM88} for details. Note also that the inverse image of a subanalytic subset by a morphism of real analytic manifolds is subanalytic. Moreover, the direct image of a subanalytic subset by a proper morphism of real analytic manifolds is subanalytic; see \cite[Prop.\:3.8]{Hiro73a} for details. \subsection{Sheaves on Sites}\label{subsec2.2} The theory of sheaves on topological spaces was created by J.\:Leray \cite{Ler50} and this notion was extended to sheaves on sites by A.\:Grothendieck \cite{SGA4}. A site is a category endowed with a ``Grothendieck topology" on it.
The Grothendieck topology was introduced by A.\:Grothendieck in order to have a cohomology theory on algebraic varieties. In this subsection, we shall briefly recall the definition of sheaves on sites and some properties, based on \cite[\S2]{KS01}. See \cite[\S\S16, 17, 18]{KS06} for more general settings. Let $\mathcal{U}$ be a universe. A set is called $\mathcal{U}$-small if it is isomorphic to a set belonging to $\mathcal{U}$. A small category\footnote{In \cite{KS06}, a category means a small category.} $\mathcal{C}$ is called a $\mathcal{U}$-category if for any objects $X, Y$ of $\mathcal{C}$ the set $\mathrm{Hom}_{\mathcal{C}}(X, Y)$ is $\mathcal{U}$-small. If moreover the family $\SO b(\mathcal{C})$ of objects (a set in a bigger universe) of $\mathcal{C}$ is $\mathcal{U}$-small, then $\mathcal{C}$ is called $\mathcal{U}$-small. \begin{definition}\label{def2.3} Let $\mathcal{C}$ be a $\mathcal{U}$-small category admitting finite products and fiber products. \begin{itemize} \item[(1)] For an object $U$ of $\mathcal{C}$, we denote by $\mathcal{C}_U$ the category of arrows $V\to U$. Namely, the category $\mathcal{C}_U$ is given by \begin{align*} \SO b(\mathcal{C}_U) &:= \{(V, i_V)\ |\ V\in\SO b(\mathcal{C}), i_V\in\mathrm{Hom}_{\mathcal{C}}(V, U)\},\\ \mathrm{Hom}_{\mathcal{C}_U}\big((V, i_V), (W, i_W)\big) &:= \{\varphi\in\mathrm{Hom}_{\mathcal{C}}(V, W)\ |\ i_W\circ\varphi = i_V\}. \end{align*} For simplicity, we sometimes write $V\to U$ instead of $(V, i_V)$. \item[(2)] For an object $(V, i_V)\in\SO b(\mathcal{C}_U)$ and a subset $S\subset \SO b(\mathcal{C}_U)$, let us set $$V\times_U S := \{(V\times_U W, {\rm pr}_V)\ |\ W\in S\}\subset \SO b(\mathcal{C}_V),$$ where ${\rm pr}_V\colon V\times_U W\to V$ is the projection.
\item[(3)] For $S_1, S_2\subset \SO b(\mathcal{C}_U)$, $S_1$ is said to be a refinement of $S_2$ if for any $(V_1, i_{V_1})\in S_1$ there exist $(V_2, i_{V_2})\in S_2$ and a morphism $\varphi\in\mathrm{Hom}_{\mathcal{C}}(V_1, V_2)$ such that $i_{V_2}\circ\varphi = i_{V_1}$. In such a situation, we write $S_1\preceq S_2$. \end{itemize} \end{definition} \begin{definition}\label{def-GT} Let $\mathcal{C}$ be a $\mathcal{U}$-small category admitting finite products and fiber products. A Grothendieck topology on $\mathcal{C}$ is the data associating to any $U\in \SO b(\mathcal{C})$ a family $\SC ov(U)$ of subsets of $\SO b(\mathcal{C}_U)$ satisfying the axioms \begin{itemize} \item[(GT1)] $\{{\rm id}_U\}\in\SC ov(U)$, \item[(GT2)] if $S_1\in\SC ov(U)$ is a refinement of $S_2\subset \SO b(\mathcal{C}_U)$, then $S_2\in\SC ov(U)$, \item[(GT3)] if $S\in\SC ov(U)$, then $V\times_U S\in \SC ov(V)$ for any $(V, i_V)\in\SO b(\mathcal{C}_U)$, \item[(GT4)] if $S_1, S_2\subset\SO b(\mathcal{C}_U)$, $S_1\in \SC ov(U)$ and $V\times_U S_2\in \SC ov(V)$ for any $(V, i_V)\in S_1$, then $S_2\in \SC ov(U)$. \end{itemize} A subset $S\in\SC ov(U)$ is called a covering of $U$. A site $X$ is a $\mathcal{U}$-small category $\mathcal{C}_X$\footnote{Remark that the category $\mathcal{C}_X$ here is different from the category $\mathcal{C}_X$ of Definition \ref{def2.3} (1).} admitting finite products and fiber products, endowed with a Grothendieck topology on $\mathcal{C}_X$. \end{definition} \begin{example}\label{ex2.5} Let $X$ be a topological space and denote by $\SO p_X$ the category of open subsets of $X$. Namely, the category $\SO p_X$ is given by \begin{align*} \SO b(\SO p_X) := \{U\in\mathcal{P}(X)\ |\ U\text{ is open }\},\hspace{17pt} \mathrm{Hom}_{\SO p_X}(U, V) := \begin{cases} \ \{{\rm pt}\}\quad&(U\subset V),\\ \ \emptyset\quad &(\text{otherwise}), \end{cases} \end{align*} where $\mathcal{P}(X)$ is the power set of $X$.
Note that for any $U\in\SO p_X$ we have $(\SO p_X)_U = \SO p_U$. We can endow $\SO p_X$ with the following Grothendieck topology: a subset $S\subset \SO b((\SO p_X)_U)$ is a covering of $U\in\SO b(\SO p_X)$ if $U = \bigcup_{V\in S}V$. We will keep the same symbol $X$ to denote this site. \end{example} Let $\Bbbk$ be a commutative unital ring and denote by $\mathrm{Mod}(\Bbbk)$ the category of $\Bbbk$-modules. \begin{definition} A presheaf of $\Bbbk$-modules on a site $X$ is nothing but a contravariant functor from $\mathcal{C}_X$ to $\mathrm{Mod}(\Bbbk)$. We denote by $s|_V$ the image of $s\in\mathcal{F}(U)$ under the restriction morphism $\mathcal{F}(U)\to\mathcal{F}(V)$ associated with a morphism $V\to U$ in $\mathcal{C}_X$. Let us denote by ${\rm Psh}(\Bbbk_X)$ the category of presheaves of $\Bbbk$-modules on a site $X$. \end{definition} Note that the category ${\rm Psh}(\Bbbk_X)$ is abelian because the category $\mathrm{Mod}(\Bbbk)$ is abelian. \begin{definition} Let $X$ be a site. A presheaf $\mathcal{F}$ of $\Bbbk$-modules on $X$ is called separated if for any $U\in\SO b(\mathcal{C}_X)$ and any covering $S\in\SC ov(U)$ of $U$, the natural morphism $$\mathcal{F}(U) \to \ker\left(\prod_{V\in S}\mathcal{F}(V)\rightrightarrows\prod_{V', V''\in S}\mathcal{F}(V'\times_U V'')\right)$$ is a monomorphism. Here, the two arrows are induced by the restriction morphisms $\mathcal{F}(V')\to \mathcal{F}(V'\times_UV'')$ and $\mathcal{F}(V'')\to \mathcal{F}(V'\times_UV'')$, and the kernel of the double arrow is the kernel of the difference of the two arrows. If moreover the natural morphism is an isomorphism, the presheaf $\mathcal{F}$ is called a sheaf of $\Bbbk$-modules on $X$. Let us denote by $\mathrm{Mod}(\Bbbk_X)$ the category of sheaves of $\Bbbk$-modules on $X$. \end{definition} Note that the category $\mathrm{Mod}(\Bbbk_X)$ is a full additive subcategory of ${\rm Psh}(\Bbbk_X)$. Furthermore, it is abelian, as we will see below. To explain this, let us recall the sheaf associated with a presheaf. Let $X$ be a site.
For a presheaf $\mathcal{F}$ of $\Bbbk$-modules on $X$ and $U\in \SO b(\mathcal{C}_X)$, we set $$\mathcal{F}^+(U) := \varinjlim_{S\in\SC ov(U) } \mathcal{F}(S),$$ where $\mathcal{F}(S) := \ker\left(\prod_{V\in S}\mathcal{F}(V)\rightrightarrows\prod_{V', V''\in S}\mathcal{F}(V'\times_U V'')\right)$. Remark that the relation $\preceq$ is a pre-order on $\SC ov(U)$ and hence $\SC ov(U)$ inherits a structure of a small category. Moreover, $\SC ov(U)^{\mbox{\scriptsize op}}$ is filtrant. See \cite[p.\,19]{KS01} for the details. Then we have the presheaf $$\mathcal{F}^+\colon \mathcal{C}_X^{\mbox{\scriptsize op}}\to\mathrm{Mod}(\Bbbk),\hspace{7pt}U\mapsto\mathcal{F}^+(U)$$ and a functor $$(\cdot)^+\colon{\rm Psh}(\Bbbk_X)\to {\rm Psh}(\Bbbk_X),\hspace{7pt}\mathcal{F}\mapsto \mathcal{F}^+.$$ \newpage \begin{theorem}[{\cite[Thm.\:2.1.7]{KS01}}, see also {\cite[Thm.\:17.4.7 (i), (iii)]{KS06}} for (3), (4)]\label{thm2.8} Let $X$ be a site. \begin{itemize} \item[\rm (1)] The functor $(\cdot)^+\colon{\rm Psh}(\Bbbk_X)\to {\rm Psh}(\Bbbk_X)$ is left exact. \item[\rm (2)] For any $\mathcal{F}\in{\rm Psh}(\Bbbk_X)$, $\mathcal{F}^+$ is a separated presheaf. \item[\rm (3)] For any separated presheaf $\mathcal{F}$, $\mathcal{F}^+$ is a sheaf. \item[\rm (4)] The functor $(\cdot)^{++}\colon{\rm Psh}(\Bbbk_X)\to\mathrm{Mod}(\Bbbk_X)$ is a left adjoint to the embedding functor $\mathrm{Mod}(\Bbbk_X)\to {\rm Psh}(\Bbbk_X)$. Namely, for any $\mathcal{F}\in{\rm Psh}(\Bbbk_X)$ and any $\mathcal{G}\in\mathrm{Mod}(\Bbbk_X)$, we have $$\mathrm{Hom}_{{\rm Psh}(\Bbbk_X)}(\mathcal{F}, \mathcal{G}) \simeq \mathrm{Hom}_{\mathrm{Mod}(\Bbbk_X)}(\mathcal{F}^{++}, \mathcal{G}).$$ \end{itemize} \end{theorem} For a presheaf $\mathcal{F}\in{\rm Psh}(\Bbbk_X)$, the sheaf $\mathcal{F}^{++}$ is called the sheaf associated with $\mathcal{F}$. Let us call the functor $(\cdot)^{++}$ the sheafification functor. 
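As a simple illustration of why sheafification is genuinely needed, consider the constant presheaf on the site of Example \ref{ex2.5}. The following computation is standard and not taken from \cite{KS01}; the last identification assumes, say, a locally connected space.

```latex
% Let X be a topological space with the topology of Example 2.5, let M be a
% nonzero k-module, and let F be the constant presheaf U -> M, with all
% restriction morphisms equal to id_M. The empty family S = \emptyset is a
% covering of U = \emptyset, and for it the products over S are empty, so
% F(S) = 0. Hence the natural morphism
\[
  \mathcal{F}(\emptyset) = M \longrightarrow \mathcal{F}(S) = 0
\]
% is not a monomorphism: the constant presheaf is not even separated.
% Its sheafification F^{++} is the constant sheaf M_X of Theorem 2.8's
% surrounding discussion; for X locally connected, M_X(U) is the module of
% locally constant M-valued functions on U, so that
\[
  M_X(U) \simeq M^{\pi_0(U)}
\]
% whenever U has finitely many connected components.
```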
Hence, we have: \begin{theorem}[{\cite[Thm.\:2.1.10]{KS01}}, see also {\cite[Thms.\:17.4.9, 17.4.7 (iv), 18.1.6 (v)]{KS06}}]\label{thm2.9} Let $X$ be a site. \begin{itemize} \item[\rm (1)] The category $\mathrm{Mod}(\Bbbk_X)$ admits projective limits. More precisely, for any projective system $\{\mathcal{F}_i\}_{i\in I}$ of sheaves, its projective limit in ${\rm Psh}(\Bbbk_X)$ is a sheaf and is a projective limit in $\mathrm{Mod}(\Bbbk_X)$. \item[\rm (2)] The category $\mathrm{Mod}(\Bbbk_X)$ admits inductive limits. More precisely, for any inductive system $\{\mathcal{F}_i\}_{i\in I}$ of sheaves, its inductive limit in $\mathrm{Mod}(\Bbbk_X)$ is the sheaf associated with its inductive limit in ${\rm Psh}(\Bbbk_X)$. \item[\rm (3)] The category $\mathrm{Mod}(\Bbbk_X)$ is abelian. \item[\rm (4)] The embedding functor $\mathrm{Mod}(\Bbbk_X)\to {\rm Psh}(\Bbbk_X)$ is fully faithful and left exact, and the functor $(\cdot)^{++}\colon{\rm Psh}(\Bbbk_X)\to\mathrm{Mod}(\Bbbk_X)$ is exact. \item[\rm (5)] Filtrant inductive limits in $\mathrm{Mod}(\Bbbk_X)$ are exact. \item[\rm (6)] The $\mathcal{U}$-category $\mathrm{Mod}(\Bbbk_X)$ admits enough injectives. \end{itemize} \end{theorem} For $M\in\mathrm{Mod}(\Bbbk)$, we denote by $M_X$ the sheaf associated with the presheaf $U\mapsto M$ and call $M_X$ the constant sheaf with stalk $M$. There are many operations for sheaves on sites, similar to those for classical sheaves. \begin{definition} Let $X$ and $Y$ be sites. A morphism $f\colon X\to Y$ of sites from $X$ to $Y$ is a functor ${}^t\!f\colon \mathcal{C}_Y\to \mathcal{C}_X$ which commutes with fiber products and satisfies ${}^t\!f(S)\in\SC ov({}^t\!f(V))$ for any $V\in \SO b(\mathcal{C}_Y)$ and any $S\in\SC ov(V)$.
\end{definition} For a morphism $f\colon X\to Y$ of sites associated with a functor ${}^t\!f\colon \mathcal{C}_Y\to\mathcal{C}_X$ and $\mathcal{F}\in{\rm Psh}(\Bbbk_X), \mathcal{G}\in{\rm Psh}(\Bbbk_Y)$, we set \begin{align*} (f_\ast\mathcal{F})(V) &:= \mathcal{F}({}^t\!f(V))\hspace{50pt} \text{ for any } V\in\mathcal{C}_Y,\\ \left(f^{-1}_{\rm pre}\mathcal{G}\right)(U) &:= \varinjlim_{U\to {}^t\!f(V)}\mathcal{G}(V)\hspace{50pt} \text{ for any } U\in\mathcal{C}_X. \end{align*} Then we have functors: \begin{align*} f_\ast&\colon {\rm Psh}(\Bbbk_X)\to {\rm Psh}(\Bbbk_Y),\\ f^{-1}_{\rm pre} &\colon {\rm Psh}(\Bbbk_Y)\to{\rm Psh}(\Bbbk_X). \end{align*} Note that if $\mathcal{F}\in\mathrm{Mod}(\Bbbk_X)$ then the presheaf $f_\ast\mathcal{F}$ is a sheaf, see \cite[Prop.\:17.5.1]{KS06}. \begin{definition}\label{def2.11} Let $f\colon X\to Y$ be a morphism of sites. The direct image functor $f_\ast$ and the inverse image functor $f^{-1}$ are defined by \begin{align*} f_\ast&\colon\mathrm{Mod}(\Bbbk_X)\to\mathrm{Mod}(\Bbbk_Y),\hspace{7pt} \mathcal{F}\mapsto f_\ast\mathcal{F},\\ f^{-1}&\colon\mathrm{Mod}(\Bbbk_Y)\to\mathrm{Mod}(\Bbbk_X),\hspace{7pt} \mathcal{F}\mapsto \left(f^{-1}_{\rm pre}\mathcal{F}\right)^{++}. \end{align*} \end{definition} They have the following properties: \begin{proposition}[{\cite[Thm.\:2.2.1]{KS01}}, see also {\cite[Thm.\:17.5.2]{KS06}}]\label{prop2.12} Let $f\colon X\to Y$ be a morphism of sites. \begin{itemize} \item[\rm (1)] The inverse image functor $f^{-1}\colon\mathrm{Mod}(\Bbbk_Y)\to\mathrm{Mod}(\Bbbk_X)$ is left adjoint to the direct image functor $f_\ast\colon\mathrm{Mod}(\Bbbk_X)\to\mathrm{Mod}(\Bbbk_Y)$.
Namely, for any $\mathcal{F}\in\mathrm{Mod}(\Bbbk_Y)$ and any $\mathcal{G}\in\mathrm{Mod}(\Bbbk_X)$ we have $$\mathrm{Hom}_{\mathrm{Mod}(\Bbbk_X)}(f^{-1}\mathcal{F}, \mathcal{G})\simeq \mathrm{Hom}_{\mathrm{Mod}(\Bbbk_Y)}(\mathcal{F}, f_\ast\mathcal{G}).$$ \item[\rm (2)] The direct image functor $f_\ast$ is left exact and commutes with small projective limits. \item[\rm (3)] The inverse image functor $f^{-1}$ is exact and commutes with small inductive limits. \end{itemize} \end{proposition} Note that for a site $X$ and $U\in\SO b(\mathcal{C}_X)$, we can endow $(\mathcal{C}_X)_U$ with the following Grothendieck topology: for $V\in \SO b((\mathcal{C}_X)_U)$, a subset of $\SO b((\mathcal{C}_X)_V)$ is a covering of $V$ if it is a covering in $\mathcal{C}_X$. We denote by $U$ its site and by $i_U\colon U\to X$ the natural morphism of sites from $U$ to $X$ associated with the functor ${}^ti_U\colon \mathcal{C}_X\to(\mathcal{C}_X)_U,\hspace{5pt}V\mapsto (U\times V, {\rm pr}_1)$, where ${\rm pr}_1\colon U\times V\to U$ is the first projection. On the other hand, we have the morphism $j_U\colon X\to U$ of sites from $X$ to $U$ associated with the functor ${}^tj_U\colon(\mathcal{C}_X)_U\to \mathcal{C}_X,\ V\mapsto V$. Hence we get functors: \[\xymatrix@M=7pt@C=45pt{ \mathrm{Mod}(\Bbbk_U)\ar@<0.7ex>@{->}[r]^-{j_U^{-1}} & \mathrm{Mod}(\Bbbk_X) \ar@<0.7ex>@{->}[r]^-{i_U^{-1}} \ar@<0.7ex>@{->}[l]^-{j_{U\ast}} & \mathrm{Mod}(\Bbbk_U) \ar@<0.7ex>@{->}[l]^-{i_{U\ast}}. }\] \begin{definition} Let $X$ be a site and $U\in\SO b(\mathcal{C}_X)$. \begin{itemize} \item[(1)] The exact functor $i_U^{-1}\colon \mathrm{Mod}(\Bbbk_X)\to\mathrm{Mod}(\Bbbk_U)$ is called the restriction functor to $U$. For simplicity, we sometimes write $\mathcal{F}|_U$ instead of $i_U^{-1}\mathcal{F}$ for $\mathcal{F}\in\mathrm{Mod}(\Bbbk_X)$. \item[(2)] The exact functor $j_U^{-1}\colon \mathrm{Mod}(\Bbbk_U)\to\mathrm{Mod}(\Bbbk_X)$ is called the extension functor from $U$. Let us set $i_{U!} := j_U^{-1}$.
\item[(3)] We set $\Gamma_U\mathcal{F} := i_{U\ast}i_U^{-1}\mathcal{F}$ and $\mathcal{F}_U := i_{U!}i_U^{-1}\mathcal{F}$ for $\mathcal{F}\in\mathrm{Mod}(\Bbbk_X)$. \end{itemize} \end{definition} Clearly, the functor $\Gamma_U$ is left exact and the functor $(\cdot)_U$ is exact, and there exist a canonical morphism ${\rm id}\to \Gamma_U$ of functors and an isomorphism $\Gamma(X;\ \cdot\ )\circ\Gamma_U \simeq \Gamma(U;\ \cdot\ )$. Note that $j_{U\ast} = i_U^{-1}$ and hence the functor $i_{U!}$ is a left adjoint to the functor $i_U^{-1}$. Namely, for any $\mathcal{F}\in\mathrm{Mod}(\Bbbk_U)$ and any $\mathcal{G}\in\mathrm{Mod}(\Bbbk_X)$ we have: $$\mathrm{Hom}_{\mathrm{Mod}(\Bbbk_X)}(i_{U!}\mathcal{F}, \mathcal{G})\simeq \mathrm{Hom}_{\mathrm{Mod}(\Bbbk_U)}(\mathcal{F}, i_U^{-1}\mathcal{G}).$$ See \cite[Prop.\:2.3.4]{KS01} for the details. Moreover we have: \begin{proposition}[{\cite[Prop.\:2.3.6]{KS01}}] Let $X$ be a site and $U\in\SO b(\mathcal{C}_X)$. We assume that for any $V\in\SO b(\mathcal{C}_X)$, $\mathrm{Hom}_{\mathcal{C}_X}(V, U)$ has at most one element. Then we have \begin{itemize} \item[\rm (1)] $i_U^{-1}\circ i_{U\ast}\overset{\sim}{\longrightarrow} {\rm id},\hspace{5pt} {\rm id} \overset{\sim}{\longrightarrow} i_U^{-1}\circ i_{U!}$, \item[\rm (2)] the functors $i_{U\ast}$ and $i_{U!}$ are fully faithful, \item[\rm (3)] there exists a canonical morphism $(\cdot)_U\to{\rm id}$ of functors. Moreover, for any $\mathcal{F}\in\mathrm{Mod}(\Bbbk_X)$ the canonical morphism $\mathcal{F}_U\to \mathcal{F}$ is a monomorphism.
\end{itemize} \end{proposition} Assume that the sites $X$ and $Y$ have terminal objects, which we denote by the same symbols $X$ and $Y$, and that a morphism $f\colon X\to Y$ of sites associated with the functor ${}^t\!f\colon \mathcal{C}_Y\to\mathcal{C}_X$ satisfies ${}^t\!f(Y)=X$. Then for any $V\in\mathcal{C}_Y$ we have the following commutative diagrams: \[\xymatrix@M=5pt@R=20pt@C=40pt{ X\ar@{->}[r]^-{f} & Y\\ U\ar@{->}[u]^-{i_U}\ar@{->}[r]_-{f|_U} & V \ar@{->}[u]_-{i_V} ,}\hspace{17pt} \xymatrix@M=5pt@R=20pt@C=40pt{ X\ar@{->}[r]^-{f}\ar@{->}[d]_-{j_U} & Y\ar@{->}[d]^-{j_V}\\ U\ar@{->}[r]_-{f|_U} & V,}\] where we set $U:={}^t\!f(V)$ and the morphism $f|_U\colon U\to V$ of sites is induced by $f$. In this case, we have $$i_{U!}\circ(f|_U)^{-1}\simeq f^{-1}\circ i_{V!}\ ,\hspace{20pt} (f|_U)_\ast\circ i_U^{-1}\simeq i_V^{-1}\circ f_\ast\ .$$ By using the first one and the fact that $f^{-1}\Bbbk_Y\simeq \Bbbk_X$, we have $f^{-1}(\Bbbk_{YV}) \simeq \Bbbk_{XU}$. For $\mathcal{F}, \mathcal{G}\in{\rm Psh}(\Bbbk_X)$, we set \begin{align*} \big({\mathcal{H}}om_{\Bbbk_X}(\mathcal{F}, \mathcal{G})\big)(U) &:= \mathrm{Hom}_{{\rm Psh}(\Bbbk_U)}(\mathcal{F}|_U, \mathcal{G}|_U)\hspace{19pt} \text{ for any } U\in\mathcal{C}_X,\\ \left(\mathcal{F}\underset{\Bbbk_X}{\overset{\rm pre}{\otimes}}\mathcal{G}\right)(U) &:= \mathcal{F}(U)\underset{\Bbbk}{\otimes}\mathcal{G}(U)\hspace{39pt} \text{ for any } U\in\mathcal{C}_X.
\end{align*} Then we have bifunctors: \begin{align*} {\mathcal{H}}om_{\Bbbk_X}(\cdot, \cdot) &\colon {\rm Psh}(\Bbbk_X)^{\mbox{\scriptsize op}}\times{\rm Psh}(\Bbbk_X)\to{\rm Psh}(\Bbbk_X),\hspace{7pt} (\mathcal{F}, \mathcal{G})\mapsto{\mathcal{H}}om_{\Bbbk_X}(\mathcal{F}, \mathcal{G}),\\ \mathrm{Hom}_{{\rm Psh}(\Bbbk_X)}(\cdot, \cdot) &\colon {\rm Psh}(\Bbbk_X)^{\mbox{\scriptsize op}}\times{\rm Psh}(\Bbbk_X)\to\mathrm{Mod}(\Bbbk),\hspace{7pt} (\mathcal{F}, \mathcal{G})\mapsto\mathrm{Hom}_{{\rm Psh}(\Bbbk_X)}(\mathcal{F}, \mathcal{G}),\\ (\cdot)\overset{\rm pre}{\underset{\Bbbk_X}{\otimes}}(\cdot) &\colon {\rm Psh}(\Bbbk_X)\times{\rm Psh}(\Bbbk_X)\to{\rm Psh}(\Bbbk_X),\hspace{7pt} (\mathcal{F}, \mathcal{G})\mapsto\mathcal{F} \overset{\rm pre}{\underset{\Bbbk_X}{\otimes}}\mathcal{G}. \end{align*} Note that if $\mathcal{F}, \mathcal{G}\in\mathrm{Mod}(\Bbbk_X)$ then the presheaf ${\mathcal{H}}om_{\Bbbk_X}(\mathcal{F}, \mathcal{G})$ is a sheaf, see \cite[Prop.\:17.7.1 (i)]{KS06}. \begin{definition} The internal hom functor $ {\mathcal{H}}om_{\Bbbk_X}$, the hom functor $\mathrm{Hom}_{\mathrm{Mod}(\Bbbk_X)}$ and the tensor product functor $\underset{\Bbbk_X}{\otimes}$ are defined by \begin{align*} {\mathcal{H}}om_{\Bbbk_X}(\cdot, \cdot) &\colon \mathrm{Mod}(\Bbbk_X)^{\mbox{\scriptsize op}}\times\mathrm{Mod}(\Bbbk_X)\to\mathrm{Mod}(\Bbbk_X),\hspace{7pt} (\mathcal{F}, \mathcal{G})\mapsto{\mathcal{H}}om_{\Bbbk_X}(\mathcal{F}, \mathcal{G}),\\ \mathrm{Hom}_{\mathrm{Mod}(\Bbbk_X)}(\cdot, \cdot) &\colon \mathrm{Mod}(\Bbbk_X)^{\mbox{\scriptsize op}}\times\mathrm{Mod}(\Bbbk_X)\to\mathrm{Mod}(\Bbbk),\hspace{7pt} (\mathcal{F}, \mathcal{G})\mapsto\mathrm{Hom}_{\mathrm{Mod}(\Bbbk_X)}(\mathcal{F}, \mathcal{G}),\\ (\cdot)\underset{\Bbbk_X}{\otimes}(\cdot) &\colon \mathrm{Mod}(\Bbbk_X)\times\mathrm{Mod}(\Bbbk_X)\to\mathrm{Mod}(\Bbbk_X),\hspace{7pt} (\mathcal{F}, \mathcal{G})\mapsto\left(\mathcal{F} \overset{\rm pre}{\underset{\Bbbk_X}{\otimes}}\mathcal{G}\right)^{++}.
\end{align*} We sometimes write ${\mathcal{H}}om,\ \otimes$ instead of ${\mathcal{H}}om_{\Bbbk_X},\ \underset{\Bbbk_X}{\otimes}$, respectively. \end{definition} Remark that the internal hom functor is left exact in each variable and the tensor product functor is right exact in each variable. They have the following properties. \begin{proposition}[{\cite[Props.\:2.4.2, 2.4.3]{KS01}}, see also {\cite[Prop.\:17.7.3, Thm.\:18.2.3, Lem.\:18.3.1]{KS06}}] Let $f\colon X\to Y$ be a morphism of sites and $\mathcal{F}, \mathcal{F}_1, \mathcal{F}_2, \mathcal{H}\in\mathrm{Mod}(\Bbbk_X),$ $\mathcal{G}, \mathcal{G}_1, \mathcal{G}_2\in\mathrm{Mod}(\Bbbk_Y)$. Then we have \begin{itemize} \item[\rm (1)] for any $U\in\SO b(\mathcal{C}_X)$, we have $i_U^{-1}{\mathcal{H}}om(\mathcal{F}, \mathcal{H}) \simeq {\mathcal{H}}om(i_U^{-1}\mathcal{F}, i_U^{-1}\mathcal{H})$, \item[\rm (2)] ${\mathcal{H}}om_{\Bbbk_X}(\Bbbk_X, \mathcal{F}) \simeq \mathcal{F}$, \item[\rm (3)] $\Bbbk_X\otimes \mathcal{F} \simeq \mathcal{F}$, \item[\rm (4)] ${\mathcal{H}}om_{\Bbbk_X}(\mathcal{F}_1\otimes \mathcal{F}_2, \mathcal{H})\simeq {\mathcal{H}}om_{\Bbbk_X}(\mathcal{F}_1, {\mathcal{H}}om_{\Bbbk_X}(\mathcal{F}_2, \mathcal{H}))$, \item[\rm (5)] ${\mathcal{H}}om_{\Bbbk_Y}(\mathcal{G}, f_\ast\mathcal{F})\overset{\sim}{\longrightarrow} f_\ast{\mathcal{H}}om_{\Bbbk_X}(f^{-1}\mathcal{G}, \mathcal{F})$, \item[\rm (6)] $f^{-1}(\mathcal{G}_1\underset{\Bbbk_Y}{\otimes}\mathcal{G}_2)\overset{\sim}{\longrightarrow} f^{-1}\mathcal{G}_1\underset{\Bbbk_X}{\otimes} f^{-1}\mathcal{G}_2$. \end{itemize} \end{proposition} Let us recall the notion of (sheaves of) $\mathcal{R}$-modules. \begin{definition} Let $X$ be a site and $\Bbbk$ a commutative unital ring.
A sheaf of $\Bbbk$-algebras is a sheaf $\mathcal{R}$ of $\Bbbk$-modules on a site $X$ such that for any $U\in\SO b(\mathcal{C}_X)$, $\mathcal{R}(U)$ is a $\Bbbk$-algebra and for any morphism $V\to U$ in $\mathcal{C}_X$, the restriction morphism $\mathcal{R}(U)\to\mathcal{R}(V)$ is a morphism of $\Bbbk$-algebras. For sheaves $\mathcal{R}, \mathcal{S}$ of $\Bbbk$-algebras, a morphism of sheaves of $\Bbbk$-algebras from $\mathcal{R}$ to $\mathcal{S}$ is a morphism $f\colon \mathcal{R}\to\mathcal{S}$ of sheaves such that for any $U\in\mathcal{C}_X$ the morphism $f(U)\colon\mathcal{R}(U)\to\mathcal{S}(U)$ is a morphism of $\Bbbk$-algebras. \end{definition} \begin{example} The constant sheaf $\Bbbk_X$ on a site $X$ is a sheaf of $\Bbbk$-algebras. \end{example} A sheaf of $\Bbbk$-algebras is called a $\Bbbk_X$-algebra. A sheaf of $\mathbb{Z}$-algebras is simply called a sheaf of rings. Moreover, for a $\Bbbk_X$-algebra $\mathcal{R}$, we denote by $\mathcal{R}^{\mbox{\scriptsize op}}$ the opposite $\Bbbk_X$-algebra, which is defined by $\mathcal{R}^{\mbox{\scriptsize op}}(U) := \mathcal{R}(U)^{\mbox{\scriptsize op}}$ for any $U\in\mathcal{C}_X$. Here, $\mathcal{R}(U)^{\mbox{\scriptsize op}}$ is the opposite ring of $\mathcal{R}(U)$. \newpage \begin{definition} Let $X$ be a site and $\mathcal{R}$ a $\Bbbk_X$-algebra. A presheaf of $\mathcal{R}$-modules is a presheaf $\mathcal{F}\in{\rm Psh}(\Bbbk_X)$ such that for any $U\in\mathcal{C}_X$, $\mathcal{F}(U)$ is a left $\mathcal{R}(U)$-module and for any morphism $V\to U$ in $\mathcal{C}_X$, the restriction morphism $\mathcal{F}(U)\to\mathcal{F}(V)$ commutes with the action of $\mathcal{R}$, that is, $(r\cdot s)|_V = r|_V\cdot s|_V$ for any $s\in\mathcal{F}(U)$ and $r\in\mathcal{R}(U)$.
For presheaves $\mathcal{M}, \mathcal{N}$ of $\mathcal{R}$-modules, a morphism of presheaves of $\mathcal{R}$-modules from $\mathcal{M}$ to $\mathcal{N}$ is a morphism $\varphi\colon \mathcal{M}\to\mathcal{N}$ of presheaves such that for any $U\in\mathcal{C}_X$ the morphism $\varphi(U)\colon \mathcal{M}(U)\to\mathcal{N}(U)$ is a morphism of $\mathcal{R}(U)$-modules. Let us denote by ${\rm Psh}(\mathcal{R})$ the category of presheaves of $\mathcal{R}$-modules. \end{definition} Clearly, the category ${\rm Psh}(\mathcal{R})$ is abelian. \begin{definition} Let $X$ be a site and $\mathcal{R}$ a $\Bbbk_X$-algebra. A sheaf of $\mathcal{R}$-modules is a presheaf of $\mathcal{R}$-modules which is a sheaf of $\Bbbk$-modules. For simplicity, a sheaf of $\mathcal{R}$-modules is called an $\mathcal{R}$-module. An $\mathcal{R}^{\mbox{\scriptsize op}}$-module is called a right $\mathcal{R}$-module. A morphism of $\mathcal{R}$-modules is a morphism of presheaves of $\mathcal{R}$-modules. Let us denote by $\mathrm{Mod}(\mathcal{R})$ the category of $\mathcal{R}$-modules. \end{definition} Note that the category $\mathrm{Mod}(\mathcal{R})$ has the same properties as those in Theorem \ref{thm2.9}. For example, it is abelian, see also \cite[Thm.\:18.1.6 (1)]{KS06}. Moreover, the category $\mathrm{Mod}(\mathcal{R})$ is a Grothendieck category, see \cite[Def.\:8.3.24, Thm.\:18.1.6 (v)]{KS06} for details. In particular, it admits enough injectives, see also \cite[Thm.\:9.6.2]{KS06}. Note also that the forgetful functor $$for\colon\mathrm{Mod}(\mathcal{R})\to \mathrm{Mod}(\Bbbk_X)$$ is faithful and conservative but not fully faithful in general. The sheafification functor $(\cdot)^{++}\colon {\rm Psh}(\Bbbk_X)\to\mathrm{Mod}(\Bbbk_X)$ induces a functor $$(\cdot)^{++}\colon{\rm Psh}(\mathcal{R})\to\mathrm{Mod}(\mathcal{R}),$$ which will be denoted by the same symbol. Note that it is also left adjoint to the embedding functor $\mathrm{Mod}(\mathcal{R})\to {\rm Psh}(\mathcal{R})$.
See \cite[Lem.\:18.1.4]{KS06} for details. Moreover, for any $U\in\mathcal{C}_X$, we have functors: \begin{align*} (\cdot)_U&\colon\mathrm{Mod}(\mathcal{R})\to\mathrm{Mod}(\mathcal{R}),\\ \Gamma_U&\colon\mathrm{Mod}(\mathcal{R})\to\mathrm{Mod}(\mathcal{R}),\\ \Gamma(U;\ \cdot\ )&\colon\mathrm{Mod}(\mathcal{R})\to\mathrm{Mod}(\mathcal{R}(U)). \end{align*} The functor $(\cdot)_U$ is exact and the functors $\Gamma_U$ and $\Gamma(U;\ \cdot\ )$ are left exact, see \cite[\S18.1]{KS06} for details. For $\mathcal{M}, \mathcal{N}\in{\rm Psh}(\mathcal{R})$, we set \begin{align*} \big({\mathcal{H}}om_{\mathcal{R}}(\mathcal{M}, \mathcal{N})\big)(U) &:= \mathrm{Hom}_{{\rm Psh}(\mathcal{R}|_U)}(\mathcal{M}|_U, \mathcal{N}|_U)\hspace{19pt} \text{ for any } U\in\mathcal{C}_X,\\ \left(\mathcal{M}\underset{\mathcal{R}}{\overset{\rm pre}{\otimes}}\mathcal{N}\right)(U) &:= \mathcal{M}(U)\underset{\mathcal{R}(U)}{\otimes}\mathcal{N}(U)\hspace{39pt} \text{ for any } U\in\mathcal{C}_X. \end{align*} Then we have bifunctors: \begin{align*} {\mathcal{H}}om_{\mathcal{R}}(\cdot, \cdot) &\colon {\rm Psh}(\mathcal{R})^{\mbox{\scriptsize op}}\times{\rm Psh}(\mathcal{R})\to{\rm Psh}(\Bbbk_X),\hspace{7pt} (\mathcal{M}, \mathcal{N})\mapsto{\mathcal{H}}om_{\mathcal{R}}(\mathcal{M}, \mathcal{N}),\\ \mathrm{Hom}_{{\rm Psh}(\mathcal{R})}(\cdot, \cdot) &\colon {\rm Psh}(\mathcal{R})^{\mbox{\scriptsize op}}\times{\rm Psh}(\mathcal{R})\to\mathrm{Mod}(\Bbbk),\hspace{7pt} (\mathcal{M}, \mathcal{N})\mapsto\mathrm{Hom}_{{\rm Psh}(\mathcal{R})}(\mathcal{M}, \mathcal{N}),\\ (\cdot)\overset{\rm pre}{\underset{\mathcal{R}}{\otimes}}(\cdot) &\colon {\rm Psh}(\mathcal{R}^{\mbox{\scriptsize op}})\times{\rm Psh}(\mathcal{R})\to{\rm Psh}(\Bbbk_X),\hspace{7pt} (\mathcal{M}, \mathcal{N})\mapsto\mathcal{M} \overset{\rm pre}{\underset{\mathcal{R}}{\otimes}}\mathcal{N}.
\end{align*} Note that if $\mathcal{M}, \mathcal{N}\in\mathrm{Mod}(\mathcal{R})$ then the presheaf ${\mathcal{H}}om_{\mathcal{R}}(\mathcal{M}, \mathcal{N})$ is a sheaf, see \cite[Lem.\:18.2.1 (i)]{KS06}. \begin{definition} The internal hom functor ${\mathcal{H}}om_{\mathcal{R}}$, the hom functor $ \mathrm{Hom}_{\mathrm{Mod}(\mathcal{R})}$ and the tensor product functor $\underset{\mathcal{R}}{\otimes}$ are defined by \begin{align*} {\mathcal{H}}om_{\mathcal{R}}(\cdot, \cdot) &\colon \mathrm{Mod}(\mathcal{R})^{\mbox{\scriptsize op}}\times\mathrm{Mod}(\mathcal{R})\to\mathrm{Mod}(\Bbbk_X),\hspace{7pt} (\mathcal{M}, \mathcal{N})\mapsto{\mathcal{H}}om_{\mathcal{R}}(\mathcal{M}, \mathcal{N}),\\ \mathrm{Hom}_{\mathrm{Mod}(\mathcal{R})}(\cdot, \cdot) &\colon \mathrm{Mod}(\mathcal{R})^{\mbox{\scriptsize op}}\times\mathrm{Mod}(\mathcal{R})\to\mathrm{Mod}(\Bbbk),\hspace{7pt} (\mathcal{M}, \mathcal{N})\mapsto\mathrm{Hom}_{\mathrm{Mod}(\mathcal{R})}(\mathcal{M}, \mathcal{N}),\\ (\cdot)\underset{\mathcal{R}}{\otimes}(\cdot) &\colon \mathrm{Mod}(\mathcal{R}^{\mbox{\scriptsize op}})\times\mathrm{Mod}(\mathcal{R})\to\mathrm{Mod}(\Bbbk_X),\hspace{7pt} (\mathcal{M}, \mathcal{N})\mapsto\left(\mathcal{M} \overset{\rm pre}{\underset{\mathcal{R}}{\otimes}}\mathcal{N}\right)^{++}. \end{align*} \end{definition} Note that the internal hom functor is left exact in each variable and the tensor product functor is right exact in each variable. Note also that if $\mathcal{R}$ is a sheaf of commutative rings then ${\mathcal{H}}om_{\mathcal{R}}(\mathcal{M}, \mathcal{N}),\ \mathcal{M}\underset{\mathcal{R}}{\otimes}\mathcal{N}\in\mathrm{Mod}(\mathcal{R})$ for $\mathcal{M}, \mathcal{N}\in\mathrm{Mod}(\mathcal{R})$. Moreover we have: \begin{proposition}[{\cite[Rem.\:18.2.6]{KS06}}] Let $\mathcal{R}_{i}\ (i=1,2,3,4)$ be $\Bbbk_X$-algebras.
Then we have left exact functors in each variable \begin{align*} {\mathcal{H}}om_{\mathcal{R}_1}(\cdot, \cdot) &\colon \mathrm{Mod}(\mathcal{R}_1\otimes\mathcal{R}_2^{\mbox{\scriptsize op}})^{\mbox{\scriptsize op}}\times\mathrm{Mod}(\mathcal{R}_1\otimes\mathcal{R}_3^{\mbox{\scriptsize op}}) \to\mathrm{Mod}(\mathcal{R}_2\otimes\mathcal{R}_3^{\mbox{\scriptsize op}}),\\ \mathrm{Hom}_{\mathrm{Mod}(\mathcal{R}_1)}(\cdot, \cdot) &\colon \mathrm{Mod}(\mathcal{R}_1\otimes\mathcal{R}_2^{\mbox{\scriptsize op}})^{\mbox{\scriptsize op}}\times\mathrm{Mod}(\mathcal{R}_1\otimes\mathcal{R}_3^{\mbox{\scriptsize op}}) \to\mathrm{Mod}((\mathcal{R}_2\otimes\mathcal{R}_3^{\mbox{\scriptsize op}})(X)) \end{align*} and a right exact functor in each variable $$(\cdot)\underset{\mathcal{R}_2}{\otimes}(\cdot) \colon \mathrm{Mod}(\mathcal{R}_1\otimes\mathcal{R}_2^{\mbox{\scriptsize op}})\times\mathrm{Mod}(\mathcal{R}_2\otimes\mathcal{R}_3^{\mbox{\scriptsize op}})\to\mathrm{Mod}(\mathcal{R}_1\otimes\mathcal{R}_3^{\mbox{\scriptsize op}}).$$ Moreover, for any ${}_i\mathcal{M}_j\in\mathrm{Mod}(\mathcal{R}_i\otimes\mathcal{R}_j^{\mbox{\scriptsize op}})\ (i, j = 1,2,3,4)$, there exist natural isomorphisms in $\mathrm{Mod}(\mathcal{R}_1\otimes \mathcal{R}_4^{\mbox{\scriptsize op}})$ \begin{align*} \left({}_1\mathcal{M}_2\underset{\mathcal{R}_2}{\otimes}{}_2\mathcal{M}_3\right)\underset{\mathcal{R}_3}{\otimes}{}_3\mathcal{M}_4 &\ \simeq\ {}_1\mathcal{M}_2\underset{\mathcal{R}_2}{\otimes}\left({}_2\mathcal{M}_3\underset{\mathcal{R}_3}{\otimes}{}_3\mathcal{M}_4\right),\\ {\mathcal{H}}om_{\mathcal{R}_2}\left({}_2\mathcal{M}_1,\ {\mathcal{H}}om_{\mathcal{R}_3}({}_3\mathcal{M}_2,\ {}_3\mathcal{M}_4)\right) &\ \simeq\ {\mathcal{H}}om_{\mathcal{R}_3}\left({}_3\mathcal{M}_2\underset{\mathcal{R}_2}{\otimes}{}_2\mathcal{M}_1,\ {}_3\mathcal{M}_4\right). \end{align*} \end{proposition} Let $f\colon X\to Y$ be a morphism of sites. 
Then $f_\ast\mathcal{R}$ is a $\Bbbk_Y$-algebra for a $\Bbbk_X$-algebra $\mathcal{R}$, see \cite[Lem.\:18.3.1 (i)]{KS06}. However, $f^{-1}\mathcal{S}$ is not necessarily a ring for a $\Bbbk_Y$-algebra $\mathcal{S}$. If the morphism $f\colon X\to Y$ of sites associated with the functor ${}^t\!f\colon \mathcal{C}_Y\to\mathcal{C}_X$ is left exact, that is, the functor ${}^t\!f$ is left exact, then the sheaf $f^{-1}\mathcal{S}$ is a $\Bbbk_X$-algebra. Moreover, in this case, the direct image and the inverse image functors $f_\ast, f^{-1}$ in the sense of sheaves of $\Bbbk$-modules induce the functors: \begin{align*} f_\ast&\colon\mathrm{Mod}(f^{-1}\mathcal{S})\to\mathrm{Mod}(\mathcal{S}),\\ f^{-1}&\colon\mathrm{Mod}(\mathcal{S})\to\mathrm{Mod}(f^{-1}\mathcal{S}). \end{align*} Note also that the direct image functor $f_\ast$ is left exact and the inverse image functor $f^{-1}$ is exact, see \cite[Lem.\:18.3.1 (ii)]{KS06} for the details of these results. Moreover, we have: \begin{proposition}[{\cite[Lems.\:18.3.1 (ii) (c), 18.3.2]{KS06}}] Let $f\colon X\to Y$ be a left exact morphism of sites and $\mathcal{S}$ a $\Bbbk_Y$-algebra. Then we have \begin{itemize} \item[\rm (1)] for any $\mathcal{M}\in\mathrm{Mod}(\mathcal{S}^{\mbox{\scriptsize op}})$ and any $\mathcal{N}\in\mathrm{Mod}(\mathcal{S})$, $$f^{-1}\left(\mathcal{M}\underset{\mathcal{S}}{\otimes}\mathcal{N}\right)\simeq f^{-1}\mathcal{M}\underset{f^{-1}\mathcal{S}}{\otimes}f^{-1}\mathcal{N},$$ \item[\rm (2)] for any $\mathcal{M}\in\mathrm{Mod}(\mathcal{S})$ and any $\mathcal{N}\in\mathrm{Mod}(f^{-1}\mathcal{S})$, \begin{align*} f_\ast{\mathcal{H}}om_{f^{-1}\mathcal{S}}(f^{-1}\mathcal{M}, \mathcal{N}) &\simeq {\mathcal{H}}om_{\mathcal{S}}(\mathcal{M}, f_\ast\mathcal{N}),\\ \mathrm{Hom}_{\mathrm{Mod}(f^{-1}\mathcal{S})}(f^{-1}\mathcal{M}, \mathcal{N}) &\simeq \mathrm{Hom}_{\mathrm{Mod}(\mathcal{S})}(\mathcal{M}, f_\ast\mathcal{N}).
\end{align*} In particular, the functor $f^{-1}$ is left adjoint to the functor $f_\ast$. \end{itemize} \end{proposition} At the end of this subsection, let us briefly recall the derived category $\mathbf{D}^\ast(\mathrm{Mod}(\mathcal{R}))$ ($\ast = {\rm ub}, +, -, {\rm b}$) of sheaves on a site. Let $\mathcal{R}$ be a $\Bbbk_X$-algebra which is flat as a $\Bbbk_X$-module. We shall often write for short $\mathbf{D}(\mathcal{R})$ (resp.\,$\mathbf{D}^\ast(\mathcal{R})$) instead of $\mathbf{D}^{\rm ub}(\mathrm{Mod}(\mathcal{R}))$ (resp.\,$\mathbf{D}^\ast(\mathrm{Mod}(\mathcal{R}))$). For a left exact morphism $f\colon Z\to X$ of sites and $U\in\mathcal{C}_X$, we have (derived) functors \begin{align*} \mathbf{R} f_\ast&\colon\mathbf{D}(f^{-1}\mathcal{R})\longrightarrow \mathbf{D}(\mathcal{R}),\\ f^{-1}&\colon\mathbf{D}(\mathcal{R})\longrightarrow\mathbf{D}(f^{-1}\mathcal{R}),\\ (\cdot)\underset{\mathcal{R}}{\overset{\mathbf{L}}{\otimes}}(\cdot)&\colon \mathbf{D}(\mathcal{R}^{\mbox{\scriptsize op}})\times \mathbf{D}(\mathcal{R})\longrightarrow\mathbf{D}(\Bbbk_X),\\ {\mathbf{R}}{\mathcal{H}}om_{\mathcal{R}}(\cdot, \cdot)&\colon \mathbf{D}(\mathcal{R})^{\mbox{\scriptsize op}}\times \mathbf{D}(\mathcal{R})\longrightarrow\mathbf{D}(\Bbbk_X),\\ {\mathbf{R}}\mathrm{Hom}_{\mathcal{R}}(\cdot, \cdot)&\colon \mathbf{D}(\mathcal{R})^{\mbox{\scriptsize op}}\times \mathbf{D}(\mathcal{R})\longrightarrow\mathbf{D}(\Bbbk),\\ (\cdot)_U&\colon\mathbf{D}(\mathcal{R})\longrightarrow\mathbf{D}(\mathcal{R}),\\ \mathbf{R}\Gamma_U&\colon\mathbf{D}(\mathcal{R})\longrightarrow\mathbf{D}(\mathcal{R}),\\ \mathbf{R}\Gamma(U;\ \cdot\ )&\colon\mathbf{D}(\mathcal{R})\longrightarrow\mathbf{D}(\mathcal{R}(U)). \end{align*} Note that these functors have the same properties as in the case of classical sheaves; we shall skip the explanation and refer to \cite[\S\S18.4, 18.5, 18.6]{KS06} for the details.
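For orientation, we record how two of the adjunctions recalled above pass to the derived level. The following isomorphisms are standard consequences of the results cited above and are stated here only as a sketch, in the notation of the preceding paragraph ($f\colon Z\to X$ a left exact morphism of sites, $\mathcal{R}$ flat over $\Bbbk_X$); they are not proved in this paper.

```latex
% Derived adjunction (f^{-1}, Rf_*): for F in D(f^{-1}R) and G in D(R),
\[
  \mathbf{R}\mathrm{Hom}_{f^{-1}\mathcal{R}}\bigl(f^{-1}\mathcal{G},\ \mathcal{F}\bigr)
  \;\simeq\;
  \mathbf{R}\mathrm{Hom}_{\mathcal{R}}\bigl(\mathcal{G},\ \mathbf{R}f_{\ast}\mathcal{F}\bigr).
\]
% Derived version of \Gamma(X; .) \circ \Gamma_U \simeq \Gamma(U; .):
% for F in D(R) and U in C_X,
\[
  \mathbf{R}\Gamma(U;\ \mathcal{F})
  \;\simeq\;
  \mathbf{R}\Gamma\bigl(X;\ \mathbf{R}\Gamma_U\mathcal{F}\bigr).
\]
% The second isomorphism uses that \Gamma_U = i_{U*} i_U^{-1} sends injectives
% to \Gamma(X; .)-acyclic objects, since i_U^{-1} and i_{U*} both preserve
% injectives (each admits an exact left adjoint).
```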
\newpage \subsection{Subanalytic Sheaves} The notion of subanalytic sheaves was introduced by M.\:Kashiwara and P.\:Schapira in \cite{KS01} to treat functions with growth conditions in the formalism of sheaves. Subanalytic sheaves are defined as sheaves on a subanalytic site; examples include the subanalytic sheaf of tempered distributions, that of tempered $\mathcal{C}^\infty$-functions and that of Whitney $\mathcal{C}^\infty$-functions. In this subsection, let us briefly recall the notion of subanalytic sheaves. References are made to \cite[\S 6]{KS01}, \cite{Pre08}, see also Subsection \ref{subsec2.2}. Throughout this subsection, we assume that $\Bbbk$ is a field. Let $M$ be a real analytic manifold and denote by $\SO p^{\rm sub}_M$ the category of subanalytic open subsets of $M$. We can endow $\SO p^{\rm sub}_M$ with the following Grothendieck topology: a subset $S\subset \SO b((\SO p_M^{\rm sub})_U)$ is a covering of $U\in\SO p_M^{{\rm sub}}$ if for any compact subset $K$ of $M$ there exists a finite subset $S_0$ of $S$ such that $U\cap K = \left(\cup_{V\in S_0}V\right)\cap K$. We denote by $M^{{\rm sub}}$ this site and call it the subanalytic site. A subanalytic sheaf (resp.\,presheaf) of $\Bbbk$-modules on $M$ is a sheaf (resp.\,presheaf) of $\Bbbk$-modules on the subanalytic site $M^{{\rm sub}}$. We shall write $\mathrm{Mod}(\Bbbk_{M}^{\rm sub})$ (resp.\,${\rm Psh}(\Bbbk_{M}^{\rm sub})$) instead of $\mathrm{Mod}(\Bbbk_{M^{\rm sub}})$ (resp.\,${\rm Psh}(\Bbbk_{M^{\rm sub}})$). For simplicity, subanalytic sheaves (resp.\,presheaves) of $\Bbbk$-modules will simply be called subanalytic sheaves (resp.\,presheaves). Note that the forgetful functor induces an equivalence of categories (see e.g.
\cite[Rem.\:1.1.2]{Pre08}) $$\mathrm{Mod}(\Bbbk_{M^{{\rm sub}}})\overset{\sim}{\longrightarrow} \mathrm{Mod}(\Bbbk_{M^{{\rm sub},\,c}}),$$ where the site $M^{{\rm sub},\,c}$ is the category $\SO p_M^{{\rm sub},\,c}$ of relatively compact subanalytic open subsets endowed with the following Grothendieck topology: a subset $S\subset \SO b((\SO p_M^{\rm sub})_U)$ is a covering of $U\in\SO p_M^{{\rm sub},\,c}$ if for any compact subset $K$ of $M$ there exists a finite subset $S_0$ of $S$ such that $U\cap K = \left(\cup_{V\in S_0}V\right)\cap K$. Hence we have: \begin{proposition}[{\cite[Prop.\:1.1.4]{Pre08}}] Let $M$ be a real analytic manifold and $\mathcal{F}$ a presheaf on $M^{{\rm sub},\,c}$. If $\mathcal{F}$ satisfies the following conditions, then it is a sheaf on $M^{{\rm sub},\,c}$, that is, an object of $\mathrm{Mod}(\Bbbk_{M^{{\rm sub},\,c}})\ \left(\xleftarrow{\ \sim\ }\mathrm{Mod}(\Bbbk_M^{\rm sub})\right)$: \begin{itemize} \item[\rm(1)] $\mathcal{F}(\emptyset)=0$, \item[\rm(2)] for any relatively compact subanalytic open subsets $U, V$, the sequence \[\xymatrix@M=5pt@R=0pt@C=20pt{ 0\ar@{->}[r] & \mathcal{F}(U\cup V)\ar@{->}[r]&\mathcal{F}(U)\oplus\mathcal{F}(V)\ar@{->}[r]&\mathcal{F}(U\cap V)\\ {}& s \ar@{|->}[r] & (s|_U, s|_V) & {}\\ {}& {} & (t, u) \ar@{|->}[r] & t|_{U\cap V}- u|_{U\cap V} }\] is exact. \end{itemize} \end{proposition} Clearly, we have the natural morphism $\rho_M\colon M\to M^{\rm sub}$ of sites. Hence, there exist functors \begin{align*} \rho_{M\ast}&\colon\mathrm{Mod}(\Bbbk_M)\to\mathrm{Mod}(\Bbbk_M^{\rm sub}),\\ \rho_{M}^{-1}&\colon\mathrm{Mod}(\Bbbk_M^{\rm sub})\to\mathrm{Mod}(\Bbbk_M). \end{align*} As we have already seen in Subsection \ref{subsec2.2}, the pair $(\rho_{M}^{-1}, \rho_{M\ast})$ is an adjoint pair, the functor $\rho_{M\ast}$ is left exact and the functor $\rho_{M}^{-1}$ is exact. Note that there exists a canonical isomorphism $\rho_{M}^{-1}\circ\rho_{M\ast}\overset{\sim}{\longrightarrow} {\rm id}$ of functors.
Hence the functor $\rho_{M\ast}$ is fully faithful, see \cite[Prop.\:1.1.8]{Pre08}. Note that the restriction of $\rho_{M\ast}$ to the category of $\mathbb{R}$-constructible sheaves is exact, see \cite[Prop.\:1.1.9]{Pre08}. Let us denote by $\rho_{M\ast}^{\mathbb{R}-c}$ this exact functor. Moreover, there exists a left adjoint functor $\rho_{M!}$ of the functor $\rho_{M}^{-1}$, which is defined as follows: for any $\mathcal{F}\in\mathrm{Mod}(\Bbbk_M)$, $\rho_{M!}\mathcal{F}$ is the sheaf associated to the presheaf $U\mapsto \Gamma\left(\var{U}; \mathcal{F}|_{\var{U}}\right)$, where $\var{U}$ is the closure of $U$ in $M$. See \cite[Prop.\:1.1.13 (i)]{Pre08} for the details. Note that the functor $\rho_{M!}$ is exact and fully faithful, and there exist a canonical isomorphism ${\rm id}\overset{\sim}{\longrightarrow}\rho_{M}^{-1}\circ\rho_{M!}$ of functors and an isomorphism $${\mathcal{H}}om^{\rm sub}(\rho_{M!}\mathcal{F}, \mathcal{G})\simeq{\mathcal{H}}om(\mathcal{F}, \rho_{M}^{-1}\mathcal{G})$$ in $\mathrm{Mod}(\Bbbk_M)$ for any $\mathcal{F}\in\mathrm{Mod}(\Bbbk_M)$ and any $\mathcal{G}\in\mathrm{Mod}(\Bbbk_M^{\rm sub})$, see \cite[Props.\:1.1.14, 1.1.5]{Pre08}. Here, the functor ${\mathcal{H}}om^{\rm sub}$ is defined below. Let $f\colon M\to N$ be a morphism of real analytic manifolds. Then it induces a morphism $f\colon M^{\rm sub}\to N^{\rm sub}$ of subanalytic sites, which we denote by the same symbol.
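Note that the presheaf $U\mapsto \Gamma\left(\var{U}; \mathcal{F}|_{\var{U}}\right)$ is in general not a sheaf on the subanalytic site, which is why one passes to the associated sheaf in the definition of $\rho_{M!}$. The following elementary computation (a sketch, not taken from the references) illustrates this for $M = \mathbb{R}$ and $\mathcal{F} = \Bbbk_M$: take the subanalytic open subset $U = (0,1)\cup(1,2)$. Since $\var{U} = [0,2]$ is connected, the presheaf assigns
$$\Gamma\left(\var{U};\ \Bbbk_M|_{\var{U}}\right) = \Gamma\left([0,2];\ \Bbbk_M|_{[0,2]}\right)\simeq \Bbbk,$$
whereas the sheaf condition applied to the finite covering $\{(0,1), (1,2)\}$ of $U$, whose members are disjoint, would force the value
$$\Gamma\left([0,1];\ \Bbbk_M|_{[0,1]}\right)\oplus\Gamma\left([1,2];\ \Bbbk_M|_{[1,2]}\right)\simeq \Bbbk^{\oplus 2}.$$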
As we have already seen in Subsection \ref{subsec2.2}, there exist functors \begin{align*} (\cdot)^{++}&\colon{\rm Psh}(\Bbbk_M^{\rm sub})\to\mathrm{Mod}(\Bbbk_M^{\rm sub}),\\ f_\ast&\colon\mathrm{Mod}(\Bbbk_M^{\rm sub})\to \mathrm{Mod}(\Bbbk_N^{\rm sub}),\\ f^{-1}&\colon\mathrm{Mod}(\Bbbk_N^{\rm sub})\to\mathrm{Mod}(\Bbbk_M^{\rm sub}),\\ (\cdot)\otimes(\cdot)&\colon \mathrm{Mod}(\Bbbk_M^{\rm sub})\times \mathrm{Mod}(\Bbbk_M^{\rm sub})\to\mathrm{Mod}(\Bbbk_M^{\rm sub}),\\ \mathcal{I}hom^{\rm sub}(\cdot, \cdot)&\colon \mathrm{Mod}(\Bbbk_M^{\rm sub})^{\mbox{\scriptsize op}}\times \mathrm{Mod}(\Bbbk_M^{\rm sub})\to\mathrm{Mod}(\Bbbk_M^{\rm sub}),\\ \mathrm{Hom}_{\mathrm{Mod}(\Bbbk_M^{\rm sub})}(\cdot, \cdot)&\colon \mathrm{Mod}(\Bbbk_M^{\rm sub})^{\mbox{\scriptsize op}}\times \mathrm{Mod}(\Bbbk_M^{\rm sub})\to\mathrm{Mod}(\Bbbk),\\ (\cdot)_U&\colon\mathrm{Mod}(\Bbbk_M^{\rm sub})\to\mathrm{Mod}(\Bbbk_M^{\rm sub}),\\ \Gamma_U&\colon\mathrm{Mod}(\Bbbk_M^{\rm sub})\to\mathrm{Mod}(\Bbbk_M^{\rm sub}),\\ \Gamma(U;\ \cdot\ )&\colon\mathrm{Mod}(\Bbbk_M^{\rm sub})\to\mathrm{Mod}(\Bbbk). \end{align*} In this paper, we shall write $\mathcal{I}hom^{\rm sub}$ for the internal hom functor on $\mathrm{Mod}(\Bbbk_M^{\rm sub})$, that is, ${\mathcal{H}}om_{\Bbbk_{M^{\rm sub}}}(\cdot, \cdot)$ in the notation of Subsection \ref{subsec2.2}. Let us set ${\mathcal{H}}om^{\rm sub} := \rho_{M}^{-1}\circ\mathcal{I}hom^{\rm sub}$. Moreover, we have a functor, called the proper direct image functor: $$f_{!!}\colon\mathrm{Mod}(\Bbbk_M^{\rm sub})\to \mathrm{Mod}(\Bbbk_N^{\rm sub}), \hspace{7pt} f_{!!}\mathcal{F} := \varinjlim_Uf_\ast\mathcal{F}_U\simeq \varinjlim_Kf_\ast\Gamma_K\mathcal{F},$$ where $U$ ranges through the family of relatively compact subanalytic open subsets and $K$ ranges through the family of subanalytic compact subsets. Note that the functor $f_{!!}$ is left exact, and if $f\colon M\to N$ is proper, then $f_\ast\simeq f_{!!}$.
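For orientation, let us recall (without proof) the standard description of the internal hom on sections, valid on any site and in particular here: for a subanalytic open subset $U$ of $M$ and subanalytic sheaves $\mathcal{F}, \mathcal{G}$, one has
$$\Gamma\left(U;\ \mathcal{I}hom^{\rm sub}(\mathcal{F}, \mathcal{G})\right)\simeq \mathrm{Hom}_{\mathrm{Mod}(\Bbbk_M^{\rm sub})}(\mathcal{F}_U, \mathcal{G}),$$
so that, taking $U = M$, we recover $\Gamma\left(M;\ \mathcal{I}hom^{\rm sub}(\mathcal{F}, \mathcal{G})\right)\simeq \mathrm{Hom}_{\mathrm{Mod}(\Bbbk_M^{\rm sub})}(\mathcal{F}, \mathcal{G})$.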
Moreover, these functors enjoy many properties similar to those of classical sheaves; see \cite{Pre08} for the details. We shall not repeat them here. We shall write $\mathbf{D}^\ast(\Bbbk_M^{\rm sub})$ ($\ast = {\rm ub}, +, -, {\rm b}$) instead of $\mathbf{D}^\ast(\Bbbk_{M^{\rm sub}})$. Then there exist (derived) functors of them; we have already seen several of them in Subsection \ref{subsec2.2}. Moreover, the functors below are well-defined, see \cite[Cor.\:2.3.3]{Pre08} for the functors $\mathbf{R} f_\ast, \mathbf{R} f_{!!}$: \begin{align*} \mathbf{R}\rho_{M\ast}&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M)\hookrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub}),\\ \rho_{M\ast}^{\mathbb{R}-c}&\colon{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_M)\hookrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub}),\\ \rho_{M}^{-1}&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M),\\ \rho_{M!}&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M)\hookrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub}),\\ (\cdot)^{++}&\colon{\mathbf{D}}^{\mathrm{b}}({\rm Psh}(\Bbbk_M^{\rm sub}))\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub}),\\ \mathbf{R} f_\ast&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\longrightarrow {\mathbf{D}}^{\mathrm{b}}(\Bbbk_N^{\rm sub}),\\ f^{-1}&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_N^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub}),\\ \mathbf{R} f_{!!}&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\longrightarrow {\mathbf{D}}^{\mathrm{b}}(\Bbbk_N^{\rm sub}),\\ (\cdot)\otimes(\cdot)&\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\times {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub}),\\ {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\cdot, \cdot)&\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})^{\mbox{\scriptsize op}}\times {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\longrightarrow\mathbf{D}^+(\Bbbk_M^{\rm sub}),\\
{\mathbf{R}}{\mathcal{H}}om^{\rm sub}(\cdot, \cdot)&\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})^{\mbox{\scriptsize op}}\times {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\longrightarrow\mathbf{D}^+(\Bbbk_M),\\ {\mathbf{R}}\mathrm{Hom}(\cdot, \cdot)&\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})^{\mbox{\scriptsize op}}\times {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\longrightarrow\mathbf{D}^+(\Bbbk),\\ (\cdot)_U&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub}),\\ \mathbf{R}\Gamma_U&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub}),\\ \mathbf{R}\Gamma(U;\ \cdot\ )&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk), \end{align*} where $U$ is a subanalytic open subset of $M$. Note that the functor $\mathbf{R} f_{!!}$ admits a right adjoint functor (see \cite[\S2.4]{Pre08} for the details): $$f^!\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_N^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub}).$$ Moreover, these functors enjoy many properties similar to those of classical sheaves. In this subsection, we just recall the following ones; see \cite{Pre08} for the details. \begin{theorem}\label{thm2.25} Let $f\colon M\to N$ be a morphism of real analytic manifolds and $\mathcal{F}, \mathcal{F}_1, \mathcal{F}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub}), \mathcal{G}, \mathcal{G}_1, \mathcal{G}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_N^{\rm sub}), \mathcal{K}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M), \mathcal{L}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_N), \mathcal{J}\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_M)$.
\begin{itemize} \setlength{\itemsep}{-1pt} \item[\rm (1)] $\mathbf{R} \rho_{M\ast}{\mathcal{H}}om(\rho_{M}^{-1}\mathcal{F}, \mathcal{K}) \simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}, \mathbf{R} \rho_{M\ast}\mathcal{K}),\\[5pt] {\mathbf{R}}{\mathcal{H}}om^{\rm sub}(\rho_{M!}\mathcal{K}, \mathcal{F}) \simeq {\mathbf{R}}{\mathcal{H}}om(\mathcal{K}, \rho_{M}^{-1}\mathcal{F}),$ \item[\rm (2)] ${\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathbf{R} f_{!!}\mathcal{F}, \mathcal{G}) \simeq \mathbf{R} f_\ast{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}, f^!\mathcal{G}),\\[5pt] \mathbf{R} f_\ast{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(f^{-1}\mathcal{G}, \mathcal{F}) \simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{G}, \mathbf{R} f_\ast\mathcal{F}),$ \item[\rm (3)] ${\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}_1\otimes\mathcal{F}_2, \mathcal{F}) \simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\left(\mathcal{F}_1, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}_2, \mathcal{F})\right),$ \item[\rm(4)] $f^{-1}(\mathcal{F}_1\otimes\mathcal{F}_2)\simeq f^{-1}\mathcal{F}_1\otimes f^{-1}\mathcal{F}_2,\\[5pt] \mathbf{R} f_{!!}\left(\mathcal{F}\otimes f^{-1}\mathcal{G}\right)\simeq \mathbf{R} f_{!!}\mathcal{F}\otimes \mathcal{G},$ \item[\rm (5)] $f^!{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{G}_1, \mathcal{G}_2)\simeq{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(f^{-1}\mathcal{G}_1, f^!\mathcal{G}_2),$ \item[\rm (6)] for a cartesian diagram \[\xymatrix@M=5pt@R=20pt@C=40pt{ M'^{\,{\rm sub}}\ar@{->}[r]^-{f'}\ar@{->}[d]_-{g'} & N'^{\,{\rm sub}}\ar@{->}[d]^-{g}\\ M^{\rm sub}\ar@{->}[r]_-{f} & N^{\rm sub}}\] we have $g^{-1}\mathbf{R} f_{!!}\mathcal{F}\simeq \mathbf{R} f'_{!!}g'^{-1}\mathcal{F},\hspace{3pt} g^{!}\mathbf{R} f_{\ast}\mathcal{F}\simeq \mathbf{R} f'_{\ast}g'^{!}\mathcal{F}$, \item[\rm (7)] $f^!(\mathcal{G}\otimes\rho_{N!}\mathcal{L})\simeq f^!\mathcal{G}\otimes \rho_{M!}f^{-1}\mathcal{L}$, \item[\rm (8)] ${\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{J}, \mathcal{F})\otimes\rho_{M!}\mathcal{K}\simeq
{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{J}, \mathcal{F}\otimes \rho_{M!}\mathcal{K})$. \end{itemize} \end{theorem} Let us summarize the commutativity of the various functors in the table below. Here, $``\circ"$ means that the functors commute, and $``\times"$ that they do not. \begin{table}[h] \begin{equation*} \begin{tabular}{l||c|c|c|c|c} {} & $\otimes$ & $f^{-1}$ & $\mathbf{R} f_\ast$ & $f^!$ & $\mathbf{R} f_{!!}$\\ \hline \hline $\overset{}{\underset{}{\mathbf{R}\rho_{\ast}}}$ & $\times$ & $\times$ & $\circ$ & $\circ$ & $\times$ \\ \hline $\overset{}{\underset{}{\rho_{\ast}^{\mathbb{R}-c}}}$ & $\circ$ & $\circ$ & $\circ$ & $\circ$ & $\times$ \\ \hline $\overset{}{\underset{}{\rho^{-1}}}$ & $\circ$ & $\circ$ & $\circ$ & $\times$ & $\circ$ \\ \hline $\overset{}{\underset{}{\rho_{!}}}$ & $\circ$ & $\circ$ & $\times$ & $\times$ & $\times$ \\ \hline \end{tabular} \end{equation*} \end{table} Recall that we assume that $\Bbbk$ is a field in this subsection. See Subsection \ref{subsec2.2} for the details of $\Bbbk_{M^{\rm sub}}$-algebras and modules over them. See also \cite[\S\S 3.1, 3.2]{Pre08}. We just remark that the following facts hold by the definitions. \begin{remark}\label{rem2.26} \begin{itemize} \item[(1)] Let $\mathcal{R}$ be a subanalytic sheaf.
Then, the following two conditions are equivalent: \begin{itemize} \item[(i)] $\mathcal{R}$ is a $\Bbbk_{M^{\rm sub}}$-algebra, \item[(ii)] there exist morphisms $\mu_{\mathcal{R}}\colon\mathcal{R}\otimes\mathcal{R}\to\mathcal{R},\ \varepsilon_\mathcal{R}\colon \Bbbk_{M^{\rm sub}}\to\mathcal{R}$ of subanalytic sheaves such that the following diagrams are commutative: \[\hspace{-27pt} \xymatrix@M=5pt@R=20pt@C=40pt{ \mathcal{R}\ar@{->}[r]^-\sim\ar@{->}[d]_-{\rm id_{\mathcal{R}}} & \Bbbk_{M^{\rm sub}}\otimes\mathcal{R}\ar@{->}[d]^-{\varepsilon_{\mathcal{R}}\otimes\mathcal{R}}\\ \mathcal{R} & \mathcal{R}\otimes\mathcal{R}\ar@{->}[l]^-{\mu_{\mathcal{R}}}, }\hspace{11pt} \xymatrix@M=5pt@R=20pt@C=40pt{ \mathcal{R}\ar@{->}[r]^-\sim\ar@{->}[d]_-{\rm id_{\mathcal{R}}} & \mathcal{R}\otimes\Bbbk_{M^{\rm sub}}\ar@{->}[d]^-{\mathcal{R}\otimes\varepsilon_{\mathcal{R}}}\\ \mathcal{R} & \mathcal{R}\otimes\mathcal{R}\ar@{->}[l]^-{\mu_{\mathcal{R}}} },\hspace{11pt} \xymatrix@M=5pt@R=20pt@C=40pt{ \mathcal{R}\otimes\mathcal{R}\otimes\mathcal{R}\ar@{->}[r]^-{\mu_\mathcal{R}\otimes\mathcal{R}}\ar@{->}[d]_-{\mathcal{R}\otimes\mu_\mathcal{R}} & \mathcal{R}\otimes\mathcal{R}\ar@{->}[d]^-{\mu_{\mathcal{R}}}\\ \mathcal{R}\otimes\mathcal{R} \ar@{->}[r]_-{\mu_{\mathcal{R}}} & \mathcal{R}. }\] \end{itemize} \item[(2)] Let $\mathcal{R}$ be a $\Bbbk_{M^{\rm sub}}$-algebra and $\mathcal{M}$ a subanalytic sheaf.
Then the following two conditions are equivalent: \begin{itemize} \item[(i)] $\mathcal{M}$ is an $\mathcal{R}$-module, \item[(ii)] there exists a morphism $\mu_{\mathcal{M}}\colon\mathcal{R}\otimes\mathcal{M}\to\mathcal{M}$ of subanalytic sheaves such that the following diagrams are commutative: \[\hspace{-27pt} \xymatrix@M=5pt@R=20pt@C=40pt{ \mathcal{M}\ar@{->}[r]^-\sim\ar@{->}[d]_-{\rm id_{\mathcal{M}}} & \Bbbk_{M^{\rm sub}}\otimes\mathcal{M}\ar@{->}[d]^-{\varepsilon_{\mathcal{R}}\otimes\mathcal{M}}\\ \mathcal{M} & \mathcal{R}\otimes\mathcal{M}\ar@{->}[l]^-{\mu_{\mathcal{M}}}, }\hspace{11pt} \xymatrix@M=5pt@R=20pt@C=40pt{ \mathcal{R}\otimes\mathcal{R}\otimes\mathcal{M}\ar@{->}[r]^-{\mu_\mathcal{R}\otimes\mathcal{M}}\ar@{->}[d]_-{\mathcal{R}\otimes\mu_\mathcal{M}} & \mathcal{R}\otimes\mathcal{M}\ar@{->}[d]^-{\mu_{\mathcal{M}}}\\ \mathcal{R}\otimes\mathcal{M} \ar@{->}[r]_-{\mu_{\mathcal{M}}} & \mathcal{M}. }\] \end{itemize} \item[(3)] Let $\mathcal{A}$ be a (classical) sheaf of $\Bbbk$-algebras on $M$. Then the subanalytic sheaf $\rho_{M!}\mathcal{A}$ is a $\Bbbk_{M^{\rm sub}}$-algebra and the following functors are well-defined: \begin{align*} \rho_{M!}\colon\mathrm{Mod}(\mathcal{A})\to \mathrm{Mod}(\rho_{M!}\mathcal{A}),\\ \rho_{M}^{-1}\colon\mathrm{Mod}(\rho_{M!}\mathcal{A})\to\mathrm{Mod}(\mathcal{A}). \end{align*} \end{itemize} \end{remark} \subsection{Ind-Sheaves} The theory of (classical) sheaves is not well suited to the study of various objects in Analysis which are not defined by local properties, such as holomorphic functions with tempered growth. The notion of ind-sheaves was introduced by M.\:Kashiwara and P.\:Schapira in \cite{KS01} to treat objects such as functions with growth conditions in the formalism of sheaves. In this subsection, we shall briefly recall the notion of ind-sheaves. References are made to \cite{KS01, KS06}.
Let $\Bbbk$ be a field and $M$ a good topological space, that is, a topological space which is locally compact, Hausdorff, countable at infinity and has finite soft dimension. We denote by $\mathrm{Mod}^c(\Bbbk_M)$ the category of sheaves of $\Bbbk$-vector spaces on $M$ with compact support. An ind-sheaf of $\Bbbk$-vector spaces on $M$ is an ind-object of $\mathrm{Mod}^c(\Bbbk_M)$, that is, an inductive limit $$\displaystyle``\varinjlim_{i\in I}"\mathcal{F}_i := \varinjlim_{i\in I}\mathrm{Hom}_{\mathrm{Mod}^c(\Bbbk_M)}(\ \cdot\ ,\ \mathcal{F}_i)$$ of a small filtrant inductive system $\{\mathcal{F}_i\}_{i\in I}$ in $\mathrm{Mod}^c(\Bbbk_M)$. For simplicity, we call an ind-sheaf of $\Bbbk$-vector spaces simply an ind-sheaf. Let us denote by ${\rm I}\Bbbk_M$ the category of ind-sheaves of $\Bbbk$-vector spaces on $M$. Note that it is abelian. Note also that there exists a natural exact embedding $\iota_M : \mathrm{Mod}(\Bbbk_M)\to{\rm I}\Bbbk_M$ of categories. We sometimes omit it. It has an exact left adjoint $\alpha_M$, which in turn has an exact fully faithful left adjoint functor $\beta_M$: \[\xymatrix@C=60pt{\mathrm{Mod}(\Bbbk_M) \ar@<1.0ex>[r]^-{\iota_{M}} \ar@<-1.0ex>[r]_- {\beta_{M}} & {\rm I}\Bbbk_M \ar@<0.0ex>[l]|-{\alpha_{M}}}.\] The category ${\rm I}\Bbbk_M$ does not have enough injectives. Nevertheless, we can construct the derived category ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)$ for ind-sheaves and the Grothendieck six operations among them. We denote by $\otimes$ and ${\mathbf{R}}{\mathcal{I}}hom$ the operations of tensor product and internal hom respectively. For a continuous map $f\colon M\to N$, we denote by $f^{-1}, \mathbf{R} f_\ast, f^!$ and $\mathbf{R} f_{!!}$ the operations of inverse image, direct image, proper inverse image and proper direct image, respectively. We set also ${\mathbf{R}}{\mathcal{H}}om := \alpha_M\circ{\mathbf{R}}{\mathcal{I}}hom$.
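For orientation, we recall (without proof; see the cited references) how these functors act in the simplest cases: for $\mathcal{F}\in\mathrm{Mod}(\Bbbk_M)$ and a small filtrant inductive system $\{\mathcal{F}_i\}_{i\in I}$ in $\mathrm{Mod}^c(\Bbbk_M)$, one has
$$\iota_M\mathcal{F}\simeq ``\varinjlim_{U}"\,\mathcal{F}_U, \qquad \alpha_M\Big(``\varinjlim_{i\in I}"\,\mathcal{F}_i\Big)\simeq \varinjlim_{i\in I}\mathcal{F}_i, \qquad \beta_M\Bbbk_M\simeq ``\varinjlim_{U}"\,\Bbbk_{\var{U}},$$
where $U$ ranges over the relatively compact open subsets of $M$. In particular, the canonical morphism $\beta_M\Bbbk_M\to\iota_M\Bbbk_M$ is not an isomorphism when $M$ is not compact, which shows that $\mathrm{Mod}(\Bbbk_M)$ sits inside ${\rm I}\Bbbk_M$ in genuinely different ways via $\iota_M$ and $\beta_M$.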
We thus have (derived) functors\\ \begin{minipage}{61mm} \begin{align*} \iota_M &: {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M)\longrightarrow {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M),\\ \alpha_M &: {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)\longrightarrow {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M),\\ \beta_M &: {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M)\longrightarrow {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M),\\ \\ {}_Z(\cdot)&\colon{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)\longrightarrow{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M),\\ \mathbf{R}\Gamma_Z&\colon{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)\longrightarrow{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M), \end{align*} \end{minipage} \begin{minipage}{61mm} \begin{align*} (\cdot)\otimes(\cdot) &: {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)\times{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)\longrightarrow {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M), \\ {\mathbf{R}}{\mathcal{I}}hom(\cdot, \cdot) &: {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)^{{\mbox{\scriptsize op}}}\times{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)\longrightarrow \mathbf{D}^+({\rm I}\Bbbk_M), \\ {\mathbf{R}}{\mathcal{H}}om(\cdot, \cdot) &: {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)^{{\mbox{\scriptsize op}}}\times{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)\longrightarrow \mathbf{D}^+(\Bbbk_M), \\ {\mathbf{R}}\mathrm{Hom}(\cdot, \cdot) &: {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)^{{\mbox{\scriptsize op}}}\times{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)\longrightarrow \mathbf{D}^+(\Bbbk), \\ \mathbf{R} f_\ast &: {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)\longrightarrow {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_N),\\ f^{-1} &: {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_N)\longrightarrow {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M),\\ \mathbf{R} f_{!!} &: {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)\longrightarrow {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_N),\\ f^! 
&: {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_N)\longrightarrow {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M), \end{align*} \end{minipage} where $Z$ is a locally closed subset of $M$. These functors enjoy many properties similar to those of classical sheaves; see \cite{KS01, KS06} for details. We just recall the commutativity of the various functors. Let us summarize them in the table below. Here, $``\circ"$ means that the functors commute, and $``\times"$ that they do not. \begin{table}[h] \begin{equation*} \begin{tabular}{l||c|c|c|c|c} {} & $\otimes$ & $f^{-1}$ & ${\rm R} f_\ast$ & $f^!$ & ${\rm R} f_{!!}$ \\ \hline \hline $\overset{}{\underset{}{\iota}}$ & $\circ$ & $\circ$ & $\circ$ & $\circ$ & $\times$ \\ \hline $\overset{}{\underset{}{\alpha}}$ & $\circ$ & $\circ$ & $\circ$ & $\times$ & $\circ$ \\ \hline $\overset{}{\underset{}{\beta}}$ & $\circ$ & $\circ$ & $\times$ & $\times$ & $\times$\\ \hline \end{tabular} \end{equation*} \end{table} \newpage At the end of this subsection, we shall recall the notion of ind-sheaves with ring actions. An ind-$\Bbbk_M$-algebra is the data of an ind-sheaf $R$ and morphisms of ind-sheaves $$\mu_R\colon R\otimes R\to R,\ \varepsilon_R\colon \iota_M\Bbbk_M\to R$$ such that the following diagrams are commutative: \[\hspace{-27pt} \xymatrix@M=5pt@R=20pt@C=40pt{ R\ar@{->}[r]^-\sim\ar@{->}[d]_-{\rm id_{R}} & \iota_M\Bbbk_{M}\otimes R\ar@{->}[d]^-{\varepsilon_{R}\otimes R}\\ R & R\otimes R\ar@{->}[l]^-{\mu_{R}}, }\hspace{11pt} \xymatrix@M=5pt@R=20pt@C=40pt{ R\ar@{->}[r]^-\sim\ar@{->}[d]_-{\rm id_{R}} & R\otimes\iota_M\Bbbk_{M}\ar@{->}[d]^-{R\otimes\varepsilon_{R}}\\ R & R\otimes R\ar@{->}[l]^-{\mu_{R}}, }\hspace{11pt} \xymatrix@M=5pt@R=20pt@C=40pt{ R\otimes R\otimes R\ar@{->}[r]^-{\mu_R\otimes R}\ar@{->}[d]_-{R\otimes\mu_R} & R\otimes R\ar@{->}[d]^-{\mu_{R}}\\ R\otimes R \ar@{->}[r]_-{\mu_{R}} & R. }\] Let $R$ be an ind-$\Bbbk_M$-algebra.
An $R$-module is the data of an ind-sheaf $F$ and a morphism $\mu_F\colon R\otimes F\to F$ such that the following diagrams are commutative: \[\hspace{-27pt} \xymatrix@M=5pt@R=20pt@C=40pt{ F\ar@{->}[r]^-\sim\ar@{->}[d]_-{\rm id_{F}} & \iota_M\Bbbk_{M}\otimes F\ar@{->}[d]^-{\varepsilon_{R}\otimes F}\\ F & R\otimes F\ar@{->}[l]^-{\mu_{F}}, }\hspace{11pt} \xymatrix@M=5pt@R=20pt@C=40pt{ R\otimes R\otimes F\ar@{->}[r]^-{\mu_R\otimes F}\ar@{->}[d]_-{R\otimes\mu_F} & R\otimes F\ar@{->}[d]^-{\mu_{F}}\\ R\otimes F \ar@{->}[r]_-{\mu_{F}} & F. }\] A morphism of $R$-modules from $F$ to $G$ is a morphism $\varphi\colon F\to G$ of ind-sheaves such that the following diagram is commutative: \[\hspace{-27pt} \xymatrix@M=5pt@R=20pt@C=40pt{ R\otimes F\ar@{->}[r]^-{R\otimes \varphi}\ar@{->}[d]_-{\mu_F} & R\otimes G\ar@{->}[d]^-{\mu_G}\\ F\ar@{->}[r]_-{\varphi} & G. }\] Let us denote by ${\rm I} R$ the category of $R$-modules. Note that it is abelian, see \cite[\S 5.4]{KS01} for the details. For an $R$-module $F$, we denote by $\nu_F\colon F\to\mathcal{I}hom(R, F)$ the morphism corresponding to $\mu_F$ under the isomorphism $\mathrm{Hom}_{{\rm I}\Bbbk_M}(R\otimes F, F)\simeq \mathrm{Hom}_{{\rm I}\Bbbk_M}(F, \mathcal{I}hom(R, F))$ and set $e_F \colon F\simeq \Bbbk_M\otimes F\xrightarrow{\ \varepsilon_R\otimes F} R\otimes F,\ e_F^\ast\colon \mathcal{I}hom(R, F)\xrightarrow{\ \mathcal{I}hom(\varepsilon_R, F)\ }\mathcal{I}hom(\Bbbk_M, F)\simeq F.$ Note that in the classical case of a module over a ring, the morphisms analogous to $\mu_F, \nu_F, e_F$ and $e_F^\ast$ are $r\otimes f\mapsto rf$, $f\mapsto (r\mapsto rf)$, $f\mapsto 1\otimes f$ and $\varphi\mapsto \varphi(1)$, respectively.
Moreover, for an ind-$\Bbbk_M$-algebra $R$ we define an ind-$\Bbbk_M$-algebra $R^{\mbox{\scriptsize op}}$ as the data of the underlying ind-sheaf $R$, the morphism $\mu_{R^{\mbox{\scriptsize op}}}\colon R\otimes R \to R$ of ind-sheaves corresponding to $a\otimes b\mapsto b\otimes a$ and the morphism $\varepsilon_{R^{\mbox{\scriptsize op}}} := \varepsilon_{R}$. \newpage The tensor product functor and the internal hom functor \begin{align*} (\cdot)\underset{R}{\otimes}(\cdot) &\colon {\rm I} R^{\mbox{\scriptsize op}}\times {\rm I} R\to {\rm I}\Bbbk_M,\\ \mathcal{I}hom_R(\cdot, \cdot) &\colon ({\rm I} R)^{\mbox{\scriptsize op}}\times {\rm I} R\to {\rm I}\Bbbk_M \end{align*} are defined by \begin{align*} F\underset{R}{\otimes}G &:= \operatorname{Coker}\(\mu_F'\otimes G - F\otimes\mu_G\colon F\otimes R\otimes G\to F\otimes G \),\\ \mathcal{I}hom_R(F, G) &:= \operatorname{Ker}\(\mathcal{I}hom(\mu_F, G) - \mathcal{I}hom(F, \nu_G)'\colon \mathcal{I}hom(F, G)\to \mathcal{I}hom(R\otimes F, G)\) \end{align*} where the morphism $\mu_F'$ is the composition $$F\otimes R \simeq R\otimes F\xrightarrow{\ \mu_F\ }F$$ and the morphism $ \mathcal{I}hom(F, \nu_G)'$ is the composition $$\mathcal{I}hom(F, G)\xrightarrow{\ \mathcal{I}hom(F, \nu_G)\ }\mathcal{I}hom(F, \mathcal{I}hom(R, G)) \simeq \mathcal{I}hom(R\otimes F, G).$$ Note that there exist isomorphisms $R\underset{R}{\otimes}F\simeq F, \mathcal{I}hom_R(R, F)\simeq F$ in ${\rm I}\Bbbk_M$, see \cite[Lem.\:5.4.8]{KS01} for the details. Let us denote by ${\mathbf{D}}^{\mathrm{b}}({\rm I} R)$ the derived category of ${\rm I} R$.
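For orientation, these are the direct analogues of the classical formulas for a $\Bbbk$-algebra $R$, a right $R$-module $F$ and a left $R$-module $G$:
$$F\underset{R}{\otimes}G \simeq \operatorname{Coker}\big(F\otimes_\Bbbk R\otimes_\Bbbk G\to F\otimes_\Bbbk G,\quad f\otimes r\otimes g\mapsto fr\otimes g - f\otimes rg\big),$$
$$\mathrm{Hom}_R(F, G) \simeq \operatorname{Ker}\big(\mathrm{Hom}_\Bbbk(F, G)\to \mathrm{Hom}_\Bbbk(R\otimes_\Bbbk F, G),\quad \varphi\mapsto \big(r\otimes f\mapsto \varphi(rf) - r\varphi(f)\big)\big).$$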
For any three ind-$\Bbbk_M$-algebras $R_1, R_2, R_3$, the following functors are well-defined: \begin{align*} (\cdot)\overset{{\mathbb L}}{\underset{R_2}{\otimes}}(\cdot) &\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}(R_1\otimes R_2^{\mbox{\scriptsize op}}))\times{\mathbf{D}}^{\mathrm{b}}({\rm I}(R_2\otimes R_3^{\mbox{\scriptsize op}}))\to\mathbf{D}^+({\rm I}(R_1\otimes R_3^{\mbox{\scriptsize op}})),\\ {\mathbf{R}}{\mathcal{I}}hom_{R_1}(\cdot, \cdot) &\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}(R_1\otimes R_2^{\mbox{\scriptsize op}}))^{\mbox{\scriptsize op}}\times{\mathbf{D}}^{\mathrm{b}}({\rm I}(R_1\otimes R_3^{\mbox{\scriptsize op}}))\to\mathbf{D}^+({\rm I}(R_2\otimes R_3^{\mbox{\scriptsize op}})), \end{align*} see \cite[Thms.\:5.4.19 (a), 5.4.21 (b)]{KS01} for the details. Let $f\colon M\to N$ be a morphism of good topological spaces and $S$ an ind-$\Bbbk_N$-algebra. Then, the following functors are well-defined: \begin{align*} \mathbf{R} f_\ast&\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}(f^{-1}S))\to {\mathbf{D}}^{\mathrm{b}}({\rm I} S),\\ f^{-1}&\colon{\mathbf{D}}^{\mathrm{b}}({\rm I} S)\to{\mathbf{D}}^{\mathrm{b}}({\rm I}(f^{-1}S)),\\ \mathbf{R} f_{!!}&\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}(f^{-1}S))\to {\mathbf{D}}^{\mathrm{b}}({\rm I} S), \end{align*} see \cite[Thm.\:5.5.1]{KS01} for the details. These functors enjoy many properties similar to those of classical sheaves; see \cite[\S\S 5.4, 5.5]{KS01} for the details.
We just recall that there exist isomorphisms \begin{align*} {\mathbf{R}}{\mathcal{I}}hom_{R_2}\({}_2F_3, {\mathbf{R}}{\mathcal{I}}hom_{R_1}\({}_1F_2, {}_1F_4\)\) &\simeq {\mathbf{R}}{\mathcal{I}}hom_{R_1}\({}_1F_2\overset{{\mathbb L}}{\underset{R_2}{\otimes}}{}_2F_3, {}_1F_4\),\\ {\mathbf{R}}{\mathcal{I}}hom_S\(G, \mathbf{R} f_\ast F\) &\simeq \mathbf{R} f_\ast{\mathbf{R}}{\mathcal{I}}hom_{f^{-1}S}\(f^{-1}G, F\) \end{align*} where $R_i\ (i = 1, 2, 3, 4)$ are ind-$\Bbbk_M$-algebras, ${}_iF_j \in {\mathbf{D}}^{\mathrm{b}}({\rm I}(R_i\otimes R_j^{\mbox{\scriptsize op}}))\ (i, j = 1, 2, 3, 4)$, $S$ is an ind-$\Bbbk_N$-algebra and $F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}(f^{-1}S)), G\in{\mathbf{D}}^{\mathrm{b}}({\rm I} S)$. Moreover, for any sheaf $\mathcal{A}$ of $\Bbbk$-algebras whose flat dimension is finite, the functors $\alpha_M, \beta_M$ induce functors (see \cite[\S 5.6]{KS01} for the details) \begin{align*} \alpha_M\colon{\rm I}(\beta_M\mathcal{A})\to\mathrm{Mod}(\mathcal{A}),\\ \beta_M\colon\mathrm{Mod}(\mathcal{A})\to {\rm I}(\beta_M\mathcal{A}),\\ \alpha_M\colon{\mathbf{D}}^{\mathrm{b}}({\rm I}(\beta_M\mathcal{A}))\to {\mathbf{D}}^{\mathrm{b}}(\mathcal{A}),\\ \beta_M\colon{\mathbf{D}}^{\mathrm{b}}(\mathcal{A})\to {\mathbf{D}}^{\mathrm{b}}({\rm I}(\beta_M\mathcal{A})). \end{align*} \subsection{Relation between Ind-sheaves and Subanalytic Sheaves}\label{subsec2.5} In this subsection, we shall recall the relation between ind-sheaves and subanalytic sheaves. References are made to \cite[\S\S 6.3, 7.1]{KS01} and \cite[\S A.2]{Pre13}. Let $M$ be a real analytic manifold and $\Bbbk$ a field. We denote by $\mathrm{Mod}_{\mathbb{R}-c}^c(\Bbbk_M)$ the abelian category of $\mathbb{R}$-constructible sheaves on $M$ with compact support and by ${\rm I}_{\mathbb{R}-c}\Bbbk_M$ the category of ind-objects of $\mathrm{Mod}_{\mathbb{R}-c}^c(\Bbbk_M)$.
Moreover, let us denote by ${\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_M)$ the full triangulated subcategory of ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M})$ consisting of objects whose cohomologies are contained in ${\rm I}_{\mathbb{R}-c}\Bbbk_{M}$. A functor $J_M\colon {\rm I}\Bbbk_M\to\mathrm{Mod}(\Bbbk_M^{\rm sub})$ is defined by $$J_M\left(``\varinjlim_{i\in I}"\mathcal{F}_i\right) := \varinjlim_{i\in I}\rho_{M\ast}\mathcal{F}_i.$$ Note that for any $F\in{\rm I}\Bbbk_M$ the subanalytic sheaf $J_MF$ is given by $\Gamma(U; J_MF) = \mathrm{Hom}_{{\rm I}\Bbbk_M}(\iota_M\Bbbk_U, F)$ for each subanalytic open subset $U$. Note also that the functor $J_M$ is left exact and admits a left adjoint $$I_M\colon \mathrm{Mod}(\Bbbk_M^{\rm sub})\to{\rm I}\Bbbk_M$$ which is fully faithful, exact and commutes with filtrant inductive limits. Then we have (derived) functors \begin{align*} I_M&\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\to{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M),\\ \mathbf{R} J_M&\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_M)\to{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub}), \end{align*} the pair $(I_M, \mathbf{R} J_M)$ is an adjoint pair, and there exists a canonical isomorphism ${\rm id}\overset{\sim}{\longrightarrow}\mathbf{R} J_M\circ I_M$. Moreover, we have $$\mathbf{R} J_M{\mathbf{R}}{\mathcal{I}}hom(I_M(\cdot),\ \cdot) \simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\cdot, \mathbf{R} J_M(\cdot)).$$ \begin{theorem}[{\cite[Thm.\:6.3.5]{KS01}}, see also {\cite[A.2.1]{Pre13}}]\label{thm2.26} There exists an equivalence of abelian categories: \[\xymatrix@M=7pt@C=45pt{ \mathrm{Mod}(\Bbbk_M^{\rm sub})\ar@<0.8ex>@{->}[r]^-{I_M}_-\sim & {\rm I}_{\mathbb{R}-c}\Bbbk_M \ar@<0.8ex>@{->}[l]^-{J_M}.
}\] Furthermore, there exists an equivalence of triangulated categories: \[\xymatrix@M=7pt@C=45pt{ {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\ar@<0.8ex>@{->}[r]^-{I_M}_-\sim & {\mathbf{D}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_M) \ar@<0.8ex>@{->}[l]^-{\mathbf{R} J_M}. }\] \end{theorem} Let us summarize the commutativity of the various functors with the functors $I$ and $\mathbf{R} J$. We will denote by $$\lambda_M\colon {\mathbf{D}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_M)\overset{\sim}{\longrightarrow} {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})$$ the inverse functor of $I_M\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M^{\rm sub})\overset{\sim}{\longrightarrow} {\mathbf{D}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_M)$. Let $f\colon M\to N$ be a morphism of real analytic manifolds. Then, we have \begin{align*} \alpha_M\circ I_M &\simeq \rho_{M}^{-1},\\ I_M\circ\rho_{M!}&\simeq\beta_M,\\ I_M\circ f^{-1}&\simeq f^{-1}\circ I_N,\\ \mathbf{R} f_{!!}\circ I_M&\simeq I_N\circ \mathbf{R} f_{!!},\\ I_M\circ f^{!}&\simeq f^{!}\circ I_N,\\ I_M(\cdot\otimes \cdot) &\simeq I_M(\cdot)\otimes I_M(\cdot), \end{align*} and \begin{align*} \mathbf{R} J_M\circ \iota_M&\simeq \rho_{M\ast},\\ \rho_{M}^{-1}\circ\mathbf{R} J_M&\simeq \alpha_M,\\ \mathbf{R} J_M\circ \beta_M&\simeq \rho_{M!},\\ \mathbf{R} f_{\ast}\circ \mathbf{R} J_M &\simeq \mathbf{R} J_N\circ \mathbf{R} f_{\ast} ,\\ \mathbf{R} J_M\circ f^{!}&\simeq f^{!}\circ \mathbf{R} J_N. \end{align*} Moreover, we have \begin{align*} I_M\circ\rho_{M\ast}^{\mathbb{R}-c} &\simeq \iota_M|_{\mathrm{Mod}_{\mathbb{R}-c}(\Bbbk_M)},\\ \lambda_M\circ f^{-1}&\simeq f^{-1}\circ \lambda_N,\\ \mathbf{R} f_{!!}\circ \lambda_M&\simeq \lambda_N\circ \mathbf{R} f_{!!},\\ \lambda_M(\cdot\otimes \cdot) &\simeq \lambda_M(\cdot)\otimes \lambda_M(\cdot). \end{align*} At the end of this subsection, let us consider Theorem \ref{thm2.26} with ring actions. Let $\mathcal{A}$ be a sheaf of $\Bbbk$-algebras whose flat dimension is finite.
Since the functors $I_M$ and $\lambda_M$ are exact and commute with the tensor product functor $\otimes$, the following functors are well-defined: \begin{align*} I_M&\colon\mathrm{Mod}(\rho_{M!}\mathcal{A})\to{\rm I}(\beta_M\mathcal{A}),\\ \lambda_M&\colon{\rm I}(\beta_M\mathcal{A})\cap{\rm I}_{\mathbb{R}-c}\Bbbk_M\to\mathrm{Mod}(\rho_{M!}\mathcal{A}). \end{align*} See also Remark \ref{rem2.26}. Hence, by Theorem \ref{thm2.26}, there exists an equivalence of abelian categories: \[\xymatrix@M=7pt@C=45pt{ \mathrm{Mod}(\rho_{M!}\mathcal{A})\ar@<0.8ex>@{->}[r]^-{I_M}_-\sim & {\rm I}(\beta_M\mathcal{A})\cap{\rm I}_{\mathbb{R}-c}\Bbbk_M \ar@<0.8ex>@{->}[l]^-{\lambda_M}. }\] Furthermore, there exists an equivalence of triangulated categories: \[\xymatrix@M=7pt@C=45pt{ {\mathbf{D}}^{\mathrm{b}}(\rho_{M!}\mathcal{A})\ar@<0.8ex>@{->}[r]^-{I_M}_-\sim & {\mathbf{D}}^{\mathrm{b}}({\rm I}(\beta_M\mathcal{A}))\cap{\mathbf{D}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_M) \ar@<0.8ex>@{->}[l]^-{\lambda_M}. }\] Moreover, for any $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}(\rho_{M!}\mathcal{A})$ and any $G\in{\mathbf{D}}^{\mathrm{b}}({\rm I}(\beta_M\mathcal{A}))\cap{\mathbf{D}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_M)$, one has $$\mathbf{R} J_M{\mathbf{R}}{\mathcal{I}}hom_{\beta_M\mathcal{A}}(I_M\mathcal{M}, G) \simeq {\mathbf{R}}{\mathcal{I}}hom_{\rho_{M!}\mathcal{A}}^{\rm sub}(\mathcal{M}, \lambda_MG).$$ This claim follows from the fact that $\mathbf{R} J_M{\mathbf{R}}{\mathcal{I}}hom(I_M(\cdot),\ \cdot) \simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\cdot, \mathbf{R} J_M(\cdot))$ and the definition of the bifunctor $\mathcal{I}hom_{\beta_M\mathcal{A}}$. See also Remark \ref{rem2.26}. \subsection{Ind-Sheaves on Bordered Spaces} In this subsection, we shall recall the notions of bordered spaces and ind-sheaves on bordered spaces. References are made to \cite[\S 3]{DK16} for the details.
Throughout this subsection, we assume that $\Bbbk$ is a field. \subsubsection{Bordered Spaces} A bordered space is a pair $M_{\infty} = (M, \che{M})$ of a good topological space $\che{M}$ and an open subset $M\subset\che{M}$. A morphism $f\colon (M, \che{M})\to (N, \che{N})$ of bordered spaces is a continuous map $f\colon M\to N$ such that the first projection $\che{M}\times\che{N}\to\che{M}$ is proper on the closure $\var{\Gamma}_f$ of the graph $\Gamma_f$ of $f$ in $\che{M}\times\che{N}$. Note that a morphism $\che{f}\colon \che{M}\to\che{N}$ such that $\che{f}(M)\subset N$ induces a morphism of bordered spaces from $(M, \che{M})$ to $(N, \che{N})$. The category of good topological spaces is embedded into that of bordered spaces by the identification $M = (M, M)$. For a locally closed subset $Z$ of $M$, we set $Z_\infty := (Z, \var{Z})$, where $\var{Z}$ is the closure of $Z$ in $\che{M}$, and denote by $i_{Z_\infty}\colon Z_\infty\to M_\infty$ a morphism of bordered spaces induced by the natural embedding $i_Z\colon Z\hookrightarrow M$. Moreover, let us denote by $j_{M_\infty}\colon M_\infty\to\che{M}$ a morphism of bordered spaces induced by the natural embedding $j_M\colon M\hookrightarrow \che{M}$. \subsubsection{Ind-Sheaves on Bordered Spaces} A quotient category \begin{align*} {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}) &:= {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})/{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}\backslash M}) \end{align*} is called the category of ind-sheaves on $M_{\infty} = (M, \che{M})$, where ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}\backslash M})$ is identified with its essential image in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})$ by the fully faithful functor $\mathbf{R} i_{\che{M}\backslash M !!} \simeq \mathbf{R} i_{\che{M}\backslash M\ast}$. Here $i_{\che{M}\backslash M}\colon \che{M}\backslash M\to \che{M}$ is the closed embedding.
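As an elementary illustration of the properness condition in the definition of morphisms of bordered spaces (a sketch, not taken from the references), set $\var{\mathbb{R}} := \mathbb{R}\sqcup\{-\infty, +\infty\}$ and $\mathbb{R}_\infty := (\mathbb{R}, \var{\mathbb{R}})$. The continuous map $f(x) = 1/x$ on $(0,1)$ defines a morphism of bordered spaces
$$f\colon \big((0,1), \mathbb{R}\big)\to \mathbb{R}_\infty,$$
since the closure of its graph in $\mathbb{R}\times\var{\mathbb{R}}$ is contained in $[0,1]\times\var{\mathbb{R}}$, so that the first projection is proper on it. In contrast, $f$ does not define a morphism $\big((0,1), \mathbb{R}\big)\to(\mathbb{R}, \mathbb{R})$: the closure of the graph in $\mathbb{R}\times\mathbb{R}$ is $\{(x, 1/x)\ |\ 0 < x\leq 1\}$, and the preimage of the compact subset $[0,1]$ under the first projection is this whole unbounded set, which is not compact.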
An object of ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ is called an ind-sheaf on $M_{\infty}$. The quotient functor \[\mathbf{q}_{M_\infty} : {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})\to{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\] has a left adjoint $\mathbf{l}_{M_\infty}$ and a right adjoint $\mathbf{r}_{M_\infty}$, both fully faithful, defined by \begin{align*} \mathbf{l}_{M_\infty} &\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}}),\ \mathbf{q} F \mapsto \iota_{\che{M}}\Bbbk_M\otimes F,\\ \mathbf{r}_{M_\infty} &\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}}),\ \mathbf{q} F \mapsto {\mathbf{R}}{\mathcal{I}}hom(\iota_{\che{M}}\Bbbk_M, F). \end{align*} Moreover, they induce equivalences of categories: \begin{align*} \mathbf{l}_{M_\infty} &\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\overset{\sim}{\longrightarrow} \{F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})\ |\ \iota_{\che{M}}\Bbbk_{M}\otimes F\overset{\sim}{\longrightarrow} F\},\\ \mathbf{r}_{M_\infty} &\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\overset{\sim}{\longrightarrow} \{F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})\ |\ F\overset{\sim}{\longrightarrow}{\mathbf{R}}{\mathcal{I}}hom(\iota_{\che{M}}\Bbbk_{M}, F)\}.
\end{align*} \newpage For a morphism $f\colon M_\infty\to N_\infty$ of bordered spaces, we have the Grothendieck operations \begin{align*} (\cdot)\otimes(\cdot) &\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\times{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}),\\ {\mathbf{R}}{\mathcal{I}}hom(\cdot, \cdot) &\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})^{\mbox{\scriptsize op}}\times{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}),\\ \mathbf{R} f_\ast&\colon{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty}),\\ f^{-1}&\colon{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty})\to {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}),\\ \mathbf{R} f_{!!}&\colon{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty}),\\ f^!&\colon{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty})\to {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}), \end{align*} see \cite[Defs\:3.3.1, 3.3.4]{DK16} for the definitions. Moreover, these functors enjoy many properties similar to those of classical sheaves; we shall not repeat them here and refer to \cite[\S 3.3]{DK16} for the details. Note that the functors $j_{M_\infty}^{-1}\simeq j_{M_\infty}^! \colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})\to{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ are isomorphic to the quotient functor and the functor $\mathbf{R} j_{M_\infty!!}\colon{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})$ (resp.\ $\mathbf{R} j_{M_\infty\ast}\colon{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})$) is isomorphic to the functor $\mathbf{l}_{M_\infty}$ (resp.\ $\mathbf{r}_{M_\infty}$).
Note also that there exists an embedding functor $$\iota_{M_\infty}\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M}) \hookrightarrow {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}), \hspace{7pt} \mathcal{F}\mapsto j_{M_\infty}^{-1}\iota_{\che{M}}j_{M!}\mathcal{F}\ (\,\simeq j_{M_\infty}^{-1}\iota_{\che{M}}\mathbf{R} j_{M\ast}\mathcal{F})$$ which has an exact left adjoint $$\alpha_{M_\infty}\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M}), \hspace{7pt} F\mapsto j_M^{-1}\alpha_{\che{M}}\mathbf{R} j_{M_\infty!!}F\ (\,\simeq j_M^{-1}\alpha_{\che{M}}\mathbf{R} j_{M_\infty\ast}F) $$ that has in turn an exact fully faithful left adjoint functor $$\beta_{M_\infty}\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M}) \hookrightarrow {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}), \hspace{7pt} \mathcal{F}\mapsto j_{M_\infty}^{-1}\beta_{\che{M}}j_{M!}\mathcal{F}\ (\,\simeq j_{M_\infty}^{-1}\beta_{\che{M}}\mathbf{R} j_{M\ast}\mathcal{F}).$$ It is clear that the quotient category ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}) := {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{\che{M}})/{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{\che{M}\backslash M})$ is equivalent to the derived category ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M})$ of the abelian category $\mathrm{Mod}(\Bbbk_M)$. We sometimes write ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty})$ for ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M})$, when considered as a full subcategory of ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$. Let $\mathcal{A}$ be a sheaf of $\Bbbk$-algebras on $M$ whose flat dimension is finite. 
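For later reference, it may help to record the resulting chain of adjunctions (a standard consequence of the definitions above, parallel to the case of ind-sheaves on good topological spaces):

```latex
\beta_{M_\infty}\dashv\alpha_{M_\infty}\dashv\iota_{M_\infty},
\qquad
\alpha_{M_\infty}\circ\iota_{M_\infty}\simeq\mathrm{id}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M})},
\qquad
\alpha_{M_\infty}\circ\beta_{M_\infty}\simeq\mathrm{id}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M})}.
```

In particular, the two composites recover the identity, which is compatible with the fully faithfulness of $\iota_{M_\infty}$ and $\beta_{M_\infty}$ stated above.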
Then we have a bifunctor $${\mathbf{R}}{\mathcal{I}}hom_{\pi^{-1}\beta_M\mathcal{A}}(\pi^{-1}\beta_M(\cdot), \cdot)\colon {\mathbf{D}}^{\mathrm{b}}(\mathcal{A})^{\mbox{\scriptsize op}}\times{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M\times\mathbb{R}_\infty})\to {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M\times\mathbb{R}_\infty})$$ which is defined by $$(\mathcal{M}, \mathbf{q}_{M\times\mathbb{R}_\infty}F) \mapsto \mathbf{q}_{M\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom_{\overline{\pi}^{-1}\beta_M\mathcal{A}}(\overline{\pi}^{-1}\beta_M\mathcal{M}, F),$$ where $\overline{\pi}\colon M\times \overline{\mathbb{R}} \to M$ is the projection. This functor is needed to define the enhanced solution functors in Definition \ref{def3.33}. \newpage \subsection{Enhanced Ind-Sheaves}\label{subsec2.7} It was a long-standing problem to generalize the Riemann--Hilbert correspondence for regular holonomic $\mathcal{D}$-modules to the (not necessarily regular) holonomic $\mathcal{D}$-module case. One of the difficulties was to find an appropriate substitute for the target category of the regular case. In \cite{DK16}, the authors solved it by using enhanced ind-sheaves. In this subsection, we shall recall a notion more general than that of enhanced ind-sheaves on topological spaces, namely the notion of enhanced ind-sheaves on bordered spaces. References are made to \cite{KS16-2} and \cite{DK16-2}. We also refer to \cite{DK16} and \cite{KS16} for the notion of enhanced ind-sheaves on good topological spaces. Let $M_\infty = (M, \che{M})$ be a bordered space. We set $\mathbb{R}_\infty := (\mathbb{R}, \var{\mathbb{R}})$ for $\var{\mathbb{R}} := \mathbb{R}\sqcup\{-\infty, +\infty\}$, and let $t\in\mathbb{R}$ be the affine coordinate. We consider the morphisms of bordered spaces \[M_\infty\times\mathbb{R}^2_\infty\xrightarrow{p_1,\ p_2,\ \mu}M_\infty \times\mathbb{R}_\infty\overset{\pi}{\longrightarrow}M_\infty\] given by the maps $p_1(x, t_1, t_2) := (x, t_1)$, $p_2(x, t_1, t_2) := (x, t_2)$, $\mu(x, t_1, t_2) := (x, t_1+t_2)$ and $\pi (x,t) := x$.
Then the convolution functors for ind-sheaves on $M_\infty \times \mathbb{R}_\infty$ \begin{align*} (\cdot)\overset{+}{\otimes}(\cdot)&\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times \mathbb{R}_\infty})\times {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times \mathbb{R}_\infty}) \to{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times \mathbb{R}_\infty}),\\ {\mathbf{R}}{\mathcal{I}}hom^+(\cdot, \cdot)&\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times \mathbb{R}_\infty})^{\mbox{\scriptsize op}}\times {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times \mathbb{R}_\infty}) \to{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times \mathbb{R}_\infty}) \end{align*} are defined by \begin{align*} F_1\overset{+}{\otimes} F_2 & := \mathbf{R}\mu_{!!}(p_1^{-1}F_1\otimes p_2^{-1}F_2),\\ {\mathbf{R}}{\mathcal{I}}hom^+(F_1, F_2) & := \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom(p_2^{-1}F_1, \mu^!F_2), \end{align*} where $F_1, F_2\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times \mathbb{R}_\infty})$.
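As a basic worked example of these formulas (a standard computation, recorded here as a sanity check, with the embedding $\iota_{M_\infty\times\mathbb{R}_\infty}$ omitted from the notation), consider two half-line sheaves in the $t$-variable: for $a, b\in\mathbb{R}$,

```latex
\Bbbk_{\{t\geq a\}}\overset{+}{\otimes}\Bbbk_{\{t\geq b\}}
\simeq \mathbf{R}\mu_{!!}\,\Bbbk_{\{t_1\geq a,\ t_2\geq b\}}
\simeq \Bbbk_{\{t\geq a+b\}}.
```

Indeed, the fiber of $\mu$ over a point $(x, t)$ meets $\{t_1\geq a,\ t_2\geq b\}$ in a compact interval if $t\geq a+b$ and in the empty set otherwise, and the compactly supported cohomology of a compact interval is $\Bbbk$ concentrated in degree $0$.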
Now we define the triangulated category of enhanced ind-sheaves on a bordered space $M_\infty$ by $${\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}) := {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty \times\mathbb{R}_\infty})/\pi^{-1}{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}).$$ The quotient functor \[\mathbf{Q}_{M_\infty} \colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})\to{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\] has fully faithful left and right adjoints \begin{align*} \mathbf{L}_{M_\infty}^{\rm E}\colon {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}) \to{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty}),\\ \mathbf{R}_{M_\infty}^{\rm E} \colon {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}) \to{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty}) \end{align*} defined by \begin{align*} \mathbf{L}_{M_\infty}^{\rm E}(\mathbf{Q}_{M_\infty}(F)) &:= \iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}})\overset{+}{\otimes} F,\\ \mathbf{R}_{M_\infty}^{\rm E}(\mathbf{Q}_{M_\infty}(F)) &:={\mathbf{R}}{\mathcal{I}}hom^+(\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}), F) \end{align*} for $F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$, where $\{t\geq0\}$ stands for $\{(x, t)\in M\times\mathbb{R}\ |\ t\geq0\}$ and $\{t\leq0\}$ is defined similarly. We sometimes denote $\mathbf{Q}_{M_\infty}$ (resp.\ $\mathbf{L}_{M_\infty}^{\rm E}, \mathbf{R}_{M_\infty}^{\rm E}$) by $\mathbf{Q}$ (resp.\ $\mathbf{L}^{\rm E}, \mathbf{R}^{\rm E}$) for short.
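The fully faithfulness of $\mathbf{L}_{M_\infty}^{\rm E}$ and $\mathbf{R}_{M_\infty}^{\rm E}$ reflects the fact that the kernel $\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}$ is idempotent for the convolution product (a standard computation, sketched here with the embedding $\iota_{M_\infty\times\mathbb{R}_\infty}$ omitted):

```latex
(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})
\overset{+}{\otimes}
(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})
\simeq \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}.
```

Indeed, $\Bbbk_{\{t\geq0\}}\overset{+}{\otimes}\Bbbk_{\{t\geq0\}}\simeq\Bbbk_{\{t\geq0\}}$ (and similarly for $\{t\leq0\}$), while the cross terms vanish: the fibers of $\mu$ meet $\{t_1\geq0,\ t_2\leq0\}$ in closed half-lines, whose compactly supported cohomology is zero, so that $\Bbbk_{\{t\geq0\}}\overset{+}{\otimes}\Bbbk_{\{t\leq0\}}\simeq0$.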
Moreover they induce equivalences of categories \begin{align*} \mathbf{L}_{M_\infty}^{\rm E} &\colon {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\overset{\sim}{\longrightarrow} \left\{F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})\ |\ \iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} F\overset{\sim}{\longrightarrow} F\right\},\\ \mathbf{R}_{M_\infty}^{\rm E} &\colon {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\overset{\sim}{\longrightarrow} \left\{F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})\ |\ F\overset{\sim}{\longrightarrow} {\mathbf{R}}{\mathcal{I}}hom^+(\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), F)\right\}. \end{align*} Then we have the following standard t-structure on ${\mathbf{E}}^{\mathrm{b}}({\rm I}\mathbb{C}_{M_\infty})$ which is induced by the standard t-structure on ${\mathbf{D}}^{\mathrm{b}}({\rm I}\mathbb{C}_{M_\infty\times\mathbb{R}_\infty})$ \begin{align*} \mathbf{E}^{\leq 0}({\rm I}\mathbb{C}_{M_\infty}) & = \{K\in {\mathbf{E}}^{\mathrm{b}}({\rm I}\mathbb{C}_{M_\infty})\ | \ \mathbf{L}_{M_\infty}^{{\rm E}}K\in \mathbf{D}^{\leq 0}({\rm I}\mathbb{C}_{M_\infty\times\mathbb{R}_\infty})\},\\ \mathbf{E}^{\geq 0}({\rm I}\mathbb{C}_{M_\infty}) & = \{K\in {\mathbf{E}}^{\mathrm{b}}({\rm I}\mathbb{C}_{M_\infty})\ | \ \mathbf{L}_{M_\infty}^{{\rm E}}K\in \mathbf{D}^{\geq 0}({\rm I}\mathbb{C}_{M_\infty\times\mathbb{R}_\infty})\}. \end{align*} We denote by \[\mathcal{H}^n \colon {\mathbf{E}}^{\mathrm{b}}({\rm I}\mathbb{C}_{M_\infty})\to\mathbf{E}^0({\rm I}\mathbb{C}_{M_\infty})\] the $n$-th cohomology functor, where we set $$\mathbf{E}^0({\rm I}\mathbb{C}_{M_\infty}) := \mathbf{E}^{\leq 0}({\rm I}\mathbb{C}_{M_\infty})\cap\mathbf{E}^{\geq 0}({\rm I}\mathbb{C}_{M_\infty}).$$ The convolution functors for enhanced ind-sheaves on bordered spaces are well defined. 
We denote them by the same symbols: \begin{align*} (\cdot)\overset{+}{\otimes}(\cdot) \colon {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\times{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}),\\ {\mathbf{R}}{\mathcal{I}}hom^+(\cdot, \cdot)\colon {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})^{\mbox{\scriptsize op}}\times{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}). \end{align*} For a morphism $f \colon M_\infty \to N_\infty $ of bordered spaces, we can also define the operations \begin{align*} \mathbf{E} f_\ast&\colon{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty}),\\ \mathbf{E} f^{-1}&\colon{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty})\to {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}),\\ \mathbf{E} f_{!!}&\colon{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty}),\\ \mathbf{E} f^!&\colon{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty})\to {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}) \end{align*} for enhanced ind-sheaves on bordered spaces. For example, we define $\mathbf{E} f_\ast \big( \mathbf{Q}_{M_\infty}F\big) := \mathbf{Q}_{N_\infty}\big(\mathbf{R} f_{\mathbb{R}_\infty\ast}F\big)$ for $F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$, where $f_{\mathbb{R}_\infty}\colon M_\infty \times \mathbb{R}_{\infty} \to N_\infty \times \mathbb{R}_{\infty}$ is the natural morphism of bordered spaces associated to $f$. The other operations are defined similarly. Note that there exists a morphism $\mathbf{E} f_{!!}\to\mathbf{E} f_\ast$ of functors from ${\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ to ${\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty})$ and it is an isomorphism if $f$ is proper.
Moreover, we have the external hom functors \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{\rm E}(\cdot, \cdot)&\colon {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})^{\mbox{\scriptsize op}}\times{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}),\\ {\mathbf{R}}{\mathcal{H}}om^{\rm E}(\cdot, \cdot)&\colon {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})^{\mbox{\scriptsize op}}\times{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M}),\\ {\mathbf{R}}\mathrm{Hom}^{\rm E}(\cdot, \cdot)&\colon {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})^{\mbox{\scriptsize op}}\times{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{D}}^{\mathrm{b}}(\Bbbk), \end{align*} which are defined by \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{\rm E}(K_1, K_2)&:=\mathbf{R}\pi_\ast{\mathbf{R}}{\mathcal{I}}hom(\mathbf{L}_{M_\infty}^{\rm E} K_1, \mathbf{L}_{M_\infty}^{\rm E} K_2),\\ {\mathbf{R}}{\mathcal{H}}om^{\rm E}(K_1, K_2) &:= \alpha_{M_\infty}{\mathbf{R}}{\mathcal{I}}hom^{\rm E}(K_1, K_2),\\ {\mathbf{R}}\mathrm{Hom}^{\rm E}(K_1, K_2) &:= \mathbf{R}\Gamma\big(M; {\mathbf{R}}{\mathcal{H}}om^{\rm E}(K_1, K_2)\big), \end{align*} for $K_1, K_2\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$. For $F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ and $K\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$, the objects \begin{align*} \pi^{-1}F\otimes K & :=\mathbf{Q}_{M_\infty}(\pi^{-1}F\otimes \mathbf{L}_{M_\infty}^{\rm E} K),\\ {\mathbf{R}}{\mathcal{I}}hom(\pi^{-1}F, K) & :=\mathbf{Q}_{M_\infty}\big({\mathbf{R}}{\mathcal{I}}hom(\pi^{-1}F, \mathbf{R}_{M_\infty}^{\rm E} K)\big) \end{align*} in ${\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ are well defined.
Hence, we have functors \begin{align*} \pi^{-1}(\cdot)\otimes (\cdot)&\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\times{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}),\\ {\mathbf{R}}{\mathcal{I}}hom(\pi^{-1}(\cdot),\ \cdot)&\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})^{\mbox{\scriptsize op}}\times{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}). \end{align*} We set $$\Bbbk_{M_\infty}^{\rm E} := \mathbf{Q}_{M_\infty}\mathbf{q} \(``\underset{a\to +\infty}{\varinjlim}"\ \iota_{\che{M}\times\var{\mathbb{R}}}\Bbbk_{\{t\geq a\}}\)\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}).$$ Then we have a natural embedding functor $$e_{M_\infty} \colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}) \to {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}), \hspace{17pt} F\mapsto\Bbbk_{M_\infty}^{\rm E}\otimes\pi^{-1}F.$$ Let us define $$\omega_{M_\infty}^{\rm E} := e_{M_\infty}(\iota_{M_\infty}\omega_M)\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$$ where $\omega_M\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty})\ (\simeq {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M))$ is the dualizing complex, see \cite[Definition 3.1.16]{KS90} for the details. Then we have the Verdier duality functor for enhanced ind-sheaves on bordered spaces $${\rm D}_{M_\infty}^{\rm E} \colon{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})^{{\mbox{\scriptsize op}}}\to{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}), \hspace{17pt} K\mapsto{\mathbf{R}}{\mathcal{I}}hom^+(K, \omega_{M_\infty}^{\rm E}).$$ Note that there exists an isomorphism ${\rm D}_{M_\infty}^{\rm E}(e_{M_\infty}\iota_{M_\infty}\mathcal{F}) \simeq e_{M_\infty}(\iota_{M_\infty}{\rm D}_M\mathcal{F})$ in ${\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ for any $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M)$.
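As a quick sanity check on $\Bbbk_{M_\infty}^{\rm E}$ (a standard computation, applying $\Bbbk_{\{t\geq a\}}\overset{+}{\otimes}\Bbbk_{\{t\geq c\}}\simeq\Bbbk_{\{t\geq a+c\}}$ termwise in the inductive system defining $\Bbbk_{M_\infty}^{\rm E}$): for any $c\in\mathbb{R}$,

```latex
\Bbbk_{M_\infty}^{\rm E}
\overset{+}{\otimes}
\mathbf{Q}_{M_\infty}\iota_{M_\infty\times\mathbb{R}_\infty}\Bbbk_{\{t\geq c\}}
\simeq \Bbbk_{M_\infty}^{\rm E},
\qquad
\Bbbk_{M_\infty}^{\rm E}\overset{+}{\otimes}\Bbbk_{M_\infty}^{\rm E}
\simeq \Bbbk_{M_\infty}^{\rm E},
```

because convolving $``\underset{a\to+\infty}{\varinjlim}"\,\Bbbk_{\{t\geq a\}}$ with $\Bbbk_{\{t\geq c\}}$ merely reindexes the inductive system.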
Let $i_0 \colon M_\infty\to M_\infty\times\mathbb{R}_\infty$ be the morphism of bordered spaces induced by $x\mapsto (x, 0)$. We set \begin{align*} {\rm I}{\rm sh}_{M_\infty} &:= i_0^!\circ \mathbf{R}_{M_\infty}^{{\rm E}} \colon {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}) \to {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}),\\ {\rm sh}_{M_\infty} &:= \alpha_{M_\infty}\circ {\rm I}{\rm sh}_{M_\infty} \colon {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}) \to {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M}) \end{align*} and call them the ind-sheafification functor and the sheafification functor for enhanced ind-sheaves on bordered spaces, respectively. Note that the pair $(e_{M_\infty}, {\rm I}{\rm sh}_{M_\infty})$ is an adjoint pair and there exist isomorphisms $F\overset{\sim}{\longrightarrow} {\rm I}{\rm sh}_{M_\infty}e_{M_\infty}F$ for $F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ and $\mathcal{F}\overset{\sim}{\longrightarrow} {\rm sh}_{M_\infty}e_{M_\infty}\iota_{M_\infty}\mathcal{F}$ for $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M)$. See \cite[\S 3]{DK21} for details. For a continuous function $\varphi \colon U\to \mathbb{R}$ defined on an open subset $U\subset M$, we define the exponential enhanced ind-sheaf by \[\mathbb{E}_{U|M_\infty}^\varphi := \Bbbk_{M_\infty}^{\rm E}\overset{+}{\otimes} \mathbf{Q}_{M_\infty}\iota_{M_\infty\times\mathbb{R}_\infty}\Bbbk_{\{t+\varphi\geq0\}} , \] where $\{t+\varphi\geq0\}$ stands for $\{(x, t)\in M\times{\mathbb{R}}\ |\ x\in U, t+\varphi(x)\geq0\}$. \newpage \section{Main Results}\label{sec3} The main results of this paper are Theorems \ref{main1}, \ref{main2}, \ref{main3} and \ref{main4}. \subsection{Subanalytic Sheaves on Real Analytic Bordered Spaces}\label{subsec3.1} The notion of subanalytic sheaves on bordered spaces was introduced by M.\:Kashiwara.
Although it has already been explained in \cite[\S\S 3.4--3.7]{Kas16}, we shall explain it again in detail in this subsection\footnote{In \cite{Kas16}, the author introduced the notion of subanalytic sheaves on subanalytic bordered spaces. In this paper, we shall only consider them on real analytic bordered spaces.}. A real analytic bordered space is a bordered space $M_\infty = (M, \che{M})$ such that $\che{M}$ is a real analytic manifold and $M$ is an open subanalytic subset. A morphism $f\colon (M, \che{M})\to (N, \che{N})$ of real analytic bordered spaces is a morphism of bordered spaces such that the graph $\Gamma_f$ of $f$ is a subanalytic subset of $\che{M}\times\che{N}$. Note that a morphism $\che{f}\colon \che{M}\to\che{N}$ of real analytic manifolds such that $\che{f}(M)\subset N$ induces a morphism of real analytic bordered spaces from $(M, \che{M})$ to $(N, \che{N})$. The category of real analytic manifolds is embedded into that of real analytic bordered spaces by the identification $M = (M, M)$. Let $M_\infty = (M, \che{M})$ be a real analytic bordered space. We denote by $\SO p_{M_\infty}^{{\rm sub}}$ (resp.\,$\SO p_{M_\infty}^{{\rm sub},\,c}$) the category of open subsets of $M$ which are subanalytic (resp.\,subanalytic and relatively compact) in $\che{M}$. It is clear that they admit finite products and fiber products. Although the following proposition is well known to experts, we shall prove it in this paper. \begin{proposition} Let $M_\infty = (M, \che{M})$ be a real analytic bordered space. \begin{itemize} \item[\rm (1)] The category $\SO p_{M_\infty}^{{\rm sub}}$ can be endowed with the following Grothendieck topology: a subset $S\subset \SO b\((\SO p_{M_\infty}^{{\rm sub}})_U\)$ is a covering of $U\in\SO b\(\SO p_{M_\infty}^{{\rm sub}}\)$ if for any compact subset $K$ of $\che{M}$ there exists a finite subset $S'$ of $S$ such that $\displaystyle K\cap U = K\cap\bigcup_{V\in S'}V$.
\item[\rm(2)] The category $\SO p_{M_\infty}^{{\rm sub},\,c}$ can be endowed with the following Grothendieck topology: a subset $S\subset \SO b\((\SO p_{M_\infty}^{{\rm sub},\,c})_U\)$ is a covering of $U\in\SO b\(\SO p_{M_\infty}^{{\rm sub},\,c}\)$ if for any compact subset $K$ of $\che{M}$ there exists a finite subset $S'$ of $S$ such that $\displaystyle K\cap U = K\cap\bigcup_{V\in S'}V$. \end{itemize} \end{proposition} \begin{proof} Since the proof of (2) is similar, we only prove (1). It is clear that the condition (GT1) of Definition \ref{def-GT} is satisfied. Let us prove that the condition (GT2) of Definition \ref{def-GT} is satisfied. Let $S_1\subset \SO b((\SO p_{M_\infty}^{\rm sub})_U)$ be a covering of $U\in\SO b(\SO p_{M_\infty}^{\rm sub})$ which is a refinement of $S_2\subset \SO b((\SO p_{M_\infty}^{\rm sub})_U)$ and $K$ a compact subset of $\che{M}$. Then there exists a finite subset $S_1'$ of $S_1$ such that $K\cap U = K\cap(\cup_{V\in S_1'}V)$. Since $S_1$ is a refinement of $S_2$, for any $V_1\in S_1'$, there exists $V_2\in S_2$ such that $V_1\subset V_2$. We write $S'_2$ for the subset of $S_2$ consisting of such elements. Since $\cup_{V_1\in S_1'}V_1\subset \cup_{V_2\in S_2'}V_2$, we have $$K\cap U = K\cap \bigcup_{V_1\in S_1'}V_1 \subset K\cap \bigcup_{V_2\in S_2'}V_2 \hspace{7pt}(\subset K\cap U)$$ and hence $K\cap U = K\cap (\cup_{V_2\in S_2'}V_2)$. This implies that $S_2$ is a covering of $U$ and the condition (GT2) of Definition \ref{def-GT} is satisfied. Let us prove that the condition (GT3) of Definition \ref{def-GT} is satisfied. Let $S \subset \SO b((\SO p_{M_\infty}^{\rm sub})_U)$ be a covering of $U\in\SO b(\SO p_{M_\infty}^{\rm sub})$, $V\in\SO b((\SO p_{M_\infty}^{\rm sub})_U)$ and $K$ a compact subset of $\che{M}$. Then there exists a finite subset $S'$ of $S$ such that $K\cap U = K\cap(\cup_{W\in S'}W)$.
Then the set $V\times_US' := \{V\cap W\ |\ W\in S'\}$ is a finite subset of $V\times_US$ and we have $$K\cap \bigcup_{\tl{W}\in V\times_US'}\tl{W} = K\cap\bigcup_{W\in S'}(V\cap W) = \(K\cap\bigcup_{W\in S'}W\)\cap V = (K\cap U)\cap V = K\cap V.$$ This implies that $V\times_US$ is a covering of $V$ and the condition (GT3) of Definition \ref{def-GT} is satisfied. Let us prove that the condition (GT4) of Definition \ref{def-GT} is satisfied. Let $S_1$ be a covering of $U\in\SO b(\SO p_{M_\infty}^{\rm sub})$, $S_2$ a subset of $\SO b((\SO p_{M_\infty}^{\rm sub})_U)$ such that for any $V\in S_1$, $V\times_U S_2$ is a covering of $V$, and $K$ a compact subset of $\che{M}$. Then there exists a finite subset $S_1'$ of $S_1$ such that $K\cap U = K\cap (\cup_{V\in S_1'}V)$. Moreover, for any $V\in S_1'\ (\subset S_1)$ there exists a finite subset $\tl{S_V}$ of $V\times_US_2$ such that $K\cap V = K\cap (\cup_{\tl{W}\in \tl{S_V}}\tl{W})$. For any $V\in S_1'$ let us set $$S_V := \{W\in S_2\ |\ V\cap W\in \tl{S_V}\},$$ and set $S_2' := \cup_{V\in S_1'}S_V$. Then $S_2'$ is a finite subset of $S_2$ and we have $$K\cap U = \bigcup_{V\in S_1'}(K\cap V) = \bigcup_{V\in S_1'}\(K\cap \(\bigcup_{\tl{W}\in \tl{S_V}}\tl{W}\)\) = K\cap \bigcup_{V\in S_1'}\bigcup_{\tl{W}\in \tl{S_V}}\tl{W} = K\cap \bigcup_{V\in S_1'}\bigcup_{W\in S_V}(V\cap W). $$ Since $\cup_{V\in S_1'}\cup_{W\in S_V}(V\cap W)\subset \cup_{V\in S_1'}\cup_{W\in S_V}W = \cup_{W\in S_2'}W$, we have $K\cap U\subset K\cap\cup_{W\in S_2'}W$ and hence $$K\cap U = K\cap\bigcup_{W\in S_2'}W.$$ This implies that $S_2$ is a covering of $U$ and the condition (GT4) of Definition \ref{def-GT} is satisfied. \end{proof} Let us denote by $M_\infty^{{\rm sub}}$ (resp.\,$M_\infty^{{\rm sub},c}$) the site $\SO p_{M_\infty}^{{\rm sub}}$ (resp.\,$\SO p_{M_\infty}^{{\rm sub},\,c}$) with the above Grothendieck topology. 
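To see how this Grothendieck topology differs from the usual notion of an open covering, the following illustrative example on $\mathbb{R}_\infty = (\mathbb{R}, \var{\mathbb{R}})$ may be useful (a direct check of the definition above):

```latex
% Illustrative example on $\mathbb{R}_\infty = (\mathbb{R}, \var{\mathbb{R}})$.
The family $S = \{(-n, n)\}_{n\geq1}$ covers $\mathbb{R}$ in the usual sense, but it
is not a covering of $\mathbb{R}$ in the site $\mathbb{R}_\infty^{\rm sub}$: for the
compact subset $K = \var{\mathbb{R}}$ of $\var{\mathbb{R}}$ and any finite subset
$S'\subset S$ one has
$K\cap\bigcup_{V\in S'}V = (-n_0, n_0)\neq\mathbb{R} = K\cap\mathbb{R}$,
where $n_0$ is the largest index occurring in $S'$. By contrast, the finite family
$\{(-\infty, 1),\ (-1, +\infty)\}$ is a covering of $\mathbb{R}$ in $\mathbb{R}_\infty^{\rm sub}$.
```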
Note that the forgetful functor induces an equivalence of categories: $$for\colon\mathrm{Mod}(\Bbbk_{M_\infty^{{\rm sub}}})\overset{\sim}{\longrightarrow} \mathrm{Mod}(\Bbbk_{M_\infty^{{\rm sub},\,c}}).$$ Indeed, for $\mathcal{F}\in\mathrm{Mod}(\Bbbk_{M_\infty^{{\rm sub},\,c}})$ and for $V\in\SO b(\SO p_{M_\infty}^{{\rm sub}})$ we set $$\tl{\mathcal{F}}(V) := \varprojlim_{U\in\SO b\(\SO p_{M_\infty}^{{\rm sub},\,c}\)}\mathcal{F}(U\cap V).$$ Then we have $\tl{\mathcal{F}}\in\mathrm{Mod}(\Bbbk_{M_\infty^{{\rm sub}}})$ and the functor $\mathcal{F}\mapsto \tl{\mathcal{F}}$ is the inverse functor of the forgetful functor. \begin{definition} Let $M_\infty = (M, \che{M})$ be a real analytic bordered space. An object of $\mathrm{Mod}(\Bbbk_{M_\infty^{{\rm sub}}})\simeq\mathrm{Mod}(\Bbbk_{M_\infty^{{\rm sub},\,c}})$ is called a subanalytic sheaf on $M_\infty$. We shall write $\mathrm{Mod}(\Bbbk_{M_\infty}^{{\rm sub}})$ instead of $\mathrm{Mod}(\Bbbk_{M_\infty^{{\rm sub}}})\simeq\mathrm{Mod}(\Bbbk_{M_\infty^{{\rm sub},\,c}})$, for simplicity. \end{definition} The category of subanalytic sheaves on $M_\infty$ is abelian, and admits projective limits and inductive limits, by Theorem \ref{thm2.9} (1), (2) and (3). Moreover, subanalytic sheaves can be characterized as follows: \begin{lemma} Let $M_\infty = (M, \che{M})$ be a real analytic bordered space. 
We assume that a presheaf $\mathcal{F}$ on $M_\infty^{{\rm sub},\,c}$ satisfies the following two conditions: \begin{itemize} \item[\rm(1)] $\mathcal{F}(\emptyset)=0$, \item[\rm(2)] for any open subsets $U, V$ of $M$ which are subanalytic and relatively compact in $\che{M}$, the following sequence is exact: \[\xymatrix@M=5pt@R=0pt@C=20pt{ 0\ar@{->}[r] & \mathcal{F}(U\cup V)\ar@{->}[r]&\mathcal{F}(U)\oplus\mathcal{F}(V)\ar@{->}[r]&\mathcal{F}(U\cap V).\\ {}& s \ar@{|->}[r] & (s|_U, s|_V) & {}\\ {}& {} & (t, u) \ar@{|->}[r] & t|_{U\cap V}- u|_{U\cap V} }\] \end{itemize} Then the presheaf $\mathcal{F}$ is a sheaf on $M_\infty^{{\rm sub},\,c}$. \end{lemma} \begin{proof} First, let us prove that, for any finite family $\{U_i\}_{i=1}^n$ in $\SO b\(\SO p_{M_\infty}^{{\rm sub},\,c}\)$, a sequence \[\xymatrix@M=5pt@R=0pt@C=30pt{ 0\ar@{->}[r] & \mathcal{F}\(\overset{n}{\underset{i=1}{\bigcup}}U_i\)\ar@{->}[r]^-\alpha &\overset{n}{\underset{i=1}{\bigoplus}}\mathcal{F}(U_i)\ar@{->}[r]^-\beta &\underset{1\leq i < j \leq n}{\bigoplus}\mathcal{F}(U_{i}\cap U_j)\\ {}& s \ar@{|->}[r] & (s|_{U_i})_{i=1}^n & {}\\ {}& {} & (t_i)_{i=1}^n \ar@{|->}[r] & \(t_i|_{U_{i}\cap U_j}-t_j|_{U_{i}\cap U_j}\)_{1\leq i < j \leq n} }\] is exact. We shall prove it by induction on $n$. The cases of $n=1, 2$ are obvious. Let us assume that $n > 2$ and that the assertion has been proved for the cases $k\leq n-1$.
We shall consider the following commutative diagram: \[\xymatrix@M=5pt@R=30pt@C=30pt{ {} & {} & 0\ar@{->}[d] & 0\ar@{->}[d]\\ 0\ar@{->}[r] & \mathcal{F}\(\overset{n}{\underset{i=1}{\bigcup}}U_i\)\ar@{->}[r]^-{\alpha'}\ar@{->}[rd]^-{\alpha} &\mathcal{F}\(\overset{n-1}{\underset{i=1}{\bigcup}}U_i\)\oplus\mathcal{F}(U_n)\ar@{->}[r]^-{\beta'}\ar@{->}[d]^-{\alpha''} &\mathcal{F}\(\overset{n-1}{\underset{i=1}{\bigcup}}U_{i}\cap U_n\)\ar@{->}[d]^-{\gamma}\\ {} & & \overset{n}{\underset{i=1}{\bigoplus}}\mathcal{F}(U_i) \ar@{->}[rd]^-{\beta}\ar@{->}[d]^-{\beta''}\ar@{->}[r]^-{\beta'''} & \overset{n-1}{\underset{i=1}{\bigoplus}}\mathcal{F}(U_i\cap U_n)\ar@{->}[d]\\ {} & {} & \underset{1\leq i < j \leq n-1}{\bigoplus}\mathcal{F}(U_i\cap U_j)\ar@{->}[r] & \underset{1\leq i < j \leq n}{\bigoplus}\mathcal{F}(U_i\cap U_j), }\] where \begin{align*} &\alpha'(s) := \(s|_{\bigcup_{i=1}^{n-1} U_i}, s|_{U_n}\), \hspace{17pt} \beta'(t, u) := t|_{\bigcup_{i=1}^{n-1}U_i\cap U_n} - u|_{\bigcup_{i=1}^{n-1}U_i\cap U_n},\\ &\alpha''(t, u) := \(\(t|_{U_i}\)_{i=1}^{n-1}, u\), \hspace{17pt} \beta''\((t_i)_{i=1}^{n}\) := \(t_i|_{U_{i}\cap U_j}-t_j|_{U_{i}\cap U_j}\)_{1\leq i < j \leq n-1},\\ &\gamma(v) := (v|_{U_i\cap U_n})_{i=1}^{n-1}, \hspace{40pt} \beta'''\((t_i)_{i=1}^{n}\) := \(t_i|_{U_{i}\cap U_n}-t_n|_{U_{i}\cap U_n}\)_{i=1}^{n-1}. \end{align*} Remark that $\alpha = \alpha''\circ\alpha'$ and $\beta = \beta''\oplus \beta'''$. By the assumption (2), a sequence \[\xymatrix@M=5pt@R=0pt@C=20pt{ 0\ar@{->}[r] & \mathcal{F}\(\overset{n-1}{\underset{i=1}{\bigcup}}U_i\cup U_n\)\ar@{->}[r]^-{\alpha'} &\mathcal{F}\(\overset{n-1}{\underset{i=1}{\bigcup}}U_i\)\oplus\mathcal{F}(U_n)\ar@{->}[r]^-{\beta'} &\mathcal{F}\(\overset{n-1}{\underset{i=1}{\bigcup}}U_i\cap U_n\)}\] is exact. 
Moreover, by the induction hypothesis, $\gamma\colon\mathcal{F}\(\overset{n-1}{\underset{i=1}{\bigcup}}U_i\cap U_n\)\to \overset{n-1}{\underset{i=1}{\bigoplus}}\mathcal{F}(U_i\cap U_n)$ is injective, and a sequence \[\xymatrix@M=5pt@R=0pt@C=20pt{ 0\ar@{->}[r] & \mathcal{F}\(\overset{n-1}{\underset{i=1}{\bigcup}}U_i\)\ar@{->}[r] &\overset{n-1}{\underset{i=1}{\bigoplus}}\mathcal{F}(U_i)\ar@{->}[r] &\underset{1\leq i < j \leq n-1}{\bigoplus}\mathcal{F}(U_{i}\cap U_j)\\ {}& s' \ar@{|->}[r] & \(s'|_{U_i}\)_{i=1}^{n-1} & {}\\ {}& {} & \(t_i\)_{i=1}^{n-1} \ar@{|->}[r] & \(t_i|_{U_i\cap U_j}-t_j|_{U_i\cap U_j}\)_{1\leq i < j \leq n-1} }\] is exact, and hence the following sequence is exact: \[\xymatrix@M=5pt@R=0pt@C=20pt{ 0\ar@{->}[r] & \mathcal{F}\(\overset{n-1}{\underset{i=1}{\bigcup}}U_i\)\oplus\mathcal{F}(U_n)\ar@{->}[r]^-{\alpha''} &\overset{n-1}{\underset{i=1}{\bigoplus}}\mathcal{F}(U_i)\oplus\mathcal{F}(U_n)\ar@{->}[r]^-{\beta''} &\underset{1\leq i < j \leq n-1}{\bigoplus}\mathcal{F}(U_{i}\cap U_j).}\] Since $\alpha', \alpha''$ are injective, $\alpha$ is also injective. It is clear that $\beta\circ\alpha = 0$ and hence we have $\operatorname{Im}\alpha\subset\operatorname{Ker}\beta$. Let $(t_i)_{i=1}^n\in \operatorname{Ker}\beta$. Then we have $\beta\((t_i)_{i=1}^n\) = 0$ and hence $\beta''\((t_i)_{i=1}^n\) = \beta'''\((t_i)_{i=1}^n\) = 0$. Since $\operatorname{Im}\alpha''=\operatorname{Ker}\beta''$, there exists $(t, u)\in \mathcal{F}\(\overset{n-1}{\underset{i=1}{\bigcup}}U_i\)\oplus\mathcal{F}(U_n)$ such that $\alpha''(t, u)= \((t_i)_{i=1}^{n-1}, t_n\)$. Moreover, since $\gamma(\beta'(t, u)) = \beta'''(\alpha''(t, u)) = \beta'''\((t_i)_{i=1}^{n-1}, t_n\) = \beta'''\((t_i)_{i=1}^{n}\) = 0$ and $\gamma$ is injective, we have $\beta'(t, u) = 0$. Hence $(t, u)\in \operatorname{Ker}\beta' = \operatorname{Im}\alpha'$, and there exists $s\in\mathcal{F}\(\overset{n}{\underset{i=1}{\bigcup}}U_i\)$ such that $\alpha'(s) = (t, u)$.
Moreover, we have $\alpha(s) = \alpha''(\alpha'(s)) = \alpha''(t, u) = ((t_i)_{i=1}^{n-1}, t_n) = (t_i)_{i=1}^{n}$, and hence $\operatorname{Im}\alpha = \operatorname{Ker}\beta$. Let us prove that the presheaf $\mathcal{F}$ is a sheaf on $M_\infty^{{\rm sub}, c}$, that is, for any covering $S$ of $U\in\SO b\(\SO p_{M_\infty}^{{\rm sub}, c}\)$ a sequence \[\xymatrix@M=5pt@R=0pt@C=20pt{ 0\ar@{->}[r] & \mathcal{F}(U)\ar@{->}[r]^-\varphi &\underset{V\in S}{\prod}\mathcal{F}(V)\ar@{->}[r]^-\psi &\underset{V', V''\in S}{\prod}\mathcal{F}(V'\cap V'')\\ {}& s \ar@{|->}[r] & \(s|_{V}\)_{V\in S} & {}\\ {}& {} & \(t_V\)_{V\in S} \ar@{|->}[r] & \(t_{V'}|_{V'\cap V''}-t_{V''}|_{V'\cap V''}\)_{V', V''\in S} }\] is exact. Since $U$ is relatively compact in $\che{M}$, by the definition of the Grothendieck topology on $M_\infty^{{\rm sub},\,c}$ there exists a finite subset $\{U_i\}_{i=1}^n$ of $S$ such that $U = \bigcup_{i=1}^nU_i$, and hence a sequence \[\xymatrix@M=5pt@R=0pt@C=30pt{ 0\ar@{->}[r] & \mathcal{F}(U)\ar@{->}[r]^-\alpha &\overset{n}{\underset{i=1}{\bigoplus}}\mathcal{F}(U_i)\ar@{->}[r]^-\beta &\underset{1\leq i < j \leq n}{\bigoplus}\mathcal{F}(U_{i}\cap U_j)\\ {}& s \ar@{|->}[r] & (s|_{U_i})_{i=1}^n & {}\\ {}& {} & (t_i)_{i=1}^n \ar@{|->}[r] & \(t_i|_{U_{i}\cap U_j}-t_j|_{U_{i}\cap U_j}\)_{1\leq i < j \leq n} }\] is exact, as already proved. Let $s\in \mathcal{F}(U)$ and assume that $\varphi(s) = 0$. Then we have $\(s|_V\)_{V\in S}=0$, and hence $\(s|_{U_i}\)_{i=1}^n = 0$. This implies that $\alpha(s)=0$. Since $\alpha$ is injective, we have $s = 0$ and hence $\varphi$ is injective. It is clear that $\psi\circ\varphi = 0$ and hence $\operatorname{Im}\varphi\subset \operatorname{Ker}\psi$. Let $\(t_V\)_{V\in S}\in\operatorname{Ker}\psi$. Then we have $0 = \psi\(\(t_V\)_{V\in S}\) = \(t_{V'}|_{V'\cap V''}-t_{V''}|_{V'\cap V''}\)_{V', V''\in S}$. In particular, we have $\(t_{U_i}|_{U_{i}\cap U_j}-t_{U_j}|_{U_{i}\cap U_j}\)_{1\leq i < j \leq n} = 0$.
This implies that $\beta\(\(t_{U_i}\)_{i=1}^n\) = 0$. Since $\(t_{U_i}\)_{i=1}^n\in\operatorname{Ker}\beta = \operatorname{Im}\alpha$, there exists $s\in\mathcal{F}(U)$ such that for any $i=1, \ldots, n$ one has $s|_{U_i} = t_{U_i}$. Let us prove that for any $V\in S\setminus\{U_i\ |\ i=1,\ldots, n\}$ one has $s|_V = t_V$. Since $\(t_V\)_{V\in S}\in\operatorname{Ker}\psi$, in particular, for any $i=1,\ldots, n$ we have $$(s|_V - t_V)|_{V\cap U_i} = (s|_V)|_{V\cap U_i} - t_V|_{V\cap U_i} = (s|_{U_i})|_{V\cap U_i} - t_V|_{V\cap U_i} = t_{U_i}|_{V\cap U_i} - t_V|_{V\cap U_i} = 0. $$ Moreover, the map $\mathcal{F}(V)\to\overset{n}{\underset{i=1}{\bigoplus}}\mathcal{F}(V\cap U_i)$ is injective, as already proved, and hence $s|_V = t_V$. This implies that $\varphi(s) = \(t_V\)_{V\in S}$ and we have $\operatorname{Im}\varphi = \operatorname{Ker}\psi$. This completes the proof. \end{proof} From now on, let us describe various functors for subanalytic sheaves. As already seen in Theorem \ref{thm2.8}, there exists a left exact functor \[(\cdot)^{+}\colon {\rm Psh}(\Bbbk_{M_\infty^{{\rm sub}}})\to{\rm Psh}(\Bbbk_{M_\infty^{{\rm sub}}})\] where for any $\mathcal{F}\in {\rm Psh}(\Bbbk_{M_\infty}^{\rm sub})$ and any $U\in\SO b\(\SO p_{M_\infty}^{\rm sub}\)$ we set $$\mathcal{F}^+(U) := \varinjlim_{S\in\SC ov(U)}\mathcal{F}(S)$$ and $\mathcal{F}(S)$ is the kernel of the map $$ \prod_{V\in S}\mathcal{F}(V)\rightarrow\prod_{V', V''\in S}\mathcal{F}(V'\cap V''), \hspace{17pt} \(t_V\)_{V\in S}\longmapsto (t_{V'}|_{V'\cap V''}-t_{V''}|_{V'\cap V''})_{V', V''\in S}.$$ Moreover, we have the exact functor \[(\cdot)^{++}\colon {\rm Psh}(\Bbbk_{M_\infty^{{\rm sub}}})\to\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})\] which is called the sheafification functor and is the left adjoint of the natural embedding functor $\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})\to {\rm Psh}(\Bbbk_{M_\infty^{{\rm sub}}})$. Note that there exists a natural morphism $\rho_{M_\infty}\colon M\to M_\infty^{\rm sub}$ of sites.
We sometimes write $\rho$ instead of $\rho_{M_\infty}$, for simplicity. Then we have the functors \begin{align*} \rho_{M_\infty\ast}&\colon\mathrm{Mod}(\Bbbk_{M})\to\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub}),\\ \rho_{M_\infty}^{-1}&\colon\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})\to\mathrm{Mod}(\Bbbk_{M}) \end{align*} which are defined by $$(\rho_{M_\infty\ast}\mathcal{F})(U) := \mathcal{F}(U)$$ for any $\mathcal{F}\in\mathrm{Mod}(\Bbbk_M)$ and any $U\in \SO b\(\SO p_{M_\infty}^{\rm sub}\)$, and by $$\rho_{M_\infty}^{-1}\mathcal{G} := \(\rho_{\rm pre}^{-1}\mathcal{G}\)^{++}$$ for any $\mathcal{G}\in\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})$. Here, the presheaf $\rho_{\rm pre}^{-1}\mathcal{G}$ is given by $\displaystyle\rho_{\rm pre}^{-1}\mathcal{G}(V) := \varinjlim_{V\subset U}\mathcal{G}(U)$ for any open subset $V$ of $M$, where $U$ ranges through the family $\SO b\(\SO p_{M_\infty}^{\rm sub}\)$. Note that $(\rho_{M_\infty}^{-1}, \rho_{M_\infty\ast})$ is an adjoint pair, the functor $\rho_{M_\infty\ast}$ is left exact and the functor $\rho_{M_\infty}^{-1}$ is exact, as already seen in Proposition \ref{prop2.12}. Note also that there exists a canonical isomorphism $\rho_{M_\infty}^{-1}\circ\rho_{M_\infty\ast}\overset{\sim}{\longrightarrow} {\rm id}$ of functors and an isomorphism $$\rho_{M_\infty\ast}{\mathcal{H}}om(\rho_{M_\infty}^{-1}\mathcal{F}, \mathcal{G}) \simeq {\mathcal{H}}om(\mathcal{F}, \rho_{M_\infty\ast}\mathcal{G})$$ in $\mathrm{Mod}(\Bbbk_{M})$ for any $\mathcal{F}\in\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})$ and any $\mathcal{G}\in\mathrm{Mod}(\Bbbk_M)$. Hence the functor $\rho_{M_\infty\ast}$ is fully faithful. A sheaf $\mathcal{F}\in\mathrm{Mod}(\Bbbk_M)$ is said to be $\mathbb{R}$-constructible on $M_\infty$ if $j_{M!}\mathcal{F}\in\mathrm{Mod}_{\mathbb{R}-c}(\Bbbk_{\che{M}})$ where $j_M\colon M\hookrightarrow \che{M}$ is the natural embedding.
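For a concrete feel for this definition, here is a standard illustration (the specific example is ours and is not taken from the surrounding text). Take $M = (0, 1)$ and $\che{M} = \mathbb{R}$, so that $M_\infty = ((0,1), \mathbb{R})$. Then the constant sheaf $\Bbbk_M$ is $\mathbb{R}$-constructible on $M_\infty$:

```latex
% Standard example (illustration only): the extension by zero j_{M!}\Bbbk_{(0,1)}
% is R-constructible on \che{M} = \mathbb{R}, being locally constant along the
% locally finite subanalytic stratification
\mathbb{R} \;=\; (-\infty, 0)\,\sqcup\,\{0\}\,\sqcup\,(0,1)\,\sqcup\,\{1\}\,\sqcup\,(1, +\infty),
% with stalk \Bbbk on the stratum (0,1) and 0 on the other strata, so that
% \Bbbk_M \in \mathrm{Mod}_{\mathbb{R}-c}(\Bbbk_{M_\infty}).
```

In the same way, $\Bbbk_U$ for $U\in\SO b\(\SO p_{M_\infty}^{\rm sub}\)$ is $\mathbb{R}$-constructible on $M_\infty$, since $U$ is subanalytic in $\che{M}$.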
Let us denote by $\mathrm{Mod}_{\mathbb{R}-c}(\Bbbk_{M_\infty})$ the category of $\mathbb{R}$-constructible sheaves on $M_\infty$. Note that the restriction of $\rho_{M_\infty\ast}$ to $\mathrm{Mod}_{\mathbb{R}-c}(\Bbbk_{M_\infty})$ is exact. We shall denote this exact functor by $$\rho_{M_\infty\ast}^{\mathbb{R}-c}\colon \mathrm{Mod}_{\mathbb{R}-c}(\Bbbk_{M_\infty})\to \mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub}).$$ By the same argument as in \cite[Prop.\:1.1.14]{Pre08}, we have the left adjoint functor $$\rho_{M_\infty!}\colon\mathrm{Mod}(\Bbbk_M)\to\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})$$ of $\rho_{M_\infty}^{-1}$, which is given as follows: for any $\mathcal{F}\in\mathrm{Mod}(\Bbbk_M)$, $\rho_{M_\infty!}\mathcal{F}$ is the sheaf associated to the presheaf $U\mapsto \Gamma(\var{U}; \mathcal{F}|_{\var{U}})$, where $U\in\SO b\(\SO p_{M_\infty}^{\rm sub}\)$ and $\var{U}$ is the closure of $U$ in $M$. Note that the functor $\rho_{M_\infty!}$ is exact and fully faithful, and there exist a canonical isomorphism ${\rm id}\overset{\sim}{\longrightarrow}\rho_{M_\infty}^{-1}\circ\rho_{M_\infty!}$ of functors and an isomorphism $$\rho_{M_\infty}^{-1}{\mathcal{H}}om(\rho_{M_\infty!}\mathcal{F}, \mathcal{G})\simeq{\mathcal{H}}om(\mathcal{F}, \rho_{M_\infty}^{-1}\mathcal{G})$$ in $\mathrm{Mod}(\Bbbk_M)$ for any $\mathcal{F}\in\mathrm{Mod}(\Bbbk_M)$ and any $\mathcal{G}\in\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})$. Let $f\colon M_\infty\to N_\infty$ be a morphism of real analytic bordered spaces. Then we obtain a morphism of sites from $M_\infty^{\rm sub}$ to $N_\infty^{\rm sub}$ associated with the map $\SO p_{N_\infty}^{\rm sub}\to\SO p_{M_\infty}^{\rm sub},\ V\mapsto f^{-1}(V)$, which we denote by the same symbol $f\colon M_\infty^{\rm sub}\to N_\infty^{\rm sub}$.
As already seen in Definition \ref{def2.11} and Proposition \ref{prop2.12}, there exist the direct image functor and the inverse image functor \begin{align*} f_\ast&\colon\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})\to \mathrm{Mod}(\Bbbk_{N_\infty}^{\rm sub}),\\ f^{-1}&\colon\mathrm{Mod}(\Bbbk_{N_\infty}^{\rm sub})\to\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub}) \end{align*} which are defined by $$(f_\ast\mathcal{F})(V):= \mathcal{F}(f^{-1}(V)) \simeq\mathrm{Hom}_{\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})}(\Bbbk_{f^{-1}(V)}, \mathcal{F})$$ for any $\mathcal{F}\in\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})$ and any $V\in\SO b\(\SO p_{N_\infty}^{\rm sub}\)$, and by $$f^{-1}\mathcal{G} := (f^{-1}_{\rm pre}\mathcal{G})^{++}$$ for any $\mathcal{G} \in \mathrm{Mod}(\Bbbk_{N_\infty}^{\rm sub})$. Here, the presheaf $f^{-1}_{\rm pre}\mathcal{G}$ is given by $$(f^{-1}_{\rm pre}\mathcal{G})(U) := \varinjlim_{U\subset f^{-1}(V)}\mathcal{G}(V)$$ for any $U\in\SO b\(\SO p_{M_\infty}^{\rm sub}\)$, where $V$ ranges through the family $\{W\in \SO b\(\SO p_{N_\infty}^{\rm sub}\)\ |\ U\subset f^{-1}(W)\}$. As already seen in Proposition \ref{prop2.12}, $(f^{-1}, f_\ast)$ is an adjoint pair, the direct image functor $f_\ast$ is left exact, and the inverse image functor $f^{-1}$ is exact. For any $U\in\SO b\(\SO p_{M_\infty}^{\rm sub}\)$, we obtain a real analytic bordered space $(U, \che{M})$. We shall denote this real analytic bordered space by $U_\infty = (U, \che{M})$\footnote{It is clear that $(U, \var{U}) \simeq (U, \che{M})$ as bordered spaces.} and denote by $i_{U_\infty}\colon U_\infty\to M_\infty$ the morphism of real analytic bordered spaces induced by the natural embedding $U\hookrightarrow M$. Hence, we obtain a morphism $i_{U_\infty}\colon U_\infty^{\rm sub}\to M_\infty^{\rm sub}$ of sites.
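As a sanity check on the defining formula for the direct image (writing ${\rm pt}$ for the bordered space $(\{{\rm pt}\}, \{{\rm pt}\})$, a notational convention we introduce here only for illustration), the direct image along the canonical morphism $a_{M_\infty}\colon M_\infty\to{\rm pt}$ computes global sections:

```latex
% Since M itself belongs to \SO b\(\SO p_{M_\infty}^{\rm sub}\), the formula
% (f_* F)(V) = F(f^{-1}(V)) applied to f = a_{M_\infty} and V = {pt} gives
(a_{M_\infty\ast}\mathcal{F})(\{{\rm pt}\})
  \;=\; \mathcal{F}\bigl(a_{M_\infty}^{-1}(\{{\rm pt}\})\bigr)
  \;=\; \mathcal{F}(M)
  \;=\; \Gamma(M;\,\mathcal{F}).
```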
Moreover, we have a morphism from $M_\infty^{\rm sub}$ to $U_\infty^{\rm sub}$ induced by the map $\SO p_{U_\infty}^{\rm sub}\to \SO p_{M_\infty}^{\rm sub}, V\mapsto V$, and its inverse image functor, denoted by $i_{U_\infty!!}$\footnote{ We shall write $i_{U_\infty!!}$ instead of $i_{U_\infty!}$ in the notation of Subsection \ref{subsec2.2}.}, is left adjoint to the functor $i_{U_\infty}^{-1}$; namely, for any $\mathcal{F}\in\mathrm{Mod}(\Bbbk_{U_\infty}^{\rm sub})$ and any $\mathcal{G}\in\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})$, we have $$\mathrm{Hom}_{\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})}(i_{U_\infty!!}\mathcal{F}, \mathcal{G}) \simeq\mathrm{Hom}_{\mathrm{Mod}(\Bbbk_{U_\infty}^{\rm sub})}(\mathcal{F}, i_{U_\infty}^{-1}\mathcal{G}).$$ As already seen in Subsection \ref{subsec2.2}, there exist functors \begin{align*} (\cdot)_{U_\infty} := i_{U_\infty!!}i_{U_\infty}^{-1} &\colon\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})\to\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub}),\\ \Gamma_{U_\infty} := i_{U_\infty\ast}i_{U_\infty}^{-1}&\colon\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})\to\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub}),\\ \Gamma(U;\ \cdot\ )&\colon\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})\to\mathrm{Mod}(\Bbbk).
\end{align*} As already seen in Subsection \ref{subsec2.2}, we have the tensor product functor and the internal hom functor\footnote{ In this paper, we shall write $\mathcal{I}hom^{\rm sub}$ for the internal hom functor on $\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})$, that is, ${\mathcal{H}}om_{\Bbbk_{M_\infty^{{\rm sub}}}}(\cdot, \cdot)$ in the notation of Subsection \ref{subsec2.2}.} \begin{align*} (\cdot)\otimes(\cdot)&\colon \mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})\times \mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})\to\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub}),\\ \mathcal{I}hom^{\rm sub}(\cdot, \cdot)&\colon \mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\times \mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})\to\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub}) \end{align*} which are defined by \begin{align*} \mathcal{F}\otimes\mathcal{G} &:= \(\mathcal{F}\overset{\rm pre}{\otimes}\mathcal{G}\)^{++},\\ \(\mathcal{I}hom^{\rm sub}(\mathcal{F}, \mathcal{G})\)(U) &:= \mathrm{Hom}_{\mathrm{Mod}(\Bbbk_{U_\infty}^{\rm sub})}(i_{U_\infty}^{-1}\mathcal{F},\,i_{U_\infty}^{-1}\mathcal{G}) \end{align*} for any $\mathcal{F}, \mathcal{G}\in\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})$ and any $U\in\SO b\(\SO p_{M_\infty}^{\rm sub}\)$. Here, the presheaf $\mathcal{F}\overset{\rm pre}{\otimes}\mathcal{G}$ is given by $$\(\mathcal{F}\overset{\rm pre}{\otimes}\mathcal{G}\)(U) := \mathcal{F}(U)\otimes\mathcal{G}(U)$$ for any $U\in\SO b\(\SO p_{M_\infty}^{\rm sub}\)$. Moreover, we have the external hom functors \begin{align*} {\mathcal{H}}om^{\rm sub}(\cdot, \cdot) := \rho_{M_\infty}^{-1}\mathcal{I}hom^{\rm sub} &\colon \mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\times \mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})\to\mathrm{Mod}(\Bbbk_M),\\ \mathrm{Hom}_{\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})}(\cdot, \cdot)&\colon \mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\times \mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})\to\mathrm{Mod}(\Bbbk).
\end{align*} For a morphism $f\colon M_\infty\to N_\infty$ of real analytic bordered spaces, the proper direct image functor $$f_{!!}\colon\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})\to \mathrm{Mod}(\Bbbk_{N_\infty}^{\rm sub})$$ is given by $$(f_{!!}\mathcal{F})(V) := \varinjlim_{U}\mathrm{Hom}_{\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})}(\Bbbk_{f^{-1}(V)}, \mathcal{F}_{U_\infty})$$ for any $V\in\SO b\(\SO p_{N_\infty}^{{\rm sub},c}\)$. Here, $U$ ranges through the family of objects of $\SO p_{M_\infty}^{{\rm sub},c}$ such that $f^{-1}(V)\cap\var{U}\to V$ is proper, where $\var{U}$ is the closure of $U$ in $M$. Note that the functor $f_{!!}$ is left exact, and if $f$ is proper then we have $f_\ast\simeq f_{!!}$. These functors have many properties similar to those of classical sheaves; we shall not spell them out here. We shall write $\mathbf{D}^\ast(\Bbbk_{M_\infty}^{\rm sub})$ ($\ast = {\rm ub}, +, -, {\rm b}$) instead of $\mathbf{D}^\ast\(\mathrm{Mod}\(\Bbbk_{M_\infty}^{{\rm sub}}\)\)$ and denote by ${\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty})$ the full triangulated subcategory of ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_M)$ consisting of objects whose cohomologies are contained in $\mathrm{Mod}_{\mathbb{R}-c}(\Bbbk_{M_\infty})$. Recall that the abelian category $\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})$ admits enough injectives, see Theorem \ref{thm2.9} (6).
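The properness condition in the definition of $f_{!!}$ is what separates it from $f_\ast$. The classical picture (ordinary sheaves on real analytic manifolds, recalled here only for orientation) already shows the difference for the non-proper open embedding $j\colon(0,1)\hookrightarrow\mathbb{R}$:

```latex
% Classical (non-subanalytic) stalk computation: extension by zero versus
% direct image for the open embedding j : (0,1) -> \mathbb{R}.
(j_{!}\Bbbk_{(0,1)})_x \simeq
  \begin{cases} \Bbbk & (x\in(0,1)),\\ 0 & (\text{otherwise}), \end{cases}
\qquad
(j_{\ast}\Bbbk_{(0,1)})_x \simeq
  \begin{cases} \Bbbk & (x\in[0,1]),\\ 0 & (\text{otherwise}). \end{cases}
```

For proper $f$ the two direct images agree, consistently with the isomorphism $f_\ast\simeq f_{!!}$ noted above.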
For a morphism $f\colon M_\infty\to N_\infty$ of real analytic bordered spaces and $U\in\SO b\(\SO p_{M_\infty}^{\rm sub}\)$, there exist (derived) functors: \begin{align*} \mathbf{R}\rho_{M_\infty\ast}&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M)\hookrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}),\\ \rho_{M_\infty\ast}^{\mathbb{R}-c}&\colon{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty})\hookrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}),\\ \rho_{M_\infty}^{-1}&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M),\\ \rho_{M_\infty!}&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M)\hookrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}),\\ (\cdot)^{++}&\colon{\mathbf{D}}^{\mathrm{b}}({\rm Psh}(\Bbbk_{M_\infty}^{\rm sub}))\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}),\\ \mathbf{R} f_\ast&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\longrightarrow {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub}),\\ f^{-1}&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}),\\ \mathbf{R} f_{!!}&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\longrightarrow {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub}),\\ (\cdot)\otimes(\cdot)&\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\times {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}),\\ {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\cdot, \cdot)&\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\times {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) \longrightarrow\mathbf{D}^+(\Bbbk_{M_\infty}^{\rm sub}),\\ {\mathbf{R}}{\mathcal{H}}om^{\rm sub}(\cdot, \cdot)&\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\times 
{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\longrightarrow\mathbf{D}^+(\Bbbk_{M_\infty}),\\ {\mathbf{R}}\mathrm{Hom}(\cdot, \cdot)&\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\times {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\longrightarrow\mathbf{D}^+(\Bbbk),\\ (\cdot)_U&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}),\\ \mathbf{R}\Gamma_U&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}),\\ \mathbf{R}\Gamma(U;\ \cdot\ )&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk). \end{align*} Note that the functor $\mathbf{R} f_{!!}$ admits a right adjoint functor, see e.g. \cite[\S 3.6]{Kas16}: $$f^!\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})\longrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}).$$ Note also that if $f\colon M\to N$ is a topological submersion, then $f^!(\cdot)\simeq \rho_{M_\infty\ast}\omega_{M/N}\otimes f^{-1}(\cdot)$, where $\omega_{M/N}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M)$ is the relative dualizing complex which is defined in \cite[Def.\:3.1.16 (i)]{KS90}. Moreover, these functors have many properties similar to those of classical (subanalytic) sheaves. \begin{proposition}\label{prop3.4} Let $f\colon M_\infty\to N_\infty$ be a morphism of real analytic bordered spaces and $\mathcal{F}, \mathcal{F}_1, \mathcal{F}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}), \mathcal{G}, \mathcal{G}_1, \mathcal{G}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub}), \mathcal{K}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M), \mathcal{L}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_N), \mathcal{J}\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty})$.
\begin{itemize} \item[\rm (1)] $\mathbf{R} \rho_{M_\infty\ast}{\mathbf{R}}{\mathcal{H}}om(\rho_{M_\infty}^{-1}\mathcal{F}, \mathcal{K}) \simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}, \mathbf{R} \rho_{M_\infty\ast}\mathcal{K}),\\[5pt] {\mathbf{R}}{\mathcal{H}}om^{\rm sub}(\rho_{M_\infty!}\mathcal{K}, \mathcal{F}) \simeq {\mathbf{R}}{\mathcal{H}}om(\mathcal{K}, \rho_{M_\infty}^{-1}\mathcal{F}),$ \item[\rm (2)] ${\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathbf{R} f_{!!}\mathcal{F}, \mathcal{G}) \simeq \mathbf{R} f_\ast{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}, f^!\mathcal{G}),\\[5pt] \mathbf{R} f_\ast{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(f^{-1}\mathcal{G}, \mathcal{F}) \simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{G}, \mathbf{R} f_\ast\mathcal{F}),\\[5pt] {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}_1\otimes\mathcal{F}_2, \mathcal{F}) \simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\left(\mathcal{F}_1, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}_2, \mathcal{F})\right),$ \item[\rm (3)] $f^{-1}(\mathcal{F}_1\otimes\mathcal{F}_2)\simeq f^{-1}\mathcal{F}_1\otimes f^{-1}\mathcal{F}_2,\\[5pt] \mathbf{R} f_{!!}\left(\mathcal{F}\otimes f^{-1}\mathcal{G}\right)\simeq \mathbf{R} f_{!!}\mathcal{F}\otimes \mathcal{G},\\[5pt] f^!{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{G}_1, \mathcal{G}_2)\simeq{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(f^{-1}\mathcal{G}_1, f^!\mathcal{G}_2),$ \item[\rm (4)] for a cartesian diagram \[\xymatrix@M=5pt@R=20pt@C=40pt{ M'_\infty\ar@{->}[r]^-{f'}\ar@{->}[d]_-{g'} & N'_\infty\ar@{->}[d]^-{g}\\ M_\infty\ar@{->}[r]_-{f} & N_\infty}\] we have $g^{-1}\mathbf{R} f_{!!}\mathcal{F}\simeq \mathbf{R} f'_{!!}g'^{-1}\mathcal{F},\hspace{3pt} g^{!}\mathbf{R} f_{\ast}\mathcal{F}\simeq \mathbf{R} f'_{\ast}g'^{!}\mathcal{F}$, \item[\rm (5)] $f^!(\mathcal{G}\otimes\rho_{N_\infty!}\mathcal{L})\simeq f^!\mathcal{G}\otimes \rho_{M_\infty!}f^{-1}\mathcal{L},\\[5pt] {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{J},
\mathcal{F})\otimes\rho_{M_\infty!}\mathcal{K}\simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{J}, \mathcal{F}\otimes \rho_{M_\infty!}\mathcal{K})$, \item[\rm (6)] the commutativity of the various functors is described in the following table: \begin{table}[h] \begin{equation*} \begin{tabular}{l||c|c|c|c|c} {} & $\otimes$ & $f^{-1}$ & $\mathbf{R} f_\ast$ & $f^!$ & $\mathbf{R} f_{!!}$\\ \hline \hline $\overset{}{\underset{}{\mathbf{R}\rho_{\ast}}}$ & $\times$ & $\times$ & $\circ$ & $\circ$ & $\times$ \\ \hline $\overset{}{\underset{}{\rho_{\ast}^{\mathbb{R}-c}}}$ & $\circ$ & $\circ$ & $\circ$ & $\circ$ & $\times$ \\ \hline $\overset{}{\underset{}{\rho^{-1}}}$ & $\circ$ & $\circ$ & $\circ$ & $\times$ & $\circ$ \\ \hline $\overset{}{\underset{}{\rho_{!}}}$ & $\circ$ & $\circ$ & $\times$ & $\times$ & $\times$ \\ \hline \end{tabular} \end{equation*} \end{table} \noindent where $``\circ"$ means that the functors commute and $``\times"$ that they do not. \end{itemize} \end{proposition} Since the proof of this proposition is similar to that for classical sheaves, we omit it. As we have already seen in Subsection \ref{subsec2.2}, we can consider subanalytic sheaves with ring actions.
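Before moving on, a worked special case of the projection formula in Proposition \ref{prop3.4} (3) may be helpful. It uses the identifications $f^{-1}\Bbbk_{V}\simeq\Bbbk_{f^{-1}(V)}$ and $\mathcal{F}\otimes\Bbbk_{V}\simeq\mathcal{F}_{V_\infty}$, which we take as standard but which are not spelled out above. For $V\in\SO b\(\SO p_{N_\infty}^{\rm sub}\)$:

```latex
% Projection formula with G = \Bbbk_V: restricting-extending along f^{-1}(V)
% upstairs corresponds to restricting-extending along V downstairs.
\mathbf{R} f_{!!}\bigl(\mathcal{F}_{f^{-1}(V)_\infty}\bigr)
  \;\simeq\; \mathbf{R} f_{!!}\bigl(\mathcal{F}\otimes f^{-1}\Bbbk_{V}\bigr)
  \;\simeq\; \bigl(\mathbf{R} f_{!!}\mathcal{F}\bigr)\otimes\Bbbk_{V}
  \;\simeq\; \bigl(\mathbf{R} f_{!!}\mathcal{F}\bigr)_{V_\infty}.
```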
We just remark that $\Bbbk_{M_\infty^{\rm sub}}$-algebras and modules over them can be characterized similarly to Remark \ref{rem2.26} (1), (2), and there exist functors \begin{align*} \mathbf{R} f_\ast&\colon\mathbf{D}(f^{-1}\mathcal{R})\longrightarrow \mathbf{D}(\mathcal{R}),\\ f^{-1}&\colon\mathbf{D}(\mathcal{R})\longrightarrow\mathbf{D}(f^{-1}\mathcal{R}),\\ (\cdot)\underset{\mathcal{R}}{\overset{\mathbf{L}}{\otimes}}(\cdot)&\colon \mathbf{D}(\mathcal{R}^{\mbox{\scriptsize op}})\times \mathbf{D}(\mathcal{R})\longrightarrow\mathbf{D}(\Bbbk_{M_\infty}^{\rm sub}),\\ {\mathbf{R}}{\mathcal{I}}hom_{\mathcal{R}}^{\rm sub}(\cdot, \cdot)&\colon \mathbf{D}(\mathcal{R})^{\mbox{\scriptsize op}}\times \mathbf{D}(\mathcal{R})\longrightarrow\mathbf{D}(\Bbbk_{M_\infty}^{\rm sub}),\\ {\mathbf{R}}{\mathcal{H}}om_{\mathcal{R}}^{\rm sub}(\cdot, \cdot) := \rho_{M_\infty}^{-1}{\mathbf{R}}{\mathcal{I}}hom_{\mathcal{R}}^{\rm sub}&\colon \mathbf{D}(\mathcal{R})^{\mbox{\scriptsize op}}\times \mathbf{D}(\mathcal{R})\longrightarrow\mathbf{D}(\Bbbk_M),\\ {\mathbf{R}}\mathrm{Hom}_{\mathcal{R}}(\cdot, \cdot)&\colon \mathbf{D}(\mathcal{R})^{\mbox{\scriptsize op}}\times \mathbf{D}(\mathcal{R})\longrightarrow\mathbf{D}(\Bbbk) \end{align*} where $\mathcal{R}$ is a $\Bbbk_{M_\infty^{\rm sub}}$-algebra. In this paper, we shall write $\mathcal{I}hom_{\mathcal{R}}^{\rm sub}$ for the internal hom functor on $\mathrm{Mod}(\mathcal{R})$, that is, ${\mathcal{H}}om_{\mathcal{R}}(\cdot, \cdot)$ in the notation of Subsection \ref{subsec2.2}. At the end of this subsection, we shall describe a relation between ind-sheaves on $M_\infty$ and subanalytic sheaves on $M_\infty$. Recall that $j_{M_\infty}\colon M_\infty\to \che{M}$ is the morphism of real analytic bordered spaces associated with the natural embedding $j_M\colon M\hookrightarrow\che{M}$.
Then for any $\mathcal{F}\in\mathrm{Mod}(\Bbbk_{M_\infty}^{\rm sub})$ and any $\mathcal{G}\in\mathrm{Mod}(\Bbbk_{\che{M}}^{\rm sub})$ we have \begin{align*} j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty\ast}\mathcal{F}&\overset{\sim}{\longrightarrow} \mathcal{F},\\ j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty!!}\mathcal{F}&\overset{\sim}{\longleftarrow} \mathcal{F},\\ \mathbf{R} j_{M_\infty!!}j_{M_\infty}^{-1}\mathcal{G} &\simeq \rho_{\che{M}\ast}\Bbbk_M\otimes \mathcal{G},\\ \mathbf{R} j_{M_\infty\ast}j_{M_\infty}^{-1}\mathcal{G} &\simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\rho_{\che{M}\ast}\Bbbk_M, \mathcal{G}), \end{align*} and hence the functors $j_{M_\infty}^{-1}, \mathbf{R} j_{M_\infty!!}, \mathbf{R} j_{M_\infty\ast}$ induce an equivalence of categories: \begin{align*} \tl{j}\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) \overset{\sim}{\longrightarrow} &\,\{\mathcal{F}\in {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{\che{M}}^{\rm sub})\ |\ \rho_{\che{M}\ast}\Bbbk_M\otimes \mathcal{F}\overset{\sim}{\longrightarrow} \mathcal{F}\}\\ \simeq\ &\,\{\mathcal{F}\in {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{\che{M}}^{\rm sub})\ |\ \mathcal{F}\overset{\sim}{\longrightarrow}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\rho_{\che{M}\ast}\Bbbk_M, \mathcal{F}\) \}. \end{align*} Recall that the category of ind-sheaves on $M_\infty$ is defined by \begin{align*} {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}) &:= {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})/{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}\backslash M}) \end{align*} and there exist (derived) functors \begin{align*} I_{\che{M}}&\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{\che{M}}^{\rm sub})\to{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}}),\\ \mathbf{R} J_{\che{M}}&\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})\to{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{\che{M}}^{\rm sub}).
\end{align*} Let us consider the following functors: \begin{align*} I_{M_\infty}&\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}), \hspace{7pt} \mathcal{F}\mapsto \mathbf{q} I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F},\\ \mathbf{R} J_{M_\infty}&\colon{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}), \hspace{7pt} F\mapsto j_{M_\infty}^{-1}\mathbf{R} J_{\che{M}}\mathbf{R} j_{M_\infty\ast}F, \end{align*} where $\mathbf{q}\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})\to{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ is the quotient functor. We denote by ${\mathbf{D}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_{M_\infty})$ the full triangulated subcategory of ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ consisting of objects $F$ such that $\mathbf{R} j_{M_\infty!!}F\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$. Then the following lemma follows from \cite[Lem.\:7.1.3]{KS01}. \begin{lemma}\label{lem3.5} Let $f\colon M_\infty\to N_\infty$ be a morphism of real analytic bordered spaces associated with a morphism $\che{f}\colon\che{M}\to\che{N}$ of real analytic manifolds.
The functors below are well defined: \begin{itemize} \item[\rm (1)] $\iota_{M_\infty}\colon {\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_M)\to{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$, \item[\rm (2)] $\beta_{M_\infty}\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M)\to{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$, \item[\rm (3)] $(\cdot)\otimes(\cdot)\colon {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})\times {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty}) \to{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$, \item[\rm (4)] ${\mathbf{R}}{\mathcal{I}}hom(\iota_{M_\infty}(\cdot),\ \cdot)\colon {\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty})^{{\mbox{\scriptsize op}}}\times {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty}) \to{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$, \item[\rm (5)] $f^{-1}\colon{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})\to{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$, \item[\rm (6)] $\mathbf{R} f_{!!}\colon{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})\to{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})$, \item[\rm (7)] $f^{!}\colon{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})\to{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. \end{itemize} \end{lemma} \begin{proof} (1) Let $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M})$. Then we have $j_{M!}\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{\che{M}})$. Note that there exists an isomorphism $\mathbf{R} j_{M_\infty!!}\iota_{M_\infty}\mathcal{F} \simeq \iota_{\che{M}}j_{M!}\mathcal{F}$ in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})$. 
Since the functor $\iota_{\che{M}}\colon{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{\che{M}})\to{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$ is well defined, we have $\mathbf{R} j_{M_\infty!!}\iota_{M_\infty}\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$. This implies that $\iota_{M_\infty}\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. \bigskip \noindent (2) Let $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M})$. Then we have $j_{M!}\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{\che{M}})$. Note that there exists an isomorphism $\mathbf{R} j_{M_\infty!!}\beta_{M_\infty}\mathcal{F} \simeq \beta_{\che{M}}j_{M!}\mathcal{F}$ in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})$. By \cite[Lem.\:7.1.3 (v)]{KS01}, we have $\beta_{\che{M}}j_{M!}\mathcal{F}\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$, and hence $\mathbf{R} j_{M_\infty!!}\beta_{M_\infty}\mathcal{F}\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$. This implies that $\beta_{M_\infty}\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. \bigskip \noindent (3) Let $F_1, F_2\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. Then we have $\mathbf{R} j_{M_\infty!!}F_1, \mathbf{R} j_{M_\infty!!}F_2\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$. Note that there exists an isomorphism $\mathbf{R} j_{M_\infty!!}(F_1\otimes F_2)\simeq\mathbf{R} j_{M_\infty!!}F_1\otimes\mathbf{R} j_{M_\infty!!}F_2$. By \cite[Lem.\:7.1.3 (iii)]{KS01}, we have $\mathbf{R} j_{M_\infty!!}F_1\otimes\mathbf{R} j_{M_\infty!!}F_2\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$, and hence $\mathbf{R} j_{M_\infty!!}(F_1\otimes F_2)\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$.
This implies that $F_1\otimes F_2\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. \bigskip \noindent (4) Let $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty})$ and $G\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. Then we have $j_{M!}\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{\che{M}})$ and $\mathbf{R} j_{M_\infty!!}G\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$. Note that there exist isomorphisms in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})$ \begin{align*} \mathbf{R} j_{M_\infty!!}{\mathbf{R}}{\mathcal{I}}hom(\iota_{M_\infty}\mathcal{F}, G) &\simeq \mathbf{R} j_{M_\infty!!}j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty\ast} {\mathbf{R}}{\mathcal{I}}hom(\iota_{M_\infty}\mathcal{F}, j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty!!}G)\\ &\simeq \mathbf{R} j_{M_\infty!!}j_{M_\infty}^{-1} {\mathbf{R}}{\mathcal{I}}hom(\mathbf{R} j_{M_\infty!!}\iota_{M_\infty}\mathcal{F}, \mathbf{R} j_{M_\infty!!}G)\\ &\simeq \iota_{\che{M}}\Bbbk_M\otimes {\mathbf{R}}{\mathcal{I}}hom(\iota_{\che{M}}j_{M!}\mathcal{F}, \mathbf{R} j_{M_\infty!!}G), \end{align*} where the first isomorphism follows from $j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty\ast}\simeq{\rm id}$ and ${\rm id}\simeq j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty!!}$, in the second isomorphism we used the adjointness of $(\mathbf{R} j_{M_\infty!!}, j_{M_\infty}^{-1})$ and the last isomorphism follows from $\mathbf{R} j_{M_\infty!!}j_{M_\infty}^{-1}(\cdot)\simeq\iota_{\che{M}}\Bbbk_M\otimes(\cdot)$. By \cite[Lem.\:7.1.3 (iii), (iv)]{KS01}, we have $\iota_{\che{M}}\Bbbk_M\otimes {\mathbf{R}}{\mathcal{I}}hom(\iota_{\che{M}}j_{M!}\mathcal{F}, \mathbf{R} j_{M_\infty!!}G)\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$, and hence $\mathbf{R} j_{M_\infty!!}{\mathbf{R}}{\mathcal{I}}hom(\iota_{M_\infty}\mathcal{F}, G)\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$.
This implies that ${\mathbf{R}}{\mathcal{I}}hom(\iota_{M_\infty}\mathcal{F}, G)\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. \bigskip \noindent (5) Let $G\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})$. Then we have $\mathbf{R} j_{N_\infty!!}G\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{N}})$. Note that there exists an isomorphism $\mathbf{R} j_{M_\infty!!}f^{-1}G \simeq \che{f}^{-1}\mathbf{R} j_{N_\infty!!}G$ in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})$. Since the functor $\che{f}^{-1}\colon {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{N}})\to {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$ is well defined, we have $\mathbf{R} j_{M_\infty!!}f^{-1}G\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$. This implies that $f^{-1}G\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. \bigskip \noindent (6) Let $F\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. Then we have $\mathbf{R} j_{M_\infty!!}F\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$. Note that there exists an isomorphism $\mathbf{R} j_{N_\infty!!}\mathbf{R} f_{!!}F \simeq \che{f}_{!!}\mathbf{R} j_{M_\infty!!}F$ in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{N}})$. By \cite[Lem.\:7.1.3 (i)]{KS01}, we have $\mathbf{R} j_{N_\infty!!}\mathbf{R} f_{!!}F\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{N}})$. This implies that $\mathbf{R} f_{!!}F\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})$. \bigskip \noindent (7) Let $G\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})$. Then we have $\mathbf{R} j_{N_\infty!!}G\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{N}})$.
Note that there exist isomorphisms in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})$ \begin{align*} \mathbf{R} j_{M_\infty!!}f^{!}G \simeq \mathbf{R} j_{M_\infty!!}j_{M_\infty}^{-1}\che{f}^{!}\mathbf{R} j_{N_\infty!!}G \simeq \iota_{\che{M}}\Bbbk_M\otimes \che{f}^{!}\mathbf{R} j_{N_\infty!!}G \end{align*} where in the last isomorphism we used $\mathbf{R} j_{M_\infty!!}j_{M_\infty}^{-1}\simeq\iota_{\che{M}}\Bbbk_M\otimes(\cdot)$. By \cite[Lem.\:7.1.3 (ii), (iii)]{KS01} and the fact that $\iota_{\che{M}}\Bbbk_M\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$, we have $\iota_{\che{M}}\Bbbk_M\otimes \che{f}^{!}\mathbf{R} j_{N_\infty!!}G\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$, and hence $\mathbf{R} j_{M_\infty!!}f^{!}G\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$. This implies that $f^{!}G\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. \end{proof} Let us now describe the relation between ind-sheaves on $M_\infty$ and subanalytic sheaves on $M_\infty$. \begin{proposition}\label{prop3.6} Let $M_\infty = (M, \che{M})$ be a real analytic bordered space. Then we have \begin{itemize} \item[\rm (1)] the pair $(I_{M_\infty}, \mathbf{R} J_{M_\infty})$ is an adjoint pair and there exists a canonical isomorphism ${\rm id}\overset{\sim}{\longrightarrow}\mathbf{R} J_{M_\infty}\circ I_{M_\infty}$, \item[\rm (2)] there exists an equivalence of triangulated categories: \[\xymatrix@M=7pt@C=45pt{ {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\ar@<0.8ex>@{->}[r]^-{I_{M_\infty}}_-\sim & {\mathbf{D}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_{M_\infty}) \ar@<0.8ex>@{->}[l]^-{\mathbf{R} J_{M_\infty}}. }\] \end{itemize} \end{proposition} \begin{proof} (1) Let $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and $G\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$.
Then we have \begin{align*} \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}(I_{M_\infty}\mathcal{F}, G) &= \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}(\mathbf{q} I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}, G)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})}(I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}, \mathbf{R} j_{M_\infty\ast}G)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{\che{M}}^{\rm sub})}(\mathbf{R} j_{M_\infty!!}\mathcal{F}, \mathbf{R} J_{\che{M}}\mathbf{R} j_{M_\infty\ast}G)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}(\mathcal{F}, j_{M_\infty}^{-1}\mathbf{R} J_{\che{M}}\mathbf{R} j_{M_\infty\ast}G)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}(\mathcal{F}, \mathbf{R} J_{M_\infty}G), \end{align*} where in the first isomorphism we used the fact that $\mathbf{q}=j_{M_\infty}^{-1}$ and the adjointness of $(j_{M_\infty}^{-1}, \mathbf{R} j_{M_\infty\ast})$, the second isomorphism follows from the adjointness of $(I_{\che{M}}, \mathbf{R} J_{\che{M}})$ and the third isomorphism follows from the adjointness of $(\mathbf{R} j_{M_\infty!!}, j_{M_\infty}^{-1})$. This implies that the pair $(I_{M_\infty}, \mathbf{R} J_{M_\infty})$ is an adjoint pair. Hence, there exists a natural morphism ${\rm id}\to \mathbf{R} J_{M_\infty}\circ I_{M_\infty}$ of functors. 
Moreover, for any $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, we have isomorphisms in ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ \begin{align*} (\mathbf{R} J_{M_\infty}\circ I_{M_\infty})(\mathcal{F}) &\simeq j_{M_\infty}^{-1}\mathbf{R} J_{\che{M}}\mathbf{R} j_{M_\infty\ast}\mathbf{q} I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}\\ &\simeq j_{M_\infty}^{-1}\mathbf{R} J_{\che{M}}\mathbf{R} j_{M_\infty\ast}j_{M_\infty}^{-1} I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}\\ &\simeq j_{M_\infty}^{-1}\mathbf{R} J_{\che{M}}{\mathbf{R}}{\mathcal{I}}hom(\iota_{\che{M}}\Bbbk_{M}, I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F})\\ &\simeq j_{M_\infty}^{-1}\mathbf{R} J_{\che{M}}{\mathbf{R}}{\mathcal{I}}hom(I_{\che{M}}\rho_{\che{M}}\Bbbk_{M}, I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F})\\ &\simeq j_{M_\infty}^{-1}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\rho_{\che{M}}\Bbbk_{M}, \mathbf{R} J_{\che{M}}I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F})\\ &\simeq j_{M_\infty}^{-1}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\rho_{\che{M}}\Bbbk_{M}, \mathbf{R} j_{M_\infty!!}\mathcal{F})\\ &\simeq j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty\ast}j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty!!}\mathcal{F}\\ &\simeq\mathcal{F} \end{align*} where the second isomorphism follows from $\mathbf{q} = j_{M_\infty}^{-1}$, in the third isomorphism we used the fact that $\mathbf{R} j_{M_\infty\ast}j_{M_\infty}^{-1}(\cdot)\simeq{\mathbf{R}}{\mathcal{I}}hom(\iota_{\che{M}}\Bbbk_M, \cdot)$, the fourth isomorphism follows from $I_{\che{M}}\circ\rho_{\che{M}}^{\mathbb{R}-c} = \iota_{\che{M}}|_{\mathrm{Mod}_{\mathbb{R}-c}(\Bbbk_{\che{M}})}$, the fifth isomorphism follows from the adjointness of $(I_{\che{M}}, \mathbf{R} J_{\che{M}})$ and in the sixth isomorphism we used the fact that ${\rm id}\overset{\sim}{\longrightarrow} \mathbf{R} J_{\che{M}}\circ I_{\che{M}}$. 
\bigskip \noindent (2) First, let us prove that for any $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ one has $I_{M_\infty}(\mathcal{F})\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. There exist isomorphisms in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}})$ \begin{align*} \mathbf{R} j_{M_\infty!!}I_{M_\infty}(\mathcal{F}) &\simeq \mathbf{R} j_{M_\infty!!}\mathbf{q} I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}\\ &\simeq \iota_{\che{M}}\Bbbk_M\otimes I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}\\ &\simeq I_{\che{M}}\rho_{\che{M}}\Bbbk_M\otimes I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}\\ &\simeq I_{\che{M}}(\rho_{\che{M}}\Bbbk_M\otimes \mathbf{R} j_{M_\infty!!}\mathcal{F}) \end{align*} where the third isomorphism follows from $I_{\che{M}}\circ\rho_{\che{M}}^{\mathbb{R}-c} = \iota_{\che{M}}|_{\mathrm{Mod}_{\mathbb{R}-c}(\Bbbk_{\che{M}})}$ and in the fourth isomorphism we used the fact that $I_{\che{M}}(\cdot\otimes\cdot)\simeq I_{\che{M}}(\cdot)\otimes I_{\che{M}}(\cdot)$. Since $I_{\che{M}}(\rho_{\che{M}}\Bbbk_M\otimes \mathbf{R} j_{M_\infty!!}\mathcal{F})\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$, we have $\mathbf{R} j_{M_\infty!!}I_{M_\infty}(\mathcal{F})\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{\che{M}})$. This implies $I_{M_\infty}(\mathcal{F})\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. By (1), the functor $I_{M_\infty}$ is fully faithful. Let us prove that the functor $I_{M_\infty}$ is essentially surjective. Let $G\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_{M_\infty})$. Then we have $\mathbf{R} j_{M_\infty!!}G\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_{\che{M}})$. 
By Theorem \ref{thm2.26}, there exists $\mathcal{F}\in {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{\che{M}}^{\rm sub})$ such that $\mathbf{R} j_{M_\infty!!}G\simeq I_{\che{M}}\mathcal{F}$ and hence we have $G\simeq j_{M_\infty}^{-1}I_{\che{M}}\mathcal{F}$. Moreover, there exist isomorphisms in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ \begin{align*} j_{M_\infty}^{-1}I_{\che{M}}\mathbf{R} j_{M_\infty!!}j_{M_\infty}^{-1}\mathcal{F} &\simeq j_{M_\infty}^{-1}I_{\che{M}}(\rho_{\che{M}}\Bbbk_M\otimes\mathcal{F})\\ &\simeq j_{M_\infty}^{-1}(I_{\che{M}}\rho_{\che{M}}\Bbbk_M\otimes I_{\che{M}}\mathcal{F})\\ &\simeq j_{M_\infty}^{-1}(\iota_{\che{M}}\Bbbk_M\otimes I_{\che{M}}\mathcal{F})\\ &\simeq j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty!!}j_{M_\infty}^{-1}I_{\che{M}}\mathcal{F}\\ &\simeq j_{M_\infty}^{-1}I_{\che{M}}\mathcal{F} \end{align*} where in the first isomorphism we used $(\mathbf{R} j_{M_\infty!!}\circ j_{M_\infty}^{-1})(\cdot) \simeq \rho_{\che{M}}\Bbbk_M\otimes(\cdot)$, in the second isomorphism we used the fact that $I_{\che{M}}(\cdot\otimes\cdot)\simeq I_{\che{M}}(\cdot)\otimes I_{\che{M}}(\cdot)$, the third isomorphism follows from $I_{\che{M}}\circ\rho_{\che{M}}^{\mathbb{R}-c} = \iota_{\che{M}}|_{\mathrm{Mod}_{\mathbb{R}-c}(\Bbbk_{\che{M}})}$ and in the fourth isomorphism we used the fact that $(\mathbf{R} j_{M_\infty!!}\circ j_{M_\infty}^{-1})(\cdot)\simeq \iota_{\che{M}}\Bbbk_{M}\otimes(\cdot)$. Hence we have $$G\simeq j_{M_\infty}^{-1}I_{\che{M}}\mathcal{F} \simeq j_{M_\infty}^{-1}I_{\che{M}}\mathbf{R} j_{M_\infty!!}j_{M_\infty}^{-1}\mathcal{F} \simeq I_{M_\infty}(j_{M_\infty}^{-1}\mathcal{F}).$$ This implies that the functor $I_{M_\infty}$ is essentially surjective. This completes the proof. 
\end{proof} We will denote by $$\lambda_{M_\infty}\colon {\mathbf{D}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_{M_\infty})\overset{\sim}{\longrightarrow} {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$$ the inverse functor of $I_{M_\infty}\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) \overset{\sim}{\longrightarrow} {\mathbf{D}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_{M_\infty})$. \begin{proposition}\label{prop3.7} Let $f\colon M_\infty\to N_\infty$ be a morphism of real analytic bordered spaces associated with a morphism $\che{f}\colon\che{M}\to\che{N}$ of real analytic manifolds. \begin{itemize} \item[\rm (1)] For any $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $G\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$, we have $$\mathbf{R} J_{M_\infty}{\mathbf{R}}{\mathcal{I}}hom(I_{M_\infty}\mathcal{F}, G) \simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}, \mathbf{R} J_{M_\infty}G).$$ \item[\rm (2)] For any $\mathcal{L}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_M)$, any $\mathcal{F}, \mathcal{F}_1, \mathcal{F}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $\mathcal{G}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$, we have \begin{itemize} \item[\rm (i)] $\alpha_{M_\infty}I_{M_\infty}\mathcal{F}\simeq \rho_{M_\infty}^{-1}\mathcal{F}$, \item[\rm (ii)] $I_{M_\infty}\rho_{M_\infty!}\mathcal{L}\simeq\beta_{M_\infty}\mathcal{L}$, \item[\rm (iii)] $I_{M_\infty}f^{-1}\mathcal{G}\simeq f^{-1}I_{N_\infty}\mathcal{G}$, \item[\rm (iv)] $\mathbf{R} f_{!!}I_{M_\infty}\mathcal{F}\simeq I_{N_\infty}\mathbf{R} f_{!!}\mathcal{F}$, \item[\rm (v)] $I_{M_\infty}f^{!}\mathcal{G}\simeq f^{!}I_{N_\infty}\mathcal{G}$, \item[\rm (vi)] $I_{M_\infty}(\mathcal{F}_1\otimes \mathcal{F}_2) \simeq I_{M_\infty}(\mathcal{F}_1)\otimes I_{M_\infty}(\mathcal{F}_2)$. 
\end{itemize} \item[\rm (3)] For any $\mathcal{L}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M})$, any $F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ and any $G\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty})$, we have \begin{itemize} \item[\rm (i)] $\mathbf{R} J_{M_\infty}\iota_{M_\infty}\mathcal{L}\simeq \rho_{M_\infty\ast}\mathcal{L}$, \item[\rm (ii)] $\rho_{M_\infty}^{-1}\mathbf{R} J_{M_\infty}F\simeq \alpha_{M_\infty}F$, \item[\rm (iii)] $\mathbf{R} J_{M_\infty}\beta_{M_\infty}\mathcal{L}\simeq \rho_{M_\infty!}\mathcal{L}$, \item[\rm (iv)] $\mathbf{R} f_{\ast}\mathbf{R} J_{M_\infty}F\simeq \mathbf{R} J_{N_\infty}\mathbf{R} f_{\ast}F$, \item[\rm (v)] $\mathbf{R} J_{M_\infty}f^{!}G\simeq f^{!}\mathbf{R} J_{N_\infty}G$. \end{itemize} \item[\rm (4)] For any $F, F_1, F_2\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$, any $G\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})$ and any $\mathcal{L}\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty})$, we have \begin{itemize} \item[\rm (i)] $I_{M_\infty}\rho_{M_\infty\ast}^{\mathbb{R}-c}\mathcal{L}\simeq \iota_{M_\infty}\mathcal{L}$, \item[\rm (ii)] $\lambda_{M_\infty}f^{-1}G\simeq f^{-1}\lambda_{N_\infty}G$, \item[\rm (iii)] $\mathbf{R} f_{!!}\lambda_{M_\infty}F\simeq \lambda_{N_\infty}\mathbf{R} f_{!!}F$, \item[\rm (iv)] $\lambda_{M_\infty}(F_1\otimes F_2) \simeq \lambda_{M_\infty}(F_1)\otimes \lambda_{M_\infty}(F_2)$. \end{itemize} \end{itemize} \end{proposition} \begin{proof} (1) Let $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and $G\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$. 
Then we have \begin{align*} \mathbf{R} J_{M_\infty}{\mathbf{R}}{\mathcal{I}}hom(I_{M_\infty}\mathcal{F}, G) &= j_{M_\infty}^{-1}\mathbf{R} J_{\che{M}}\mathbf{R} j_{M_\infty\ast} {\mathbf{R}}{\mathcal{I}}hom(j_{M_\infty}^{-1}I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}, G)\\ &\simeq j_{M_\infty}^{-1}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathbf{R} j_{M_\infty!!}\mathcal{F}, \mathbf{R} J_{\che{M}}\mathbf{R} j_{M_\infty\ast}G)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty!!}\mathcal{F}, j_{M_\infty}^{-1}\mathbf{R} J_{\che{M}}\mathbf{R} j_{M_\infty\ast}G)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}, \mathbf{R} J_{M_\infty}G), \end{align*} where in the second isomorphism we used the adjointness of $(j_{M_\infty}^{-1}, \mathbf{R} j_{M_\infty\ast})$ and $(I_{\che{M}}, \mathbf{R} J_{\che{M}})$ and the last isomorphism follows from $j_{M_\infty}^{-1}\circ\mathbf{R} j_{M_\infty!!}\simeq{\rm id}$. \bigskip \noindent (2)(i) Let $\mathcal{F}\in {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. Then we have \begin{align*} \alpha_{M_\infty}I_{M_\infty}\mathcal{F} &\simeq j_M^{-1}\alpha_{\che{M}}\mathbf{R} j_{M_\infty!!}j_{M_\infty}^{-1}I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}\\ &\simeq j_M^{-1}\alpha_{\che{M}}(\iota_{\che{M}}\Bbbk_M\otimes I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F})\\ &\simeq j_M^{-1}\alpha_{\che{M}}\iota_{\che{M}}\Bbbk_M\otimes j_M^{-1}\alpha_{\che{M}}I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}\\ &\simeq j_M^{-1}\rho_{\che{M}}^{-1}\mathbf{R} j_{M_\infty!!}\mathcal{F}\\ &\simeq \rho_{M_\infty}^{-1}j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty!!}\mathcal{F}\\ &\simeq \rho_{M_\infty}^{-1}\mathcal{F}, \end{align*} where in the second isomorphism we used the fact that $\mathbf{R} j_{M_\infty!!}j_{M_\infty}^{-1}\simeq\iota_{\che{M}}\Bbbk_M\otimes(\cdot)$ and the fourth isomorphism follows from $\alpha_{\che{M}}\circ I_{\che{M}}\simeq\rho_{\che{M}}^{-1}$. 
\medskip \noindent (ii) Let $\mathcal{L}\in {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M})$. Since $I_{\che{M}}\circ\rho_{\che{M}!}\simeq\beta_{\che{M}}$, we have \begin{align*} I_{M_\infty}\rho_{M_\infty!}\mathcal{L} &\simeq j_{M_\infty}^{-1}I_{\che{M}}\mathbf{R} j_{M_\infty!!}\rho_{M_\infty!}\mathcal{L}\\ &\simeq j_{M_\infty}^{-1}I_{\che{M}}\rho_{\che{M}!}j_{M!}\mathcal{L}\\ &\simeq j_{M_\infty}^{-1}\beta_{\che{M}}j_{M!}\mathcal{L}\\ &\simeq\beta_{M_\infty}\mathcal{L}. \end{align*} \medskip \noindent (iii) Let $\mathcal{G}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$. Since $I_{\che{M}}\circ\che{f}^{-1}\simeq\che{f}^{-1}\circ I_{\che{N}}$, we have \begin{align*} I_{M_\infty}f^{-1}\mathcal{G} &\simeq j_{M_\infty}^{-1}I_{\che{M}}\mathbf{R} j_{M_\infty!!}f^{-1}\mathcal{G}\\ &\simeq j_{M_\infty}^{-1}I_{\che{M}}\che{f}^{-1}\mathbf{R} j_{N_\infty!!}\mathcal{G}\\ &\simeq j_{M_\infty}^{-1}\che{f}^{-1}I_{\che{N}}\mathbf{R} j_{N_\infty!!}\mathcal{G}\\ &\simeq {f}^{-1}j_{N_\infty}^{-1}I_{\che{N}}\mathbf{R} j_{N_\infty!!}\mathcal{G}\\ &\simeq f^{-1}I_{N_\infty}\mathcal{G}. \end{align*} \medskip \noindent (iv) Let $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. Since $I_{\che{N}}\circ\che{f}_{!!}\simeq\che{f}_{!!}\circ I_{\che{M}}$, we have \begin{align*} \mathbf{R} f_{!!}I_{M_\infty}\mathcal{F} &\simeq \mathbf{R} f_{!!}j_{M_\infty}^{-1}I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}\\ &\simeq j_{N_\infty}^{-1}\mathbf{R} \che{f}_{!!}I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}\\ &\simeq j_{N_\infty}^{-1}I_{\che{N}}\mathbf{R} \che{f}_{!!}\mathbf{R} j_{M_\infty!!}\mathcal{F}\\ &\simeq j_{N_\infty}^{-1}I_{\che{N}}\mathbf{R} j_{N_\infty!!}\mathbf{R}{f}_{!!}\mathcal{F}\\ &\simeq I_{N_\infty}\mathbf{R}{f}_{!!}\mathcal{F}. \end{align*} \medskip \noindent (v) Let $G\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$. 
By Lemma \ref{lem3.5} (7) we have $f^!I_{N_\infty}G\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$ and hence by Proposition \ref{prop3.6} (2) we have $I_{M_\infty}\mathbf{R} J_{M_\infty} f^!I_{N_\infty}G\simeq f^!I_{N_\infty}G$. Then we have \begin{align*} f^!I_{N_\infty}G \simeq I_{M_\infty}\mathbf{R} J_{M_\infty} f^!I_{N_\infty}G \simeq I_{M_\infty}f^!\mathbf{R} J_{N_\infty}I_{N_\infty}G \simeq I_{M_\infty}f^!G, \end{align*} where the second isomorphism follows from (3)(v) and in the last isomorphism we used $\mathbf{R} J_{N_\infty}\circ I_{N_\infty}\simeq{\rm id}$. \medskip \noindent (vi) Let $\mathcal{F}_1, \mathcal{F}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. Then we have $$\mathbf{R} j_{M_\infty!!}(\mathcal{F}_1\otimes \mathcal{F}_2) \simeq \mathbf{R} j_{M_\infty!!}(j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty!!}\mathcal{F}_1\otimes \mathcal{F}_2) \simeq \mathbf{R} j_{M_\infty!!}\mathcal{F}_1\otimes\mathbf{R} j_{M_\infty!!}\mathcal{F}_2,$$ and hence we have $$I_{\che{M}}\mathbf{R} j_{M_\infty!!}(\mathcal{F}_1\otimes \mathcal{F}_2) \simeq I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}_1\otimes I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}_2.$$ Hence, we have \begin{align*} I_{M_\infty}(\mathcal{F}_1\otimes \mathcal{F}_2) &\simeq j_{M_\infty}^{-1}I_{\che{M}}\mathbf{R} j_{M_\infty!!}(\mathcal{F}_1\otimes \mathcal{F}_2)\\ &\simeq j_{M_\infty}^{-1}I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}_1 \otimes j_{M_\infty}^{-1}I_{\che{M}}\mathbf{R} j_{M_\infty!!}\mathcal{F}_2\\ &\simeq I_{M_\infty}\mathcal{F}_1\otimes I_{M_\infty}\mathcal{F}_2. \end{align*} \bigskip \noindent (3)(i) Let $\mathcal{L}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M})$. 
Since $\mathbf{R} J_{\che{M}}\circ\iota_{\che{M}}\simeq \rho_{\che{M}\ast}$, we have \begin{align*} \mathbf{R} J_{M_\infty}\iota_{M_\infty}\mathcal{L} &\simeq j_{M_\infty}^{-1}\mathbf{R} J_{\che{M}}\mathbf{R} j_{M_\infty\ast}\iota_{M_\infty}\mathcal{L}\\ &\simeq j_{M_\infty}^{-1}\mathbf{R} J_{\che{M}}\iota_{\che{M}}\mathbf{R} j_{M_\infty\ast}\mathcal{L}\\ &\simeq j_{M_\infty}^{-1}\rho_{\che{M}\ast}\mathbf{R} j_{M_\infty\ast}\mathcal{L}\\ &\simeq \rho_{M_\infty\ast}j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty\ast}\mathcal{L}\\ &\simeq \rho_{M_\infty\ast}\mathcal{L}. \end{align*} \medskip \noindent (ii) Let $F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$. Since $\rho_{\che{M}}^{-1}\circ\mathbf{R} J_{\che{M}}\simeq \alpha_{\che{M}}$, we have \begin{align*} \rho_{M_\infty}^{-1}\mathbf{R} J_{M_\infty}F &\simeq \rho_{M_\infty}^{-1}j_{M_\infty}^{-1}\mathbf{R} J_{\che{M}}\mathbf{R} j_{M_\infty\ast}F\\ &\simeq j_{M_\infty}^{-1}\rho_{\che{M}}^{-1}\mathbf{R} J_{\che{M}}\mathbf{R} j_{M_\infty\ast}F\\ &\simeq j_{M_\infty}^{-1}\alpha_{\che{M}}\mathbf{R} j_{M_\infty\ast}F\\ &\simeq \alpha_{M_\infty}j_{M_\infty}^{-1}\mathbf{R} j_{M_\infty\ast}F\\ &\simeq\alpha_{M_\infty}F. \end{align*} \medskip \noindent (iii) Let $\mathcal{L}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M})$. By using (2)(ii) and the fact that $\mathbf{R} J_{M_\infty}\circ I_{M_\infty}\simeq{\rm id}$, we have $$\mathbf{R} J_{M_\infty}\beta_{M_\infty}\mathcal{L}\simeq \mathbf{R} J_{M_\infty}I_{M_\infty}\rho_{M_\infty!}\mathcal{L}\simeq \rho_{M_\infty!}\mathcal{L}.$$ \medskip \noindent (iv) Let $F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ and $\mathcal{G}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$. 
Then we have \begin{align*} \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})}(\mathcal{G}, \mathbf{R} f_\ast\mathbf{R} J_{M_\infty}F) &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}(I_{M_\infty}f^{-1}\mathcal{G}, F)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}(f^{-1}I_{N_\infty}\mathcal{G}, F)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})}(\mathcal{G}, \mathbf{R} J_{N_\infty}\mathbf{R} f_\ast F), \end{align*} where the first and last isomorphisms follow from the adjointness of $(f^{-1}, \mathbf{R} f_{\ast})$ and $(I, \mathbf{R} J)$ and in the second isomorphism we used (2)(iii). Hence there exists an isomorphism $\mathbf{R} f_\ast\mathbf{R} J_{M_\infty}F\simeq \mathbf{R} J_{N_\infty}\mathbf{R} f_\ast F$ in ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$. \medskip \noindent (v) Let $G\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty})$. Since $\mathbf{R} J_{\che{M}}\circ \che{f}^!\simeq \che{f}^!\circ\mathbf{R} J_{\che{N}}$, we have \begin{align*} \mathbf{R} J_{M_\infty}f^!G &\simeq j_{M_\infty}^{-1}\mathbf{R} J_{\che{M}}\mathbf{R} j_{M_\infty\ast}f^!G\\ &\simeq j_{M_\infty}^{-1}\mathbf{R} J_{\che{M}}\che{f}^!\mathbf{R} j_{N_\infty\ast}G\\ &\simeq j_{M_\infty}^{-1}\che{f}^!\mathbf{R} J_{\che{N}}\mathbf{R} j_{N_\infty\ast}G\\ &\simeq f^!j_{N_\infty}^{-1}\mathbf{R} J_{\che{N}}\mathbf{R} j_{N_\infty\ast}G\\ &\simeq f^!\mathbf{R} J_{N_\infty}G. \end{align*} \bigskip \noindent (4)(i) Let $\mathcal{L}\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty})$. Then we have $\iota_{M_\infty}\mathcal{L}\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$ by Lemma \ref{lem3.5} (1). 
Hence, by using (3)(i) and Proposition \ref{prop3.6} (2), we have $$\iota_{M_\infty}\mathcal{L} \simeq I_{M_\infty}\mathbf{R} J_{M_\infty}\iota_{M_\infty}\mathcal{L} \simeq I_{M_\infty}\rho_{M_\infty\ast}\mathcal{L}.$$ \medskip \noindent (ii) Let $G\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})$. Then we have $G\simeq I_{N_\infty}\lambda_{N_\infty}G$ by Proposition \ref{prop3.6} (2). Moreover, by using (2)(iii) we have $$\lambda_{M_\infty}f^{-1}G\simeq \lambda_{M_\infty}f^{-1}I_{N_\infty}\lambda_{N_\infty}G \simeq \lambda_{M_\infty}I_{M_\infty}f^{-1}\lambda_{N_\infty}G \simeq f^{-1}\lambda_{N_\infty}G. $$ \medskip \noindent (iii) Let $F\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. Then we have $F\simeq I_{M_\infty}\lambda_{M_\infty}F$ by Proposition \ref{prop3.6} (2). Hence, by using (2)(iv) we have $$\lambda_{N_\infty}\mathbf{R} f_{!!}F\simeq \lambda_{N_\infty}\mathbf{R} f_{!!}I_{M_\infty}\lambda_{M_\infty}F \simeq \lambda_{N_\infty}I_{N_\infty}\mathbf{R} f_{!!}\lambda_{M_\infty}F \simeq \mathbf{R} f_{!!}\lambda_{M_\infty}F. $$ \medskip \noindent (iv) Let $F_1, F_2\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. Then we have $F_i\simeq I_{M_\infty}\lambda_{M_\infty}F_i\ (i=1,2)$ by Proposition \ref{prop3.6} (2). Hence, by using (2)(vi) we have \begin{align*} \lambda_{M_\infty}(F_1\otimes F_2) &\simeq \lambda_{M_\infty}(I_{M_\infty}\lambda_{M_\infty}F_1\,\otimes\,I_{M_\infty}\lambda_{M_\infty}F_2)\\ &\simeq \lambda_{M_\infty}I_{M_\infty}(\lambda_{M_\infty}F_1\,\otimes\,\lambda_{M_\infty}F_2)\\ &\simeq \lambda_{M_\infty}F_1\otimes \lambda_{M_\infty}F_2. \end{align*} \end{proof} \subsection{Convolutions for Subanalytic Sheaves on Real Analytic Bordered Spaces} In this subsection, let us define convolution functors for subanalytic sheaves on real analytic bordered spaces. 
Although this has already been explained in \cite[\S 5.1]{Kas16}, we shall explain it again in detail in this subsection\footnote{In \cite{Kas16}, the author introduced convolution functors for subanalytic sheaves on subanalytic bordered spaces. In this paper, we shall only consider them on real analytic bordered spaces.}. Let $M_\infty = (M, \che{M})$ be a real analytic bordered space. We set $\mathbb{R}_\infty := (\mathbb{R}, \var{\mathbb{R}})$ for $\var{\mathbb{R}} := \mathbb{R}\sqcup\{-\infty, +\infty\}$, and let $t\in\mathbb{R}$ be the affine coordinate. We consider the morphisms of real analytic bordered spaces \[M_\infty\times\mathbb{R}^2_\infty\xrightarrow{p_1,\ p_2,\ \mu}M_\infty \times\mathbb{R}_\infty\overset{\pi}{\longrightarrow}M_\infty\] given by the maps $p_1(x, t_1, t_2) := (x, t_1)$, $p_2(x, t_1, t_2) := (x, t_2)$, $\mu(x, t_1, t_2) := (x, t_1+t_2)$ and $\pi (x,t) := x$. Then, the convolution functors for subanalytic sheaves on $M_\infty \times \mathbb{R}_\infty$ \begin{align*} (\cdot)\overset{+}{\otimes}(\cdot)&\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})\times {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub}) \to{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub}),\\ {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\cdot, \cdot)&\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\times {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub}) \to{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub}) \end{align*} are defined by \begin{align*} \mathcal{F}_1\overset{+}{\otimes} \mathcal{F}_2 & := \mathbf{R}\mu_{!!}(p_1^{-1}\mathcal{F}_1\otimes p_2^{-1}\mathcal{F}_2),\\ {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{F}_1, \mathcal{F}_2) & := \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(p_2^{-1}\mathcal{F}_1, \mu^!\mathcal{F}_2), \end{align*} for $\mathcal{F}_1, 
\mathcal{F}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})$. Note that for any $\mathcal{F}, \mathcal{F}_1, \mathcal{F}_2, \mathcal{F}_3\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$ there exist isomorphisms \begin{align*} \mathcal{F}_1\overset{+}{\otimes} \mathcal{F}_2 &\simeq \mathcal{F}_2\overset{+}{\otimes} \mathcal{F}_1,\\ \mathcal{F}_1\overset{+}{\otimes}\(\mathcal{F}_2\overset{+}{\otimes} \mathcal{F}_3\) &\simeq \(\mathcal{F}_1\overset{+}{\otimes} \mathcal{F}_2\)\overset{+}{\otimes} \mathcal{F}_3,\\ \Bbbk_{\{t=0\}}\overset{+}{\otimes} \mathcal{F} &\simeq \mathcal{F}\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\Bbbk_{\{t=0\}}, \mathcal{F}), \end{align*} where $\{t = 0\}$ stands for $\{(x, t)\in M\times {\mathbb{R}}\ |\ t = 0\}$. Hence, the category ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$ has the structure of a commutative tensor category with $\overset{+}{\otimes}$ as tensor product functor and $\Bbbk_{\{t=0\}}$ as unit object. The convolution functors have several properties similar to those of the tensor product functor and the internal hom functor. For a morphism of real analytic bordered spaces $f\colon M_\infty\to N_\infty$, let us denote by $f_{\mathbb{R}_\infty}\colon M_\infty\times\mathbb{R}_\infty\to N_\infty\times\mathbb{R}_\infty$ the morphism $f\times{\rm id}_{\mathbb{R}_\infty}$ of real analytic bordered spaces. \begin{proposition}\label{prop3.8} Let $f\colon M_\infty\to N_\infty$ be a morphism of real analytic bordered spaces, $\mathcal{F}, \mathcal{F}_1, \mathcal{F}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$ and $\mathcal{G}, \mathcal{G}_1, \mathcal{G}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty\times\mathbb{R}_\infty}^{\rm sub})$. 
There exist isomorphisms \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{F}_1\overset{+}{\otimes} \mathcal{F}_2, \mathcal{F}\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{F}_1, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{F}_2, \mathcal{F})\),\\ \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})}\(\mathcal{F}_1\overset{+}{\otimes} \mathcal{F}_2, \mathcal{F}\) &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})}\(\mathcal{F}_1, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{F}_2, \mathcal{F})\),\\ \mathbf{R} f_{\mathbb{R}_\infty\ast}{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(f_{\mathbb{R}_\infty}^{-1}\mathcal{G}, \mathcal{F}\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{G}, \mathbf{R} f_{\mathbb{R}_\infty\ast}\mathcal{F}\),\\ f_{\mathbb{R}_\infty}^{-1}\(\mathcal{F}_1\overset{+}{\otimes}\mathcal{F}_2\) &\simeq f_{\mathbb{R}_\infty}^{-1}\mathcal{F}_1\overset{+}{\otimes} f_{\mathbb{R}_\infty}^{-1}\mathcal{F}_2,\\ {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathbf{R} f_{\mathbb{R}_\infty!!}\mathcal{F}, \mathcal{G}\) &\simeq \mathbf{R} f_{\mathbb{R}_\infty\ast}{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{F}, f_{\mathbb{R}_\infty}^!\mathcal{G}\),\\ \mathbf{R} f_{\mathbb{R}_\infty!!}\left(\mathcal{F}\overset{+}{\otimes} f_{\mathbb{R}_\infty}^{-1}\mathcal{G}\right) &\simeq \mathbf{R} f_{\mathbb{R}_\infty!!}\mathcal{F}\overset{+}{\otimes} \mathcal{G},\\ f_{\mathbb{R}_\infty}^!{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{G}_1, \mathcal{G}_2) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(f_{\mathbb{R}_\infty}^{-1}\mathcal{G}_1, f_{\mathbb{R}_\infty}^!\mathcal{G}_2\). \end{align*} \end{proposition} \begin{proof} First, let us prove the second isomorphism. 
By using the adjointness, we have \begin{align*} \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})} \(\mathcal{F}_1\overset{+}{\otimes} \mathcal{F}_2, \mathcal{F}_3\) &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})} \(\mathbf{R}\mu_{!!}(p_1^{-1}\mathcal{F}_1\otimes p_2^{-1}\mathcal{F}_2), \mathcal{F}_3\)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})} \(\mathcal{F}_1, \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(p_2^{-1}\mathcal{F}_2, \mu^{!}\mathcal{F}_3)\)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})} \(\mathcal{F}_1, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{F}_2, \mathcal{F}_3)\). \end{align*} Let us prove the first isomorphism. By using the second isomorphism, for any $\mathcal{F}_0\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$, there exist isomorphisms \begin{align*} &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})} \(\mathcal{F}_0,\,{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{F}_1\overset{+}{\otimes} \mathcal{F}_2, \mathcal{F}_3\)\)\\ \simeq\ &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})} \(\mathcal{F}_0\overset{+}{\otimes}\(\mathcal{F}_1\overset{+}{\otimes} \mathcal{F}_2\),\,\mathcal{F}_3\)\\ \simeq\ &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})} \(\(\mathcal{F}_0\overset{+}{\otimes}\mathcal{F}_1\)\overset{+}{\otimes} \mathcal{F}_2,\,\mathcal{F}_3\)\\ \simeq\ &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})} \(\mathcal{F}_0\overset{+}{\otimes}\mathcal{F}_1,\,{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{F}_2, \mathcal{F}_3)\)\\ \simeq\ &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})} \(\mathcal{F}_0, 
{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{F}_1, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{F}_2, \mathcal{F}_3)\)\). \end{align*} Hence, we have ${\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{F}_1\overset{+}{\otimes} \mathcal{F}_2, \mathcal{F}_3\)\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{F}_1, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{F}_2, \mathcal{F}_3)\)$. \medskip Let us denote by $\tl{f}_{\mathbb{R}_\infty}\colon M_\infty\times\mathbb{R}_\infty^2\to N_\infty\times\mathbb{R}_\infty^2$ the morphism $f\times{\rm id}_{\mathbb{R}_\infty^2}$ of real analytic bordered spaces. Then there exist cartesian diagrams: \[\xymatrix@M=5pt@R=20pt@C=40pt{ M_\infty\times \mathbb{R}_\infty^2\ar@{->}[r]^-{\bigstar}\ar@{->}[d]_-{\tl{f}_{\mathbb{R}_\infty}}\ar@{}[rd]|-\Box& M_\infty\times \mathbb{R}_\infty\ar@{->}[r]^-{\pi}\ar@{->}[d]_-{f_{\mathbb{R}_\infty}}\ar@{}[rd]|-\Box& M_\infty\ar@{->}[d]_-{f}\\ N_\infty\times \mathbb{R}_\infty^2\ar@{->}[r]_-{\bigstar}& N_\infty\times \mathbb{R}_\infty\ar@{->}[r]_-{\pi}& N_\infty,}\] where $\bigstar = p_1, p_2, \mu$, respectively. Hence, we have \begin{align*} \mathbf{R} f_{\mathbb{R}_\infty\ast}{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(f_{\mathbb{R}_\infty}^{-1}\mathcal{G}, \mathcal{F}) &\simeq \mathbf{R} f_{\mathbb{R}_\infty\ast}\mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(p_2^{-1}f_{\mathbb{R}_\infty}^{-1}\mathcal{G}, \mu^!\mathcal{F})\\ &\simeq \mathbf{R} p_{1\ast}\mathbf{R} \tl{f}_{\mathbb{R}_\infty\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\tl{f}_{\mathbb{R}_\infty}^{-1}p_2^{-1}\mathcal{G}, \mu^!\mathcal{F})\\ &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(p_2^{-1}\mathcal{G}, \mathbf{R} \tl{f}_{\mathbb{R}_\infty\ast}\mu^!\mathcal{F})\\ &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(p_2^{-1}\mathcal{G},\mu^! 
\mathbf{R} {f}_{\mathbb{R}_\infty\ast}\mathcal{F})\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{G}, \mathbf{R} f_{\mathbb{R}_\infty\ast}\mathcal{F}), \end{align*} where in the fourth isomorphism we used Proposition \ref{prop3.4} (4). By using Proposition \ref{prop3.4} (3), (4) we have \begin{align*} f_{\mathbb{R}_\infty}^{-1}\(\mathcal{F}_1\overset{+}{\otimes}\mathcal{F}_2\) &\simeq f_{\mathbb{R}_\infty}^{-1}\mathbf{R}\mu_{!!}\(p_1^{-1}\mathcal{F}_1\otimes p_2^{-1}\mathcal{F}_2\)\\ &\simeq \mathbf{R}\mu_{!!}\tl{f}_{\mathbb{R}_\infty}^{-1}\(p_1^{-1}\mathcal{F}_1\otimes p_2^{-1}\mathcal{F}_2\)\\ &\simeq \mathbf{R}\mu_{!!}\(\tl{f}_{\mathbb{R}_\infty}^{-1}p_1^{-1}\mathcal{F}_1\otimes \tl{f}_{\mathbb{R}_\infty}^{-1}p_2^{-1}\mathcal{F}_2\)\\ &\simeq \mathbf{R}\mu_{!!}\(p_1^{-1}{f}_{\mathbb{R}_\infty}^{-1}\mathcal{F}_1\otimes p_2^{-1}{f}_{\mathbb{R}_\infty}^{-1}\mathcal{F}_2\)\\ &\simeq f_{\mathbb{R}_\infty}^{-1}\mathcal{F}_1\overset{+}{\otimes} f_{\mathbb{R}_\infty}^{-1}\mathcal{F}_2, \end{align*} and \begin{align*} \mathbf{R} f_{\mathbb{R}_\infty!!}\left(\mathcal{F}\overset{+}{\otimes} f_{\mathbb{R}_\infty}^{-1}\mathcal{G}\right) &\simeq \mathbf{R} f_{\mathbb{R}_\infty!!}\mathbf{R}\mu_{!!}\left(p_1^{-1}\mathcal{F}\otimes p_2^{-1}f_{\mathbb{R}_\infty}^{-1}\mathcal{G}\right)\\ &\simeq \mathbf{R}\mu_{!!}\mathbf{R}\tl{f}_{\mathbb{R}_\infty!!}\left(p_1^{-1}\mathcal{F}\otimes \tl{f}_{\mathbb{R}_\infty}^{-1}p_2^{-1}\mathcal{G}\right)\\ &\simeq \mathbf{R}\mu_{!!}\left(\mathbf{R}\tl{f}_{\mathbb{R}_\infty!!}p_1^{-1}\mathcal{F}\otimes p_2^{-1}\mathcal{G}\right)\\ &\simeq \mathbf{R}\mu_{!!}\left(p_1^{-1}\mathbf{R}{f}_{\mathbb{R}_\infty!!}\mathcal{F}\otimes p_2^{-1}\mathcal{G}\right)\\ &\simeq \mathbf{R} f_{\mathbb{R}_\infty!!}\mathcal{F}\overset{+}{\otimes} \mathcal{G}. 
\end{align*} Moreover, by using Proposition \ref{prop3.4} (3) we have \begin{align*} f_{\mathbb{R}_\infty}^!{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{G}_1, \mathcal{G}_2) &\simeq f_{\mathbb{R}_\infty}^!\mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(p_2^{-1}\mathcal{G}_1, \mu^{!}\mathcal{G}_2)\\ &\simeq \mathbf{R} p_{1\ast}\tl{f}_{\mathbb{R}_\infty}^!{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(p_2^{-1}\mathcal{G}_1, \mu^{!}\mathcal{G}_2)\\ &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\tl{f}_{\mathbb{R}_\infty}^{-1}p_2^{-1}\mathcal{G}_1, \tl{f}_{\mathbb{R}_\infty}^!\mu^{!}\mathcal{G}_2)\\ &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(p_2^{-1}{f}_{\mathbb{R}_\infty}^{-1}\mathcal{G}_1, \mu^{!}{f}_{\mathbb{R}_\infty}^!\mathcal{G}_2)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(f_{\mathbb{R}_\infty}^{-1}\mathcal{G}_1, f_{\mathbb{R}_\infty}^!\mathcal{G}_2\). \end{align*} By using Proposition \ref{prop3.4} (2), (4) there exist isomorphisms \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathbf{R} f_{\mathbb{R}_\infty!!}\mathcal{F}, \mathcal{G}\) &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(p_2^{-1}\mathbf{R} f_{\mathbb{R}_\infty!!}\mathcal{F}, \mu^!\mathcal{G}\)\\ &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\mathbf{R}\tl{f}_{\mathbb{R}_\infty!!}p_2^{-1}\mathcal{F}, \mu^!\mathcal{G}\)\\ &\simeq \mathbf{R} p_{1\ast}\mathbf{R}\tl{f}_{\mathbb{R}_\infty\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(p_2^{-1}\mathcal{F}, \tl{f}_{\mathbb{R}_\infty}^{!}\mu^!\mathcal{G}\)\\ &\simeq \mathbf{R}{f}_{\mathbb{R}_\infty\ast}\mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(p_2^{-1}\mathcal{F}, \mu^!{f}_{\mathbb{R}_\infty}^{!}\mathcal{G}\)\\ &\simeq \mathbf{R} f_{\mathbb{R}_\infty\ast}{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{F}, f_{\mathbb{R}_\infty}^!\mathcal{G}\).
\end{align*} \end{proof} \begin{proposition}\label{prop3.9} Let $\mathcal{F}, \mathcal{G}, \mathcal{H}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$ and $\mathcal{F}_0\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. Then there exist isomorphisms \begin{align*} \pi^{-1}\mathcal{F}_0\otimes \(\mathcal{F}\overset{+}{\otimes} \mathcal{G}\) &\simeq \(\pi^{-1}\mathcal{F}_0\otimes\mathcal{F}\)\overset{+}{\otimes} \mathcal{G},\\ {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\pi^{-1}\mathcal{F}_0, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{F}, \mathcal{G})\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\pi^{-1}\mathcal{F}_0\otimes\mathcal{F}, \mathcal{G}\)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{F},{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{F}_0, \mathcal{G})\),\\ \mathbf{R}\pi_\ast{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\mathcal{F}\overset{+}{\otimes} \mathcal{G}, \mathcal{H}\) &\simeq \mathbf{R}\pi_\ast{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\mathcal{F}, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{G}, \mathcal{H})\). \end{align*} \end{proposition} \begin{proof} First, let us remark that $\pi\circ p_1 = \pi\circ p_2 = \pi\circ\mu$. By using Proposition \ref{prop3.4} (3), we have \begin{align*} \pi^{-1}\mathcal{F}_0\otimes \(\mathcal{F}\overset{+}{\otimes} \mathcal{G}\) &\simeq \pi^{-1}\mathcal{F}_0\otimes \mathbf{R}\mu_{!!}\(p_1^{-1}\mathcal{F}\otimes p_2^{-1}\mathcal{G}\)\\ &\simeq \mathbf{R}\mu_{!!}\(\mu^{-1}\pi^{-1}\mathcal{F}_0\otimes \(p_1^{-1}\mathcal{F}\otimes p_2^{-1}\mathcal{G}\)\)\\ &\simeq \mathbf{R}\mu_{!!}\(p_1^{-1}\pi^{-1}\mathcal{F}_0\otimes \(p_1^{-1}\mathcal{F}\otimes p_2^{-1}\mathcal{G}\)\)\\ &\simeq \mathbf{R}\mu_{!!}\(p_1^{-1}(\pi^{-1}\mathcal{F}_0\otimes\mathcal{F})\otimes p_2^{-1}\mathcal{G}\)\\ &\simeq \(\pi^{-1}\mathcal{F}_0\otimes\mathcal{F}\)\overset{+}{\otimes} \mathcal{G}.
\end{align*} By using Proposition \ref{prop3.4} (2), where we write $\mathcal{F}_1 = \mathcal{F}$ and $\mathcal{F}_2 = \mathcal{G}$, we have \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\pi^{-1}\mathcal{F}_0, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{F}_1, \mathcal{F}_2)\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\pi^{-1}\mathcal{F}_0, \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(p_2^{-1}\mathcal{F}_1, \mu^!\mathcal{F}_2)\)\\ &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(p_1^{-1}\pi^{-1}\mathcal{F}_0, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(p_2^{-1}\mathcal{F}_1, \mu^!\mathcal{F}_2)\)\\ &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(p_1^{-1}\pi^{-1}\mathcal{F}_0\otimes p_2^{-1}\mathcal{F}_1, \mu^!\mathcal{F}_2\). \end{align*} Moreover, by using Proposition \ref{prop3.4} (3) we have \begin{align*} \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(p_1^{-1}\pi^{-1}\mathcal{F}_0\otimes p_2^{-1}\mathcal{F}_1, \mu^!\mathcal{F}_2\) &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(p_2^{-1}\pi^{-1}\mathcal{F}_0\otimes p_2^{-1}\mathcal{F}_1, \mu^!\mathcal{F}_2\)\\ &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(p_2^{-1}(\pi^{-1}\mathcal{F}_0\otimes \mathcal{F}_1), \mu^!\mathcal{F}_2\)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\pi^{-1}\mathcal{F}_0\otimes\mathcal{F}_1, \mathcal{F}_2\) \end{align*} and by using Proposition \ref{prop3.4} (2), (3) we have \begin{align*} &\mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(p_1^{-1}\pi^{-1}\mathcal{F}_0\otimes p_2^{-1}\mathcal{F}_1, \mu^!\mathcal{F}_2\)\\ \simeq\ &\mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\mu^{-1}\pi^{-1}\mathcal{F}_0\otimes p_2^{-1}\mathcal{F}_1, \mu^!\mathcal{F}_2\)\\ \simeq\ &\mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(p_2^{-1}\mathcal{F}_1, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mu^{-1}\pi^{-1}\mathcal{F}_0, \mu^!\mathcal{F}_2)\)\\ \simeq\ &\mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm
sub}\(p_2^{-1}\mathcal{F}_1, \mu^!{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{F}_0, \mathcal{F}_2)\)\\ \simeq\ &{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{F}_1, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{F}_0, \mathcal{F}_2)\). \end{align*} Let us prove the last assertion. For any $\mathcal{G}_0\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, there exist isomorphisms \begin{align*} &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})} \(\mathcal{G}_0,\,\mathbf{R}\pi_\ast{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\mathcal{F}\overset{+}{\otimes} \mathcal{G}, \mathcal{H}\)\)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})} \(\pi^{-1}\mathcal{G}_0,\,{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\mathcal{F}\overset{+}{\otimes} \mathcal{G}, \mathcal{H}\)\)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})} \(\pi^{-1}\mathcal{G}_0\otimes\(\mathcal{F}\overset{+}{\otimes} \mathcal{G}\),\,\mathcal{H}\)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})} \(\(\pi^{-1}\mathcal{G}_0\otimes\mathcal{F}\)\overset{+}{\otimes} \mathcal{G},\,\mathcal{H}\)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})} \(\pi^{-1}\mathcal{G}_0\otimes\mathcal{F},\,{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{G}, \mathcal{H})\)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})} \(\mathcal{G}_0, \mathbf{R}\pi_\ast{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\mathcal{F}, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{G}, \mathcal{H})\)\).\, \end{align*} where in the third (resp.\,fourth) isomorphism we used the first assertion (resp.\,Proposition \ref{prop3.8}). 
Hence, we have an isomorphism $$\mathbf{R}\pi_\ast{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\mathcal{F}\overset{+}{\otimes} \mathcal{G}, \mathcal{H}\) \simeq \mathbf{R}\pi_\ast{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\mathcal{F}, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{G}, \mathcal{H})\).$$ \end{proof} \begin{lemma} For any $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$ and any $\mathcal{G}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, we have \begin{align*} \pi^{-1}\mathcal{G}\overset{+}{\otimes}\mathcal{F} &\simeq \pi^{-1}(\mathcal{G}\otimes \mathbf{R}\pi_{!!}\mathcal{F}),\\ {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\pi^{-1}\mathcal{G}, \mathcal{F}) &\simeq \pi^{!}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{G}, \mathbf{R}\pi_{\ast}\mathcal{F}),\\ {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{F}, \pi^{!}\mathcal{G}) &\simeq \pi^{!}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathbf{R}\pi_{!!}\mathcal{F}, \mathcal{G}). \end{align*} \end{lemma} \begin{proof} Note that there exist Cartesian diagrams ($i=1, 2$): \[\xymatrix@M=5pt@R=20pt@C=40pt{ {M_\infty\times \mathbb{R}_\infty^2}\ar@{}[rd]|-\Box\ar@{->}[r]^-{\mu}\ar@{->}[d]_-{p_i}& {M_\infty\times \mathbb{R}_\infty}\ar@{->}[d]_-{\pi}& {M_\infty\times \mathbb{R}_\infty^2}\ar@{->}[l]_-{p_1}\ar@{->}[d]_-{p_2}\ar@{}[ld]|-\Box\\ {M_\infty\times \mathbb{R}_\infty}\ar@{->}[r]_-{\pi}& {M_\infty}& {M_\infty\times \mathbb{R}_\infty}\ar@{->}[l]^-{\pi}.}\] Then by using Proposition \ref{prop3.4} (3), (4), we have \begin{align*} \pi^{-1}\mathcal{G}\overset{+}{\otimes}\mathcal{F} &\simeq \mathbf{R}\mu_{!!}\(p_1^{-1}\pi^{-1}\mathcal{G}\otimes p_2^{-1}\mathcal{F}\)\\ &\simeq \mathbf{R}\mu_{!!}\(\mu^{-1}\pi^{-1}\mathcal{G}\otimes p_2^{-1}\mathcal{F}\)\\ &\simeq \pi^{-1}\mathcal{G}\otimes \mathbf{R}\mu_{!!}p_2^{-1}\mathcal{F}\\ &\simeq \pi^{-1}\mathcal{G}\otimes \pi^{-1}\mathbf{R}\pi_{!!}\mathcal{F}\\ &\simeq \pi^{-1}(\mathcal{G}\otimes \mathbf{R}\pi_{!!}\mathcal{F}).
\end{align*} Moreover, by using Proposition \ref{prop3.4} (2), (3), (4), we have \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\pi^{-1}\mathcal{G}, \mathcal{F}) &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(p_2^{-1}\pi^{-1}\mathcal{G}, \mu^!\mathcal{F})\\ &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mu^{-1}\pi^{-1}\mathcal{G}, \mu^!\mathcal{F})\\ &\simeq \mathbf{R} p_{1\ast}\mu^!{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{G}, \mathcal{F})\\ &\simeq \pi^!\mathbf{R}\pi_{\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{G}, \mathcal{F})\\ &\simeq \pi^{!}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{G}, \mathbf{R}\pi_{\ast}\mathcal{F}) \end{align*} and \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{F}, \pi^{!}\mathcal{G}) &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(p_2^{-1}\mathcal{F}, \mu^!\pi^{!}\mathcal{G})\\ &\simeq \mathbf{R} p_{1\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(p_2^{-1}\mathcal{F}, p_2^!\pi^{!}\mathcal{G})\\ &\simeq \mathbf{R} p_{1\ast}p_2^!{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}, \pi^{!}\mathcal{G})\\ &\simeq \pi^!\mathbf{R} \pi_{\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}, \pi^{!}\mathcal{G})\\ &\simeq \pi^{!}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathbf{R}\pi_{!!}\mathcal{F}, \mathcal{G}).
\end{align*} \end{proof} Since $\mathbf{R}\pi_{!!}\Bbbk_{\{t\leq0\}} \simeq \mathbf{R}\pi_{!!}\Bbbk_{\{t\geq0\}}\simeq0$ and $\pi^{-1}\Bbbk_M\simeq\Bbbk_{M\times\mathbb{R}}$, we have \begin{corollary}\label{cor3.11} For any $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$ and any $\mathcal{G}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, we have \begin{align*} \rho_{M_\infty\times\mathbb{R}_\infty\ast}\(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}\)\overset{+}{\otimes} \pi^{-1}\mathcal{G} &\simeq 0\\ {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}\(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}\), \pi^{-1}\mathcal{G}\) &\simeq 0,\\ \rho_{M_\infty\times\mathbb{R}_\infty\ast}\Bbbk_{M\times\mathbb{R}}\overset{+}{\otimes} \mathcal{F} &\simeq \pi^{-1}\mathbf{R}\pi_{!!}\mathcal{F}\ (\,\simeq \pi^!\mathbf{R}\pi_{!!}\mathcal{F}[-1]),\\ {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}\Bbbk_{M\times\mathbb{R}}, \mathcal{F}\) &\simeq \pi^{!}\mathbf{R}\pi_{\ast}\mathcal{F}\ (\,\simeq \pi^{-1}\mathbf{R}\pi_{\ast}\mathcal{F}[1]). \end{align*} \end{corollary} At the end of this subsection, we shall prove the following proposition. \begin{proposition}\label{prop3.12} Let $M_\infty = (M, \che{M})$ be a real analytic bordered space. 
\begin{itemize} \item[\rm (1)] For any $\mathcal{F}_1, \mathcal{F}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$, there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty}):$ $$I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_1\overset{+}{\otimes} I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_2 \simeq I_{M_\infty\times\mathbb{R}_\infty}\(\mathcal{F}_1\overset{+}{\otimes}\mathcal{F}_2\).$$ \item[\rm (2)] For any $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$ and any $G\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$, there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub}):$ $$\mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom^+\(I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}, G\) \simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{F}, \mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}G\).$$ \item[\rm (3)] The functor $$(\cdot)\overset{+}{\otimes}(\cdot)\colon {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})\times{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty}) \to{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$$ is well defined.
\item[\rm (4)] For any $F_1, F_2\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$, there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$ $$\lambda_{M_\infty\times\mathbb{R}_\infty}F_1\overset{+}{\otimes} \lambda_{M_\infty\times\mathbb{R}_\infty}F_2 \simeq \lambda_{M_\infty\times\mathbb{R}_\infty}\(F_1\overset{+}{\otimes} F_2\).$$ \end{itemize} \end{proposition} \begin{proof} (1) Let us denote by $S$ the closure of $\{(t_1, t_2, t_3)\in\mathbb{R}^3\ |\ t_1+t_2+t_3=0\}$ in $\var{\mathbb{R}}^3$, and consider the morphisms $\tl{p}_1, \tl{p}_2, \tl{\mu}\colon S\to\var{\mathbb{R}}$ given by $\tl{p}_1(t_1, t_2, t_3) = t_1,\ \tl{p}_2(t_1, t_2, t_3) = t_2,\ \tl{\mu}(t_1, t_2, t_3) = t_1+t_2 = -t_3$. We shall denote by the same symbols the corresponding morphisms $\che{M}\times S\to\che{M}\times\var{\mathbb{R}}$. Then there exists a commutative diagram \[\xymatrix@M=7pt@C=35pt{ M_\infty\times\mathbb{R}_\infty^2\ar@{->}[r]^-k\ar@{->}[d]_-u & \che{M}\times S\ar@{->}[d]^-{\tl{u}}\\ M_\infty\times\mathbb{R}_\infty\ar@{->}[r]_-{j_{M_\infty\times\mathbb{R}_\infty}} &\che{M}\times\var{\mathbb{R}},}\] where $u = p_1, p_2, \mu$, and $k$ is the morphism associated with the embedding $\mathbb{R}^2\hookrightarrow S, (t_1, t_2)\mapsto (t_1, t_2, -t_1-t_2)$. Note that for any $F_1, F_2\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$ there exists an isomorphism $$ F_1\overset{+}{\otimes} F_2\simeq j_{M_\infty\times\mathbb{R}_\infty}^{-1}\mathbf{R}\tl{\mu}_{!!}( \tl{p}_1^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}F_1\otimes \tl{p}_2^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}F_2). $$ This assertion can be proved similarly to \cite[Lem.\:4.3.9]{DK16}.
Then we have an isomorphism $$ I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_1\overset{+}{\otimes} I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_2 \simeq j_{M_\infty\times\mathbb{R}_\infty}^{-1}\mathbf{R}\tl{\mu}_{!!}\( \tl{p}_1^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_1 \otimes \tl{p}_2^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_2\). $$ Moreover, we have isomorphisms \begin{align*} \tl{p}_1^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_1 &\simeq \tl{p}_1^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!} j_{M_\infty\times\mathbb{R}_\infty}^{-1}I_{\che{M}\times\var{\mathbb{R}}}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_1\\ &\simeq \tl{p}_1^{-1}(\iota_{\che{M}\times\var{\mathbb{R}}}\Bbbk_{M\times\mathbb{R}} \otimes I_{\che{M}\times\var{\mathbb{R}}}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_1)\\ &\simeq \tl{p}_1^{-1}(I_{\che{M}\times\var{\mathbb{R}}}\rho_{\che{M}\times\var{\mathbb{R}}\ast}\Bbbk_{M\times\mathbb{R}} \otimes I_{\che{M}\times\var{\mathbb{R}}}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_1)\\ &\simeq \tl{p}_1^{-1}I_{\che{M}\times\var{\mathbb{R}}}\(\rho_{\che{M}\times\var{\mathbb{R}}\ast}\Bbbk_{M\times\mathbb{R}} \otimes \mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_1\)\\ &\simeq I_{\che{M}\times S}\tl{p}_1^{-1}\(\rho_{\che{M}\times\var{\mathbb{R}}\ast}\Bbbk_{M\times\mathbb{R}} \otimes \mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_1\)\\ &\simeq I_{\che{M}\times S}\tl{p}_1^{-1}\( \mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}j_{M_\infty\times\mathbb{R}_\infty}^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_1\)\\ &\simeq I_{\che{M}\times S}\(\tl{p}_1^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_1\), \end{align*} where in the second (resp.\:sixth) isomorphism we used the fact that
$\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}j_{M_\infty\times\mathbb{R}_\infty}^{-1} \simeq\iota_{\che{M}\times\var{\mathbb{R}}}\Bbbk_{M\times\mathbb{R}}\otimes(\cdot)$ (resp.\:$\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}j_{M_\infty\times\mathbb{R}_\infty}^{-1} \simeq\rho_{\che{M}\times\var{\mathbb{R}}\ast}\Bbbk_{M\times\mathbb{R}}\otimes(\cdot)$). See also the end of Subsection \ref{subsec2.5}. In a similar way, we have an isomorphism $$\tl{p}_2^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_2 \simeq I_{\che{M}\times S}\(\tl{p}_2^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_2\).$$ Hence, there exist isomorphisms \begin{align*} I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_1\overset{+}{\otimes} I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_2 &\simeq j_{M_\infty\times\mathbb{R}_\infty}^{-1}\mathbf{R}\tl{\mu}_{!!}I_{\che{M}\times S} \(\tl{p}_1^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_1 \otimes \tl{p}_2^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_2\)\\ &\simeq j_{M_\infty\times\mathbb{R}_\infty}^{-1}I_{\che{M}\times \var{\mathbb{R}}}\mathbf{R}\tl{\mu}_{!!} \(\tl{p}_1^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_1 \otimes \tl{p}_2^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_2\)\\ &\simeq I_{M_\infty\times\mathbb{R}_\infty}j_{M_\infty\times\mathbb{R}_\infty}^{-1}\mathbf{R}\tl{\mu}_{!!} \(\tl{p}_1^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_1 \otimes \tl{p}_2^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_2\). 
\end{align*} Moreover, we have \begin{align*} &j_{M_\infty\times\mathbb{R}_\infty}^{-1}\mathbf{R}\tl{\mu}_{!!} \(\tl{p}_1^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_1 \otimes \tl{p}_2^{-1}\mathbf{R} j_{M_\infty\times\mathbb{R}_\infty!!}\mathcal{F}_2\)\\ \simeq\ &\mathbf{R}{\mu}_{!!}k^{-1} \(\mathbf{R} k_{!!}{p}_1^{-1}\mathcal{F}_1\otimes \mathbf{R} k_{!!}{p}_2^{-1}\mathcal{F}_2\)\\ \simeq\ &\mathbf{R}{\mu}_{!!} \({p}_1^{-1}\mathcal{F}_1\otimes {p}_2^{-1}\mathcal{F}_2\) \end{align*} and hence we have $I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_1\overset{+}{\otimes} I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_2 \simeq I_{M_\infty\times\mathbb{R}_\infty}\(\mathcal{F}_1\overset{+}{\otimes}\mathcal{F}_2\)$. \medskip \noindent (2) For any $\mathcal{F}_0\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$, there exist isomorphisms \begin{align*} &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})}\( \mathcal{F}_0, \mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom^+\(I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}, G\)\)\\ \simeq\ &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})}\( I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_0\overset{+}{\otimes} I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}, G\)\\ \simeq\ &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})}\( I_{M_\infty\times\mathbb{R}_\infty}\(\mathcal{F}_0\overset{+}{\otimes} \mathcal{F}\), G\)\\ \simeq\ &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})}\(\mathcal{F}_0, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{F}, \mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}G\)\), \end{align*} where in the first and last isomorphisms we used Proposition \ref{prop3.6} (1) and Proposition \ref{prop3.8}, and the second isomorphism follows from the assertion (1). 
Hence we have an isomorphism $\mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom^+\(I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}, G\) \simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathcal{F}, \mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}G\).$ \medskip \noindent (3) Let $F_1, F_2\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$. By Proposition \ref{prop3.6} (2), there exist $\mathcal{F}_1, \mathcal{F}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$ such that $F_1\simeq I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_1, F_2\simeq I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_2$. Moreover, by using the first assertion, we have $$F_1\overset{+}{\otimes} F_2 \simeq I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_1\overset{+}{\otimes} I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_2 \simeq I_{M_\infty\times\mathbb{R}_\infty}\(\mathcal{F}_1\overset{+}{\otimes}\mathcal{F}_2\).$$ This implies that $F_1\overset{+}{\otimes} F_2\in {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$. The proof is completed. \medskip \noindent (4) This assertion follows from Proposition \ref{prop3.6} (2) and assertion (1). \end{proof} \subsection{Enhanced Subanalytic Sheaves}\label{subsec3.3} In this subsection, let us define enhanced subanalytic sheaves in a similar way to the definition of enhanced ind-sheaves. Let $M_\infty = (M, \che{M})$ be a real analytic bordered space and set $\mathbb{R}_\infty := (\mathbb{R}, \var{\mathbb{R}})$ for $\var{\mathbb{R}} := \mathbb{R}\sqcup\{-\infty, +\infty\}$.
Let us set $${\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) := {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty \times\mathbb{R}_\infty}^{\rm sub})/\pi^{-1}{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$$ and we shall call an object of ${\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ an enhanced subanalytic sheaf\footnote{It seems that an object of ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$ is called an enhanced subanalytic sheaf in \cite{Kas16}.} on $M_\infty$. The category $\pi^{-1}{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ can be characterized as follows. \begin{lemma} For $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$, the following five conditions are equivalent: \begin{itemize} \item[\rm (i)] $\mathcal{F}\in\pi^{-1}{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, \item[\rm (ii)] $\mathcal{F}\overset{\sim}{\longrightarrow}\Bbbk_{M\times\mathbb{R}}[1]\overset{+}{\otimes}\mathcal{F}$, \item[\rm (iii)] ${\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\Bbbk_{M\times\mathbb{R}}[1], \mathcal{F})\overset{\sim}{\longrightarrow} \mathcal{F}$, \item[\rm (iv)] $\(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}\)\overset{+}{\otimes} \mathcal{F} \simeq 0$, \item[\rm (v)] ${\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}, \mathcal{F}\)\simeq 0$. \end{itemize} \end{lemma} \begin{proof} By using a distinguished triangle $$\Bbbk_{\{t\geq 0\}}\oplus\Bbbk_{\{t\leq 0\}}\longrightarrow \Bbbk_{\{t=0\}}\longrightarrow\Bbbk_{M\times\mathbb{R}}[1]\xrightarrow{+1}$$ and the fact that $\Bbbk_{\{t=0\}}\overset{+}{\otimes} \mathcal{F}\simeq \mathcal{F}$ (resp.\,$\mathcal{F}\simeq{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\Bbbk_{\{t=0\}}, \mathcal{F})$), we see that condition (ii) (resp.\,(iii)) is equivalent to condition (iv) (resp.\,(v)). Let us prove that the three conditions (i), (ii), (iii) are equivalent.
Let us assume that condition (ii) (resp.\,(iii)) is satisfied. Then we have $\mathcal{F}\overset{\sim}{\longrightarrow}\Bbbk_{M\times\mathbb{R}}\overset{+}{\otimes}\mathcal{F}[1]$ (resp.\,${\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\Bbbk_{M\times\mathbb{R}}[1], \mathcal{F})\overset{\sim}{\longrightarrow} \mathcal{F}$). By Corollary \ref{cor3.11}, $\mathcal{F}\overset{\sim}{\longrightarrow}\pi^{-1}\mathbf{R}\pi_{!!}\mathcal{F}[1]$ (resp.\,$\pi^{-1}\mathbf{R}\pi_\ast\mathcal{F}\overset{\sim}{\longleftarrow}\mathcal{F}$). Hence, condition (i) is satisfied. Let us assume that condition (i) is satisfied. By using Proposition \ref{prop3.7} (2)(iii), we have $I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}\in\pi^{-1}{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$, and hence by \cite[Lem.\:4.4.3]{DK16} we have \begin{align*} \pi^{-1}\mathbf{R}\pi_\ast I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F} &\overset{\sim}{\longrightarrow} I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F},\\ I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F} &\overset{\sim}{\longrightarrow} \pi^{!}\mathbf{R}\pi_{!!} I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}. \end{align*} By using Proposition \ref{prop3.7} (3)(iv), (v) (resp.\,(2)(iv), (v)) and Proposition \ref{prop3.6} (2), we have $$ \pi^{-1}\mathbf{R}\pi_\ast\mathcal{F} \overset{\sim}{\longrightarrow} \mathcal{F}\hspace{7pt} (\text{resp.}\, \mathcal{F}\overset{\sim}{\longrightarrow}\pi^{!}\mathbf{R}\pi_{!!} \mathcal{F}). $$ By Corollary \ref{cor3.11}, this implies that condition (iii) (resp.\,(ii)) is satisfied. Therefore, the proof is completed. \end{proof} Let us prove that the quotient functor \[\mathbf{Q}_{M_\infty}^{\rm sub} \colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\] has fully faithful left and right adjoints.
By Corollary \ref{cor3.11}, two functors \begin{align*} \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}&\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) \to{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub}), \hspace{7pt} \\ \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}} &\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) \to{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub}) \end{align*} which are defined by \begin{align*} \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}(\mathbf{Q}_{M_\infty}^{\rm sub}(\cdot)) &:= \rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}) \overset{+}{\otimes} (\cdot) ,\\ \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}(\mathbf{Q}_{M_\infty}^{\rm sub}(\cdot)) &:={\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}), \cdot) \end{align*} are well defined. \begin{lemma}\label{lem3.13} The functors $\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}, \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}$ induce equivalences of categories \begin{align*} \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}} &\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\overset{\sim}{\longrightarrow} \{\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})\ |\ \rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} \mathcal{F}\overset{\sim}{\longrightarrow} \mathcal{F}\},\\ \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}} &\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\overset{\sim}{\longrightarrow} \{\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})\ |\ \mathcal{F}\overset{\sim}{\longrightarrow} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \mathcal{F})\}, \end{align*} respectively. 
Moreover, the quotient functor admits a left (resp.\,right) adjoint $\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}$ (resp.\,$\mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}$). \end{lemma} \begin{proof} By Corollary \ref{cor3.11} and the fact that for any $K\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ there exist isomorphisms \begin{align*} &\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes}\mathbf{L}^{{\rm E}, {\rm sub}}_{M_\infty}K \overset{\sim}{\longrightarrow}\mathbf{L}^{{\rm E}, {\rm sub}}_{M_\infty}K,\\ &\mathbf{R}^{{\rm E}, {\rm sub}}_{M_\infty}K \overset{\sim}{\longrightarrow} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \mathbf{R}^{{\rm E}, {\rm sub}}_{M_\infty}K\), \end{align*} the functors \begin{align*} \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}} &\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to \{\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})\ |\ \rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} \mathcal{F}\overset{\sim}{\longrightarrow} \mathcal{F}\},\\ \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}} &\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to \{\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})\ |\ \mathcal{F}\overset{\sim}{\longrightarrow} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \mathcal{F})\} \end{align*} are well defined. Let $K\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. Then there exists $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$ such that $K\simeq \mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}$.
Let us prove that \begin{align*} &\mathbf{Q}_{M_\infty}^{\rm sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}) \overset{+}{\otimes} \mathcal{F}\) \simeq \mathbf{Q}_{M_\infty}^{\rm sub}(\mathcal{F}),\\ &\mathbf{Q}_{M_\infty}^{\rm sub}(\mathcal{F})\overset{\sim}{\longrightarrow} \mathbf{Q}_{M_\infty}^{\rm sub}\({\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \mathcal{F})\). \end{align*} Since there exists a distinguished triangle $$\Bbbk_{\{t\geq 0\}}\oplus\Bbbk_{\{t\leq 0\}}\longrightarrow \Bbbk_{\{t=0\}}\longrightarrow\Bbbk_{M\times\mathbb{R}}[1]\xrightarrow{+1},$$ it is enough to show that $$\mathbf{Q}_{M_\infty}^{\rm sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}\Bbbk_{M\times\mathbb{R}}[1]\overset{+}{\otimes} \mathcal{F}\) \simeq \mathbf{Q}_{M_\infty}^{\rm sub}\({\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\rho_{M_\infty\times\mathbb{R}_\infty\ast}\Bbbk_{M\times\mathbb{R}}[1], \mathcal{F})\) \simeq0.$$ These assertions follow from Corollary \ref{cor3.11}. Hence, we have \begin{align*} \mathbf{Q}_{M_\infty}^{\rm sub}\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K \simeq \mathbf{Q}_{M_\infty}^{\rm sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}) \overset{+}{\otimes} \mathcal{F}\) \simeq \mathbf{Q}_{M_\infty}^{\rm sub}(\mathcal{F}) \simeq K,\\ \mathbf{Q}_{M_\infty}^{\rm sub}\mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K \simeq \mathbf{Q}_{M_\infty}^{\rm sub}\({\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \mathcal{F})\) \simeq \mathbf{Q}_{M_\infty}^{\rm sub}(\mathcal{F}) \simeq K. \end{align*} Hence we have $\mathbf{Q}_{M_\infty}^{\rm sub}\circ\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}\simeq{\rm id},\ \mathbf{Q}_{M_\infty}^{\rm sub}\circ\mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}\simeq{\rm id}$.
Moreover, it is clear that for any $\mathcal{G}_1\in\{\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})\ |\ \rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} \mathcal{F}\overset{\sim}{\longrightarrow} \mathcal{F}\}$ and any $\mathcal{G}_2\in\{\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})\ |\ \mathcal{F}\overset{\sim}{\longrightarrow}{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \mathcal{F})\}$, we have $\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{Q}_{M_\infty}^{\rm sub}(\mathcal{G}_1)\simeq\mathcal{G}_1$ and $\mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}\circ\mathbf{Q}_{M_\infty}^{\rm sub}(\mathcal{G}_2)\simeq\mathcal{G}_2$. Therefore, there exist equivalences of categories \begin{align*} \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}} &\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\overset{\sim}{\longrightarrow} \{\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})\ |\ \rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} \mathcal{F}\overset{\sim}{\longrightarrow} \mathcal{F}\},\\ \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}} &\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\overset{\sim}{\longrightarrow} \{\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})\ |\ \mathcal{F}\overset{\sim}{\longrightarrow} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \mathcal{F})\}. \end{align*} Let us prove that the quotient functor admits a left (resp.\,right) adjoint $\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}$ (resp.\,$\mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}$). 
Let $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$ and $K\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. Since the functors $\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}, \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}$ induce fully faithful functors \begin{align*} \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}} &\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) \hookrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub}),\\ \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}} &\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) \hookrightarrow{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub}), \end{align*} there exist isomorphisms \begin{align*} \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})} \(\mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}, K\) &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})} \(\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}, \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K\),\\ \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})} \(K, \mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}\) &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})} \(\mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K, \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}\).
\end{align*} Let us prove that there exist isomorphisms in ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub}):$ \begin{align*} &{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K\) \simeq \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K,\\ &\rho_{M_\infty\times\mathbb{R}_\infty\ast}\(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}\)\overset{+}{\otimes}\mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K \simeq \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K. \end{align*} Since $K\in {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, there exists $\mathcal{F}_0\in {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$ such that $K\simeq \mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}_0$. Moreover there exists $F_0\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$ such that $\mathcal{F}_0\simeq\lambda_{M_\infty\times\mathbb{R}_\infty}F_0$ by Proposition \ref{prop3.6} (2). 
Then we have isomorphisms \begin{align*} &{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} \mathcal{F}_0\)\\ \simeq\ &{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \lambda_{M_\infty\times\mathbb{R}_\infty}\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} \lambda_{M_\infty\times\mathbb{R}_\infty}F_0\)\\ \simeq\ &{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \lambda_{M_\infty\times\mathbb{R}_\infty}\(\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} F_0\)\)\\ \simeq\ &\mathbf{R} J_{M_\infty\times\mathbb{R}_\infty }{\mathbf{R}}{\mathcal{I}}hom^+\(\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} F_0\)\\ \simeq\ &\mathbf{R} J_{M_\infty\times\mathbb{R}_\infty }{\mathbf{R}}{\mathcal{I}}hom^+\(\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), F_0\)\\ \simeq\ &{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}( \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \lambda_{M_\infty\times\mathbb{R}_\infty}F_0\) \end{align*} where in the first isomorphism we used Proposition \ref{prop3.7} (3)(i), in the second isomorphism we used Proposition \ref{prop3.12} (4), in the third and fifth isomorphisms we used Propositions \ref{prop3.7} (4)(i) and \ref{prop3.12} (2). 
In the fourth isomorphism we used the fact that for any $G\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$ there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$: \begin{align*} &{\mathbf{R}}{\mathcal{I}}hom^+\(\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} G\)\\ \simeq\ &{\mathbf{R}}{\mathcal{I}}hom^+\(\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), G\). \end{align*} This assertion can be proved in a way similar to \cite[Cor.\:4.3.11]{DK16}. Hence there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub}):$ $${\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K\) \simeq \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K.$$ Moreover, we have isomorphisms \begin{align*} & \rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \mathcal{F}_0\)\\ \simeq\ & \lambda_{M_\infty\times\mathbb{R}_\infty}\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), \lambda_{M_\infty\times\mathbb{R}_\infty}F_0\)\\ \simeq\ & \lambda_{M_\infty\times\mathbb{R}_\infty}\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} \lambda_{M_\infty\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom^+\(\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}),
F_0\)\\ \simeq\ & \lambda_{M_\infty\times\mathbb{R}_\infty}\(\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} {\mathbf{R}}{\mathcal{I}}hom^+\(\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), F_0\)\)\\ \simeq\ & \lambda_{M_\infty\times\mathbb{R}_\infty}\(\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} F_0\)\\ \simeq\ & \lambda_{M_\infty\times\mathbb{R}_\infty}\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} \lambda_{M_\infty\times\mathbb{R}_\infty}F_0 \end{align*} where in the first isomorphism we used Proposition \ref{prop3.7} (3)(i), in the second isomorphism we used Proposition \ref{prop3.12} (2), in the third and fifth isomorphisms we used Proposition \ref{prop3.12} (4). In the fourth isomorphism we used the fact that for any $G\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$ there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$: \begin{align*} \iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} {\mathbf{R}}{\mathcal{I}}hom^+\(\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}), G\) \simeq \iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})\overset{+}{\otimes} G. \end{align*} This assertion can be proved in a way similar to \cite[Cor.\:4.3.11]{DK16}.
Hence there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub}):$ $$\rho_{M_\infty\times\mathbb{R}_\infty\ast}\(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}}\)\overset{+}{\otimes}\mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K \simeq \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K.$$ Therefore, we have \begin{align*} \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})} (\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}, \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K) &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})} (\mathcal{F}, \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K),\\ \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})} (\mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K, \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}) &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})} (\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K, \mathcal{F}), \end{align*} and hence there exist isomorphisms \begin{align*} \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})} (\mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}, K) &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})} (\mathcal{F}, \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K),\\ \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})} (K, \mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}) &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})} (\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K, \mathcal{F}). \end{align*} Therefore, the quotient functor admits a left (resp.\,right) adjoint $\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}$ (resp.\,$\mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}$).
\end{proof} We sometimes denote $\mathbf{Q}_{M_\infty}^{\rm sub}$ (resp.\ $\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}, \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}$ ) by $\mathbf{Q}^{\rm sub}$ (resp.\ $\mathbf{L}^{{\rm E}, {\rm sub}}, \mathbf{R}^{{\rm E}, {\rm sub}}$) for short. Let us set \begin{align*} \mathbf{E}^{\leq 0}(\Bbbk_{M_\infty}^{\rm sub}) & = \{K\in {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\ | \ \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K\in \mathbf{D}^{\leq 0}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})\},\\ \mathbf{E}^{\geq 0}(\Bbbk_{M_\infty}^{\rm sub}) & = \{K\in {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\ | \ \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K\in \mathbf{D}^{\geq 0}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})\}, \end{align*} where $\(\mathbf{D}^{\leq 0}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub}), \mathbf{D}^{\geq 0}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})\)$ is the standard t-structure on ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$. Note that the pair $\(\mathbf{E}^{\leq 0}(\Bbbk_{M_\infty}^{\rm sub}), \mathbf{E}^{\geq 0}(\Bbbk_{M_\infty}^{\rm sub})\)$ is the standard t-structure on ${\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. We denote by \[\mathcal{H}^n \colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to\mathbf{E}^0(\Bbbk_{M_\infty}^{\rm sub})\] the $n$-th cohomology functor, where we set $\mathbf{E}^0(\Bbbk_{M_\infty}^{\rm sub}) := \mathbf{E}^{\leq 0}(\Bbbk_{M_\infty}^{\rm sub})\cap\mathbf{E}^{\geq 0}(\Bbbk_{M_\infty}^{\rm sub})$. By Proposition \ref{prop3.9}, the convolution functors can be lifted to the triangulated category ${\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. We denote them by the same symbols $\overset{+}{\otimes}$, ${\mathbf{R}}{\mathcal{I}}hom^{+, \sub}$.
Namely, we obtain functors \begin{align*} (\cdot)\overset{+}{\otimes}(\cdot)&\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\times {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}),\\ {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\cdot, \cdot)&\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\times {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) \end{align*} which are defined by \begin{align*} \mathbf{Q}_{M_\infty}^{\rm sub}(\mathcal{F})\overset{+}{\otimes} \mathbf{Q}_{M_\infty}^{\rm sub}(\mathcal{G}) & := \mathbf{Q}_{M_\infty}^{\rm sub}\(\mathcal{F}\overset{+}{\otimes}\mathcal{G}\),\\ {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathbf{Q}_{M_\infty}^{\rm sub}(\mathcal{F}), \mathbf{Q}_{M_\infty}^{\rm sub}(\mathcal{G})) & := \mathbf{Q}_{M_\infty}^{\rm sub}\({\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{F}, \mathcal{G})\), \end{align*} for $\mathcal{F}, \mathcal{G}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times \mathbb{R}_\infty}^{\rm sub})$. 
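As a basic sanity check for the lifted convolution product, recall the standard half-line computation for $\overset{+}{\otimes}$ (stated here, for simplicity, in the ambient category ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M\times\mathbb{R}})$; it is not specific to the subanalytic setting): denoting by $m\colon M\times\mathbb{R}^2\to M\times\mathbb{R}$, $(x, t_1, t_2)\longmapsto (x, t_1+t_2)$ the addition map, for any $a, b\in\mathbb{R}$ one has $$\Bbbk_{\{t\geq a\}}\overset{+}{\otimes}\Bbbk_{\{t\geq b\}} \simeq \mathbf{R} m_{!}\Bbbk_{\{t_1\geq a,\ t_2\geq b\}} \simeq \Bbbk_{\{t\geq a+b\}},$$ since the fiber of $m$ over a point $(x, t)$ with $t\geq a+b$ is a non-empty compact interval. In particular, $\Bbbk_{\{t\geq0\}}$ is idempotent with respect to $\overset{+}{\otimes}$, which is consistent with the role played above by $\rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq0\}})$.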
Moreover, by Proposition \ref{prop3.4} (4), for a morphism $f\colon M_\infty\to N_\infty$ of real analytic bordered spaces, the following functors are well defined \begin{align*} \mathbf{E} f_\ast&\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub}), \hspace{7pt} \mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F} \longmapsto \mathbf{Q}_{N_\infty}^{\rm sub}\(\mathbf{R} f_{\mathbb{R}_\infty\ast}\mathcal{F}\), \\ \mathbf{E} f^{-1}&\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}), \hspace{7pt} \mathbf{Q}_{N_\infty}^{\rm sub}\mathcal{G} \longmapsto \mathbf{Q}_{M_\infty}^{\rm sub}\(f_{\mathbb{R}_\infty}^{-1}\mathcal{G}\),\\ \mathbf{E} f_{!!}&\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub}), \hspace{7pt} \mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F} \longmapsto \mathbf{Q}_{N_\infty}^{\rm sub}\(\mathbf{R} f_{\mathbb{R}_\infty!!}\mathcal{F}\),\\ \mathbf{E} f^{!}&\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}), \hspace{7pt} \mathbf{Q}_{N_\infty}^{\rm sub}\mathcal{G} \longmapsto \mathbf{Q}_{M_\infty}^{\rm sub}\(f_{\mathbb{R}_\infty}^{!}\mathcal{G}\).
\end{align*} Let us define external hom functors \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{{\rm E},{\rm sub}}(\cdot, \cdot)&\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\times{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}),\\ {\mathbf{R}}{\mathcal{H}}om^{{\rm E},{\rm sub}}(\cdot, \cdot)&\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\times{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M}),\\ {\mathbf{R}}\mathrm{Hom}^{{\rm E}, {\rm sub}}(\cdot, \cdot)&\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\times{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to {\mathbf{D}}^{\mathrm{b}}(\Bbbk), \end{align*} by \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(\mathbf{Q}_{M_\infty}^{\rm sub} \mathcal{F}_1, \mathbf{Q}_{M_\infty}^{\rm sub} \mathcal{F}_2\) &:=\mathbf{R}\pi_\ast{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}_1, \mathcal{F}_2),\\ {\mathbf{R}}{\mathcal{H}}om^{{\rm E}, {\rm sub}}\(\mathbf{Q}_{M_\infty}^{\rm sub} \mathcal{F}_1, \mathbf{Q}_{M_\infty}^{\rm sub} \mathcal{F}_2\) &:= \rho_{M_\infty\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm E}\(\mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}_1, \mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}_2\),\\ {\mathbf{R}}\mathrm{Hom}^{{\rm E}, {\rm sub}}\(\mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}_1, \mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}_2\) &:= \mathbf{R}\Gamma\(M; {\mathbf{R}}{\mathcal{H}}om^{\rm E}(\mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}_1, \mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}_2)\), \end{align*} for $\mathcal{F}_1, \mathcal{F}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$.
Note that for any $K_1, K_2\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, we have $$\mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}(K_1, K_2)\simeq\mathcal{H}^0{\mathbf{R}}\mathrm{Hom}^{{\rm E}, {\rm sub}}\(K_1, K_2\).$$ Moreover, for $\mathcal{F}_0\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$, the objects \begin{align*} \pi^{-1}\mathcal{F}_0\otimes \mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F} & :=\mathbf{Q}_{M_\infty}^{\rm sub}\(\pi^{-1}\mathcal{F}_0\otimes \mathcal{F}\),\\ {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\pi^{-1}\mathcal{F}_0, \mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}\) & :=\mathbf{Q}_{M_\infty}^{\rm sub}\({\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{F}_0, \mathcal{F})\) \end{align*} are well defined and hence the following functors are well defined \begin{align*} \pi^{-1}(\cdot)\otimes (\cdot) &\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\times{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}),\\ {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\pi^{-1}(\cdot), \cdot\)&\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\times{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}). \end{align*} At the end of this subsection, let us prove that these functors satisfy several properties similar to those of classical sheaves. \begin{proposition}\label{prop3.14} Let $f\colon M_\infty\to N_\infty$ be a morphism of real analytic bordered spaces.
\begin{itemize} \item[\rm (1)] \begin{itemize} \item[\rm (i)] For any $K_1, K_2, K_3\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, one has \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(K_1\overset{+}{\otimes} K_2, K_3\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(K_1, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K_2, K_3)\),\\ {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(K_1\overset{+}{\otimes} K_2, K_3\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(K_1, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K_2, K_3)\),\\ {\mathbf{R}}{\mathcal{H}}om^{{\rm E}, {\rm sub}}\(K_1\overset{+}{\otimes} K_2, K_3\) &\simeq {\mathbf{R}}{\mathcal{H}}om^{{\rm E}, {\rm sub}}\(K_1, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K_2, K_3)\),\\ {\mathbf{R}}\mathrm{Hom}^{{\rm E}, {\rm sub}}\(K_1\overset{+}{\otimes} K_2, K_3\) &\simeq {\mathbf{R}}\mathrm{Hom}^{{\rm E}, {\rm sub}}\(K_1, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K_2, K_3)\),\\ \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\(K_1 \overset{+}{\otimes} K_2, K_3\) &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\(K_1, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K_2, K_3)\).
\end{align*} \item[\rm (ii)] For any $K\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $L\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$, one has \begin{align*} \mathbf{E} f_\ast{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathbf{E} f^{-1}L, K\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(L, \mathbf{E} f_{\ast}K\),\\ \mathbf{R} f_\ast{\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(\mathbf{E} f^{-1}L, K\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(L, \mathbf{E} f_{\ast}K\),\\ \mathbf{R} f_\ast{\mathbf{R}}{\mathcal{H}}om^{{\rm E}, {\rm sub}}\(\mathbf{E} f^{-1}L, K\) &\simeq {\mathbf{R}}{\mathcal{H}}om^{{\rm E}, {\rm sub}}\(L, \mathbf{E} f_{\ast}K\),\\ {\mathbf{R}}\mathrm{Hom}^{{\rm E}, {\rm sub}}\(\mathbf{E} f^{-1}L, K\) &\simeq {\mathbf{R}}\mathrm{Hom}^{{\rm E}, {\rm sub}}\(L, \mathbf{E} f_{\ast}K\),\\ \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\(\mathbf{E} f^{-1}L, K\) &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})}\(L, \mathbf{E} f_{\ast}K\). 
\end{align*} \item[\rm (iii)] For any $K\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $L\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$, one has \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathbf{E} f_{!!}K, L\) &\simeq \mathbf{E} f_\ast{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(K, \mathbf{E} f^{!}L\),\\ {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(\mathbf{E} f_{!!}K, L\) &\simeq \mathbf{R} f_\ast{\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(K, \mathbf{E} f^{!}L\),\\ {\mathbf{R}}{\mathcal{H}}om^{{\rm E}, {\rm sub}}\(\mathbf{E} f_{!!}K, L\) &\simeq \mathbf{R} f_\ast{\mathbf{R}}{\mathcal{H}}om^{{\rm E}, {\rm sub}}\(K, \mathbf{E} f^{!}L\),\\ \mathbf{R}\mathrm{Hom}^{{\rm E}, {\rm sub}}\(\mathbf{E} f_{!!}K, L\) &\simeq \mathbf{R}\mathrm{Hom}^{{\rm E}, {\rm sub}}\(K, \mathbf{E} f^{!}L\),\\ \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})}\(\mathbf{E} f_{!!}K, L\) &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\(K, \mathbf{E} f^{!}L\).
\end{align*} \end{itemize} \item[\rm (2)] For any $K, K_1, K_2\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $L, L_1, L_2\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$, one has \begin{align*} \mathbf{E} f^{-1}\(K_1\overset{+}{\otimes} K_2\) &\simeq \mathbf{E} f^{-1}K_1\overset{+}{\otimes} \mathbf{E} f^{-1}K_2,\\ \mathbf{E} f_{!!}\(K\overset{+}{\otimes}\mathbf{E} f^{-1}L\) &\simeq \mathbf{E} f_{!!}K\overset{+}{\otimes} L,\\ \mathbf{E} f^!{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(L_1, L_2\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathbf{E} f^{-1}L_1, \mathbf{E} f^!L_2\). \end{align*} \item[\rm (3)] For a cartesian diagram \[\xymatrix@M=5pt@R=20pt@C=40pt{ M'_\infty\ar@{->}[r]^-{f'}\ar@{->}[d]_-{g'} & N'_\infty\ar@{->}[d]^-{g}\\ M_\infty\ar@{->}[r]_-{f} & N_\infty}\] and any $K\in {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, one has \begin{align*} \mathbf{E} g^{-1}\mathbf{E} f_{!!}K&\simeq \mathbf{E} f'_{!!}\mathbf{E} g'^{-1}K,\\ \mathbf{E} g^{!}\mathbf{E} f_{\ast}K&\simeq \mathbf{E} f'_{\ast}\mathbf{E} g'^{!}K.
\end{align*} \item[\rm (4)] \begin{itemize} \item[\rm (i)] For any $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $K_1, K_2\in {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, one has \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\pi^{-1}\mathcal{F}\otimes K_1, K_2\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(K_1, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{F}, K_2)\)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\pi^{-1}\mathcal{F}, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K_1, K_2)\),\\ {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(\pi^{-1}\mathcal{F}\otimes K_1, K_2\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{{\rm E},{\rm sub}}\(K_1, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{F}, K_2)\)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{{\rm sub}}\(\mathcal{F}, {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}(K_1, K_2)\),\\ {\mathbf{R}}{\mathcal{H}}om^{{\rm E}, {\rm sub}}\(\pi^{-1}\mathcal{F}\otimes K_1, K_2\) &\simeq {\mathbf{R}}{\mathcal{H}}om^{{\rm E},{\rm sub}}\(K_1, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{F}, K_2)\)\\ &\simeq {\mathbf{R}}{\mathcal{H}}om^{{\rm sub}}\(\mathcal{F}, {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}(K_1, K_2)\),\\ {\mathbf{R}}\mathrm{Hom}^{{\rm E}, {\rm sub}}\(\pi^{-1}\mathcal{F}\otimes K_1, K_2\) &\simeq {\mathbf{R}}\mathrm{Hom}^{{\rm E},{\rm sub}}\(K_1, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{F}, K_2)\)\\ &\simeq {\mathbf{R}}\mathrm{Hom}\(\mathcal{F}, {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}(K_1, K_2)\),\\ \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\(\pi^{-1}\mathcal{F}\otimes K_1, K_2\) &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\(K_1, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{F}, K_2)\)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\(\mathcal{F}, {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}(K_1, K_2)\).
\end{align*} \item[\rm (ii)] For any $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, any $\mathcal{G}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$, any $K\in {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $L\in {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$, one has \begin{align*} \mathbf{E} f^{-1}\(\pi^{-1}\mathcal{F}\otimes L\) &\simeq \pi^{-1}f^{-1}\mathcal{F}\otimes \mathbf{E} f^{-1}L,\\ \mathbf{E} f_{!!}\(\pi^{-1}\mathcal{F}\otimes \mathbf{E} f^{-1}L\) &\simeq \pi^{-1}\mathbf{R} f_{!!}\mathcal{F}\otimes L,\\ \mathbf{E} f_{!!}\(\pi^{-1}f^{-1}\mathcal{F}\otimes K\) &\simeq \pi^{-1}\mathcal{F}\otimes \mathbf{E} f_{!!}K,\\ \mathbf{E} f^!{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\pi^{-1}\mathcal{G}, L\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\pi^{-1}f^{-1}\mathcal{G}, \mathbf{E} f^!L\). \end{align*} \item[\rm (iii)] For any $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $K, L\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, one has \begin{align*} \pi^{-1}\mathcal{F}\otimes \(K\overset{+}{\otimes} L\) &\simeq \(\pi^{-1}\mathcal{F}\otimes K\)\overset{+}{\otimes} L,\\ {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\pi^{-1}\mathcal{F}, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, L)\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\pi^{-1}\mathcal{F}\otimes K, L\)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(K, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{F}, L)\),\\ {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(K\overset{+}{\otimes} L, \mathcal{F}\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(K, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(L, \mathcal{F})\). \end{align*} \end{itemize} \end{itemize} \end{proposition} \begin{proof} (1)(i) The first (resp.\,second) assertion follows from Proposition \ref{prop3.8} (resp.\,\ref{prop3.4} (2)). The third (resp.\,fourth, fifth) assertion follows from the second (resp.\, third, fourth) one.
\medskip \noindent (ii) The first (resp.\,second) assertion follows from Proposition \ref{prop3.8} (resp.\,\ref{prop3.4} (2)). The third (resp.\,fourth, fifth) assertion follows from the second (resp.\, third, fourth) one. \medskip \noindent (iii) The first (resp.\,second) assertion follows from Proposition \ref{prop3.8} (resp.\,\ref{prop3.4} (2)). The third (resp.\,fourth, fifth) assertion follows from the second (resp.\, third, fourth) one. \medskip \noindent (2) The three assertions follow from Proposition \ref{prop3.8}. \medskip \noindent (3) The two assertions follow from Proposition \ref{prop3.4} (4). \medskip \noindent (4)(i) By the definition we have \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\pi^{-1}\mathcal{F}\otimes K_1, K_2\) &\simeq \mathbf{Q}_{M_\infty}^{\rm sub}{\mathbf{R}}{\mathcal{I}}hom^{+, \sub} \(\pi^{-1}\mathcal{F}\otimes \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_1, \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K_2\) \end{align*} and hence by Proposition \ref{prop3.9} there exist isomorphisms \begin{align*} &\mathbf{Q}_{M_\infty}^{\rm sub}{\mathbf{R}}{\mathcal{I}}hom^{+, \sub} \(\pi^{-1}\mathcal{F}\otimes \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_1, \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K_2\)\\ \simeq &\,\mathbf{Q}_{M_\infty}^{\rm sub}{\mathbf{R}}{\mathcal{I}}hom^{+, \sub} \(\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_1, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{F}, \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K_2)\)\\ \simeq &\,{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(K_1, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{F}, K_2)\) \end{align*} and \begin{align*} &\mathbf{Q}_{M_\infty}^{\rm sub}{\mathbf{R}}{\mathcal{I}}hom^{+, \sub} \(\pi^{-1}\mathcal{F}\otimes \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_1, \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K_2\)\\ \simeq &\,\mathbf{Q}_{M_\infty}^{\rm sub}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub} \(\pi^{-1}\mathcal{F}, {\mathbf{R}}{\mathcal{I}}hom^{+,
\sub}(\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_1, \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}K_2)\)\\ \simeq &\,{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\pi^{-1}\mathcal{F}, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K_1, K_2)\). \end{align*} \medskip \noindent By the definition we have $${\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(\pi^{-1}\mathcal{F}\otimes K_1, K_2\) \simeq \mathbf{R}\pi_{\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub} \(\pi^{-1}\mathcal{F}\otimes\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_1, \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_2\)$$ and hence by Proposition \ref{prop3.4} (2) there exist isomorphisms \begin{align*} &\mathbf{R}\pi_{\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub} \(\pi^{-1}\mathcal{F}\otimes\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_1, \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_2\)\\ \simeq &\mathbf{R}\pi_{\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub} \(\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_1, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\pi^{-1}\mathcal{F}, \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_2\)\)\\ \simeq &{\mathbf{R}}{\mathcal{I}}hom^{{\rm E},{\rm sub}}\(K_1, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\pi^{-1}\mathcal{F}, K_2)\) \end{align*} and \begin{align*} &\mathbf{R}\pi_{\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub} \(\pi^{-1}\mathcal{F}\otimes\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_1, \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_2\)\\ \simeq &\mathbf{R}\pi_{\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub} \(\pi^{-1}\mathcal{F}, {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_1, \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_2\)\)\\ \simeq &{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\mathcal{F}, \mathbf{R}\pi_{\ast}{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}\(\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_1, \mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K_2\)\)\\ \simeq &{\mathbf{R}}{\mathcal{I}}hom^{{\rm sub}}\(\mathcal{F}, {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}(K_1, K_2)\).
\end{align*} \medskip \noindent The third (resp.\,fourth, fifth) assertion follows from the second (resp.\,third, fourth) one. \medskip \noindent The four assertions of (ii) follow from Proposition \ref{prop3.4} (3), (4). \medskip \noindent The three assertions of (iii) follow from Proposition \ref{prop3.9}. \end{proof} \subsection{Relation between Enhanced Ind-Sheaves and Enhanced Subanalytic Sheaves}\label{subsec3.4} In this subsection, we shall explain a relation between enhanced subanalytic sheaves and enhanced ind-sheaves. Theorems \ref{main1} and \ref{main2} are the main results of this paper. Let $M_\infty = (M, \che{M})$ be a real analytic bordered space. Let us consider the quotient category $$ {\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty}) := {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty \times\mathbb{R}_\infty})/\pi^{-1}{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty}). $$ Note that this is a full triangulated subcategory of ${\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ by using $\pi^{-1}{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty}) = \pi^{-1}{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\cap{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$ and \cite[Prop.\:1.6.10]{KS90}. Note also that $\Bbbk_{M_\infty}^{\rm E}\in{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. Moreover, Proposition \ref{prop3.15} below follows from Lemma \ref{lem3.5} and Proposition \ref{prop3.12} (3). \begin{proposition}\label{prop3.15} Let $f\colon M_\infty\to N_\infty$ be a morphism of real analytic bordered spaces associated with a morphism $\che{f}\colon\che{M}\to\che{N}$ of real analytic manifolds.
The functors below are well defined: \begin{itemize} \item[\rm (1)] $e_{M_\infty}\colon {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})\to{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$, \item[\rm (2)] $(\cdot)\overset{+}{\otimes}(\cdot)\colon {\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})\times {\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty}) \to{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$, \item[\rm (3)] $\mathbf{E} f^{-1}\colon{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})\to{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$, \item[\rm (4)] $\mathbf{E} f_{!!}\colon{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})\to{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})$, \item[\rm (5)] $\mathbf{E} f^{!}\colon{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})\to{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. \end{itemize} \end{proposition} By Proposition \ref{prop3.7} (2)(iii), (3)(v), the following functors are well defined \begin{align*} I_{M_\infty}^{\rm E}&\colon{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}), \hspace{7pt} \mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}\mapsto \mathbf{Q}_{M_\infty}I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F},\\ J_{M_\infty}^{\rm E}&\colon{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\to{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}), \hspace{7pt} \mathbf{Q}_{M_\infty}F\mapsto \mathbf{Q}_{M_\infty}^{\rm sub}\mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}F. \end{align*} \begin{theorem}\label{main1} Let $M_\infty = (M, \che{M})$ be a real analytic bordered space. 
Then we have \begin{itemize} \item[\rm (1)] the pair $(I_{M_\infty}^{\rm E}, J_{M_\infty}^{\rm E})$ is an adjoint pair and there exists a canonical isomorphism ${\rm id}\overset{\sim}{\longrightarrow} J_{M_\infty}^{\rm E}\circ I_{M_\infty}^{\rm E}$, \item[\rm (2)] there exists an equivalence of triangulated categories: \[\xymatrix@M=7pt@C=45pt{ {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\ar@<0.8ex>@{->}[r]^-{I_{M_\infty}^{\rm E}}_-\sim & {\mathbf{E}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_{M_\infty}) \ar@<0.8ex>@{->}[l]^-{J_{M_\infty}^{\rm E}}. }\] \end{itemize} \end{theorem} \begin{proof} (1) For any $K\in {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $L\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$, there exist isomorphisms \begin{align*} \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}\(I_{M_\infty}^{\rm E} K, L\) &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}\( \mathbf{Q}_{M_\infty}I_{M_\infty\times\mathbb{R}_\infty}\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K, L\)\\ &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\(K, \mathbf{Q}_{M_\infty}^{\rm sub}\mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}\mathbf{R}_{M_\infty}^{{\rm E}}L\)\\ &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\(K, J_{M_\infty}^{\rm E} L\), \end{align*} where in the second isomorphism we used Proposition \ref{prop3.6} (1) and Lemma \ref{lem3.13}. This implies that the pair $(I_{M_\infty}^{\rm E}, J_{M_\infty}^{\rm E})$ is an adjoint pair. Moreover, by Proposition \ref{prop3.6} (1) it is clear that ${\rm id}\overset{\sim}{\longrightarrow} J_{M_\infty}^{\rm E}\circ I_{M_\infty}^{\rm E}$. 
\medskip \noindent (2) Since the functor $I_{M_\infty\times\mathbb{R}_\infty}\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})\to {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$ is well defined, for any $K\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ we have $$I_{M_\infty}^{\rm E} K =\mathbf{Q}_{M_\infty}I_{M_\infty\times\mathbb{R}_\infty}\mathbf{L}_{M_\infty}^{{\rm E}, {\rm sub}}K \in{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty}).$$ Let $L\in {\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. Then there exists $G\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$ such that $L \simeq \mathbf{Q}_{M_\infty}G$ and hence $$I_{M_\infty}^{\rm E} J_{M_\infty}^{\rm E} L \simeq \mathbf{Q}_{M_\infty}I_{M_\infty\times\mathbb{R}_\infty}\mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}G \simeq \mathbf{Q}_{M_\infty}G \simeq L,$$ where in the second isomorphism we used Proposition \ref{prop3.6} (2). This completes the proof. \end{proof} We will denote by $\lambda_{M_\infty}^{\rm E} \colon {\mathbf{E}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_{M_\infty})\overset{\sim}{\longrightarrow} {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ the inverse functor of $I_{M_\infty}^{\rm E}\colon {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\overset{\sim}{\longrightarrow} {\mathbf{E}}^{\mathrm{b}}_{{\rm I}{\mathbb{R}-c}}({\rm I}\Bbbk_{M_\infty})$. The functors $I^{\rm E}, J^{\rm E}, \lambda^{\rm E}$ commute with the various functors as below. \begin{proposition}\label{prop3.18} Let $f\colon M_\infty\to N_\infty$ be a morphism of real analytic bordered spaces associated with a morphism $\che{f}\colon\che{M}\to\che{N}$ of real analytic manifolds. 
\begin{itemize} \item[\rm (1)] For any $K, K_1, K_2\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $L\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$, we have \begin{itemize} \item[\rm (i)] $I_{M_\infty}^{\rm E}\(K_1\overset{+}{\otimes} K_2\) \simeq I_{M_\infty}^{\rm E} K_1\overset{+}{\otimes} I_{M_\infty}^{\rm E} K_2,$ \item[\rm (ii)] $J_{M_\infty}^{\rm E}{\mathbf{R}}{\mathcal{I}}hom^+(I_{M_\infty}^{\rm E} K, L) \simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, J_{M_\infty}^{\rm E} L)$ \item[\rm (iii)] $\mathbf{R} J_{M_\infty}{\mathbf{R}}{\mathcal{I}}hom^{{\rm E}}(I_{M_\infty}^{\rm E} K, L) \simeq {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}(K, J_{M_\infty}^{\rm E} L).$ \end{itemize} \item[\rm (2)] For any $K\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $L\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$, we have \begin{itemize} \item[\rm (i)] $I_{M_\infty}^{\rm E}\mathbf{E} f^{-1}L\simeq \mathbf{E} f^{-1}I_{N_\infty}^{\rm E} L$, \item[\rm (ii)] $\mathbf{E} f_{!!}I_{M_\infty}^{\rm E} K\simeq I_{N_\infty}^{\rm E}\mathbf{E} f_{!!}K$, \item[\rm (iii)] $I_{M_\infty}^{\rm E}\mathbf{E} f^{!}L\simeq \mathbf{E} f^{!}I_{N_\infty}^{\rm E} L$. \end{itemize} \item[\rm (3)] For any $K\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ and any $L\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty})$, we have \begin{itemize} \item[\rm (i)] $\mathbf{E} f_{\ast}J_{M_\infty}^{\rm E} K\simeq J_{N_\infty}^{\rm E}\mathbf{E} f_{\ast}K$, \item[\rm (ii)] $J_{M_\infty}^{\rm E}\mathbf{E} f^{!}L\simeq \mathbf{E} f^{!}J_{N_\infty}^{\rm E} L$. 
\end{itemize} \item[\rm (4)] For any $K, K_1, K_2\in{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$ and any $L\in{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})$, we have \begin{itemize} \item[\rm (i)] $\lambda_{M_\infty}^{\rm E}\mathbf{E} f^{-1}L\simeq \mathbf{E} f^{-1}\lambda_{N_\infty}^{\rm E} L$, \item[\rm (ii)] $\mathbf{E} f_{!!}\lambda_{M_\infty}^{\rm E} K\simeq \lambda_{N_\infty}^{\rm E}\mathbf{E} f_{!!}K$, \item[\rm (iii)] $\lambda_{M_\infty}^{\rm E}\(K_1\overset{+}{\otimes} K_2\) \simeq \lambda_{M_\infty}^{\rm E} K_1\overset{+}{\otimes} \lambda_{M_\infty}^{\rm E} K_2$. \end{itemize} \end{itemize} \end{proposition} \begin{proof} Let us denote by ${f}_{\mathbb{R}_\infty}\colon M_\infty\times\mathbb{R}_\infty\to N_\infty\times\mathbb{R}_\infty$ the morphism $f\times{\rm id}_{\mathbb{R}_\infty}$ of real analytic bordered spaces. \medskip \noindent (1) Let $K, K_1, K_2\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and $L\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$. Then there exist $\mathcal{F}, \mathcal{F}_1, \mathcal{F}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$, $G\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$ such that $K\simeq\mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}, K_1\simeq\mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}_1, K_2\simeq\mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F}_2, L\simeq\mathbf{Q}_{M_\infty}G$. \smallskip \noindent (i) By Proposition \ref{prop3.12} (1), we have \begin{align*} I_{M_\infty}^{\rm E}\(K_1\overset{+}{\otimes} K_2\) &\simeq \mathbf{Q}_{M_\infty}I_{M_\infty\times\mathbb{R}_\infty}\(\mathcal{F}_1\overset{+}{\otimes} \mathcal{F}_2\)\\ &\simeq \mathbf{Q}_{M_\infty}\(I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_1\overset{+}{\otimes} I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}_2\)\\ &\simeq I_{M_\infty}^{\rm E} K_1\overset{+}{\otimes} I_{M_\infty}^{\rm E} K_2. 
\end{align*} \noindent (ii) By Proposition \ref{prop3.12} (2), we have \begin{align*} J_{M_\infty}^{\rm E}{\mathbf{R}}{\mathcal{I}}hom^+(I_{M_\infty}^{\rm E} K, L) &\simeq \mathbf{Q}_{M_\infty}^{\rm sub} \mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom^+(I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}, G)\\ &\simeq \mathbf{Q}_{M_\infty}^{\rm sub}{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathcal{F}, \mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}G)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, J_{M_\infty}^{\rm E} L). \end{align*} \noindent (iii) By Proposition \ref{prop3.7} (1), (3)(iv), we have \begin{align*} \mathbf{R} J_{M_\infty}{\mathbf{R}}{\mathcal{I}}hom^{{\rm E}}(I_{M_\infty}^{\rm E} K, L) &\simeq \mathbf{R} J_{M_\infty}\pi_\ast{\mathbf{R}}{\mathcal{I}}hom(I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}, G)\\ &\simeq \pi_\ast\mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom(I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F}, G)\\ &\simeq \pi_\ast{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}(\mathcal{F}, \mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}G)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}(K, J_{M_\infty}^{\rm E} L). \end{align*} \medskip \noindent (2) Let $K\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, $L\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$. Then there exist $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty\times\mathbb{R}_\infty}^{\rm sub})$, $\mathcal{G}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty\times\mathbb{R}_\infty}^{\rm sub})$ such that $K\simeq\mathbf{Q}_{M_\infty}^{\rm sub}\mathcal{F},\ L\simeq\mathbf{Q}_{N_\infty}^{\rm sub}\mathcal{G}$. 
\smallskip \noindent (i) By Proposition \ref{prop3.7} (2)(iii), we have \begin{align*} I_{M_\infty}^{\rm E}\mathbf{E} f^{-1}L \simeq \mathbf{Q}_{M_\infty}I_{M_\infty\times\mathbb{R}_\infty}f_{\mathbb{R}_\infty}^{-1}\mathcal{G} \simeq \mathbf{Q}_{M_\infty}f_{\mathbb{R}_\infty}^{-1}I_{N_\infty\times\mathbb{R}_\infty}\mathcal{G}\simeq \mathbf{E} f^{-1}I_{N_\infty}^{\rm E} L. \end{align*} \noindent (ii) By Proposition \ref{prop3.7} (2)(iv), we have \begin{align*} \mathbf{E} f_{!!}I_{M_\infty}^{\rm E} K \simeq \mathbf{Q}_{N_\infty}f_{\mathbb{R}_\infty!!}I_{M_\infty\times\mathbb{R}_\infty}\mathcal{F} \simeq \mathbf{Q}_{N_\infty}I_{N_\infty\times\mathbb{R}_\infty}f_{\mathbb{R}_\infty!!}\mathcal{F} \simeq I_{N_\infty}^{\rm E}\mathbf{E} f_{!!}K. \end{align*} \noindent (iii) By Proposition \ref{prop3.7} (2)(v), we have \begin{align*} I_{M_\infty}^{\rm E}\mathbf{E} f^{!}L \simeq \mathbf{Q}_{M_\infty}I_{M_\infty\times\mathbb{R}_\infty}f_{\mathbb{R}_\infty}^{!}\mathcal{G} \simeq \mathbf{Q}_{M_\infty}f_{\mathbb{R}_\infty}^{!}I_{N_\infty\times\mathbb{R}_\infty}\mathcal{G} \simeq \mathbf{E} f^{!}I_{N_\infty}^{\rm E} L. \end{align*} \medskip \noindent (3) Let $K\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$, $L\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty})$. Then there exist $F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$, $G\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{N_\infty\times\mathbb{R}_\infty})$ such that $K\simeq\mathbf{Q}_{M_\infty} F,\ L\simeq\mathbf{Q}_{N_\infty} G$. \smallskip \noindent (i) By Proposition \ref{prop3.7} (3)(iv), we have \begin{align*} \mathbf{E} f_{\ast}J_{M_\infty}^{\rm E} K \simeq \mathbf{Q}_{N_\infty}^{\rm sub} \mathbf{R} f_{\mathbb{R}_\infty\ast}\mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}F \simeq \mathbf{Q}_{N_\infty}^{\rm sub} \mathbf{R} J_{N_\infty\times\mathbb{R}_\infty}\mathbf{R} f_{\mathbb{R}_\infty\ast}F \simeq J_{N_\infty}^{\rm E}\mathbf{E} f_{\ast} K. 
\end{align*} \noindent (ii) By Proposition \ref{prop3.7} (3)(v), we have \begin{align*} J_{M_\infty}^{\rm E}\mathbf{E} f^{!}L \simeq \mathbf{Q}_{M_\infty}^{\rm sub} \mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}f_{\mathbb{R}_\infty}^{!}G \simeq \mathbf{Q}_{M_\infty}^{\rm sub} f_{\mathbb{R}_\infty}^{!}\mathbf{R} J_{N_\infty\times\mathbb{R}_\infty}G \simeq \mathbf{E} f^{!}J_{N_\infty}^{\rm E} L. \end{align*} \medskip \noindent (4) Let $K, K_1, K_2\in{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$, $L\in{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})$. \smallskip \noindent (i) By using (2)(i) and Theorem \ref{main1} (2), we have \begin{align*} \lambda_{M_\infty}^{\rm E}\mathbf{E} f^{-1}L \simeq \lambda_{M_\infty}^{\rm E}\mathbf{E} f^{-1}I_{N_\infty}^{\rm E} \lambda_{N_\infty}^{\rm E} L \simeq \lambda_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} \mathbf{E} f^{-1}\lambda_{N_\infty}^{\rm E} L \simeq \mathbf{E} f^{-1}\lambda_{N_\infty}^{\rm E} L. \end{align*} \noindent (ii) By using (2)(ii) and Theorem \ref{main1} (2), we have \begin{align*} \mathbf{E} f_{!!}\lambda_{M_\infty}^{\rm E} K \simeq \lambda_{N_\infty}^{\rm E} I_{N_\infty}^{\rm E} \mathbf{E} f_{!!}\lambda_{M_\infty}^{\rm E} K \simeq \lambda_{N_\infty}^{\rm E}\mathbf{E} f_{!!}I_{M_\infty}^{\rm E}\lambda_{M_\infty}^{\rm E} K \simeq \lambda_{N_\infty}^{\rm E}\mathbf{E} f_{!!}K. \end{align*} \noindent (iii) By using (1)(i) and Theorem \ref{main1} (2), we have \begin{align*} \lambda_{M_\infty}^{\rm E}\(K_1\overset{+}{\otimes} K_2\) &\simeq \lambda_{M_\infty}^{\rm E}\( I_{M_\infty}^{\rm E}\lambda_{M_\infty}^{\rm E} K_1 \overset{+}{\otimes} I_{M_\infty}^{\rm E}\lambda_{M_\infty}^{\rm E} K_2\)\\ &\simeq \lambda_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E}\( \lambda_{M_\infty}^{\rm E} K_1\overset{+}{\otimes} \lambda_{M_\infty}^{\rm E} K_2\)\\ &\simeq \lambda_{M_\infty}^{\rm E} K_1\overset{+}{\otimes} \lambda_{M_\infty}^{\rm E} K_2. 
\end{align*} \end{proof} Let us prove that the functors $I_{M_\infty}^{\rm E}, J_{M_\infty}^{\rm E}$ preserve the $\mathbb{R}$-constructibility. Let us recall that an enhanced ind-sheaf $K$ on $M_\infty$ is $\mathbb{R}$-constructible if for any open subset $U$ of $M$ which is subanalytic and relatively compact in $\che{M}$ there exists $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{U_\infty\times\mathbb{R}_\infty})$ such that $\mathbf{E} i_{U_\infty}^{-1}K\simeq \Bbbk_{U_\infty}^{{\rm E}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}\iota_{U_\infty\times\mathbb{R}_\infty}\mathcal{F}$. We denote by ${\mathbf{E}}^{\mathrm{b}}_{{\mathbb{R}-c}}({\rm I}\Bbbk_{M_\infty})$ the category of $\mathbb{R}$-constructible enhanced ind-sheaves. See \cite[\S3.3]{DK16-2} for the details. We shall set $$\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}} := \mathbf{Q}_{M_\infty}^{\rm sub} \(\underset{a\to +\infty}{\varinjlim}\ \rho_{M_\infty\times\mathbb{R}_\infty\ast}\Bbbk_{\{t\geq a\}} \)\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}).$$ \begin{lemma}\label{lem3.19} There exist an isomorphism $I_{M_\infty}^{\rm E}\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\simeq \Bbbk_{M_\infty}^{\rm E}$ in ${\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ and an isomorphism $J_{M_\infty}^{\rm E}\Bbbk_{M_\infty}^{{\rm E}}\simeq \Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}$ in ${\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. \end{lemma} \begin{proof} By the definition of $\Bbbk_{M_\infty}^{\rm E}$, we have $\Bbbk_{M_\infty}^{\rm E} \simeq \mathbf{Q}_{M_\infty}j_{M_\infty\times\mathbb{R}_\infty}^{-1}\(``\underset{a\to +\infty}{\varinjlim}"\ \iota_{\che{M}\times\var{\mathbb{R}}}\Bbbk_{\{t\geq a\}}\)$. 
Since $I_{\che{M}\times\var{\mathbb{R}}}\circ\rho_{\che{M}\times\var{\mathbb{R}}\ast}\simeq \iota_{\che{M}\times\var{\mathbb{R}}}$ and the functor $I$ commutes with filtrant inductive limits, there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{\che{M}\times\var{\mathbb{R}}})$: $$``\underset{a\to +\infty}{\varinjlim}"\ \iota_{\che{M}\times\var{\mathbb{R}}}\Bbbk_{\{t\geq a\}} \simeq I_{\che{M}\times\var{\mathbb{R}}}\( \underset{a\to +\infty}{\varinjlim}\ \rho_{\che{M}\times\var{\mathbb{R}}\ast}\Bbbk_{\{t\geq a\}}\).$$ Hence, we have \begin{align*} \Bbbk_{M_\infty}^{\rm E} &\simeq \mathbf{Q}_{M_\infty}j_{M_\infty\times\mathbb{R}_\infty}^{-1}I_{\che{M}\times\var{\mathbb{R}}}\( \underset{a\to +\infty}{\varinjlim}\ \rho_{\che{M}\times\var{\mathbb{R}}\ast}\Bbbk_{\{t\geq a\}}\)\\ &\simeq I_{M_\infty}^{\rm E}\mathbf{Q}_{M_\infty}^{\rm sub} \(\underset{a\to +\infty}{\varinjlim}\ j_{M_\infty\times\mathbb{R}_\infty}^{-1}\rho_{\che{M}\times\var{\mathbb{R}}\ast}\Bbbk_{\{t\geq a\}}\)\\ &\simeq I_{M_\infty}^{\rm E}\mathbf{Q}_{M_\infty}^{\rm sub} \(\underset{a\to +\infty}{\varinjlim}\ \rho_{M_\infty\times\mathbb{R}_\infty\ast}\Bbbk_{\{t\geq a\}}\)\\ &\simeq I_{M_\infty}^{\rm E}\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}. \end{align*} The second assertion follows from the first assertion and Theorem \ref{main1} (1). \end{proof} \begin{proposition}\label{prop3.20} The triangulated category ${\mathbf{E}}^{\mathrm{b}}_{{\mathbb{R}-c}}({\rm I}\Bbbk_{M_\infty})$ is a full triangulated subcategory of ${\mathbf{E}}^{\mathrm{b}}_{{{\rm I}\mathbb{R}-c}}({\rm I}\Bbbk_{M_\infty})$. \end{proposition} \begin{proof} Let $K\in{\mathbf{E}}^{\mathrm{b}}_{{\mathbb{R}-c}}({\rm I}\Bbbk_{M_\infty})$. Since the pair $(I_{M_\infty}^{\rm E}, J_{M_\infty}^{\rm E})$ is an adjoint pair, we have a morphism $I_{M_\infty}^{\rm E} J_{M_\infty}^{\rm E} K\to K$ in ${\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$. 
Since $K$ is $\mathbb{R}$-constructible, for any open subset $U$ of $M$ which is subanalytic and relatively compact in $\che{M}$ there exists $\mathcal{F}^U\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{U_\infty\times\mathbb{R}_\infty})$ such that $\mathbf{E} i_{U_\infty}^{-1}K\simeq \Bbbk_{U_\infty}^{{\rm E}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}\iota_{U_\infty\times\mathbb{R}_\infty}\mathcal{F}^U$. By Proposition \ref{prop3.7} (4)(i), Theorem \ref{main1} (2), Proposition \ref{prop3.18} (1)(i) and Lemma \ref{lem3.19}, we have $$\Bbbk_{U_\infty}^{{\rm E}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}\iota_{U_\infty\times\mathbb{R}_\infty}\mathcal{F}^U \simeq I_{U_\infty}^{\rm E}\(\Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}^{\rm sub}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{F}^U\)\in{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{U_\infty}).$$ This implies that for any open subset $U$ of $M$ which is subanalytic and relatively compact in $\che{M}$, $\mathbf{E} i_{U_\infty}^{-1}K\in{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{U_\infty})$. Hence, by Proposition \ref{prop3.18} (2)(i), (3)(ii) there exist isomorphisms \begin{align*} \(I_{M_\infty}^{\rm E} J_{M_\infty}^{\rm E} K\)|_{U_\infty} \simeq I_{U_\infty}^{\rm E} J_{U_\infty}^{\rm E} (K|_{U_\infty}) \simeq K|_{U_\infty} \end{align*} for any open subset $U$ of $M$ which is subanalytic and relatively compact in $\che{M}$. This implies that $I_{M_\infty}^{\rm E} J_{M_\infty}^{\rm E} K\overset{\sim}{\longrightarrow} K$ and hence $K\in{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. \end{proof} \begin{definition}\label{def3.21} Let $M_\infty = (M, \che{M})$ be a real analytic bordered space. 
We say that an enhanced subanalytic sheaf $K$ is $\mathbb{R}$-constructible if for any open subset $U$ of $M$ which is subanalytic and relatively compact in $\che{M}$ there exists $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{U_\infty\times\mathbb{R}_\infty})$ such that $$\mathbf{E} i_{U_\infty}^{-1}K\simeq \Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}^{\rm sub}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{F}.$$ Let us denote by ${\mathbf{E}}^{\mathrm{b}}_{{\mathbb{R}-c}}(\Bbbk_{M_\infty}^{\rm sub})$ the category of $\mathbb{R}$-constructible enhanced subanalytic sheaves. \end{definition} \begin{theorem}\label{main2} Let $M_\infty$ be a real analytic bordered space. Then the functors $I_{M_\infty}^{\rm E}, J_{M_\infty}^{\rm E}$ induce an equivalence of categories \[\xymatrix@M=7pt@C=45pt{ {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})\ar@<0.8ex>@{->}[r]^-{I_{M_\infty}^{\rm E}}_-\sim & {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty}) \ar@<0.8ex>@{->}[l]^-{J_{M_\infty}^{\rm E}}. }\] \end{theorem} \begin{proof} It is enough to show that the following functors are well defined: \begin{align*} I_{M_\infty}^{\rm E}&\colon{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty}),\\ J_{M_\infty}^{\rm E}&\colon{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})\to{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub}). \end{align*} Let $K\in {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$ and $U$ an open subset of $M$ which is subanalytic and relatively compact in $\che{M}$. 
Then there exists $\mathcal{F}^U\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{U_\infty\times\mathbb{R}_\infty})$ such that $\mathbf{E} i_{U_\infty}^{-1}K\simeq \Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}^{\rm sub}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{F}^U.$ By Propositions \ref{prop3.7} (4)(i), \ref{prop3.18} (1)(i), (2)(i) and Lemma \ref{lem3.19}, there exist isomorphisms \begin{align*} \mathbf{E} i_{U_\infty}^{-1}I_{M_\infty}^{\rm E} K &\simeq I_{U_\infty}^{\rm E} \mathbf{E} i_{U_\infty}^{-1}K\\ &\simeq I_{U_\infty}^{\rm E}\(\Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}^{\rm sub}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{F}^U\)\\ &\simeq I_{U_\infty}^{\rm E}\Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} I_{U_\infty}^{\rm E}\mathbf{Q}_{U_\infty}^{\rm sub}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{F}^U \\ &\simeq \Bbbk_{U_\infty}^{{\rm E}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}\iota_{U_\infty\times\mathbb{R}_\infty}\mathcal{F}^U. \end{align*} Therefore, we have $I_{M_\infty}^{\rm E} K\in {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. Let $K\in {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$ and $U$ an open subset of $M$ which is subanalytic and relatively compact in $\che{M}$. 
Then there exists $\mathcal{F}^U\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{U_\infty\times\mathbb{R}_\infty})$ such that $\mathbf{E} i_{U_\infty}^{-1}K\simeq \Bbbk_{U_\infty}^{{\rm E}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}\iota_{U_\infty\times\mathbb{R}_\infty}\mathcal{F}^U.$ By Propositions \ref{prop3.7} (3)(i), \ref{prop3.18} (4)(i), (iii), Lemma \ref{lem3.19} and Proposition \ref{prop3.20}, there exist isomorphisms \begin{align*} \mathbf{E} i_{U_\infty}^{-1}J_{M_\infty}^{\rm E} K &\simeq J_{U_\infty}^{\rm E}\mathbf{E} i_{U_\infty}^{-1} K\\ &\simeq J_{U_\infty}^{\rm E} \( \Bbbk_{U_\infty}^{{\rm E}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}\iota_{U_\infty\times\mathbb{R}_\infty}\mathcal{F}^U\)\\ &\simeq J_{U_\infty}^{\rm E}\Bbbk_{U_\infty}^{{\rm E}}\overset{+}{\otimes} J_{U_\infty}^{\rm E}\mathbf{Q}_{U_\infty}\iota_{U_\infty\times\mathbb{R}_\infty}\mathcal{F}^U\\ &\simeq \Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}^{\rm sub}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{F}^U. \end{align*} Therefore, we have $J_{M_\infty}^{\rm E} K\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$. This completes the proof. 
\end{proof} Let us summarize the results of Proposition \ref{prop3.20} and Theorems \ref{main1}, \ref{main2} in the following commutative diagram: \[\xymatrix@M=10pt@R=35pt@C=85pt{ {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\ar@<0.7ex>@{^{(}->}[r]^-{I_{M_\infty}^{\rm E}} \ar@<-0.7ex>@{->}[rd]^-{I_{M_\infty}^{\rm E}}_-\sim & {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\ar@<0.7ex>@{->}[l]^-{J_{M_\infty}^{\rm E}}\\ {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})\ar@{}[u]|-{\bigcup} \ar@<0.7ex>@{->}[rd]^-{I_{M_\infty}^{\rm E}}_-\sim & {\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty}).\ar@{}[u]|-{\bigcup} \ar@<2.5ex>@{->}[lu]^-{\lambda_{M_\infty}^{\rm E}}\\ {} & {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})\ar@{}[u]|-{\bigcup} \ar@<1.0ex>@{->}[lu]^-{\lambda_{M_\infty}^{\rm E}}. }\] At the end of this subsection, let us consider an embedding functor from ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ to ${\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and a Verdier duality functor for enhanced subanalytic sheaves. 
Let us define a functor $$e_{M_\infty}^{\rm sub} \colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) \to {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}), \hspace{7pt} \mathcal{F}\mapsto \pi^{-1}\mathcal{F}\otimes\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}.$$ By Proposition \ref{prop3.23} below, we have commutative diagrams: \[\xymatrix@M=7pt@C=45pt{ {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\ar@{->}[r]^-{e_{M_\infty}^{\rm sub}}\ar@{->}[d]_-{I_{M_\infty}} & {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\ar@{->}[d]^-{I_{M_\infty}^{\rm E}}\\ {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})\ar@{->}[r]_-{e_{M_\infty}} & {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}),} \hspace{17pt} \xymatrix@M=7pt@C=45pt{ {\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})\ar@{->}[r]^-{e_{M_\infty}}\ar@{->}[d]_-{\lambda_{M_\infty}} & {\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})\ar@{->}[d]^-{\lambda_{M_\infty}^{\rm E}}\\ {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\ar@{->}[r]_-{e_{M_\infty}^{\rm sub}} & {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}).} \] \begin{proposition}\label{prop3.23} For any $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $F\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$, we have \begin{align*} I_{M_\infty}^{\rm E} e_{M_\infty}^{\rm sub}\mathcal{F} &\simeq e_{M_\infty}I_{M_\infty}\mathcal{F},\\ e_{M_\infty}^{\rm sub}\lambda_{M_\infty}F &\simeq \lambda_{M_\infty}^{\rm E} e_{M_\infty}F. \end{align*} Moreover, the functor $e_{M_\infty}^{\rm sub}\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) \to {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ is fully faithful. \end{proposition} \begin{proof} Let $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. 
By Proposition \ref{prop3.7} (2)(iii), (vi) and Lemma \ref{lem3.19}, there exist isomorphisms in ${\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$: $$I_{M_\infty}^{\rm E} e_{M_\infty}^{\rm sub}\mathcal{F} \simeq I_{M_\infty}^{\rm E}\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\otimes \pi^{-1}I_{M_\infty}\mathcal{F} \simeq \Bbbk_{M_\infty}^{{\rm E}}\otimes \pi^{-1}I_{M_\infty}\mathcal{F} \simeq e_{M_\infty}I_{M_\infty}\mathcal{F}.$$ \smallskip Let $F\in{\mathbf{D}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. By Proposition \ref{prop3.7} (4)(ii), (iv) and Lemma \ref{lem3.19}, there exist isomorphisms in ${\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$: $$e_{M_\infty}^{\rm sub} \lambda_{M_\infty}F \simeq \Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\otimes \pi^{-1}\lambda_{M_\infty}F \simeq \lambda_{M_\infty}^{\rm E}\Bbbk_{M_\infty}^{{\rm E}}\otimes \pi^{-1}\lambda_{M_\infty}F \simeq \lambda_{M_\infty}^{\rm E} e_{M_\infty}F.$$ \medskip \noindent Let $\mathcal{F}_1, \mathcal{F}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. 
By Proposition \ref{prop3.6}, the functor $I_{M_\infty}\colon{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ is fully faithful and hence there exists an isomorphism $$\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}(\mathcal{F}_1, \mathcal{F}_2) \simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}(I_{M_\infty}\mathcal{F}_1, I_{M_\infty}\mathcal{F}_2).$$ Since the functor $e_{M_\infty}\colon {\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty}) \to {\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ is fully faithful, we have an isomorphism $$\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}(I_{M_\infty}\mathcal{F}_1, I_{M_\infty}\mathcal{F}_2) \simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}(e_{M_\infty}I_{M_\infty}\mathcal{F}_1, e_{M_\infty}I_{M_\infty}\mathcal{F}_2).$$ Moreover, by the first assertion, we have $$\mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}(e_{M_\infty}I_{M_\infty}\mathcal{F}_1, e_{M_\infty}I_{M_\infty}\mathcal{F}_2) \simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}( I_{M_\infty}^{\rm E} e_{M_\infty}^{\rm sub}\mathcal{F}_1, I_{M_\infty}^{\rm E} e_{M_\infty}^{\rm sub}\mathcal{F}_2).$$ By Theorem \ref{main1}, the functor $I_{M_\infty}^{\rm E}\colon{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ is fully faithful, and hence $$\mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}( I_{M_\infty}^{\rm E} e_{M_\infty}^{\rm sub}\mathcal{F}_1, I_{M_\infty}^{\rm E} e_{M_\infty}^{\rm sub}\mathcal{F}_2) \simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}( e_{M_\infty}^{\rm sub}\mathcal{F}_1, e_{M_\infty}^{\rm sub}\mathcal{F}_2).$$ Therefore, we have $$\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}(\mathcal{F}_1, \mathcal{F}_2)\simeq 
\mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}( e_{M_\infty}^{\rm sub}\mathcal{F}_1, e_{M_\infty}^{\rm sub}\mathcal{F}_2).$$ This implies that the functor $e_{M_\infty}^{\rm sub}\colon {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) \to {\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ is fully faithful. \end{proof} The functor $e^{\rm sub}$ commutes with several functors as below. \begin{proposition}\label{prop3.24} Let $f\colon M_\infty\to N_\infty$ be a morphism of real analytic bordered spaces. For any $\mathcal{F}, \mathcal{F}_1, \mathcal{F}_2\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $\mathcal{G}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$, we have \begin{align*} e_{M_\infty}^{\rm sub}(\mathcal{F}_1\otimes\mathcal{F}_2) &\simeq e_{M_\infty}^{\rm sub}\mathcal{F}_1\overset{+}{\otimes} e_{M_\infty}^{\rm sub}\mathcal{F}_2,\\ \mathbf{E} f_{!!}e_{M_\infty}^{\rm sub}\mathcal{F} &\simeq e_{N_\infty}^{\rm sub}\mathbf{R} f_{!!}\mathcal{F},\\ \mathbf{E} f^{-1}e_{N_\infty}^{\rm sub}\mathcal{G} &\simeq e_{M_\infty}^{\rm sub} f^{-1}\mathcal{G},\\ \mathbf{E} f^{!}e_{N_\infty}^{\rm sub}\mathcal{G} &\simeq e_{M_\infty}^{\rm sub} f^{!}\mathcal{G}. 
\end{align*} \end{proposition} \begin{proof} By Proposition \ref{prop3.14} (4)(iii) and the fact that $\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \Bbbk_{M_\infty}^{{\rm E}, {\rm sub}} \simeq\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}$, we have \begin{align*} e_{M_\infty}^{\rm sub}(\mathcal{F}_1\otimes\mathcal{F}_2) &\simeq \Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\otimes\pi^{-1}(\mathcal{F}_1\otimes\mathcal{F}_2)\\ &\simeq \(\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes}\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\) \otimes(\pi^{-1}\mathcal{F}_1\otimes\pi^{-1}\mathcal{F}_2)\\ &\simeq \(\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\otimes\pi^{-1}\mathcal{F}_1\) \overset{+}{\otimes}\(\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\otimes\pi^{-1}\mathcal{F}_2\)\\ &\simeq e_{M_\infty}^{\rm sub}\mathcal{F}_1\overset{+}{\otimes} e_{M_\infty}^{\rm sub}\mathcal{F}_2. \end{align*} \noindent The second and third assertions follow from Proposition \ref{prop3.14} (4)(ii) and the fact that $\mathbf{E} f^{-1}\Bbbk_{N_\infty}^{{\rm E}, {\rm sub}}\simeq \Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}$. \noindent Let us prove the last assertion. 
By Propositions \ref{prop3.6}, \ref{prop3.18} (3)(ii) and \ref{prop3.23}, we have isomorphisms in ${\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$: $$\mathbf{E} f^!e_{N_\infty}^{\rm sub}\mathcal{G} \simeq \mathbf{E} f^!e_{N_\infty}^{\rm sub}\lambda_{N_\infty}I_{N_\infty}\mathcal{G} \simeq \lambda_{M_\infty}^{\rm E} \mathbf{E} f^!e_{N_\infty}I_{N_\infty}\mathcal{G}.$$ By \cite[Prop.\:2.18]{KS16-2}, we have $\mathbf{E} f^!\circ e_{N_\infty} \simeq e_{M_\infty}\circ f^!$ and hence there exist isomorphisms \begin{align*} \mathbf{E} f^!e_{N_\infty}^{\rm sub}\mathcal{G} \simeq \lambda_{M_\infty}^{\rm E} \mathbf{E} f^!e_{N_\infty}I_{N_\infty}\mathcal{G} \simeq \lambda_{M_\infty}^{\rm E} e_{M_\infty} f^!I_{N_\infty}\mathcal{G} \simeq e_{M_\infty}^{\rm sub}\lambda_{M_\infty} I_{M_\infty}f^!\mathcal{G} \simeq e_{M_\infty}^{\rm sub} f^!\mathcal{G} \end{align*} where in the third isomorphism we used Propositions \ref{prop3.7} (2)(v) and \ref{prop3.23}, and in the last isomorphism we used Proposition \ref{prop3.6}. \end{proof} Let $i_0 \colon M_\infty\to M_\infty\times\mathbb{R}_\infty$ be the morphism of real analytic bordered spaces induced by the map $M\to M\times\mathbb{R}, x\mapsto (x, 0)$. We set \[{\rm sh}_{M_\infty}^{\rm sub} := i_0^!\circ \mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}} \colon{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) \to {\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}) \] and call it the subanalytic sheafification functor for enhanced subanalytic sheaves on real analytic bordered spaces. It has the following properties. \begin{proposition} \begin{itemize} \item[\rm (1)] The pair $(e_{M_\infty}^{\rm sub}, {\rm sh}_{M_\infty}^{\rm sub})$ is an adjoint pair. \item[\rm (2)] For any $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$, one has $\mathcal{F}\overset{\sim}{\longrightarrow} {\rm sh}_{M_\infty}^{\rm sub} e_{M_\infty}^{\rm sub} \mathcal{F}$.
\item[\rm (3)] For any $K\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$, one has $\mathbf{R} J_{M_\infty}{\rm I}{\rm sh}_{M_\infty} K\simeq {\rm sh}_{M_\infty}^{\rm sub} J_{M_\infty}^{\rm E} K$. \item[\rm (4)] Let $f\colon M_\infty\to N_\infty$ be a morphism of real analytic bordered spaces. For any $K\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $L\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$, one has \begin{align*} \mathbf{R} f_\ast{\rm sh}_{M_\infty}^{\rm sub} K &\simeq {\rm sh}_{N_\infty}^{\rm sub} \mathbf{E} f_\ast K,\\ f^!{\rm sh}_{N_\infty}^{\rm sub} L &\simeq {\rm sh}_{M_\infty}^{\rm sub}\mathbf{E} f^!L. \end{align*} \end{itemize} \end{proposition} \begin{proof} First, let us prove the assertion (3). Let $K\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$. Then there exists $F\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty\times\mathbb{R}_\infty})$ such that $K\simeq\mathbf{Q}_{M_\infty}(F)$. Hence we have \begin{align*} \mathbf{R} J_{M_\infty}{\rm I}{\rm sh}_{M_\infty} K &\simeq \mathbf{R} J_{M_\infty}i_0^!\mathbf{R}_{M_\infty}^{{\rm E}}K\\ &\simeq i_0^!\mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}\mathbf{R}_{M_\infty}^{{\rm E}}K\\ &\simeq i_0^!\mathbf{R} J_{M_\infty\times\mathbb{R}_\infty} {\mathbf{R}}{\mathcal{I}}hom^+(\iota_{M_\infty\times\mathbb{R}_\infty}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}), F)\\ &\simeq i_0^!\mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom^+(I_{M_\infty\times\mathbb{R}_\infty}\rho_{M_\infty\times\mathbb{R}_\infty\ast} (\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}), F)\\ &\simeq i_0^!{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\rho_{M_\infty\times\mathbb{R}_\infty\ast} (\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}), \mathbf{R} J_{M_\infty\times\mathbb{R}_\infty}F)\\ &\simeq i_0^!
\mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}} J_{M_\infty}^{\rm E} K\\ &\simeq {\rm sh}_{M_\infty}^{\rm sub} J_{M_\infty}^{\rm E} K, \end{align*} where in the second (resp.\,fourth, fifth) isomorphism we used Proposition \ref{prop3.7} (3)(v) (resp.\,(4)(i), Proposition \ref{prop3.12} (2)). \medskip \noindent (1) Let $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and $K\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. Then we have \begin{align*} \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\(e_{M_\infty}^{\rm sub}\mathcal{F}, K\) &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}\(I_{M_\infty}^{\rm E} e_{M_\infty}^{\rm sub}\mathcal{F}, I_{M_\infty}^{\rm E} K\)\\ &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})}\(e_{M_\infty}I_{M_\infty}\mathcal{F}, I_{M_\infty}^{\rm E} K\)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\(\mathcal{F}, \mathbf{R} J_{M_\infty}{\rm I}{\rm sh}_{M_\infty}I_{M_\infty}^{\rm E} K\)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\(\mathcal{F}, {\rm sh}_{M_\infty}^{\rm sub} J_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K\)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\(\mathcal{F}, {\rm sh}_{M_\infty}^{\rm sub} K\) \end{align*} where in the first and last isomorphisms (resp.\:second isomorphism) we used Theorem \ref{main1} (resp.\:Proposition \ref{prop3.23}), in the third isomorphism we used the fact that $(e_{M_\infty}, {\rm I}{\rm sh}_{M_\infty})$ is an adjoint pair and Proposition \ref{prop3.6} (1), and the fourth isomorphism follows from the assertion (3). This implies that a pair $(e_{M_\infty}^{\rm sub}, {\rm sh}_{M_\infty}^{\rm sub})$ is an adjoint pair. \medskip \noindent (2) Let $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. 
By the assertion (1), there exists a canonical morphism $\mathcal{F}\to {\rm sh}_{M_\infty}^{\rm sub} e_{M_\infty}^{\rm sub} \mathcal{F}$. Moreover, we have \begin{align*} {\rm sh}_{M_\infty}^{\rm sub} e_{M_\infty}^{\rm sub} \mathcal{F} \simeq {\rm sh}_{M_\infty}^{\rm sub} J_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} e_{M_\infty}^{\rm sub} \mathcal{F} \simeq \mathbf{R} J_{M_\infty}{\rm I}{\rm sh}_{M_\infty}e_{M_\infty}I_{M_\infty}\mathcal{F} \simeq\mathcal{F} \end{align*} where in the first isomorphism we used Theorem \ref{main1} (1), in the second isomorphism we used the assertion (3) and Proposition \ref{prop3.23}, and in the last isomorphism we used the fact that ${\rm I}{\rm sh}_{M_\infty}\circ e_{M_\infty}\simeq{\rm id}$ and Proposition \ref{prop3.6} (1). \medskip \noindent (4) Let $K\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. For any $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$, we have \begin{align*} \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})}\(\mathcal{F}, \mathbf{R} f_\ast{\rm sh}_{M_\infty}^{\rm sub} K\) &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\( e_{M_\infty}^{\rm sub} f^{-1}\mathcal{F}, K\)\\ &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})}\( \mathbf{E} f^{-1}e_{N_\infty}^{\rm sub}\mathcal{F}, K\)\\ &\simeq \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})}\(\mathcal{F}, {\rm sh}_{N_\infty}^{\rm sub} \mathbf{E} f_\ast K\), \end{align*} where in the first and last isomorphisms we used the fact that the pair $(f^{-1}, \mathbf{R} f_\ast)$ is an adjoint pair, the assertion (1) and Proposition \ref{prop3.14} (1)(ii), and in the second isomorphism we used Proposition \ref{prop3.24}. This implies that there exists an isomorphism $\mathbf{R} f_\ast{\rm sh}_{M_\infty}^{\rm sub} K\simeq{\rm sh}_{N_\infty}^{\rm sub} \mathbf{E} f_\ast K$. Let $L\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$.
Then there exists $\mathcal{G}\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{N_\infty\times\mathbb{R}_\infty}^{\rm sub})$ such that $L\simeq\mathbf{Q}_{N_\infty}^{\rm sub}\mathcal{G}$. We shall denote by ${f}_{\mathbb{R}_\infty}\colon M_\infty\times\mathbb{R}_\infty\to N_\infty\times\mathbb{R}_\infty$ the morphism $f\times{\rm id}_{\mathbb{R}_\infty}$ of real analytic bordered spaces. Then we have isomorphisms in ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$: $$f^!{\rm sh}_{N_\infty}^{\rm sub} L \simeq f^!i_0^!\mathbf{R}_{N_\infty}^{{\rm E},{\rm sub}}L \simeq i_0^!f_{\mathbb{R}_\infty}^!\mathbf{R}_{N_\infty}^{{\rm E},{\rm sub}}L \simeq i_0^!f_{\mathbb{R}_\infty}^!{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}( \rho_{N_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}), \mathcal{G}).$$ Moreover, by Proposition \ref{prop3.8}, we have \begin{align*} f^!{\rm sh}_{N_\infty}^{\rm sub} L &\simeq i_0^!f_{\mathbb{R}_\infty}^!{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}( \rho_{N_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}), \mathcal{G})\\ &\simeq i_0^!{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}( f_{\mathbb{R}_\infty}^{-1}\rho_{N_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}), f_{\mathbb{R}_\infty}^!\mathcal{G})\\ &\simeq i_0^!{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}( \rho_{M_\infty\times\mathbb{R}_\infty\ast}(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}), f_{\mathbb{R}_\infty}^!\mathcal{G})\\ &\simeq i_0^!\mathbf{R}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{Q}_{M_\infty}^{\rm sub} f_{\mathbb{R}_\infty}^!\mathcal{G}\\ &\simeq {\rm sh}_{M_\infty}^{\rm sub}\mathbf{E} f^!L.
\end{align*} \end{proof} Let us set $$\omega_{M_\infty}^{{\rm E}, {\rm sub}} := e_{M_\infty}^{\rm sub}(\rho_{M_\infty\ast}\omega_M)\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$$ where $\omega_M\in{\mathbf{D}}^{\mathrm{b}}(\Bbbk_{M_\infty})\ (\,\simeq {\mathbf{D}}^{\mathrm{b}}(\Bbbk_M))$ is the dualizing complex, see \cite[Def.\:3.1.16 (i)]{KS90} for the details. Note that since $\omega_M\simeq j_M^{-1}\omega_{\che{M}}$, we have $\omega_M\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty})$. We shall define a functor $$\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}} \colon{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})^{{\mbox{\scriptsize op}}}\to{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub}), \hspace{7pt} K\mapsto {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, \omega_{M_\infty}^{{\rm E}, {\rm sub}}). $$ \begin{lemma}\label{lem3.26} There exist an isomorphism $I_{M_\infty}^{\rm E}\omega_{M_\infty}^{{\rm E}, {\rm sub}}\simeq \omega_{M_\infty}^{\rm E}$ in ${\mathbf{E}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$ and an isomorphism $J_{M_\infty}^{\rm E}\omega_{M_\infty}^{{\rm E}}\simeq \omega_{M_\infty}^{{\rm E}, {\rm sub}}$ in ${\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$. \end{lemma} \begin{proof} Since $\omega_M$ is $\mathbb{R}$-constructible, there exists an isomorphism $\iota_{M_\infty}\omega_M\simeq I_{M_\infty}\rho_{M_\infty\ast}\omega_M$ in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\Bbbk_{M_\infty})$. Hence, we have \begin{align*} \omega_{M_\infty}^{\rm E} := e_{M_\infty}(\iota_{M_\infty}\omega_M) \simeq e_{M_\infty}(I_{M_\infty}\rho_{M_\infty\ast}\omega_M) \simeq I_{M_\infty}^{\rm E} e_{M_\infty}^{\rm sub}(\rho_{M_\infty\ast}\omega_M) \simeq I_{M_\infty}^{\rm E}\omega_{M_\infty}^{{\rm E}, {\rm sub}}, \end{align*} where in the third isomorphism we used Proposition \ref{prop3.23}. The second assertion follows from the first assertion and Theorem \ref{main1} (1).
\end{proof} \newpage \begin{proposition}\label{prop3.27} Let $f\colon M_\infty\to N_\infty$ be a morphism of real analytic bordered spaces. \begin{itemize} \item[\rm (1)] For any $K\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and any $L\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$, one has \begin{align*} \mathbf{E} f^!\mathbf{D}_{N_\infty}^{{\rm E}, {\rm sub}}L &\simeq \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{E} f^{-1}L,\\ \mathbf{E} f_{\ast}\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}K &\simeq \mathbf{D}_{N_\infty}^{{\rm E}, {\rm sub}}\mathbf{E} f_{!!}K,\\ J_{M_\infty}^{\rm E}\mathbf{D}_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K &\simeq \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}K. \end{align*} \item[\rm (2)] For any $K\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$, we have $\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}K\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$ and $\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}K \simeq K$. In particular, there exists an equivalence of categories: \[\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\colon {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\overset{\sim}{\longrightarrow}{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub}).\] \end{itemize} \end{proposition} \begin{proof} (1) First, let us prove that $\mathbf{E} f^!\omega_{N_\infty}^{{\rm E}, {\rm sub}}\simeq \omega_{M_\infty}^{{\rm E}, {\rm sub}}$. 
By Proposition \ref{prop3.24} and the fact that $f^!\omega_N\simeq\omega_M$, we have $$ \mathbf{E} f^!\omega_{N_\infty}^{{\rm E}, {\rm sub}} \simeq \mathbf{E} f^!e_{N_\infty}^{{\rm sub}}(\rho_{N_\infty\ast}\omega_N) \simeq e_{M_\infty}^{{\rm sub}}(\rho_{M_\infty\ast}f^!\omega_N) \simeq e_{M_\infty}^{{\rm sub}}(\rho_{M_\infty\ast}\omega_M) \simeq \omega_{M_\infty}^{{\rm E}, {\rm sub}}.$$ Let $K\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{M_\infty}^{\rm sub})$ and $L\in{\mathbf{E}}^{\mathrm{b}}(\Bbbk_{N_\infty}^{\rm sub})$. By Proposition \ref{prop3.14} (2), we have \begin{align*} \mathbf{E} f^!\mathbf{D}_{N_\infty}^{{\rm E}, {\rm sub}}L &\simeq \mathbf{E} f^!{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(L, \omega_{N_\infty}^{{\rm E}, {\rm sub}})\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathbf{E} f^{-1}L, \mathbf{E} f^!\omega_{N_\infty}^{{\rm E}, {\rm sub}})\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathbf{E} f^{-1}L, \omega_{M_\infty}^{{\rm E}, {\rm sub}})\\ &\simeq \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{E} f^{-1}L. \end{align*} By Proposition \ref{prop3.14} (1)(iii), we have \begin{align*} \mathbf{E} f_\ast\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}K &\simeq \mathbf{E} f_\ast{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, \omega_{M_\infty}^{{\rm E}, {\rm sub}})\\ &\simeq \mathbf{E} f_\ast{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, \mathbf{E} f^!\omega_{N_\infty}^{{\rm E}, {\rm sub}})\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathbf{E} f_{!!}K, \omega_{N_\infty}^{{\rm E}, {\rm sub}})\\ &\simeq \mathbf{D}_{N_\infty}^{{\rm E}, {\rm sub}}\mathbf{E} f_{!!}K.
\end{align*} By Proposition \ref{prop3.18} (1)(ii) and Lemma \ref{lem3.26}, we have \begin{align*} J_{M_\infty}^{\rm E}\mathbf{D}_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K &\simeq J_{M_\infty}^{\rm E}{\mathbf{R}}{\mathcal{I}}hom^+(I_{M_\infty}^{\rm E} K, \omega_{M_\infty}^{{\rm E}})\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, J_{M_\infty}^{\rm E} \omega_{M_\infty}^{{\rm E}})\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K,\omega_{M_\infty}^{{\rm E}, {\rm sub}})\\ &\simeq \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}K. \end{align*} \noindent (2) Let $K\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$. By Theorem \ref{main2}, we have $I_{M_\infty}^{\rm E} K\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. Since $\mathbf{D}_{M_\infty}^{\rm E}$ induces a functor ${\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})^{\mbox{\scriptsize op}}\to{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$ (see \cite[Prop.\:3.3.3 (ii)]{DK16-2}), we have $\mathbf{D}_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. Hence, by Theorem \ref{main2}, we have $J_{M_\infty}^{\rm E}\mathbf{D}_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$. By the assertion (1), we have $J_{M_\infty}^{\rm E}\mathbf{D}_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K \simeq \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}K$, and hence $\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}K\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$.
Moreover, since $I_{M_\infty}^{\rm E} J_{M_\infty}^{\rm E}\mathbf{D}_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K \simeq \mathbf{D}_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K$, we have \begin{align*} \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}K &\simeq J_{M_\infty}^{\rm E}\mathbf{D}_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} J_{M_\infty}^{\rm E}\mathbf{D}_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K\\ &\simeq J_{M_\infty}^{\rm E}\mathbf{D}_{M_\infty}^{\rm E} \mathbf{D}_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K\\ &\simeq J_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K\\ &\simeq K \end{align*} where the third (resp.\,last) isomorphism follows from \cite[Prop.\:3.3.3 (ii)]{DK16-2} (resp.\:Theorem \ref{main1} (1)). \end{proof} Several operations preserve $\mathbb{R}$-constructibility, as below. Let us recall that a morphism $f\colon (M, \che{M})\to (N, \che{N})$ of real analytic bordered spaces is called semi-proper if the second projection $\che{M}\times\che{N}\to\che{N}$ is proper on the closure $\var{\Gamma}_f$ of the graph $\Gamma_f$ of $f$ in $\che{M}\times\che{N}$. \begin{proposition} Let $f\colon M_\infty\to N_\infty$ be a morphism of real analytic bordered spaces associated with a morphism $\che{f}\colon\che{M}\to\che{N}$ of real analytic manifolds.
The functors below are well defined: \begin{itemize} \item[\rm (1)] $e_{M_\infty}^{\rm sub}\rho_{M_\infty\ast}\colon {\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty})\to{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$, \item[\rm (2)] $\mathbf{E} f_{\ast}\colon{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{N_\infty}^{\rm sub})$, if $f$ is semi-proper, \item[\rm (3)] $\mathbf{E} f^{-1}\colon{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{N_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$, \item[\rm (4)] $\mathbf{E} f_{!!}\colon{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{N_\infty}^{\rm sub})$, if $f$ is semi-proper, \item[\rm (5)] $\mathbf{E} f^{!}\colon{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{N_\infty}^{\rm sub})\to{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$. \end{itemize} \end{proposition} \begin{proof} (1) Let $\mathcal{F}\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty})$ and $U$ be an open subset of $M$ which is subanalytic and relatively compact in $\che{M}$. We set $\mathcal{F}^U := \Bbbk_{\{t = 0\}}\otimes \pi^{-1}\mathcal{F}|_U$.
By Propositions \ref{prop3.4} (6) and \ref{prop3.24}, we have $$\mathbf{E} i_{U_\infty}^{-1}\(e_{M_\infty}^{\rm sub}\rho_{M_\infty\ast}\mathcal{F}\) \simeq e_{U_\infty}^{\rm sub}\rho_{U_\infty\ast}(\mathcal{F}|_U) \simeq\Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\otimes \pi^{-1}\rho_{U_\infty\ast}(\mathcal{F}|_U).$$ Since $\Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\simeq \Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\Bbbk_{\{t=0\}}$, there exist isomorphisms \begin{align*} \Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\otimes \pi^{-1}\rho_{U_\infty\ast}(\mathcal{F}|_U) &\simeq \(\Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\Bbbk_{\{t=0\}}\) \otimes \rho_{U_\infty\times\mathbb{R}_\infty\ast}\pi^{-1}\mathcal{F}|_U\\ &\simeq \Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \rho_{U_\infty\times\mathbb{R}_\infty\ast}\(\Bbbk_{\{t = 0\}}\otimes \pi^{-1}\mathcal{F}|_U\)\\ &\simeq \Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \rho_{U_\infty\times\mathbb{R}_\infty\ast}(\mathcal{F}^U), \end{align*} where in the first isomorphism we used Proposition \ref{prop3.14} (4)(iii). Since $\mathcal{F}$ is $\mathbb{R}$-constructible, we have $\mathcal{F}^U\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{U_\infty\times\mathbb{R}_\infty})$. This implies that $e_{M_\infty}^{{\rm sub}}\rho_{M_\infty\ast}\mathcal{F}$ is $\mathbb{R}$-constructible. \medskip \noindent (2) Let $K\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$. By Theorem \ref{main1} and Proposition \ref{prop3.18} (3)(i), we have isomorphisms \begin{align*} \mathbf{E} f_{\ast}K \simeq \mathbf{E} f_{\ast}\lambda_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K \simeq \lambda_{N_\infty}^{\rm E}\mathbf{E} f_{\ast}I_{M_\infty}^{\rm E} K.
\end{align*} Since $K$ is $\mathbb{R}$-constructible, we have $I_{M_\infty}^{\rm E} K\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$ by Theorem \ref{main2}, and hence, by \cite[Prop.\:3.3.3 (iv)]{DK16-2}, we have $\mathbf{E} f_{\ast}I_{M_\infty}^{\rm E} K\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})$. This implies that $\mathbf{E} f_{\ast}K\simeq \lambda_{N_\infty}^{\rm E}\mathbf{E} f_{\ast}I_{M_\infty}^{\rm E} K$ is $\mathbb{R}$-constructible by Theorem \ref{main2}. \medskip \noindent (3) Let $L\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{N_\infty}^{\rm sub})$. By Theorem \ref{main1} and Proposition \ref{prop3.18} (4)(i), we have isomorphisms \begin{align*} \mathbf{E} f^{-1}L \simeq \mathbf{E} f^{-1}\lambda_{N_\infty}^{\rm E} I_{N_\infty}^{\rm E} L \simeq \lambda_{M_\infty}^{\rm E}\mathbf{E} f^{-1}I_{N_\infty}^{\rm E} L. \end{align*} Since $L$ is $\mathbb{R}$-constructible, we have $I_{N_\infty}^{\rm E} L\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{N_\infty})$ by Theorem \ref{main2}, and hence, by \cite[Prop.\:3.3.3 (iii)]{DK16-2}, we have $\mathbf{E} f^{-1}I_{N_\infty}^{\rm E} L\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$. This implies that $\mathbf{E} f^{-1}L\simeq \lambda_{M_\infty}^{\rm E}\mathbf{E} f^{-1}I_{N_\infty}^{\rm E} L$ is $\mathbb{R}$-constructible by Theorem \ref{main2}. \medskip \noindent (4) Let $K\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$. Then we have $$\mathbf{E} f_{!!}K\simeq \mathbf{E} f_{!!} \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}K \simeq \mathbf{D}_{N_\infty}^{{\rm E}, {\rm sub}}\mathbf{E} f_{\ast}\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}K$$ by Proposition \ref{prop3.27}. This implies that $\mathbf{E} f_{!!}K$ is $\mathbb{R}$-constructible by the assertion (2) and Proposition \ref{prop3.27} (2).
\medskip \noindent (5) Let $L\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{N_\infty}^{\rm sub})$. Then we have $$\mathbf{E} f^{!}L\simeq \mathbf{E} f^{!} \mathbf{D}_{N_\infty}^{{\rm E}, {\rm sub}}\mathbf{D}_{N_\infty}^{{\rm E}, {\rm sub}}L \simeq \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{E} f^{-1}\mathbf{D}_{N_\infty}^{{\rm E}, {\rm sub}}L$$ by Proposition \ref{prop3.27}. This implies that $\mathbf{E} f^{!}L$ is $\mathbb{R}$-constructible by the assertion (3) and Proposition \ref{prop3.27} (2). \end{proof} Moreover, convolution functors preserve $\mathbb{R}$-constructibility, as below. \begin{proposition} \begin{itemize} \item[\rm (1)] The functors \begin{align*} (\cdot)\overset{+}{\otimes}(\cdot)&\colon {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})\times {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub}) \to{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub}),\\ {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\cdot, \cdot)&\colon {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})^{\mbox{\scriptsize op}}\times {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub}) \to{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub}) \end{align*} are well defined.
\item[\rm (2)] For any $K, L\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$, one has \begin{itemize} \item[\rm (i)] $\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\(K\overset{+}{\otimes} L\) \simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, \mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}} L),$ \item[\rm (ii)] $\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, L) \simeq K\overset{+}{\otimes} \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}} L,$ \item[\rm (iii)] ${\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, L) \simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}L, \mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}} K\),$ \item[\rm (iv)] ${\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}(K, L) \simeq {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(\mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}L, \mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}K\),$ \item[\rm (v)] ${\mathbf{R}}{\mathcal{H}}om^{{\rm E}, {\rm sub}}(K, L) \simeq {\mathbf{R}}{\mathcal{H}}om^{{\rm E}, {\rm sub}}\(\mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}L, \mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}K\).$ \end{itemize} \end{itemize} \end{proposition} \begin{proof} (1) Let $K, L\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{M_\infty}^{\rm sub})$ and $U$ be an open subset of $M$ which is subanalytic and relatively compact in $\che{M}$. Then there exist $\mathcal{F}, \mathcal{G}\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{U_\infty\times\mathbb{R}_\infty})$ such that \begin{align*} \mathbf{E} i_{U_\infty}^{-1}K\simeq \Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}^{\rm sub}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{F},\hspace{9pt} \mathbf{E} i_{U_\infty}^{-1}L\simeq \Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}^{\rm sub}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{G}. 
\end{align*} Hence, we have \begin{align*} \mathbf{E} i_{U_\infty}^{-1}\(K\overset{+}{\otimes} L\) &\simeq \mathbf{E} i_{U_\infty}^{-1}K\overset{+}{\otimes}\mathbf{E} i_{U_\infty}^{-1}L\\ &\simeq \(\Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}^{\rm sub}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{F}\) \overset{+}{\otimes} \(\Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}^{\rm sub}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{G}\)\\ &\simeq \Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \mathbf{Q}_{U_\infty}^{\rm sub}\(\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{F} \overset{+}{\otimes}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{G}\) \end{align*} where in the first isomorphism we used Proposition \ref{prop3.14} (2) and in the last isomorphism we used $\Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} \Bbbk_{U_\infty}^{{\rm E}, {\rm sub}} \simeq\Bbbk_{U_\infty}^{{\rm E}, {\rm sub}}$. Since $\mathcal{F}, \mathcal{G}\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{U_\infty\times\mathbb{R}_\infty})$, there exists an isomorphism $\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{F} \overset{+}{\otimes}\rho_{U_\infty\times\mathbb{R}_\infty\ast}\mathcal{G} \simeq \rho_{U_\infty\times\mathbb{R}_\infty\ast}\(\mu_{!!}(p_1^{-1}\mathcal{F}\otimes p_2^{-1}\mathcal{G})\)$ and $\mu_{!!}(p_1^{-1}\mathcal{F}\otimes p_2^{-1}\mathcal{G})\in{\mathbf{D}}^{\mathrm{b}}_{\mathbb{R}-c}(\Bbbk_{U_\infty\times\mathbb{R}_\infty})$. Therefore, $K\overset{+}{\otimes} L$ is $\mathbb{R}$-constructible.
Moreover, by using the assertions (2)(i) and (iii) below, there exist isomorphisms \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, L) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}L, \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}K\)\\ &\simeq \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\(\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}L\overset{+}{\otimes} K\)\\ &\simeq \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\(K\overset{+}{\otimes} \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}L\), \end{align*} and hence, by Proposition \ref{prop3.27} (2) and the first assertion of (1), ${\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, L)$ is $\mathbb{R}$-constructible. \medskip \noindent (2)(i) By Proposition \ref{prop3.14} (1)(i), we have \begin{align*} \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\(K\overset{+}{\otimes} L\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(K\overset{+}{\otimes} L, \omega_{M_\infty}^{{\rm E}, {\rm sub}}\)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(K, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(L, \omega_{M_\infty}^{{\rm E}, {\rm sub}})\)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(K, \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}L\). \end{align*} \noindent (ii) Since $L$ is $\mathbb{R}$-constructible, we have an isomorphism $L\simeq \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}L$ by Proposition \ref{prop3.27} (2).
Hence, by using the assertion (2)(i), we have \begin{align*} \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, L) &\simeq \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}L)\\ &\simeq \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\( K\overset{+}{\otimes} \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}L\)\\ &\simeq K\overset{+}{\otimes} \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}} L, \end{align*} where in the last isomorphism we used Proposition \ref{prop3.27} (2) and the assertion (1). \smallskip \noindent (iii) By Proposition \ref{prop3.14} (1)(i), we have \begin{align*} \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\(K\overset{+}{\otimes}\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}L\) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(K\overset{+}{\otimes}\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}L, \omega_{M_\infty}^{{\rm E}, {\rm sub}}\)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}L, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, \omega_{M_\infty}^{{\rm E}, {\rm sub}})\)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}L, \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}K\). \end{align*} Since ${\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, L)\simeq \mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}\(K\overset{+}{\otimes}\mathbf{D}_{M_\infty}^{{\rm E}, {\rm sub}}L\)$, we have \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, L)\simeq {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}L, \mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}K\). \end{align*} \noindent (iv) First, let us prove $\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} K\simeq K$. 
Since $\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\simeq \lambda_{M_\infty}^{\rm E} \Bbbk_{M_\infty}^{{\rm E}}$ and $K$ is $\mathbb{R}$-constructible, we have $$\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} K \simeq \lambda_{M_\infty}^{\rm E} \Bbbk_{M_\infty}^{{\rm E}}\overset{+}{\otimes} \lambda_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K \simeq \lambda_{M_\infty}^{\rm E} \(\Bbbk_{M_\infty}^{{\rm E}}\overset{+}{\otimes} I_{M_\infty}^{\rm E} K\) \simeq \lambda_{M_\infty}^{\rm E} I_{M_\infty}^{\rm E} K \simeq K,$$ where in the first isomorphism we used Theorem \ref{main1} (1) and Lemma \ref{lem3.19}, in the second isomorphism Proposition \ref{prop3.18} (4)(iii), in the third isomorphism the fact that $\Bbbk_{M_\infty}^{{\rm E}}\overset{+}{\otimes} K'\simeq K'$ for any $K'\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\Bbbk_{M_\infty})$, and in the last isomorphism Theorem \ref{main1}. Hence, there exist isomorphisms \begin{align*} {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}(K, L) &\simeq {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}(\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes} K, L)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(K, L)\)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}(\mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}L,\, \mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}K)\)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes}\mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}L,\, \mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}K\)\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom^{{\rm E}, {\rm sub}}\(\mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}L, \mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}K\), \end{align*} where in the second and fourth isomorphisms we used Proposition \ref{prop3.14} (1)(i), in the third
isomorphism we used the assertion (iii) and in the last isomorphism we used the fact that $\Bbbk_{M_\infty}^{{\rm E}, {\rm sub}}\overset{+}{\otimes}\mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}L \simeq \mathbf{D}_{M_\infty}^{{\rm E},{\rm sub}}L$. \medskip The assertion (v) follows from the assertion (iv). \end{proof} \newpage \subsection{Irregular Riemann--Hilbert Correspondence and Enhanced Subanalytic Sheaves} In this subsection, we will explain a relation between \cite[Thm.\:9.5.3]{DK16} and \cite[Thm.\:6.3]{Kas16}. Theorems \ref{main3} and \ref{main4} are the main results of this paper. \subsubsection{Main Results of \cite{DK16} and \cite{Kas16}}\label{subsec3.5.1} Let $X$ be a complex manifold and denote by $X_\mathbb{R}$ the underlying real analytic manifold of $X$. We denote by $\mathcal{O}_{X}$ and $\mathcal{D}_{X}$ the sheaves of holomorphic functions and holomorphic differential operators on $X$, respectively. Let ${\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_{X})$ be the bounded derived category of left $\mathcal{D}_{X}$-modules. Moreover, we denote by ${\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize coh}}(\mathcal{D}_{X})$, ${\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_{X})$ and ${\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize rh}}(\mathcal{D}_{X})$ the full triangulated subcategories of ${\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_{X})$ consisting of objects with coherent, holonomic and regular holonomic cohomologies, respectively. For a morphism $f \colon X\to Y$ of complex manifolds, denote by $\overset{D}{\otimes}$, $\mathbf{D} f_\ast$, $\mathbf{D} f^\ast$, $\mathbb{D}_{X} \colon {\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize coh}}(\mathcal{D}_{X})^{{\mbox{\scriptsize op}}} \overset{\sim}{\longrightarrow} {\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize coh}}(\mathcal{D}_{X})$ the standard operations for $\mathcal{D}$-modules.
The classical solution functor is defined by $$ {\rm Sol}_X \colon {\mathbf{D}}^{\mathrm{b}} (\mathcal{D}_X)^{{\mbox{\scriptsize op}}}\to{\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_X), \hspace{7pt}\mathcal{M} \longmapsto {\mathbf{R}}{\mathcal{H}}om_{\mathcal{D}_X}(\mathcal{M}, \mathcal{O}_X). $$ Let $M$ be a real analytic manifold of dimension $n$. We denote by $\mathcal{C}_M^\infty$ the sheaf of complex-valued functions of class $\mathcal{C}^\infty$ on $M$ and by $\D b_M$ the sheaf of Schwartz's distributions on $M$. \begin{definition}[{\cite[Def.\:7.2.3]{KS01}}] Let $U$ be an open subset of $M$. \begin{itemize}\item[(1)] One says that $f\in\mathcal{C}_M^{\infty}(U)$ has polynomial growth at $p\in M$ if for a local coordinate system $(x_1, \ldots, x_n)$ around $p$, there exist a sufficiently small compact neighborhood $K$ of $p$ and a positive integer $N$ such that $$\sup_{x\in K\cap U}{\rm dist}(x, K\setminus U)^N|f(x)| < +\infty.$$ A function $f\in\mathcal{C}_M^{\infty}(U)$ is said to be tempered at $p\in M$ if all its derivatives have polynomial growth at $p$, and is said to be tempered if it is tempered at every point of $M$. Let us denote by $\mathcal{C}_{M}^{\infty, {\rm t}}(U)$ the subset of $\mathcal{C}_{M}^{\infty}(U)$ consisting of functions which are tempered. \item[(2)] Let us define $\D b_M^{\rm t}(U)$ as the image of the restriction map $$\Gamma(M; \D b_M)\to \Gamma(U; \D b_M).$$ \end{itemize} \end{definition} Note that the presheaves $U\mapsto\mathcal{C}_{M}^{\infty, {\rm t}}(U)$ and $U\mapsto \D b_M^{\rm t}(U)$ are subanalytic sheaves, see \cite[\S 7.2]{KS01}, also \cite[\S 3.3]{Pre08}.
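For instance (an illustrative computation, not taken from \cite{KS01}): let $M = \mathbb{R}$, $U = (0, 1)$ and $p = 0$. The function $f(x) = 1/x$ has polynomial growth at $p$: taking $K = [-1/2, 1/2]$ and $N = 1$, we have ${\rm dist}(x, K\setminus U) = x$ for any $x\in K\cap U = (0, 1/2]$, and hence $$\sup_{x\in K\cap U}{\rm dist}(x, K\setminus U)^N|f(x)| = \sup_{0 < x\leq 1/2}x\cdot\frac{1}{x} = 1 < +\infty.$$ By contrast, $g(x) = e^{1/x}$ does not have polynomial growth at $p$, since $x^Ne^{1/x}\to+\infty$ as $x\to0^{+}$ for every positive integer $N$.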
We shall write ${\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_X^{\rm sub})$, ${\mathbf{E}}^{\mathrm{b}}(\mathbb{C}_X^{\rm sub})$, ${\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_X^{\rm sub})$ instead of ${\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{X_\mathbb{R}}^{\rm sub})$, ${\mathbf{E}}^{\mathrm{b}}(\mathbb{C}_{X_\mathbb{R}}^{\rm sub})$, ${\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_{X_\mathbb{R}}^{\rm sub})$, respectively. \begin{definition}[{\cite[\S 7.3]{KS01}, \cite[\S 5.2]{DK16} and also \cite[\S 3.3]{Pre08}}] Let us denote by $X^c$ the complex conjugate manifold of $X$. An object $\mathcal{O}_X^{\rm t}\in{\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_X^{\rm sub})$ is defined by $$\mathcal{O}_X^{\rm t} := {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}_{\rho_X!\mathcal{D}_{X^c}}(\rho_{X^{c}!}\mathcal{O}_{X^c}, \mathcal{C}_{X_\mathbb{R}}^{\infty, {\rm t}}) \simeq {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}_{\rho_X!\mathcal{D}_{X^c}}(\rho_{X^{c}!}\mathcal{O}_{X^c}, \D b_{X_\mathbb{R}}^{{\rm t}})$$ and is called the subanalytic sheaf of tempered holomorphic functions on $X$. Moreover, the tempered solution functor is defined by $$ {\rm Sol}_X^{\rm t} \colon{\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)^{{\mbox{\scriptsize op}}}\to{\mathbf{D}}^{\mathrm{b}}({\rm I}\mathbb{C}_X), \hspace{7pt} \mathcal{M} \longmapsto {\mathbf{R}}{\mathcal{I}}hom_{\beta_X\mathcal{D}_X}(\beta_X\mathcal{M}, I_X\mathcal{O}_X^{\rm t}). $$ \end{definition} Note that an ind-sheaf $I_{X}\mathcal{O}_X^{\rm t}$ is nothing but the ind-sheaf of tempered holomorphic functions on $X$ which is denoted by $\mathcal{O}_X^{\rm t}$ in \cite[\S 7.3]{KS01}. Note also that there exist isomorphisms $\rho_X^{-1}\mathcal{O}_X^{\rm t}\simeq \alpha_XI_X\mathcal{O}_X^{\rm t} \simeq \mathcal{O}_X$ and hence we have $\alpha_X{\rm Sol}_X^{\rm t}(\mathcal{M})\simeq {\rm Sol}_X(\mathcal{M})$ for any $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize coh}}(\mathcal{D}_X)$. 
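To illustrate what temperedness excludes (a standard example, stated informally and without proof): let $X = \mathbb{C}$ with coordinate $z$. On the open set $\{\operatorname{Re}(1/z) < 0\}$ the holomorphic function $e^{1/z}$ satisfies $|e^{1/z}| = e^{\operatorname{Re}(1/z)} < 1$, hence it is bounded near $z = 0$ and tempered there. On the open set $\{\operatorname{Re}(1/z) > 0\}$ it is not tempered at the origin, since along $z = x > 0$ we have $x^Ne^{1/x}\to+\infty$ as $x\to0^{+}$ for every positive integer $N$. Thus $\mathcal{O}_X^{\rm t}$, unlike $\mathcal{O}_X$, is sensitive to the irregular singularity of $e^{1/z}$ at the origin.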
\begin{definition}[{\cite[Def.\:8.1.1]{DK16} and \cite[Def.\:7.2.1]{KS16}}]\label{def3.32} Let $\tl{k}_M\colon M\times \mathbb{R}_\infty\to M\times {\mathbb P}^1\mathbb{R}$ be the natural morphism of real analytic bordered spaces and $t$ be a coordinate of $\mathbb{R}$, where ${\mathbb P}^1\mathbb{R}$ is the real projective line. An object $\D b_{M\times\mathbb{R}_\infty}^{\rm t}\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\mathbb{C}_{M\times\mathbb{R}_\infty})$ is defined by $$\D b_{M\times\mathbb{R}_\infty}^{\rm t} := \tl{k}_M^{!}I_{M\times{\mathbb P}^1\mathbb{R}}\D b_{M\times{\mathbb P}^1\mathbb{R}}^{\rm t} \simeq I_{M\times\mathbb{R}_\infty} \tl{k}_M^{!}\D b_{M\times{\mathbb P}^1\mathbb{R}}^{\rm t}$$ and $\D b_{M}^{{\mathsf{T}}}\in{\mathbf{D}}^{\mathrm{b}}({\rm I}\mathbb{C}_{M\times\mathbb{R}_\infty})$ is defined by the complex, concentrated in degrees $-1$ and $0$: $$\D b_{M\times\mathbb{R}_\infty}^{\rm t}\xrightarrow{\ \partial_t-1\ }\D b_{M\times\mathbb{R}_\infty}^{\rm t}.$$ Moreover, we set $\D b_{M}^{\rm E} := \mathbf{Q}_M(\D b_M^{\mathsf{T}})\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\mathbb{C}_{M})$ and call it the enhanced ind-sheaf of tempered distributions. \end{definition} Note that we have $\mathcal{H}^i(\D b_{M}^{\mathsf{T}}) = 0$ for any $i\neq -1$ and hence there exists an isomorphism $\D b_{M}^{\mathsf{T}}\simeq \operatorname{Ker}(\partial_t-1)[1]$ in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\mathbb{C}_{M\times\mathbb{R}_\infty})$. \begin{definition}[{\cite[Def.\:8.2.1]{DK16} and \cite[Def.\:7.2.3]{KS16}}]\label{def3.33} Let $\tl{i}\colon X\times\mathbb{R}_\infty\to X\times{\mathbb P}^1\mathbb{C}$ be the natural morphism of bordered spaces and $\tau\in\mathbb{C}\subset{\mathbb P}^1\mathbb{C}$ the affine coordinate such that $t = \tau|_\mathbb{R}$, where ${\mathbb P}^1\mathbb{C}$ is the complex projective line.
An object $\mathcal{O}_X^{\rm E}\in{\mathbf{E}}^{\mathrm{b}}({\rm I}\mathbb{C}_X)$ is defined by \begin{align*} \mathcal{O}_X^{\rm E} &:= \mathbf{Q}_X{\mathbf{R}}{\mathcal{I}}hom_{\pi_{X^c}^{-1}\beta_{X^c}\mathcal{D}_{X^c}}( \pi_{X^c}^{-1}\beta_{X^c}\mathcal{O}_{X^c}, \D b_{X_\mathbb{R}}^{{\mathsf{T}}})\\ &\,\simeq \mathbf{Q}_X\tl{i}^!{\mathbf{R}}{\mathcal{I}}hom_{p^{-1}\beta_{{\mathbb P}^1\mathbb{C}}\mathcal{D}_{{\mathbb P}^1\mathbb{C}}} (p^{-1}\beta_{{\mathbb P}^1\mathbb{C}}\mathcal{E}_{\mathbb{C}|{\mathbb P}^1\mathbb{C}}^{\tau}, I_{X\times{\mathbb P}^1\mathbb{C}}\mathcal{O}_{X\times{\mathbb P}^1\mathbb{C}}^{\rm t})[2], \end{align*} where $\mathcal{E}_{\mathbb{C}|{\mathbb P}^1\mathbb{C}}^{\tau}$ is the meromorphic connection associated to $d+d\tau$, $p\colon X\times{\mathbb P}^1\mathbb{C}\to {\mathbb P}^1\mathbb{C}$ is the projection and $\pi_{X^c}\colon X^c \times\mathbb{R}_\infty\to X^c$ is the morphism of bordered spaces associated with the projection $X^c\times\mathbb{R}\to X^c$. It is called the enhanced ind-sheaf of tempered holomorphic functions on $X$. Moreover, the enhanced solution functor is defined by $$ {\rm Sol}_X^{\rm E} \colon {\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)^{\mbox{\scriptsize op}}\to{\mathbf{E}}^{\mathrm{b}}({\rm I}\mathbb{C}_X), \hspace{7pt} \mathcal{M} \longmapsto {\mathbf{R}}{\mathcal{I}}hom_{\pi^{-1}\beta_X\mathcal{D}_X}(\pi^{-1}\beta_X\mathcal{M}, \mathcal{O}_X^{\rm E}), $$ where $\pi\colon X\times\mathbb{R}_\infty\to X$ is the morphism of bordered spaces associated with the first projection $X\times\mathbb{R}\to X$.
\end{definition} Note that $\mathcal{O}_X^{\rm E}$ is isomorphic to the enhanced ind-sheaf induced by the Dolbeault complex with coefficients in $\D b_{X_\mathbb{R}}^{{\mathsf{T}}}[-1]$: $$\D b_{X_\mathbb{R}}^{{\mathsf{T}}}[-1]\xrightarrow{\ \var{\partial}\ }\Omega_{X^c}^1\otimes_{\mathcal{O}_{X^c}}\D b_{X_\mathbb{R}}^{{\mathsf{T}}}[-1] \xrightarrow{\ \var{\partial}\ }\cdots\xrightarrow{\ \var{\partial}\ } \Omega_{X^c}^{d_X}\otimes_{\mathcal{O}_{X^c}}\D b_{X_\mathbb{R}}^{{\mathsf{T}}}[-1],$$ where $\Omega_{X^c}^p$ is the sheaf of $p$-differential forms with coefficients in $\mathcal{O}_{X^c}$ and $d_X$ is the complex dimension of $X$. Note that ${\rm I}{\rm sh}_X\mathcal{O}_X^{\rm E} \simeq I_X\mathcal{O}_X^{\rm t}$ and hence there exists an isomorphism ${\rm I}{\rm sh}_X{\rm Sol}_X^{\rm E}(\mathcal{M})\simeq{\rm Sol}_X^{\rm t}(\mathcal{M})$ for any $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize coh}}(\mathcal{D}_X)$. Let us recall the main results of \cite{DK16}. \begin{theorem}[{\cite[Thm.\:9.5.3\footnote{ In \cite{DK16}, although Theorem 9.5.3 was stated by using the enhanced de Rham functor, we can obtain a similar statement by using the enhanced solution functor.} and 9.6.1]{DK16}}]\label{thmDK} The enhanced solution functor induces an embedding functor \[ {\rm Sol}_{X}^{\rm E} \colon {\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_{X})^{{\mbox{\scriptsize op}}} \hookrightarrow{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\mathbb{C}_{X}).\] Moreover for any $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_{X})$ there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_{X}):$ \[\mathcal{M}\overset{\sim}{\longrightarrow} {\rm RH}_{X}^{{\rm E}}{\rm Sol}_{X}^{{\rm E}}(\mathcal{M}),\] where ${\rm RH}_{X}^{{\rm E}}(K) := {\mathbf{R}}{\mathcal{H}}om^{{\rm E}}(K, \mathcal{O}_{X}^{{\rm E}})$. \end{theorem} Let us recall the main results of \cite{Kas16}. 
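Before doing so, let us recall informally the basic example behind Theorem \ref{thmDK}, stated without proof and with the precise domain conventions suppressed (here $\mathcal{E}^{1/z}$ denotes the exponential $\mathcal{D}$-module with solution $e^{1/z}$ on $X = \mathbb{C}$, and $\mathbb{C}_X^{\rm E}$ the enhanced constant sheaf). The classical solution complex of $\mathcal{E}^{1/z}$ is a rank-one local system on $\mathbb{C}\setminus\{0\}$ and hence cannot distinguish $e^{1/z}$ from $e^{2/z}$, whereas the enhanced solution functor records the growth order through the extra variable $t$: roughly, $${\rm Sol}_X^{\rm E}\(\mathcal{E}^{1/z}\) \simeq \mathbb{C}_X^{\rm E}\overset{+}{\otimes}\mathbb{C}_{\{t+\operatorname{Re}(1/z)\geq 0\}},$$ and the subsets $\{t+\operatorname{Re}(1/z)\geq 0\}$ and $\{t+\operatorname{Re}(2/z)\geq 0\}$ of $X\times\mathbb{R}$ differ, which is the mechanism that makes ${\rm Sol}_X^{\rm E}$ faithful on holonomic modules.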
\begin{definition}[{\cite[\S 5.2]{Kas16}}]\label{def3.35} Let $\tl{k}_M\colon M\times \mathbb{R}_\infty\to M\times {\mathbb P}^1\mathbb{R}$ be the natural morphism of real analytic bordered spaces. An object $\D b_{M}^{{\mathsf{T}}, {\rm sub}}\in{\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{M\times\mathbb{R}_\infty}^{\rm sub})$ is defined by the complex, concentrated in degrees $-1$ and $0$: $$\tl{k}_M^!\D b_{M\times{\mathbb P}^1\mathbb{R}}^{\rm t}\xrightarrow{\ \partial_t-1\ }\tl{k}_M^!\D b_{M\times{\mathbb P}^1\mathbb{R}}^{\rm t}.$$ \end{definition} Note that we have $\mathcal{H}^i(\D b_{M}^{{\mathsf{T}}, {\rm sub}}) = 0$ for any $i\neq -1$ and hence there exists an isomorphism $\D b_{M}^{{\mathsf{T}},{\rm sub}}\simeq \operatorname{Ker}(\partial_t-1)[1]$ in ${\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{M\times\mathbb{R}_\infty}^{\rm sub})$. Remark that the object denoted by $\D b^{{\mathsf{T}}}$ in \cite[\S 5.2]{Kas16} is equal to $\D b^{{\mathsf{T}}, {\rm sub}}[-1]$ in our notation. \begin{definition}[{\cite[\S\S 5.3, 5.4]{Kas16}}]\label{def3.36} An object $\mathcal{O}_X^{{\mathsf{T}}, {\rm sub}}\in{\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{X\times\mathbb{R}_\infty}^{\rm sub})$ is defined by the Dolbeault complex with coefficients in $\D b_{X_\mathbb{R}}^{{\mathsf{T}},{\rm sub}}[-1]$: $$\D b_{X_\mathbb{R}}^{{\mathsf{T}},{\rm sub}}[-1]\xrightarrow{\ \var{\partial}\ }\Omega_{X^c}^1\otimes_{\mathcal{O}_{X^c}}\D b_{X_\mathbb{R}}^{{\mathsf{T}},{\rm sub}}[-1] \xrightarrow{\ \var{\partial}\ }\cdots\xrightarrow{\ \var{\partial}\ } \Omega_{X^c}^{d_X}\otimes_{\mathcal{O}_{X^c}}\D b_{X_\mathbb{R}}^{{\mathsf{T}},{\rm sub}}[-1].$$ Moreover, we set $$ {\rm Sol}_X^{{\mathsf{T}},{\rm sub}} \colon {\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)^{\mbox{\scriptsize op}}\to{\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{X\times\mathbb{R}_\infty}^{\rm sub}), \hspace{7pt} \mathcal{M} \longmapsto {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}_{\pi^{-1}\rho_{X!}\mathcal{D}_X}(\pi^{-1}\rho_{X!}\mathcal{M}, \mathcal{O}_X^{{\mathsf{T}},{\rm sub}}).
$$ \end{definition} Note also that there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}(\Bbbk_{X\times\mathbb{R}_\infty}^{\rm sub})$: $$\mathcal{O}_X^{{\mathsf{T}}, {\rm sub}}\simeq {\mathbf{R}}{\mathcal{I}}hom_{\pi_{X^c}^{-1}\rho_{X^c!}\mathcal{D}_{X^c}}^{\rm sub}\( \pi_{X^c}^{-1}\rho_{X^c!}\mathcal{O}_{X^c}, \D b_{X_\mathbb{R}}^{{\mathsf{T}}, {\rm sub}}[-1]\).$$ \begin{theorem}[{\cite[Thms.\:6.2 and 6.3\footnote{ In \cite{Kas16}, although Theorem 6.3 was stated by using the enhanced de Rham functor, we can obtain a similar statement by using the enhanced solution functor.}]{Kas16}}] There exists an embedding functor \[ {\rm Sol}_{X}^{{\mathsf{T}},{\rm sub}} \colon {\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_{X})^{{\mbox{\scriptsize op}}} \hookrightarrow{\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{X\times\mathbb{R}_\infty}^{\rm sub}).\] Moreover for any $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_{X})$ there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X):$ \[\mathcal{M}\overset{\sim}{\longrightarrow} {\mathbf{R}}{\mathcal{H}}om^{{\rm E}, {\rm sub}}\({\rm Sol}_{X}^{{\mathsf{T}}, {\rm sub}}(\mathcal{M}), \mathcal{O}_X^{{\mathsf{T}}, {\rm sub}}\).\] \end{theorem} \subsubsection{Relation between \cite[Thm.\:9.5.3]{DK16} and \cite[Thm.\:6.3]{Kas16}} Let us explain a relation between \cite[Thm.\:9.5.3]{DK16} and \cite[Thm.\:6.3]{Kas16}. \begin{definition}\label{def3.37} Let us define $$\mathcal{O}_X^{{\rm E}, {\rm sub}} := \mathbf{Q}_X^{\rm sub}\(\mathcal{O}_X^{{\mathsf{T}}, {\rm sub}}[1]\)\in{\mathbf{E}}^{\mathrm{b}}(\mathbb{C}_X^{\rm sub})$$ and set $$ {\rm Sol}_X^{{\rm E},{\rm sub}} \colon {\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)^{\mbox{\scriptsize op}}\to{\mathbf{E}}^{\mathrm{b}}(\mathbb{C}_{X}^{\rm sub}), \hspace{7pt} \mathcal{M} \longmapsto {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}_{\pi^{-1}\rho_{X!}\mathcal{D}_X}(\pi^{-1}\rho_{X!}\mathcal{M}, \mathcal{O}_X^{{\rm E},{\rm sub}}). 
$$ \end{definition} By the definition, it is clear that $$ \mathcal{O}_X^{{\rm E}, {\rm sub}}\simeq \mathbf{Q}_X^{\rm sub}{\mathbf{R}}{\mathcal{I}}hom_{\pi_{X^c}^{-1}\rho_{X^c!}\mathcal{D}_{X^c}}^{\rm sub}\( \pi_{X^c}^{-1}\rho_{X^c!}\mathcal{O}_{X^c}, \D b_{X_\mathbb{R}}^{{\mathsf{T}}, {\rm sub}}\), $$ and for any $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)$ one has $${\rm Sol}_X^{{\rm E}, {\rm sub}}(\mathcal{M})\simeq \mathbf{Q}_X^{\rm sub}\({\rm Sol}_X^{{\mathsf{T}}, {\rm sub}}(\mathcal{M})\)[1].$$ \begin{lemma}\label{lem3.38} \begin{itemize} \item[\rm (1)] There exists an isomorphism $\D b_{M}^{{\mathsf{T}}}\simeq I_{M\times\mathbb{R}_\infty}\D b_{M}^{{\mathsf{T}}, {\rm sub}}$ in ${\mathbf{D}}^{\mathrm{b}}({\rm I}\mathbb{C}_{M\times\mathbb{R}_\infty})$. \item[\rm (2)] There exists an isomorphism $\mathcal{O}_X^{{\rm E},{\rm sub}}\simeq J_X^{\rm E}\mathcal{O}_X^{{\rm E}}$ in ${\mathbf{E}}^{\mathrm{b}}(\mathbb{C}_X^{\rm sub})$. \item[\rm (3)] For any $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)$, there exists an isomorphism in ${\mathbf{E}}^{\mathrm{b}}(\mathbb{C}_X^{\rm sub}):$ $${\rm Sol}_X^{{\rm E}, {\rm sub}}(\mathcal{M})\simeq J_X^{\rm E}{\rm Sol}_X^{{\rm E}}(\mathcal{M}).$$ \item[\rm (4)] For any $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)$, there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{X\times\mathbb{R}_\infty}^{\rm sub}):$ $${\rm Sol}_X^{{\mathsf{T}}, {\rm sub}}(\mathcal{M})[1]\simeq \mathbf{R}_{X}^{{\rm E}, {\rm sub}}{\rm Sol}_X^{{\rm E}, {\rm sub}}(\mathcal{M}).$$ \end{itemize} \end{lemma} \begin{proof} (1) Since the functor $I_{M\times\mathbb{R}_\infty}$ is exact, we have isomorphisms \begin{align*} \D b_{M}^{{\mathsf{T}}} &\simeq \operatorname{Ker}\(\partial_t-1\colon\D b_{M\times\mathbb{R}_\infty}^{\rm t}\to\D b_{M\times\mathbb{R}_\infty}^{\rm t}\)\\ &\simeq \operatorname{Ker}\(\partial_t-1\colon I_{M\times\mathbb{R}_\infty} \tl{k}_M^{!}\D b_{M\times{\mathbb P}^1\mathbb{R}}^{\rm t}\to 
I_{M\times\mathbb{R}_\infty} \tl{k}_M^{!}\D b_{M\times{\mathbb P}^1\mathbb{R}}^{\rm t}\)\\ &\simeq I_{M\times\mathbb{R}_\infty}\operatorname{Ker}\( \partial_t-1\colon \tl{k}_M^{!}\D b_{M\times{\mathbb P}^1\mathbb{R}}^{\rm t}\to\tl{k}_M^{!}\D b_{M\times{\mathbb P}^1\mathbb{R}}^{\rm t}\)\\ &\simeq I_{M\times\mathbb{R}_\infty}\D b_{M}^{{\mathsf{T}}, {\rm sub}}, \end{align*} see Definition \ref{def3.32} for the details of $\D b_{M}^{{\mathsf{T}}}$ and Definition \ref{def3.35} for those of $\D b_{M}^{{\mathsf{T}}, {\rm sub}}$. \medskip \noindent (2) Since $\beta_{X^c}\simeq I_{X^c}\circ\rho_{X^c!}$ and $\pi_{X^c}^{-1}\circ I_{X^c}\simeq I_{X^c\times\mathbb{R}_\infty}\circ\pi_{X^c}^{-1}$ (see \S 2.5 for the first isomorphism, Proposition \ref{prop3.7} (2)(iii) for the second isomorphism), we have isomorphisms \begin{align*} J_X^{\rm E}\mathcal{O}_X^{\rm E} &\simeq \mathbf{Q}_X\mathbf{R} J_{X\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom_{\pi_{X^c}^{-1}\beta_{X^c}\mathcal{D}_{X^c}}\( \pi_{X^c}^{-1}\beta_{X^c}\mathcal{O}_{X^c}, \D b_{X_\mathbb{R}}^{{\mathsf{T}}}\)\\ &\simeq \mathbf{Q}_X\mathbf{R} J_{X\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom_{\pi_{X^c}^{-1}\beta_{X^c}\mathcal{D}_{X^c}}\( \pi_{X^c}^{-1}I_{X^c}\rho_{X^c!}\mathcal{O}_{X^c}, \D b_{X_\mathbb{R}}^{{\mathsf{T}}}\)\\ &\simeq \mathbf{Q}_X\mathbf{R} J_{X\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom_{\pi_{X^c}^{-1}\beta_{X^c}\mathcal{D}_{X^c}}\( I_{X^c\times\mathbb{R}_\infty}\pi_{X^c}^{-1}\rho_{X^c!}\mathcal{O}_{X^c}, \D b_{X_\mathbb{R}}^{{\mathsf{T}}}\).
\end{align*} Moreover, there exist isomorphisms \begin{align*} &\mathbf{Q}_X\mathbf{R} J_{X\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom_{\pi_{X^c}^{-1}\beta_{X^c}\mathcal{D}_{X^c}}\( I_{X^c\times\mathbb{R}_\infty}\pi_{X^c}^{-1}\rho_{X^c!}\mathcal{O}_{X^c}, \D b_{X_\mathbb{R}}^{{\mathsf{T}}}\)\\ \simeq\ &\mathbf{Q}_X{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}_{\pi_{X^c}^{-1}\rho_{X^c!}\mathcal{D}_{X^c}}\( \pi_{X^c}^{-1}\rho_{X^c!}\mathcal{O}_{X^c}, \mathbf{R} J_{X_\mathbb{R}\times\mathbb{R}_\infty}\D b_{X_\mathbb{R}}^{{\mathsf{T}}}\)\\ \simeq\ &\mathbf{Q}_X{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}_{\pi_{X^c}^{-1}\rho_{X^c!}\mathcal{D}_{X^c}}\( \pi_{X^c}^{-1}\rho_{X^c!}\mathcal{O}_{X^c}, \D b_{X_\mathbb{R}}^{{\mathsf{T}},{\rm sub}}\)\\ \simeq\ &\mathcal{O}_X^{{\rm E},{\rm sub}}, \end{align*} where in the second isomorphism we used the assertion (1) and Proposition \ref{prop3.6} (1). \medskip \noindent (3) Let $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)$. By the fact that $\beta_{X}\simeq I_{X}\circ\rho_{X!}$ and the assertion (2), there exist isomorphisms \begin{align*} J_X^{\rm E}{\rm Sol}_X^{\rm E}(\mathcal{M}) &\simeq J_X^{\rm E}{\mathbf{R}}{\mathcal{I}}hom_{\pi^{-1}\beta_X\mathcal{D}_X}(\pi^{-1}\beta_X\mathcal{M}, \mathcal{O}_X^{\rm E})\\ &\simeq J_X^{\rm E}{\mathbf{R}}{\mathcal{I}}hom_{\pi^{-1}\beta_X\mathcal{D}_X}(\pi^{-1} I_{X}\rho_{X!}\mathcal{M}, \mathcal{O}_X^{\rm E})\\ &\simeq J_X^{\rm E}{\mathbf{R}}{\mathcal{I}}hom_{\pi^{-1}\beta_X\mathcal{D}_X}(I_{X\times\mathbb{R}_\infty}\pi^{-1}\rho_{X!}\mathcal{M}, \mathcal{O}_X^{\rm E})\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom_{\pi^{-1}\rho_{X!}\mathcal{D}_X}^{\rm sub}(\pi^{-1}\rho_{X!}\mathcal{M}, J_X^{\rm E}\mathcal{O}_X^{\rm E})\\ &\simeq {\mathbf{R}}{\mathcal{I}}hom_{\pi^{-1}\rho_{X!}\mathcal{D}_X}^{\rm sub}(\pi^{-1}\rho_{X!}\mathcal{M}, \mathcal{O}_X^{{\rm E}, {\rm sub}})\\ &\simeq {\rm Sol}_X^{{\rm E}, {\rm sub}}(\mathcal{M}).
\end{align*} \noindent (4) First let us prove that $${\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{X\times\mathbb{R}_\infty\ast}\( \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\), \D b_{X_\mathbb{R}}^{{\mathsf{T}}, {\rm sub}}\) \simeq\D b_{X_\mathbb{R}}^{{\mathsf{T}}, {\rm sub}}.$$ By the assertion (1) and Proposition \ref{prop3.18} (1)(ii), \begin{align*} &{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{X\times\mathbb{R}_\infty\ast}\( \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\), \D b_{X_\mathbb{R}}^{{\mathsf{T}}, {\rm sub}}\)\\ \simeq\ &{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{X\times\mathbb{R}_\infty\ast}\( \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\), \mathbf{R} J_{X\times\mathbb{R}_\infty}\D b_{X_\mathbb{R}}^{{\mathsf{T}}}\)\\ \simeq\ &\mathbf{R} J_{X\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom^+\(I_{X\times\mathbb{R}_\infty}\rho_{X\times\mathbb{R}_\infty\ast}\( \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\), \D b_{X_\mathbb{R}}^{{\mathsf{T}}}\)\\ \simeq\ &\mathbf{R} J_{X\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom^+\(\iota_{X\times\mathbb{R}_\infty}\( \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\), \D b_{X_\mathbb{R}}^{{\mathsf{T}}}\). \end{align*} Moreover, by using the fact that ${\mathbf{R}}{\mathcal{I}}hom^+\(\iota_{X\times\mathbb{R}_\infty}\( \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\), \D b_{X_\mathbb{R}}^{{\mathsf{T}}}\) \simeq\D b_{X_\mathbb{R}}^{{\mathsf{T}}}$, see e.g. 
\cite[Prop.\:8.1.3]{DK16}, we have $$\mathbf{R} J_{X\times\mathbb{R}_\infty}{\mathbf{R}}{\mathcal{I}}hom^+\(\iota_{X\times\mathbb{R}_\infty}\( \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\), \D b_{X_\mathbb{R}}^{{\mathsf{T}}}\) \simeq \mathbf{R} J_{X\times\mathbb{R}_\infty}\D b_{X_\mathbb{R}}^{{\mathsf{T}}} \simeq\D b_{X_\mathbb{R}}^{{\mathsf{T}},{\rm sub}}.$$ Hence, we proved ${\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{X\times\mathbb{R}_\infty\ast}\( \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\), \D b_{X_\mathbb{R}}^{{\mathsf{T}}, {\rm sub}}\) \simeq\D b_{X_\mathbb{R}}^{{\mathsf{T}}, {\rm sub}}.$ Next, we shall prove that $${\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{X\times\mathbb{R}_\infty\ast}\(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\), \mathcal{O}_X^{{\mathsf{T}},{\rm sub}}\) \simeq\mathcal{O}_X^{{\mathsf{T}},{\rm sub}}.$$ By the fact that $\mathcal{O}_X^{{\mathsf{T}}, {\rm sub}}\simeq {\mathbf{R}}{\mathcal{I}}hom_{\pi_{X^c}^{-1}\rho_{X^c!}\mathcal{D}_{X^c}}^{\rm sub}\( \pi_{X^c}^{-1}\rho_{X^c!}\mathcal{O}_{X^c}, \D b_{X_\mathbb{R}}^{{\mathsf{T}}, {\rm sub}}[-1]\),$ we have \begin{align*} &{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{X\times\mathbb{R}_\infty\ast}\(\Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\), \mathcal{O}_X^{{\mathsf{T}},{\rm sub}}\)\\ \simeq\ &{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{X\times\mathbb{R}_\infty\ast}\( \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\), {\mathbf{R}}{\mathcal{I}}hom_{\pi_{X^c}^{-1}\rho_{X^c!}\mathcal{D}_{X^c}}^{\rm sub}\( \pi_{X^c}^{-1}\rho_{X^c!}\mathcal{O}_{X^c}, \D b_{X_\mathbb{R}}^{{\mathsf{T}}, {\rm sub}}\)\)[-1]\\ \simeq\ &{\mathbf{R}}{\mathcal{I}}hom_{\pi_{X^c}^{-1}\rho_{X^c!}\mathcal{D}_{X^c}}^{\rm sub}\(\pi_{X^c}^{-1}\rho_{X^c!}\mathcal{O}_{X^c}, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{X\times\mathbb{R}_\infty\ast}\( \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\), \D b_{X_\mathbb{R}}^{{\mathsf{T}}, {\rm sub}}\)\)[-1]\\ \simeq\ 
&{\mathbf{R}}{\mathcal{I}}hom_{\pi_{X^c}^{-1}\rho_{X^c!}\mathcal{D}_{X^c}}^{\rm sub} \(\pi_{X^c}^{-1}\rho_{X^c!}\mathcal{O}_{X^c}, \D b_{X_\mathbb{R}}^{{\mathsf{T}}, {\rm sub}}\)[-1]\\ \simeq\ &\mathcal{O}_X^{{\mathsf{T}}, {\rm sub}}. \end{align*} By the definition, it is clear that \begin{align*} &\mathbf{R}_{X}^{{\rm E}, {\rm sub}}{\rm Sol}_X^{{\rm E}, {\rm sub}}(\mathcal{M})\\ \simeq\ &{\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{X\times\mathbb{R}_\infty\ast}\( \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\), {\mathbf{R}}{\mathcal{I}}hom^{\rm sub}_{\pi^{-1}\rho_{X!}\mathcal{D}_X}(\pi^{-1}\rho_{X!}\mathcal{M}, \mathcal{O}_X^{{\mathsf{T}},{\rm sub}})\)[1]\\ \simeq\ &{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}_{\pi^{-1}\rho_{X!}\mathcal{D}_X}\(\pi^{-1}\rho_{X!}\mathcal{M}, {\mathbf{R}}{\mathcal{I}}hom^{+, \sub}\(\rho_{X\times\mathbb{R}_\infty\ast}\( \Bbbk_{\{t\geq0\}}\oplus\Bbbk_{\{t\leq 0\}}\),\mathcal{O}_X^{{\mathsf{T}},{\rm sub}}\)\)[1]\\ \simeq\ &{\mathbf{R}}{\mathcal{I}}hom^{\rm sub}_{\pi^{-1}\rho_{X!}\mathcal{D}_X}\(\pi^{-1}\rho_{X!}\mathcal{M}, \mathcal{O}_X^{{\mathsf{T}},{\rm sub}}\)[1]\\ \simeq\ &{\rm Sol}_X^{{\mathsf{T}}, {\rm sub}}(\mathcal{M})[1]. \end{align*} \end{proof} \begin{theorem}\label{main3} The functor ${\rm Sol}_X^{{\rm E},{\rm sub}}$ induces an embedding functor \[ {\rm Sol}_{X}^{{\rm E}, {\rm sub}} \colon {\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_{X})^{{\mbox{\scriptsize op}}} \hookrightarrow{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_{X}^{\rm sub}).\] Moreover for any $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_{X})$ there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X):$ \[\mathcal{M}\overset{\sim}{\longrightarrow} {\rm RH}_{X}^{{\rm E},{\rm sub}}{\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}),\] where ${\rm RH}_{X}^{{\rm E}, {\rm sub}}(K) := {\mathbf{R}}{\mathcal{H}}om^{{\rm E},{\rm sub}}(K, \mathcal{O}_{X}^{{\rm E},{\rm sub}})$.
\end{theorem} \begin{proof} First, let us prove that ${\rm Sol}_X^{{\rm E},{\rm sub}}(\mathcal{M})\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_X^{\rm sub})$ for any $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)$. Let $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)$. By Theorem \ref{thmDK}, we have ${\rm Sol}_X^{{\rm E}}(\mathcal{M})\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\mathbb{C}_X)$ and hence by Theorem \ref{main2} we have $J_X^{\rm E}{\rm Sol}_X^{{\rm E}}(\mathcal{M})\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_X^{\rm sub})$. This implies that ${\rm Sol}_X^{{\rm E},{\rm sub}}(\mathcal{M})\in{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_X^{\rm sub})$ by Lemma \ref{lem3.38} (3). Hence, the functor ${\rm Sol}_{X}^{{\rm E}, {\rm sub}} \colon {\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_{X})^{{\mbox{\scriptsize op}}}\to{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_{X}^{\rm sub})$ is well defined. \medskip For any $\mathcal{M}, \mathcal{N}\in{\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)$, there exist isomorphisms \begin{align*} \mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)}(\mathcal{M}, \mathcal{N}) &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\mathbb{C}_X)}\({\rm Sol}_X^{{\rm E}}(\mathcal{N}), {\rm Sol}_X^{{\rm E}}(\mathcal{M})\)\\ &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_X^{\rm sub})}\(J_X^{\rm E}{\rm Sol}_X^{{\rm E}}(\mathcal{N}), J_X^{\rm E}{\rm Sol}_X^{{\rm E}}(\mathcal{M})\)\\ &\simeq \mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_X^{\rm sub})}\({\rm Sol}_X^{{\rm E},{\rm sub}}(\mathcal{N}), {\rm Sol}_X^{{\rm E},{\rm sub}}(\mathcal{M})\), \end{align*} where in the first (resp.\,second, third) isomorphism we used Theorem \ref{thmDK} (resp.\,Theorem \ref{main2}, Lemma \ref{lem3.38} (3)).
This implies that the functor ${\rm Sol}_{X}^{{\rm E}, {\rm sub}} \colon {\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_{X})^{{\mbox{\scriptsize op}}}\to{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_{X}^{\rm sub})$ is fully faithful. \medskip Let $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)$. By using the adjointness, there exist isomorphisms \begin{align*} &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)}\(\mathcal{M}, {\rm RH}_{X}^{{\rm E},{\rm sub}}{\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M})\)\\ \simeq\ &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)}\( \mathcal{M}, {\mathbf{R}}{\mathcal{H}}om^{{\rm E},{\rm sub}}({\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}), \mathcal{O}_{X}^{{\rm E},{\rm sub}})\)\\ \simeq\ &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)}\( \mathcal{M}, \rho_X^{-1}\mathbf{R} \pi_\ast{\mathbf{R}}{\mathcal{I}}hom^{{\rm sub}}({\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}), \mathcal{O}_{X}^{{\rm E},{\rm sub}})\)\\ \simeq\ &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)}\( \pi^{-1}\rho_{X!}\mathcal{M}, {\mathbf{R}}{\mathcal{I}}hom^{{\rm sub}}({\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}), \mathcal{O}_{X}^{{\rm E},{\rm sub}})\)\\ \simeq\ &\mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_X^{\rm sub})}\({\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}), {\mathbf{R}}{\mathcal{I}}hom_{\pi^{-1}\rho_{X!}\mathcal{D}_X}^{{\rm sub}}(\pi^{-1}\rho_{X!}\mathcal{M}, \mathcal{O}_{X}^{{\rm E},{\rm sub}})\)\\ \simeq\ &\mathrm{Hom}_{{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_X^{\rm sub})}\({\rm Sol}_X^{{\rm E},{\rm sub}}(\mathcal{M}), {\rm Sol}_X^{{\rm E},{\rm sub}}(\mathcal{M})\)\\ \simeq\ &\mathrm{Hom}_{{\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)}\(\mathcal{M}, \mathcal{M}\). 
\end{align*} Hence, there exists a canonical morphism $$\mathcal{M}\to {\rm RH}_{X}^{{\rm E},{\rm sub}}{\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M})$$ which is given by the identity map ${\rm id}_{\mathcal{M}}$ of $\mathcal{M}$. Let us prove that it is an isomorphism. By Lemma \ref{lem3.38} (2), we have isomorphisms $${\rm RH}_{X}^{{\rm E},{\rm sub}}{\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}) \simeq {\mathbf{R}}{\mathcal{H}}om^{{\rm E},{\rm sub}}({\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}), \mathcal{O}_{X}^{{\rm E},{\rm sub}})\\ \simeq {\mathbf{R}}{\mathcal{H}}om^{{\rm E},{\rm sub}}({\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}), J_X^{\rm E}\mathcal{O}_{X}^{{\rm E}})$$ and by Proposition \ref{prop3.18} (1)(iii) we have \begin{align*} {\mathbf{R}}{\mathcal{H}}om^{{\rm E},{\rm sub}}({\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}), J_X^{\rm E}\mathcal{O}_{X}^{{\rm E}}) &\simeq \rho^{-1}{\mathbf{R}}{\mathcal{I}}hom^{{\rm E},{\rm sub}}({\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}), J_X^{\rm E}\mathcal{O}_{X}^{{\rm E}})\\ &\simeq \rho^{-1}\mathbf{R} J_X{\mathbf{R}}{\mathcal{I}}hom^{{\rm E}}(I_X^{\rm E}{\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}), \mathcal{O}_{X}^{{\rm E}}). \end{align*} By Proposition \ref{prop3.7} (3)(ii) and Lemma \ref{lem3.38} (3), there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{X})$ \begin{align*} \rho^{-1}\mathbf{R} J_X{\mathbf{R}}{\mathcal{I}}hom^{{\rm E}}(I_X^{\rm E}{\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}), \mathcal{O}_{X}^{{\rm E}}) &\simeq \alpha_X{\mathbf{R}}{\mathcal{I}}hom^{{\rm E}}(I_X^{\rm E}{\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}), \mathcal{O}_{X}^{{\rm E}})\\ &\simeq {\mathbf{R}}{\mathcal{H}}om^{{\rm E}}(I_X^{\rm E}{\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}), \mathcal{O}_{X}^{{\rm E}})\\ &\simeq {\mathbf{R}}{\mathcal{H}}om^{{\rm E}}(I_X^{\rm E} J_X^{\rm E}{\rm Sol}_{X}^{{\rm E}}(\mathcal{M}), \mathcal{O}_{X}^{{\rm E}}).
\end{align*} Since $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)$, ${\rm Sol}_{X}^{{\rm E}}(\mathcal{M})$ is $\mathbb{R}$-constructible by the first assertion and hence there exists an isomorphism $I_X^{\rm E} J_X^{\rm E}{\rm Sol}_{X}^{{\rm E}}(\mathcal{M}) \simeq{\rm Sol}_{X}^{{\rm E}}(\mathcal{M}) $ by Theorem \ref{main2}. By Theorem \ref{thmDK}, we have $${\mathbf{R}}{\mathcal{H}}om^{{\rm E}}({\rm Sol}_{X}^{{\rm E}}(\mathcal{M}), \mathcal{O}_{X}^{{\rm E}}) \simeq{\rm RH}_{X}^{{\rm E}}{\rm Sol}_{X}^{{\rm E}}(\mathcal{M})\simeq\mathcal{M}.$$ Therefore, there exists an isomorphism $\mathcal{M}\overset{\sim}{\longrightarrow} {\rm RH}_{X}^{{\rm E},{\rm sub}}{\rm Sol}_{X}^{{\rm E},{\rm sub}}(\mathcal{M}).$ \end{proof} \begin{theorem}\label{main4} \begin{itemize} \item[\rm(1)] For any $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)$, there exists an isomorphism in ${\mathbf{E}}^{\mathrm{b}}({\rm I}\mathbb{C}_{X}):$ $${\rm Sol}_X^{{\rm E}}(\mathcal{M})\simeq I_X^{\rm E} {\rm Sol}_X^{{\rm E}, {\rm sub}}(\mathcal{M}).$$ \item[\rm(2)] For any $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}(\mathcal{D}_X)$, there exists an isomorphism in ${\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{X\times\mathbb{R}_\infty}^{\rm sub}):$ $${\rm Sol}_X^{{\mathsf{T}}, {\rm sub}}(\mathcal{M})[1]\simeq\mathbf{R}_X^{{\rm E}, {\rm sub}} J_X^{{\rm E}}{\rm Sol}_X^{{\rm E}}(\mathcal{M}).$$ Moreover, there exists a commutative diagram: \[\xymatrix@M=10pt@R=20pt@C=55pt{ {\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)^{\mbox{\scriptsize op}}\ar@{^{(}->}[r]^-{{\rm Sol}_X^{{\mathsf{T}},{\rm sub}}(\cdot)[1]}\ar@{^{(}->}[rd]_-{{\rm Sol}_X^{\rm E}} & {\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{X\times\mathbb{R}_\infty}^{\rm sub})\\ {}&{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\mathbb{C}_X)\ar@{^{(}->}[u]_-{\mathbf{R}_X^{{\rm E},{\rm sub}}\circ J_X^{\rm E}}. 
}\] \end{itemize} \end{theorem} \begin{proof} (1) Let $\mathcal{M}\in{\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)$. Since ${\rm Sol}_X^{\rm E}(\mathcal{M})$ is $\mathbb{R}$-constructible, there exists an isomorphism $I_X^{\rm E} {\rm Sol}_X^{{\rm E}, {\rm sub}}(\mathcal{M})\simeq I_X^{\rm E} J_X^{\rm E}{\rm Sol}_X^{{\rm E}}(\mathcal{M})\simeq{\rm Sol}_X^{{\rm E}}(\mathcal{M})$ by Theorem \ref{main2} and Lemma \ref{lem3.38} (3). \medskip \noindent (2) This follows from Lemma \ref{lem3.38} (3), (4). \end{proof} Let us summarize the results of Theorems \ref{main1}, \ref{main2}, \ref{main3} and \ref{main4} in the following commutative diagram: \[\xymatrix@M=7pt@R=35pt@C=60pt{ {}&{}&{\mathbf{D}}^{\mathrm{b}}(\mathbb{C}_{X\times\mathbb{R}_\infty}^{\rm sub}) & {}\\ {\mathbf{D}}^{\mathrm{b}}_{\mbox{\rm \scriptsize hol}}(\mathcal{D}_X)^{\mbox{\scriptsize op}}\ar@{^{(}->}[r]_-{{\rm Sol}_X^{{\rm E}, {\rm sub}}} \ar@{^{(}->}[rru]^-{{\rm Sol}_X^{{\mathsf{T}}, {\rm sub}}(\cdot)[1]}\ar@{^{(}->}[rd]_-{{\rm Sol}_X^{{\rm E}}} & {\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}(\mathbb{C}_X^{\rm sub})\ar@{}[r]|-{\text{\large $\subset$}} \ar@<-1.0ex>@{->}[d]_-{I_X^{\rm E}}\ar@{}[d]|-\wr & {\mathbf{E}}^{\mathrm{b}}(\mathbb{C}_X^{\rm sub})\ar@{^{(}->}[u]_-{\mathbf{R}_X^{{\rm E}, {\rm sub}}} \ar@<-1.0ex>@{^{(}->}[rd]_-{I_X^{\rm E}} \ar@<-1.0ex>@{->}[d]_-{I_X^{\rm E}}\ar@{}[d]|-\wr\\ {}&{\mathbf{E}}^{\mathrm{b}}_{\mathbb{R}-c}({\rm I}\mathbb{C}_X)\ar@{}[r]|-{\text{\large $\subset$}} \ar@<-1.0ex>@{->}[u]_-{J_X^{\rm E}} &{\mathbf{E}}^{\mathrm{b}}_{{\rm I}\mathbb{R}-c}({\rm I}\mathbb{C}_X)\ar@{}[r]|-{\text{\large $\subset$}} \ar@<-1.0ex>@{->}[u]_-{J_X^{\rm E}} & {\mathbf{E}}^{\mathrm{b}}({\rm I}\mathbb{C}_X).\ar@<-1.0ex>@{->}[lu]_-{J_X^{\rm E}} }\] \begin{remark} One can define $\mathbb{C}$-constructibility for enhanced subanalytic sheaves analogously to \cite[Def.\:3.19]{Ito20}; we omit the details here. \end{remark}
\section{Introduction} In \cite[Section 1.4]{Per2}, Perelman constructed ancient ovals for the Ricci flow. These are ancient 3-dimensional Ricci flows that for $t\to 0$ converge to a round point, but for $t\to -\infty$ look like a long neck capped off by two Bryant solitons. Uniqueness of Perelman's ancient ovals has been proved in recent important work by Brendle-Daskalopoulos-Sesum \cite{BDS}. While 3-dimensional Ricci flow is by now extremely well understood, one of the few remaining open problems is whether or not Perelman's ancient ovals actually occur as blowup limit in 3d Ricci flow through singularities. In this short note, we prove that Perelman's ancient ovals occur as blowup limit if and only if there is an accumulation of spherical singularities.\\ For the purpose of this note it is most convenient to describe 3-dimensional Ricci flow through singularities as metric flows, as introduced by Bamler \cite{Bam2}.\footnote{{In particular, as sketched in \cite[Section 3.7]{Bam2}, after selecting a branch and passing to the metric completion, any 3-dimensional singular Ricci flow from Kleiner-Lott \cite{KL} can be viewed as a metric flow.}} {A metric flow is given by a set $\mathcal{X}$,} a time-function $\mathfrak{t}:\mathcal{X}\to \mathbb{R}$, complete separable metrics $d_t$ on the time-slices $\mathcal{X}_t=\mathfrak{t}^{-1}(t)$, and probability measures $\nu_{x;s}\in \mathcal{P}(\mathcal{X}_s)$ such that the Kolmogorov consistency condition and a certain sharp gradient estimate for the heat flow hold (see Section \ref{sec_prelim} for details). 
We remark that by our recent work \cite{CH}, any metric flow $\mathcal{X}$ {arising as a noncollapsed limit of smooth Ricci flows or arising from a 3-dimensional singular Ricci flow from \cite{KL}} is a weak solution of the Ricci flow in the sense of Haslhofer-Naber \cite{HN}, namely for almost every $(p,t)$ the estimate $|\nabla_p\mathbb{E}_{(p,t)}[F]| \leq \mathbb{E}_{(p,t)}[|\nabla^\parallel F|]$ holds for all cylinder functions $F$ on path-space. However, for our present purpose it is enough to assume that the Ricci flow equation $ \mathcal{L}_{\partial_{\mathfrak{t}}} g = -2 \textrm{Ric}(g) $ holds on the regular part.\\ Next, we recall from \cite[Section 6.8]{Bam2} that a tangent flow at some space-time point $x\in \mathcal{X}$ is an $\mathbb{F}$-limit of parabolic rescalings by any sequence $\lambda_i\to\infty$ of $(\mathcal{X},(\nu_{x;s})_{s\leq \mathfrak{t}(x)})$, where as above $\nu_{x;s}$ denotes the conjugate heat kernel measures centered at $x$. We say that $\mathcal{X}$ has a spherical singularity at $x$, if some (and thus any) tangent flow at $x$ is a round shrinking sphere. More generally than tangent flows, which are given as rescaling limits around a fixed center $x$, one can consider rescalings around any sequence of space-time points $x_i\to x$. A blowup limit at $x$ is any $\mathbb{F}$-limit of parabolic rescalings by any sequence $\lambda_i\to\infty$ of $(\mathcal{X},(\nu_{x_i;s})_{s\leq \mathfrak{t}(x_i)})$. While tangent flows are always self-similarly shrinking as a consequence of Perelman's monotonicity formula, the analysis of general blowup limits is much more involved. Finally, we say that there is an accumulation of spherical singularities at $x\in\mathcal{X}$, if the flow $\mathcal{X}$ has spherical singularities at a sequence of pairwise distinct space-time points $x_i\to x$. 
Using the above notions, we can now state our main result: \begin{theorem}\label{thm_equivalence} Perelman's ancient ovals occur as blowup limit in a 3-dimensional Ricci flow through singularities if and only if there is an accumulation of spherical singularities.\end{theorem} In essence, we use a similar scheme of proof as in our prior joint work with Hershkovits \cite{CHH}, where we proved a corresponding result for the mean curvature flow. Namely, if there is a sequence of spherical singularities converging to a (quotient) neck singularity, then we blow up by the {largest} spherical scale to construct a nonselfsimilar blowup limit, and conversely we show by contradiction that Perelman's ancient ovals cannot occur as blowup limit if there are only finitely many spherical singularities. However, while the theory of mean curvature flow through singularities has been very well developed over the past 40 years, the study of Ricci flow through singularities was only initiated recently \cite{KL,HN,Sturm,BK,Bam2,Bam3,CH}. Hence, we need to be more careful to set up the geometric analytic framework, and several properties that are well known or obvious for mean curvature flow -- such as quantitative differentiation and change of base-points -- have to be implemented with more care. Another new feature in the setting of Ricci flow are quotients by finite isometry groups. Hence, when analyzing the almost equality case of the monotonicity formula we have to include the possibility of nontrivial spherical space forms and the $\mathbb{Z}_2$-quotients of the cylinder.\\ Finally, motivated by the recent proof of the mean-convex neighborhood conjecture \cite{CHH_mean,CHHW} and the higher-dimensional uniqueness result from Brendle-Daskalopoulos-Naff-Sesum \cite{BDNS}, it seems likely that there is a version of Theorem \ref{thm_equivalence} for Ricci flow through neck-singularities in higher dimensions. 
However, let us also remark that while blowup limits in 3d Ricci flow are always modelled on $\kappa$-solutions, quotient necks in higher dimensions can lead to new phenomena. In particular, examples of 4d steady solitons asymptotic to {quotient} necks are quotients of the 4d-Bryant soliton, which have an orbifold singularity, and Appleton's solitons \cite{App}, which are asymptotic to $\mathbb{R}\times S^3/\mathbb{Z}_k${, $k\ge 3$,} and have curvatures of mixed signs.\\ \noindent\textbf{Acknowledgments.} The second author has been partially supported by an NSERC Discovery Grant (RGPIN-2016-04331) and a Sloan Research Fellowship. {We thank the referees for their detailed comments and suggestions.}\\ \section{Notation and preliminaries}\label{sec_prelim} As introduced by Bamler \cite[Definition 3.2]{Bam2} a metric flow over $I\subseteq\mathbb{R}$, \begin{equation} \mathcal{X}=\left(\mathcal{X},\mathfrak{t},(d_t)_{t\in I},(\nu_{x;s})_{x\in \mathcal{X}, s\in I,s\leq \mathfrak{t}(x)}\right), \end{equation} consists of a set $\mathcal{X}$, a time-function $\mathfrak{t}:\mathcal{X}\to \mathbb{R}$, complete separable metrics $d_t$ on the time-slices $\mathcal{X}_t=\mathfrak{t}^{-1}(t)$, and probability measures $\nu_{x;s}\in \mathcal{P}(\mathcal{X}_s)$, called conjugate heat kernel measures, such that: \begin{itemize} \item $\nu_{x;\mathfrak{t}(x)}=\delta_x$ for all $x\in \mathcal{X}$, and for all $t_1\leq t_2\leq t_3$ in $I$ and all $x\in\mathcal{X}_{t_3}$ we have the Kolmogorov consistency condition $ \nu_{x; t_1} = \int_{\mathcal{X}_{t_2}} \nu_{\cdot; t_1}\, d\nu_{x; t_2}$. 
\item For all $s<t$ in $I$, any $T>0$, and any $T^{-1/2}$-Lipschitz function $f_s:\mathcal{X}_s\to\mathbb{R}$, setting $v_s=\Phi\circ f_s$, where $\Phi:\mathbb{R}\to (0,1)$ denotes the antiderivative of $(4\pi)^{-1/2}e^{-y^2/4}$, the function $ v_t:\mathcal{X}_t\to \mathbb{R},\, x \mapsto \int_{\mathcal{X}_s} v_s \, d\nu_{x; s} $ is of the form $v_t=\Phi\circ f_t$ for some $(t-s+T)^{-1/2}$-Lipschitz function $f_t:\mathcal{X}_t\to\mathbb{R}$. \end{itemize} In particular, on any metric flow we always have a heat flow of integrable functions and a conjugate heat flow of probability measures, which are defined for $s\leq \mathfrak{t}(x)$ via the formulas \begin{equation}\label{heat_flow_def} v_{\mathfrak{t}(x)}(x):= \int_{\mathcal{X}_s} v_s \, d\nu_{x; s},\qquad \mu_s:= \int_{\mathcal{X}_t} \nu_{x; s} \, d\mu_{\mathfrak{t}(x)}(x)\, . \end{equation} We recall from \cite[Definition 3.30 and Definition 4.25]{Bam2} that a metric flow $\mathcal{X}$ is called $H_n$-concentrated, where $H_n=(n-1)\pi^2/2+4$, if for all $s\leq t$ in $I$ and all $x_1,x_2\in \mathcal{X}_t$ we have the variance bound \begin{equation} \textrm{Var}(\nu_{x_1; s}, \nu_{x_2; s})\leq d_t^2(x_1,x_2)+H_n(t-s), \end{equation} and is called future-continuous at $t_0\in I$ if for all conjugate heat flows $(\mu_t)_{t\in I'}$ with finite variance and $t_0\in I'$, the function $ t\mapsto \int_{\mathcal{X}_t}\int_{\mathcal{X}_t} d_t\, d\mu_t \, d\mu_t $ is right continuous at $t_0$. {Throughout this note, we will work with an $H_3$-concentrated future-continuous metric flow $\mathcal{X}$ that satisfies the partial regularity results from \cite{Bam3}, and such that $\mathcal{L}_{\partial_{\mathfrak{t}}} g = -2 \textrm{Ric}(g)$ and $R(g)\geq -6$ hold on the regular part.} Let $\mathcal{X}$ be a metric flow as above.
Given any rescaling factors $\lambda_i\to \infty$ and space-time points $x_i\to x$, we consider the sequence of parabolically rescaled flows \begin{equation} \mathcal{X}^{x_i,\lambda_i}=\left(\mathcal{X},\lambda_i^2(\mathfrak{t}-\mathfrak{t}(x_i)),(d_{\lambda_i^2(t-\mathfrak{t}(x_i))})_{t\in I},(\nu_{x;\lambda_i^2(s-\mathfrak{t}(x_i))})_{x\in \mathcal{X}, s\in I,s\leq \mathfrak{t}(x)}\right), \end{equation} equipped with the parabolically rescaled conjugate heat kernel measures \begin{equation} \nu_{s}^{x_i,\lambda_i}=\nu_{x_i; \mathfrak{t}(x_i)+\lambda_i^{-2}s}. \end{equation} By Bamler's compactness theory \cite{Bam2}, the metric flow pair $(\mathcal{X}^{x_i,\lambda_i},(\nu_{s}^{x_i,\lambda_i})_{s\leq 0})$ subsequentially converges to a limit $(\mathcal{X}^\infty,(\nu^\infty_{x_{\max};s})_{s\leq 0})$, called a blowup limit at $x$. A priori the limit is just a metric flow pair obtained in the sense of $\mathbb{F}$-convergence on compact time-intervals within some correspondence, as defined in \cite[Section 6]{Bam2}. However, since we are working in dimension 3, by \cite[Theorem {2.44}]{Bam3} (see also Perelman \cite{Per1,Per2}) all blowup limits are $\kappa$-noncollapsed, have nonnegative curvature, and are smooth with bounded curvature on compact time intervals. Hence, by the local regularity theorem \cite[Theorem {2.29}]{Bam3} (see also Hein-Naber \cite{HeinN}) the convergence is actually locally smooth.
{Finally, recall that if $(\mathcal{X}^{x_i,\lambda_i},(\nu_{s}^{x_i,\lambda_i})_{s\leq 0})$ converges to a blowup limit $(\mathcal{X}^\infty,\nu^\infty)$, and if for some sequence of space-time points $\widetilde{x}_i$ the sequence of probability measures $\widetilde{\nu}^i=(\nu_{s}^{\widetilde{x}_i,{\lambda}_i})_{s\leq 0}$ converges to the same limiting measure $\nu^\infty$, then by Bamler's change of base-point theorem \cite[Theorem 6.40]{Bam2} the sequence $(\mathcal{X}^{\widetilde{x}_i,\lambda_i},\widetilde{\nu}^i)$ also converges to the same limit $(\mathcal{X}^\infty,\nu^\infty)$.} \section{The proofs} To prove Theorem \ref{thm_equivalence}, we proceed by establishing the following two propositions. \begin{prop}\label{prop1} If there is a sequence of pairwise distinct spherical singularities $x_i$ converging to a neck or quotient neck singularity at $x\in\mathcal{X}$, then Perelman's ancient ovals occur as blowup limit at $x$. \end{prop} Here, by neck or quotient neck singularity at $x$ we mean that some tangent flow at $x$ is either a round shrinking $S^2\times\mathbb{R}$ or one of its $\mathbb{Z}_2$-quotients, respectively. \begin{proof} Fix a small enough constant $\varepsilon>0$. Given $x\in\mathcal{X}$, we consider the rescaled and restricted flow \begin{equation} \mathcal{X}^{\alpha}_x:=\left(\mathcal{X}^{x,{1}/{r_{\alpha}}}|_{(-\varepsilon^{-1},0]}, (\nu ^{x, 1/{r_\alpha}}_s)_{s\in (-\varepsilon^{-1},0]} \right) \end{equation} on dyadic scales $r_\alpha=2^\alpha$, where $\alpha\in\mathbb{Z}$. We say that $\mathcal{X}$ is $\varepsilon$-selfsimilar around $x$ at scale $r_\alpha$, if \begin{equation} d_{\mathbb{F}}(\mathcal{X}^{\alpha}_x,\mathcal{S}) <\varepsilon \end{equation} for some metric soliton $\mathcal{S}$ that becomes extinct at time zero, where $d_{\mathbb{F}}$ denotes the $\mathbb{F}$-distance on the time interval $(-\varepsilon^{-1},0]$ between metric flow pairs \cite[Definition 5.8]{Bam2}. 
Since we are working in dimension 3, as a consequence of \cite[Theorem {2.44}]{Bam3} and \cite{Per2} the only metric solitons are flat $\mathbb{R}^3$, the round shrinking $S^3$, the round shrinking $\mathbb{R}\times S^2$, as well as finite quotients thereof (note also that a lower bound for the Nash entropy, which we always have near any neck singularity, gives an upper bound for the order of the quotient group). In particular, as a consequence of the local regularity theorem \cite[Theorem {2.29}]{Bam3} we actually have \begin{equation} d_{C^{\lfloor 1/ \tilde{\varepsilon}\rfloor}}(\mathcal{X}^{\alpha}_x,\mathcal{S}) <\tilde{\varepsilon} \end{equation} with $\tilde{\varepsilon}(\varepsilon)\to 0$ as $\varepsilon\to 0$, where the $C^{\lfloor 1/ \tilde{\varepsilon}\rfloor}$-distance is modulo diffeomorphisms on $B_{1/\tilde{\varepsilon}}\times (-\tilde{\varepsilon}^{-1},\tilde{\varepsilon}]$.\\ For any space-time point $x\in\mathcal{X}$ we denote by $S(x)$ the largest spherical scale, i.e. the supremum of $r_\alpha$ such that $\mathcal{X}$ is $\varepsilon$-close around $x$ at scale $r_\alpha$ to a round shrinking sphere, and by $Z(x)$ the infimum of $r_\alpha$ such that $\mathcal{X}$ is $\varepsilon$-close around $x$ at scale $r_\alpha$ to a round shrinking cylinder or to one of its $\mathbb{Z}_2$-quotients.\\ Now assume $x_i\in \mathcal{X}$ is a sequence of spherical singularities converging to a neck or quotient neck singularity $x\in \mathcal{X}$. Since the flow has a spherical singularity at $x_i$ it clearly holds that $S(x_i)>0$. On the other hand, recall that $x_i\to x$ by definition means convergence in the natural topology \cite[Definition 3.43]{Bam2} and by {\cite[Proposition 3.45]{Bam2}} is equivalent to $\mathfrak{t}(x_i)\to \mathfrak{t}(x)$ and $\nu_{x_i;s}\to \nu_{x;s}$ for all $s<\mathfrak{t}(x)$ in the $W_1$-Wasserstein distance. 
{Since the flow has a (quotient) neck singularity at $x$, applying the change of base-point theorem {\cite[Theorem 6.40]{Bam2}}, as recalled in the previous section, we thus obtain a sequence of metric flows around $x_i$ rescaled by $\lambda_i\to \infty$ which still converges to a (quotient) neck. This implies \begin{equation} \lim_{i\to \infty} Z(x_i)=0. \end{equation}} \bigskip Next, recall that the pointed Nash entropy, which is monotone by \cite{Per1}, is defined as \begin{equation} \begin{aligned} \mathcal{N}_{x_i}(\tau) = \int_{\mathcal{X}_{\mathfrak{t}(x_i)-\tau}} f_{x_i}(y,\mathfrak{t}(x_i)-\tau )\, d\nu_{x_i;\mathfrak{t}(x_i)-\tau }(y) -\frac{n}{2}, \end{aligned}\end{equation} where $f_x$ is the (almost everywhere defined) potential function for the heat kernel measure, namely \begin{equation} \begin{aligned} d\nu_{x_i;\mathfrak{t}(x_i)-\tau}(y) = (4\pi\tau)^{-3/2} e^{-f_{x_i}(y,\mathfrak{t}(x_i)-\tau)}\, d\mathrm{Vol}_{g_{\mathfrak{t}(x_i)-\tau}}(y) .\end{aligned}\end{equation} By Hamilton's roundness estimate \cite{Ham} we have $S(x_i)\leq Z(x_i)$. On the other hand, we claim that \begin{equation}\label{eq_bdd_ratio} Z(x_i)\leq C S(x_i). \end{equation} To see this, recall that by the almost rigidity case of the monotonicity formula from \cite[Theorem {2.20}]{Bam3} we can find a constant $\delta>0$, such that if $\mathcal{N}_{x_i}(r_{\alpha})-\mathcal{N}_{x_i}(r_{\alpha+\lceil \epsilon^{-1} \rceil})<\delta$ then $\mathcal{X}$ is $\varepsilon$-close around $x_i$ at scale $r_\alpha$ to a metric soliton. Recall that since we are working in dimension 3 the only metric solitons are flat $\mathbb{R}^3$, the round shrinking $S^3$, the round shrinking $\mathbb{R}\times S^2$, as well as finite quotients thereof. 
Now, if $r_{\alpha_i}=S(x_i)$ is the largest spherical scale, then by monotonicity there must be some $\beta_i \in \{\alpha_{i}+1,\ldots,\alpha_{i}+N\lceil \epsilon ^{-1} \rceil \}$, where $N<\infty$ is {a positive integer} independent of $i$, such that $\mathcal{X}$ is $\varepsilon$-close around $x_i$ at scale $r_{\beta_i}$ to a metric soliton. {To see this, note that since the flow has a (quotient) neck singularity at $x$, by the semicontinuity of the Nash entropy from \cite[Proposition 4.37]{Bam3}, which is applicable thanks to our assumption that the scalar curvature is bounded below, we get a uniform entropy bound $\mathcal{N}_{x_i}(\tau_0)\geq -Y$ at some fixed scale $\tau_0>0$. Hence, if there were no $\beta_i$ such that $\mathcal{N}_{x_i}(r_{\beta_i})-\mathcal{N}_{x_i}(r_{\beta_i+\lceil \epsilon^{-1} \rceil})<\delta$, then for $i$ large enough by monotonicity we would obtain $\mathcal{N}_{x_i}(r_{\alpha_i+N\lceil \epsilon ^{-1} \rceil })\leq \mathcal{N}_{x_i}(r_{\alpha_i})-N\delta$, which yields a contradiction provided we choose $N>Y/\delta$.} Now, if $\mathcal{X}$ was $\varepsilon$-close around $x_i$ at scale $r_{\beta_i}$ to flat $\mathbb{R}^3$, then, provided $\varepsilon$ is small enough, the local regularity theorem \cite[Theorem {2.29}]{Bam3} would yield a contradiction with the assumption that $x_i$ is a singular point. Also, if $\mathcal{X}$ was $\varepsilon$-close around $x_i$ at scale $r_{\beta_i}$ to a round shrinking sphere or a round shrinking nontrivial spherical space form, then we would obtain a contradiction with the definition of $S(x_i)$ or with Hamilton's convergence theorem \cite{Ham}, respectively. Hence, $\mathcal{X}$ is $\varepsilon$-close around $x_i$ at scale $r_{\beta_i}$ to a round shrinking $\mathbb{R}\times S^2$ or one of its $\mathbb{Z}_2$-quotients.
This shows that $Z(x_i)/S(x_i)\leq 2^{N\lceil \epsilon ^{-1} \rceil } $, and thus proves \eqref{eq_bdd_ratio}.\\ Now, considering the rescaled flows $(\mathcal{X}^{x_i,1/S(x_i)}, (\nu^{x_i,1/S(x_i)}_s)_{s\le 0})$, by Bamler's compactness theory \cite{Bam2} we can pass to a subsequential limit $(\mathcal{X}^\infty,(\nu^\infty_{x_{\infty};s})_{s\leq 0})$. By \cite[Theorem {2.44}]{Bam3} (see also \cite{Per1,Per2}) any such 3-dimensional blowup limit $\mathcal{X}^\infty$ is $\kappa$-noncollapsed, has nonnegative curvature, and is smooth -- with bounded curvature on compact time intervals -- until it becomes extinct. Moreover, by construction $\mathcal{X}^\infty$ satisfies $S({x_{\infty}})\geq 1$ and $Z({x_{\infty}})\leq C$. Hence, by the recent classification of compact $\kappa$-solutions from Brendle-Daskalopoulos-Sesum \cite{BDS} we conclude that $\mathcal{X}^\infty$ must be an ancient oval. This proves the proposition. \end{proof} \bigskip \begin{prop}\label{prop2} If there are only finitely many spherical singularities near $x\in\mathcal{X}$, then Perelman's ancient ovals do not occur as blowup limit at $x$. \end{prop} \begin{proof} Suppose towards a contradiction that there are $x_i \to x$ and $\lambda_i\to\infty$ such that $(\mathcal{X}^{x_i,\lambda_i}, (\nu^{x_i,\lambda_i}_s)_{s\le 0})$ has as $\mathbb{F}$-limit an ancient oval, say $(\mathcal{X}^\infty , (\nu ^\infty_{x_{\max};s})_{s\le 0}) $, that becomes extinct at time zero. Since the time $-1$ slices converge smoothly by the local regularity theorem \cite[Theorem {2.29}]{Bam3}, and since the ancient ovals are compact with positive curvature, it follows that the flow $\mathcal{X}$ has a spherical singularity at some nearby space-time point $y_i \in \mathcal{X}$ with $y_i\to x$. Now, using the change of base-point theorem \cite[Theorem 6.40]{Bam2} we see that $(\mathcal{X}^{y_i,\lambda_i},(\nu^{y_i,\lambda_i}_s)_{s\le 0})$ still $\mathbb{F}$-converges to an ancient oval.
However, since there are only finitely many spherical singularities, we infer that $y_i=x$ for large $i$, and passing to a subsequential limit we obtain a tangent flow at $x$, which is selfsimilar. This is a contradiction, and thus proves the proposition. \end{proof} \bigskip We can thus conclude that our main theorem holds true: \begin{proof}[Proof of Theorem \ref{thm_equivalence}] Fix $x\in \mathcal{X}$. Suppose first that there is an accumulation of spherical singularities at $x$. Note that $x$ must be a singular point by the local regularity theorem \cite[Theorem {2.29}]{Bam3}. {Recall that since we are working in dimension 3, as a consequence of \cite[Theorem {2.44}]{Bam3} and \cite{Per2} the only nontrivial metric solitons are the round shrinking $S^3$ and the round shrinking $\mathbb{R}\times S^2$, as well as finite quotients thereof.} {Note that spherical space forms cannot occur as tangent flow at $x$, since round singularities are isolated. Indeed, as a consequence of Hamilton's classical theorem \cite{Ham}, at any spherical singularity $x$ the tangent flow is unique and becomes extinct in a unique singular point. Hence, if there was a sequence $x_i$ of singular points converging to $x$ in the natural topology, then for $i$ large enough we would have $x_i=x$.} Thus, any tangent flow at $x$ must be a round shrinking cylinder or one of its $\mathbb{Z}_2$-quotients, i.e. the flow $\mathcal{X}$ has a neck or quotient neck singularity at $x$. Hence, by Proposition \ref{prop1}, Perelman's ancient ovals occur as blowup limit at $x$. Conversely, if there is no accumulation of spherical singularities at $x$, then, by Proposition \ref{prop2}, Perelman's ancient ovals cannot occur as blowup limit at $x$. This proves the theorem. \end{proof} \bigskip \bibliographystyle{amsplain}
\section{Sample preparation} Polycrystalline samples of $\beta$-Li$_2$IrO$_3$ were synthesized from stoichiometric mixtures of Li$_2$CO$_3$ and IrO$_2$ in air at 1050\,$^{\circ}$C with several intermediate re-grindings. X-ray diffraction (XRD) data showed no impurity phases except for about 1\,wt.\% of $\alpha$-Li$_2$IrO$_3$, which does not affect any of the results presented in the manuscript. Single crystals of $\beta$-Li$_2$IrO$_3$ were grown from separated educts~\cite{freund2016} at 1020\,$^{\circ}$C using Li and Ir metals as starting materials. \section{X-ray diffraction} Powder XRD data were collected using the MiniFlex (Rigaku) and Empyrean (PANalytical) diffractometers with CuK$_{\alpha}$ radiation. The JANA2006 software~\cite{jana2006} was used for the Rietveld refinement. Unlike $\alpha$-Li$_2$IrO$_3$, which is prone to twinning and stacking faults~\cite{freund2016}, \mbox{$\beta$-Li$_2$IrO$_3$} shows a high degree of crystallinity and symmetric peak shapes. The powder profile could be fitted by a Lorentzian function without any additional corrections for strain broadening. Moreover, the peak width defined by the two Lorentzian parameters remained essentially unchanged after the powder sample was subjected to pressure treatment in the course of the $\mu$SR experiment (Table~\ref{tab:refinement}). This indicates a low amount of structural defects in the material and the absence of pressure-induced defects. Single-crystal XRD data were collected at the ID27 beamline of the ESRF, Grenoble, France (Perkin Elmer XRD1621 flat panel detector, sample-to-detector distance 418.8\,mm, $\lambda=0.3738$\,\r A, x-ray spot size $2.6\times 2.6$\,$\mu$m$^2$). XRD images were collected during continuous rotation of the diamond anvil cell (DAC), typically from $-20$ to $+20^{\circ}$ in $\omega$, while data-collection runs were performed with narrow $0.5^{\circ}$ scanning steps over the $\omega$ range from $-30$ to $+30^{\circ}$.
A DAC equipped with 350\,$\mu$m Boehler-Almax diamonds was used for pressure generation. Two single crystals of $\beta$-Li$_2$IrO$_3$ of about $15\times 15\times 10$\,$\mu$m$^3$ size, together with a small ruby chip (for pressure determination), were loaded into a hole of a pre-indented rhenium gasket. Helium was used as a pressure-transmitting medium. Only one of the two crystals was of sufficient quality for structure solution and refinement. Integration of the reflection intensities was performed using the CrysAlisPro software~\cite{crysalispro}. A single crystal of an orthoenstatite [(Mg$_{1.93}$,Fe$_{0.06}$)(Si$_{1.93}$,Al$_{0.06}$)O$_6$, $Pbca$, $a=8.8117(2)$, $b=5.18320(10)$, $c=18.2391(3)$\,\r A] was used to calibrate the instrument model of the CrysAlisPro software (sample-to-detector distance, the detector's origin, offsets of the goniometer angles, and rotation of the X-ray beam and the detector around the instrument axis). The structures were solved with SHELXT~\cite{shelxt} and refined against $F^2$ on all data by the full-matrix least-squares method with SHELXL~\cite{shelxl}. During the compression, no significant change in the crystal quality ($R_{\rm int}$ and sample mosaicity) was observed. The absolute values of the sample mosaicity representing average peak widths remained almost unchanged (Table~\ref{tab:xrd2}), confirming that no pressure-induced defects occur in $\beta$-Li$_2$IrO$_3$. Note that the $e_3$ parameter describes the reflection width in the scanning ($\omega$) direction and is therefore always larger than the scan width ($0.5^{\circ}$). Reciprocal space images (Figure~\ref{fig:xrd}) further confirm the unchanged crystal quality under pressure. \begin{table*} \caption{\label{tab:xrd2} Integration quality ($R_{\rm int}$) and crystal mosaicity ($e_1,e_2,e_3$) in single-crystal XRD measurements.
} \begin{ruledtabular} \begin{tabular}{ccccc} & 0\,GPa & 1.08\,GPa & 2.40\,GPa & 3.45\,GPa \\ $R_{\rm int}$ (\%) & 4.80 & 6.46 & 6.19 & 5.12 \\ $e_1/e_2/e_3$ & 0.11/0.11/0.58 & 0.11/0.11/0.52 & 0.11/0.11/0.50 & 0.12/0.11/0.57 \end{tabular} \end{ruledtabular} \end{table*} \begin{table} \caption{\label{tab:refinement} Rietveld refinement results for $\beta$-Li$_2$IrO$_3$ samples before and after pressure treatment in the course of the $\mu$SR experiment. The lattice parameters $a$, $b$, and $c$, as well as the Lorentzian profile parameters LX and LY are listed. The error bars are from the least-squares refinement against all data points of the XRD profile and thus smaller than the actual error bars. } \begin{ruledtabular} \begin{tabular}{cccccc} Sample & $a$ & $b$ & $c$ & LX & LY \\\hline before & 5.90444(4) & 8.44958(7) & 17.8117(2) & 5.12(7) & 4.3(2) \\ after & 5.90470(3) & 8.45012(5) & 17.8128(1) & 4.88(6) & 5.1(1) \\ \end{tabular} \end{ruledtabular} \end{table} \begin{figure} \includegraphics[width=8.7cm]{S1} \caption{\label{fig:xrd} Reciprocal space images of the $\beta$-Li$_2$IrO$_3$ crystal under pressure. } \end{figure} \section{Magnetization measurements} The bulk magnetization measurements under hydrostatic pressure~\cite{tateiwa2011,tateiwa2013} were performed on a polycrystalline sample in a CuBe pressure cell placed inside a commercial SQUID magnetometer from Quantum Design. All measurements were performed upon compression, whereas no pressure control during decompression was possible. The highest feasible pressure was about 2.0\,GPa. Daphne 7373 oil was used as a pressure-transmitting medium. One small piece of lead ($\sim0.1$\,mg) was placed together with the sample inside the pressure cell and another piece ($\sim0.1$\,mg) was placed outside the pressure cell. Under pressure, the superconducting transition temperature of the inner lead sample decreases.
The difference between the superconducting transition temperatures of the two lead samples determines the pressure inside the cell. The gasket of the pressure cell contained a sample of mass $\sim1$\,mg and a lead piece of mass $\sim0.1$\,mg. The empty-cell background was subtracted~\cite{tateiwa2011} using the automatic background subtraction (ABS) procedure described in Ref.~\onlinecite{MPMS}. Measurements of the lead and of the sample were performed in fields of 2\,mT and 1\,T, respectively. Measurements under pressure were reproduced several times. Field-cooled and zero-field-cooled scans were performed at ambient pressure and showed no dependence on the cooling regime. Above 1.4\,GPa, the signal of the sample was at the sensitivity limit of our measurement setup even in the field of 1\,T, so we can neither confirm nor exclude a dependence of the magnetization on the cooling history, as expected in the glassy phase pinpointed by $\mu$SR below 15\,K. \section{Specific heat and thermal expansion} Specific heat measurements were carried out on a polycrystalline sample in the Quantum Design PPMS with the thermal relaxation method. Thermal expansion was measured by high-resolution capacitive dilatometry, enabling the detection of length changes $\Delta L(T)$ smaller than $0.05$\,\AA~over a sample with length $L_0$ of several\,mm~\mbox{\cite{barron1980,manna2012,kuechler2012}}. We utilized the dilatometer of Ref.~\onlinecite{kuechler2012} in the multi-function probe of the PPMS. The linear thermal expansion coefficient $\alpha=d[\Delta L(T)/L_0]/dT$ was determined from the differential length change over temperature intervals of 0.5\,K. Measurements were done on a pressed pellet of 2.1\,mm length. Pellets were pressed inside a glove box to avoid air trapping inside the pellet. Two different pellets from different batches were measured to check the reproducibility.
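As a minimal numerical sketch (not the authors' analysis code; the function and variable names are ours), the definition $\alpha=d[\Delta L(T)/L_0]/dT$ amounts to a finite-difference slope of the relative length change over fixed temperature intervals:

```python
import numpy as np

def expansion_coefficient(T, dL_over_L0):
    """Estimate alpha = d[Delta L(T)/L0]/dT by finite differences.

    T          : temperatures in K, monotonically increasing
    dL_over_L0 : relative length change Delta L(T)/L0 at each temperature
    Returns the midpoint temperatures and alpha on each interval.
    """
    T = np.asarray(T, dtype=float)
    x = np.asarray(dL_over_L0, dtype=float)
    alpha = np.diff(x) / np.diff(T)       # slope over each interval
    T_mid = 0.5 * (T[:-1] + T[1:])        # midpoint temperature of each interval
    return T_mid, alpha

# Toy check: a linear Delta L/L0 gives a constant alpha
T = np.arange(2.0, 20.0, 0.5)             # 0.5 K steps, as in the experiment
T_mid, alpha = expansion_coefficient(T, 1e-6 * T)
```

With the 0.5\,K intervals quoted above, each `alpha` value is the differential length change divided by 0.5\,K, reported at the interval midpoint.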
Thermal expansion data were taken upon warming with a temperature sweep rate of $+0.3$\,K/min. Isothermal field sweeps, i.e., magnetostriction measurements, were performed up to 14\,T with a field sweep rate of $+120$\,mT/min. \section{$\mu$SR experiments} Ambient-pressure $\mu$SR experiments were carried out at the HIFI beamline of ISIS, UK, and at the Dolly spectrometer of PSI, Switzerland. Pressure experiments were performed at the GPD spectrometer of PSI, Switzerland. The $\mu$SR time spectra were analyzed using the MUSRFIT software package. A 2\,g polycrystalline sample was used. To generate high pressure, a double-wall piston-cylinder-type cell manufactured from MP35 alloy was used~\cite{khasanov2016}. This allowed for a significant sample volume, high enough pressure, and a temperature-independent background in the range studied. The momentum of the incoming muons was chosen to optimize the stopping of the muons within the sample region. Daphne 7373 oil was used to transmit and distribute the pressure. The pressure was applied at room temperature. It was additionally measured at low temperatures by monitoring the pressure-induced shift of the superconducting transition temperature of indium. The total $\mu$SR signal presented here consists of two contributions, \begin{equation} A(t) = A_{\rm PC}P_{\rm PC}(t) + A_SP_S(t) \end{equation} where $A_{\rm PC}$ and $A_S$ represent the asymmetries of the signals coming from the pressure cell and from the sample itself, and $P_{\rm PC}(t)$ and $P_S(t)$ are the corresponding depolarization functions of time $t$. The contribution of the signal from the pressure cell is around 50\% and has been kept constant as a function of temperature and pressure. A Kubo-Toyabe depolarization function was used as $P_{\rm PC}$. The extracted parameters are the same as those expected for an empty pressure cell~\cite{khasanov2016}.
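For illustration, the static Gaussian Kubo-Toyabe function commonly used for such pressure-cell contributions has the standard closed form below. This is a generic sketch, not the MUSRFIT implementation, and the name of the field-distribution width parameter is ours:

```python
import numpy as np

def kubo_toyabe(t, delta):
    """Static Gaussian Kubo-Toyabe depolarization:
    G(t) = 1/3 + (2/3)(1 - delta^2 t^2) exp(-delta^2 t^2 / 2),
    where delta is the width of the static local-field distribution
    (in inverse time units).
    """
    x = (delta * np.asarray(t, dtype=float)) ** 2
    return 1.0 / 3.0 + (2.0 / 3.0) * (1.0 - x) * np.exp(-x / 2.0)
```

It satisfies $G(0)=1$ and relaxes to the characteristic $1/3$ long-time tail expected for static, randomly oriented local fields, which is why it is a natural model for the nonmagnetic cell material.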
\begin{figure} {\centering {\includegraphics[width=0.53\textwidth]{S3}}\par} \caption{Temperature dependence of the three oscillating frequencies estimated from the measurement at ISIS and also from the measurements at PSI with and without the pressure cell, all performed at ambient pressure. The solid lines are described in the main text.} \label{fig:oscillations} \end{figure} At first, we performed ambient-pressure experiments with the pressure cell. We used the function described in the main text as $P_S$, and confirmed that both the temperature dependence and the magnitudes of the three oscillation frequencies are the same as in the measurements without the pressure cell (Fig.~\ref{fig:oscillations}). Three oscillation frequencies were resolved from the PSI data, whereas only two were visible in the ISIS data; these two are quite close to the PSI values. The third, highest frequency was missed at ISIS because the PSI spectrometers detect muons at much shorter times than is possible at ISIS. \begin{figure} {\centering {\includegraphics[width=0.4\textwidth]{S2}}\par} \caption{$\mu$SR time spectra in the presence of a weak transverse magnetic field of 50\,G at ambient pressure and the highest pressure, at temperatures below and above the ordering temperatures.} \label{fig:tf} \end{figure} The upper panel of Figure~\ref{fig:tf} displays the weak transverse field (WTF) measurements at ambient pressure at 45\,K (above the ordering temperature) and 10\,K (below the ordering temperature), whereas the lower panel shows the same data for the pressure of 2.27\,GPa. To deconvolute the $\mu$SR time spectra, we used the following equation \begin{equation} A(t) = A_0 \cos(\omega t)\, e^{-\lambda t} \end{equation} where $\omega$ corresponds to the applied field of 50\,G. The temperature dependence of the amplitude $A_0$ gives the non-magnetic volume fraction.
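The WTF fit can be sketched as follows; $\omega$ is fixed by the applied field through the muon gyromagnetic ratio ($\gamma_\mu/2\pi \approx 135.5$\,MHz/T), and the noisy spectrum below is synthetic, standing in for the measured asymmetry:

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA_MU = 2 * np.pi * 135.5386   # muon gyromagnetic ratio, rad us^-1 T^-1

def wtf(t, A0, lam):
    omega = GAMMA_MU * 50e-4       # 50 G = 5 mT applied transverse field
    return A0 * np.cos(omega * t) * np.exp(-lam * t)

# Synthetic noisy spectrum (illustrative amplitude and relaxation rate)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 8.0, 400)                       # microseconds
data = wtf(t, 0.25, 0.3) + rng.normal(0.0, 0.005, t.size)

(A0, lam), _ = curve_fit(wtf, t, data, p0=[0.2, 0.1])
# The temperature dependence of the fitted A0 tracks the non-magnetic volume fraction.
```

Only $A_0$ and $\lambda$ are free parameters; fixing $\omega$ to the known field value keeps the fit stable.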
From Figure~\ref{fig:tf} it can be clearly seen that the volume fraction of static spins at 10\,K decreases under the applied pressure of 2.27\,GPa compared to ambient pressure. This is compatible with the increased fraction of paramagnetic spins in Fig.~3a of the manuscript. \section{Crystal structure under pressure} Precise structure determination for $\beta$-Li$_2$IrO$_3$ is hindered by the large difference in the scattering powers of Ir and the light elements (Li and O). Ambient-pressure crystal structures reported in Refs.~\onlinecite{biffin2014,takayama2015} show small but not insignificant differences in the lattice parameters and oxygen positions, which in turn influence the Ir--O distances and Ir--O--Ir angles underlying magnetic exchange. For example, Ref.~\onlinecite{takayama2015} reports nearly ideal IrO$_6$ octahedra with the Ir--O distances of 2.025(3), 2.024(3), and 2.027(3)\,\r A, whereas the crystal structure of Ref.~\onlinecite{biffin2014} displays a somewhat asymmetric oxygen environment with the Ir--O distances of 2.01(3), 2.04(2), and 2.07(4)\,\r A. Moreover, the Ir--O--Ir angles for the $X/Y$- and $Z$-type Kitaev bonds are, respectively, $94.4(1)^{\circ}$ and $94.7(2)^{\circ}$ in Ref.~\onlinecite{takayama2015}, compared to $92.5(4)^{\circ}$ and $95.2(7)^{\circ}$ in Ref.~\onlinecite{biffin2014}. While the difference in the lattice parameters can be traced back to the different temperatures of the experiments (100\,K~\cite{biffin2014} vs. 296\,K~\cite{takayama2015}), atomic parameters are unlikely to change significantly upon cooling. These discrepancies result from the lower precision of the oxygen positions in Ref.~\onlinecite{biffin2014}, as also seen from the much larger error bars. This problem is rooted in the lower number of independent reflections used in the refinement, 298 vs. 1248. The pressure cell reduces the accessible part of the reciprocal space and thus the number of independent reflections to about 105 in our data.
While this would be enough to refine 7 structural parameters (positions of Li, Ir, and O) and 3 atomic displacement parameters (Ir, O1, and O2), we expect only a moderate accuracy for the positions of light atoms. Indeed, the refinement of the XRD data leads to realistic Ir--O distances of $2.0-2.1$\,\r A, but the Ir--O--Ir angles show a large scatter and no systematic change under pressure, similar to Ref.~\onlinecite{veiga2017}. To circumvent this problem, we undertook a combined approach, with the lattice parameters and Ir positions determined by XRD, while Li and O positions were refined \textit{ab initio}. \begin{table} \caption{\label{tab:structures} Crystallographic parameters of $\beta$-Li$_2$IrO$_3$ under pressure listed for the space group $Fddd$ (setting 2). The $z$-coordinate and atomic displacement parameter $U_{\rm iso}$ (in\,\r A$^2$) of Ir are determined from the refinement of single-crystal XRD data. The oxygen and Li positions are further refined \textit{ab initio}, as explained in the text. The last two lines list \mbox{Ir--Ir} distances (in\,\r A) and Ir--O--Ir bridging angles (in~deg) for the \mbox{$X$,$Y$-/$Z$-}type bonds of the hyperhoneycomb lattice (see also Fig.~\ref{fig:structure}). 
} \begin{ruledtabular} \begin{tabular}{ccccc} & 0\,GPa & 1.08\,GPa & 2.4\,GPa & 3.45\,GPa \\ $a$ (\r A) & 5.9004(3) & 5.8816(3) & 5.8614(3) & 5.8475(4) \\ $b$ (\r A) & 8.4457(5) & 8.4054(5) & 8.3590(4) & 8.3147(5) \\ $c$ (\r A) & 17.795(14) & 17.736(17) & 17.687(15) & 17.618(19) \\ $z$(Ir) & 0.7086(2) & 0.7084(2) & 0.7082(3) & 0.7086(3) \\\smallskip $U_{\rm iso}$(Ir) & 0.016(3) & 0.010(3) & 0.010(4) & 0.009(3) \\ $x$(O1) & 0.8596 & 0.8612 & 0.8642 & 0.8625 \\ $x$(O2) & 0.6316 & 0.6314 & 0.6312 & 0.6303 \\ $y$(O2) & 0.3648 & 0.3657 & 0.3667 & 0.3676 \\ $z$(O2) & 0.0384 & 0.0386 & 0.0387 & 0.0390 \\ $z$(Li1) & 0.0454 & 0.0454 & 0.0453 & 0.0457 \\\smallskip $z$(Li2) & 0.8718 & 0.8783 & 0.8781 & 0.8782 \\ $d_{\rm Ir-Ir}$ & 2.967/2.975 & 2.959/2.958 & 2.950/2.943 & 2.930/2.946 \\ $\varphi_{\rm Ir-O-Ir}$ & 94.3/94.1 & 94.0/93.6 & 93.8/93.2 & 93.2/93.4 \\ \end{tabular} \end{ruledtabular} \end{table} The VASP code~\cite{vasp1,vasp2} was used for crystal structure optimization. Details of the relaxed structures strongly depend on the underlying approximation. We tested several exchange-correlation potentials and different settings for the spin-orbit coupling and correlation effects. Similar to Ref.~\onlinecite{hermann2018}, calculations without the spin-orbit coupling and Hubbard $U_d$ resulted in structural dimerization. The experimental ambient-pressure crystal structure of Ref.~\onlinecite{takayama2015} is well reproduced only on the DFT+$U$+SO level, whereas the choice of the exchange-correlation potential and the change in the $U_d$ value had only a minor effect. The best agreement was found for $U_d=3$\,eV, $J_d=0.5$\,eV, and the PBEsol exchange-correlation potential~\cite{perdew2008}. The same methodology was then used for relaxing the Li and O positions under pressure, whereas the Ir atoms were kept fixed at their experimental positions. The resulting structural parameters are summarized in Table~\ref{tab:structures}. Two effects are worth noting.
First, by combining the XRD determination of the Ir position with the \textit{ab initio} refinement of the Li and O coordinates, we obtain a rather monotonic evolution of the Ir--O--Ir angles, which allows us to track changes in the exchange couplings. The structural changes are well in line with earlier predictions based on DFT~\cite{kim2016}. Second, the atomic displacement parameter of Ir remains nearly constant under pressure, thus excluding the formation of local Ir--Ir dimers up to at least 3.45\,GPa. This proves that the spin-liquid state pinpointed in our $\mu$SR experiment occurs on the undistorted hyperhoneycomb lattice of $\beta$-Li$_2$IrO$_3$. \section{Exchange couplings} All exchange parameters are given in the global coordinate frame defined as \begin{equation} \mathbf X=(\mathbf e_a+\mathbf e_c)/\sqrt 2,\,\,\, \mathbf Y=(\mathbf e_c-\mathbf e_a)/\sqrt 2,\,\,\, \mathbf Z=-\mathbf e_b, \end{equation} where $\mathbf e_a,\mathbf e_b$, and $\mathbf e_c$ are unit vectors along the $a$, $b$, and $c$ crystallographic directions, respectively. The presence of Kitaev interactions divides the nearest-neighbor bonds into the $X$-, $Y$-, and $Z$-types, with the symmetric part of the exchange written as follows~\cite{winter2016}, \begin{widetext} \begin{equation*} \mathbb J_X\!=\!\left(\begin{array}{ccc} J_{XY}+K_{XY} & \Gamma_{XY}'+\zeta & \Gamma_{XY}'-\zeta \\ \Gamma_{XY}'+\zeta & J_{XY}+\xi & \Gamma_{XY} \\ \Gamma_{XY}'-\zeta & \Gamma_{XY} & J_{XY}-\xi \end{array}\right),\,\, \mathbb J_Y\!=\!\left(\begin{array}{ccc} J_{XY}+\xi & \Gamma_{XY}'+\zeta & \Gamma_{XY} \\ \Gamma_{XY}'+\zeta & J_{XY}+K_{XY} & \Gamma_{XY}'-\zeta \\ \Gamma_{XY} & \Gamma_{XY}'-\zeta & J_{XY}-\xi \end{array}\right),\,\, \mathbb J_Z\!=\!\left(\begin{array}{ccc} J_Z & \Gamma_Z & 0 \\ \Gamma_Z & J_Z & 0 \\ 0 & 0 & J_Z+K_Z \end{array}\right).
\end{equation*} \end{widetext} \begin{figure} \includegraphics{S4} \caption{\label{fig:structure} Crystal structure of $\beta$-Li$_2$IrO$_3$ (left) and the hyperhoneycomb spin lattice (right). The crystallographic coordinate frame $abc$ and the spin coordinate frame $XYZ$ are shown. } \end{figure} The $X$- and $Y$-bonds are related by symmetry and thus feature the same values of the exchange parameters, although the signs of the off-diagonal terms change from one bond to another following the symmetry transformations of the $\beta$-Li$_2$IrO$_3$ structure. In particular, the $\Gamma_{XY}$-term changes sign, see Ref.~\onlinecite{ducatman2018} for further details. The $X$- and $Y$-bonds adopt the $C_i$ symmetry that forbids antisymmetric exchange. The $Z$-bonds adopt the higher $D_2$ symmetry that sets to zero all off-diagonal terms other than $\Gamma$, and renders the sign of $\Gamma_Z$ constant throughout the lattice. Therefore, the sign of $\Gamma$ can be defined unambiguously as the sign of $\Gamma_Z$ within the given coordinate frame, and it is this sign that defines the $\Gamma<0$ or $\Gamma>0$ regimes of the $J-K-\Gamma$ model on the hyperhoneycomb lattice~\cite{lee2015,rousochatzakis2017}. Our choice of $\mathbf X$, $\mathbf Y$, and $\mathbf Z$ follows that of Refs.~\onlinecite{lee2015,rousochatzakis2017,ducatman2018}, thus facilitating a direct comparison to theory. The absence of inversion symmetry on the $Z$-bond allows an antisymmetric Dzyaloshinsky-Moriya interaction of the form $(D,D,0)$.
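As a sanity check on the conventions above, a short numpy sketch verifying that the spin frame $(\mathbf X,\mathbf Y,\mathbf Z)$ built from the crystallographic axes is orthonormal and right-handed (assuming orthonormal unit vectors $\mathbf e_a$, $\mathbf e_b$, $\mathbf e_c$ of the orthorhombic cell):

```python
import numpy as np

e_a, e_b, e_c = np.eye(3)        # unit vectors along a, b, c (orthorhombic cell)

X = (e_a + e_c) / np.sqrt(2)
Y = (e_c - e_a) / np.sqrt(2)
Z = -e_b

R = np.vstack([X, Y, Z])         # rows = spin-frame axes in crystal coordinates
# Orthonormal and right-handed: R R^T = 1 and det R = +1
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```

The matrix `R` is exactly the rotation one would apply to transform spin components between the crystallographic and the Kitaev ($XYZ$) frames.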
\subsection{DFT} Full exchange tensors were calculated using atomic positions from Table~\ref{tab:structures} within the second-order perturbation theory~\cite{winter2016} in electronic correlations ($U_{\rm eff}=1.7$\,eV, $J_H=0.3$\,eV) and spin-orbit coupling ($\lambda=0.4$\,eV)~\footnote{Note that $U_{\rm eff}$ and $J_H$ are smaller than, respectively, $U_d$ and $J_d$ of DFT+$U$+SO, because $U_{\rm eff}$ and $J_H$ correspond to the mixed Ir--O states in the valence band, whereas $U_d$ and $J_d$ are applied to the Ir $5d$ states only.}. Hopping parameters within the $t_{2g}$ manifold were obtained in the FPLO code~\cite{fplo} on the scalar-relativistic level of local density approximation (LDA)~\cite{pw92} by constructing Wannier functions via the internal procedure of FPLO~\cite{wannier}. \begin{table} \caption{\label{tab:dft} Nearest-neighbor exchange parameters (in\,meV) from second-order perturbation theory (DFT). } \begin{ruledtabular} \begin{tabular}{ccccc} Pressure (GPa) & $K_Z$ & $J_Z$ & $\Gamma_Z$ & $D$ \\ 0 & $-10.52$ & $-5.38$ & $-13.63$ & 0.56 \\ 1.08 & $-7.74$ & $-6.60$ & $-15.28$ & 0.47 \\ 2.40 & $-5.52$ & $-7.50$ & $-16.71$ & 0.40 \\ 3.45 & $-5.91$ & $-7.24$ & $-15.93$ & 0.37 \\ \end{tabular} \end{ruledtabular} \vspace{0.1cm} \begin{ruledtabular} \begin{tabular}{ccccccc} Pressure (GPa) & $K_{XY}$ & $J_{XY}$ & $\Gamma_{XY}$ & $\Gamma_{XY}'$ & $\xi$ & $\zeta$ \\ 0 & $-12.10$ & $-4.76$ & $-13.53$ & 0.32 & $-0.10$ & 0.69 \\ 1.08 & $-10.14$ & $-5.49$ & $-14.31$ & 0.21 & $-0.13$ & 0.66 \\ 2.40 & $-8.80$ & $-6.11$ & $-15.06$ & 0.09 & $-0.10$ & 0.59 \\ 3.45 & $-5.70$ & $-7.50$ & $-17.12$ & 0.31 & $-0.10$ & 0.51 \\ \end{tabular} \end{ruledtabular} \end{table} Table~\ref{tab:dft} lists all nearest-neighbor interactions in \mbox{$\beta$-Li$_2$IrO$_3$}. 
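For orientation only (this is not the full second-order treatment with spin-orbit coupling used above), the overall energy scale of such couplings follows the familiar superexchange estimate $J \sim 4t^2/U_{\rm eff}$; a toy sketch with a hypothetical hopping amplitude $t$:

```python
# Rough superexchange scale; NOT the full perturbative derivation used in the text.
U_eff = 1.7e3    # effective Coulomb repulsion in meV (value quoted in the text)
t = 75.0         # hypothetical t2g hopping amplitude in meV (illustrative)

J_scale = 4 * t**2 / U_eff   # of order 10 meV, comparable to the tabulated couplings
```

A hopping of order $10^2$\,meV against $U_{\rm eff}=1.7$\,eV indeed lands on the $\sim 10$\,meV scale of the nearest-neighbor parameters listed in the tables, though the anisotropic structure ($K$, $\Gamma$, $D$) only appears once the spin-orbit coupling is included.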
Additionally, we calculated the couplings between second and third neighbors, which are all below 0.5\,meV except for $J_3$, which reaches 2.66\,meV at ambient pressure and increases to 3.39\,meV at 3.45\,GPa. Although non-negligible, $J_3$ is still weaker than the nearest-neighbor $\Gamma$ and $K$, which justifies neglecting these couplings to a first approximation. \subsection{Quantum chemistry} The material model was based on embedded clusters with two edge-sharing octahedra as the central region. The four nearest-neighbor octahedra were also explicitly included in the calculations in order to describe the finite charge distribution in the immediate neighborhood, while the solid-state surroundings were modeled by an array of point charges fitted to reproduce the ionic Madelung potential in the cluster region. Energy-consistent relativistic pseudopotentials along with quadruple-zeta basis functions were used for the Ir\,\cite{Figgen09} ions of the central unit. All-electron basis sets of quintuple-zeta quality were employed for the bridging O\,\cite{Dunning89} ligands, while all-electron basis sets of triple-zeta quality were used for the remaining O sites\,\cite{Dunning89} present in the two-octahedra central region. Ir$^{4+}$ sites belonging to the octahedra adjacent to the reference unit were described as closed-shell Pt$^{4+}$ ions, using relativistic pseudopotentials and valence triple-zeta basis functions\,\cite{Figgen09}. Ligands of these adjacent octahedra that are not shared with the central reference unit were modeled with minimal all-electron atomic-natural-orbital basis sets \cite{Pierloot95}. All calculations were performed using the quantum chemistry package {\sc molpro}\,\cite{Molpro12}. Results of the spin-orbit MRCI calculations for the nearest-neighbor effective couplings are listed in Table\,\ref{QC_table}. Details of the mapping procedure are described in, e.\,g., Ref.\,\cite{yadav16}.
For the $X$- and $Y$-bonds we neglect small lattice distortions that reduce the point-group symmetry from $C_{2h}$ to $C_i$. This translates to setting $\xi$ and $\Gamma_{XY}'$ to zero, an approximation that finds support in the fact that $\xi$ and $\Gamma_{XY}'$ are the smallest parameters in the DFT-based derivation (Table~\ref{tab:dft}). \begin{table}[t] \caption{\label{tab:qchem} Nearest-neighbor exchange parameters (in\,meV) from quantum chemistry calculations, see text for details.} \label{QC_table} \begin{ruledtabular} \begin{tabular}{ccccc} Pressure\,(GPa) & $K_Z$ & $J_Z$ & $D$ & $\Gamma_Z$ \\ $0$ & $-12.75$ & $-0.18$ & $0.64$ & $-2.88$ \\ $1.08$ & $-11.80$ & $-0.53$ & $0.75$ & $-3.20$ \\ $2.40$ & $-11.24$ & $-0.79$ & $0.81$ & $-3.56$ \\ $3.45$ & $-11.51$ & $-0.76$ & $0.77$ & $-3.43$ \\ \end{tabular} \end{ruledtabular} \vspace{0.1cm} \begin{ruledtabular} \begin{tabular}{ccccc} Pressure\,(GPa) & $K_{XY}$ & $J_{XY}$ & $\Gamma_{XY}$ & $\zeta$ \\ $0$ & $-12.62$ & $-0.40$ & $-3.90$ & $-0.38$ \\ $1.08$ & $-12.01$ & $-0.76$ & $-4.11$ & $-0.48$ \\ $2.40$ & $-11.30$ & $-1.20$ & $-4.34$ & $-0.60$ \\ $3.45$ & $-9.99$ & $-1.76$ & $-4.92$ & $-0.67$ \\ \end{tabular} \end{ruledtabular} \end{table} The MRCI results put forward $K$ as the leading term, whereas DFT yields an even stronger $\Gamma$. A similar discrepancy has been reported for $\alpha$-Li$_2$IrO$_3$~\cite{winter2016,nishimoto2016} and requires further investigation going beyond the scope of our present study. At this point, we only mention that, despite differences on the quantitative level, both DFT and quantum chemistry yield similar pressure evolution of $J$, $K$, and $\Gamma$. These trends, the enhancement of $J$ and $\Gamma$ and the weakening of $K$, are also consistent with the structural changes reported in Table~\ref{tab:structures}, because the reduction in the Ir--O--Ir angles toward $90^{\circ}$ should indeed reduce $K$ \cite{winter2016,nishimoto2016} and enhance $J$ and $\Gamma$~\cite{winter2016}. 
\end{document}
\section{Introduction} Distributed word representations, also known as word vectors, have been widely used in natural language processing, leading to state-of-the-art results for many tasks. Publicly available models, which are pre-trained on large amounts of data, have become a standard tool for many NLP applications, but are mostly available for English. While different techniques have been proposed to learn such representations~\cite{collobert2008unified,mikolov2013distributed,pennington2014glove}, all rely on the \emph{distributional hypothesis} -- the idea that the meaning of a word is captured by the contexts in which it appears. Thus, the quality of word vectors directly depends on the amount and quality of data they were trained on. A common source of data to learn word representations, available in many languages, is the online encyclopedia Wikipedia~\cite{al2013polyglot}. This provides high quality data which is comparable across languages. Unfortunately, for many languages, the size of Wikipedia is relatively small, and often not enough to learn high quality word vectors with wide coverage. An alternative source of large scale text data is the web, through resources such as the Common Crawl. While they provide noisier data than Wikipedia articles, they come in larger amounts and with a broader coverage. In this work, we contribute high quality word vectors trained on Wikipedia and the Common Crawl corpus, as well as three new word analogy datasets. We collected training corpora for 157 languages, using Wikipedia and Common Crawl. We describe in detail the procedure for splitting the data by language and pre-processing it in Section~2. Using this data, we trained word vectors using an extension of the fastText model with subword information~\cite{bojanowski2017enriching}, as described in Section~3. In Section~4, we introduce three new word analogy datasets for French, Hindi and Polish and evaluate our word representations on word analogy tasks.
Overall, we evaluate our word vectors on 10 languages: Czech, German, Spanish, Finnish, French, Hindi, Italian, Polish, Portuguese and Chinese. Our models for 157 languages other than English are available at \url{https://fasttext.cc}. \paragraph{Related work.} In previous work, word vectors pre-trained on large text corpora have been released alongside open source implementations of word embedding models. English word vectors trained on a part of the Google News dataset (100B tokens) were published with \texttt{word2vec}~\cite{mikolov2013distributed}. \newcite{pennington2014glove} released \texttt{GloVe} models trained on Wikipedia, Gigaword and Common Crawl (840B tokens). A notable effort is the work of~\newcite{al2013polyglot}, in which word vectors have been trained for 100 languages using Wikipedia data. \section{Training Data} \label{sec:data} We train our word vectors using datasets composed of a mixture of Wikipedia and Common Crawl. \subsection{Wikipedia} Wikipedia is the largest free online encyclopedia, available in more than 200 different languages. Because the articles are curated, the corresponding text is of high quality, making Wikipedia a great resource for (multilingual) natural language processing. It has been applied to many different tasks, such as information extraction~\cite{wu2010open} or word sense disambiguation~\cite{mihalcea2007using}. We downloaded the XML Wikipedia dumps from September 11, 2017. The first preprocessing step is to extract the text content from the XML dumps. For this purpose, we used a modified version of the \texttt{wikifil.pl} script\footnote{\url{http://mattmahoney.net/dc/textdata.html}} from Matt Mahoney. Even though Wikipedia is available for more than 200 languages, many dumps are relatively small in size (compared to the English one). As an example, some widely spoken languages, such as Hindi, have relatively small Wikipedia data (39 million tokens).
Overall, 28 languages contain more than 100 million tokens, and 82 languages contain more than 10 million tokens. We give the number of tokens for the largest Wikipedias in Table~\ref{tab:wikisize}. For these reasons (and the fact that Wikipedia is restricted to encyclopedic domains), we decided to also use data from the Common Crawl to train our word vectors. \begin{table}[t] \centering \begin{tabular}{lrr} \toprule Language & \# tokens & \# words \\ \midrule German & 1,384,170,636 & 3,005,294 \\ French & 1,107,636,871 & 1,668,310 \\ Japanese & 998,774,138 & 916,262 \\ Russian & 823,849,081 & 2,230,231 \\ Spanish & 797,362,600 & 1,337,109 \\ Italian & 702,638,442 & 1,169,177 \\ Polish & 386,874,622 & 1,298,250 \\ Portuguese & 386,107,589 & 815,284 \\ Chinese & 374,650,371 & 1,486,735 \\ Czech & 178,516,890 & 784,896 \\ Finnish & 127,176,620 & 880,713 \\ Hindi & 39,733,591 & 183,211 \\ \bottomrule \end{tabular} \caption{ Comparison of the size of the Wikipedia corpora for selected languages. The last column indicates the number of words which appear at least five times in the corpus. } \label{tab:wikisize} \end{table} \subsection{Common Crawl} Common Crawl is a non-profit organization which crawls the web and makes the resulting data publicly available. This large scale corpus was previously used to estimate $n$-gram language models~\cite{buck2014ngram} or to learn English word vectors~\cite{pennington2014glove}. To the best of our knowledge, it has not yet been used to learn word vectors for a large set of languages. The data is distributed either as raw HTML pages, or as WET files which contain the extracted text data, converted to UTF-8. We decided to use the extracted text data, as it is much smaller in size and easier to process (no need to remove HTML). We downloaded the May 2017 crawl, corresponding to roughly 24 terabytes of raw text data.
\begin{table}[t] \centering \setlength\tabcolsep{4pt} \begin{tabular}{lcccccc} \toprule & \multicolumn{2}{c}{TCL} & \multicolumn{2}{c}{Wikipedia} & \multicolumn{2}{c}{EuroGov} \\ \cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7} Model & Acc. & Time & Acc. & Time & Acc. & Time \\ \midrule \texttt{langid.py} & 93.1 & 8.8 & 91.3 & 9.4 & 98.7 & 13.1 \\ \texttt{fastText} & 94.7 & 1.3 & 93.0 & 1.3 & 98.7 & 2.9 \\ \bottomrule \end{tabular} \setlength\tabcolsep{6pt} \caption{Accuracy and processing time of our language detector and \texttt{langid.py} on three publicly available datasets. The TCL dataset was converted to UTF-8.} \label{tab:langdect} \end{table} \paragraph{Language Identification.} The first preprocessing step consists of splitting the data based on the language. As noted by \newcite{buck2014ngram}, some pages contain text in different languages. We thus decided to detect the language of each line independently. For this purpose, we built a fast language detector using the \texttt{fastText} linear classifier~\cite{joulin2017bag}, which can recognize 176 languages. We used 400 million tokens from Wikipedia (described in the previous section) as well as sentences from the Tatoeba website\footnote{\url{www.tatoeba.org}} to train our language detector. The model uses character ngrams of length 2, 3 and 4 as features, and a hierarchical softmax for efficiency. We evaluate our model on publicly available datasets from \newcite{baldwin2010language} and report results in Table~\ref{tab:langdect}. Our approach compares favorably to existing methods such as \texttt{langid.py}~\cite{lui2012langid}, while being much faster. This language detector will be released along with the other resources described in this article. After language identification, we only keep lines of more than 100 characters and with a high confidence score ($\ge 0.8$). \subsection{Deduplication and Tokenization} The second step of our pipeline is to remove duplicate lines from the data.
We used a very simple method for this, computing the hash of each line and removing lines with identical hashes (we used the default hash function of Java String objects). While this could potentially remove distinct lines that happen to share a hash, we observed very few collisions in practice (since each language is processed independently). Removing duplicates is important for the crawl data, since it contains large amounts of boilerplate, as previously noted by \newcite{buck2014ngram}. Overall, 37\% of the crawl data is removed by deduplication, while 21\% of the Wikipedia data is removed by this operation. The final step of our preprocessing is to tokenize the raw data. We used the Stanford word segmenter~\cite{chang2008optimizing} for Chinese, Mecab~\cite{kudo2005mecab} for Japanese and UETsegmenter~\cite{nguyen2016hybrid} for Vietnamese. For languages written using the Latin, Cyrillic, Hebrew or Greek scripts, we used the tokenizer from the Europarl preprocessing tools~\cite{koehn2005europarl}. For the remaining languages, we used the ICU tokenizer. We give statistics for the most common languages in Tables~\ref{tab:wikisize} and \ref{tab:crawlsize}. \begin{table}[t] \centering \begin{tabular}{lrr} \toprule Language & \# tokens & \# words \\ \midrule Russian & 102,825,040,945 & 14,679,750 \\ Japanese & 92,827,457,545 & 9,073,245 \\ Spanish & 72,493,240,645 & 10,614,696 \\ French & 68,358,270,953 & 12,488,607 \\ German & 65,648,657,780 & 19,767,924 \\ Italian & 36,237,951,419 & 10,404,913 \\ Portuguese & 35,841,247,814 & 8,370,569 \\ Chinese & 30,176,342,544 & 17,599,492 \\ Polish & 21,859,939,298 & 10,209,556 \\ Czech & 13,070,585,221 & 8,694,576 \\ Finnish & 6,059,887,126 & 9,782,381 \\ Hindi & 1,885,189,625 & 1,876,665 \\ \bottomrule \end{tabular} \caption{ Comparison across languages of the size of the datasets obtained using the Common Crawl. The last column indicates the vocabulary size of the models trained on this data.
} \label{tab:crawlsize} \end{table} \section{Models} \label{sec:model} In this section, we briefly describe the two methods that we compare to train our word vectors. \paragraph{Skipgram.} The first model that we consider is the skipgram model with subword information, introduced by \newcite{bojanowski2017enriching}. This model, available as part of the \texttt{fastText}\footnote{\url{https://fasttext.cc/}} software, is an extension of the skipgram model, where word representations are augmented using character ngrams. A vector representation is associated with each character ngram, and the vector representation of a word is obtained by taking the sum of the vectors of the character ngrams appearing in the word. The full word is always included as part of the character ngrams, so that the model still learns one vector for each word. We refer the reader to \newcite{bojanowski2017enriching} for a more thorough description of this model. \paragraph{CBOW.} The second model that we consider is an extension of the CBOW model~\cite{mikolov2013distributed}, with position weights and subword information. Similar to the model described in the previous paragraph, this model represents words as bags of character ngrams. The second difference from the original CBOW model is the addition of position dependent weights, in order to better capture positional information. In the CBOW model, the objective is to predict a given word $w_0$ based on the context words $w_{-n}, ..., w_{-1}, w_1, ..., w_{n}$. A vector representation $\mathbf{h}$ of this context is obtained by summing the corresponding word vectors: $$ \mathbf{h} = \sum_{\substack{i=-n \\ i \neq 0}}^n \mathbf{u}_{w_i} $$ Here, we propose to use the model with position weights introduced by \newcite{mnih2013learning}. Before taking the sum, each word vector is multiplied (element wise) by a position dependent vector.
More formally, the vector representation $\mathbf{h}$ of the context is obtained using: $$ \mathbf{h} = \sum_{\substack{i=-n \\ i \neq 0}}^n \mathbf{c}_i \odot \mathbf{u}_{w_i}, $$ where $\mathbf{c}_i$ are vectors corresponding to each position in the window, $\odot$ is the element-wise multiplication and $\mathbf{u}_{w_i}$ are the word vectors. We remind the reader that the word vectors $\mathbf{u}_{w_i}$ are themselves sums over the character ngrams. We refer the reader to \newcite{mikolov2017advances} for a study of the effect of deduplication and model variants (such as the position-weighted CBOW) on the quality of the word representations. \section{Evaluations} \label{sec:eval} In this work, we evaluate our word vectors on the word analogy task. Given a triplet of words \emph{A : B :: C}, the goal is to guess the word \emph{D} such that \emph{A : B} and \emph{C : D} share the same relation. An example of such an analogy question is \emph{Paris : France :: Berlin : ?}, where the corresponding answer is \emph{Germany}. Word vectors can be evaluated at this task by computing the expected representation of the answer word \emph{D}. Given word vectors $x_A$, $x_B$ and $x_C$, respectively for the words \emph{A}, \emph{B} and \emph{C}, the answer vector can be computed as $x_B - x_A + x_C$. For evaluation, the closest word vector in the dictionary (omitting the vectors $x_A$, $x_B$ and $x_C$) is retrieved and the corresponding word is returned. Performance is measured using the average accuracy over the whole dataset. \subsection{Evaluation Datasets} \label{sec:anal-data} Analogy datasets are composed of word 4-tuples, of the form \emph{Paris : France :: Rome : Italy}. Such datasets are usually composed of all the possible combinations of pairs such as \emph{Paris : France}, \emph{Berlin : Germany} or \emph{Beijing : China}.
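The evaluation procedure described above (answer vector $x_B - x_A + x_C$, nearest neighbor by cosine similarity with the three query words excluded) can be sketched in a few lines of numpy; the toy 2-dimensional vectors below are illustrative, not trained embeddings:

```python
import numpy as np

def answer_analogy(vecs, words, a, b, c):
    """Return the word D whose vector is closest (cosine) to x_b - x_a + x_c,
    excluding the query words A, B, C from the search."""
    idx = {w: i for i, w in enumerate(words)}
    target = vecs[idx[b]] - vecs[idx[a]] + vecs[idx[c]]
    # Cosine similarity of the target against the whole vocabulary
    sims = vecs @ target / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(target))
    for w in (a, b, c):
        sims[idx[w]] = -np.inf   # omit the query words
    return words[int(np.argmax(sims))]

# Toy vectors encoding a rough "capital-of" direction
words = ["paris", "france", "berlin", "germany"]
vecs = np.array([[1.0, 0.0], [1.0, 1.0], [0.0, 0.2], [0.0, 1.2]])
print(answer_analogy(vecs, words, "paris", "france", "berlin"))  # -> germany
```

Excluding the query words matters in practice: $x_C$ is often the nearest vector to the target, so leaving it in would artificially deflate the accuracy.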
In our evaluation, we use the dataset of~\newcite{svoboda2016new} for Czech, that of~\newcite{koper2015multilingual} for German, that of~\newcite{cardellino2016spanish} for Spanish, that of~\newcite{venekoski2017finnish} for Finnish, that of~\newcite{berardi2015word} for Italian, the European variant of the dataset proposed by~\newcite{hartmann2017portuguese} for Portuguese and that of~\newcite{chen2015joint} for Chinese. One of the contributions of this work is the introduction of word analogy datasets for French, Hindi and Polish. To build these datasets, we use the English analogies introduced by~\newcite{mikolov2013efficient} as a starting point. Most of the word pairs are directly translated, and we introduced some modifications, which are specific to each language. \paragraph{French.} We directly translated all the word pairs in the \verb+capital-common-countries+, \verb+capital-world+ and \verb+currency+ analogies. For \verb+family+ we translated most pairs, but got rid of ambiguous ones (singular and plural for \emph{fils}) or those that translate into nominal phrases. We replaced the \verb+city-in-state+ category by capitals of French \emph{d\'epartements}, removing those where either the \emph{d\'epartement} or capital name is a phrase. We also added a category named \verb+antonyms-adjectives+ composed of antonymous adjectives such as \emph{chaud} / \emph{froid} (hot / cold). For syntactic analogies, we translated word pairs in all categories, except for \verb+comparative+ and \verb+superlative+, which in French are trivial: for example \emph{fort}, \emph{plus fort}, \emph{le plus fort} (strong, stronger, strongest). When a word pair was ambiguous, we either removed it or replaced it with another one. Finally, we added a new \verb+past-participle+ category with pairs such as \emph{pouvoir} and \emph{pu}. In total, this dataset is composed of 31,688 questions.
\begin{table*}[t] \centering \begin{tabular}{l c cccccccccc c r} \toprule && \textsc{Cs} & \textsc{De} & \textsc{Es} & \textsc{Fi} & \textsc{Fr} & \textsc{Hi} & \textsc{It} & \textsc{Pl} & \textsc{Pt} & \textsc{Zh} && Average \\ \midrule Baseline && 63.1 & 61.0 & 57.4 & 35.9 & 64.2 & 10.6 & 56.3 & 53.4 & 54.0 & 60.2 && 51.0 \\ $n$-gram 5-5 && 57.7 & 61.8 & 57.5 & 39.4 & 65.9 & 8.3 & 57.2 & 54.5 & 54.8 & 59.3 && 50.9 \\ CBOW && 63.9 & 71.7 & 64.4 & 42.8 & 71.6 & 14.1 & 66.2 & 56.0 & 60.6 & 51.5 && 55.5 \\ +negatives && 64.8 & 73.7 & 65.0 & 45.0 & 73.5 & 14.5 & 68.0 & 58.3 & 62.9 & 56.0 && 57.4 \\ +epochs && 64.6 & 73.9 & 67.1 & 46.8 & 74.9 & 16.1 & 69.3 & 58.2 & 64.7 & 60.6 && 58.8 \\ \midrule Using Crawl && 69.9 & 72.9 & 65.4 & 70.3 & 73.6 & 32.1 & 69.8 & 67.9 & 66.7 & 78.4 && 66.7 \\ \bottomrule \end{tabular} \caption{ Performance of the various word vectors on the word analogy tasks. We restrict the vocabulary for the analogy tasks to the 200,000 most frequent words from the training data. } \label{tab:anal} \end{table*} \begin{table*}[t] \centering \begin{tabular}{l c ccccccccccc } \toprule && \textsc{Cs} & \textsc{De} & \textsc{Es} & \textsc{Fi} & \textsc{Fr} & \textsc{Hi} & \textsc{It} & \textsc{Pl} & \textsc{Pt} & \textsc{Zh} \\ \midrule Wikipedia && 76.9 & 79.1 & 93.9 & 94.6 & 88.1 & 70.8 & 80.9 & 69.5 & 79.2 & 100.0 \\ Common Crawl && 78.6 & 81.1 & 90.4 & 92.2 & 92.5 & 70.7 & 82.6 & 63.4 & 75.7 & 100.0 \\ \bottomrule \end{tabular} \caption{ Coverage of models trained on Wikipedia and Common Crawl on the word analogy tasks. } \label{tab:ana-cov} \end{table*} \paragraph{Hindi.} All the word pairs in the categories \verb+capital-common-countries+, \verb+capital-world+ and \verb+currency+ were translated directly. For the \verb+family+ category, most of the pairs were translated. However, we got rid of word pairs like stepbrother and stepsister which translate into two-word phrases. 
Also, word pairs which differ in the maternal or paternal origin of the relationship, like `d\=ad\=a - d\=ad\=\i' (paternal grandparents) and `n\=an\=a - n\=an\=\i' (maternal grandparents), were added. For the \verb+city-in-state+ category, city-state pairs from India were added, removing pairs in which the city or the state name is a phrase. We had to remove the \verb+adjective-to-adverb+, \verb+comparative+, \verb+superlative+, \verb+present-participle+ and \verb+past-tense+ categories as, in these cases, we are left with phrases rather than words. We also added a new category \verb+adjective-to-noun+, where an adjective is mapped to the corresponding abstract noun: for example `m\=\i\d{t}h\=a' (sweet) is mapped to `mi\d{t}h\=as' (sweetness). \paragraph{Polish.} As for the other languages, we translated all the word pairs in the \verb+capital-common-countries+, \verb+capital-world+, \verb+currency+ and \verb+family+ categories. For the \verb+city-in-state+ category, we used the capitals of Polish regions (\emph{wojew\'odztwo}). For the syntactic analogies, we translated word pairs in all categories except for \verb+plural-verbs+, which we replaced with \verb+verb-aspect+. One example is the pair \emph{iść} and \emph{chodzić}, which are both imperfective verbs, but the second one expresses an aimless motion. For the \verb+past-tense+ category, we use a mixture of perfective and imperfective aspects. Overall, by taking all possible combinations, we come up with 24,570 analogies. \subsection{Model Variants} \label{sec:model-var} In all our experiments, we compare our word vectors with the ones obtained by running the \texttt{fastText} skipgram model with default parameters -- we refer to this variant as ``Baseline''. Additionally, we perform an ablation study showing the importance of all design choices. We successively add features as follows: \begin{itemize} \item $n$-gram 5--5: getting word vectors with character $n$-grams of length 5 only. 
By default, the \texttt{fastText} library uses all character $n$-grams from length 3 to 6. One motivation for using fewer $n$-grams is that the corresponding models are much more efficient to learn. \item CBOW: using the model described in Sec.~\ref{sec:model} instead of the skipgram variant from~\newcite{bojanowski2017enriching}. \item +negatives: using more negative examples. By default, the \texttt{fastText} library samples 5 negative examples. Here, we propose to use 10 negatives. \item +epochs: using more epochs to train the models. By default, the \texttt{fastText} library trains models for 5 epochs. Here, we propose to train for 10 epochs. \item Using Crawl: instead of only training on Wikipedia, we also use the crawl data. For many languages, this corresponds to a large increase of the training data size. \end{itemize} \subsection{Results} \label{sec:anal} We evaluate all the model variants on word analogies in ten languages and report the accuracy in Table~\ref{tab:anal}. We restrict the vocabulary for the analogy tasks to the 200,000 most frequent words from the training data. Therefore, the models trained on Wikipedia and Wikipedia+Crawl do not share the exact same vocabulary (see coverage in Table~\ref{tab:ana-cov}). \paragraph{Influence of models and parameters.} We observe that, on average, all the modifications discussed in Section~\ref{sec:model-var} lead to improved accuracy on the word analogy tasks compared to the baseline \texttt{fastText} model. First, using character $n$-grams of size 5, instead of the default range of 3--6, does not significantly decrease the accuracy (except for Czech). However, using a smaller number of character $n$-grams leads to faster training, especially when using the CBOW model. Second, we note that using the CBOW model with position weights, described in Section~\ref{sec:model}, gives the biggest improvement overall. 
Finally, using more negative examples and more epochs, while making the models slower to train, also leads to significant improvements in accuracy. \paragraph{Influence of training data.} One of the contributions of this work is to train word vectors in multiple languages on large-scale noisy data from the web. We now compare the quality of the obtained models to the ones trained on Wikipedia data. Unsurprisingly, we observe that for high-resource languages, such as German, Spanish or French, using the crawl data does not increase (or even slightly decreases) the accuracy. This is partly explained by the domain of the analogy datasets, which corresponds well to Wikipedia. However, it is important to keep in mind that the models trained on the crawl data have a larger coverage, and might perform better on other domains. Second, we observe that for languages with a small Wikipedia, such as Finnish or Hindi, using the crawl data leads to large improvements in performance: +23.5 for Finnish, +9.7 for Polish, +16.0 for Hindi and +17.8 for Chinese. \section{Conclusion} In this work, we contribute word vectors trained on Wikipedia and the Common Crawl, as well as three new analogy datasets to evaluate these models, and a fast language identifier which can recognize 176 languages. We study the effect of various hyperparameters on the performance of the trained models, showing how to obtain high-quality word vectors. We also show that using the Common Crawl data, while noisy, can lead to models with larger coverage, and to better models for languages with a small Wikipedia. Finally, we observe that for low-resource languages, such as Hindi, the quality of the obtained word vectors is much lower than for other languages. As future work, we would like to explore more techniques to improve the quality of models for such languages. \section{Bibliographical References} \label{main:ref} \bibliographystyle{lrec}
\section{Introduction} With the advent of the Internet of Things (IoT), next-generation military networks will rely more on machine intelligence and the information collected from the densely deployed IoT devices \cite{IoBT,IoBTtwo,IoBTthree}. The integration of military networks with the various IoT devices will potentially achieve battlefield autonomy and considerably increase the efficiency of battlefield operations, thus forming the so-called \emph{Internet of Battlefield Things (IoBT)}~\cite{IoBTone}. However, due to its adversarial nature, the IoBT is prone to a multitude of security attacks. One important attack on the IoBT is the misinformation attack \cite{IoBT}, in which an adversary injects false information at each IoBT device. Such misinformation can then be used by the adversary to manipulate the decisions of the military commanders, in an effort to jeopardize the success of the military mission. Thus, realizing the vision of a large-scale IoBT is largely contingent on developing novel security mechanisms to combat misinformation propagation across the various IoBT nodes. The dynamics of misinformation propagation have been recently modeled using epidemic models for social networks in \cite{social} and for mobile opportunistic networks such as those encountered in the battlefield in \cite{mobility}. Epidemic models are suitable for IoBT misinformation propagation due to the presence of strong interactions among the densely deployed IoBT devices. This dense nature of the IoBT implies that an IoBT device can get easily infected with misinformation whenever it communicates with any one of its infected neighbors. Further, epidemic models can capture systems with an infinite number of nodes, which is suitable for large-scale IoBT systems. 
Many existing works have considered the problem of controlling the spread of network epidemics and studied the interaction between the network and the adversary using game-theoretic approaches \cite{epidemicgame2, epidemicgame3, epidemicgame7, epidemicgame1, epidemicgame0, epidemicgame4, epidemicgame5,epidemicgame6, epidemicgame9, mean-fieldSIR}. In \cite{epidemicgame2, epidemicgame3, epidemicgame7}, a noncooperative game is considered in which the players are the network nodes whose goal is to choose a curing rate that minimizes the protection cost as well as the infection costs at steady-state. In \cite{epidemicgame1}, several noncooperative games are proposed for network epidemic control between a network operator and an attacker with the goal of minimizing the infection cost. A zero-sum differential game is proposed in \cite{epidemicgame0} and \cite{epidemicgame4} for network malware propagation in which the network operator controls the recovery rates of the sensor nodes whereas an attacker chooses the infection rate that maximizes the infection cost. The work in \cite{epidemicgame0} particularly considers a wireless sensor network in which the network operator controls the sleep rate of the sensor nodes in addition to the recovery rate in order to limit the spread of infections. The authors in \cite{epidemicgame5} propose a network formation game in which the network nodes choose to construct links starting from an empty network in order to reach a connected, steady-state network while minimizing the costs of infection. In \cite{epidemicgame6} and \cite{epidemicgame9}, the problem of controlling the network epidemic through vaccination is formulated as a zero-determinant game where both the network administrator and the nodes are the players. In \cite{mean-fieldSIR}, a mean-field game is proposed to study infection spread in a fully connected regular network. 
However, most of this prior art \cite{epidemicgame2, epidemicgame3, epidemicgame7, epidemicgame1, epidemicgame0, epidemicgame4, epidemicgame5,epidemicgame6, epidemicgame9, mean-fieldSIR} models the network as either a fully connected graph or as a $k$-regular graph. In an IoBT, however, the nodes have heterogeneous connectivity, and, thus, there is a need to consider more suitable graph models that account for the IoBT heterogeneity. Also, considering the network operator as the sole network player as done in \cite{epidemicgame1, epidemicgame0, epidemicgame4} is not suitable for the IoBT since it requires centralized control over all of the IoBT nodes, and therefore incurs significant time and control overheads, which are not tolerable in time-sensitive military missions. Hence, distributed approaches are more favorable for the IoBT since the nodes must instantaneously take control to limit misinformation propagation. Moreover, existing works, such as \cite{epidemicgame2} and \cite{epidemicgame3}, that consider the network nodes as the players typically seek to maximize the payoff when the system is at the steady state. Such approaches are not suitable for the problem of misinformation propagation in the IoBT. This is due to the fact that information propagation in the IoBT is time sensitive. Thus, in order to maintain the successful operation of the IoBT, it is critical to limit the spread of misinformation at each time instant and not only at the steady state. In addition, choosing only the curing rate, as done by most of the existing works \cite{epidemicgame2, epidemicgame3, epidemicgame7, epidemicgame1, epidemicgame0, epidemicgame4, epidemicgame5,epidemicgame6, epidemicgame9, mean-fieldSIR}, is not adequate to instantaneously limit the spread of misinformation. In fact, curing the nodes comes at the expense of security costs. Thus, there is a need to implement cost-efficient actions that can effectively limit the spread of misinformation. 
Here, it is also worth noting that, recently, a number of works \cite{MIoT1, MIoT2, MIoT3, MIoT4} have studied various IoBT security scenarios; however, these works do not analyze the critical problem of misinformation spread. The main contribution of this paper is a novel, comprehensive framework for thwarting the spread of misinformation in a large-scale IoBT system. In particular, the proposed framework will yield the following key contributions: \begin{itemize} \item We propose a novel approach to control misinformation propagation in the IoBT. In particular, we propose a distributed approach in which each IoBT node decides whether or not to accept the received information at each time instant, in order to limit the propagation of misinformation. Thus, our proposed approach, due to its distributed nature, is scalable for a large-scale system such as the IoBT. Further, due to the heterogeneity of the IoBT nodes in terms of connectivity, we model the IoBT as a random graph in which the nodes have heterogeneous degrees that follow a predetermined distribution. \item We consider an epidemic model suitable for IoBT misinformation propagation that accounts for the heterogeneous characteristics of the IoBT nodes to effectively identify misinformation. In particular, we consider an SELI epidemic model for the IoBT in which, in addition to the conventional susceptible ($S$) and infected ($I$) states of the nodes, we introduce latent ($L$) and exposed ($E$) states to capture scenarios in which the IoBT nodes choose to perform further processing to check the validity of any received information. \item We formulate the IoBT misinformation propagation problem as a finite-state mean-field game \cite{finitemean-field} with multiclass agents \cite{multiclass} whose players are the IoBT nodes, each of which seeks to determine its probability of accepting the received information. 
Mean-field games \cite{meanw} are suitable for our problem since they handle an infinite number of players, which is the case for a large-scale IoBT. Further, the proposed framework of mean-field games with \emph{multiclass agents} can capture the presence of several types of populations where agents belonging to the same type have similar characteristics. Thus, such games \cite{multiclass} are suitable to model the heterogeneous characteristics of the IoBT nodes, unlike conventional mean-field games \cite{mean-fieldSIR} that assume all players to be similar. To the best of our knowledge, there is no prior work that combines mean-field games with multiclass agents, as proposed here. \item We incorporate a suitable metric for misinformation propagation, known as the quality-of-information (QoI), into the IoBT nodes' payoff in addition to the infection cost. The QoI of each node is defined as a function of the information received from its neighbors, the integrity of the received information and the age of information. \item We extend the definition of the finite state mean-field equilibrium (MFE) defined in \cite{finitemean-field} to the case of multiclass agents and propose an algorithm based on the forward-backward sweep method \cite{FBSM} to find the MFE. Then, we analyze the case of a finite IoBT game and prove the convergence of the equilibrium of the finite game to the MFE. This result, in turn, shows that the MFE is an effective approximation to a real-world IoBT system with a large number of players. \item Numerical results show that the proposed scheme can achieve a 1.2-fold increase in the QoI compared to a baseline scheme in which the IoBT nodes are always transmitting. Further, the proposed scheme can reduce the proportion of infected nodes by $99\%$ compared to the baseline. \end{itemize} \vspace{-1 cm} The rest of the paper is organized as follows: Section II presents the system model. 
Section III presents the IoBT mean-field game. Section IV presents the finite IoBT game and the convergence conditions of the finite game to the mean-field game. Section V presents the simulation results. Finally, conclusions are drawn in Section VI. \section{System Model} \vspace{-0.1 cm} \subsection{Epidemic Model} Consider an IoBT network modeled by a random graph whose node degrees are distributed according to a distribution $P$, with $P(k)$ being the probability that an IoBT node has a degree $k$. We let $K_{\max}$ be the maximum degree in the IoBT system. The random graph is a realistic model for a large-scale IoBT due to the heterogeneity in the connectivity of the IoBT nodes. In fact, the IoBT network comprises IoBT devices that are locally connected to a cluster head, and sinks/fusion centers that are connected to a multitude of cluster heads and various IoBT devices. Further, by properly choosing the degree distribution $P$, the random graph can represent a tree-like structure, which commonly represents the topology of military networks \cite{treelike}. In the considered IoBT graph, nodes having degree $k$ are classified into types within a set $\mathcal{H}_k$ and distributed according to $p_k$, where $p_k(i)$ is the probability that a node of degree $k$ has type $i \in \mathcal{H}_k$. The existence of multiple types of nodes having a degree $k$ stems from the heterogeneity of the IoBT nodes that include simple sensors, wearables, vehicles, cameras, and robots or drones that have different capabilities, characteristics, and roles. In this IoBT, an attacker seeks to inject false information into the nodes in order to disrupt the normal operations of the system. Let $\lambda_{ik}$ be the rate of injection of false information into a node of degree $k$ and type $i$. At any given time instant $t$, each IoBT node chooses to either accept the received information and then transmit it or to doubt the integrity of this received information. 
An IoBT node becomes infected once it accepts false information. When the IoBT node chooses to doubt the information, it retains the information for some time for further inspection. For instance, it can potentially run a classification machine learning algorithm to decide whether to forward or discard the stored information. Finally, an infected node no longer uses the misinformation when it becomes obsolete and, hence, this node goes back to being susceptible to attacks. Thus, each one of the IoBT nodes can be in one of the following states: \begin{itemize} \item \emph{Susceptible (S)}: A node is said to be susceptible when it does not contain misinformation but, simultaneously, it does not have strong security mechanisms to identify misinformation. Hence, it can get infected with misinformation either when it accepts information forwarded from an infected node or when the attacker succeeds in injecting misinformation directly into the designated susceptible node. \item \emph{Exposed (E)}: A node is said to be exposed when it receives misinformation from its neighbour. Yet, it is doubtful about the credibility of the received information, and, hence, it does not immediately forward the information to its neighbour. \item \emph{Latent (L)}: A node is said to be latent when it receives true information. However, it still decides to do further processing to inspect the received information. \item \emph{Infected (I)}: A node is said to be infected when it contains misinformation which it believes to be correct, and, subsequently, it forwards the misinformation to its neighbour. \end{itemize} \vspace{-0.3 cm} The SEI model and its variants have been commonly adopted as a realistic model to analyze misinformation propagation in networks (see, e.g., \cite{social, mobility}). However, existing models do not consider the case when the information which a node doubts is true, which is different from the case when the node doubts misinformation. 
This is due to the fact that the probability of accepting information after processing generally depends on whether the information is true or not. Thus, we introduce the latent (\emph{L}) state to represent the case in which the IoBT nodes analyze true information. Further, existing models such as those in \cite{mobility} do not explicitly take into account the delay incurred when a node decides to analyze its received information. In contrast, in our model, we account for the processing delay in the exposed and latent states through the probabilities of residing in states $E$ and $L$, respectively. \begin{figure} \begin{center} \begin{tikzpicture}[->, >=stealth', auto, semithick, node distance=4.5 cm] \tikzstyle{every state}=[fill=white,draw=black,thick,text=black,scale=1] \node[state] (A) {$S$}; \node[state] (B)[above right of=A] {$E$}; \node[state] (C)[below right of=A] {$L$}; \node[state] (D)[below right of=B] {$I$}; \path (A) edge[bend left] node{$\sigma_{ik}(t)R_{ik}(\Theta(t))$} (B) edge [bend left, above] node{$\hspace{0.4 cm}\alpha_{ik}(t)R_{ik}(\Theta(t))$} (D) edge node{$\sigma_{ik}(t)L_{ik}(\Theta(t))$} (C) (B) edge[bend left] node{$\beta^E_{ik}(1-\delta_{ik})$} (D) edge [above] node{$\hspace{2.8 cm} \gamma^E_{ik}(1-\delta_{ik})$} (A) (C) edge[bend left] node{$(\beta^L_{ik}+\gamma^L_{ik})(1-\delta_{ik})$} (A) (D) edge node{$\nu_{ik}$} (A) ; \end{tikzpicture} \end{center} \caption{State transition diagram of an IoBT node of degree $k$ and type $i$.}\label{StateTrans} \vspace{-0.2 cm} \end{figure} Fig. \ref{StateTrans} shows the state transition diagram of each IoBT node of degree $k$ and type $i$. When an IoBT node of degree $k$ and type $i$ is susceptible and receives information, it will either accept the information with probability $\alpha_{ik}(t)$, becoming infected if the information is false, or it will doubt the information with probability $\sigma_{ik}(t)=1-\alpha_{ik}(t)$. The IoBT node receives misinformation either from infected neighbours or directly from the attacker. 
Based on \cite{hetero}, the probability with which a node with degree $k$ is infected by one of its neighbours is $k\Theta(t)$, where $\Theta(t)$ is the probability that a randomly chosen link is pointing to an infected node and is given by \begin{equation} \Theta(t)=\frac{\sum_k kP(k)\sum_{i \in \mathcal{H}_k}p_k(i) I_{ik}(t)}{\langle k\rangle}, \label{theta} \end{equation} where $\langle k\rangle=\sum_{k}kP(k)$ and $I_{ik}(t)$ is the proportion of infected devices of degree $k$ and type $i$. Thus, since the attacker directly infects the node with rate $\lambda_{ik}$, the total infection rate is $\alpha_{ik}(t)R_{ik}(\Theta(t))$, where $R_{ik}(\Theta(t))=\lambda_{ik}+k\Theta(t)$. Similarly, when the IoBT node is susceptible and receives true information, it becomes latent with probability $\sigma_{ik}(t)L_{ik}(\Theta(t))$ or remains susceptible with probability $\alpha_{ik}(t)L_{ik}(\Theta(t))$, where $L_{ik}(\Theta(t))=(1-\lambda_{ik})(1-\Theta(t))^k$ is the probability that an IoBT node does not receive misinformation at time $t$. When an IoBT node is in the exposed state, it remains in this state with probability $\delta_{ik}$. The probability $\delta_{ik}$ corresponds to the delay incurred to judge the credibility of the obtained information. Then, the node will accept the information with probability $\beta^E_{ik}$ and become infected, or it will refuse the misinformation and return to the susceptible state with probability $\gamma^E_{ik}$. Similarly, when the node is in the latent state, it remains in this state in order to process the information with probability $\delta_{ik}$; it then accepts the information with probability $\beta^L_{ik}$ or rejects it with probability $\gamma^L_{ik}$. When in the latent state, the IoBT node will return to the susceptible state whether it decides to accept or reject the information (i.e., with probability $\beta^L_{ik}+\gamma^L_{ik}$). 
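As an illustration of (\ref{theta}) and the derived rates, the quantities $\Theta(t)$, $R_{ik}$, and $L_{ik}$ can be computed directly from the degree distribution. The following sketch uses a purely illustrative two-degree network (low-degree "sensors" and high-degree "cluster heads"); all names and parameter values are assumptions for the example, not values from the paper.

```python
def theta(P, p, I):
    """Probability that a randomly chosen link points to an infected node:
    Theta = sum_k k P(k) sum_i p_k(i) I_ik / <k>.
    P: dict degree -> probability; p: dict degree -> {type: probability};
    I: dict (type, degree) -> proportion infected."""
    k_mean = sum(k * Pk for k, Pk in P.items())
    num = sum(k * Pk * sum(p[k][i] * I[(i, k)] for i in p[k])
              for k, Pk in P.items())
    return num / k_mean

# Illustrative two-degree network: degree-2 sensors, degree-10 cluster heads.
P = {2: 0.9, 10: 0.1}
p = {2: {"sensor": 1.0}, 10: {"head": 1.0}}
I = {("sensor", 2): 0.10, ("head", 10): 0.50}

Th = theta(P, p, I)
lam = 0.05                       # attacker injection rate (illustrative)
R = lam + 2 * Th                 # total misinformation arrival rate, degree-2 node
L = (1 - lam) * (1 - Th) ** 2    # prob. a degree-2 node sees no misinformation
```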
The probabilities $\delta_{ik}$, $\beta^E_{ik}$, $\gamma^E_{ik}$, $\beta^L_{ik}$ and $\gamma^L_{ik}$ depend on the node's capabilities (for example, the strength of the machine learning algorithm used) and its effectiveness in identifying misinformation. Finally, an IoBT node of degree $k$ and type $i$ discards the misinformation, when it is no longer useful after some time, with probability $\nu_{ik}$. Let $m^S_{ik}(t)$, $m^{E}_{ik}(t)$, $m^{L}_{ik}(t)$, and $m^{I}_{ik}(t)$ be the proportions of IoBT nodes of degree $k$ and type $i$ in states $\emph{S}$, $\emph{E}$, $\emph{L}$, and $\emph{I}$, respectively. Since IoBT networks typically have a massive number of devices, we consider a mean-field epidemic model. Mean-field models provide a simple yet effective representation of large and complex interacting systems of agents. They mainly study decision-making processes in which the number of agents tends to infinity and in which the dynamics of agents with similar characteristics can be described by an aggregate behavior. Further, since the number of agents tends to infinity in mean-field models, the influence of a single agent on the overall system is negligible, while the aggregate effect of all agents is considerable and is approximated by their average effect. 
Thus, at the mean-field level, the state dynamics of IoBT nodes of degree $k$ and type $i$ are governed by the following Kolmogorov differential equations: \vspace{-0.6 cm} \small \begin{eqnarray} &&\hspace{-1.3 cm}\frac{\partial m^{S}_{ik}(t)}{\partial t}=-\big(R_{ik}(\Theta(t))+(1-\bar{\alpha}_{ik}(t))L_{ik}(\Theta(t))\big)m^{S}_{ik}(t)+(1-\delta_{ik})\gamma^E_{ik}m^E_{ik}+(1-\delta_{ik})m^L_{ik}+\nu_{ik}m^I_{ik},\label{ms} \end{eqnarray} \begin{eqnarray} &&\hspace{-7 cm}\frac{\partial m^{E}_{ik}(t)}{\partial t}=-(1-\delta_{ik})m^{E}_{ik}+(1-\bar{\alpha}_{ik}(t))R_{ik}(\Theta(t))m^{S}_{ik}(t),\label{me} \end{eqnarray} \vspace{-0.6 cm} \begin{eqnarray} &&\hspace{-7 cm}\frac{\partial m^{L}_{ik}(t)}{\partial t}=-(1-\delta_{ik})m^{L}_{ik}+(1-\bar{\alpha}_{ik}(t))L_{ik}(\Theta(t))m^{S}_{ik}(t),\label{ml} \end{eqnarray} \begin{eqnarray} &&\hspace{-5.8 cm}\frac{\partial m^{I}_{ik}(t)}{\partial t}= \bar{\alpha}_{ik}(t) R_{ik}(\Theta(t))m^{S}_{ik}(t)+(1-\delta_{ik})\beta^E_{ik}m^E_{ik}-\nu_{ik}m^I_{ik}(t), \label{mI} \end{eqnarray} \normalsize where $\bar{\alpha}_{ik}(t)$ is the aggregate rate of accepting information for all nodes with degree $k$ and type $i$ when in the susceptible state. At time $0$, all the nodes are susceptible, and, thus, $m^{S}_{ik}(0)=1$ and $m^I_{ik}(0)=m^{E}_{ik}(0)=m^{L}_{ik}(0)=0, \hspace{0.2 cm} \forall i,k.$ Each IoBT node of degree $k$ and type $i$ seeks to determine the probability $\alpha_{ik}(t)$ of accepting the received information that minimizes its cost. Let $x^{S}_{ik}(t)$, $x^{E}_{ik}(t)$, $x^{L}_{ik}(t)$, and $x^I_{ik}(t)$ be the probabilities that the node of degree $k$ and type $i$ is in the $S$, $E$, $L$, and $I$ states, respectively. 
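As a sanity check on the mean-field dynamics above, they can be integrated numerically. The sketch below uses forward Euler for a single class of degree-$k$ nodes, in which case $\Theta(t)$ reduces to $m^I_{ik}(t)$, and assumes $\beta^E_{ik}+\gamma^E_{ik}=1$ so that the four proportions remain a probability distribution; all parameter values are illustrative, not taken from the paper.

```python
def simulate_seli(k=4, lam=0.05, alpha=0.3, delta=0.6, betaE=0.2,
                  nu=0.1, T=50.0, dt=0.01):
    """Forward-Euler integration of the mean-field SELI dynamics for a
    single class of degree-k nodes, so Theta(t) = m_I(t).
    Assumes betaE + gammaE = 1; all parameters are illustrative."""
    gammaE = 1.0 - betaE
    mS, mE, mL, mI = 1.0, 0.0, 0.0, 0.0       # all nodes start susceptible
    for _ in range(int(T / dt)):
        theta = mI                             # single-class network
        R = lam + k * theta                    # misinformation arrival rate
        Lp = (1 - lam) * (1 - theta) ** k      # prob. of no misinformation
        dS = -(R + (1 - alpha) * Lp) * mS + (1 - delta) * gammaE * mE \
             + (1 - delta) * mL + nu * mI
        dE = -(1 - delta) * mE + (1 - alpha) * R * mS
        dL = -(1 - delta) * mL + (1 - alpha) * Lp * mS
        dI = alpha * R * mS + (1 - delta) * betaE * mE - nu * mI
        mS += dt * dS; mE += dt * dE; mL += dt * dL; mI += dt * dI
    return mS, mE, mL, mI
```

The derivatives sum to zero by construction, so the total mass is conserved along the trajectory up to floating-point error.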
Based on the epidemic dynamics in (\ref{ms})-(\ref{mI}), the state $\boldsymbol{x}_{ik}(t)=(x^{S}_{ik}(t),x^{E}_{ik}(t),x^L_{ik}(t), x^I_{ik}(t))$ of each node is governed by the following differential equations: \vspace{-0.4 cm} \small \begin{eqnarray} &&\hspace{-2cm}\frac{\partial x^{S}_{ik}(t)}{\partial t}=-\big(R_{ik}(\Theta(t))+(1-\alpha_{ik}(t))L_{ik}(\Theta(t))\big)x^{S}_{ik}(t)+(1-\delta_{ik})\gamma^E_{ik}x^E_{ik}+(1-\delta_{ik})x^L_{ik}+\nu_{ik}x^I_{ik},\label{xs}\\ &&\hspace{-2 cm}\frac{\partial x^{E}_{ik}(t)}{\partial t}=-(1-\delta_{ik})x^{E}_{ik}+(1-\alpha_{ik}(t))R_{ik}(\Theta(t))x^{S}_{ik}(t),\label{xE}\\ &&\hspace{-2 cm}\frac{\partial x^{L}_{ik}(t)}{\partial t}=-(1-\delta_{ik})x^{L}_{ik}+(1-\alpha_{ik}(t))L_{ik}(\Theta(t))x^{S}_{ik}(t),\label{xL}\\ &&\hspace{-2 cm}\frac{\partial x^{I}_{ik}(t)}{\partial t}= \alpha_{ik}(t) R_{ik}(\Theta(t))x^{S}_{ik}(t)+(1-\delta_{ik})\beta^E_{ik}x^E_{ik}-\nu_{ik}x^I_{ik}(t).\label{xI} \end{eqnarray} \normalsize \vspace{-0.5 cm} \subsection{Cost Functions} The payoff of each IoBT node is expressed in terms of the \emph{quality of its information} and the cost of infection. The QoI is a metric that has been widely used to assess the information generated by sensor networks in general, and by military networks in particular, as discussed in \cite{qqq} and \cite{QoI2}. It is mainly a function of the precision of the sensing device, the integrity, and the age of information. Further, in \cite{QoC}, the authors consider the QoI of the information received by each node as an increasing function of the number of its neighbors. However, the integrity of the information is not considered. In contrast, in our IoBT problem, we define the QoI $Q_{ik}(t)$ to be a joint function of the degree of the node, the information integrity, and the age of information. 
Consequently, the QoI of a given IoBT node of degree $k$ and type $i$ will be given by \begin{equation} Q_{ik}(t)=V_{ik}(t)-\kappa\delta_{ik}, \end{equation} where $\delta_{ik}$ is the delay of processing the information when in the latent or exposed state, $\kappa$ is a normalization constant, and $V_{ik}(t)$ is an increasing linear function of the number of transmitting noninfected links and a decreasing linear function of the number of infected links if the node accepts the information, with $V_{ik}(t)=0$ otherwise. Thus, the function $V_{ik}(t)$ captures the integrity of the information generated at each node by accounting for infected links. When the node is not infected, it is transmitting only when in the $\emph{S}$ state. For a node with degree $k$ and type $i$, let $n_1$ and $n_2$ be the number of links pointing to a node in states $\emph{I}$ and $\emph{S}$, respectively. The function $V_{ik}(t)$ also captures the integrity of the information generated by the node itself. Thus, this function is given by $V_{ik}(t)=n_2-n_1-y_{ik}(t)+(1-y_{ik}(t))$, where $y_{ik}(t)$ indicates whether the attacker successfully injects false information. Next, we define $\eta(t)$ as the probability that a randomly chosen link is pointing to a susceptible node: \begin{equation} \eta(t)=\frac{\sum_k kP(k)\sum_{i \in \mathcal{H}_k}p_k(i) m^{S}_{ik}(t)}{\langle k\rangle}. \label{etta} \end{equation} Thus, the numbers of links pointing to nodes in state \emph{I}, in state \emph{S}, and in either state \emph{L} or \emph{E} follow a multinomial distribution with parameters $\Theta(t)$, $\eta(t)$, and $1-\Theta(t)-\eta(t)$. Consequently, when the node is susceptible and accepts true information, the expected QoI is $\bar{V}^T_{ik}(\eta(t))=k\eta(t)+1$. 
Otherwise, when a susceptible node accepts misinformation, the expected value of $V_{ik}(t)$ given that the node receives misinformation and $\boldsymbol{\gamma}(t)=(\Theta(t),\eta(t))$ will be \vspace{-0.4 cm} \begin{eqnarray} &&\hspace{-0.9 cm}\bar{V}^{M}_{ik}(\boldsymbol{\gamma}(t))=\frac{F^M_{ik}(\boldsymbol{\gamma}(t))}{1-L_{ik}(\Theta(t))}, \label{qoIM} \end{eqnarray} where \begin{eqnarray} &&\hspace{-0.9 cm}F^{M}_{ik}(\boldsymbol{\gamma}(t))=\sum_{n_1=0}^k\sum_{n_2=0}^{k-n_1} \big(\lambda_{ik} (n_2-n_1-1)+(1-\lambda_{ik}) (n_2-n_1)\big)\frac{k!}{n_1!n_2!(k-n_1-n_2)!}\Theta(t)^{n_1}\eta(t)^{n_2}(1-\Theta(t)-\eta(t))^{k-n_1-n_2}\nonumber\\ &&\hspace{1.3 cm}-(1-\lambda_{ik})\sum_{n_2=0}^{k}n_2\binom{k}{n_2}\eta(t)^{n_2}(1-\eta(t))^{k-n_2} \nonumber\\ &&\hspace{0.7 cm}=k\eta(t)-k\Theta(t)-\lambda_{ik}-(1-\lambda_{ik})k\eta(t).\label{FMsum} \end{eqnarray} \normalsize Whenever a given IoBT node is in the susceptible state and suspects its information to have been modified by the attacker, this node becomes exposed, and the QoI in this case will be $\bar{V}^M_{ik}(\boldsymbol{\gamma}(t))-\kappa\delta_{ik}$ if it accepts the information, which happens with probability $\beta^E_{ik}$, where $\bar{V}^M_{ik}$ is given by (\ref{qoIM}). Otherwise, if it does not accept the information, the QoI will be $0$. When the node is in the susceptible state and doubts information which is true, the QoI is $k\eta(t)+1-\kappa\delta_{ik}$ if it accepts the information with probability $\beta^L_{ik}$. 
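The closed form $k\eta(t)-k\Theta(t)-\lambda_{ik}-(1-\lambda_{ik})k\eta(t)$ can be checked by evaluating the defining sums directly. The sketch below is one reading of the expression as typeset: a multinomial expectation over the link states plus a binomial correction in $n_2$; the parameter values in the assertions are arbitrary test points.

```python
from math import comb

def f_m_sum(k, theta, eta, lam):
    """Direct evaluation of the sums defining F^M: a multinomial expectation
    over (n1, n2) link counts, minus a binomial correction term."""
    total = 0.0
    # Multinomial part: links split among infected (n1), susceptible (n2), rest.
    for n1 in range(k + 1):
        for n2 in range(k - n1 + 1):
            w = (comb(k, n1) * comb(k - n1, n2)
                 * theta**n1 * eta**n2 * (1 - theta - eta)**(k - n1 - n2))
            total += w * (lam * (n2 - n1 - 1) + (1 - lam) * (n2 - n1))
    # Binomial correction: subtract (1-lam) * E[n2] with n2 ~ Bin(k, eta).
    total -= (1 - lam) * k * eta
    return total

def f_m_closed(k, theta, eta, lam):
    """Stated closed form of F^M."""
    return k * eta - k * theta - lam - (1 - lam) * k * eta
```

The agreement is exact because the double sum reduces to $\mathbb{E}[n_2-n_1]-\lambda_{ik}=k\eta-k\Theta-\lambda_{ik}$ under the multinomial distribution.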
Thus, the expected QoI given that the node is in the susceptible state will, with respect to $\alpha_{ik}(t)$, be given by \vspace{-0.5 cm} \begin{eqnarray} \mathbb{E}_{\alpha_{ik}(t)}[Q_{ik}(t)]&=&\alpha_{ik}(t)L_{ik}(\Theta(t))(k\eta(t)+1)+\alpha_{ik}(t)(k\eta(t)-k\Theta(t)-\lambda_{ik}-(1-\lambda_{ik})k\eta(t))\nonumber\\ &&\hspace{-0.2 cm}+\sigma_{ik}(t)L_{ik}(\Theta(t))\beta^L_{ik}(k\eta(t)+1-\kappa \delta_{ik})\nonumber\\ &&\hspace{-0.2 cm}+\sigma_{ik}(t)\beta^E_{ik}\big(k\eta(t)-k\Theta(t)-\lambda_{ik}-(1-\lambda_{ik})k\eta(t)-\kappa \delta_{ik}(1-L_{ik}(\Theta(t)))\big). \end{eqnarray} \normalsize For a node of degree $k$ and type $i$, the cost $c_{ik}$ of infection depends on its importance and functionality in the IoBT. For example, the cost of infection of a fusion center will be higher than that of a cluster head, and the cost of infection of a drone is higher than that of a simple sensor. When the node is in the $\emph{S}$ state, the cost is expressed as the square of the difference between the expected QoI and a target value $Q_T$. Thus, the cost of a node of degree $k$ and type $i$ at time $t$ will be \vspace{-0.3 cm} \begin{eqnarray} &&\hspace{-0.6 cm}v_{ik}(\boldsymbol{x}_{ik},\alpha_{ik}(t), \boldsymbol{\gamma}(t))=x^{S}_{ik}(t)(\mathbb{E}_{\alpha_{ik}(t)}[\bar{Q}_{ik}(\boldsymbol{\gamma}(t))]-Q_T)^2+x^I_{ik}(t)c_{ik}. \label{instcost} \end{eqnarray} \normalsize The objective of each IoBT node of degree $k$ and type $i$ is to minimize its cost over $[0,T]$, i.e., each IoBT node will seek to solve the following optimization problem: \begin{eqnarray} &&\hspace{-1 cm}\min_{\alpha_{ik}(t)}\int_{0}^Tv_{ik}(\boldsymbol{x}_{ik},\alpha_{ik}(t), \boldsymbol{\gamma}(t))\,dt \hspace{0.2 cm} \text{s.t.} \hspace{0.2 cm} {\alpha}_{ik}(t) \in [0,1], \label{payoff} \end{eqnarray} \normalsize subject to the state constraints in (\ref{xs})-(\ref{xI}) and with $v_{ik}(T)=0$. 
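Note that the expected QoI above is affine in $\alpha_{ik}(t)$, so for a frozen mean field the susceptible-state running cost $(\mathbb{E}_{\alpha_{ik}(t)}[Q_{ik}(t)]-Q_T)^2$ is minimized by the clipped root of a linear equation. The following sketch illustrates this pointwise minimization only; it deliberately ignores the coupling of $\alpha_{ik}(t)$ to the future state trajectory, which the full game accounts for, and all numerical values are illustrative.

```python
def best_alpha(k, theta, eta, lam, delta, betaE, betaL, kappa, QT):
    """Minimize (E_alpha[Q] - QT)^2 over alpha in [0, 1] for frozen
    (theta, eta). E_alpha[Q] = alpha*accept + (1-alpha)*doubt is affine,
    so the minimizer is the clipped root of a linear equation."""
    Lp = (1 - lam) * (1 - theta) ** k
    FM = k * eta - k * theta - lam - (1 - lam) * k * eta
    accept = Lp * (k * eta + 1) + FM                      # expected QoI if accepting
    doubt = Lp * betaL * (k * eta + 1 - kappa * delta) \
            + betaE * (FM - kappa * delta * (1 - Lp))     # expected QoI if doubting
    if accept == doubt:
        return 0.0                                        # cost independent of alpha
    alpha = (QT - doubt) / (accept - doubt)
    return min(1.0, max(0.0, alpha))
```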
As shown in (\ref{xs})-(\ref{xI}) and (\ref{instcost}), the state evolution of each IoBT node as well as its cost function depend on $\Theta(t)$ and $\eta(t)$ and therefore on the mean-field vector $\boldsymbol{m}_{ik}(t)=(m^l_{ik}(t))_{l \in \mathcal{S}}$ for all $(i,k)$. Further, the dynamics of the IoBT nodes and their cost functions depend on their degree and their type. Thus, the problem is formulated using game theory \cite{gametheory,gametheory1}. In particular, we use a finite-state mean-field game \cite{finitemean-field} with multiclass agents \cite{multiclass}, as explained next. \vspace{-0.1 cm} \section{IoBT Mean-Field Game with Multiclass Agents} \vspace{-0.1 cm} \subsection{Game Formulation} Our problem is formulated as a finite-state mean-field game \cite{finitemean-field} with multiclass agents \cite{multiclass} in which the players are the IoBT nodes, and each IoBT node can be in a state belonging to the set $\mathcal{S}=\{S,E,L,I\}$. IoBT nodes having the same degree $k$ and type $i$ belong to class $(i,k)$. We denote by $\mathcal{C}$ the set of all classes. The proportion $m^l_{ik}$ of IoBT nodes of class $(i,k)$ in each state $l$ evolves according to (\ref{ms})-(\ref{mI}). We fix a reference player of class $(i,k)$. The state evolution of the reference player is given by (\ref{xs})-(\ref{xI}). Thus, the state evolution of the reference player depends on $\Theta(t)$ and $\eta(t)$, and therefore on $\boldsymbol{m}_{ik}(t)$ for all $(i,k)$, as well as on its control, which is the probability $\alpha_{ik}(t)$. Each IoBT node has full knowledge of the state distribution $\boldsymbol{m}_{ik}(t)$ of the remaining nodes for all classes in $\mathcal{C}$. The cost of each IoBT node of class $(i,k)$ is given by (\ref{instcost}). The objective of each IoBT node of class $(i,k)$ is to find the optimal probability $\alpha_{ik}(t)$ that minimizes its cost according to (\ref{payoff}). 
In order to find the minimum cost, the reference player solves a continuous-time finite-state Markov decision process with finite horizon \cite{finitemarkov} defined by the set of states $\mathcal{S}$. In our game, since each IoBT node chooses the probability $\alpha_{ik}(t)$ only when in the \emph{S} state, the action sets for each state will be given by: $\mathcal{A}_{S}=[0,1]$ and $\mathcal{A}_I=\mathcal{A}_E=\mathcal{A}_L=\emptyset$. The running costs for each state $l$ are given by $v_{ik}(S, \boldsymbol{\gamma}(t), \alpha_{ik}(t))=(\mathbb{E}_{\alpha_{ik}(t)}[\bar{Q}_{ik}(\boldsymbol{\gamma}(t))]-Q_T)^2 $, $v_{ik}(E)=v_{ik}(L)=0$, and $v_{ik}(I)=c_{ik}$. Let $u^l_{ik}(t)=\int_{t}^Tv_{ik}(l(s),\boldsymbol{\gamma}(s),\alpha^{j}_{ik}(s))ds$ be the total cost starting from time $t$ when in state $l$, where $l(s)$ is the state at time $s$ and $\alpha^j_{ik}(s)$ is the action taken when in state $j$ at time $s$. Then, for a given $\boldsymbol{\gamma}(t)$, the reference player uses the Hamilton-Jacobi (HJ) equations \cite{finitemean-field} to find the minimum cost. The HJ equations are defined as $-\frac{\partial u^l_{ik}}{\partial t}=h(\Delta_l \boldsymbol{u}_{ik}, \boldsymbol{\gamma}(t),l),$ for every state $l \in \mathcal{S}$, where $\boldsymbol{u}_{ik}=(u^l_{ik})_{l \in \mathcal{S}}$, $\Delta_l \boldsymbol{u}_{ik}=(u^j_{ik}-u^l_{ik})_{j \in \mathcal{S}}$, and \vspace{-0.6 cm} \small \begin{eqnarray} \hspace{-0.5 cm}h(\Delta_l \boldsymbol{u}_{ik}, \boldsymbol{\gamma}(t),l)&=&\min_{\alpha^l_{ik}(t)}v_{ik}(l,\boldsymbol{\gamma}(t),\alpha^l_{ik}(t))+\sum_{j \in \mathcal{S}} G^{ik}_{lj} (\alpha^l_{ik}(t),\Theta(t))(u^j_{ik}-u^l_{ik}), \label{legendre} \end{eqnarray} \normalsize where $G^{ik}_{lj}$ is the transition rate from state $l$ to $j$. 
For our problem, the HJ equations are specifically given by \vspace{-1 cm} \small \begin{eqnarray} &&\hspace{-2.9 cm}-\frac{\partial u^{S}_{ik}}{\partial t}=\min_{\alpha_{ik}(t)}v_{ik}(S,\boldsymbol{\gamma}(t),\alpha_{ik}(t))+(1-\alpha_{ik}(t))R_{ik}(\Theta(t))(u^{E}_{ik}-u^{S}_{ik})\hspace{-2 cm}\nonumber\\ &&\hspace{-1.3 cm}+(1-\alpha_{ik}(t))L_{ik}(\Theta(t))(u^{L}_{ik}-u^{S}_{ik})+\alpha_{ik}(t)R_{ik}(\Theta(t))(u^I_{ik}-u^{S}_{ik}), \label{vs}\\ &&\hspace{-3 cm}-\frac{\partial u^{E}_{ik}}{\partial t}=(1-\delta_{ik})\beta^E_{ik}(u^I_{ik}-u^E_{ik})+(1-\delta_{ik})\gamma^E_{ik}(u^S_{ik}-u^E_{ik}),\\ &&\hspace{-3 cm}-\frac{\partial u^{L}_{ik}}{\partial t}=(1-\delta_{ik})(u^S_{ik}-u^L_{ik}),\\ &&\hspace{-3 cm}-\frac{\partial u^I_{ik}}{\partial t}=c_{ik}+\nu_{ik}(u^{S}_{ik}-u^I_{ik}),\label{vi} \end{eqnarray} \normalsize with $u^{S}_{ik}(T)=u^{E}_{ik}(T)=u^{L}_{ik}(T)=u^I_{ik}(T)=0$. The value of $\alpha_{ik}(t)$ that minimizes the Hamiltonian $h(\Delta_{S} \boldsymbol{u}_{ik}, \boldsymbol{\gamma}(t),S)$ is optimal, and we denote it by $\alpha_{ik}(\Delta_{S}\boldsymbol{u}_{ik}, \boldsymbol{\gamma}(t))$. Due to the dependence of this optimal value on the mean-field dynamics through $\boldsymbol{\gamma}(t)$, the optimal value $\alpha_{ik}(\Delta_{S}\boldsymbol{u}_{ik}, \boldsymbol{\gamma}(t))$ is called the \emph{best response} with respect to $\boldsymbol{\gamma}(t)$. The following remark presents the best response $\alpha_{ik}(\Delta_{S}\boldsymbol{u}_{ik}, \boldsymbol{\gamma}(t))$ of a player of class $(i,k)$ for a given $\boldsymbol{\gamma}(t)$. 
\begin{remark} \emph{For a given $\boldsymbol{\gamma}(t)$, the best response $\alpha_{ik}(\Delta_{S}\boldsymbol{u}_{ik},\boldsymbol{\gamma}(t))$ of a player of class $(i,k)$ is } \small \begin{equation} \hspace{-0.1 cm} \alpha_{ik}(\Delta_S\boldsymbol{u}_{ik},\boldsymbol{\gamma}(t))= \begin{cases} 0, &\hspace{-0.2 cm} \text{\emph{if} } g_{ik}(\Delta_S\boldsymbol{u}_{ik},\boldsymbol{\gamma}(t)) < 0,\\ \smaller g_{ik}(\Delta_S\boldsymbol{u}_{ik}\normalsize,\boldsymbol{\gamma}(t)), & \hspace{-0.3 cm} \text{\emph{if} } 0<g_{ik}(\Delta_{S}\boldsymbol{u}_{ik},\boldsymbol{\gamma}(t)) < 1,\\ 1, & \text{\emph{otherwise}}, \end{cases} \label{alphab} \end{equation} \normalsize \emph{where} \vspace{-0.2 cm} \small \begin{equation} g_{ik}(\Delta_{S}\boldsymbol{u}_{ik},\boldsymbol{\gamma}(t))=\frac{R_{ik}(\Theta(t))(u^E_{ik}-u^S_{ik})+L_{ik}(\Theta(t))(u^L_{ik}-u^S_{ik})-R_{ik}(\Theta(t))(u^I_{ik}-u^S_{ik})+2A_1(Q_T-A_2)}{2A^2_1},\nonumber\\ \end{equation} \begin{eqnarray} &&\hspace{-0.8 cm}A_1=L_{ik}(\Theta(t))(k\eta(t)+1)+(k\eta(t)-k\Theta(t)-\lambda_{ik}-(1-\lambda_{ik})k\eta(t))-L_{ik}(\Theta(t))\beta^L_{ik}(t)(k\eta(t)+1-\kappa\delta_{ik})\nonumber\\ &&-\beta^E_{ik}\big(k\eta(t)-k\Theta(t)-\lambda_{ik}-(1-\lambda_{ik})k\eta(t)-\kappa\delta_{ik}(1-L_{ik}(\Theta(t)))\big), \nonumber \end{eqnarray} \begin{eqnarray} && \hspace{-0.9 cm}A_2=L_{ik}(\Theta(t))\beta^L_{ik}(t)(k\eta(t)+1-\kappa\delta_{ik})+\beta^E_{ik}\big(k\eta(t)-k\Theta(t)-\lambda_{ik}-(1-\lambda_{ik})k\eta(t)-\kappa \delta_{ik}(1-L_{ik}(\Theta(t)))\big).\nonumber \end{eqnarray} \end{remark} \vspace{-0.3 cm} The result directly follows by equating to zero the partial derivative of the right-hand side of (\ref{vs}) with respect to $\alpha_{ik}(t)$. Given the best response in (\ref{alphab}), next, we characterize the MFE for our IoBT game. 
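Since the running cost is quadratic in $\alpha_{ik}(t)$, the best response in (\ref{alphab}) is simply the unconstrained minimizer $g_{ik}$ projected onto $[0,1]$. A sketch, where the argument names mirror the symbols above and are placeholders for scalar values at a fixed time $t$:

```python
def g_ik(R, L, dE, dL, dI, A1, A2, Q_T):
    """Unconstrained minimizer of the quadratic Hamiltonian in the S state;
    dE, dL, dI stand for u^E - u^S, u^L - u^S, u^I - u^S."""
    return (R * dE + L * dL - R * dI + 2.0 * A1 * (Q_T - A2)) / (2.0 * A1 ** 2)

def best_response(g):
    """Projection of g onto the feasible action set [0, 1], as in (alphab)."""
    return min(max(g, 0.0), 1.0)
```

The clipping implements the three cases of (\ref{alphab}): $0$ below the feasible set, $g_{ik}$ inside it, and $1$ above it.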
\vspace{-0.5 cm} \subsection{Mean-Field Equilibrium} The MFE occurs when the best response $\alpha_{ik}(\Delta_S\boldsymbol{u}_{ik},\boldsymbol{\gamma}(t))$ of a player belonging to class $(i,k)$ is the same as the strategy $\bar{\alpha}_{ik}$ of the population belonging to class $(i,k)$, for all $(i,k)$. Thus, the MFE will be the solution of the Hamilton-Jacobi equations in (\ref{vs})-(\ref{vi}) and the Kolmogorov equations in (\ref{ms})-(\ref{mI}) with $\bar{\alpha}_{ik}(t)=\alpha_{ik}(\Delta_S\boldsymbol{u}_{ik},\boldsymbol{\gamma}(t))$, $u^{S}_{ik}(T)=u^{E}_{ik}(T)=u^{L}_{ik}(T)=u^I_{ik}(T)=0$ and $m^{S}_{ik}(0)=1$, $m^{L}_{ik}(0)=m^{E}_{ik}(0)=m^I_{ik}(0)=0 \hspace{0.2 cm} \forall i,k.$ For our IoBT mean-field game, the MFE exists; the proof follows from \cite[Proposition 4]{finitemean-field}. Even though existence follows from \cite{finitemean-field}, it is difficult to characterize the MFE of our IoBT game analytically. Further, our game does not satisfy the standard conditions for uniqueness for mean-field games (see \cite{finitemean-field} and \cite{uniqueness}). In particular, the Hamiltonian $h(\Delta_S \boldsymbol{u}_{ik},\boldsymbol{\gamma}(t),S)$ is not strongly convex in $\Delta_S \boldsymbol{u}_{ik}$. Thus, due to the aforementioned reasons, it is difficult to analytically establish the uniqueness of the MFE of our game. In (\ref{ms})-(\ref{mI}), the mean-field equations are subject to initial conditions. Thus, in order to find the MFE, we propose an algorithm based on the forward-backward sweep method \cite{FBSM}, which has been widely used to solve optimal control problems with initial conditions. The details of the proposed forward-backward sweep as tailored to the IoBT are given by Algorithm \ref{FBSMalg}. 
\begin{algorithm}[t] \smaller \textbf{Input:} $\epsilon$, $T$, $\nu_{ik}$, $\lambda_{ik}$ for all $(i,k)$ \\ \textbf{Output:} The equilibrium acceptance probabilities $\boldsymbol{\alpha^{*}_{ik}}=(\alpha^{*}_{ik}(t))_{t \in [0,T]}$ for all $(i,k)$\\ Initialize: for all $t \in [0,T]$, $\alpha^{*}_{ik}(t)=\alpha_{ik,0},$ $iter=0$ \\ \Repeat{ $||\boldsymbol{\alpha^{*}_{ik}}-\boldsymbol{\alpha^{*}_{ik,\textrm{old}}}|| \leq \epsilon$ or $iter >I_{\max}$ }{ $\boldsymbol{\alpha^{*}_{ik,\textrm{old}}} \leftarrow \boldsymbol{\alpha^{*}_{ik}}$ \\ Compute $m^*_{ik}(t)$ using the mean-field equations (\ref{ms})-(\ref{mI}) with $\bar{\alpha}_{ik}(t)=\alpha_{ik,\textrm{old}}(t)$ $\forall (i,k)$, $t\in[0,T]$ (forward sweep)\\ Using $m^*_{ik}(t)$, (\ref{theta}), and (\ref{etta}), compute $\boldsymbol{\gamma}^*(t)$ $\forall t \in [0,T]$\\ Using $(\boldsymbol{\gamma}^*(t))_{t \in [0,T]}$, compute $\boldsymbol{\alpha^{*}_{ik}}$ using the Hamilton-Jacobi equations (\ref{vs})-(\ref{vi}) (backward sweep)\\ $iter=iter+1$ } \caption{Forward-Backward Sweep Algorithm for the IoBT Mean-Field Game} \label{FBSMalg} \end{algorithm} Algorithm \ref{FBSMalg} first solves the mean-field equations using a finite difference method and using the initial guess of the optimal acceptance probability $\boldsymbol{\alpha_{ik}}$. Next, using the mean-field solution $(m^*_{ik}(t))_{t \in [0,T]}$, the probabilities $\Theta^*(t)$ and $\eta^*(t)$ are computed according to (\ref{theta}) and (\ref{etta}), respectively. Algorithm \ref{FBSMalg} then computes the optimal acceptance probability $\boldsymbol{\alpha_{ik}}$ based on the HJ equations and on the computed $(\Theta^*(t))_{t \in [0,T]}$ and $(\eta^*(t))_{t \in [0,T]}$, using a finite difference method. The newly computed $\boldsymbol{\alpha_{ik}}$ are consequently used to recompute the mean-field solution. 
The process is repeated until convergence or until the maximum number of iterations $I_{\max}$ is reached, since the forward-backward sweep algorithm is not, in general, guaranteed to converge. \vspace{-0.1 cm} In practice, each IoBT node will run Algorithm 1 in order to determine its optimal probability of accepting information $\alpha^*_{ik}(t)$ over the considered interval $[0,T]$. It is assumed that the IoBT nodes acquire information about the characteristics (such as the processing delay $\delta_{ik}$ and the probabilities $\beta^E_{ik}$ and $\beta^L_{ik}$) of IoBT nodes belonging to other classes through an initial phase in which the fusion center/base station broadcasts the information. For each subsequent time duration, each IoBT node will run Algorithm 1 and use, as the initial mean-field values, the mean-field values $m^{l*}_{ik}(T)$ at time $T$ from the previous run. The process will be repeated until the end of the military operation. \normalsize While the mean-field formulation provides a tractable approach to analyze a massive IoBT system, it assumes that the number of IoBT devices is infinite. In practice, the IoBT will have a large, but finite number of devices. As such, in the next section, we analyze an IoBT having a large, but finite number of nodes. The finite IoBT case is a better fit to a real-world IoBT and can account for all potential network sizes. However, computing the equilibria for the finite case is computationally expensive, as discussed next, and therefore it is not suitable for use by the IoBT nodes to determine the optimal strategies in real time. Thus, the complexity of determining the equilibria of the finite IoBT makes the proposed mean-field approach in Section II more suitable for determining the optimal strategies of the IoBT nodes. 
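The loop structure of Algorithm \ref{FBSMalg} can be sketched generically. Here `forward` and `backward` stand in for the model-specific finite-difference solvers of (\ref{ms})-(\ref{mI}) and (\ref{vs})-(\ref{vi}); they are placeholders, not the paper's solvers:

```python
import numpy as np

def forward_backward_sweep(forward, backward, alpha0, eps=1e-6, max_iter=100):
    """Iterate: (i) forward sweep of the mean-field (Kolmogorov) equations
    under the current control, (ii) backward sweep of the HJ equations to
    update the control, until the control stabilizes or max_iter is hit."""
    alpha = np.asarray(alpha0, dtype=float)
    m = None
    for _ in range(max_iter):
        alpha_old = alpha
        m = forward(alpha_old)      # forward sweep from t = 0
        alpha = backward(m)         # backward sweep from t = T
        if np.max(np.abs(alpha - alpha_old)) <= eps:
            break
    return alpha, m
```

With contractive toy maps in place of the two sweeps, the iteration converges geometrically to the unique fixed point, mirroring the behavior observed for Algorithm \ref{FBSMalg} in the simulations.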
\vspace{-0.3 cm} \section{On the Tractability of the Finite IoBT Case} \vspace{-0.2 cm} \subsection{Game Formulation} We now consider the game when the IoBT network is composed of a finite number $N+1$ of nodes, each of which can be in any one of the states in $\mathcal{S}$. The number of \emph{background nodes}, i.e., any nodes other than the reference player, belonging to class $(i,k)$ is $N_{ik}$, with the assumption that $\lim_{N\to\infty}\frac{N_{ik}}{N}=\pi_{ik}$, where $\pi_{ik}=P(k)p_k(i)$ is the distribution of the players in the previously studied infinite IoBT case. Thus, at any time $t$, the number of background nodes $\boldsymbol{n}_{ik}(t)=(n^j_{ik}(t))_{j \in \mathcal{S}}$ of class $(i,k)$ in each state evolves according to a Markov chain. Hence, the probability that a link is infected in the finite IoBT is $\Theta_N(t)=\sum_k k \sum_{i \in \mathcal{H}_k} \frac{n^I_{ik}(t)}{N}$. Similarly, the probability of a randomly chosen link pointing to a susceptible node is $\eta_N(t)=\sum_k k \sum_{i \in \mathcal{H}_k} \frac{n^{S}_{ik}(t)}{N}$. We define the vector $\boldsymbol{n}(t)=(\boldsymbol{n}_{ik}(t))_{(i,k) \in \mathcal{C}}$. The state evolution of a background node belonging to class $(i,k)$ is \vspace{-0.4 cm} \begin{equation} \mathbb{P}(s_{ik}(t+h)=j|s_{ik}(t)=l)= G^{lj}_{ik}(\Theta_N(t),\bar{\alpha}^{N}_{ik}(t))h+o(h), \label{finiteevol} \end{equation} \normalsize where $o(h)/h \rightarrow 0$ as $h \rightarrow 0$, $G^{lj}_{ik}$ is the transition rate from state $l$ to state $j$ and has the same expression as in the infinite case with the variable $\Theta(t)$ replaced by $\Theta_N(t)$, and $\bar{\alpha}^{N}_{ik}(t)$ is the acceptance probability of the background nodes of class $(i,k)$. Thus, in our problem, $G^{lj}_{ik}$ depends only on $\Theta_N(t)$ and on $\bar{\alpha}^{N}_{ik}(t)$ when the node is in the $S$ state. 
Subsequently, $G^{lj}_{ik}(\Theta_N(t), \bar{\alpha}^N_{ik}(t))$ is replaced by $G^{lj}_{ik}(\boldsymbol{n}(t), \bar{\alpha}^N_{ik}(t))$, as $\Theta_N(t)$ is a function of $n^I_{ik}(t)$ $\forall (i,k) \in \mathcal{C}$. Thus, the evolution of the number of nodes $\boldsymbol{n}_{ik}(t)$ will affect the transition rate $G^{lj}_{ik}$. The state evolution of the reference node is also given by (\ref{finiteevol}) but with $\bar{\alpha}^{N}_{ik}(t)$ replaced by $\alpha^{N}_{ik}(t)$. Further, the state transitions of the different nodes are independent conditioned on $\boldsymbol{n}(t)$ and $l(t)$, where $l(t)$ is the state of the reference node. Thus, the evolution of $\boldsymbol{n}_{ik}(t)$ is given by $\mathbb{P}(\boldsymbol{n}_{ik}(t+h)=\boldsymbol{n}_{ik}+\boldsymbol{e}_{jl}|\boldsymbol{n}_{ik}(t)=\boldsymbol{n_{ik}}, \boldsymbol{n}(t)=\boldsymbol{n}, \boldsymbol{s}(t)=s)=\rho^s_{ik}(l,j,\boldsymbol{n})h+o(h)$, where $\boldsymbol{e}_{jl}=\boldsymbol{e}_{j}-\boldsymbol{e}_{l}$, $\boldsymbol{e}_{j}$ is the $j^{th}$ vector of the canonical basis of $\mathbb{R}^{|\mathcal{S}|}$, and \vspace{-0.1 cm} \begin{equation} \rho^s_{ik}(l,j,\boldsymbol{n})= \begin{cases} n^l_{ik} G^{lj}_{ik}(\boldsymbol{n}',\bar{\alpha}^{N}_{ik}(t)), & \text{if } l=S,\\ n^l_{ik} G^{lj}_{ik}, & \text{otherwise}, \end{cases} \label{rho} \end{equation} where $\boldsymbol{n}'=\boldsymbol{n}+\boldsymbol{e}^{ik}_{sl}$ if $(i,k)=(i',k')$, where $(i',k')$ is the class of the reference player and $\boldsymbol{n}+\boldsymbol{e}^{ik}_{sl}$ is the same as $\boldsymbol{n}$ but with $\boldsymbol{n}_{ik}$ replaced by $\boldsymbol{n}_{ik}+\boldsymbol{e}_{sl}$, and $\boldsymbol{n}'=\boldsymbol{n}-\boldsymbol{e}^{ik}_{l}$ if $(i,k) \neq (i',k')$, where $\boldsymbol{n}-\boldsymbol{e}^{ik}_{l}$ is the same as $\boldsymbol{n}$ but with $\boldsymbol{n}_{ik}$ replaced by $\boldsymbol{n}_{ik}-\boldsymbol{e}_{l}$. 
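The population dynamics above define a continuous-time Markov chain in which a class count vector jumps by $\boldsymbol{e}_j-\boldsymbol{e}_l$ at rate $n^l_{ik}G^{lj}_{ik}$, as in (\ref{rho}). One jump of such a chain can be simulated with a standard Gillespie step; the rate matrix in the sketch is an arbitrary stand-in, not the model's $G^{lj}_{ik}$:

```python
import random

def gillespie_step(n, G, t):
    """One jump of the population CTMC: a transition l -> j fires at rate
    n[l] * G[l][j]. `n` maps states to counts, `G` is a rate matrix given
    as nested dicts; returns the updated counts and jump time."""
    rates = {(l, j): n[l] * G[l][j]
             for l in n for j in G[l] if j != l and n[l] > 0}
    total = sum(rates.values())
    if total == 0:
        return n, float('inf')          # absorbing: no transition can fire
    t += random.expovariate(total)      # exponential waiting time
    r, acc = random.uniform(0, total), 0.0
    for (l, j), rate in rates.items():
        acc += rate
        if r <= acc:                    # select transition (l -> j)
            n = dict(n)
            n[l] -= 1
            n[j] += 1
            return n, t
    return n, t
```

For example, with counts `{'S': 10, 'I': 0}` and a single rate `G['S']['I']`, one step moves exactly one node from $S$ to $I$ after an exponentially distributed waiting time.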
The reference node has full knowledge of the evolution of $\boldsymbol{n}_{ik}(t)=(n^j_{ik}(t))_{j \in \mathcal{S}}$ of the background nodes for all $(i,k) \in \mathcal{C}$. Thus, the reference IoBT node of class $(i,k)$ starting from state $l$ seeks to find the acceptance probability $\alpha^{N}_{ik}(t)$, chosen when in the $S$ state, that minimizes its total expected cost $u^{N,\boldsymbol{n},l}_{ik}=\min_{\alpha^N_{ik}(t)}\mathbb{E}\int_{0}^T v_{ik}(j(s),\boldsymbol{\gamma}^N(s),\alpha^j_{ik}(s))ds$, where $\boldsymbol{\gamma}^N(s)=(\Theta_N(s),\eta_N(s))$ and $v_{ik}(j(s),\boldsymbol{\gamma}^N(s),\alpha^j_{ik}(s))$ has the same expression as in the infinite case. We denote by $u^{N,\boldsymbol{n},l}_{ik}(t)$ the total expected cost starting from time $t$ and when in state $l$ conditioned on $\boldsymbol{n}(t)=\boldsymbol{n}$. The HJ equation for this case will be given by \cite{finitemean-field} \vspace{-0.4 cm} \small \begin{eqnarray} &&\hspace{-2 cm}-\frac{du^{N,\boldsymbol{n},l}_{ik}}{dt}=\sum_{r,v}\rho^l_{ik}(v,r,\boldsymbol{n})(u^{N,\boldsymbol{n}+e^{ik}_{vr},l}_{ik} - u^{N,\boldsymbol{n},l}_{ik}) + h(\Delta_l \boldsymbol{u}^{N,\boldsymbol{n}}_{ik} ,\boldsymbol{\gamma}^N(t), l),\label{HBf} \end{eqnarray} \normalsize \vspace{-0.4 cm} \\ with $u^{N,\boldsymbol{n},l}_{ik}(T)=0$ for all $l \in \mathcal{S}$ and $(i,k) \in \mathcal{C}$, where $\Delta_l \boldsymbol{u}^{N,\boldsymbol{n}}_{ik}=(u^{N,\boldsymbol{n},j}_{ik}-u^{N,\boldsymbol{n},l}_{ik})_{j \in \mathcal{S}}$ and $h(\Delta_l \boldsymbol{u}^{N,\boldsymbol{n}}_{ik} ,\boldsymbol{\gamma}^N(t), l)$ has the same expression as in (\ref{legendre}). In the finite IoBT game, the equilibrium also occurs when $\alpha^{N}_{ik}(t)=\bar{\alpha}^{N}_{ik}(t)$ $\forall$ $(i,k)$. It can be easily shown that the equilibrium exists using an argument similar to that in \cite{finitemean-field}. 
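To see why solving (\ref{HBf}) is expensive, note that $u^{N,\boldsymbol{n},l}_{ik}$ must be tabulated for every count vector $\boldsymbol{n}$: by a stars-and-bars count, $N_{ik}$ background nodes spread over $|\mathcal{S}|=4$ states in $\binom{N_{ik}+3}{3}$ ways per class, and the joint space is the product over classes. A sketch with arbitrary illustrative class sizes:

```python
from math import comb

def class_configs(N_ik, S=4):
    """Count vectors for N_ik nodes over S states (stars and bars)."""
    return comb(N_ik + S - 1, S - 1)

def joint_configs(class_sizes, S=4):
    """Size of the joint space of n = (n_ik): product over all classes."""
    total = 1
    for N_ik in class_sizes:
        total *= class_configs(N_ik, S)
    return total
```

For four classes of 50 nodes each, the joint count space already exceeds $10^{17}$ configurations, whereas the mean-field system of Section II tracks only four proportions per class.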
However, due to the dependence of $u^{N,\boldsymbol{n},l}_{ik}$ on $\boldsymbol{n}$ in (\ref{HBf}), the number of possible evaluations of $u^{N,\boldsymbol{n},l}_{ik}$ in (\ref{HBf}) grows exponentially with the number of IoBT nodes. Hence, computing $u^{N,\boldsymbol{n},l}_{ik}$ for the finite IoBT is computationally expensive, unlike in the proposed mean-field approach, where the computational complexity of computing $u^{l}_{ik}$ for all $l \in \mathcal{S}$ is only linear in $T$ according to (\ref{vs})-(\ref{alphab}). Thus, the mean-field approach is computationally more favorable for finding the optimal strategies of the IoBT nodes. However, in order to ensure that the mean-field game yields a performance comparable to the finite IoBT game for a large number of nodes, we discuss the convergence of the cost and distribution functions of the finite IoBT case to the mean-field case, as $N$ goes to infinity. \vspace{-0.4 cm} \subsection{Convergence Conditions of the Finite IoBT Game} \vspace{-0.2 cm} We now extend the convergence results of the finite state mean-field games in \cite{finitemean-field} to the case of multiclass agents and to the case in which the transition rates are functions of the control as well as of the mean-field. We show the conditions under which the cost and distribution functions of the $N+1$ player game converge uniformly to the corresponding functions in the mean-field game, in order to ensure that the mean-field IoBT game yields a performance comparable to the finite IoBT game. Our proof relies on the following useful property from \cite[Proposition 7]{finitemean-field}, which holds for the solution $u^{N,\boldsymbol{n},l}_{ik}$ to our HJ equations in (\ref{HBf}). \begin{remark} \emph{Let $u^{N,\boldsymbol{n},l}_{ik}(t)$ be the solution of (\ref{HBf}). 
Then, there exists $C>0$ and $T^*>0$ such that for $0<T<T^*$, } \vspace{-0.7 cm} \small \begin{equation} \max_{rv}||u^{N,\boldsymbol{n}+\boldsymbol{e}^{rv}_{ik},l}_{ik}(t)-u^{N,\boldsymbol{n},l}_{ik}(t)|| \leq \frac{2C}{N}. \end{equation} \label{grad} \end{remark} \vspace{-0.7 cm} The norm $||\cdot||$ used throughout is the $\infty$-norm. Also, we rely on the following properties of our game. \begin{prop} The studied IoBT game exhibits the following properties: \begin{enumerate} \item The transition rate $G^{jl}_{ik}(\alpha_{ik}(t), \Theta(t))$ is a Lipschitz function of $\alpha_{ik}(t)$ for all $(i,k)$. \item The best response $\alpha^*_{ik}(\Delta_S \boldsymbol{u}_{ik}, \boldsymbol{\gamma}(t))$ is Lipschitz in $\Delta_S \boldsymbol{u}_{ik}$, $\Theta(t)$, $\eta(t)$, and $\boldsymbol{m}_{ik}(t)$ $\forall$ $(i,k) \in \mathcal{C}$, provided that the immediate cost $v_{ik}$ is strongly convex in $\alpha_{ik}$. \item The transition rate $G^{jl}_{ik}(\alpha^*_{ik},\Theta(t))$ is Lipschitz in $\Delta_S \boldsymbol{u}_{ik}$ and $\Theta(t)$. \item The immediate cost $v_{ik}$ and its derivative $\nabla_{\alpha_{ik}} v_{ik}$ are Lipschitz in $\Theta(t)$ and $\eta(t)$. \item The function $h(\Delta_S \boldsymbol{u}_{ik},\boldsymbol{\gamma}(t),l)$ is Lipschitz in $\Delta_l \boldsymbol{u}_{ik}$, $\Theta(t)$, and $\eta(t)$. \end{enumerate} \label{prop1} \end{prop} \begin{proof} See Appendix A. \end{proof} Using the results of Proposition \ref{prop1}, we can now present the convergence results in the following theorem. \begin{thm} Let $T^*$ be defined as in Remark \ref{grad}. 
There exists a constant $C$, independent of $N$, such that if $T<T^*$ and $\mu=TC<1$, then \vspace{-0.4 cm} \small \begin{equation} \sum_{i,k}\big(V^N_{ik}(t)+W^N_{ik}(t)\big) \leq \frac{C}{1-\mu}\frac{1}{N_{\max}}, \end{equation}\normalsize \vspace{-0.4 cm} \\ for all $t \in [0,T]$, where $N_{\max}=\max_{(r,v)\in\mathcal{C}}N_{rv}$, $W^N_{ik}(t)=\mathbb{E}\Big[||\boldsymbol{u}_{ik}(t)-\boldsymbol{u}^{N,\boldsymbol{n}}_{ik}(t)||^2\Big]$, $V^N_{ik}(t)=\mathbb{E}\Big[||\frac{\boldsymbol{n}_{ik}(t)}{N_{ik}}-\boldsymbol{m}_{ik}(t)||^2\Big]$, $\boldsymbol{m}_{ik}(t)$ and $\boldsymbol{u}_{ik}(t)$ are the mean-field and cost functions at the MFE, and $\boldsymbol{n}_{ik}(t)$ and $\boldsymbol{u}^{N,\boldsymbol{n}}_{ik}(t)$ are the equilibrium distribution and cost value of the $N+1$ player game. \end{thm} \begin{proof} The proof of Theorem 1 relies on the following two lemmas. \begin{lem} Let $T^*$ be defined as in Remark \ref{grad}. Then, there exists $C_1>0$ such that \begin{equation} W^N_{ik}(t)\leq \frac{C_1}{N}+C_1 \mathbb{E} \int_{t}^T \Big(W^N_{ik}(s) + \sum_{(r,v) \in \mathcal{C}}V^N_{rv}(s)\Big)ds. \label{Wik} \end{equation} \end{lem} \begin{proof} See Appendix B. \end{proof} \begin{lem} Let $T^*$ be defined as in Remark \ref{grad}. Then, there exists $C_2>0$ such that \begin{equation} V^N_{ik}(t) \leq C_2 \mathbb{E} \int_{0}^t (V^N_{ik}(s)+W^N_{ik}(s)+V^N_{yz}(s)) ds +\frac{C_2}{N_{\max}}, \label{vik} \end{equation} where $(y,z)=\arg \max_{(r,v)} V^N_{rv}(t)$ and $N_{\max}=\max_{(r,v)\in \mathcal{C}}N_{rv}$. \end{lem} \begin{proof} See Appendix C. 
\end{proof} By adding (\ref{Wik}) and (\ref{vik}) for all $(i,k)$, we have \small \begin{eqnarray} \sum_{ik}W^N_{ik}(t)+\sum_{ik} V^N_{ik}(t) &\leq& C_1 \mathbb{E} \int_{t}^T \sum_{ik}\Big(W^N_{ik}(s)+\sum_{rv}V^N_{rv}(s)\Big)ds +\frac{C_1 |\mathcal{C}|}{N} \nonumber\\ &&+ C_2 \mathbb{E} \int_{0}^t \sum_{ik}\Big(W^N_{ik}(s)+V^N_{ik}(s)+V^N_{yz}(s)\Big)ds+ \frac{C_2|\mathcal{C}|}{N_{\max}}\nonumber\\ &\leq& \bar{C} \mathbb{E} \int_{0}^T \sum_{ik}\Big(V^N_{ik}(s)+W^N_{ik}(s)\Big)ds + \frac{\bar{C}}{N_{\max}}, \end{eqnarray} \normalsize where $\bar{C}=\max\{C_1|\mathcal{C}|,C_2+1, C_2|\mathcal{C}|\}$. \\ Let $W^N_{ik}+V^N_{ik} =\max_{0 \leq t \leq T}\big(W^N_{ik}(t)+V^N_{ik}(t)\big)$. Then, \begin{eqnarray} \sum_{ik}\big(W^N_{ik}(t)+V^N_{ik}(t)\big) \leq \sum_{ik} \big(W^N_{ik}+V^N_{ik}\big) \leq \bar{C}T\sum_{ik}\big(W^N_{ik}+V^N_{ik}\big)+\frac{\bar{C}}{N_{\max}} \leq \frac{\bar{C}}{(1-\mu)N_{\max}}, \end{eqnarray} where $\mu=\bar{C}T$. Thus, the value function and the proportion of nodes converge uniformly in distribution to the mean-field case. Hence, the mean-field equilibrium constitutes an $\epsilon$-equilibrium for the finite game, as demonstrated in \cite{mean-fieldSIR}. \end{proof} \vspace{-0.3 cm} Thus, in this section, we have demonstrated that finding the equilibrium for the finite game has exponential complexity in the number of IoBT nodes. However, we have also shown that, under mild conditions, the finite game converges to the mean-field game as $N$ goes to infinity. \vspace{-0.6 cm} \section{Simulation Results} \vspace{-0.1 cm} For our simulations, we consider an IoBT in which the node degrees take values $k \in \{1,10, 15, 20\}$. The degree distribution is $ P(k=1)=0.4$, $P(k=10)=0.3$, $P(k=15)=0.2$, and $P(k=20)=0.1$. The distribution is chosen such that the proportion of nodes decreases with the degree, which represents a typical hierarchical IoBT structure. We consider one type of device for each degree. Thus, nodes of degree $1$ correspond to simple sensors. 
Nodes of degrees $10$ and $15$ correspond to cluster heads, and nodes of degree $20$ correspond to local sinks. The infection costs are set to: $c_{1}=1$, $c_{10}=10$, $c_{15}=20$, and $c_{20}=30$. The cost values are chosen depending on the importance of the nodes. For a node of degree $k$, the target QoI is $Q_T=k$. The attacker's infection rate for all nodes is set to $0.2$. The time period $T$ is set to $0.9$ seconds. The delays of the $\emph{E}$ and $\emph{L}$ states are set to $\delta_1=0$, $\delta_{10}=0.4$, $\delta_{15}=0.3$, and $\delta_{20}=0.3$. The acceptance probabilities of the $\emph{E}$ state are set to $\beta^E_1=0.5$, $\beta^E_{10}=0.3$, $\beta^E_{15}=0.2$, and $\beta^E_{20}=0.1$. The acceptance probabilities when in the $\emph{L}$ state are set to $\beta^L_1=0.5$, $\beta^L_{10}=0.6$, $\beta^L_{15}=0.7$, and $\beta^L_{20}=0.8$. The parameters are chosen to reflect the computational capabilities of the different IoBT nodes. For example, $\beta^E_1=\beta^L_1=0.5$ is chosen for simple sensors that cannot identify the misinformation. Thus, such sensors accept/reject the information with probability $0.5$, and, hence, the processing delay $\delta_1$ is set to $0$. For comparison, we consider a baseline in which the nodes always accept the information with probability one. For the considered simulation values, we compute the equilibrium acceptance probability using Algorithm 1. We also compute the proportion of infected nodes, the probability of an infected link, and the QoI for both the baseline and the MFE. For all considered values, Algorithm 1 converges in at most $16$ iterations. Also, Algorithm 1 always yields the same solution for any initial guess of the acceptance probabilities. 
\begin{figure}[t]{ \centering \includegraphics[width=9 cm,height=5.3cm,angle=0]{acceptance2.pdf} \caption{The acceptance probability as a function of time.}\label{acceptance} }\vspace{-0.4 cm} \end{figure} \begin{figure}[t]{ \centering \includegraphics[width=9 cm,height=5.3cm,angle=0]{infected22.pdf} \caption{Evolution of the proportion of infected nodes over time.}\label{infected} }\vspace{-0.4 cm} \end{figure} \begin{figure}[t]{ \centering \includegraphics[width=9 cm,height=5.3 cm,angle=0]{Theta22.pdf} \caption{Evolution of the probability of an infected link over time.}\label{theta} }\vspace{-0.7 cm} \end{figure} \begin{figure}[t]{ \centering \includegraphics[width=9 cm,height=5.3cm,angle=0]{quality22.pdf} \caption{Time evolution of the QoI resulting from the proposed MFE and the baseline.}\label{qoI} }\vspace{-0.3 cm} \end{figure} Fig. \ref{acceptance} shows the MFE acceptance probability $\alpha$ versus time for the considered values of degree $k$. First, when $k=1$, the acceptance probability is zero for the entire time duration, since the processing delay $\delta_1$ is $0$. Thus, a node of degree $1$ can reduce the spread of misinformation by accepting the information with probability $\beta^E_1=0.5$ instead of $1$. When $k=10$, the acceptance probability is $0$ for $t \leq 0.83$ seconds. Then, it increases with time until it reaches $0.01$ at $t=0.9$, as the spread of misinformation ceases in the IoBT. When $k=15, 20$, the acceptance probability varies similarly to the case when $k=10$. Fig. \ref{infected} shows how the proportion of infected nodes changes over time for the considered values of degree $k$ and for both the baseline and the MFE. Using the baseline and for the considered degree values, the proportion of infected nodes increases with time until it reaches $0.45$, $0.95$, $0.97$, and $0.98$ at $t=0.9$ for $k=1$, $10$, $15$, and $20$, respectively. From Fig. 
\ref{infected} we can also see that, using the MFE and for all considered degree values, the proportion of infected nodes increases with time until it reaches $0.0212$, $0.009$, $0.0078$, and $0.0065$ at $t=0.9$ for $k=1$, $10$, $15$, and $20$, respectively. Thus, Fig. \ref{infected} shows that, for all considered degree values, the proportion of infected nodes using the MFE is maintained significantly lower than in the baseline case. The considerable decrease in the proportion of infected nodes is due to two reasons: 1) at the MFE, the acceptance probability of information for all nodes when in the \emph{S} state is zero for a considerable time duration, as shown in Fig. \ref{acceptance}; and 2) the acceptance probabilities of misinformation when in the $\emph{E}$ state are low, which limits the spread of misinformation. The decrease in the proportion of infected nodes reaches up to $99\%$ when $k=15$. In Fig. \ref{theta}, we show the probability of an infected link $\Theta$ over time for both the baseline and the MFE. From this figure, we can see that, for the baseline, $\Theta$ increases with time until it reaches $0.94$ at $t=0.9$ seconds. This is due to the fact that the proportion of infected nodes increases with time using the baseline, for all considered degree values, as shown in Fig. \ref{infected}. For the MFE, from Fig. \ref{theta} we can see that $\Theta$ increases with time until it reaches $0.0085$ at $t=0.9$ seconds. The decrease in $\Theta$ using the MFE reaches up to $99\%$ compared to the baseline. Thus, Fig. \ref{theta} shows the effectiveness of our proposed scheme in limiting the spread of misinformation. Fig. \ref{qoI} shows the evolution of the QoI over time for both the baseline and the MFE for degree values $k=15$ and $20$, respectively. First, when $k=15$ and using the baseline, the QoI will be $16$ at $t=0$. Then, the QoI decreases until it reaches $-12.75$ at $t=0.9$ seconds. 
The decrease in the QoI is due to the increase in the probability $\Theta$, as shown in Fig. \ref{theta}. When $k=15$ and using the MFE, the QoI is $6$ at $t=0$, since initially all nodes are susceptible and the acceptance probability is zero, as shown in Fig. \ref{acceptance}. Then, the QoI decreases with time until it reaches $1.628$ at $t=0.9$ seconds. The decrease in the QoI is mainly due to the increase in the probability $\Theta$ with time, as shown in Fig. \ref{theta}. When $k=20$, the QoI resulting from the baseline is $21$ at $t=0$. Then, the QoI decreases with time until it reaches $-17$ at $t=0.9$. When $k=20$, the QoI resulting from the proposed MFE is $9.8$ at $t=0$. As $t$ increases, the QoI decreases until it reaches $3.64$. The decrease in the QoI when $k=20$ for the baseline and the MFE is due to the same aforementioned reasons as for the case when $k=15$. Thus, the proposed MFE approach achieves a 1.2-fold increase in the value of the QoI compared to the baseline, at $t=0.9$ and when $k=20$. \begin{figure}[t]{ \centering \includegraphics[width=9 cm,height=5.3cm,angle=0]{alphabeta2.pdf} \caption{Time evolution of the acceptance probability of nodes with degree $20$ for different values of $\beta^E_{20}$.}\label{alphabeta} }\vspace{-0.5 cm} \end{figure} Fig. \ref{alphabeta} shows the MFE probability of accepting information by IoBT nodes with degree $20$ over time when the value of $\beta^E_{20}$ is $0.1$, $0.3$, and $0.5$. When $\beta^E_{20}=0.1$, the acceptance probability is zero for $t \leq 0.56$. Then, the acceptance probability increases with time until it reaches $0.1$ at $t=0.9$ seconds. When the value of $\beta^E_{20}$ is $0.3$ or $0.5$, the acceptance probability is zero for the entire duration. Thus, Fig. \ref{alphabeta} shows that the acceptance probability decreases as the node has a higher capability to identify the misinformation. 
\begin{figure}[t]{ \centering \includegraphics[width=9 cm,height=5.3cm,angle=0]{Thetabeta2.pdf} \caption{Time evolution of the probability $\Theta(t)$ for different values of $\beta^E_{20}$.}\label{thetabeta} }\vspace{-0.3 cm} \end{figure} \begin{figure}[t]{ \centering \includegraphics[width=9 cm,height=5.3cm,angle=0]{QoIbeta2.pdf} \caption{Time evolution of the QoI of nodes with degree $20$ for different values of $\beta^E_{20}$.}\label{qoIbeta} }\vspace{-0.5 cm} \end{figure} Fig. \ref{thetabeta} shows the probability $\Theta$ of a link pointing to an infected node over time for $k=20$ and when the probability $\beta^E_{20}$ is $0.1$, $0.3$, and $0.5$. For the three considered values of $\beta^E_{20}$, the probability $\Theta$ increases with time. From Fig. \ref{thetabeta}, we can see that, when $\beta^E_{20}=0.1$, the probability $\Theta$ increases until it reaches $0.0085$ at $t=0.9$. Meanwhile, for $\beta^E_{20}=0.3$ and $0.5$, the probability $\Theta$ reaches $0.01$ and $0.013$, respectively, at $t=0.9$ seconds. As for the baseline, the probability $\Theta$ is not affected by $\beta^E_{20}$ since the nodes do not reach the \emph{E} or \emph{L} states. Thus, $\Theta(t)$ is the same as the one shown in Fig. \ref{theta}. The MFE therefore achieves a considerable decrease in $\Theta$ even when the nodes with degree $k=20$ cannot identify the misinformation (i.e., when $\beta^E_{20}=0.5$). In this case, the decrease in $\Theta$ is $97\%$ compared to the baseline. Further, Fig. \ref{thetabeta} clearly demonstrates that the spread of misinformation becomes more limited when the IoBT nodes have a higher capability to identify misinformation. In Fig. \ref{qoIbeta}, we plot the evolution of the QoI over time, for $k=20$ and for the three values of $\beta^E_{20}$: $0.1$, $0.3$, and $0.5$. Fig. \ref{qoIbeta} shows that, for $\beta^E_{20}=0.1$, the QoI is the same as in Fig. \ref{qoI}. 
However, when $\beta^E_{20}=0.3$, the QoI decreases with time until it reaches $2.7$ at $t=0.9$ seconds. When $\beta^E_{20}=0.5$, the QoI decreases with time until it reaches $2.37$ at $t=0.9$ seconds. Using the baseline, the QoI is not affected by $\beta^E_{20}$. Thus, as demonstrated earlier in Fig. \ref{qoI}, the QoI decreases until it reaches $-17$ at $t=0.9$. Fig. \ref{qoIbeta} further shows that, for $t \leq 0.57$, the QoI increases with $\beta^E_{20}$ due to the low value of the probability $\Theta$; thus, a higher value of $\beta^E_{20}$ results in a higher QoI. For $t > 0.57$, the QoI decreases with $\beta^E_{20}$ since, in this time duration, $\Theta$ increases at a faster rate, as shown in Fig. \ref{thetabeta}. Also, Fig. \ref{qoIbeta} shows the improvement in the QoI compared to the baseline even when the nodes of degree $k=20$ are not able to identify the misinformation. In this case, the improvement in the QoI reaches up to $99\%$ when $\beta^E_{20}=0.1$ and at $t=0.9$ seconds. \begin{figure}[t]{ \centering \includegraphics[width=9 cm,height=5.3cm,angle=0]{acceptancedelta2.pdf} \caption{Time evolution of the acceptance probability of nodes with degree $20$ for different values of $\delta_{20}$.}\label{alphadelta} }\vspace{-1 cm} \end{figure} Fig. \ref{alphadelta} shows the MFE probability of accepting information by IoBT nodes with degree $20$ over time when the delay $\delta_{20}$ is $0.3$, $0.5$, and $0.9$. When $\delta_{20}=0.3$ and $0.5$, the acceptance probability is zero for $t \leq 0.3$ and $t \leq 0.56$, respectively. Then, the acceptance probability increases with time until it reaches $0.1$ and $0.1846$, respectively. When $\delta_{20}=0.9$, the acceptance probability is zero at $t=0$. Then, it increases with time until it reaches $0.3$ at $t=0.9$. Fig. 
\ref{alphadelta} shows that, at the MFE, the acceptance probability increases with the delay $\delta_{20}$ in order to prevent the QoI from deteriorating due to the age of information. \begin{figure}[t]{ \centering \includegraphics[width=9 cm,height=5.3cm,angle=0]{Thetadelta2.pdf} \caption{Time evolution of $\Theta(t)$ of nodes with degree $20$ for different values of $\delta_{20}$.}\label{thetadelta} }\vspace{-0.5 cm} \end{figure} \begin{figure}[t]{ \centering \includegraphics[width=9 cm,height=5.3cm,angle=0]{QoIdelta2.pdf} \caption{Time evolution of the QoI of nodes with degree $20$ for different values of $\delta_{20}$.}\label{qoIdelta} }\vspace{-0.4 cm} \end{figure} Fig. \ref{thetadelta} shows the probability $\Theta$ of a link pointing to an infected node versus time for $k=20$ and when the delay $\delta_{20}$ is $0.3$, $0.5$, and $0.9$. For the three considered values of $\delta_{20}$, Fig. \ref{thetadelta} shows that the probability $\Theta$ increases with time. When $\delta_{20}=0.3$ and $t=0$, the probability $\Theta$ is $0$. Then, it increases with time until it reaches $0.0085$ at $t=0.9$ seconds. When $\delta_{20}=0.5$, the probability $\Theta$ increases with time until it reaches $0.01$ at $t=0.9$. Similarly, when $\delta_{20}=0.9$, the probability $\Theta$ reaches $0.015$ at $t=0.9$. Also, as shown in Fig. \ref{thetadelta}, the probability $\Theta$ increases with an increase in $\delta_{20}$. The increase in $\Theta$ is due to the fact that the acceptance probability increases with $\delta_{20}$, as shown in Fig. \ref{alphadelta}. For the baseline, the probability $\Theta$ is not affected by changes in $\delta_{20}$ since the nodes do not enter the \emph{E} or \emph{L} states. Thus, $\Theta(t)$ is the same as the one shown in Fig. \ref{theta}. Thus, Fig. 
\ref{thetadelta} confirms that the proposed MFE approach can achieve a significant decrease in the probability $\Theta$ compared to the baseline, reaching up to $99\%$ when $\delta_{20}=0.3$ and at $t=0.9$ seconds. \vspace{-0.1 cm} Fig. \ref{qoIdelta} shows how the QoI resulting from the MFE and the baseline varies over time, for $k=20$ and for $\delta_{20}=0.3$, $0.5$, and $0.9$. When $\delta_{20}=0.3$ and for the MFE, the QoI is $10$ at $t=0$. Then, as time increases, the QoI decreases until it reaches $3.64$ at $t=0.9$ seconds. When $\delta_{20}=0.5$ and $0.9$, the evolution of the QoI is similar to the case when $\delta_{20}=0.3$, and the minimum value of the QoI is $2.15$ and $0.6$, respectively, at $t=0.9$. Using the baseline, the QoI is not affected by $\delta_{20}$. Hence, as demonstrated earlier in Figs. \ref{qoI} and \ref{qoIbeta}, the QoI decreases with time until it reaches $-17$ at $t=0.9$. Thus, Fig. \ref{qoIdelta} shows that the QoI deteriorates with an increase in the information processing delay, due to the increase in the probability $\Theta$ shown in Fig. \ref{thetadelta}. Nonetheless, the MFE maintains a significant gain in the QoI compared to the baseline even with high information processing delays. In particular, the MFE achieves a 1.2-fold increase in the quality of information when $\delta_{20}=0.3$ and $t=0.9$ compared to the baseline. 
\begin{figure}[t]{ \centering \includegraphics[width=9 cm,height=5.3cm,angle=0]{thetasim.pdf} \caption{Time evolution of the probability $\Theta(t)$ for the IoBT mean-field game and the finite IoBT game.}\label{thetasim} }\vspace{-1 cm} \end{figure} \begin{figure}[t]{ \centering \includegraphics[width=9 cm,height=5.3cm,angle=0]{QoIsim.pdf} \caption{Time evolution of the QoI for the IoBT mean-field game and the finite IoBT game.}\label{qoIsim} } \end{figure} \vspace{-0.1 cm} Next, for the considered simulation values, we consider the finite IoBT game with $N=10,000$ nodes and compute, using Algorithm 1 tailored to the finite game, the values of the probability $\Theta$ and the QoI at equilibrium, shown in Figs. \ref{thetasim} and \ref{qoIsim}. Figs. \ref{thetasim} and \ref{qoIsim} show that the values of $\Theta$ and the QoI of the finite IoBT game coincide with those of the mean-field game, which confirms the convergence of the finite game to the mean-field game for large $N$. \vspace{-0.4 cm} \section{Conclusion} In this paper, we have considered the problem of misinformation propagation in an IoBT in which the nodes seek to determine the optimal probability of accepting information. We have formulated the problem as a finite state mean-field game with multiclass agents. We have proposed an algorithm based on the forward-backward sweep method to find the mean-field equilibrium. We have analyzed the finite IoBT game and derived the conditions for convergence of the finite IoBT game to the mean-field game as the number of nodes tends to infinity. Our results have shown that our proposed scheme can achieve a $1.2$-fold increase in the QoI compared to the baseline when the nodes are transmitting. Further, our proposed scheme can reduce the proportion of infected nodes by $99\%$ compared to the baseline. \vspace{-0.3 cm}
\section{Introduction} \label{intro} The Sivers function is a transverse momentum dependent parton distribution function (TMD) which encodes the correlation between the azimuthal anisotropy in the transverse momentum distribution of an unpolarised parton and the spin of its parent hadron~\cite{Sivers:1989cc,Sivers:1990fh}, $\Delta^N f_{a/p^\uparrow}(x,\mathbf{k}_\perp)\equiv\hat f_{a/p^\uparrow}(x,\mathbf{k}_\perp)-\hat f_{a/p^\downarrow}(x,\mathbf{k}_\perp)$. In collisions of transversely polarised nucleons off unpolarised nucleons (or leptons), this anisotropy can lead to an azimuthal anisotropy in the distribution of the inclusive final state, i.e., a single-spin asymmetry (SSA). The SSA for an inclusive process $A^\uparrow B\to C+X$ is defined as \begin{equation} A_N=\frac{d\sigma^\uparrow-d\sigma^\downarrow}{d\sigma^\uparrow+d\sigma^\downarrow}, \end{equation} where $d\sigma^\uparrow$ and $d\sigma^\downarrow$ represent the cross-sections for scattering of a transversely polarised hadron A off an unpolarised hadron (or lepton) B, with A polarised upwards and downwards, respectively, with respect to the production plane. One of the two main theoretical approaches used to discuss these asymmetries is based on factorisation in terms of a hard part and transverse momentum dependent parton distribution functions and fragmentation functions. While TMD factorisation has only been formally established for two-scale processes, a lot of work has been done on a TMD description of single hard-scale processes under the assumption of factorisation, in what is generally referred to as the generalised parton model (GPM) approach~\cite{DAlesio:2004eso,DAlesio:2007bjf}. In this work, we study the low-virtuality leptoproduction ($Q^2\approx0$) of open charm as a possible probe of the poorly understood gluon Sivers function (GSF), adopting the GPM framework. 
At leading order (LO), the production of open charm happens only via the photon-gluon fusion (PGF) process, making the detection of an SSA in this process a direct indication of a non-zero GSF. In Section 2, we present the parametrisation of the TMDs that we have used. In Section 3, we present the expressions for the SSA in $p^\uparrow l \to D+X$ as well as the results. \section{Formalism and parametrisation of the TMDs} \label{sec:1} The denominator and numerator of the asymmetry (Eq. 1) are given by \begin{eqnarray*} d\sigma ^\uparrow + d\sigma ^\downarrow &=& \frac{E_D \, d\sigma^{p^\uparrow l \to DX}} {d^{3} \mbox{\boldmath $p$}_D} + \frac{E_D \, d\sigma^{p^\downarrow l \to DX}} {d^{3} \mbox{\boldmath $p$}_D} = \> 2\int dx_g \, dx_\gamma \, dz \, d^2 \mathbf{k}_{\perp g} \, d^2 \mathbf{k}_{\perp \gamma} \, d^3 \mathbf{k}_{D} \, \delta (\mathbf{k}_{D} \cdot \hat{\mbox{\boldmath $p$}}_c) \, \> \\ && \hspace*{-2.0cm} \times ~ {\mathcal C}(x_g,x_\gamma,z,\mathbf{k}_D)~f_{g/p}(x_g,\mathbf{k}_{\perp g}) \> f_{\gamma/l}(x_\gamma, \mathbf{k}_{\perp \gamma}) ~ \frac{d \hat{\sigma}^{g\gamma \to c \bar c}} {d\hat t}D_{D/c}(z,\mathbf{k}_D) ~\delta (\hat s +\hat t +\hat u - 2m_c^2) \label{denominator} \end{eqnarray*} and \begin{eqnarray*} d\sigma ^\uparrow - d\sigma ^\downarrow &=& \frac{E_D \, d\sigma^{p^\uparrow l \to DX}} {d^{3} \mbox{\boldmath $p$}_D} - \frac{E_D \, d\sigma^{p^\downarrow l\to DX}} {d^{3} \mbox{\boldmath $p$}_D} = \> \int dx_g \, dx_\gamma \, dz \, d^2 \mathbf{k}_{\perp g} \, d^2 \mathbf{k}_{\perp \gamma} \, d^3 \mathbf{k}_{D} \, \delta (\mathbf{k}_{D} \cdot \hat{\mbox{\boldmath $p$}}_c) \, \> \\ && \hspace*{-2.0cm} \times~ {\mathcal C}(x_g,x_\gamma,z,\mathbf{k}_D)~ \Delta ^N f_{g/p^\uparrow}(x_g,\mathbf{k}_{\perp g}) \> f_{\gamma/l}(x_\gamma, \mathbf{k}_{\perp \gamma}) ~ \frac{d \hat{\sigma}^{g\gamma \to c \bar c}} {d\hat t} \> D_{D/c}(z,\mathbf{k}_D)~\delta (\hat s +\hat t +\hat u - 2m_c^2). 
\label{numerator} \end{eqnarray*} For the unpolarised TMD PDF we use the standard Gaussian form: \begin{equation} f_{g/p}(x,k_\perp;Q)=f_{g/p}(x,Q)\frac{1}{\pi\langle k_\perp^2\rangle}e^{-k_\perp^2/\langle k_\perp^2\rangle}, \end{equation} with $\langle k_\perp^2\rangle=0.25$ GeV$^2$. For the density of quasi-real photons in a lepton, we use a similar Gaussian form, with the Weizs\"acker-Williams distribution for the collinear part and a Gaussian $k_\perp$-spread of width $\langle k_{\perp \gamma}^2\rangle=0.1$ GeV$^2$. We also take the transverse-momentum dependence of the FF to be Gaussian, with a width $\langle k_{\perp D}^2\rangle=0.25$ GeV$^2$. For the Sivers function, we use the parametrisation~\cite{DAlesio:2015fwo} \begin{eqnarray*} \Delta^N f_{g/p^\uparrow}(x,k_\perp;Q)=2\mathcal{N}_{g}(x)f_{g/p}(x,Q)~ \frac{\sqrt{2e}}{\pi} \sqrt\frac{1 - \rho}{\rho} {k_\perp} \frac{e^{- k_{\perp}^2 / \rho \langle k_{\perp}^2 \rangle}}{{\langle k_{\perp}^2 \rangle}^{3/2}}, \end{eqnarray*} where $0<\rho<1$. Here, $\mathcal{N}_g(x)$ parametrises the $x$-dependence of the GSF and is generally written as \begin{equation} \mathcal{N}_g(x)=N_g x^{\alpha_g}(1-x)^{\beta_g}\frac{(\alpha_g+\beta_g)^{\alpha_g+\beta_g}}{\alpha_g^{\alpha_g} \beta_g^{\beta_g}}. \end{equation} The requirement that the Sivers function satisfy the positivity bound $|\Delta^Nf_{g/p^\uparrow}(x,\mathbf{k}_\perp)|/2f_{g/p}(x,\mathbf{k}_\perp)\leq1$ $\>\forall \>x, \mathbf{k}_\perp$ implies $|\mathcal{N}_g(x)|<1$. In this work, in order to demonstrate the efficacy of the suggested probe, we explore two choices for the gluon Sivers function: \begin{enumerate} \item the Sivers function with the positivity bound saturated, viz., $\mathcal{N}_g(x)=1$ and $\rho=2/3$; \item the SIDIS1 and SIDIS2 extractions of the gluon Sivers function from Ref.~\cite{DAlesio:2015fwo}, which were obtained using data on mid-rapidity pion production measured by the PHENIX experiment at RHIC~\cite{Adare:2013ekj}. 
\end{enumerate} The first choice, which we call the `saturated' Sivers function, gives an upper bound on the magnitude of the asymmetry for a fixed width $\langle k^2_\perp\rangle$ and $\rho$, and for a given choice of unpolarised gluon density. The parameter $\rho$ is set to $2/3$ in order to maximize the first $k_\perp$-moment of the Sivers function, following Ref.~\cite{DAlesio:2010sag}. The SIDIS1 and SIDIS2 GSFs from Ref.~\cite{DAlesio:2015fwo} are the first (and so far, only) available extractions of the GSF in a GPM framework. They were obtained by fitting to the PHENIX data on $A_N$ for inclusive pion production in the midrapidity region at RHIC. In that analysis, quark Sivers functions (QSFs) extracted from semi-inclusive deep inelastic scattering data were used to account for the quark contribution to the asymmetry $A_N$. The two GSF extractions differ in the choice of QSFs, as well as in that of the fragmentation functions adopted in the fitting process. As a result, they show very different $x$-dependencies, with SIDIS1 being larger in the moderate-$x$ region and SIDIS2 being larger in the low-$x$ region. The fact that these widely different choices for the GSF are consistent with the same data on $A_N$ underscores the utility of the process proposed by us for the determination of the GSF. The values of the parameters of the two GSF fits are given in Table~\ref{SIDIS-gluon-fits}. 
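As a standalone numerical illustration (plain Python, not from the reference; the grid size is arbitrary), note that the combinatorial factor in $\mathcal{N}_g(x)$ normalises the peak of $x^{\alpha_g}(1-x)^{\beta_g}$, located at $x=\alpha_g/(\alpha_g+\beta_g)$, to unity, so that $\max_x|\mathcal{N}_g(x)|=|N_g|$ and the positivity bound is respected whenever $|N_g|\leq1$. With the fit parameters of the two extractions:

```python
def n_g(x, ng, a, b):
    # x-dependence of the GSF; the final factor normalises the peak of
    # x**a * (1 - x)**b, located at x = a / (a + b), to unity, so that
    # the maximum of |n_g(x)| over x equals |ng|.
    norm = (a + b) ** (a + b) / (a ** a * b ** b)
    return ng * x ** a * (1 - x) ** b * norm

xs = [i / 100000 for i in range(1, 100000)]  # grid on (0, 1)
for label, ng, a, b in [("SIDIS1", 0.65, 2.8, 2.8), ("SIDIS2", 0.05, 0.8, 1.4)]:
    peak = max(n_g(x, ng, a, b) for x in xs)
    print(label, round(peak, 4), "peak at x =", round(a / (a + b), 3))
```

The peaks come out as $0.65$ (SIDIS1, at $x=0.5$) and $0.05$ (SIDIS2, at $x\approx0.364$), i.e., equal to $N_g$ and well within the positivity bound.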
\begin{table*}[t] \centering \begin{tabular}{|l|l|l|l|l|l|l|} \hline SIDIS1 & \multicolumn{2}{l|}{$N_g=0.65$} & $\alpha_g=2.8$ & $\beta_g=2.8$ & $\rho=0.687$ & \multirow{2}{*}{$\langle k^2_\perp\rangle=0.25$ GeV$^2$} \\ \cline{1-6} SIDIS2 & \multicolumn{2}{l|}{$N_g=0.05$} & $\alpha_g=0.8$ & $\beta_g=1.4$ & $\rho=0.576$ & \\ \cline{1-7} \end{tabular} \caption{Parameters of the GSF fits from Ref.~\cite{DAlesio:2015fwo}.} \label{SIDIS-gluon-fits} \end{table*} \section{Results} \begin{figure*}[t] \begin{center} \vspace*{-1cm} \includegraphics[width=0.8\linewidth]{EIC_sidis_combined.pdf} \vspace*{-0.5cm} \caption{SSA from GSF fits of Ref.~\cite{DAlesio:2015fwo} at EIC as a function of $x_F$ (at fixed $P_T$, left panel) and $P_T$ (at fixed $\eta$, right panel). Using MRST2001LO PDF for collinear gluon density. Figure from Ref.~\cite{Godbole:2017fab}.} \label{EICsidisAN} \end{center} \vspace*{-0.5cm} \end{figure*} In Fig.~\ref{EICsidisAN}, we show the asymmetries obtained using the SIDIS1 and SIDIS2 fits~\cite{DAlesio:2015fwo}. Since the fits were obtained using the MRST2001LO PDFs~\cite{Martin:2001es} for the collinear densities, we use the same for consistency. Both fits give asymmetries much smaller than allowed by the positivity bound. Further, the SSAs for SIDIS1 and SIDIS2 differ from each other substantially and thus offer discrimination between the two GSF extractions. While we have not shown the plots here, we find that the probe is able to discriminate between the two fits at COMPASS kinematics as well~\cite{Godbole:2017fab}. \begin{figure*}[t] \begin{center} \vspace*{-1cm} \includegraphics[width=0.8\linewidth]{satAN_pdf_variation_EIC_only.pdf} \vspace*{-0.5cm} \caption{Variation of results for the saturated GSF for different choices of unpolarised gluon densities. We consider the MRST2001LO (green, long-dashed), CTEQ6L (purple, short-dashed) and MSTW2008LO (red, dotted) gluon distributions. 
} \label{EICsatAN} \end{center} \vspace*{-0.7cm} \end{figure*} In Fig.~\ref{EICsatAN}, we show estimates for the maximum value of the magnitude of the asymmetry $|A_N|$ at the Electron-Ion Collider (EIC), calculated using the saturated gluon Sivers function. In the case of the saturated GSF, the $x$-dependence is determined only by the choice of unpolarised gluon densities. Therefore, in order to demonstrate the effects that uncertainties in the gluon densities might have on the probe, we present results for three different choices of leading-order (LO) unpolarised PDFs: MRST2001LO, CTEQ6L~\cite{Pumplin:2002vw} and MSTW2008LO~\cite{Martin:2009iq}. We find that the results are somewhat affected by the choice of PDF set, with the estimate for the saturated asymmetry varying by up to 6\% between CTEQ6L and MSTW2008LO. In general, with the large centre-of-mass energy of the EIC, the features of $|A^\text{max}_N|$ are similar to those observed in calculations for $pp$ collisions at RHIC~\cite{Anselmino:2004nk,Godbole:2016tvq}, especially the suppression of the asymmetry in the backward hemisphere ($x_F<0$). Since the experiments detect $D$-mesons through the muons produced in their decay, it is interesting to ask how much -- if any -- of the SSA present at the level of the $D$-mesons is transmitted to the detected muons. This has the advantage that the asymmetry measurement would not carry the additional errors due to $D$-meson reconstruction. The results for the SSA in the kinematics of the decay muons are presented in Fig.~\ref{EICsatANdecay} as a function of ${x_F}_\mu=2{P_L}_\mu/\sqrt{s}$, with the muon transverse momentum ${P_T}_\mu=1.5$ GeV. It appears that an azimuthal anisotropy in $D$ production would be retained significantly in the decay muons. Peak values of the muon asymmetry $A^\mu_N$ for all three choices of the GSF are close to those obtained for the meson. 
\begin{figure}[h] \begin{center} \vspace*{-1cm} \includegraphics[width=0.5\linewidth]{EIC_muon.pdf} \vspace*{-0.5cm} \caption{SSA for decay-muons. Using MRST2001LO PDF for collinear gluon density. Figure from Ref.~\cite{Godbole:2017fab}.} \label{EICsatANdecay} \end{center} \vspace*{-0.5cm} \end{figure} \vspace*{-1.5cm} \section{Conclusions} We find that an asymmetry of upto around 22\% is allowed by the saturated gluon Sivers function at the EIC. Further, asymmetry is significantly retained in the distribution of decay muons. We also find that the probe is able to discriminate well between the two available phenomenological fits of the gluon Sivers function, both of which were obtained using the same data on $A_N$ measured at PHENIX. Thus we see that this process offers a good probe of the gluon Sivers function and can be of help in a global extraction of it. \begin{acknowledgements} R.M.G. wishes to acknowledge support from the Department of Science and Technology, India under Grant No. SR/S2/JCB-64/2007 under the J.C. Bose Fellowship scheme. A.M would like to thank the Department of Science and Technology, India for financial support under Grant No.EMR/2014/0000486. A.M would also like to thank the Theory Division, CERN, Switzerland for their kind hospitality. \end{acknowledgements} \vspace*{-0.5cm} \bibliographystyle{spphys} \section{Introduction} \label{intro} Sivers function is a transverse momentum dependent parton distribution function (TMD) which encodes the correlation between the azimuthal anisotropy in the transverse momentum distribution of an unpolarised parton and the spin of its parent hadron~\cite{Sivers:1989cc,Sivers:1990fh}, $\Delta^N f_{a/p^\uparrow}(x,\mathbf{k}_\perp)\equiv\hat f_{a/p^\uparrow}(x,\mathbf{k}_\perp)-\hat f_{a/p^\downarrow}(x,\mathbf{k}_\perp)$. 
In collisions of transversely polarised nucleons off unpolarised nucleons (or leptons), this anisotropy can lead to an azimuthal anisotropy in the distribution of the inclusive final state, i.e a single-spin asymmety (SSA). The SSA for an inclusive process $A^\uparrow B\to C+X$ is defined as \begin{equation} A_N=\frac{d\sigma^\uparrow-d\sigma^\downarrow}{d\sigma^\uparrow+d\sigma^\downarrow} \end{equation} where $d\sigma^\uparrow$ and $d\sigma^\downarrow$ represent the cross-section for scattering of a transversely polarized hadron A off an unpolarized hadron (or lepton) B with A being polarised upwards and downwards respectively, with respect to the production plane. One of the two main theoretical approaches to discuss these asymmetries is based on factorisation in terms of a hard-part and transverse momentum dependent parton distribution functions and fragmentation functions. While TMD factorisation has only been formally established for two-scale processes, a lot of work has been done on a TMD description of single hard-scale processes under the assumption of factorisation, in what is generally referred to as the generalised parton model (GPM) approach~\cite{DAlesio:2004eso,DAlesio:2007bjf}. In this work, we study the low-virtuality leptoproduction ($Q^2\approx0$) of open-charm as a possible probe of the poorly understood gluon Sivers function (GSF), adopting the GPM framework. At the leading-order (LO) of this process, the production of open-charm happens only via the photon-gluon fusion (PGF) process, making detection of a SSA in this process a direct indication of a non-zero GSF. In Section 2, we present the parametrisation of the TMDs that we have used. In Section 3, we present the expressions for the SSA in $p^\uparrow l \to D+X$ as well as the results. \section{Formalism and parametrisation of the TMDs} \label{sec:1} The denominator and numerator of the asymmetry (Eq. 
2) are given by, \begin{eqnarray*} d\sigma ^\uparrow + d\sigma ^\downarrow &=& \frac{E_D \, d\sigma^{p^\uparrow l \to DX}} {d^{3} \mbox{\boldmath $p$}_D} + \frac{E_D \, d\sigma^{p^\downarrow l \to DX}} {d^{3} \mbox{\boldmath $p$}_D} = \> 2\int dx_g \, dx_\gamma \, dz \, d^2 \mathbf{k}_{\perp g} \, d^2 \mathbf{k}_{\perp \gamma} \, d^3 \mathbf{k}_{D} \, \delta (\mathbf{k}_{D} \cdot \hat{\mbox{\boldmath $p$}}_c) \, \> \\ && \hspace*{-2.0cm} \times ~ {\mathcal C}(x_g,x_\gamma,z,\mathbf{k}_D)~f_{g/p}(x_g,\mathbf{k}_{\perp g}) \> f_{\gamma/l}(x_\gamma, \mathbf{k}_{\perp \gamma}) ~ \frac{d \hat{\sigma}^{g\gamma \to c \bar c}} {d\hat t}D_{D/c}(z,\mathbf{k}_D) ~\delta (\hat s +\hat t +\hat u - 2m_c^2) \label{denominator} \end{eqnarray*} and \begin{eqnarray*} d\sigma ^\uparrow - d\sigma ^\downarrow &=& \frac{E_D \, d\sigma^{p^\uparrow l \to DX}} {d^{3} \mbox{\boldmath $p$}_D} - \frac{E_D \, d\sigma^{p^\downarrow l\to DX}} {d^{3} \mbox{\boldmath $p$}_D} = \> \int dx_g \, dx_\gamma \, dz \, d^2 \mathbf{k}_{\perp g} \, d^2 \mathbf{k}_{\perp \gamma} \, d^3 \mathbf{k}_{D} \, \delta (\mathbf{k}_{D} \cdot \hat{\mbox{\boldmath $p$}}_c) \, \> \\ && \hspace*{-2.0cm} \times~ {\mathcal C}(x_g,x_\gamma,z,\mathbf{k}_D)~ \Delta ^N f_{g/p^\uparrow}(x_g,\mathbf{k}_{\perp g}) \> f_{\gamma/l}(x_\gamma, \mathbf{k}_{\perp \gamma}) ~ \frac{d \hat{\sigma}^{g\gamma \to c \bar c}} {d\hat t} \> D_{D/c}(z,\mathbf{k}_D)~\delta (\hat s +\hat t +\hat u - 2m_c^2). \label{numerator} \end{eqnarray*} For the unpolarised TMD PDF we use the standard Gaussian form:\, \begin{equation} f_{g/p}(x,k_\perp;Q)=f_{g/p}(x,Q)\frac{1}{\pi\langle k_\perp^2\rangle}e^{-k_\perp^2/\langle k_\perp^2\rangle}, \end{equation} with $\langle k_\perp^2\rangle=0.25$ GeV$^2$. For the density of quasi-real photons in a lepton, we use a similar Gaussian form as well, with the Weiszacker-Williams distribution for the collinear part and a Gaussian $k_\perp$-spread of width $\langle k_{\perp \gamma}^2\rangle=0.1$ GeV$^2$. 
We also take the transverse-momentum-dependence of the FF to be Gaussian with a width $\langle k_{\perp D}^2\rangle=0.25$ GeV$^2$. For the Sivers function, we use the parametrization ~\cite{DAlesio:2015fwo} \begin{eqnarray*} \Delta^N f_{g/p^\uparrow}(x,k_\perp;Q)=2\mathcal{N}_{g}(x)f_{g/p}(x,Q)~ \frac{\sqrt{2e}}{\pi} \sqrt\frac{1 - \rho}{\rho} {k_\perp} \frac{e^{- k_{\perp}^2 / \rho \langle k_{\perp}^2 \rangle}}{{\langle k_{\perp}^2 \rangle}^{3/2}}, \end{eqnarray*} where $0<\rho<1$. $\mathcal{N}_g(x)$ here parametrises the $x$-dependence of the GSF and is generally written as \begin{equation} \mathcal{N}_g(x)=N_g x^{\alpha_g}(1-x)^{\beta_g}\frac{(\alpha_g+\beta_g)^{\alpha_g+\beta_g}}{\alpha_g^{\alpha_g} \beta_g^{\beta_g}}. \end{equation} The requirement that the Sivers function satisfy the positivity bound $|\Delta^Nf_{g/p^\uparrow}(x,\mathbf{k}_\perp)|/2f_{g/p}(x,\mathbf{k}_\perp)\leq1$ $\>\forall \>x, \mathbf{k}_\perp$, implies $|\mathcal{N}_g(x)|<1$. In this work, in order to demonstrate the efficacy of the suggested probe, we explore two choices for the gluon Sivers function: \begin{enumerate} \item the Sivers function with the positivity bound saturated, viz., $\mathcal{N}_g(x)=1$ and $\rho=2/3$. \item the SIDIS1 and SIDIS2 extractions of the gluon Sivers function from Ref.~\cite{DAlesio:2015fwo}, which have been obtained using data on mid-rapidity pion production measured by the PHENIX experiment at RHIC~\cite{Adare:2013ekj}. \end{enumerate} The first choice, which we call the `saturated' Sivers function, would give an upper bound on the magnitude of the asymmetry for a fixed width $\langle k^2_\perp\rangle$ and $\rho$, and for a given choice of unpolarised gluon density. The parameter $\rho$ is set to $2/3$ in order to maximize the first $k_\perp$-moment of the Sivers function, following Ref.~\cite{DAlesio:2010sag}. 
The SIDIS1 and SIDIS2 GSFs from Ref.~\cite{DAlesio:2015fwo} are the first (and so far, only) available extractions of the GSF in a GPM framework. They were obtained by fitting to the PHENIX data on $A_N$ for inclusive pion production in the midrapidity region at RHIC. In their analysis, they used quark Sivers functions extracted from semi-inclusive deep inelastic scattering data to account for the quark contribution to the asymmetry, $A_N$. The two GSF extractions differ in the choice of QSFs, as well as that of the fragmentation functions adopted in the fitting process. As a result, they show very different $x$-dependencies, with SIDIS1 being larger in the moderate-$x$ region and SIDIS2 being larger in the low-$x$ region. The fact that these widely different choices for the GSF are consisten with the same data on $A_N$ underscores the utility of the process proposed by us for determination of the GSF. The values of the parameters of the two GSF fits are given in Table I. \begin{table*}[t] \centering \begin{tabular}{|l|l|l|l|l|l|l|} \hline SIDIS1 & \multicolumn{2}{l|}{$N_g=0.65$} & $\alpha_g=2.8$ & $\beta_g=2.8$ & $\rho=0.687$ & \multirow{2}{*}{$\langle k^2_\perp\rangle=0.25$ GeV$^2$} \\ \cline{1-6} SIDIS2 & \multicolumn{2}{l|}{$N_g=0.05$} & $\alpha_g=0.8$ & $\beta_g=1.4$ & $\rho=0.576$ & \\ \cline{1-7} \end{tabular} \caption{Parameters of the GSF fits from Ref.~\cite{DAlesio:2015fwo}.} \label{SIDIS-gluon-fits} \end{table*} \section{Results} \begin{figure*}[t] \begin{center} \vspace*{-1cm} \includegraphics[width=0.8\linewidth]{EIC_sidis_combined.pdf} \vspace*{-0.5cm} \caption{SSA from GSF fits of Ref.~\cite{DAlesio:2015fwo} at EIC as a function of $x_F$ (at fixed $P_T$, left panel) and $P_T$ (at fixed $\eta$, right panel). Using MRST2001LO PDF for collinear gluon density. 
Figure from Ref.~\cite{Godbole:2017fab}.} \label{EICsidisAN} \end{center} \vspace*{-0.5cm} \end{figure*} Fig.~\ref{EICsidisAN} we show the asymmetries obtained using the SIDIS1 and SIDIS2 fits~\cite{DAlesio:2015fwo}. Since the fits were obtained using MRST2001LO PDFs~\cite{Martin:2001es} for the collinear densities, to be consistent, we use the same. Both fits give asymmetries much smaller than allowed by the positivity bound. Further the SSAs for SIDIS1 and SIDIS2 differ from each other substantially and thus offer discrimination between the two GSF extractions. While we have not shown the plots here, we find that the probe is able to discriminate between the two fits at COMPASS kinematics as well~\cite{Godbole:2017fab}. \begin{figure*}[t] \begin{center} \vspace*{-1cm} \includegraphics[width=0.8\linewidth]{satAN_pdf_variation_EIC_only.pdf} \vspace*{-0.5cm} \caption{Variation of results for saturated GSF for different choices unpolarised gluon densities. We consider the MRST2001LO (green, long-dashed), CTEQ6L (purple, short-dashed) and MSTW2008LO (red, dotted) gluon distributions. } \label{EICsatAN} \end{center} \vspace*{-0.7cm} \end{figure*} In Fig.~\ref{EICsatAN}, we show estimates for the maximum value of the magnitude of the asymmetry $|A_N|$ at the Electron-Ion Collider (EIC), calculated by using the saturated gluon Sivers function. In case of the saturated GSF, the $x$-dependence is determined only by the choice of unpolarised gluon densities. Therefore, in order to demonstrate effects that uncertainties in the gluon densities might have on the probe, we have presented results for three different choices of leading-order (LO) unpolarised PDFs, MRST2001LO, CTEQ6L~\cite{Pumplin:2002vw} and MSTW2008LO~\cite{Martin:2009iq}. We find that the results are somewhat affected by the choice of PDF set, with the estimate for the saturated asymmetry varying by up to 6\% between CTEQ6L and MSTW2008. 
In general, with the large centre of mass energy of the EIC, the general features of $|A^\text{max}_N|$ are similar to those that had been observed in calculations for $pp$ collisions at RHIC~\cite{Anselmino:2004nk,Godbole:2016tvq}, especially the azimuthal suppression of the asymmetry in the backward hemisphere ($x_F<0$). Since the experiments detect $D$-mesons through the muons produced in the decay, it is an interesting question to ask, how much -- if any -- of the SSA present at the level of the $D$-mesons is transmitted at the level of the detected muons? This has the advantage that the asymmetry measurement will not have the additional errors due to $D$-meson reconstruction. The results for the SSA in the kinematics of the decay muons are presented in Fig.~\ref{EICsatANdecay} as a function of ${x_F}_\mu=2{P_L}_\mu/\sqrt{s}$, with the muon transverse momentum ${P_T}_\mu=1.5$ GeV. It appears that an azimuthal anisotropy in $D$ production would be retained significantly in the decay-muons. Peak values of the muon $A^\mu_N$ for all three choices of the GSF are close to those obtained for the meson. \begin{figure}[h] \begin{center} \vspace*{-1cm} \includegraphics[width=0.5\linewidth]{EIC_muon.pdf} \vspace*{-0.5cm} \caption{SSA for decay-muons. Using MRST2001LO PDF for collinear gluon density. Figure from Ref.~\cite{Godbole:2017fab}.} \label{EICsatANdecay} \end{center} \vspace*{-0.5cm} \end{figure} \vspace*{-1.5cm} \section{Conclusions} We find that an asymmetry of upto around 22\% is allowed by the saturated gluon Sivers function at the EIC. Further, asymmetry is significantly retained in the distribution of decay muons. We also find that the probe is able to discriminate well between the two available phenomenological fits of the gluon Sivers function, both of which were obtained using the same data on $A_N$ measured at PHENIX. Thus we see that this process offers a good probe of the gluon Sivers function and can be of help in a global extraction of it. 
\begin{acknowledgements} R.M.G. wishes to acknowledge support from the Department of Science and Technology, India under Grant No. SR/S2/JCB-64/2007 under the J.C. Bose Fellowship scheme. A.M. would like to thank the Department of Science and Technology, India for financial support under Grant No. EMR/2014/0000486. A.M. would also like to thank the Theory Division, CERN, Switzerland for their kind hospitality. \end{acknowledgements} \vspace*{-0.5cm} \bibliographystyle{spphys}
\section{Conclusion} \label{sec:conc} While spectrum based bug localization is an extensively studied research area, studying buggy code in association with code naturalness (and thus unnaturalness) is relatively new. In this work, we introduced the notion of code entropy, as captured by a statistical language model, into \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace to make overall bug localization more robust, and proposed an effective way of integrating entropy with \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace suspiciousness scores. We implemented our concept in a prototype called EnSpec\xspace. Our experimental results with EnSpec\xspace show that code entropy is positively correlated with the buggy lines executed by the failing test cases. Our results also demonstrate that EnSpec\xspace, when configured to use both entropy and \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace, outperforms the configuration that uses only the various \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace scores as features. EnSpec\xspace can also be leveraged for detecting bugs in a cross-project setting for relatively new projects, where the project's bug database and evolutionary history are not rich enough. Our future directions include leveraging EnSpec\xspace to repair buggy program lines more effectively, and improving EnSpec\xspace further by incorporating a language model that captures not only the syntactic structure but also the semantic structure of code. \section{Introduction} \label{sec:intro} Localizing bugs is an important, time consuming, and expensive process, especially for a system at scale. Automatic bug localization can play an important role in saving developers' time in debugging, and thus may help developers fix more bugs in a limited time.
Using various statistical and program analysis approaches, these bug localization techniques automatically identify suspicious code elements that are highly likely to contain bugs. Developers then manually examine these suspicious code elements to pinpoint the bugs. Existing bug localization techniques can be broadly classified into two categories: i) test coverage-based dynamic approaches~\cite{jones2005tarantula,abreu2009practical,zeller2002simplifying,cleve2005locating,liblit2005scalable,liu2005sober}, and ii) pattern-based~\cite{copeland2005pmd,findbugs, Engler:2001:SOSP,Chelf:2002:Paste} or information retrieval-based (IR) static approaches~\cite{rao2011retrieval,zhou2012should,saha2013improving,ye2014learning}. Dynamic approaches first run all the test cases, and then analyze the program statements covered by passing\xspace and failing\xspace test cases. For example, spectrum based bug localization (\hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace), a popular dynamic bug localization technique, prioritizes for debugging the program elements that are executed more by failing\xspace test cases than by passing\xspace test cases. In contrast, static approaches do not run any test cases. Rather, they search for previously known buggy patterns in source code or look for buggy files based on bug reports. Both of these bug localization approaches have their own sets of advantages and disadvantages. For instance, static methods are often imprecise or inaccurate. On the other hand, the accuracy of dynamic approaches is highly dependent on the quality (code coverage, \hbox{\emph{etc.}}\xspace) of the test suite. In real-world projects, test suites often lack the code coverage needed to locate bugs efficiently. Therefore, in many cases, developers do not get the full benefit of bug localization techniques~\cite{johnson2013don} and have to rely significantly on manual effort and prior experience.
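As a concrete illustration of how \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace scores a program element, the well-known Tarantula formula~\cite{jones2005tarantula} can be sketched in a few lines. This is an illustrative sketch, not the cited implementation; $e_f$ ($e_p$) denotes the number of failing (passing) tests that execute a line and $n_f$ ($n_p$) the number that do not:

```python
def tarantula(e_f, n_f, e_p, n_p):
    """Tarantula suspiciousness: the failing-execution rate of a line
    relative to the sum of its failing and passing execution rates.
    Higher values mean the line is executed proportionally more often
    by failing tests, and is thus more suspicious."""
    total_f = e_f + n_f  # total number of failing tests
    total_p = e_p + n_p  # total number of passing tests
    fail_rate = e_f / total_f if total_f else 0.0
    pass_rate = e_p / total_p if total_p else 0.0
    denom = fail_rate + pass_rate
    return fail_rate / denom if denom else 0.0
```

A line covered by every failing test and no passing test gets the maximal score 1.0; a line covered only by passing tests gets 0.0.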
Besides the static and dynamic properties of a program, it has also been observed that how developers write code matters for code quality~\cite{hellendoorn2015will}. Real-world software developed by regular programmers tends to be highly repetitive and predictable~\cite{gabel2010study}. Hindle \hbox{\emph{et al.}}\xspace were the first to show that such repetitiveness can be successfully captured by a statistical language model~\cite{hindle2012naturalness}. They called this property the {\it naturalness} of code and measured it by the standard information-theoretic metric {\it entropy}. The less entropy a code snippet exhibits, the more natural the code is. Inspired by this phenomenon, Ray \hbox{\emph{et al.}}\xspace~\cite{ray2016naturalness} investigated whether there is any correlation between buggy code and entropy. They observed that buggy code is in general less natural, \hbox{\emph{i.e.}}\xspace it has higher entropy than non-buggy code. In this paper, our key intuition is that, since high-entropy code tends to be buggy~\cite{ray2016naturalness,campbell2014syntax,wang2016bugram}, code entropy can be an effective orthogonal source of information to \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace to improve the overall accuracy of bug localization. This notion is plausible since, from a set of suspicious code elements reported by a standard \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace technique, experienced programmers often intuitively identify the actual bugs because buggy code elements are usually slightly more unnatural than the rest of the corpus. If entropy can improve \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace, it would be particularly useful when a test suite is not strong enough to discriminate buggy lines or when the suspiciousness scores of many lines are the same.
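To make the entropy intuition concrete, here is a toy sketch of how a language model assigns per-line entropy to code. This uses a simple bigram model with add-one smoothing, which is far simpler than the cache-based \$-gram models used in the cited work; it is an assumption-laden illustration, not the paper's model:

```python
import math
from collections import Counter

def train_bigram(token_lines):
    """Count context (unigram) and bigram occurrences over a
    training corpus of tokenised lines."""
    unigrams, bigrams = Counter(), Counter()
    vocab = set()
    for toks in token_lines:
        toks = ["<s>"] + toks  # sentence-start marker
        vocab.update(toks)
        for a, b in zip(toks, toks[1:]):
            unigrams[a] += 1
            bigrams[(a, b)] += 1
    return unigrams, bigrams, len(vocab)

def line_entropy(toks, unigrams, bigrams, vocab_size):
    """Average negative log2-probability (entropy, in bits per token)
    of a line under the bigram model with add-one smoothing.
    Unfamiliar ('unnatural') token sequences get higher entropy."""
    toks = ["<s>"] + toks
    bits = 0.0
    for a, b in zip(toks, toks[1:]):
        p = (bigrams[(a, b)] + 1) / (unigrams[a] + vocab_size)
        bits += -math.log2(p)
    return bits / (len(toks) - 1)
```

Training on a repetitive corpus and scoring a familiar line versus an unseen one shows the familiar line receiving lower entropy, which is exactly the naturalness signal exploited above.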
Furthermore, to realize this hybrid approach, we only need source code (no other external meta-source), which is always available to the developers. To this end, we introduce EnSpec\xspace, which automatically calculates the entropy of each program line and combines it with state-of-the-art \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace scores using a machine learning technique to return a ranked list of suspicious lines for investigation. Here, we studied bug localization at line granularity to ensure maximum benefit to the developers, although locating bugs at method and file granularity is also possible. We performed an extensive evaluation of EnSpec\xspace on two popular publicly available bug datasets: Defects4J\xspace~\cite{just2014defects4j} and ManyBugs\xspace~\cite{le2015manybugs}, written in Java and C respectively. In total, we studied more than 500 bugs (3,715 buggy lines) and around 4M LOC from 10 projects. Overall, our findings corroborate our hypothesis that entropy can indeed improve the bug localization capability of spectrum based techniques. We further evaluate EnSpec\xspace on both C and Java projects, showing that the tool is not dependent on the programming language. In particular, our results show that: \begin{itemize} \item Entropy score, as captured by statistical language models, can significantly improve the bug localization capability of standard \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace techniques. \item Entropy score also boosts \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace in cross-project bug localization settings. \end{itemize} In summary, we make the following contributions in this paper: \begin{enumerate} \item We introduce the notion of entropy in spectrum based bug localization.
\item We present EnSpec\xspace, which effectively combines the entropy score with the suspiciousness score of spectrum based bug localization using a machine learning technique. \item We provide an extensive evaluation of EnSpec\xspace on two publicly available benchmarks that demonstrates the effectiveness of our approach. \end{enumerate} \section*{Acknowledgment} \balance \bibliographystyle{abbrv} \section{Study Method} \label{sec:method} In this section, we describe the dataset we studied and the analysis methods we used to answer our research questions. \subsection{Study Subject} \label{subsec:study_subject} We used two publicly available bug datasets: Defects4J\xspace~\cite{just2014defects4j} and ManyBugs\xspace~\cite{le2015manybugs} (see Table~\ref{tab:ssubj}). The Defects4J\xspace dataset contains $5$ open source projects with $321$K lines of code, $357$ reproducible bugs, and $20$K tests in total. All the Defects4J\xspace projects are written in Java. We also studied $5$ projects from the ManyBugs\xspace benchmark dataset~\cite{le2015manybugs}. These are medium to large open source C projects, with a total of $4459$K lines of code, $160$ reproducible bugs, and $9262$ test cases. In both datasets, each bug is associated with a buggy and its corresponding fixed program version. There are some failing test cases that reproduce the bugs in the buggy versions, while after the fixes all the test cases pass. The datasets also provide APIs for instrumentation and recording program execution traces. \input{tables/dataset} \subsection{Data Collection} \label{subsec:dc} Here we describe how we identified the buggy lines in a buggy program version. {\em Annotating buggy program statements.} We compared each buggy program version with its corresponding fixed version; the lines that are deleted or modified in the buggy version are annotated as buggy program statements.
To get the differences between two program versions, we used the Defects4J\xspace APIs for the Defects4J\xspace dataset. We notice that some bugs in the Defects4J\xspace dataset are caused by errors of omission\textemdash tests fail due to missing features/functionalities in the buggy version. Fixing these bugs does not require deleting or modifying program statements in the buggy version, but adding new lines in the fixed version. In such cases, we cannot trivially annotate any of the existing lines in the buggy version as `buggy.' We filter out such bugs from further consideration, as our goal in this work is to locate existing buggy lines, as opposed to detecting errors of omission. Table~\ref{tab:ssubj} shows the total number of buggy lines per Defects4J\xspace project. This is a very small number to work with, so we searched the projects' evolution histories for more bugs. We adopted a strategy similar to that described in Ray \hbox{\emph{et al.}}\xspace~\cite{ray2016naturalness}. First, we searched each project repository for bug-fix commits based on special keywords in commit messages. As described in Mockus~\hbox{\emph{et al.}}\xspace~\cite{mockus2000identifying}, we searched for commit messages containing error related keywords such as `error', `bug', `fix', `issue', `mistake', `flaw', and `defect'. Lines changed or deleted in those commits are marked as buggy. Then the commits that introduced these buggy lines are located, and using `git blame' with the `--reverse' option, those lines are traced to the nearest snapshot after the originating commit. From the project evolution history, we thus found additional bugs to train our model on. The total number of buggy lines found is shown in Table~\ref{tbl:buggy_lines}.
Another approach, popular in the literature, injects faults into the software system for testing \cite{gunneflo1989evaluation,segall1995fiat}. However, our model relies heavily on the naturalness of code, and artificially injected code would violate the hypothesis that code is natural. So we did not adopt that approach. \subsubsection{Data Filtering} \label{subsec:data_filtering} Since we are dealing with fault localization, we know there are faults because some of the test cases failed. Some lines in a project are not touched by any of the failing test cases, so we exclude those lines from consideration. As a result, some of the buggy lines from the project evolution are excluded: although we have sufficient failing test cases for all of the bugs given by Defects4J\xspace, we do not have sufficient test cases to identify all the buggy lines mined from the project evolution. The number of buggy lines after this filtering is shown in Table~\ref{tbl:after_filtering}. \begin{table}[h] \centering \begin{tabular}{c|c|c} Project name & \#Total Lines & \#Buggy Lines\\ \hline \hline jfree-\textbf{Chart} & 2311 & 30\\ Commons-\textbf{Lang} & 29525 & 249\\ \textbf{Closure}-compiler & 4504 & 134\\ Commons-\textbf{Math} & 2793 & 577\\ Joda-\textbf{Time} & 588 & 46\\ \hline Total & 39721 & 1035\\ \end{tabular} \caption{Number of lines considered and number of buggy lines} \label{tbl:after_filtering} \end{table} \subsection{Methodology} \label{subsec:method} \subsubsection{Program Spectra Generation} \label{subsec:program_spectra} First, we extracted the test classes and test methods from the project source code. Then, we ran the test cases and recorded the execution trace of each passing and failing test case, along with its passing status. From there, for each line, we calculated the 4 values $(e_p, e_f, n_p, n_f)$ described in Section \ref{sec:intro}.
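The extraction of the four per-line values from recorded coverage can be sketched as follows. This is an illustrative reconstruction under a simple coverage-matrix representation, not the actual tooling, which uses the datasets' instrumentation APIs:

```python
def spectra_counts(coverage, passed):
    """Compute per-line spectrum counts from test coverage.

    coverage[t][l] is 1 if test t executed line l, else 0;
    passed[t] is True for a passing test, False for a failing one.
    Returns, for each line, the tuple (e_p, e_f, n_p, n_f):
    executed / not executed by passing / failing tests."""
    n_lines = len(coverage[0])
    total_p = sum(passed)              # number of passing tests
    total_f = len(passed) - total_p    # number of failing tests
    counts = []
    for l in range(n_lines):
        e_p = sum(1 for t, row in enumerate(coverage) if row[l] and passed[t])
        e_f = sum(1 for t, row in enumerate(coverage) if row[l] and not passed[t])
        counts.append((e_p, e_f, total_p - e_p, total_f - e_f))
    return counts
```

These four counts per line are exactly the inputs the suspiciousness metrics consume.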
With these 4 values, we generated the 25 state-of-the-art ranking metrics described in Xuan \hbox{\emph{et al.}}\xspace~\cite{xuan2014learning}. We used these 25 ranking metrics as our features; they are referred to as spectrum based features in the context of this paper. \subsubsection{Entropy Generation} \label{sebsec:entropy_gen} For generating entropy we adopted the \textit{\$-gram} model proposed by Tu \hbox{\emph{et al.}}\xspace~\cite{tu2014localness}. For every line in the source code, we calculated $3$ entropy values: \begin{itemize} \item \textbf{Forward Entropy $(E_f)$}: This entropy is calculated by parsing the file from beginning to end, \textit{i.e.} considering the token sequence as it is in the source file. \item \textbf{Backward Entropy $(E_b)$}: This entropy is calculated by parsing the file in reverse order, \textit{i.e.} considering the token sequence from the end to the beginning of the file. \item \textbf{Average Entropy $(E_a)$}: This entropy value is calculated as the average of $E_f$ and $E_b$. \end{itemize} We used these 3 values as features for our algorithm; in this paper, we refer to them as \textit{entropy based features}. Inspired by the approach described in Li~\hbox{\emph{et al.}}\xspace~\cite{li2007mcrank}, we annotated all lines in our training dataset with a relevance score of being buggy. Intuitively, this score should be higher for the actual buggy lines. For clarity, we used one relevance score for all buggy lines and one for all non-buggy lines. Let $R_b$ be the relevance score of buggy lines, and $R_g$ the relevance score of good lines. We chose $R_b$ and $R_g$ such that $R_b > R_g$. The impact of assigning different scores to individual buggy and/or non-buggy lines is beyond the scope of this research.
We represent each line $l_i$ in the training code corpus as a tuple $\langle Sp_i, En_i, R_i \rangle$, where $Sp_i$ are the spectrum based suspiciousness scores, $En_i$ are the entropy scores, and $R_i \in \{R_b, R_g\}$ is the relevance score. Then, we pass the whole corpus to a learning machine, which learns the probability distribution $P(R_i|Sp_i,En_i)$ of the relevance scores given the feature values. In the test phase, for each line $L_i$ in the test corpus, we compute a suspiciousness score based on Equation~\ref{eqn:proba_suspiciousness_score} \begin{equation} \label{eqn:proba_suspiciousness_score} \begin{split} Susp_{L_i} = f(R_b) &* P(R_b|Sp_{L_i},En_{L_i})\\ &+ f(R_g) * P(R_g|Sp_{L_i},En_{L_i}) \end{split} \end{equation} Here, $f$ is a monotonically increasing function. To keep things simple, we used the identity function, \textit{i.e.} $f(R_b) = R_b$ and $f(R_g) = R_g$, which is of course monotonically increasing. This choice transforms Equation~\ref{eqn:proba_suspiciousness_score} into \begin{equation} \label{eqn:expected_value_suspiciousness} \begin{split} Susp_{L_i} &= \sum_{R_i \in \{R_b, R_g\}}{R_i * P(R_i|Sp_{L_i},En_{L_i})} \\ &= \mathbb{E}[R_i|Sp_{L_i}, En_{L_i}] \end{split} \end{equation} This is the expected relevance score for a line being buggy. We evaluated our approach based on this score. \subsection{Evaluation Metric} \label{sunsec:evaluation_metric} We evaluate bug localization capability using the Cost Effectiveness (CE) metric~\cite{arisholm2010systematic} and the area under the CE curve (AUCEC), which measures the percentage of bugs found as a function of the percentage of program elements inspected. \subsection{Random Forest Algorithm} \label{subsec:randomforest} Random Forest (RF) is an ensemble learning technique developed by Breiman~\cite{breiman2001random} based on a combination of a large set of decision trees. Each tree is trained by selecting a random set of variables and a random sample from the training dataset (\textit{i.e.}, the different spectrum scores and/or entropies from the \$-gram model).
In addition to learning the discrimination function for the classification task, RF also learns the class-conditional probability distribution of the training dataset. Additionally, the RF algorithm learns the importance of different features for discrimination. In this work, we leveraged the feature importance (\textit{i.e.}, the importance of the different \textit{suspiciousness} scores and \textit{entropy} based features) and the class-conditional probability distribution learned by the RF algorithm. As described by Provost \hbox{\emph{et al.}}\xspace~\cite{provost2003tree}, we used the class-conditional probability estimate of a test program line as its suspiciousness score. \subsection{Training} \label{subsec:training} With the pre-processing described in Sections~\ref{subsec:program_spectra}, \ref{subsec:bug_annot}, and~\ref{subsec:data_filtering} done, we train our model using the Random Forest algorithm, learning both the relative importance of the features for classifying lines as buggy or non-buggy and the class-conditional probability distribution of the training data. \section{Proposed Approach} \label{sec:method} \begin{figure*}[!htpb] \centering \includegraphics[width=\textwidth]{schema.png} \caption{\textbf{\small EnSpec\xspace Workflow}} \label{fig:schema} \end{figure*} \vspace{15pt} In this section, we describe our tool, EnSpec\xspace. An overview of our approach is shown in Figure~\ref{fig:schema}. The goal of EnSpec\xspace is to localize bugs using a hybrid bug localization technique: a combination of dynamic spectrum based bug localization (\hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace) and static natural language model based defect prediction (LM\xspace). EnSpec\xspace takes two code corpora as input\textemdash a training set and a testing set.
Next, EnSpec\xspace works in the following four steps: {\em Step-1.} EnSpec\xspace collects an entropy score per code element based on a language model for each input project. {\em Step-2.} For each project version in the training and test corpus, EnSpec\xspace records test coverage and collects various \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace based suspiciousness scores per code element. {\em Step-3.} In this step, EnSpec\xspace learns from the training data how the suspiciousness scores and entropy collected in the above two steps relate to buggy/non-buggy classes, and learns feature weights. In Section~\ref{subsec:dc}, we describe the data collection phase in more detail: how we annotate each code element as buggy/non-buggy. {\em Step-4.} Based on the learned feature weights, EnSpec\xspace assigns a suspiciousness score to each code element in the test corpus. The suspiciousness score depicts the probability of a code element being buggy. Finally, the output of EnSpec\xspace is a ranked list of code elements in decreasing order of suspiciousness score. In theory, EnSpec\xspace should work on code elements at any granularity\textemdash line, method, file, etc. In this paper, we use EnSpec\xspace to localize bugs at line granularity. In the following sections, we describe these steps in detail. \subsection*{Step-1:~Generating entropy using LM\xspace} For generating entropy per program line, we adopted the \$gram\xspace language model proposed by Tu \hbox{\emph{et al.}}\xspace~\cite{tu2014localness}. For every line in the source code, we calculated the following three entropy values: 1.~{Forward Entropy $(E_f)$}: The entropy value of a token is calculated based on the probability of seeing the token given its prefix token sequence. We calculate this entropy by parsing the file from beginning to end, \hbox{\emph{i.e.}}\xspace considering the token sequences as they appear in the source file.
2.~{Backward Entropy $(E_b)$}: The entropy value of a token is calculated based on the probability of seeing the token given its suffix token sequence. We calculate this entropy by parsing the file in reverse order, \hbox{\emph{i.e.}}\xspace from end to beginning. 3.~{Average Entropy $(E_a)$}: This entropy value is calculated as the average of $E_f$ and $E_b$. We use these three values as our LM\xspace based features. We further normalized these values based on their AST type, as shown in Equation~\ref{eqZscore}. We refer to these three normalized entropy values as \textit{entropy related features} in the rest of the paper. \input{rankings} \subsection*{Step-2:~Extracting suspiciousness scores using \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace techniques.} For all the input project versions, we first instrument the source code to record program execution traces, or coverage data. Both the Defects4J\xspace and ManyBugs\xspace datasets provide APIs for collecting such coverage data. Then, to collect the execution traces, we extract the test classes and test methods from the project source code and run the test cases. We record the execution traces for each test case with its passing/failing status. These test spectra characterize the program's behavior across executions by summarizing how frequently each source code line was executed by passing and failing tests. Now for each line we calculate the 4 values $(e_p, e_f, n_p, n_f)$, as described in Section~\ref{sec:prelim}. Next, using these 4 values, we generate 25 suspiciousness scores, as described by Xuan \hbox{\emph{et al.}}\xspace~\cite{xuan2014learning}. We use these 25 scores (see Table~\ref{tbl:rank_metrics}) as our \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace features. The next two steps implement the training and testing phases of a classifier based on buggy and non-buggy program lines.
We adapted Li~\hbox{\emph{et al.}}\xspace's learning to rank algorithm for this purpose~\cite{li2007mcrank}. \subsection*{Step-3:~Training Phase} Given a set of buggy and non-buggy lines, EnSpec\xspace learns the relation between the \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace and entropy related features and the bugginess of program lines. First, all lines in the training dataset are annotated with a relevance score of bugginess: $R_b$ for each buggy line, and $R_g$ for each non-buggy line, where $R_b > R_g$. Thus, each line $l$ in the training code corpus is represented as a tuple, $\langle Sp_l, En_l, R_l \rangle$, where $Sp_l$ is a set of \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace features, $En_l$ is a set of entropy related features, and $R_l \in \{R_b, R_g\}$ is the bug-relevance score. Then, we pass the whole corpus to a machine learner, which learns the probability distribution $P(R_l|Sp_l,En_l)$ of the relevance scores given the feature values. \subsection*{Step-4:~Testing Phase} In the testing phase, for each line $l$ in the test corpus, we compute a suspiciousness score ($susp$) based on Equation~\ref{eqn:proba_suspiciousness_score}. \begin{equation} \label{eqn:proba_suspiciousness_score} \begin{split} susp_{l} = f(R_b) &* P(R_b|Sp_{l},En_{l})\\ &+ f(R_g) * P(R_g|Sp_{l},En_{l}) \end{split} \end{equation} Here, $f$ is a monotonically increasing function. To keep things simple, we used the identity function, \hbox{\emph{i.e.}}\xspace, $f(R_b) = R_b$ and $f(R_g) = R_g$, which is monotonic as well.
This transforms Equation~\ref{eqn:proba_suspiciousness_score} into: \begin{equation} \label{eqn:expected_value_suspiciousness} \begin{split} susp_{l} &= \sum_{R_l \in \{R_b, R_g\}}{R_l * P(R_l|Sp_{l},En_{l})} \\ &= \mathbb{E}[R_l|Sp_{l}, En_{l}] \end{split} \end{equation} We use an ensemble~\cite{dietterich2002ensemble} of $M$ different models trained on randomly sampled subsets of the original dataset. Each model $M_k$ computes a suspiciousness score $susp_{l}^{k}$, based on the expected relevance score of Equation~\ref{eqn:expected_value_suspiciousness}. Our final hybrid suspiciousness score is calculated by Equation~\ref{eqn:ensemble}: \begin{equation} \label{eqn:ensemble} HySusp_{l} = \frac{1}{M}\sum_{k=1}^{M}{susp_{l}^k} \end{equation} EnSpec\xspace outputs a list of source code lines ranked in decreasing order of the hybrid suspiciousness score (HySusp); the line with the highest suspiciousness tops the list. \section{Experimental Setup} \label{sec:experiment} In this section, we describe how we set up our experiment to evaluate EnSpec\xspace. In particular, we describe the subject systems, how we collect data, the evaluation metric, and the research questions. \input{dataset} \subsection{Study Subject} \label{subsec:study_subject} We used two publicly available bug datasets: Defects4J\xspace~\cite{just2014defects4j} and ManyBugs\xspace~\cite{le2015manybugs} (see Table~\ref{tab:ssubj}). The Defects4J\xspace dataset contains $5$ open source projects with $321$K lines of code, $357$ reproducible bugs, and $20$K tests in total. All the Defects4J\xspace projects are written in Java. We also studied $5$ projects from the ManyBugs\xspace benchmark dataset~\cite{le2015manybugs}. These are medium to large open source C projects, with a total of $4459$K lines of code, $160$ reproducible bugs, and $9262$ test cases. In both datasets, each bug is associated with a buggy and its corresponding fixed program version.
There are some failing test cases that reproduce the bugs in the buggy versions, while after the fixes all the test cases pass. The datasets also provide APIs for instrumentation and recording program execution traces. \subsection{Data Collection} \label{subsec:dc} Here we describe how we identified the buggy program statements. We followed the two techniques described below: \rom{1}.~\textit{Buggy lines retrieved from the original dataset.} We compared each buggy program version with its corresponding fixed version; the lines that are deleted or modified in the buggy version are annotated as buggy program statements. To get the differences between two program versions, we used the Defects4J\xspace APIs for the Defects4J\xspace dataset. For every snapshot, the ManyBugs\xspace dataset provides a {\tt diff} of the files changed in the fix revision; we directly used those {\tt diff} files. We notice that some bugs are caused by errors of omission\textemdash tests fail due to missing features/functionalities in the buggy version. Fixing these bugs does not require deleting or modifying program statements in the buggy version, but adding new lines in the fixed version. In such cases, we cannot trivially annotate any of the existing lines in the buggy version as `buggy.' We filter out such bugs from further consideration, as our goal in this work is to locate existing buggy lines, as opposed to detecting errors of omission. Table~\ref{tab:ssubj} shows the total number of buggy lines per project in the original dataset. \begin{figure}[!htpb] \centering \includegraphics[width=0.9\columnwidth]{evobug} \caption{{\small{\bf Evolutionary bug data retrieval: vertical dashed lines correspond to buggy project versions under investigation, each triangle represents a project commit (c0$\ldots$c3).
For every bug-fix commit (\hbox{\emph{e.g.,}}\xspace c2), as identified by keyword search, we first git-blame the buggy lines to identify the original bug-introducing commit (\hbox{\emph{e.g.,}}\xspace c0) and then map them to the corresponding project versions.}(Adopted from Ray \hbox{\emph{et al.}}\xspace~\cite{ray2016naturalness})}} \label{fig:evolution} \end{figure} \rom{2}.~\textit{Buggy lines retrieved from project evolution.} As shown in Table~\ref{tab:ssubj}, the percentage of buggy lines \hbox{\emph{w.r.t.}}\xspace the total lines of code in the Defects4J\xspace dataset is very small (0.07\%). Such unbalanced data of buggy vs.~non-buggy lines poses a threat to the efficiency of any classification problem~\cite{japkowicz2002class,chawla2004editorial}. To reduce such imbalance and thus increase the effectiveness of \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace, previous work injects artificial bugs into the software system under test~\cite{gunneflo1989evaluation,segall1995fiat}. However, since the motivation of this research comes from the finding that bugs are {\em unnatural}~\cite{ray2016naturalness}, artificially introducing bugs as our predecessors did could undermine our conclusions. To overcome this problem, we injected bugs that developers actually introduced in the source code\textemdash we collected such bugs from the projects' evolutionary history. We adopted a strategy similar to that described in Ray \hbox{\emph{et al.}}\xspace~\cite{ray2016naturalness}. First, we identified bug-fix commits by searching a project's commit logs using bug-fix related keywords: `error', `bug', `fix', `issue', `mistake', `flaw', and `defect', following the methodology described by Mockus~\hbox{\emph{et al.}}\xspace~\cite{mockus2000identifying}. Lines modified or deleted in those bug-fix commits are marked as buggy. Then we identified the original commits that introduced these bugs using the SZZ algorithm~\cite{sliwerski2005changes}.
Next, we used {\tt git blame} with the {\tt --reverse} option to locate those buggy lines in the buggy program version under investigation. Figure~\ref{fig:evolution} illustrates this process. Using this method, we found $1541$ additional buggy lines across all the versions of the five Defects4J\xspace projects. Thus, in total, we studied $1761$ buggy lines in this dataset, as shown in Table~\ref{tab:ssubj}. \subsection{Evaluation Metric} \label{sunsec:evaluation_metric} \begin{figure} \includegraphics[width=0.9\linewidth]{aucec} \caption{{\small{\bf Example Cost Effectiveness (CE) curve for bug localization. The baseline shows CE while inspecting random program elements. At optimal CE, 100\% of bugs are found when all the buggy program elements are inspected first. A real CE falls somewhere in between baseline and optimal.}}} \label{aucec_explain} \end{figure} To evaluate the bug localization capability of EnSpec\xspace, we adopted a commonly used non-parametric measure from the literature: the Cost Effectiveness (CE) metric, originally proposed by Arisholm~\hbox{\emph{et al.}}\xspace~\cite{arisholm2010systematic} to investigate defects in telecom software. The main assumption behind this metric is that the cost of bug localization is the inspection effort\textemdash the number of Program Elements (PEs) that need to be inspected before locating the bug\textemdash and the payoff is the percentage of bugs found. A Cost-Effectiveness (CE) curve shows the percentage of inspected PEs on the {\it x-axis} and the percentage of bugs found on the {\it y-axis}. If bugs are uniformly distributed in the source code, by randomly inspecting $n\%$ of the source PEs, one might expect to find $n\%$ of the bugs. The corresponding CE curve will be a straight line with $slope~\text{=}~1$ (see Figure~\ref{aucec_explain}). This is our baseline. A ranking metric assigns a suspiciousness score to each PE for bug localization. PEs are then inspected in decreasing order of suspiciousness score.
An optimal ranking metric would assign scores such that all buggy PEs are ranked before the non-buggy PEs, so inspecting the top-ranked PEs would cover 100\% of the bugs. For any real bug localization technique, \hbox{\emph{e.g.,}}\xspace Tarantula~\cite{jones2005tarantula} or Multric~\cite{xuan2014learning}, the CE curve falls between the baseline and the optimal. AUCEC, the area under the CE curve, is a quantitative measure of how well a model finds bugs. The baseline (random) AUCEC is 0.5, while the optimal AUCEC is very close to 1.00. AUCEC is a non-parametric metric similar to the ROC curve and does not depend on the bug distribution~\cite{ray2016naturalness}; it has thus become standard in the bug-localization literature~\cite{d2010extensive}. A higher AUCEC signifies a higher prioritization of buggy lines over non-buggy lines and hence a better model. For example, under the optimal CE, not all program elements have to be inspected to find all the bugs; thus, the optimal exhibits a higher AUCEC than the baseline (see Figure~\ref{aucec_explain}). This intuition is the basis of our evaluation metric. \subsection{Implementation of EnSpec\xspace} \label{subsec:impl} We implemented EnSpec\xspace's learning-to-rank technique, as described in Steps 3 \& 4 of Section~\ref{sec:method}, using two approaches. First, we used the RankBoost~\cite{freund2003efficient} algorithm. RankBoost uses a boosting ensemble technique to learn model parameters for ranking training instances. At each iteration, it learns one weak ranker and then re-weights the training data. At the final stage, it combines all the weak rankers to assign scores to the test data. This algorithm was used in the past by Xuan \hbox{\emph{et al.}}\xspace~\cite{xuan2014learning} to implement \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace at the method level.
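The AUCEC computation described in the evaluation metric above can be sketched as follows; the function name and the use of trapezoidal integration are illustrative assumptions:

```python
def aucec(suspiciousness, is_buggy):
    """Area under the cost-effectiveness (CE) curve.

    suspiciousness: one score per program element (PE).
    is_buggy: parallel booleans marking the actual buggy PEs.
    """
    n = len(suspiciousness)
    total_bugs = sum(is_buggy)
    # Inspect PEs in decreasing order of suspiciousness.
    order = sorted(range(n), key=lambda i: -suspiciousness[i])
    found = 0
    ys = [0.0]  # fraction of bugs found after inspecting k elements
    for i in order:
        found += is_buggy[i]
        ys.append(found / total_bugs)
    xs = [k / n for k in range(n + 1)]  # fraction of PEs inspected
    # Trapezoidal integration of the CE curve.
    return sum((xs[k + 1] - xs[k]) * (ys[k] + ys[k + 1]) / 2 for k in range(n))
```

A ranking that places the single buggy element first yields an AUCEC near 1, while one that places it last yields an AUCEC near 0, matching the intuition of Figure~\ref{aucec_explain}.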
Though there are many competing approaches to implementing \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace, Xuan \hbox{\emph{et al.}}\xspace report the best results to date. Thus, we adapted their approach in EnSpec\xspace to locate bugs at line granularity. We used the RankBoost implementation of the standard {\bf {\it RankLib}}~\cite{ranklib} library for this purpose. There are two configurable parameters: $\beta$ (initial ranking metric) and $\gamma$ (number of neighbors). Following Xuan \hbox{\emph{et al.}}\xspace, we set these two parameters to Tarantula and 10, respectively. Table~\ref{tbl:compare_rankboost} shows the result. In the second approach, we used the Random Forest (RF) algorithm to implement the proposed learning-to-rank technique. Random Forest is an ensemble learning technique developed by Breiman~\cite{breiman2001random} based on a combination of a large set of decision trees. Each tree is trained by selecting a random set of features from a random data sample drawn from the training corpus. In our case, the algorithm therefore chooses some \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace and/or entropy-related features randomly in the training phase (Step 3). RF then learns the conditional probability distribution of the chosen features \hbox{\emph{w.r.t.}}\xspace the bugginess of each line in the training dataset. In addition, RF learns the importance of different features for discrimination. During training, the model learns $M$ decision trees and the corresponding probability distributions. In the testing phase, we obtain {\it suspiciousness} scores from each of the learned models and calculate the final {\it suspiciousness} score based on Equation~\ref{eqn:ensemble}. For the implementation, we used the standard Python {\tt scikit-learn} package~\cite{sklearn_api}. \input{comparison} We compare the performance of the above two approaches using the AUCEC$_{100}$ score.
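A minimal sketch of the final ranking step, assuming Equation~\ref{eqn:ensemble} averages the per-tree bugginess probabilities (the actual combination rule is the one defined by that equation, which may differ):

```python
def rank_lines(per_tree_scores):
    """Rank lines by a final suspiciousness score combined across trees.

    per_tree_scores: {line_id: [P(buggy) from each of the M learned trees]}.
    Plain averaging is an assumption standing in for the ensemble equation.
    """
    final = {line: sum(scores) / len(scores)
             for line, scores in per_tree_scores.items()}
    # Present lines in decreasing order of final suspiciousness.
    return sorted(final, key=final.get, reverse=True)
```

For instance, a line scored highly by most trees ends up at the top of the inspection list regardless of a single outlier tree.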
For comparison purposes, we only used \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace-related features (\hbox{\emph{i.e.}}\xspace we did not include entropy scores), since we first wanted to measure how the two approaches perform in a traditional \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace setting. Table~\ref{tbl:compare_rankboost} reports the result. For all of the studied projects except Wireshark, Random Forest performs better. Thus, we carried out the rest of our experiments using the Random Forest based implementation, since it gives the best \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace performance at the line level, even when compared against Xuan \hbox{\emph{et al.}}\xspace's state-of-the-art technique. \subsection{Research Questions} \label{sec:rq} To evaluate EnSpec\xspace, we investigate whether a good language model (LM\xspace) that captures the naturalness (and hence unnaturalness) of a code element can improve spectrum-based bug localization. Previously, Ray \hbox{\emph{et al.}}\xspace~\cite{ray2016naturalness} and Wang \hbox{\emph{et al.}}\xspace~\cite{wang2016bugram} demonstrated that the unnaturalness of code elements (measured in terms of entropy) correlates with bugginess. Thus, an LM\xspace can help in bug localization in a static setting. In contrast, \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace is a dynamic approach that relies on the fact that code elements covered by more failing test cases are more bug-prone. Therefore, to understand the effectiveness of EnSpec\xspace, we investigate whether the combination of the two can improve bug localization as a whole.
The LM\xspace-based bug-localization approach holds that more entropic code is more bug-prone, while \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace holds that code elements covered by more failing test cases are more prone to bugs. For the combined approach to work, the difference between the entropies of buggy and non-buggy lines should therefore be significant for the failing test spectra. Thus, to understand the potential of entropy, we start our investigation with the following research question: \RQ{rq1}{How is entropy associated with bugginess for different types of test spectra?} If the answer to this question is affirmative for the failing test spectra, entropy can be used along with \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace for bug prediction. For every code element, \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace provides a suspiciousness score, and the LM\xspace predicts its uncertainty in terms of entropy. Thus, one may expect that among the lines with higher suspiciousness scores, more entropic lines are even more likely to be buggy. We therefore investigate whether entropy can help improve the bug prediction capability of EnSpec\xspace over \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace. \RQ{rq2}{Can entropy improve \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace's bug-localization capability?} To build a good LM\xspace-based bug localization technique, we need a large code corpus with an adequate bug history; this is often challenging for smaller projects. A similar problem arises for history-based defect prediction models\textemdash for newer projects, enough history is usually not available to build a good model.
In such cases, researchers generally rely on the evolutionary history of other projects~\cite{zimmermann2009cross}. To mitigate the threat of using our proposed approach on smaller code bases, we leverage such a cross-project defect prediction strategy. We investigate whether a language model trained on different projects can still improve \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace's performance. \RQ{rq3}{What is the effect of entropy on \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace's bug localization capability in a cross-project setting?} \section{Motivating Example} \label{sec:motivation} In this section, we present a real-world example that motivated us to incorporate the {\it naturalness} property of code (\hbox{\emph{i.e.}}\xspace {\it entropy}-based features) in \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace to overcome one of its key limitations. The main limitation of testing-based bug\xspace localization approaches, such as \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace, is that the quality of their results highly depends on the quality of the test cases. If the passing test cases have low code coverage, an \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace tool may return a large number of program elements with high suspiciousness scores, most of which are false positives. However, generating an adequate test suite is incredibly difficult. Therefore, in many cases \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace performs poorly.
\begin{table}[h] \begin{tabular}{p{0.9\textwidth}} \scriptsize {\tt{--- /Closure/89/buggy/src/com/google/javascript/jscomp/}} \scriptsize {\tt{GlobalNamespace.java}}\\ \scriptsize {\tt{+++ /Closure/89/fix/src/com/google/javascript/jscomp/}} \scriptsize {\tt{GlobalNamespace.java}} \lstinputlisting[ language=java]{example.java}\\ \end{tabular} \caption{Effectiveness of entropy based features to improve \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace} \label{tbl:motivating_example} \end{table} \vspace{20pt} Table~\ref{tbl:motivating_example} presents a patch that fixed a bug in the Closure compiler\xspace (Defects4J bug\xspace ID: 89). The buggy line, marked in~\Red{red}, was never used in the existing corpus before. Hence, the line was unnatural to the LM\xspace, with a high entropy score of $7.36$. When the developer fixed the bug (see the~\dkgreen{green} line), the code became more natural, with a reduced entropy score of $1.15$. A traditional state-of-the-art \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace technique placed the buggy line at the 57$^{th}$ position, while EnSpec\xspace, using both {\it entropy}- and {\it spectrum}-based features, placed the line at the 12$^{th}$ position in the ranked list of suspicious lines. This shows that the entropy of code, as derived from an LM\xspace, can play an important role in improving the ranking of the actual buggy\xspace lines. \section{Preliminaries} \label{sec:prelim} In this section, we discuss the preliminaries and background of our work. \subsection{Spectrum-based Bug\xspace Localization} \label{subsec:spectrum} Given a buggy\xspace code base with at least one bug\xspace-reproducing test case, a spectrum-based bug\xspace localization technique (\hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace) ranks the code elements under investigation (e.g., files/classes, functions/methods, blocks, or statements) based on the execution traces of passing and failing test cases. Therefore, in this approach, the subject program is first instrumented at an appropriate granularity to collect the execution trace of each test case ({\it test spectra}). The basic intuition behind \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace is that the more a code element appears in failing traces (but not in passing traces), the more suspicious the element is.
\input{metric} More specifically, for a given program element $e$, \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace records how many test cases execute and do not execute $e$, and computes the following four metrics: the number of (i) passed tests ($e_p$) and (ii) failed tests ($e_f$) that executed $e$, and the number of (iii) passed tests ($n_p$) and (iv) failed tests ($n_f$) that did not execute $e$. A suspiciousness score is calculated as a function of these four metrics: $S = Func(e_p,e_f,n_p,n_f)$, as shown in Table~\ref{tab:spectra}. The table also presents two widely used suspiciousness measures: Tarantula and Ochiai. \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace ranks the program elements in decreasing order of suspiciousness and presents them to developers for further investigation to fix the bug~\cite{xuan2014learning}. These scores also help to repair programs automatically~\cite{le2012representations}. \subsection{Language Models} Although real-world software programs are often large and complex, the program constructs (such as tokens) are repetitive and thus exhibit useful, predictable statistical properties~\cite{Hindle:2012:ICSE, Raychev:2014:PLDI, Tu:2014:FSE, Franks:2015:ICSE}. These statistical properties of code resemble those of natural languages, and thus natural language models can be leveraged for software engineering tasks. \textbf{Cache based N-gram Model (\$gram\xspace):} Hindle \hbox{\emph{et al.}}\xspace introduced the n-gram model for software code~\cite{hindle2012naturalness}, which is essentially an extension of the n-gram language model used in natural language processing tasks based on the \textit{Markov independence assumption}~\cite{brown1992class_ngram}. If a sequence $s$ consists of $m$ tokens ($a_1 a_2 a_3 ...
a_m$), then according to the \textit{full Markov model}, the probability $p(s)$ of that sequence is given in Equation~\ref{eqMkvModel}: \begin{equation} \label{eqMkvModel} \small p(s) = p(a_1)p(a_2|a_1)p(a_3|a_1a_2) \cdots p(a_m|a_1a_2 \cdots a_{m-1}) \end{equation} The \textit{n-gram model} is a simplification of the full Markov model based on the assumption that every token depends only on the previous $n-1$ tokens, where $n$ is a model parameter. With $n=\infty$, the model converges to the full Markov model. Since actual probabilities are very difficult to obtain, researchers often use empirical probabilities estimated from the training data. The probability of a token or n-gram not seen in the corpus would be \textit{zero}, making the total probability of the sequence \textit{zero}. To overcome this problem, Hindle \hbox{\emph{et al.}}\xspace~\cite{hindle2012naturalness} also adopted smoothing techniques from the natural language processing literature. Tu \hbox{\emph{et al.}}\xspace~\cite{tu2014localness} further improved the above model based on the observation that source code tends to be highly localized, \hbox{\emph{i.e.}}\xspace particular token sequences may occur often within a single file or within particular classes or functions. They proposed the \$gram\xspace model, which introduces an additional cache\textemdash a list of n-grams curated from the local context\textemdash used in addition to the global n-gram model. They also defined the entropy of a code sequence $S$ under language model $M$ by Equation~\ref{eqLocalModel}: \begin{equation} \label{eqLocalModel} \small H_M(S) = -\frac{1}{N}\log_2p_M(S) = -\frac{1}{N}\sum\limits_{i=1}^{N}\log_2P(t_i|h) \end{equation} \textbf{Language model to predict buggy code:} Ray \hbox{\emph{et al.}}\xspace~\cite{ray2016naturalness} demonstrated a strong negative correlation between the naturalness of code and its bugginess.
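A minimal sketch of the per-token entropy in Equation~\ref{eqLocalModel}, using a plain add-one-smoothed bigram model instead of the full \$gram\xspace cache model (the model choice and function names are illustrative assumptions):

```python
import math
from collections import Counter

def train_bigram(corpus_tokens):
    """Empirical bigram and unigram counts from a training token stream."""
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    unigrams = Counter(corpus_tokens)
    vocab = len(set(corpus_tokens))
    return bigrams, unigrams, vocab

def entropy(seq, model):
    """Per-token cross entropy (bits) of seq under an add-one-smoothed bigram model."""
    bigrams, unigrams, vocab = model
    log_prob = 0.0
    for prev, tok in zip(seq, seq[1:]):
        # Add-one smoothing keeps unseen bigrams from having zero probability.
        p = (bigrams[(prev, tok)] + 1) / (unigrams[prev] + vocab)
        log_prob += math.log2(p)
    return -log_prob / (len(seq) - 1)
```

Trained on a repetitive corpus, a frequently seen token sequence receives a lower entropy than an unseen ("unnatural") one, which is the property the bug-prediction work exploits.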
When a simple \$gram\xspace model is trained on previous versions of a project's source code and applied to calculate the naturalness of code snippets, buggy code is shown to exhibit higher unnaturalness than non-buggy code. They introduced a syntax-sensitive entropy model to measure the naturalness of code. In their investigation, they found that some token types, such as packages, methods, and variable names, are less frequent and hence more entropic than others. They therefore normalized the entropy score into a Z-score using line-type information from the program's Abstract Syntax Tree (AST). The Z-score is defined as: \begin{equation} \label{eqZscore} \small \$gram+type\xspace = \frac{entropy_{line}-\mu_{type}}{SD_{type}} \end{equation} In Equation~\ref{eqZscore}, $\mu_{type}$ is the mean \$gram\xspace model entropy of a given line type, and $SD_{type}$ is the standard deviation for that line type. This Z-score gives the syntax-sensitive entropy. They reported that buggy lines of code are usually unnatural and highly entropic; with further investigation, they also found that when developers fixed those buggy lines, the entropy of the code decreased. In this work, we use the state-of-the-art \$gram\xspace model along with the syntax-sensitive entropy model. \section{Related Work} \label{sec:related} Automatic bug\xspace localization has been an active research area for over two decades. Existing techniques can be broadly classified into two categories: i) static and ii) dynamic approaches. {\bf Static approaches} primarily rely on the program source code. There are mainly two kinds of static approaches: a) program analysis based approaches and b) information retrieval (IR) based approaches. Program analysis based approaches detect bugs by identifying well-known buggy patterns that frequently occur in practice. Therefore, although these approaches are effective in preventing bugs by enforcing good programming practices, they generally cannot detect functional bugs.
{FindBugs}~\cite{ayewah2008using} is a popular example in this category. IR-based approaches, on the other hand, given a bug report, generally rank source code files based on the textual similarity between the source code and the bug report, so that potentially buggy files are ranked high in the resulting list. These approaches are generally fast but identify bugs at a coarse-grained level. BugLocator~\cite{zhou2012should} and BLUiR~\cite{saha2013improving} are examples in this category. A new line of work has recently started based on statistical modeling and machine learning. Wang \hbox{\emph{et al.}}\xspace~\cite{wang2016automatically} proposed a Deep Belief Network based approach to detect file-level defects. Wang \hbox{\emph{et al.}}\xspace~\cite{wang2016bugram} used an n-gram language model to generate a list of probable bugs. {\bf Dynamic approaches} generally rely on the execution traces of test cases. \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace is a dynamic fault localization technique that leverages program spectra\textemdash program paths executed by passing and failing test cases~\cite{reps1997use}\textemdash to compute a suspiciousness score for each program element. We described \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace in detail in Section~\ref{subsec:spectrum}. Several metrics have been proposed in the literature to calculate the suspiciousness score. For example, Jones \hbox{\emph{et al.}}\xspace presented Tarantula~\cite{jones2005tarantula}, based on the observation that program elements executed by failed test cases are more likely to be buggy\xspace than elements not executed by them (see Table~\ref{tab:spectra}). Jaccard and Ochiai are well-known variants of this approach, proposed by Abreu \hbox{\emph{et al.}}\xspace~\cite{abreu2007ochiai_and_jaccard}.
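In the $(e_p, e_f, n_p, n_f)$ notation of Section~\ref{subsec:spectrum}, the two classic measures can be sketched as follows using their textbook definitions; the zero-denominator guards are an implementation assumption:

```python
import math

def tarantula(e_p, e_f, n_p, n_f):
    """Tarantula: failing-coverage ratio relative to total coverage ratio."""
    fail_ratio = e_f / (e_f + n_f) if (e_f + n_f) else 0.0
    pass_ratio = e_p / (e_p + n_p) if (e_p + n_p) else 0.0
    denom = fail_ratio + pass_ratio
    return fail_ratio / denom if denom else 0.0

def ochiai(e_p, e_f, n_p, n_f):
    """Ochiai: e_f normalized by total failing tests and the element's coverage."""
    denom = math.sqrt((e_f + n_f) * (e_f + e_p))
    return e_f / denom if denom else 0.0
```

An element executed by every failing test and no passing test gets the maximum score 1.0 under both measures, matching the intuition that such elements are the most suspicious.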
Xie \hbox{\emph{et al.}}\xspace proposed five ranking metrics derived from theoretical analysis and four other metrics based on genetic algorithms~\cite{xie2013theoretical}. Later, Lucia \hbox{\emph{et al.}}\xspace conducted a comprehensive study of the different ranking metrics and showed that no ranking metric is unanimously best~\cite{lucia2014extended}. \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace approaches can identify bugs at a fine-grained level. Xuan \hbox{\emph{et al.}}\xspace~\cite{xuan2014learning} proposed an approach to combine multiple ranking metrics. They adopted a neighborhood-based strategy to reduce the imbalance between buggy and non-buggy program entities: the data are first sorted by an initial ranking metric, and then the $\beta$ non-faulty entities before and after each faulty entity are retained. After that, they applied state-of-the-art ``Learning to Rank'' algorithms to combine all 25 suspiciousness scores. Their dependence on an initial ranking metric might bias the result toward that metric. In contrast, to avoid bias toward any particular metric, we considered all the data and applied a standard random under-sampling~\cite{batista2004study} technique. Gong \hbox{\emph{et al.}}\xspace~\cite{gong2012interactive} proposed a feedback-based fault localization system, which uses user feedback to improve performance. Pytlik \hbox{\emph{et al.}}\xspace~\cite{pytlik2003automated} proposed a fault localization system based on likely invariants. Le \hbox{\emph{et al.}}\xspace~\cite{b2016learning} also proposed an approach similar to Pytlik \hbox{\emph{et al.}}\xspace's, with a larger invariant set; unlike our work, they experimented with method-level fault localization. Sahoo \hbox{\emph{et al.}}\xspace extended Pytlik \hbox{\emph{et al.}}\xspace's work; their work focuses on test case generation and also adopts backward slicing to reduce the number of program elements to be considered.
{\bf Multi-modal techniques} generally combine two or more models of bug\xspace localization to further improve accuracy. Le \hbox{\emph{et al.}}\xspace~\cite{le2015information} proposed a multi-modal technique for bug localization that combines IR-based and spectrum-based bug localization. Their technique needs three artifacts: i) a bug report, ii) the program source code, and iii) a set of test cases including at least one fault-reproducing test case. Their technique first ranks the source code methods based on the textual similarity between the bug report and the methods. Then, using program spectra, they rank the source code lines and also identify a list of suspicious words associated with the bug. Finally, they combine these scores using a probabilistic model trained on a set of previously fixed bugs. Based on an empirical evaluation on 157 real bugs from four software systems, their model outperforms a state-of-the-art IR-based bug localization technique, a state-of-the-art spectrum-based bug localization technique, and three state-of-the-art multi-modal feature location methods adapted for bug localization. The approach proposed in this paper is also multi-modal in nature. However, inspired by Ray \hbox{\emph{et al.}}\xspace's~\cite{ray2016naturalness} finding that buggy code is unnatural, and thus the entropy of buggy source code is naturally high, we combine source code entropy with program spectra instead of an IR-based textual similarity score to improve bug localization. To our knowledge, no one has leveraged the localness of code and test spectra together to locate faults. The advantage of our approach is that we do not need any bug report, which may not be available for development bugs. Therefore, our approach is complementary to Le \hbox{\emph{et al.}}\xspace's approach. \section{Results} \label{sec:result} \setcounter{RQCounter}{0} In this section, we answer the research questions introduced in Section~\ref{sec:rq}.
Our investigation starts with whether the buggy lines in the failing test spectra are more entropic than the non-buggy lines. Note that all the buggy and non-buggy lines are annotated using the strategy described in Section~\ref{subsec:dc}. \input{rq1} \input{rq2} \input{rq3} \section{Threats to Validity} \label{sec:threats} The efficiency of EnSpec\xspace depends on the availability of previous bugs on which the model is trained. To minimize this threat, we demonstrated that EnSpec\xspace works well in a cross-project setting. EnSpec\xspace also depends on the adequacy of the test suite. If there are not enough failing test cases, the performance of \hbox{${\cal S}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal B}\hspace{-0.01in}{\cal L}$}\xspace may suffer, and hence EnSpec\xspace's performance will also degrade. However, since EnSpec\xspace is a hybrid approach, it does not depend solely on test suites: the LM\xspace-based part will still be able to locate bugs, since it requires nothing but the source code. Further, to annotate buggy lines, we rely on a publicly available bug dataset and some evolutionary bugs. It is possible that other, unannotated bugs in the code corpus pollute our results; however, at any given point in time it is impossible to know all the bugs present in a piece of software. Finally, to minimize threats to external validity, we evaluated EnSpec\xspace on 10 projects across 2 languages, C and Java, which shows that EnSpec\xspace is not restricted to any particular programming language.
\section{Introduction} \label{sec:introduction} \IEEEPARstart{P}{rior} work~\cite{knoop, akyildiz2006next} has shown that dynamic spectrum access is one of the keys to improving spectrum utilization in wireless networks and meeting the increasing need for more capacity, particularly in the presence of other networks operating in the same spectrum. In the context of cognitive radio research, a standard assumption has been that secondary users may search for and use idle channels that are not being used by their primary users (PUs). Although many existing works focus on algorithm design and implementation in this field, nearly all of them assume a simple independent-channel (or PU activity) model that may not hold in practice. For instance, the operation of a low-power wireless sensor network (WSN) is based on IEEE 802.15.4 radios, which use the globally available 2.4 GHz and 868/900 MHz bands. These bands are shared by various wireless technologies (e.g., Wi-Fi, Bluetooth, RFID), as well as industrial/scientific equipment and appliances (e.g., microwave ovens) whose activities can affect multiple IEEE 802.15.4 channels. Thus, external interference can cause the channels in WSNs to be highly correlated, and new algorithms and schemes for dynamic multichannel access are required to tackle this challenge. Motivated by such practical considerations, we consider in this work a multichannel access problem with $N$ correlated channels. Each channel has two possible states, \textit{good} or \textit{bad}, and their joint distribution follows a $2^N$-state Markovian model. A single user (wireless node) selects one channel at each time slot to transmit a packet. If the selected channel is in the \textit{good} state, the transmission is successful; otherwise, the transmission fails. The goal is to obtain as many successful transmissions as possible over time.
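The setting above can be sketched as a small simulator; the joint-state encoding, the function names, and the example two-channel dynamics are illustrative assumptions, not the paper's experimental setup:

```python
import random

def run_policy(transitions, init_state, choose, steps, seed=0):
    """Simulate single-user access over jointly Markovian channels.

    transitions: maps a joint state (tuple with one 0/1 entry per channel,
      1 = good) to a list of (next_state, probability) pairs.
    choose: policy mapping the history of (channel, observation) pairs to
      the next channel index to sense; only the sensed channel is observed.
    Returns the number of successful transmissions over `steps` slots.
    """
    rng = random.Random(seed)
    state, history, successes = init_state, [], 0
    for _ in range(steps):
        ch = choose(history)
        obs = state[ch]          # partial observation: sensed channel only
        successes += obs         # success iff the chosen channel is good
        history.append((ch, obs))
        r, acc = rng.random(), 0.0
        for nxt, p in transitions[state]:
            acc += p
            if r < acc:
                state = nxt
                break
    return successes

# Two fully correlated channels whose good state alternates deterministically
# (the "fixed-pattern channel switching" case studied later in the paper).
SWITCHING = {(1, 0): [((0, 1), 1.0)], (0, 1): [((1, 0), 1.0)]}
```

With this fixed-pattern switching, a policy that follows the alternation succeeds in every slot, while the opposite policy never succeeds, illustrating how much the access policy matters even in a tiny instance.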
As the user can sense only the selected channel at each time slot, no full observation of the system is available. In general, the problem can be formulated as a partially observable Markov decision process (POMDP), which is PSPACE-hard; finding the exact solution requires exponential computational complexity~\cite{pspace-hard}. Even worse, the parameters of the joint Markovian model might not be known \emph{a priori}, which makes it even more difficult to find a good solution. We investigate the use of deep reinforcement learning, in particular deep Q-learning, from the field of machine learning as a way to enable learning in an unknown environment as well as to overcome the prohibitive computational requirements. By integrating deep learning with Q-learning, deep Q-learning, or the Deep Q-Network (DQN)~\cite{dqn}, can use a deep neural network with states as input and estimated Q values as output to efficiently learn policies for high-dimensional, large state-space problems. We implement a DQN that can find a channel access policy through online learning. This DQN approach is able to deal with large systems and to find a good or even optimal policy directly from historical observations, without any requirement to know the system dynamics \emph{a priori}. We provide a study of the optimal policy for the known fixed-pattern channel-switching situation and conduct various experiments showing that the DQN can achieve the same optimal performance. We then study the performance of the DQN in more complex scenarios and show, through both simulations and a real data trace, that the DQN is able to find superior, near-optimal policies. In addition, we design an adaptive DQN framework that is able to adapt to time-varying, dynamic environments, and validate through simulations that the proposed approach can detect environment changes and re-learn the optimal policy for the new environment. The rest of the paper is organized as follows.
In section~\ref{sec:related-work}, we discuss related work in the dynamic multichannel access field. In section~\ref{sec:problem-formulation}, we formulate the dynamic multichannel access problem when channels are potentially correlated. In section~\ref{sec:myopic-whittle}, a Myopic policy and a Whittle Index-based heuristic policy are presented for independent channels. In section~\ref{sec:dqn}, we present the DQN framework to solve the problem through online learning. We present the optimal policy study for the known fixed-pattern channel switching situation in section~\ref{sec:optimal-policy-deterministic-switching}, and show through simulations that DQN can achieve the same optimal performance in section~\ref{sec:simulation-deterministic-switching}. We present the experimental evaluation of DQN in more complex situations, through both simulations and a real data trace, in section~\ref{sec:evaluation}. We propose an adaptive DQN approach in section~\ref{sec:adaptive-dqn} and conclude our work in section~\ref{sec:conclusion}. \section{Related Work} \label{sec:related-work} The dynamic multichannel access problem has been widely studied. Unlike many decision making problems, such as vertical handoff in heterogeneous networks~\cite{handoff} and power allocation in energy harvesting communication systems~\cite{power}, which can be modeled as MDPs, the dynamic multichannel access problem is modeled as a POMDP, as channels are generally modeled as (two-state) Markov chains and a user has only partial observations of the system. Finding an optimal channel access policy then has exponential time and space complexity. To overcome this prohibitive computational complexity, a Myopic policy and its performance are first studied in~\cite{myopic_1} when channels are independent and identically distributed (i.i.d.).
The Myopic policy is shown to have a simple and robust round-robin structure, without the need to know the system transition probabilities beyond whether the channels are positively or negatively correlated. It is first proved in~\cite{myopic_1} that the Myopic policy is optimal when there are only two positively correlated channels in the system. In the subsequent work~\cite{myopic_n}, this optimality result is extended to any number of positively correlated channels and to two or three negatively correlated channels. However, the Myopic policy does not have any performance guarantee when channels are correlated or follow different distributions, which is the situation considered in our work. When channels are independent but may follow different Markov chains, the dynamic multichannel access problem can also be modeled as a restless multi-armed bandit (RMAB) problem. Each channel can be considered as an arm, and its state evolves following a Markov chain. At each time slot, a user chooses an arm with a state-dependent reward. The goal is to maximize the total expected reward over time. A Whittle Index policy is introduced in~\cite{whittle}; it shares the same simple semi-universal structure and optimality result as the Myopic policy when channels are stochastically identical. Numerical results are also provided showing that the Whittle Index policy can achieve near-optimal performance when channels are nonidentical. However, the Whittle Index approach cannot be applied when channels are correlated, which is the complicated case we study in this work. Both the Myopic policy and the Whittle Index policy are derived under the assumption that the system transition matrix is known. When the underlying system statistics are unknown, the user must apply an online learning policy with time spent on exploration to learn the system dynamics (either explicitly or implicitly).
When channels are independent, the RMAB approach can be applied and the corresponding asymptotic performance is compared with the performance achieved by a genie that has full knowledge of the system statistics. The commonly used performance metric is called regret, defined as the expected reward difference between a genie and a given policy. A sublinear regret is desirable, as it indicates the policy asymptotically achieves the same optimal performance as the genie. A regret bound that grows as a logarithmic function of time $t$ is achieved in~\cite{logregret_weak_1, logregret_weak_2, logregret_weak_3} when a \emph{weak regret}\footnote{As stated in~\cite{logregret_strong_1}, ``The genie being compared with is weaker in the sense that it is aware only of the steady-state distribution for each channel, and not the full transition matrices''} is considered, and a $O(\sqrt{t})$ regret bound and a $O(\log t)$ regret bound with respect to \emph{strict regret}\footnote{As stated in~\cite{logregret_strong_1}, ``Comparing the performance of a policy to the genie that knows the probability transition matrices for each channel and can thus perform optimally''} are achieved in~\cite{rootregret_strong_1} and~\cite{logregret_strong_1}, respectively. However, all these prior RMAB works are based on the independent channel assumption and cannot be generalized to correlated channels. In recent years, some works began to focus on the more practical and complex problem where both the system statistics are unknown and the channels are correlated. Q-learning, one of the most popular reinforcement learning approaches, is widely used as it is a model-free method that can learn the policy directly. The authors in~\cite{qlearning_seq} apply Q-learning to design channel sensing sequences, while in~\cite{qlearning_imperfect} it is shown that Q-learning can also take care of imperfect sensing.
Additionally, the work~\cite{qlearning_experiment} uses universal software radio peripheral (USRP) and GNU Radio units to implement and evaluate Q-learning in a multi-hop cognitive radio network testbed. However, all these works assume that the system state is fully observable and formulate the problem as an MDP, which significantly reduces the state space so that Q-learning can be easily implemented by using a look-up table to store/update Q-values. Since in our work a user is only able to observe the state of the chosen channel at each time slot, the current state of the system is not fully observable and our problem falls into the framework of POMDP. When updating Q-values, the original state space cannot be directly used because of this partial observability. Instead, one could consider using either the belief or a number of historical partial observations. This can lead to a very large state space, which makes it impossible to maintain a look-up Q table. New methods able to approximate Q-values are required to address this large-space challenge. In recent years, reinforcement learning, including Q-learning, has been integrated with advanced machine learning techniques, particularly deep learning, to tackle difficult high-dimensional problems~\cite{levine2015end,assael2015data,ba2014multiple}. In 2013, Google DeepMind used a deep neural network, called DQN, to approximate the Q-values in Q-learning, overcoming the state space limitation of the traditional look-up table approach. In addition, this deep neural network method provides an end-to-end approach in which an agent can learn a policy directly from his observations. In this work, we formulate the dynamic multichannel access problem as a POMDP and employ DQN to solve it. To the best of our knowledge, this is the first study and implementation of DQN in the field of dynamic multichannel access.
\section{Problem Formulation} \label{sec:problem-formulation} Consider a dynamic multichannel access problem where there is a single user dynamically choosing one out of $N$ channels to transmit packets. Each channel can be in one of two states: \textit{good} ($1$) or \textit{bad} ($0$). Since channels may be correlated, the whole system can be described as a $2^N$-state Markov chain. At the beginning of each time slot, a user selects one channel to sense and transmit a packet. If the channel quality is good, the transmission succeeds and the user receives a positive reward ($+1$). Otherwise, the transmission fails and the user receives a negative reward ($-1$). The objective is to design a policy that maximizes the expected long-term reward. Let the state space of the Markov chain be $\mathcal{S} = \{\mathbf{s}_1, ..., \mathbf{s}_{2^N}\}$. Each state $\mathbf{s}_i$ ($i \in \{1, ..., 2^N\}$) is a length-$N$ vector $[s_{i1}, ..., s_{iN}]$, where $s_{ik}$ is the binary representation of the state of channel $k$: good ($1$) or bad ($0$). The transition matrix of the Markov chain is denoted as $\mathbf{P}$. Since the user can only sense one channel and observe its state at the beginning of each time slot, the full state of the system, i.e., the states of all channels, is not observable. However, the user can infer the system state according to his sensing decisions and observations. Thus, the dynamic multichannel access problem falls into the general framework of POMDP. Let $\Omega(t) = [\omega_{\mathbf{s}_1}(t),..., \omega_{\mathbf{s}_{2^N}}(t)]$ represent the belief vector maintained by the user, where $\omega_{\mathbf{s}_i}(t)$ is the conditional probability that the system is in state $\mathbf{s}_i$ given all previous decisions and observations. Given the sensing action $a(t) \in \{1, ..., N\}$ representing which channel to sense at the beginning of time slot $t$, the user can observe the state of channel $a(t)$, denoted as $o(t) \in \{0, 1\}$. 
Then, based on this observation, he can update the belief vector at time slot $t$, denoted as $\hat{\Omega}(t) = [\hat{\omega}_{\mathbf{s}_1}(t),..., \hat{\omega}_{\mathbf{s}_{2^N}}(t)]$. The belief of each possible state $\hat{\omega}_{\mathbf{s}_i}(t)$ is updated as follows: \begin{equation} \hat{\omega}_{\mathbf{s}_i}(t) = \begin{cases} \frac{\omega_{\mathbf{s}_i}(t)\mathbbm{1}(s_{ik}=1)}{\sum_{j=1}^{2^N} \omega_{\mathbf{s}_j}(t)\mathbbm{1}(s_{jk}=1)} & a(t) = k, o(t) = 1 \\ \frac{\omega_{\mathbf{s}_i}(t)\mathbbm{1}(s_{ik}=0)}{\sum_{j=1}^{2^N} \omega_{\mathbf{s}_j}(t)\mathbbm{1}(s_{jk}=0)} & a(t) = k, o(t) = 0 \end{cases} \end{equation} where $\mathbbm{1}(\cdot)$ is the indicator function. Combining the newly updated belief vector $\hat{\Omega}(t)$ for time slot $t$ with the system transition matrix $\mathbf{P}$, the belief vector for time slot $t+1$ is: \begin{equation} \label{eqn:belief} \Omega(t+1) = \hat{\Omega}(t) \mathbf{P} \end{equation} A sensing policy $\pi : \Omega(t) \rightarrow a(t)$ is a function that maps the belief vector $\Omega(t)$ to a sensing action $a(t)$ at each time slot $t$. Given a policy $\pi$, the long-term reward considered in this paper is the expected accumulated discounted reward over an infinite time horizon, defined as: \begin{equation} \mathbb{E}_{\pi} [\sum_{t=1}^{\infty} \gamma^{t-1} R_{\pi(\Omega(t))}(t) | \Omega(1)] \end{equation} where $0 \leq \gamma < 1$ is a discount factor, $\pi(\Omega(t))$ is the action (i.e., which channel to sense) at time $t$ when the current belief vector is $\Omega(t)$, and $R_{\pi(\Omega(t))}(t)$ is the corresponding reward. If no information about the initial distribution of the system state is available, one can take the initial belief vector $\Omega(1)$ to be the stationary distribution of the system.
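As a concrete illustration, the two-step belief update (Bayesian conditioning on the observed channel, followed by propagation through $\Omega(t+1) = \hat{\Omega}(t)\mathbf{P}$) can be sketched in a few lines; the $N=2$ joint transition matrix $\mathbf{P}$ below is a hypothetical example, not taken from this paper.

```python
# Sketch of the two-step belief update for N = 2 channels.
# The joint transition matrix P is hypothetical, for illustration only.
import numpy as np

states = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # s_1, ..., s_{2^N}
P = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.1, 0.7, 0.1, 0.1],
              [0.1, 0.1, 0.7, 0.1],
              [0.1, 0.1, 0.1, 0.7]])

def belief_update(omega, k, o):
    """Condition omega on observing channel k in state o,
    then propagate one step through the Markov chain."""
    mask = (states[:, k] == o).astype(float)   # indicator 1(s_ik = o)
    omega_hat = omega * mask
    omega_hat /= omega_hat.sum()               # Bayes normalization
    return omega_hat @ P                       # Omega(t+1) = Omega_hat(t) P

omega = np.full(4, 0.25)                       # uniform initial belief
omega_next = belief_update(omega, k=0, o=1)    # sensed channel 0, found it good
```

Note that conditioning zeroes out every state inconsistent with the observation before the chain propagates the remaining probability mass.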
Our objective is to find a sensing policy $\pi^*$ that maximizes the expected accumulated discounted reward over infinite time \begin{equation} \pi^* = \argmax_{\pi} \mathbb{E}_{\pi} [\sum_{t=1}^{\infty} \gamma^{t-1} R_{\pi(\Omega(t))}(t) | \Omega(1)] \end{equation} As the dynamic multichannel access problem is a POMDP, the optimal sensing policy $\pi^*$ can be found by considering its belief space and solving an augmented MDP instead. Let $\mathcal{B}$ represent the belief space, and let $V^*(b)$ be the maximum expected accumulated discounted reward of the optimal policy $\pi^*$ with initial belief $b$. Then for every belief $b\in \mathcal{B}$, we have the following Bellman optimality equation \begin{equation} \begin{aligned} V^*(b) = \max_{k=1, ..., N} \Bigg{\{}\!\!\sum_{i=1}^{2^N}\omega_{\mathbf{s}_i} \mathbbm{1}(s_{ik}=1) &+ \gamma \sum_{i=1}^{2^N}\omega_{\mathbf{s}_i} \mathbbm{1}(s_{ik}=1) V^*(T(b|a=k, o=1)) \\ &+ \gamma \sum_{i=1}^{2^N}\omega_{\mathbf{s}_i} \mathbbm{1}(s_{ik}=0) V^*(T(b|a=k, o=0))\Bigg{\}} \end{aligned} \end{equation} where $T(b|a, o)$ is the updated belief given action $a$ and observation $o$, as in Eq.~(\ref{eqn:belief}). In theory, the value function $V^*(b)$ together with the optimal policy $\pi^*$ can be found via a value iteration approach. However, since there are multiple channels and they might be correlated, the belief space becomes a high-dimensional space. For instance, in a typical multichannel WSN based on the widely used IEEE 802.15.4-2015 standard~\cite{ieee802154-2015}, nodes have to choose one out of $16$ available channels to sense at each time slot. If we consider the potential correlations among channels and simplify each channel's condition to only two states, good or bad, the state space size becomes $2^{16}$. As the belief represents a probability distribution over all possible states, it also becomes high dimensional, which increases the computational cost.
Even worse, the infinite size of the continuous belief space and the impact of the current action on the future reward make the POMDP PSPACE-hard, which is even less likely to be solved in polynomial time than NP-hard problems~\cite{pspace-hard}. To exemplify the time complexity of solving such a POMDP, we simulate the multichannel access problem with known system dynamics and use a POMDP solver called SolvePOMDP~\cite{pomdp} to find its optimal solution. In Figure~\ref{fig:run_time}, we show the run-time as we increase the number of channels in the system. When the number of channels is higher than $5$, the POMDP solver cannot converge and is terminated when the run-time exceeds the time limit. \begin{figure} \vspace{-0.8cm} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=1\linewidth]{optimal_pomdp.png} \captionof{figure}{Running time (seconds) in log scale of the POMDP solver as we vary the number of channels in the system} \label{fig:run_time} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \centering \includegraphics[width=0.8\linewidth]{chann_fig.pdf} \captionof{figure}{Gilbert-Elliot channel model} \label{fig:chann_fig} \end{minipage} \vspace{-0.5cm} \end{figure} All these factors make it impossible to find the optimal solution to a POMDP in general, and many existing works~\cite{myopic_1, myopic_n, whittle, logregret_weak_1, logregret_weak_2, logregret_weak_3, logregret_strong_1, rootregret_strong_1} attempt to address this prohibitive computational cost by considering either simpler models or approximation algorithms. \section{Myopic Policy and Whittle Index} \label{sec:myopic-whittle} In the domain of dynamic multichannel access, there are many existing works on finding the optimal/near-optimal policy with low computational cost when the channels are independent and the system statistics ($\mathbf{P}$) are known.
The Myopic policy and the Whittle Index policy are two effective and easy-to-implement approaches for this setting. \subsection{Myopic Policy} A Myopic policy only focuses on the immediate reward obtained from an action and ignores its effects on the future. Thus the user always selects the channel that maximizes the expected immediate reward \begin{equation} \hat{a}(t) = \argmax_{k=1, ..., N} \sum_{i=1}^{2^N} \omega_{\mathbf{s}_i}(t) \mathbbm{1}(s_{ik}=1) \end{equation} The Myopic policy is not optimal in general. Researchers in~\cite{myopic_1},~\cite{myopic_n} have studied its optimality when the $N$ channels are independent and statistically identical Gilbert-Elliot channels that follow the same $2$-state Markov chain with transition matrix $\icol{p_{00} \quad p_{01}\\p_{10} \quad p_{11}}$, as illustrated in Fig.~\ref{fig:chann_fig}. It is shown that the Myopic policy is optimal for any number of channels when the channel state transitions are positively correlated, i.e., $p_{11} \geq p_{01}$. The same optimality result still holds for two or three channels when channel state transitions are negatively correlated, i.e., $p_{11} < p_{01}$. In addition, the Myopic policy has a simple, robust structure that follows a round-robin channel selection procedure. \subsection{Whittle Index Based Heuristic Policy} When channels are independent, the dynamic multichannel access problem can also be treated as a restless multi-armed bandit (RMAB) problem if each channel is viewed as an arm. An index policy assigns a value to each arm based on its current state and chooses the arm with the highest index at each time slot. Like the Myopic policy, an index policy does not have an optimality guarantee in general. In~\cite{whittle}, the Whittle Index is introduced for the case when $\mathbf{P}$ is known and all channels are independent but may follow different $2$-state Markov chain models.
In this case, the Whittle Index policy can be represented as a closed-form solution, and it has the same optimality result as the Myopic policy: the Whittle Index policy is optimal for any number of channels when channels are identical and positively correlated, or for two or three channels when channels are negatively correlated. In addition, when channels follow identical distributions, the Whittle Index policy has the same round-robin structure as the Myopic policy. When channels are correlated, the Whittle Index cannot be defined and thus the Whittle Index policy cannot be directly applied to our problem. To leverage its simplicity, we propose a heuristic that ignores the correlations among channels and uses the joint transition matrix $\mathbf{P}$ and Bayes' rule to compute the $2$-state Markov chain for each individual channel. Assume that for channel $k$, the transition matrix is represented as $p(c_k^{t+1}=m|c_k^{t}=n)$, where $m,n \in \{0,1\}$ (bad or good). Then, based on Bayes' rule we have, \begin{equation} p(c_k^{t+1}=m|c_k^{t}=n) = \frac{p(c_k^{t+1}=m, c_k^{t}=n)}{p(c_k^{t}=n)} = \frac{\sum_{j=1}^{2^N}\sum_{i=1}^{2^N}p(\mathbf{s}_j|\mathbf{s}_i)p(\mathbf{s}_i)\mathbbm{1}(s_{jk}=m)\mathbbm{1}(s_{ik}=n)}{\sum_{i=1}^{2^N}p(\mathbf{s}_i) \mathbbm{1}(s_{ik}=n)} \end{equation} where $p(\mathbf{s}_i)$ is the stationary distribution and $p(\mathbf{s}_j|\mathbf{s}_i)$ is the transition probability from state $\mathbf{s}_i$ to state $\mathbf{s}_j$ defined in $\mathbf{P}$. After each channel model is found, we can apply the Whittle Index policy. The Myopic policy and the Whittle Index policy are easy to implement in practice, as both of them have polynomial run-time, and in the case of independent channels they achieve optimality under certain conditions. However, to the best of our knowledge there are so far no easy-to-implement policies applicable to the general case where channels are correlated.
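The marginalization step of this heuristic can be sketched as follows; the $N=2$ joint chain $\mathbf{P}$ and the use of an eigendecomposition to obtain the stationary distribution are illustrative choices, not this paper's implementation.

```python
# Sketch: marginalize a joint 2^N-state chain into per-channel 2x2 chains.
# The joint transition matrix P below is hypothetical, for illustration only.
import numpy as np

states = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # s_i as bit vectors
P = np.array([[0.7, 0.1, 0.1, 0.1],
              [0.1, 0.7, 0.1, 0.1],
              [0.1, 0.1, 0.7, 0.1],
              [0.1, 0.1, 0.1, 0.7]])

def stationary(P):
    """Stationary distribution: left eigenvector of P for eigenvalue 1."""
    w, v = np.linalg.eig(P.T)
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

def marginal_chain(P, states, k):
    """2x2 transition matrix of channel k via the Bayes'-rule formula."""
    pi = stationary(P)
    Pk = np.zeros((2, 2))
    for n in (0, 1):
        mask_n = states[:, k] == n
        denom = pi[mask_n].sum()                 # sum_i pi(s_i) 1(s_ik = n)
        for m in (0, 1):
            mask_m = states[:, k] == m
            # sum_{i,j} pi(s_i) P(s_j | s_i) 1(s_ik = n) 1(s_jk = m)
            Pk[n, m] = (pi[mask_n, None] * P[np.ix_(mask_n, mask_m)]).sum() / denom
    return Pk

Pk = marginal_chain(P, states, k=0)
```

Each resulting `Pk` can then be fed to the Whittle Index computation as if the channel were an independent Gilbert-Elliot channel.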
Moreover, both policies require prior knowledge of the system's transition matrix, which is hard to obtain beforehand in practice. Thus, we need a new approach that copes with these challenges. \section{Deep Reinforcement Learning Framework} \label{sec:dqn} When channels are correlated and the system dynamics are unknown, there are two main approaches to the dynamic multichannel access problem: (i) the model-based approach: first estimate the system model from observations, and then either solve it by following the dynamic programming method of Section~\ref{sec:problem-formulation} or apply a computationally efficient heuristic such as the Myopic policy or the Whittle Index policy (which have polynomial run-time); (ii) the model-free approach: learn the policy directly through interactions with the system, without estimating the system model. The model-based approach is less favored since the user can only observe one channel per time slot, and this limited observation capability may result in a poor estimate of the system model. Moreover, even if the system dynamics are well estimated, solving a POMDP over a large state space remains a bottleneck, as the dynamic programming method has exponential time complexity (as explained in Section~\ref{sec:problem-formulation}) and the heuristic approaches do not have any performance guarantee in general. All these challenges motivate us to follow the model-free approach, which, by incorporating the idea of Reinforcement Learning, can learn directly from observations without estimating a system model and can be easily extended to very large and complicated systems. \subsection{Q-Learning} We focus on the Reinforcement Learning paradigm, specifically Q-learning~\cite{Q-learning}, to incorporate learning into the solution for the dynamic multichannel access problem. The goal of Q-learning is to find an optimal policy, i.e., a sequence of actions that maximizes the long-term expected accumulated discounted reward.
Q-learning is a value iteration approach whose essence is to find the Q-value of each state-action pair, where the state $\mathbf{x}$ is a function of observations (and rewards) and the action $a$ is an action that a user can take in state $\mathbf{x}$. The Q-value of a state-action pair $(\mathbf{x}, a)$ under policy $\pi$, denoted as $Q^{\pi}(\mathbf{x}, a)$, is defined as the sum of the discounted rewards received when taking action $a$ in the initial state $\mathbf{x}$ and then following the policy $\pi$ thereafter. $Q^{\pi^*} (\mathbf{x}, a)$ is the Q-value with initial state $\mathbf{x}$ and initial action $a$, followed by the optimal policy $\pi^*$. Thus, the optimal policy $\pi^*$ can be derived as \begin{equation} \pi^*(\mathbf{x}) = \argmax_{a} Q^{\pi^*}(\mathbf{x}, a), \forall \mathbf{x} \end{equation} One can use an online learning method to find $Q^{\pi^*}(\mathbf{x}, a)$ without any knowledge of the system dynamics. Assume that at the beginning of each time slot, the agent takes the action $a_t \in \{1, ..., N\}$ that maximizes the Q-value of the state-action pair $(\mathbf{x}_t, a_t)$ given the state $\mathbf{x}_t$, and gains a reward $r_{t+1}$. Then the online update rule of the Q-values with learning rate $0<\alpha<1$ is given as follows: \begin{equation} \label{eqn:q-update} Q(\mathbf{x}_t, a_t) \leftarrow Q(\mathbf{x}_t, a_t) + \alpha [r_{t+1} + \gamma \max_{a_{t+1}}Q(\mathbf{x}_{t+1}, a_{t+1}) - Q(\mathbf{x}_t, a_t)] \end{equation} It has been shown that in the MDP case, if each action is executed in each state an infinite number of times on an infinite run and the learning rate $\alpha$ decays appropriately, the Q-value of each state-action pair converges with probability 1 to the optimal $Q^{\pi^*}$, and thus the optimal policy can be found~\cite{qlearning}. In the context of dynamic multichannel access, the problem can be converted to an MDP over the belief space, and Q-learning can then be applied.
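The update rule in Eq.~(\ref{eqn:q-update}) can be sketched on a toy instance of our problem: two channels whose good state alternates deterministically at every slot (a hypothetical special case chosen for illustration), using the last action-observation pair as the Q-learning state.

```python
# Tabular Q-learning sketch on a toy 2-channel system in which the good
# channel alternates deterministically each slot (hypothetical example).
import random

alpha, gamma, eps = 0.2, 0.9, 0.1
Q = {}  # Q[(state, action)], state = (last action, last observation)

def q(s, a):
    return Q.get((s, a), 0.0)

random.seed(0)
good = 0                       # index of the currently good channel
state = (0, 1)                 # assume channel 0 was just sensed and was good
for t in range(20000):
    # epsilon-greedy action selection over the two channels
    if random.random() < eps:
        a = random.randrange(2)
    else:
        a = max((0, 1), key=lambda x: q(state, x))
    good = 1 - good            # channels alternate every slot
    obs = 1 if a == good else 0
    r = 1 if obs == 1 else -1
    nxt = (a, obs)
    # online update rule: Q <- Q + alpha * (r + gamma * max_a' Q(x', a') - Q)
    target = r + gamma * max(q(nxt, 0), q(nxt, 1))
    Q[(state, a)] = q(state, a) + alpha * (target - q(state, a))
    state = nxt
```

The learned table recovers the intuitive policy (switch channels after a good observation, stay after a bad one). With a belief-space state instead, the table would be indexed by belief vectors, which requires knowing $\mathbf{P}$ to maintain.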
However, this approach is impractical since the belief update requires knowing the system transition matrix $\mathbf{P}$ \textit{a-priori}, which is hardly available in practice. Instead, we apply Q-learning by directly considering the history of observations and actions. We define the state for Q-learning at time slot $t$ as the combination of the selected channels and their observed conditions over the previous $M$ time slots, i.e., $\mathbf{x}_t = [a_{t-1}, o_{t-1}, ..., a_{t-M}, o_{t-M}]$. We can then execute the online learning rule in Eq.~(\ref{eqn:q-update}) to find the sensing policy. Intuitively, the more historical information we consider (i.e., the larger $M$ is), the better Q-learning can learn. \subsection{Deep Q-Network} Q-learning works well when the problem's state-action space is small, as a look-up table can be used to execute the update rule in Eq.~(\ref{eqn:q-update}). But this is impossible when the state-action space becomes very large. Even worse, since many states are rarely visited, their corresponding Q-values are seldom updated, which causes Q-learning to take a very long time to converge. In this work, the state space size of Q-learning is $(2N)^M$, which grows exponentially with $M$. This is because the state of Q-learning is defined as the combination of observations and actions over the past $M$ time slots. In a single time slot, the number of possible observations is $2N$, as the user can only sense one out of $N$ channels and each channel has $2$ possible states. We do not separately count the action space, as the action information is implicitly included in the observation. Thus, the state space size of Q-learning is the number of all possible combinations of observations over the previous $M$ time slots, which is $(2N)^M$. As mentioned before, the number of previous time slots $M$ is also required to be large so that Q-learning can capture enough system information and learn well.
This can cause the state space of Q-learning to become very large, which prohibits using a traditional look-up table approach. Researchers have proposed both linear and non-linear Q-value approximations to overcome this state space limit. In 2013, DeepMind developed the Deep Q-Network (DQN), which uses a deep neural network to approximate the Q-values and achieves human-level control in the challenging domain of classic Atari 2600 games~\cite{dqn}. A neural network is a biologically-inspired programming paradigm organized in layers. Each layer is made up of a number of nodes known as neurons, each of which executes an activation function. Each neuron takes the weighted linear combination of the outputs of the neurons in the previous layer as input and passes the result of its nonlinear activation function to the next layer. This architecture enables the neural network to approximate nonlinear functions of the observational data. A deep neural network is a neural network with many processing layers; deep neural networks are able to learn from low-level, multi-dimensional observed data and have found success in areas such as computer vision and natural language processing~\cite{dnn_cv, dnn_nlp}. DQN combines Q-learning with deep learning: the Q-function is approximated by a deep neural network, called the Q-network, that takes the state as input and outputs the estimated Q-value of each action. The Q-network updates its weights $\mathbf{\theta}$ at each iteration $i$ to minimize the loss function $L_i(\mathbf{\theta}_i) = \mathbb{E} [(y_i - Q(\mathbf{x},a;\mathbf{\theta}_i))^2]$, where $y_i= \mathbb{E}[r + \gamma \max_{a'} Q(\mathbf{x}', a';\mathbf{\theta}_{i-1})]$ is derived from the same Q-network with the old weights $\mathbf{\theta}_{i-1}$ and the new state $\mathbf{x}'$ reached after taking action $a$ in state $\mathbf{x}$.
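One iteration of this loss minimization can be sketched with a linear function approximator standing in for the deep Q-network; all dimensions and the synthetic minibatch below are illustrative assumptions, not the paper's training setup.

```python
# Sketch of one DQN-style update: compute targets y_i from the old weights,
# then take a gradient step on the squared loss w.r.t. the current weights.
import numpy as np

rng = np.random.default_rng(0)
state_dim, n_actions, gamma, lr = 8, 4, 0.9, 0.01

theta = rng.normal(scale=0.1, size=(state_dim, n_actions))  # current weights
theta_old = theta.copy()                                    # old (target) weights

def q_values(x, w):
    return x @ w                        # one Q-value per action (linear model)

# A hypothetical minibatch of transitions (x, a, r, x')
batch = 32
x  = rng.normal(size=(batch, state_dim))
a  = rng.integers(n_actions, size=batch)
r  = rng.normal(size=batch)
x2 = rng.normal(size=(batch, state_dim))

y = r + gamma * q_values(x2, theta_old).max(axis=1)         # targets y_i
pred = q_values(x, theta)[np.arange(batch), a]              # Q(x, a; theta)
loss = np.mean((y - pred) ** 2)

# gradient descent step on the squared loss with respect to theta
err = pred - y
grad = np.zeros_like(theta)
for i in range(batch):
    grad[:, a[i]] += 2 * err[i] * x[i] / batch
theta -= lr * grad
```

In the actual DQN, the linear map is replaced by a deep network and the gradient step is taken by the optimizer, but the target computation from frozen old weights is the same idea.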
Since we directly use the previous historical observations and actions as the state for Q-learning, the state space grows exponentially with the amount of historical information considered, and a traditional look-up table approach to maintaining Q-values does not work well. Therefore, a DQN implementation is needed to obtain a tractable policy for the dynamic multichannel access problem. \section{Optimal Policy for Known Fixed-Pattern Channel Switching} \label{sec:optimal-policy-deterministic-switching} To study the performance of DQN, we first consider a situation where the $N$ channels in the system can be divided into several independent subsets that take turns being activated following a fixed pattern. We assume that at each time slot only a single subset is activated, such that all channels in the activated subset are good and all channels in inactivated subsets are bad. At each time slot, with probability $p$ ($0 \leq p \leq 1$) the next subset in the pattern is activated, and with probability $1-p$ the current subset remains activated. We assume the activation order of the subsets is fixed and does not change over time. In this section, we assume that the subset activation order, the switching probability $p$, and the initially activated subset are known \emph{a-priori}. The optimal policy can then be found analytically and is summarized in Theorem 1. This serves as a baseline to evaluate the performance of the DQN implementation in the next section. \begin{theorem} When the system follows fixed-pattern channel switching, if the activation order, the switching probability $p$, and the initially activated subset are known, the optimal channel access policy follows Algorithm~\ref{alg:optPolicy_1} or Algorithm~\ref{alg:optPolicy_2}, depending on the value of $p$.
\end{theorem} \vspace{-0.6cm} \begin{minipage}[t]{.45\textwidth} \begin{algorithm}[H] \small \caption{\small{Optimal Policy when $0.5 \leq p \leq 1$}} \label{alg:optPolicy_1} \begin{algorithmic}[1] \State At the beginning of time slot $0$, choose a channel in the initially activated subset $C_1$ \For{$n=1,2,\ldots$ } \State At the beginning of time slot $n$, \If{the previously chosen channel is good} \State Choose a channel in the next activated subset according to the subset activation order \Else \State Stay in the same channel \EndIf \EndFor \end{algorithmic} \end{algorithm} \end{minipage}% \hfill \begin{minipage}[t]{.45\textwidth} \begin{algorithm}[H] \small \caption{\small{Optimal Policy when $0 \leq p < 0.5$}} \label{alg:optPolicy_2} \begin{algorithmic}[1] \State At the beginning of time slot $0$, choose a channel in the initially activated subset $C_1$ \For{$n=1,2,\ldots$ } \State At the beginning of time slot $n$, \If{the previously chosen channel is good} \State Stay in the same channel \Else \State Choose a channel in the next activated subset according to the subset activation order \EndIf \EndFor \end{algorithmic} \end{algorithm} \end{minipage} \vspace{1cm} \begin{IEEEproof} Assume the currently activated subset at each time slot is known \emph{a-priori}. Then the problem can be modeled as a fully observable MDP, and the corresponding optimal policy can be found by computing and comparing the Q-values of all possible state-action pairs. Assume the $N$ channels in the system form $M$ independent subsets, so there are $M$ states in total. The subsets are indexed according to their fixed activation order as $C_1, C_2, \ldots, C_M$, where $C_1$ is the subset activated at the start of the system. Note that the subset activation order is circular, so that $C_M$ is followed by $C_1$. The corresponding system state at a time slot is denoted $S_i$ ($1 \leq i \leq M$) when channel subset $C_i$ is activated.
Let $p(S_j|S_i)$ be the transition probability from state $S_i$ to state $S_j$ ($i,j \in \{1, \ldots, M\}$) of the Markov chain, where indices are taken circularly so that $S_{M+1} \equiv S_1$. We have: \begin{equation} p(S_j|S_i)= \begin{cases} p, & j=i+1 \\ 1-p, & j=i \end{cases} \label{eq-markov-chain} \end{equation} Then the Q-value of the optimal policy starting in state $S_i$ with action $a$ is: \begin{equation} Q^*(S_i, a) = \sum_{j=1}^{M} p(S_j|S_i)[R(S_j,a) + \gamma V^*(S_j)] \label{eq-q-value-optimal-policy} \end{equation} where $R(S_j,a)$ is the immediate reward, i.e., $+1$ if the chosen channel $a$ is good in the reached state $S_j$ and $-1$ otherwise. $V^*(S_j)$, defined as $\max_a Q^*(S_j,a)$, represents the expected accumulated discounted reward obtained by an optimal policy over an infinite time horizon with initial state $S_j$. Substituting Eq.~(\ref{eq-markov-chain}) into Eq.~(\ref{eq-q-value-optimal-policy}), we have \begin{equation} \label{eqn:q_value} \begin{aligned} Q^*(S_i, a) = &\begin{cases} p\cdot 1+(1-p)\cdot(-1)+c, & a \in C_{i+1}\\ p\cdot(-1)+(1-p)\cdot 1+c, & a \in C_{i}\\ -1 + c, & \text{otherwise} \end{cases} =&\begin{cases} 2p-1+c, & a \in C_{i+1}\\ 1-2p+c, & a \in C_{i}\\ -1 + c, & \text{otherwise} \end{cases} \end{aligned} \end{equation} where $c=\gamma[pV^*(S_{i+1})+(1-p)V^*(S_{i})]$, which does not depend on the action. Since the optimal action for each state $S_i$ is $a^*(S_i) = \argmax_a Q^*(S_i,a)$, the action maximizing the Q-value of a given state $S_i$ in Eq.~(\ref{eqn:q_value}) is \begin{equation} \label{eqn:optimal_policy} a^*(S_i) = \begin{cases} \text{any channel in $C_{i+1}$}, & 0.5 \leq p \leq 1 \\ \text{any channel in $C_{i}$}, & 0 \leq p < 0.5 \end{cases} \end{equation} The above analysis holds under the assumption that the current state at each time slot is observable. As the initially activated channel subset is known, the user can initially choose a channel in this subset and then follow Eq.~(\ref{eqn:optimal_policy}) afterward.
Based on the observation of the chosen channel, the user is guaranteed to know the current state: if the chosen channel is good, the currently activated subset is the subset containing the chosen channel; otherwise, the currently activated subset is the subset prior to the chosen channel's subset in the activation order. Thus, the current state of the MDP is fully observable, and the policies in Alg.~\ref{alg:optPolicy_1} and Alg.~\ref{alg:optPolicy_2}, derived from Eq.~(\ref{eqn:optimal_policy}), are optimal. \end{IEEEproof} It turns out that the optimal policy for fixed-pattern channel switching shares a similarly simple and robust structure with the Myopic policy in~\cite{myopic_1}: it has a round-robin structure (in terms of the channel subset activation order) and does not require knowledge of the exact value of $p$, only whether it is above or below $0.5$. This semi-universal property makes the optimal policy easy to implement in practice and robust to mismatches in the system dynamics. \section{Experiment and Evaluation of Learning for Unknown Fixed-Pattern Channel Switching} \label{sec:simulation-deterministic-switching} Having derived the optimal policy for fixed-pattern channel switching with full knowledge of the system statistics in the previous section, we now implement a DQN and study how it performs under fixed-pattern channel switching without any prior knowledge of the system statistics. We first present the details of our DQN implementation and then evaluate its performance through three experiments. \subsection{DQN Architecture} We design a DQN following the \emph{Deep Q-learning with Experience Replay} algorithm~\cite{dqn} and implement it in TensorFlow~\cite{tensorflow}.
The structure of our DQN is a fully connected neural network with two hidden layers of $200$ neurons each\footnote{Generally speaking, deciding the number of hidden layers and the number of neurons per layer requires much trial and error, but we follow the general guidance provided in~\cite{Heaton}. We choose a two-hidden-layer neural network as it ``can represent an arbitrary decision boundary to arbitrary accuracy with rational activation functions and can approximate any smooth mapping to any accuracy." To decide the number of neurons in each layer, one rule of thumb is that ``The number of hidden neurons should be between the size of the input layer and the size of the output layer." We tried different numbers of neurons between $16$ (the output layer size) and $256$ (the input layer size), and the structure with $200$ neurons provided good performance with short training time.}. The activation function of each neuron is the Rectified Linear Unit (\textit{ReLU}), which computes the function $f(x)=\max(x,0)$. The state of the DQN is defined as the combination of actions and observations over the previous $M$ time slots, and serves as the input to the DQN. The number of historical time slots considered equals the number of channels in the system, i.e., $M=N$. A vector of length $N$ represents the observation at a time slot, where each entry indicates the quality of the corresponding channel. If channel $i$ is selected, the value of the $i$th entry is $1$ if the channel quality is good or $-1$ if it is bad; otherwise, the value is $0$, indicating that channel $i$ is not selected. This vector implicitly contains the action information, as a non-zero entry indicates that the corresponding channel was selected.
The output of the DQN is a vector of length $N$, where the $i$th entry represents the Q value of the given state if channel $i$ is selected. We apply the $\epsilon$-greedy policy with $\epsilon$ fixed at $0.1$ to balance exploration and exploitation, i.e., with probability $0.1$ the agent selects an action uniformly at random, and with probability $0.9$ the agent chooses the action that maximizes the Q value of the given state. A technique called \emph{Experience Replay} is introduced in~\cite{dqn} to break correlations among data samples and make the training stable and convergent. At each time slot $t$ during training, when action $a_t$ is taken in state $\mathbf{x}_t$, the user gains the corresponding reward $r_t$ and the state is updated to $\mathbf{x}_{t+1}$; the record $(\mathbf{x}_t, a_t, r_t, \mathbf{x}_{t+1})$ is stored in the replay memory. When updating the weights $\mathbf{\theta}$ of the DQN, a minibatch of $32$ samples is randomly selected from the replay memory to compute the loss function, and the Adam algorithm~\cite{adam} is then used to perform the stochastic gradient descent update of the weights (the hyperparameters are listed in Table~\ref{tab:hyperparameters}). In the following experiments, we consider a system of $16$ channels, i.e., $N=16$, a typical size for a multichannel WSN.
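As an illustration of the state encoding described above, the following minimal numpy sketch builds the DQN input vector from the action-observation history. It is not our actual implementation; the function names and the zero-padding of a short history are our own illustrative choices.

```python
import numpy as np

def encode_slot(action, good, n_channels=16):
    """One slot's observation vector: +1 at the chosen channel if it
    was good, -1 if it was bad, and 0 for all unchosen channels."""
    v = np.zeros(n_channels)
    v[action] = 1.0 if good else -1.0
    return v

def dqn_state(history, n_channels=16, m=16):
    """DQN input: the last m = N observation vectors concatenated into
    one length m * n_channels vector (zero-padded while the history
    is shorter than m slots)."""
    vecs = [encode_slot(a, g, n_channels) for a, g in history[-m:]]
    while len(vecs) < m:
        vecs.insert(0, np.zeros(n_channels))
    return np.concatenate(vecs)

# channel 3 was good two slots ago, channel 7 was bad in the last slot
s = dqn_state([(3, True), (7, False)])
print(s.shape)      # (256,), the input size for the 16-channel system
print(s[-16 + 7])   # -1.0
```

The non-zero entry per slot makes the taken action recoverable from the observation vector itself, which is why no separate action encoding is needed.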
\begin{table} \centering \caption{List of DQN Hyperparameters} \label{tab:hyperparameters} \begin{tabular}{cp{0.25\textwidth}}\hline Hyperparameters & \qquad \qquad Values \\ \hline $\epsilon$ & \qquad \qquad $0.1$\\ Minibatch size & \qquad \qquad $32$\\ Optimizer & \qquad \qquad Adam \\ Activation Function & \qquad \qquad ReLU \\ Learning rate & \qquad \qquad $10^{-4}$\\ Experience replay size & \qquad \qquad $1,000,000$\\ $\gamma$ & \qquad \qquad $0.9$\\ \hline \end{tabular} \end{table} \subsection{Single Good Channel, Round Robin Switching Situation} We first consider a situation where there is only one good channel in the system at any time slot. The channels take turns becoming good in a sequential round-robin fashion. In other words, if at time slot $t$ channel $k$ is good and all other channels are bad, then in the following time slot $t+1$, with probability $p$ channel $k+1$ becomes good and all others bad, and with probability $1-p$ channel $k$ remains good and all others bad. In this situation, the inherent dependence and correlation between channels are high. This is fixed-pattern channel switching in which each independent subset contains a single channel and the subsets are activated in sequential order. In Fig.~\ref{fig:illu_round}, we provide a pixel illustration to visualize how the channel states change in the $16$-channel system following the single good channel, round-robin situation over $50$ time slots. The x-axis is the index of each channel, and the y-axis is the time slot number. A white cell indicates that the corresponding channel is good, and a black cell indicates that the corresponding channel is bad. We compare the DQN with two other policies: the Whittle Index heuristic policy and the optimal policy with known system dynamics from Section~\ref{sec:optimal-policy-deterministic-switching}. The optimal policy has full knowledge of the system dynamics and serves as a performance upper bound.
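For intuition, the round-robin system and the policies compared above can be simulated in a few lines of Python. This is an illustrative sketch under our stated model, not the experiment code; the per-slot ordering (good channel advances, then the user chooses) follows the description above.

```python
import numpy as np

def simulate(policy, p=0.9, n=16, steps=20000, seed=0):
    """Average per-slot reward of a policy in the single good channel,
    round-robin system: the good channel advances by one w.p. p at
    each slot.  `policy` maps (last_action, last_reward) -> channel."""
    rng = np.random.default_rng(seed)
    good, a, r = 0, 0, 1          # slot 0: the user knows C_1 is good
    rewards = []
    for _ in range(steps):
        if rng.random() < p:      # channel k+1 becomes good w.p. p
            good = (good + 1) % n
        a = policy(a, r)
        r = 1 if a == good else -1
        rewards.append(r)
    return np.mean(rewards)

# Alg. 1 (p >= 0.5): advance after a success, stay after a failure
opt = lambda a, r: (a + 1) % 16 if r == 1 else a
# naive baseline: always stay on channel 0
stay = lambda a, r: 0

print(simulate(opt))    # close to 2p - 1 = 0.8
print(simulate(stay))   # close to 2/16 - 1 = -0.875
```

With $p=0.9$, the optimal policy succeeds with probability $p$ in every slot, so its average reward approaches $2p-1$, matching the per-slot term in Eq.~(\ref{eqn:q_value}).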
In the Whittle Index heuristic, the user assumes all channels are independent. For each channel, the user observes it for $10,000$ time slots and uses Maximum Likelihood Estimation (MLE) to estimate the corresponding $2$-state Markov chain transition matrix. Once the system model is estimated, the Whittle Index policy can be applied. As can be seen in Fig.~\ref{fig:round_robin}, as the switching probability $p$ varies, DQN remains robust, matching the optimal policy in all five cases and performing significantly better than the Whittle Index heuristic. This is because DQN can implicitly learn the system dynamics, including the correlation among channels, and find the optimal policy accordingly. In contrast, the Whittle Index heuristic simply assumes the channels are independent and cannot discover or exploit the correlation among channels. Moreover, as the switching probability $p$ increases, the accumulated reward from DQN also increases, because there is more certainty in the system, which increases the optimal reward. \begin{figure} \vspace{-1cm} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.4\linewidth]{pixel_round_1.png} \captionof{figure}{A capture of a single good channel, round robin switching situation over $50$ time slots} \label{fig:illu_round} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \vspace{1.2cm} \centering \includegraphics[width=1\linewidth]{plot_round_robin.png} \captionof{figure}{Average discounted reward as we vary the switching probability $p$ in the single good channel, round robin switching} \label{fig:round_robin} \end{minipage} \vspace{-0.5cm} \end{figure} \subsection{Single Good Channel, Arbitrary Switching Situation} Next, we study a situation in which there is still only one good channel in any time slot. However, unlike the previous situation, the channels become good in an arbitrary order.
Fig.~\ref{fig:illu_random} shows a pixel illustration of the $16$-channel system in this situation. \begin{figure} \vspace{-1cm} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=.4\linewidth]{pixel_random_1.png} \captionof{figure}{A capture of a single good channel, arbitrary switching situation over $50$ time slots} \label{fig:illu_random} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \vspace{1.2cm} \centering \includegraphics[width=1\linewidth]{plot_rand_single.png} \captionof{figure}{Average discounted reward as we vary the switching order in the single good channel, arbitrary switching} \label{fig:rand} \end{minipage} \vspace{-0.5cm} \end{figure} In the experiment, the channel-switching probability $p$ is fixed at $0.9$, and we randomly choose $8$ different arbitrary channel switching orders. As can be seen from Fig.~\ref{fig:rand}, DQN achieves the optimal performance and significantly outperforms the Whittle Index heuristic in all cases. \subsection{Multiple Good Channels Situation} In this section, we investigate the situation where there may be more than one good channel in a time slot. The $16$ channels are evenly divided into several subsets, each containing the same number of channels. At any time slot, only one subset is activated: all channels in this subset are good, and channels in the other, inactivated subsets are bad. The subsets take turns becoming activated with a switching probability fixed at $0.9$. This is fixed-pattern channel switching in which each independent subset contains one or more channels. Fig.~\ref{fig:illu_mul} shows a pixel illustration of the $16$-channel system in a multiple good channels situation.
\begin{figure} \vspace{-1cm} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=0.5\linewidth]{pixel_mul_1.png} \captionof{figure}{A capture of a multiple good channels situation over $50$ time slots} \label{fig:illu_mul} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \centering \includegraphics[width=1\linewidth]{plot_mul_channel.png} \captionof{figure}{Average discounted reward as we increase the number of good channels in the multiple good channels situation} \label{fig:mul_chann} \end{minipage} \vspace{-0.5cm} \end{figure} We vary the number of channels in a subset as $1$, $2$, $4$ and $8$ in the experiment, and present the results in Fig.~\ref{fig:mul_chann}. In the upper graph of Fig.~\ref{fig:mul_chann}, the $16$ channels are in order and the subsets are activated in a sequential round-robin order, while in the bottom graph the channels are arranged arbitrarily and the activation order of the subsets is also arbitrary. As can be seen, DQN always achieves the optimal performance, and the training time decreases as the number of good channels increases. This is because there is a greater chance of finding a good channel when more good channels are available in a time slot; the learning process becomes easier, so the DQN agent spends less time exploring and finds the optimal policy more quickly. This also explains why the Whittle Index heuristic performs better when more good channels are available. Nevertheless, DQN significantly outperforms the Whittle Index heuristic in all cases. \section{Experiment and Evaluation of DQN for More Complex Situations} \label{sec:evaluation} From the results in Section~\ref{sec:simulation-deterministic-switching}, we can see that DQN outperforms the Whittle Index heuristic and achieves optimal performance in unknown fixed-pattern channel switching.
Another question to ask is: can DQN achieve good or even optimal performance in more complex and realistic situations? To answer this question, and at the same time provide a deeper understanding of DQN, we have re-tuned our neural network structure into a fully connected neural network with each hidden layer containing $50$ neurons (with the learning rate set to $10^{-5}$)\footnote{We tried the same DQN structure as in Section VII, but it does not perform well. One intuition is that the number of parameters in the two-hidden-layer DQN with $200$ neurons per layer is very large, which may require careful and longer training. Additionally, the two-hidden-layer neural network may not be able to provide a good approximation of the Q values in more complex problems. Therefore, we decided to add one more hidden layer and reduce the number of neurons to $50$. This deeper DQN with fewer neurons can approximate more complicated Q-value functions while requiring less training time before finding a good policy.}, and considered more complex simulated situations as well as real data traces. In this section, in addition to the Whittle Index heuristic, we also compare DQN with a Random policy in which the user selects one channel uniformly at random at each time slot. Since the optimal policy, even with full knowledge of the system statistics, is in general computationally prohibitive to obtain (by solving the Bellman equation in the belief state space), we implement the Myopic policy, as it is simple, robust, and achieves optimal performance in some situations. However, one cannot apply the Myopic policy in general when the system statistics are unknown, since a single user cannot observe the states of all channels at the same time and therefore cannot estimate the transition matrix of the entire system.
Moreover, even if we allowed the user to observe the states of all channels, the state space of the full system is too large to estimate, and one would easily run out of memory storing such a large transition matrix. Therefore, in the following simulations, we only consider cases where $\mathbf{P}$ is sparse and easy to access, and implement the Myopic policy as a genie (knowing the system statistics \emph{a priori}) to evaluate its performance. \subsection{Perfectly correlated scenario} We consider a highly correlated scenario. In a $16$-channel system, we assume only two or three channels are independent, and the other channels are exactly identical or opposite to one of these independent channels. This is the case when some channels are perfectly correlated, i.e., the correlation coefficient $\rho$ is either $1$ or $-1$. During the simulation, we arbitrarily set the independent channels to follow the same $2$-state Markov chain with $p_{11} \geq p_{01}$. When the correlation coefficient $\rho=1$, the user can ignore the channels that are perfectly correlated with the independent channels and only select among the independent channels. In this case, the multichannel access problem becomes selecting one channel from several i.i.d. channels that are positively correlated, i.e., $p_{11} \geq p_{01}$. Then, as shown in prior work~\cite{myopic_1, myopic_n}, the Myopic policy with known $\mathbf{P}$ is optimal and has a simple round-robin structure alternating among the independent channels. In the case $\rho=-1$, the Myopic policy with known $\mathbf{P}$ also has a simple structure that alternates between two perfectly negatively correlated channels. Though more analysis is needed in the future to show whether the Myopic policy is optimal or near-optimal when $\rho=-1$, it can still serve as a performance benchmark, as it is obtained with full knowledge of the system dynamics.
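The channel model of this scenario can be generated as follows. This is an illustrative sketch; the transition probabilities $p_{11}=0.8$, $p_{01}=0.3$ are arbitrary values satisfying $p_{11} \geq p_{01}$, not the exact simulation parameters.

```python
import numpy as np

def markov_channel(p11=0.8, p01=0.3, steps=5000, seed=0):
    """Sample one independent 2-state Markov channel (1 = good)."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps, dtype=int)
    x[0] = 1
    for t in range(1, steps):
        p_good = p11 if x[t - 1] == 1 else p01
        x[t] = 1 if rng.random() < p_good else 0
    return x

a = markov_channel()      # an independent channel
b = a.copy()              # perfectly correlated copy (rho = +1)
c = 1 - a                 # perfectly anti-correlated mirror (rho = -1)

print(round(np.corrcoef(a, b)[0, 1], 6))   # 1.0
print(round(np.corrcoef(a, c)[0, 1], 6))   # -1.0
```

Dependent channels are obtained purely by copying or flipping an independent channel, so the user effectively faces only the two or three independent chains.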
\begin{figure} \vspace{-1cm} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=1\linewidth]{16_reward.png} \captionof{figure}{Average discounted reward for 6 different cases. Each case considers a different set of correlated channels} \label{fig:16_reward} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \centering \includegraphics[width=1\linewidth]{q_value.png} \captionof{figure}{Average maximum Q-value of a set of randomly selected states in 6 different simulation cases} \label{fig:q_value_fig} \end{minipage} \vspace{-0.5cm} \end{figure} In Fig.~\ref{fig:16_reward} we present the performance of all four policies: (i) DQN, (ii) Random, (iii) Whittle Index heuristic, and (iv) Myopic policy with known $\mathbf{P}$. In the first three cases (x-axis 0, 1 and 2), the correlation coefficient $\rho$ is fixed at $1$, and in the last three cases (x-axis 3, 4 and 5), $\rho$ is fixed at $-1$. We also vary the set of correlated channels to make the cases different. The Myopic policy is optimal in the first three cases and conjectured to be near-optimal in the last three. As shown in Fig.~\ref{fig:16_reward}, the Myopic policy, implemented with full knowledge of the system, performs best in all six cases and serves as an upper bound. DQN provides performance very close to the Myopic policy without any knowledge of the system dynamics. The Whittle Index policy performs worse than DQN in all cases. In addition, we collect the Q-values predicted by the DQN to show that DQN indeed learns and improves its performance. Given a state $\mathbf{x}$, the maximum Q-value over all actions, i.e., $\max_{a} Q(\mathbf{x}, a)$, is the estimate of the maximum expected accumulated discounted reward starting from $\mathbf{x}$ over an infinite time horizon.
For each simulation case, we fix a randomly selected set of states and plot the average maximum Q value over these states as training proceeds. As shown in Fig.~\ref{fig:q_value_fig}, in all cases the average maximum Q-value first increases and then stabilizes, which indicates that DQN learns from experience, improves its performance, and converges to a good policy. As the environments differ, DQN may take a different amount of time to find a good policy in each case, reflected in the figure by the different numbers of training iterations needed for the Q values to stabilize. \subsection{Real data trace} We use a real data trace collected from our indoor testbed Tutornet\footnote{More information about the testbed is available at http://anrg.usc.edu/www/tutornet/} to train and evaluate the performance of DQN on real systems. The testbed is composed of TelosB nodes with IEEE 802.15.4 radios. We programmed a pair of motes, placed approximately 20~meters apart, as transmitter and receiver. The transmitter periodically transmits one packet on each of the $16$ available channels, and the receiver records the successful and failed attempts. The transmitter switches among channels so quickly that the time difference can be ignored, and the channel states of the $16$ channels measured in each period can be considered to belong to the same time slot. Both nodes are synchronized to avoid packet loss due to frequency mismatch, and the other motes on the testbed are not in use. The only interference comes from surrounding Wi-Fi networks and multi-path fading. There are 8 Wi-Fi access points on the same floor and dozens of people working in the environment, which creates a very dynamic scenario for multichannel access. The data were collected over around $17$ hours. Due to the configuration of the Wi-Fi center channels, there are $8$ channels whose conditions are significantly better than the others.
Randomly selecting one of these good channels and sticking to it would already lead to good performance. Thus, to create a more adverse scenario and test the learning capability of the DQN, we ignore all these good channels and only use the data trace from the remaining $8$ channels. We use the same data trace to train the DQN and to compute the MLE of the transition matrices of each channel for the Whittle Index based heuristic policy. We compare the performance of the DQN policy, the Whittle Index based heuristic policy, and the Random policy. The Myopic policy is not considered, as finding the transition matrix of the entire system is computationally expensive. The average accumulated discounted reward of each policy is listed in Table~\ref{tab:performance}. It can be seen that DQN performs best in this complicated real scenario. We also present the channel utilization of each policy in Fig.~\ref{fig:chann_util} to illustrate the differences among them. It shows that DQN benefits from using other channels when the two best channels (used by the Whittle Index heuristic all the time) are not in good states. \begin{table}[ht] \vspace{-0.5cm} \begin{minipage}[b]{0.45\linewidth} \centering \vspace{-5cm} \caption{Performance on Real Data Trace} \begin{tabular}{cp{0.7\textwidth}}\hline Method & \qquad Accumulated Discounted Reward \\ \hline DQN & \qquad \qquad $0.9473$\\ Whittle Index & \qquad \qquad $0.7673$\\ Random Policy & \qquad \qquad $-2.1697$\\ \hline \end{tabular} \label{tab:performance} \end{minipage}\hfill \begin{minipage}[b]{0.45\linewidth} \centering \includegraphics[width=1\textwidth]{real_chann_util.png} \captionof{figure}{Channel utilization of 8 channels in the testbed} \label{fig:chann_util} \end{minipage} \vspace{-0.5cm} \end{table} \subsection{Practical Issues} From the previous analysis and experimental results, DQN shows promising performance in the multichannel access problem.
However, several issues need to be considered when implementing it in real deployments. In this paper, we focus on a single user and simply assume the user can always observe the actual state of the selected channel at each time slot. In practice, there are two entities involved, the sender and the receiver. They must be synchronized and use the same channel to communicate at all times. In a time slot when the sender selects a channel to transmit a packet, the receiver learns the selected channel's condition based on whether it receives the packet or not, and the sender learns it from the acknowledgement (ACK) or negative-acknowledgement (NAK) message sent back by the receiver. If the receiver successfully receives the packet, it knows the channel is good and sends back an ACK, so the sender also knows the channel is good; if the receiver does not receive any packet, it knows the channel is bad and sends back a NAK, so the sender also knows the channel is bad. Therefore, when applying DQN in practice, we need to make sure the sender and the receiver always select the same channel at each time slot, both to guarantee their communication and to ensure they share the same channel-condition information through ACKs and NAKs. One approach is to run identically structured DQNs at the sender and the receiver separately. The two DQNs start with the same default channel and are trained concurrently. We need to make sure the two DQNs have the same trained parameters and select the same channels at all times during training. Even though the ACK/NAK method can guarantee that the sender and receiver have the same channel observations, and thus the same training samples, two factors may still cause the channel selections at the sender and the receiver to differ.
First, in the exploration step, since each DQN randomly selects a channel, the two DQNs may select different channels. Second, in the backpropagation step, each DQN randomly selects a set of data samples from its experience replay to update its parameters. This may cause the parameters of the two DQNs to diverge, which further results in different channel selection policies. To resolve this possible mismatch, we can use the same random seed on both sides to initialize the pseudorandom number generator. In this way, the two DQNs always select the same random channel during exploration and use the same set of data samples to update their parameters. Therefore, the two DQNs will always select the same channel, and the final learned policies are guaranteed to be the same. A channel mismatch can still happen when an ACK or NAK is lost (due to noise and/or interference), so that the sender and receiver have different observations of the selected channel's condition and may select different channels later. This inconsistent channel observation not only causes loss of communication, but also results in different learned DQN models at the sender and receiver, which give different channel selection policies. One possible approach is to let the sender and the receiver detect when a channel mismatch happens and recover in time. Since the sender expects to receive an ACK or NAK after each message is sent, it can detect a mismatch event if no ACK or NAK is received. Once the sender detects a possible channel mismatch, it stops updating its DQN model and training dataset, and transmits future data using a single channel, or a small set of channels known so far to have better conditions~\cite{kim2017fastjoining}.
In addition, along with the original data messages, the sender also sends the timestamp at which the channel mismatch was perceived. The sender keeps sending this channel mismatch time information until an ACK is received, which indicates that the receiver is on the same channel again and has received the channel mismatch information. The receiver can then set its DQN model and its observation training dataset back to the state right before the channel mismatch happened (assuming the receiver uses additional memory to store past states of the trained parameters and data samples), which guarantees that the sender and the receiver have the same DQN models and training datasets. They can resume operating and training thereafter. Suppose the sender only uses the one current best channel to send the channel mismatch timestamp, and let $p_{good}$ be the probability of this channel being good in a time slot, $p_{ack}$ the probability that an ACK or NAK is lost, and $N$ the number of channels in the system. As the receiver keeps training its DQN model before becoming aware of the channel mismatch, it applies the $\epsilon$-greedy exploration policy (explained in Section VII-A) during the training phase. Therefore, with probability $\epsilon$, the receiver picks a channel at random. Thus, after a channel mismatch happens, the probability that the sender and the receiver meet again on the same good channel and that the ACK is successfully received is $\frac{\epsilon p_{good}(1-p_{ack})}{N}$. Once they meet on the same good channel, they can re-synchronize. Based on the above approach, the expected number of time slots required for re-synchronization after a channel mismatch is $\frac{N}{\epsilon p_{good}(1-p_{ack})}$. Since the ACK packet is very small, the probability of its loss is small~\cite{decouto2005highthroughput}.
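For concreteness, the expected re-synchronization delay above can be evaluated numerically. The parameter values below are illustrative placeholders, not measured quantities.

```python
def expected_resync_slots(n=16, eps=0.1, p_good=0.8, p_ack=0.01):
    """Expected slots to re-synchronize after a channel mismatch.
    Per slot the receiver explores w.p. eps, lands on the sender's
    channel w.p. 1/n, the channel is good w.p. p_good, and the ACK
    survives w.p. 1 - p_ack; the wait is geometric with this success
    probability, so the mean is its reciprocal."""
    p_meet = eps * p_good * (1 - p_ack) / n
    return 1.0 / p_meet

print(round(expected_resync_slots(), 1))   # 202.0 slots
```

The delay scales linearly with $N$ and inversely with $\epsilon$, so a larger exploration rate shortens recovery at the cost of more exploration during normal operation.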
As long as the sender and the receiver can re-synchronize after a channel mismatch, the effectiveness of the proposed policy is preserved and the average performance is not affected much. \section{Adaptive DQN for Unknown, Time-Varying Environments} \label{sec:adaptive-dqn} The studies in the previous sections all focus on stationary situations, in which DQN performs well in learning good or even optimal dynamic multichannel access policies. In practice, however, real systems are often dynamic across time, and the DQN framework of the previous sections cannot perform well in such situations. This is because we keep evaluating the newly-learned policy after each training iteration, and once a good policy is learned\footnote{In this paper, we manually check the evaluation performance and stop the learning when a policy is good enough. More advanced techniques such as the Secretary Problem~\cite{secretary} (by considering each learned policy as a secretary) can be used to decide when to accept a policy and stop learning.}, our DQN framework stops learning and keeps following this policy. Thus, it lacks the ability to discover changes and re-learn when needed. To make DQN more applicable in realistic situations, we have designed an adaptive algorithm, Algorithm~\ref{alg:adaptiveDQN}, that enables DQN to detect system changes and re-learn when needed. The main idea is to let DQN periodically evaluate the performance (i.e., the accumulated reward) of its current policy; if the performance degrades by a certain amount, the DQN infers that the environment has changed and starts re-learning. The Whittle Index heuristic, on the other hand, cannot detect environment changes by simply observing reward changes.
This is because the policy given by the Whittle Index heuristic is far from optimal, and it may perform poorly in both the old and new environments, so there is no significant reward change from which to infer that the environment has changed. In addition, even if the Whittle Index heuristic could detect the change, the new policy may still perform badly, since the heuristic ignores the correlations among channels and cannot correctly estimate the system dynamics from its limited partial observations. \begin{algorithm}[!ht] \small \caption{\small{Adaptive DQN}}\label{alg:adaptiveDQN} \begin{algorithmic}[1] \State First train DQN to find a good policy to operate with \For{$n=1,2,\ldots$ } \State At the beginning of period $n$ \State Evaluate the accumulated reward of the current policy \If{The reward is reduced by a given threshold\footnotemark} \State Re-train the DQN to find a new good policy \Else \State Keep using the current policy \EndIf \EndFor \end{algorithmic} \end{algorithm} \footnotetext{The threshold is set by the user according to her preference.} In the experiment, we make the system initially follow one of the fixed-pattern channel switching cases from Section~\ref{sec:simulation-deterministic-switching}, and after some time it changes to another case. We consider both single good channel and multiple good channel situations. We let DQN operate automatically according to Alg.~\ref{alg:adaptiveDQN}, while we manually re-train the Whittle Index heuristic when the environment changes. Fig.~\ref{fig:pattern_change} compares the reward of both the old and new policies learned by DQN and by the Whittle Index heuristic in the new environment, as we vary the pattern changes. As can be seen, DQN is able to find an optimal policy for the new environment, as the genie optimal policy does, while the Whittle Index heuristic does not.
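The control loop of Algorithm~\ref{alg:adaptiveDQN} can be sketched as follows. The `train` and `evaluate` callables and the absolute drop threshold are placeholders standing in for our DQN training and evaluation routines; the toy environment below is purely illustrative.

```python
def adaptive_dqn(train, evaluate, threshold, periods):
    """Sketch of the adaptive loop: retrain whenever the periodic
    evaluation drops below the post-training baseline by `threshold`."""
    policy = train()
    baseline = evaluate(policy)
    for _ in range(periods):
        reward = evaluate(policy)
        if baseline - reward > threshold:   # environment likely changed
            policy = train()
            baseline = evaluate(policy)
    return policy

# toy environment whose optimum flips from 'A' to 'B' mid-run
env = {'best': 'A'}
calls = {'n': 0}
def train():
    return env['best']                      # stand-in for DQN training
def evaluate(policy):
    calls['n'] += 1
    if calls['n'] == 5:
        env['best'] = 'B'                   # the environment changes
    return 1.0 if policy == env['best'] else 0.0

print(adaptive_dqn(train, evaluate, threshold=0.5, periods=20))  # B
```

The loop re-learns only on a significant drop, so evaluation overhead stays low while the environment is stationary.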
We also provide the real-time accumulated reward during the learning process of DQN and the Whittle Index heuristic for one of the above pattern-changing situations in Fig.~\ref{fig:realtime}. The system initially starts in an environment in which $8$ channels are good at each time slot for the first $10$ iterations. As can be seen, both DQN and the Whittle Index heuristic quickly find a good channel access policy, but only DQN achieves the optimal performance. At iteration $11$, the environment changes to one in which only $1$ channel is good at each time slot. As there is a significant drop in the reward, DQN detects the change and starts re-learning. At iteration $70$, DQN finds the optimal policy, and our system keeps following it thereafter. On the other hand, even though we manually enable the Whittle Index heuristic to detect the change, re-estimate the system model, and compute a new policy, its performance is still unsatisfactory, as it cannot exploit the correlation among channels. \begin{figure} \vspace{-1cm} \centering \begin{minipage}{.45\textwidth} \centering \includegraphics[width=1\linewidth]{plot_changing_pattern.png} \captionof{figure}{Average discounted reward as we vary the channel switching pattern situations} \label{fig:pattern_change} \end{minipage}% \hfill \begin{minipage}{.45\textwidth} \centering \includegraphics[width=1\linewidth]{plot_realtime.png} \captionof{figure}{Average discounted reward in real time during training in unknown fixed-pattern channel switching} \label{fig:realtime} \end{minipage} \vspace{-0.5cm} \end{figure} \section{Conclusion and Future Work} \label{sec:conclusion} In this paper, we have considered the dynamic multichannel access problem in a more general and practical scenario in which channels are correlated and the system statistics are unknown.
As the problem, in general, is an unknown POMDP without any tractable solution, we have applied an end-to-end DQN approach that directly utilizes historical observations and actions to find the access policy via online learning. In the fixed-pattern channel switching case, we have been able to analytically find the optimal access policy, which is achieved by a genie with known system statistics and full observation ability. Through simulations, we have shown that DQN is able to achieve the same optimal performance even without knowing any system statistics. We have re-tuned the DQN implementation, and shown from both simulations and a real data trace that DQN can achieve near-optimal performance in more complex scenarios. In addition, we have also designed an adaptive DQN and shown through numerical simulations that it is able to detect system changes and re-learn in non-stationary dynamic environments to provide good performance. There are a number of open directions suggested by the present work. First, we plan to apply the DQN framework to more realistic and complicated scenarios such as multi-user, multi-hop, and simultaneous transmissions in WSNs. The DQN framework can be directly extended to these practical factors in a simple way. For example, with multiple users, to avoid interference and collisions among users, we can adopt a centralized approach: assuming there is a centralized controller that can select a subset of non-interfering channels at any time slot and assign one to each user to avoid collisions. By redefining the action as selecting a subset of non-interfering channels, the DQN framework can be directly used for this multi-user scenario. As the action space becomes large when selecting multiple channels, the current DQN structure requires careful re-design and may need a very long training interval before finding a reasonable solution.
Instead, we use the same DQN structure as in Section VII and consider the multiple-user situation in a smaller system that contains $8$ channels, where at any time slot $6$ channels become good and channel conditions change in a round-robin pattern. The number of users varies from $2$ to $4$. As shown in Fig.~\ref{fig:multi_user}, DQN can still achieve good performance in the multiple-user case. Other deep reinforcement learning approaches, such as Deep Deterministic Policy Gradient (DDPG)~\cite{ddpg}, will be studied in future work to tackle the large action space challenge. \begin{figure}[] \vspace{-1cm} \centering \includegraphics[width=.5\textwidth]{multi_user.png} \caption{Average discounted reward as we vary the number of users in the multiple-user situation} \label{fig:multi_user} \vspace{-0.5cm} \end{figure} Second, when the number of users in the network becomes large, the centralized approach proposed above becomes too computationally expensive to implement in practice. In future work, we plan to study a more practical distributed approach where each user learns a channel selection policy independently. One intuitive idea is to implement a DQN at each user independently. Users can then learn their channel selection policies in parallel, and avoid interference and conflicts by making proper channel-selection decisions based on the information gained from observations and rewards. However, whether a good or optimal policy can be learned, and whether an equilibrium exists, are unknown and need further investigation. Moreover, as DQN is not easy to tune and may easily get stuck in local optima, we plan to spend more time improving our DQN implementation as well as considering other deep reinforcement learning approaches, to see if they can reach optimal performance in general situations, and to study the tradeoff between implementation complexity and performance guarantees.
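To make the centralized action redefinition discussed above concrete, one can enumerate subsets of mutually non-interfering channels as the controller's actions; the interference predicate and system sizes below are hypothetical, chosen only to show how quickly the action space grows with the number of users.

```python
from itertools import combinations

def multiuser_actions(n_channels, n_users, interferes):
    """Enumerate actions for the centralized multi-user setting: each
    action is a set of `n_users` mutually non-interfering channels,
    one per user. `interferes(i, j)` is a placeholder predicate."""
    actions = []
    for subset in combinations(range(n_channels), n_users):
        # Keep the subset only if every pair of chosen channels is compatible.
        if all(not interferes(i, j) for i, j in combinations(subset, 2)):
            actions.append(subset)
    return actions

# Hypothetical interference model: adjacent channels interfere.
adjacent = lambda i, j: abs(i - j) == 1
acts = multiuser_actions(n_channels=8, n_users=2, interferes=adjacent)
```

Even in this toy setting the action space is combinatorial in the number of users, which is exactly why the single-channel DQN structure needs re-design for many users.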
Also, as a way to test the full potential of DQN (or Adaptive DQN) and other deep reinforcement learning technologies on the multichannel access problem, we encourage the networking community to work together to create an open-source dataset (as has been done in the computer vision and NLP communities) that contains different practical channel access scenarios, so that researchers can benchmark the performance of different approaches. We have published all the channel access environments and the real data trace considered in this work online\footnote{https://github.com/ANRGUSC/MultichannelDQN-channelModel}. This might serve as an initial dataset for researchers to use. \bibliographystyle{IEEEtran}
\section{INTRODUCTION} R Coronae Borealis (RCB) stars are low mass, hydrogen deficient, carbon rich yellow supergiants associated with very late stages of stellar evolution. They are characterised by their unusual light variability, showing rapid aperiodic light dimming of several magnitudes in the optical with a slow return to maximum light, and exhibit IR excess \citep{hz-PG1938,hz-C1996}. Six hydrogen deficient carbon stars (HdCs) are known. They are spectroscopically similar to RCBs, but most of them exhibit neither light declines nor IR excess \citep{hz-W1967,hz-C2012}, the exception being HD\,175893, which shows IR excess \citep{hz-T2012}. DY\,Persei and DY\,Persei type (DY\,Per type) stars, however, are a peculiar class of cooler carbon stars that also show dramatic, but slower, light declines than RCBs, with a more symmetric rise in time. Some IR excess \citep{hz-Ak1994,hz-A2001} is also observed for these stars, with somewhat warmer circumstellar shells than RCBs \citep{hz-T2009}. DY\,Per type star candidates are stars whose light curves and positions in the $J-H$ versus $H-K$ diagram resemble those of the DY\,Per type stars found so far, but which lack spectroscopic observations or confirmation. We introduce here the term DY\,Per suspect, i.e., carbon stars showing spectroscopic features similar to DY\,Per type stars, but whose light curves have not shown the characteristic symmetric decline events, rather large photometric variations that could also be due to dust obscuration. The effective temperatures of DY\,Per type stars appear to be at the cooler end of those of the known RCB stars \citep{hz-KB1997}. DY\,Per type stars may be hydrogen deficient, given the absence of hydrogen Balmer lines in their spectra; nevertheless, their hydrogen-deficiency status is not yet clear, due to their cooler effective temperatures and the absence of flux in the region of the CH $G$ band at 4300\AA\ \citep{hz-KB1997,hz-Z2007,hz-Y2009}.
Until now, in addition to DY\,Persei itself, only seven Galactic DY\,Per type stars are known \citep{hz-T2008,hz-T2013,hz-M2012}. \citet{hz-A2001} and \citet{hz-T2004,hz-T2009} reported around 27 Magellanic DY\,Per type stars and candidates, with more possible suspects given by \citet{hz-S2009} through their OGLE-III light curves. Due to the small number of known DY\,Per type stars and candidates, it is a challenge to characterise these stars and investigate any possible connection with the RCBs. We therefore also include DY\,Per suspect stars in our study. Two scenarios have been proposed to explain the evolutionary origin of an RCB star: first, the double-degenerate merger (DD) scenario, involving the merger of an He and a C-O white dwarf \citep{hz-W1984,hz-SJ2002,hz-Pet2006}; and second, the final helium shell flash (FF) scenario \citep{hz-I1983}, involving a single star evolving through the planetary nebula (PN) phase or post asymptotic giant branch (post-AGB) phase and contracting towards the white dwarf sequence. The ignition of the helium shell in a post-AGB star, say a cooling white dwarf, results in what is known as a late or very late thermal pulse \citep{hz-H2001} that ingests the thin hydrogen rich outer layer, making the star hydrogen deficient, and the star expands to supergiant dimensions \citep{hz-F1977,hz-R2007}. Based on the fluorine \citep{hz-P2008}, $^{13}$C \citep{hz-H2012}, and $^{18}$O \citep{hz-C2005,hz-C2007,hz-G2009,hz-G2010} abundances in RCB and HdC stars, a consensus is now emerging for the DD scenario; however, a small fraction of these stars may be produced by the FF scenario \citep{hz-C2011}.
\begin{table*}[ht] \caption{Log of observations of RCB and HdC stars as well as DY\,Persei and the DY\,Per affiliated stars.} \centering \small \begin{threeparttable} \begin{tabular}{lllll}\hline\label{Table-1} Star name & Date of observation & $K$- mag.\tnote{*} & S/N & Star type\\ (SIMBAD) & & (SIMBAD) & (2.29 \textit{$\mu$}) & \\ \hline HD 137613 & 16 April 2016 & 5.25 & 70 & HdC \\ Z\,UMi & 01 May 2016 & 7.3 & 55 & RCrB \\ SV\,Sge & 18 November 2016 & 5.9 & 110 & RCrB \\ ES Aql & 18 November 2016 & 7.9 & 105 & RCrB \\ DY Persei & 04 October 2014, 16 January 2016, & 4.4 & 105 & DY\,Per prototype\\ & 17 January 2016, 06 November 2016 & \\ ASAS J065113+0222.1 & 16 January 2016, 17 January 2016, & 4.9 & 80 & DY\,Per type star\tnote{a}\\ & 23 February 2017 & \\ ASAS J040907-0914.2 & 16 January 2016, 17 January 2016, & 3.6 & 95 & DY\,Per suspect\tnote{b}\\ (EV Eri) & 06 November 2016, 23 February 2017 & \\ ASAS J052114+0721.3 & 16 January 2016, 06 November 2016, & 2.19 & 110 & DY\,Per suspect\tnote{b}\\ (V1368 Ori) & 23 February 2017 & \\ ASAS J045331+2246.5 & 04 October 2014, 16 January 2016, & 2.84 & 80 & DY\,Per suspect\tnote{b} \\ & 17 January 2016 & \\ ASAS J054635+2538.1 & 16 January 2016, 17 January 2016, & 4.3 & 90 & DY\,Per suspect\tnote{b} \\ (CGCS 1049)& 18 March 2016 & \\ ASAS J053302+1808.0 & 16 January 2016, 17 January 2016 & 5.6 & 90 & DY\,Per suspect\tnote{b}\\ (IRAS 05301+1805) & & & & \\ ASAS J191909-1554.4 & 01 July 2016 & 1.06 & 105 & DY\,Per type star\tnote{a} \\ (V1942\,Sgr) & & & & \\ \hline \end{tabular} \begin{tablenotes} \item [a] \citet{hz-M2012} \item [b] \citet{hz-T2013} \item [*] \textit{Reported from the Two Micron All Sky Survey Point Source Catalogue \citep{hz-Cu2003}} \end{tablenotes} \end{threeparttable} \end{table*} Along with hydrogen deficiency, the main spectral characteristics of RCBs and HdCs that distinguish them from normal AGB and post-AGB stars are the presence of very high amounts of $^{18}$O and weak or no presence of $^{13}$C in their
atmospheres. Using the NIR $K$-band spectra of these stars, \citet{hz-C2005,hz-C2007,hz-G2009,hz-G2010} found that the isotopic ratios of $^{16}$O/$^{18}$O, derived from the relative strengths of the observed $^{12}$C$^{16}$O and $^{12}$C$^{18}$O molecular bands, range from 0.3 to 20. Note that the typical value of $^{16}$O/$^{18}$O is $\sim$500 in the solar neighbourhood and 200 to 600 in the Galactic interstellar medium \citep{hz-G2002}. Also, the $^{12}$C/$^{13}$C ratios for several RCBs and all HdCs are significantly higher than the CN-equilibrium value of 3.4 \citep{hz-A2001,hz-H2012}. Thus, the low values of $^{16}$O/$^{18}$O and high values of $^{12}$C/$^{13}$C in both HdCs and RCBs make it clear that these two classes of carbon-rich and hydrogen-poor stars are indeed closely related. By contrast, the possible evolutionary connection of DY\,Per type stars with RCBs/HdCs, or with normal carbon rich AGBs, remains to be explored. \citet{hz-Z2007} reported a high resolution spectrum of DY\,Persei showing significant hydrogen deficiency with a high $^{12}$C/$^{13}$C ratio, like most RCBs. It is to be noted that the low resolution spectra of DY\,Per type variables in the Magellanic Clouds show significant enhancement of $^{13}$C from the isotopic Swan bands at about 4700\AA, but the $^{13}$CN band at 6250\AA\ is not seen \citep{hz-A2001,hz-T2009}. Also, the enhancement of $^{13}$C in the atmospheres of Magellanic DY\,Per type stars is reported for only 9 cases out of 27 \citep{hz-A2001,hz-T2004,hz-T2009}. Hence, there seems to exist a mixed $^{12}$C/$^{13}$C isotopic ratio among Magellanic DY\,Per type stars. In this paper we search for the contributing spectral features involving $^{18}$O and $^{13}$C in the low resolution $H$- and $K$-band NIR spectra of the observed DY\,Per type stars and DY\,Per suspects.
Note that our DY\,Per suspects are the cool carbon stars taken from Table 5 of \citet{hz-T2013}, which were rejected as RCB candidates due to enhanced $^{13}$C in their spectra and no clear rapid decline events in their light curves. However, we selected these stars based on their similarity with DY\,Per type stars, as given in the description by \citet{hz-T2013}, verbatim: ``Their light curves show variations up to 2 mag, but with no clear signs of a fast decline. Because they all present large photometric oscillations of $\sim$ 0.8 mag amplitude and their spectra do not show clear signs of presence of hydrogen, they should be considered as DY\,Per star candidates." The objective is to explore possible connections of DY\,Per type stars and DY\,Per suspects with classical carbon stars or with RCBs/HdCs. Our observations, analysis, and results are discussed in the following sections. \section{OBSERVATIONS AND REDUCTIONS} $H$- and $K$-band spectra of our target stars were obtained with the TIFR Near Infrared Spectrometer and Imager (TIRSPEC) \citep{hz-N2014} mounted on the Himalayan Chandra Telescope (HCT) at Hanle, Ladakh, India. The log of observations is given in Table \ref{Table-1} for the RCB and HdC stars as well as all the DY\,Per affiliated stars, and in Table \ref{Table-2} for the normal and cool carbon stars. \begin{table*}[ht] \centering \caption{Log of observations of normal cool giants selected from \citet{hz-J1992,hz-TA2007}} \begin{tabular}{llllr}\hline\label{Table-2} Star name & Date of observation & $K$- mag.
& S/N & Star type \\ (SIMBAD) & & (SIMBAD) & (2.29 \textit{$\mu$})& \\ \hline Arcturus & 01 May 2016 & -2.9 & 85 & K \\ HD 156074 & 14 October 2014 & 5.28 & 125 & R \\ HD 112127 & 17 January 2016, 18 March 2016 & 4.17 & 170 & R \\ BD+06 2063 & 16 April 2016 & 4.1 & 205 & S \\ HR 337 & 17 January 2016 & -1.85 & 120 & M \\ HD 64332 & 16 April 2016 & 2.3 & 185 & S \\ HD 123821 & 18 March 2016 & 6.3 & 110 & R \\ HR 3639 & 16 April 2016 & -1.7 & 130 & S \\ HD 58521 & 18 March 2016 & -0.44 & 140 & S \\ HD 76846 & 17 January 2016, 18 March 2016 & 6.6 & 130 & R \\ V455\,Pup & 17 January 2016, 16 April 2016, & 5.27 & 80 & C \\ & 23 February 2017 & \\ TU Gem & 18 March 2016 & 0.78 & 85 & N \\ Y CVn & 16 April 2016 & -0.81 & 80 & J \\ RY Dra & 16 April 2016 & 0.19 & 75 & J \\ \hline \end{tabular} \end{table*} Spectra were recorded in cross-dispersed mode at two dithered positions, with multiple exposures at each position and an average exposure time of 100s per frame. The frames were combined to improve the signal-to-noise ratio (SNR) (see Tables \ref{Table-1} and \ref{Table-2}). The recorded spectra in the $H$-band appear noisier than in the $K$-band due to lower photon counts. For stars fainter than $K$-magnitude 6, frames of 500s exposure were taken and combined to improve the SNR. After each set of star exposures, three continuum lamp spectra and an argon lamp spectrum were obtained. To remove the telluric lines from each star's spectrum, rapidly rotating O/B type dwarfs (telluric standards) were observed during each observing run in the direction of the programme stars. The slit setting mode S3, with a slit width of 1.97", was available. For this slit setting the average resolving power at the $H$- and $K$-band central wavelengths is $\sim$900, as measured from the FWHM of the clean emission lines of the comparison lamp spectrum. The data obtained were then corrected for dark current and cosmic rays.
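Two quantities quoted above can be checked with a short sketch, assuming uncorrelated, photon-noise-limited frames (so the SNR grows as $\sqrt{N}$ when $N$ frames are combined; this scaling is an assumption, not a statement from the reduction) and the quoted resolving power $R\sim900$:

```python
import math

def combined_snr(snr_single, n_frames):
    # For uncorrelated noise, averaging N frames improves SNR by sqrt(N).
    return snr_single * math.sqrt(n_frames)

def resolution_element(wavelength_um, resolving_power=900.0):
    # R = lambda / d_lambda  =>  d_lambda = lambda / R
    return wavelength_um / resolving_power

snr = combined_snr(snr_single=30.0, n_frames=9)   # 9 frames triple the SNR
dlam = resolution_element(2.29)                   # ~0.0025 micron near the K-band CO heads
```

At $R\sim900$ the resolution element near 2.29 $\mu$m is about 25\,\AA, which is why the isotopic CO band heads are only partially resolved in these spectra.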
The Image Reduction and Analysis Facility (IRAF) software package was used to reduce the recorded spectra. The dithered frames were combined to correct for background emission lines using the ABBA dithering technique. A master flat was made by combining the continuum lamp spectra, and the object frames were flat corrected using standard IRAF tasks. A one-dimensional (1D) spectrum was then extracted and wavelength calibrated using the argon lamp spectrum. The wavelength-calibrated star's spectrum was then divided by a telluric standard's spectrum, to remove the telluric absorption lines, using the task TELLURIC in IRAF. All 7 DY\,Per affiliated stars we observed (2 DY\,Per type stars and 5 DY\,Per suspects) were taken from the catalogues of stars presented by \citet{hz-T2013} and \citet{hz-M2012}. Our selection was limited by the location of the observatory, HCT, from where we could observe only stars north of $-25^{\circ}$ declination. Three cool RCBs, Z\,UMi, SV\,Sge and ES\,Aql, and one HdC star, HD\,137613, were also observed. Except for Z\,UMi, the two RCBs were observed at about their maximum light, as verified from the AAVSO database (\href{https://www.aavso.org/}{www.aavso.org}); Z\,UMi was in its recovery phase ($\Delta $V$ \sim $3), and so its observed spectrum is particularly noisy. We have also observed a variety of normal giants/supergiants covering the effective temperature range of the programme stars. These were taken from \citet{hz-J1992,hz-TA2007}, spanning K giants through N- and J-type cool carbon stars. These stars, along with the HdC/RCBs, were observed to compare and confirm the presence/absence of $^{13}$C$^{16}$O and $^{12}$C$^{18}$O features in DY\,Per type stars and DY\,Per suspects. \section{CO BANDS AND OVERVIEW OF THE SPECTRA} The band head wavelengths of $^{12}$C$^{16}$O are available in the literature for both the $H$- and $K$-band regions.
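The isotopic band head computation used in this section follows the standard diatomic scaling of \citet{hz-H1950}: under isotopic substitution the vibrational constants scale as $\omega_e \to \rho\,\omega_e$ and $\omega_e x_e \to \rho^2\,\omega_e x_e$ with $\rho=\sqrt{\mu/\mu_i}$. A minimal numerical sketch, with approximate $^{12}$C$^{16}$O constants and integer atomic masses (illustrative values, not the exact constants we adopted from \citet{hz-M1975}):

```python
import math

# Approximate ground-state constants of 12C16O (cm^-1); illustrative only.
OMEGA_E, OMEGA_EXE = 2169.81, 13.29
MU_12C16O = 12.0 * 16.0 / (12.0 + 16.0)   # reduced mass (integer masses)

def band_origin(v_hi, v_lo, rho=1.0):
    """Band-origin wavenumber of the (v_hi - v_lo) vibrational band,
    with isotopic scaling omega_e -> rho*omega_e, omega_e*x_e -> rho^2*omega_e*x_e."""
    we, wexe = rho * OMEGA_E, rho**2 * OMEGA_EXE
    return we * (v_hi - v_lo) - wexe * ((v_hi + 0.5)**2 - (v_lo + 0.5)**2)

def isotopic_band_head(head_12c16o_um, m_c, m_o, v_hi=2, v_lo=0):
    """Shift a known 12C16O band-head wavelength (micron) to the
    isotopologue (m_c, m_o) by the ratio of the band origins."""
    mu_iso = m_c * m_o / (m_c + m_o)
    rho = math.sqrt(MU_12C16O / mu_iso)
    scale = band_origin(v_hi, v_lo, rho) / band_origin(v_hi, v_lo)
    return head_12c16o_um / scale

# The first-overtone 2-0 head of 12C16O lies near 2.2935 micron:
head_13c16o = isotopic_band_head(2.2935, 13, 16)   # ~2.345 micron
head_12c18o = isotopic_band_head(2.2935, 12, 18)   # ~2.349 micron
```

Scaling the observed $^{12}$C$^{16}$O head by the band-origin ratio, rather than computing heads directly, sidesteps the rotational structure that sets the exact head position.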
We have calculated the wavelengths of the $^{13}$C$^{16}$O and $^{12}$C$^{18}$O band heads using the standard formula for the isotopic shift from \citet{hz-H1950}, taking the ground state constants of $^{12}$C$^{16}$O from \citet{hz-M1975}. We have verified our calculated band head wavelengths of $^{13}$C$^{16}$O and $^{12}$C$^{18}$O for the first overtone transition against those given by \citet{hz-C2005}, and hence applied the same procedure to calculate the second overtone band head wavelengths of $^{13}$C$^{16}$O and $^{12}$C$^{18}$O.\\ Figures \ref{fig.1} and \ref{fig.2} show the $H$-band (1.52$-$1.78 $\mu$m region) spectra of our programme stars and the comparison stars (normal giants/supergiants), respectively; the second overtone features of $^{12}$C$^{16}$O, $^{12}$C$^{18}$O and $^{13}$C$^{16}$O, including the C$_{2}$ Ballik-Ramsay system (0-0), are marked with other key features. The $H$-band spectra of HD\,156704 (normal K giant) and Z\,UMi (RCB) were very noisy and hence are not shown. The $K$-band (2.25$-$2.42 $\mu$m region) spectra of the programme stars and the comparison stars are shown in Figures \ref{fig.3} and \ref{fig.4}, respectively; the first overtone band heads of $^{12}$C$^{16}$O, $^{12}$C$^{18}$O and $^{13}$C$^{16}$O are indicated. The spectra shown in Figures \ref{fig.1},\ref{fig.2},\ref{fig.3},\ref{fig.4} are normalised to the continuum and are aligned to the laboratory wavelengths of the $^{12}$C$^{16}$O band heads. \\ \begin{figure*} \includegraphics[width=16cm,height=16.cm]{Fig_1.eps} \caption{1.52$-$1.78 $\mu$m spectra of RCBs, HdCs, DY\,Persei, and DY\,Per affiliated stars. The band head positions of $^{12}$C$^{16}$O, $^{12}$C$^{18}$O and $^{13}$C$^{16}$O and other key features are marked. The stars are ordered according to their increasing (approximate) effective temperature from bottom to top.
}\label{fig.1} \end{figure*} \begin{figure*} \includegraphics[width=16cm,height=16.cm]{Fig_2.eps} \caption{1.52$-$1.78 $\mu$m spectra of normal giants/supergiants of different spectral types, ranging from K giants at the top to cool N type carbon stars at the bottom. The band head positions of $^{12}$C$^{16}$O, $^{12}$C$^{18}$O and $^{13}$C$^{16}$O and other key features are marked.}\label{fig.2} \end{figure*} \begin{figure*} \includegraphics[width=16.cm,height=16.cm]{Fig_3.eps} \caption{2.25$-$2.42 $\mu$m spectra of RCBs, HdCs, DY\,Persei, and DY\,Per affiliated stars, with the wavelengths of $^{12}$C$^{16}$O, $^{12}$C$^{18}$O and $^{13}$C$^{16}$O indicated by vertical lines. The stars are ordered according to their increasing (approximate) effective temperatures from bottom to top. The position of the mean continuum for each spectrum is indicated by the marked line. }\label{fig.3} \end{figure*} \begin{figure*} \includegraphics[width=16.cm,height=16.cm]{Fig_4.eps} \caption{2.25$-$2.42 $\mu$m spectra of normal giants of different spectral types, ranging from K giants at the top to cool N type carbon stars at the bottom. As in Figure 3, the wavelengths of $^{12}$C$^{16}$O, $^{12}$C$^{18}$O and $^{13}$C$^{16}$O are indicated by vertical lines. The position of the mean continuum for each spectrum is indicated by the marked line.}\label{fig.4} \end{figure*} \begin{table*}[ht] \caption{Absorption depths of the first overtone CO band heads and the estimated $^{16}$O/$^{18}$O and $^{12}$C/$^{13}$C ratios of RCBs, HdC and DY\,Per affiliated stars.
} \begin{center} \small \begin{tabular}{lllllllll}\hline\label{Table.3} Star name & Star type & \multicolumn{2}{c}{$^{12}$C$^{16}$O} & & \multicolumn{2}{c}{$^{12}$C$^{18}$O}& $^{16}$O/$^{18}$O & $^{12}$C/$^{13}$C \\ \cmidrule{3-4} \cmidrule{6-7} & & 2-0 & 3-1 & & 2-0 & 3-1 & & \\ \midrule HD 137613 & HdC & 0.174 & 0.127 & & 0.2 & 0.148 & $\sim$ 0.86 $\pm$ 0.02 & $>$ 15 \\ SV Sge & RCB & 0.46 & 0.45 & & 0.225 & 0.22 & $\geq$ 2.05 $\pm$ 0.01 & $>$ 45 \\ ES Aql & RCB & 0.373 & 0.362 & & 0.093 & 0.088 & $\geq$ 4 $\pm$ 0.1 & $>$ 37 \\ DY\,Persei & DY\,Persei & 0.24 & 0.19 & & 0.06 & 0.045 & $\geq$ 4 $\pm$ 0.2 & $>$ 24 \\ ASAS J045331+2246.5 & DY\,Per suspect & 0.25 & 0.22 & & 0.052 & 0.045 & $\geq$ 5 $\pm$ 0.2 & $>$ 19 \\ V1368 Ori & DY\,Per suspect & 0.275 & 0.25 & & 0.05 & 0.045 & $\geq$ 5.5 $\pm$ 0.1 & $>$ 25 \\ EV Eri & DY\,Per suspect & 0.29 & 0.22 & & 0.04 & 0.03 & $\geq$ 7.5 $\pm$ 0.2 & $>$ 20 \\ CGCS 1049 & DY\,Per suspect & 0.25 & 0.23 & & 0.025 & 0.024 & $\geq$ 10 $\pm$ 0.5 & $>$ 19 \\ IRAS 05301+1805 & DY\,Per suspect & 0.24 & 0.23 & & \nodata & \nodata & \nodata & $>$ 19 \\ ASAS J065113+0222.1 & DY\,Per type star & 0.22 & 0.20 & & \nodata & \nodata & \nodata & $>$ 15 \\ \hline \end{tabular} \end{center} \end{table*} \begin{table*}[ht] \caption{ Absorption depths of first overtone CO band heads and the estimated $^{12}$C/$^{13}$C ratios of normal and cool carbon giants.} \begin{center} \begin{tabular}{lllllll}\hline\label{Table.4} Star name & \multicolumn{2}{c}{$^{12}$C$^{16}$O} & & \multicolumn{2}{c}{$^{13}$C$^{16}$O}& $^{12}$C/$^{13}$C \\ \cmidrule{2-3} \cmidrule{5-6} & 2-0 & 3-1 & & 2-0 & 3-1 & \\ \midrule Arcturus & 0.228 & 0.205 & & 0.098 & 0.095 & $>$ 2.25 $\pm$ 0.2 \\ HD 156704 & 0.125 & 0.11 & & 0.04 & 0.032 & $>$ 3.25 $\pm$ 0.2 \\ HD 112127 & 0.215 & 0.225 & & 0.060 & 0.08 & $>$ 3.2 $\pm$ 0.4 \\ BD+062063 & 0.255 & 0.268 & & 0.126 & 0.125 & $>$ 2.05 $\pm$ 0.1 \\ HR 337 & 0.24 & 0.21 & & 0.089 & 0.087 & $>$ 2.55 $\pm$ 0.2 \\ HD 64332 & 0.33 & 0.332 & 
& 0.158 & 0.165 & $>$ 2.05 $\pm$ 0.1 \\ HD 123821 & 0.167 & 0.186 & & 0.052 & 0.083 & $>$ 2.75 $\pm$ 0.5 \\ HR 3639 & 0.33 & 0.331 & & 0.16 & 0.145 & $>$ 2.15 $\pm$ 0.2 \\ HD 58521 & 0.365 & 0.322 & & 0.13 & 0.102 & $>$ 3 $\pm$ 0.2 \\ HD 76846 & 0.184 & 0.186 & & 0.088 & 0.101 & $>$ 1.9 $\pm$ 0.2 \\ V455\,Pup & 0.266 & 0.243 & & 0.066 & 0.07 & $>$ 3.75 $\pm$ 0.3 \\ TU\,Gem & 0.312 & 0.293 & & 0.068 & 0.075 & $>$ 4.2 $\pm$ 0.3 \\ Y\,CVn & 0.262 & 0.2512 & & 0.1632 & 0.16 & $>$ 1.6 $\pm$ 0.2 \\ RY\,Dra & 0.215 & 0.213 & & 0.123 & 0.118 & $>$ 1.75 $\pm$ 0.1 \\ \hline \end{tabular} \end{center} \end{table*} \section{PRELIMINARY RESULTS AND DISCUSSION} The observed stars show strong first overtone bands of $^{12}$C$^{16}$O in the $K$-band region (see Figures \ref{fig.3} and \ref{fig.4}). As reported by \citet{hz-C2007}, prominent first overtone bands of $^{12}$C$^{18}$O are seen, with no detection of $^{13}$C$^{16}$O, in the two cool RCBs, SV\,Sge and ES\,Aql, and in the HdC star HD\,137613 (see Figure \ref{fig.3}); the Z\,UMi spectrum is particularly noisy but suggests the presence of $^{12}$C$^{18}$O bands. As expected, a close inspection of the $K$-band spectra of the observed normal cool giants clearly shows the presence of $^{13}$C$^{16}$O bands, in addition to the prominent $^{12}$C$^{16}$O bands, with no detection of $^{12}$C$^{18}$O bands (see Figure \ref{fig.4}). We have used these HdC/RCB and cool giant spectra as comparisons in looking for $^{12}$C$^{18}$O and $^{13}$C$^{16}$O bands in the observed spectra of DY\,Persei, DY\,Per type stars and DY\,Per suspects. Among DY\,Persei and the seven DY\,Per affiliated stars, we find a suggestion of $^{12}$C$^{18}$O bands, with no clear detection of $^{13}$C$^{16}$O bands, in five stars: DY\,Persei, EV\,Eri, V1368\,Ori, ASAS J045331+2246.5, and CGCS\,1049 (see Figure \ref{fig.3}).
In Figure \ref{fig.3}, the spectra of two stars, ASAS J065113+0222.1 and IRAS\,05301+1805, do not show any suggestion of $^{12}$C$^{18}$O or $^{13}$C$^{16}$O bands within the detection limit. In the spectrum of V1942\,Sgr (see Figure \ref{fig.3}), numerous features are observed, and we could not confirm the presence or absence of either the $^{12}$C$^{18}$O or the $^{13}$C$^{16}$O bands. Based on the observed $K$-band spectra of the HdC/RCBs, DY\,Persei, and DY\,Per affiliated stars, an attempt is made to estimate $^{16}$O/$^{18}$O values by measuring the absorption depths of the $^{12}$C$^{16}$O and $^{12}$C$^{18}$O band heads using the 2-0 as well as the 3-1 bands. This exercise is more difficult for the DY\,Per type stars, since the spectra of cool stars are full of absorption features, and the blending of these features with the identified $^{12}$C$^{18}$O band heads (in such low resolution spectra) is surely a possibility. Nevertheless, the exact wavelength matches allowed us to confirm the presence of $^{12}$C$^{18}$O bands. As these bands are not completely resolved, and the bands from the more abundant isotopic species are possibly saturated, the estimated $^{16}$O/$^{18}$O values are lower limits in most cases (see Table \ref{Table.3}). We avoided using synthetic spectra for the analysis, as it is extremely difficult to identify all the contributing features in the observed low resolution spectra. As all the DY\,Per affiliated stars observed here were reported to show a strong presence of $^{13}$C in their respective discovery papers \citep{hz-M2012,hz-T2013}, we expected enhanced $^{13}$C$^{16}$O depths in the $K$-band spectra. We have estimated $^{12}$C/$^{13}$C ratios from the $K$-band absorption depths of the $^{12}$C$^{16}$O and $^{13}$C$^{16}$O band heads.
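A minimal sketch of the depth-ratio estimate just described, assuming unsaturated, optically thin bands (the reason the tabulated results are quoted as lower limits); the noise-floor value is illustrative, not our measured detection limit:

```python
def ratio_lower_limit(depths_main, depths_iso, noise_floor=0.01):
    """Average the 2-0 and 3-1 band-head depth ratios as a crude
    abundance-ratio lower limit. When the isotopic band sits at or
    below the noise floor, only a noise-limited lower limit results.
    Illustrative sketch, not the measurement pipeline used here."""
    ratios = []
    for d_main, d_iso in zip(depths_main, depths_iso):
        d_iso_eff = max(d_iso, noise_floor)   # depth may be noise-limited
        ratios.append(d_main / d_iso_eff)
    return sum(ratios) / len(ratios)

# DY Persei 12C16O vs 12C18O depths from Table 3 (2-0 and 3-1 bands):
o16_o18 = ratio_lower_limit([0.24, 0.19], [0.06, 0.045])   # ~4.1
```

The value recovered from the Table 3 depths for DY\,Persei is consistent with the tabulated limit $^{16}$O/$^{18}$O $\geq 4 \pm 0.2$.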
Since the observed depth at the $^{13}$C$^{16}$O band heads is more or less comparable with the noise levels of the observed spectra, we conclude that there is no clear suggestion of $^{13}$C$^{16}$O in these spectra within the detection limit. Nevertheless, we have estimated lower limits on the $^{12}$C/$^{13}$C ratios from the $K$-band spectra of these stars, as given in Table \ref{Table.3}. The depth at the $^{13}$C$^{16}$O 2-0 band head region is used, as it has a better signal than the other regions. We find that our estimated lower limit on $^{12}$C/$^{13}$C for DY\,Persei is in line with the range of values, 20-50, obtained by \citet{hz-KB1997}. We have also estimated, for comparison, the $^{12}$C/$^{13}$C ratios for the observed normal and cool carbon giants (see Table \ref{Table.4}). The very low lower limits on the $^{12}$C/$^{13}$C ratios measured for these carbon giants clearly show enhanced $^{13}$C, in contrast to the DY\,Per affiliates. The actual $^{12}$C/$^{13}$C ratios of the normal and cool carbon giants are expected to be higher than the estimated lower limits. In the $H$-band region, the observed spectra do show the second overtone bands of $^{12}$C$^{16}$O, but most of these are affected by noise. The $^{12}$C$^{16}$O features are much weaker in the $H$-band than in the $K$-band; hence, detection of $^{12}$C$^{18}$O and $^{13}$C$^{16}$O in the $H$-band spectra is extremely difficult. Figures \ref{fig.1} and \ref{fig.2} show the atomic features as well as the wavelength positions of the $^{12}$C$^{18}$O and $^{13}$C$^{16}$O band heads. \section{CONCLUSIONS} Our analysis shows the presence of strong $^{12}$C$^{18}$O band heads in RCB and HdC stars. The HdC star HD\,137613 and the two RCB stars SV\,Sge and ES\,Aql are in common with \citet{hz-C2007}. Our $^{16}$O/$^{18}$O estimates for these three stars are in fair agreement with the values given in column (4) of Table 2 of \citet{hz-C2007}.
For DY\,Persei and the relatively cooler DY\,Per affiliated stars, our conclusions are less clear; however, there seems to be an indication of $^{18}$O in the atmospheres of DY\,Persei and 4 DY\,Per suspects, with no $^{13}$C (within the detection limit), which is the main isotopic signature of RCB/HdC stars. In the case of the DY\,Per type star V1942\,Sgr, numerous features are observed, and we could not confirm the presence or absence of either the $^{12}$C$^{18}$O or the $^{13}$C$^{16}$O bands. Note that the $K$-band spectra of all the normal carbon stars, with S/N and effective temperatures similar to those of the DY\,Per affiliates, show prominent $^{13}$C$^{16}$O bands. On the contrary, one DY\,Per type star, ASAS J065113+0222.1, and one DY\,Per suspect, IRAS\,05301+1805, show little or no presence of either $^{18}$O or $^{13}$C in their atmospheres. Whether DY\,Per type stars are the cooler cousins of RCBs, or just counterparts of normal carbon rich AGBs suffering ejection events, can therefore be better explored through the analysis of high resolution $H$- and $K$-band spectra. Our preliminary analysis suggests that a quartet of suspects, along with DY\,Persei itself, show prominent $^{12}$C$^{18}$O bands and no $^{13}$C$^{16}$O bands, which is in sharp contrast to the normal carbon stars and much like the RCBs, and builds a strong case to dig deeper into high resolution spectra of these stars to find their evolutionary origins. \acknowledgments It is our pleasure to thank the referee for a constructive report that helped us considerably in the presentation of this work. We would like to thank the staff at IAO, Hanle and the remote control station at CREST, Hosakote for assisting in the observations. We thank Dr. J. P. Ninan for his valuable suggestions regarding observations and reductions. We also thank Prof. Rajat Chowdhury for giving us valuable inputs in calculating the isotopic shifts.\\ \bibliographystyle{apj}
\section{Introduction} Quantum Chromodynamics (QCD) predicts that at sufficiently high temperatures and/or densities the quarks and gluons confined inside hadrons are liberated into a medium of quarks and gluons, known as the Quark-Gluon Plasma (QGP). Over the decades, a large number of activities have been directed towards the production and identification of this new state of matter, theoretically and experimentally, in ultra-relativistic heavy-ion collisions (URHICs) with increasing center of mass energies ($\sqrt{s}$) at the BNL AGS, CERN SPS, BNL RHIC, and CERN LHC experiments. For non-central events in the above URHICs, a very strong magnetic field is generated at the very early stages of the collisions, due to the very high relative velocities of the spectators with respect to the fireball~\cite{Skokov:IJMPA'2009,Voronyuk:PRC83'2011}. Depending on the centrality of the collisions, the strength of the magnetic field may vary from $m_{\pi}^2$ ($\sim 10^{18}$ Gauss) at RHIC to 10 $m_{\pi}^2$ at LHC. In extreme cases the magnetic field may even reach 50 $m_{\pi}^2$ at LHC, and much larger values $\sim 10^{5}~m_\pi^2$ were reached in the early universe during the electroweak phase transition~\cite{Vachaspati:PLB265'1991}. Naive classical estimates of the lifetime of these magnetic fields show that they exist only for a small fraction of the lifetime of the QGP. However, depending on the transport properties of the plasma, the magnetic field may remain strong throughout the lifetime of the QGP~\cite{Tuchin:PRC83'2010}. One probe particularly suited to inferring the properties of nuclear matter under extreme conditions is heavy quarkonium. The heavy quark and antiquark ($Q \bar Q$) pairs are produced in URHICs on a very short time-scale $t_{\rm prod}\sim 1/2m_{Q}$. Subsequently they develop into a physical resonance over a formation time $t_{\rm form}\sim 1/E_{\rm bind}$, where $E_{\rm bind}$ is the binding energy of the state.
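For orientation, the two timescales just introduced can be evaluated with $\hbar c \approx 0.197$ GeV\,fm and illustrative values $m_c \approx 1.27$ GeV and $E_{\rm bind}(J/\psi) \approx 0.64$ GeV (a vacuum binding energy; these numbers are assumptions for the sketch, not fixed by the text):

```python
HBARC = 0.197327  # GeV * fm (conversion between natural units and fm)

def t_prod(m_q_gev):
    # t_prod ~ 1 / (2 m_Q), converted to fm/c
    return HBARC / (2.0 * m_q_gev)

def t_form(e_bind_gev):
    # t_form ~ 1 / E_bind, converted to fm/c
    return HBARC / e_bind_gev

tp = t_prod(1.27)   # ~0.08 fm/c for a charm pair
tf = t_form(0.64)   # ~0.31 fm/c for the J/psi (illustrative vacuum binding)
```

Both timescales are short compared with the few-fm/c lifetime of the fireball, which is why quarkonia can form inside the medium and probe it.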
They traverse the plasma and later the hadronic matter before decaying into dileptons, which are eventually detected. This long journey is fairly `hazardous' for the quarkonium because, even before the formation of resonances, the cold nuclear matter may dissociate the nascent $Q \bar Q$ pairs. Moreover, even after the resonances are formed, they react to the presence of a thermal medium through their reduced binding energies. Since the mass of the charm or bottom quark is larger than the temperature of the QGP created in current heavy-ion collisions, {\em viz.} $T_{\rm LHC}\leq 0.6$ GeV, heavy quarkonium bound states may survive while traversing the collision zone. In that process they accumulate information about their environment, which is imprinted on their depleted production yields and may open up a direct window on the vital properties of the deconfined medium, {\em namely} the temperature and the presence of strong magnetic fields. Therefore the goal of the present work is to understand theoretically the properties of heavy quarkonium under the realistic conditions of an environment at high temperature in the presence of a strong magnetic field. Our understanding of heavy quarkonium has made a significant step forward with the derivation of effective field theories (EFTs) from the underlying theory, QCD, {\em such as} non-relativistic QCD (NRQCD) and potential NRQCD, which are obtained by separating the intrinsic scales of heavy quark bound states ({\em e.g.} mass, velocity, binding energy) as well as the additional scales of the thermal medium ({\em e.g.} $T$, $gT$, $g^2T$) in the weak-coupling regime, in overall comparison with $\Lambda_{QCD}$.
However, the separation of scales in EFT is not always evident under the realistic conditions achieved at URHICs, so one needs first-principles lattice QCD simulations, which study quarkonia in a medium without recourse to potential models, rather through the spectral functions extracted from Euclidean meson correlation functions~\cite{Alberico:PRD77'2008}. However, the reconstruction of the spectral functions turns out to be very difficult because the temporal extent decreases at large temperature. Therefore studies of quarkonia using potential models at finite temperature complement the lattice studies. For a long time phenomenological potential models had been deployed in the literature, which were not based on systematic derivations from QCD. The color singlet free energy extracted from the correlation function of Polyakov loops, which is computed in first-principles lattice QCD simulations, has been commonly advocated as an appropriate potential to study quarkonia in vacuum as well as in a medium. Perturbative computations at high temperatures show that the $Q \bar Q$ potential becomes complex: the real part gets screened due to the presence of deconfined color charges~\cite{Matsui:PLB178'1986}, while the imaginary part~\cite{Escobedo:PRA78'2008,Brambilla:PRD78'2008,Laine:JHEP0703'2007, Beraudo:NPA806'2008} gives rise to the thermal width of the resonance. The physics of quarkonium dissociation in a medium has been refined over the last decade: the resonances were initially thought to be dissociated when the screening becomes sufficiently strong that the potential becomes too weak to hold the $Q \bar Q$ together. Nowadays the dissociation is thought to be mainly due to the broadening of the width of the resonances in a medium.
The broadening arises mainly either from the inelastic parton scattering process mediated by spacelike gluons, known as Landau damping~\cite{Laine:JHEP0703'2007}, or from the gluo-dissociation process in which the color singlet state goes over into a color octet state by absorbing a hard thermal gluon~\cite{Brambilla:JHEP1305'2013}. The latter process becomes dominant when the temperature of the medium is smaller than the binding energy of the particular resonance. Recently one of us estimated the imaginary component of the potential perturbatively in resummed thermal field theory, where the inclusion of a confining string term makes the magnitude of the imaginary component smaller~\cite{Lata:PRD89'2014}, compared to the medium modification of the perturbative term alone~\cite{Adiran:PRD79'2009}. Even in the strong coupling limit, the potential extracted through the AdS/CFT correspondence develops an imaginary component beyond a critical separation of the $Q \bar Q$ pair~\cite{Binoy:PRD92'2015,Binoy:PRD91'2015}. In a similar calculation, a generalized Gauss law relates the numerically simulated values of the potential to the in-medium permittivity of the QCD medium, conventionally parameterized by the so-called Debye mass~\cite{Rothkopf:PRL'2012}. The discussions referred to above were limited to the simplest possible setting in heavy-ion phenomenology, fully central collisions, but most events occur with a finite impact parameter, where extremely large magnetic fields may be produced. Recently some of us have explored the effects of a strong magnetic field on the properties of heavy quarkonium by computing the real part of the $Q \bar Q$ potential \cite{Mujeeb:EPJC77'2017}, as well as on QCD thermodynamics~\cite{SRath:JHEP1712'2017}.
However, such a purely real potential alone cannot capture the physics relevant for the in-medium modification of quarkonium states, so we aim to estimate the imaginary component of the potential perturbatively in the real-time formalism and investigate how the properties of quarkonia in a thermal QCD medium are affected by the presence of a strong magnetic field. Recently there was an attempt to derive the complex heavy quark potential in an external strong magnetic field using a generalised Gauss law \cite{Balbeer:1711'2017}, where the imaginary part of the in-medium permittivity, $\epsilon (k)$, is heuristically obtained by simply replacing the Debye mass in the absence of the magnetic field by the one in the presence of a strong magnetic field. In our calculation, we instead compute meticulously the imaginary part of the retarded gluon self-energy due to the quark loop and the gluon loops separately, similar to the calculation of the real part. It is found that the imaginary part due to the quark loop is proportional to the square of the quark masses and does not depend on the temperature directly (apart from the Debye mass). As a result, the momentum dependence will be completely different from their calculation \cite{Balbeer:1711'2017}, which can be understood by the dimensional reduction caused by the effect of the magnetic field on the quark dynamics, {\em not} the gluon dynamics. Our work proceeds as follows. First we calculate the resummed retarded/advanced and symmetric gluon propagators by evaluating the real and imaginary parts of the retarded/advanced gluon self-energies for a thermal QCD medium in the presence of a strong magnetic field in subsections 2.1 and 2.2, respectively. Next, the real and imaginary components of the dielectric permittivity are obtained by taking the static limit of the resummed retarded and symmetric propagators, whose inverse Fourier transforms give the real and imaginary parts of the heavy quark potential in coordinate space in subsections 3.1 and 3.2, respectively.
The real part of the potential is thereafter used in the numerical solution of the Schr\"{o}dinger equation to obtain both the energy eigenvalues and eigenfunctions, from which we calculate the sizes and binding energies of quarkonia in subsection 4.1. In Section 4.2 we deal with the imaginary component in time-independent perturbation theory to estimate the medium-induced thermal width of the resonances, which facilitates the study of dissociation due to Landau damping. Finally we conclude in Section 5. \section{The resummed gluon propagator in strong magnetic field} In the Keldysh representation of the real-time formalism, the retarded (R), advanced (A) and symmetric (S) propagators are written as linear combinations of the components of the matrix propagator: \begin{eqnarray} \label{2a6} D_R^0 = D_{11}^0 - D_{12}^0 ~,~ D_A^0 = D_{11}^0 - D_{21}^0 ~,~ D_S^0 = D_{11}^0 + D_{22}^0~. \end{eqnarray} A similar representation can also be worked out for the components of the self-energy matrix through the retarded ($\Pi_R$), advanced ($\Pi_A$) and symmetric ($\Pi_S$) self-energies. The resummation of the above propagators is done by the Dyson-Schwinger equation. For the static potential, we need only the temporal (longitudinal) component of the propagator, whose evaluation is easier in the Coulomb gauge, so the temporal component of the retarded/advanced propagator is resummed as \begin{eqnarray} D^L_{R,A}=D^{L(0)}_{R,A}+D^{L(0)}_{R,A}\Pi^L_{R,A}{D}^L_{R,A}~, \label{3b2} \end{eqnarray} whereas the resummation for the symmetric propagator is done as \begin{equation} D^L_{S}=D^{L(0)}_{S}+D^{L(0)}_{R} \Pi^L_R D^L_{S}+D^{L(0)}_{S}\Pi^L_{A} {D}^L_{A}+ D^{L(0)}_{R}\Pi^L_{S}{D}^L_{A}~.
\label{symmetric} \end{equation} Thus the resummed retarded (advanced) and symmetric propagators can be expressed explicitly by the self-energies as \begin{eqnarray} D^{L}_{R,A}(k)&=&\frac{1}{{\textbf{k}}^2-\rm{Re} \Pi^{L}_{R}(k)\mp i \rm{Im} \Pi^{L}_{R}(k)},\label{lon_ret} \\ D^{L}_{S}(k)&=&(1+2n_{B}(k_0))~{\rm{sgn}}(k_0) \left(D^{L}_{R}(k)-D^{L}_{A}(k) \right), \label{lon_sym} \end{eqnarray} where the factor, $(1+2n_{B}(k_0)) {\rm{sgn}} (k_0)$ and the difference, $\left(D^{L}_{R}(k)-D^{L}_{A}(k)\right)$ can be obtained as ~\cite{Adiran:PRD79'2009,Magaret:EPJC7'1999} \begin{eqnarray} (1+2n_{B}(k_0)) {\rm{sgn}} (k_0) &=& \frac{2T}{k_0}, \label{factor}\\ \left(D^{L}_{R}(k)-D^{L}_{A}(k)\right)&=&\frac{2i\rm{Im} \Pi^{L}_{R}(k)} {\big[\textbf{k}^2-\rm{Re} \Pi^{L}_{R}(k)\big]^2+\big[\rm{Im} \Pi^{L}_{R} (k)\big]^2}, \label{retarded_advanced} \end{eqnarray} with the following identities \begin{eqnarray} \rm{Re}\Pi^{L}_{R}(k) &=&\rm{Re}\Pi^{L}_{A} (k),\\ \rm{Im} \Pi^{L}_{R}(k)&=&-\rm{Im} \Pi^{L}_{A}(k). \end{eqnarray} It is thus learnt that only the real and imaginary parts of the longitudinal component of retarded self-energy suffice to calculate the resummed retarded, advanced and symmetric propagator in a strongly magnetized hot QCD medium. For calculating the retarded self-energy, we need to evaluate the matrix propagator in a thermal medium in the presence of strong magnetic field for quarks and gluons. The magnetic field affects only the quark propagator {\em via} the projection operator and its dispersion relation. So we will now revisit the vacuum quark propagator in a strong magnetic field and then thermalize it in a real-time formalism, which in turn computes the gluon self-energy for the quark-loop diagram. 
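The algebra relating Eqs.~(\ref{lon_ret})--(\ref{retarded_advanced}) can be verified numerically. The following Python sketch (all numerical values are illustrative assumptions, not results of this work) builds $D^L_R$ and $D^L_A$ from an assumed self-energy and checks the closed form of their difference as well as the symmetric propagator with the factor $(1+2n_B)\,{\rm sgn}(k_0)\to 2T/k_0$:

```python
import numpy as np

# Assumed illustrative inputs (GeV units): temperature, momentum, a small
# frequency to probe the static limit, and self-energy components.
T, k, k0 = 0.2, 0.5, 1e-3
re_pi, im_pi = -0.04, -2.0e-4      # Re/Im Pi^L_R(k), assumed values

# Resummed retarded/advanced propagators, Eq. (lon_ret)
D_R = 1.0 / (k**2 - re_pi - 1j * im_pi)
D_A = 1.0 / (k**2 - re_pi + 1j * im_pi)

# Closed form of the difference, Eq. (retarded_advanced)
diff = 2j * im_pi / ((k**2 - re_pi)**2 + im_pi**2)

# Symmetric propagator, Eq. (lon_sym), with (1+2n_B)sgn(k0) -> 2T/k0
D_S = (2.0 * T / k0) * (D_R - D_A)

print(np.isclose(D_R - D_A, diff))   # the two expressions agree
```

Note that $D^L_S$ comes out purely imaginary, as expected of a symmetric propagator built from the difference $D^L_R-D^L_A$.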
We start with the vacuum quark propagator in coordinate-space, using Schwinger's proper-time method \cite{Schwinger:PR82'1951} \begin{equation}\label{S(X,Y)} S(y,y^\prime)=\phi(y,y^\prime)\int\frac{d^4p}{(2\pi)^4}e^{-ip(y-y^\prime)} S(p)~, \end{equation} where the phase factor, $\phi(y,y^\prime)$, defined by \begin{equation} \phi(y,y^\prime)=e^{i|q_f|\int^{y}_{y^\prime} A^\mu(\zeta)d\zeta_\mu}, \end{equation} is a gauge-dependent quantity, which is responsible for the breaking of translational invariance. For a single fermion line, it is possible to gauge away the phase factor by an appropriate gauge transformation for a symmetric gauge in a magnetic field directed along the $z$ axis. Thus one can express the propagator in the momentum-space \cite{Tsai:PRD10'1974,Chyi:PRD62'2000} as an integral over the proper-time ($s$) \begin{equation} iS(p)=\int_0^\infty \frac{1}{eB}\frac{ds}{\cos(s)} e^{-is\left[m^2_f-p_{\|}^2+\frac{\tan(s)}{s} p_\bot^2\right]} \left[(\cos(s) +\gamma_{1}\gamma_{2} \sin(s))(m_f+\gamma\cdot p_{\|}) -\frac{\gamma\cdot p_{\bot}}{\cos (s)}\right], \label{sch_mom} \end{equation} which can be expressed more conveniently in a discrete form by the associated Laguerre polynomials \begin{equation} iS(p)=\sum_n\frac{-id_n(\alpha)D+d^\prime_n(\alpha) \bar{D}}{p_L^2+2neB} +i\frac{\gamma\cdot p_\bot}{p_\bot^2}~, \label{prop_lag} \end{equation} with the notations of Ref.~\cite{Chyi:PRD62'2000}. In the strong magnetic field (SMF) limit both the parallel and perpendicular components of the quark momenta are smaller than the magnetic field (i.e.\ $p^2_\parallel, p^2_\perp \ll |q_fB|$, with $|q_fB| \gg T^2$), so the transitions to the higher Landau levels ($n\geq1$) are suppressed.
Therefore only the lowest Landau level (LLL) is populated, hence the vacuum propagator for quarks in the momentum-space for the LLL ($n=0$) becomes \begin{equation} iS_0(p)=\frac{(1+\gamma^{0}\gamma^{3}\gamma^{5}) (\gamma^{0}p_{0}-\gamma^{3}p_{z}+m_f)} {p_{\parallel}^2-m^2_f+i\epsilon} e^{-\frac{p_{\perp}^2} {\mid q_fB\mid}}, \label{vacrop} \end{equation} where $m_f$ and $q_f$ are the mass and electric charge of the $f^{\rm th}$ flavour, respectively. However, in the real-time formalism, the propagator in a thermal medium acquires a ($2\times{2}$) matrix structure~\cite{Magaret:EPJC7'1999} \begin{equation} S(p) = \begin{pmatrix} S_0(p)+n_F(p_0) (S^\ast_0 (p) -S_0(p)) & \sqrt{n_F(p_0) (1-n_F(p_0))} (S^\ast_0 (p) -S_0(p)) \\ -\sqrt{n_F(p_0) (1-n_F(p_0))} (S^\ast_0 (p) -S_0(p)) & -S^\ast_0(p)+n_F(p_0) (S^\ast_0 (p) -S_0(p)) \end{pmatrix}~, \label{mag_prop} \end{equation} where $n_F(p_0)$ is the quark distribution function. Thus, the $11$- and $12$-components can be read off as \begin{eqnarray} iS_{11}(p)&=&\Bigg[\frac{1}{{p_{\parallel}^2-m_f^2+ i\epsilon}}+2\pi i n_F(p_0)\delta(p_{\parallel}^2-m_f^2)\Bigg] (1+\gamma^{0}\gamma^{3}\gamma^{5})(\gamma^{0}p_{0}-\gamma^{3} p_{z}+m_f) e^{\frac{-p_{\perp}^2}{\mid q_fB \mid}}, \label{propagator_11}\\ S_{12}(p)&=&-2\pi\sqrt{n_F(p_0)(1-n_F(p_0))} \delta(p_{\parallel}^2-m_f^2)(1+\gamma^{0}\gamma^{3}\gamma^{5}) (\gamma^{0}p_{0}-\gamma^{3}p_{z}+m_f)e^{\frac{-p_{\perp}^2} {\mid q_fB \mid}}.
\label{propagator_12} \end{eqnarray} However, for gluons, the form of the vacuum propagator remains unaffected by the magnetic field, {\em i.e.} \begin{eqnarray}\label{1 G.P.} D^{\mu\nu}_0(p)=\frac{ig^{\mu\nu}}{p^2+i\epsilon} ~.\end{eqnarray} Similar to the thermalization of the quark propagator, the gluon propagator at finite temperature also takes a matrix structure in the real-time formalism~\cite{Magaret:EPJC7'1999} in terms of the gluon distribution function, $n_B(p_0)$ \begin{equation} D^{\mu \nu}(p) = \begin{pmatrix} D^{\mu \nu}_0(p)+n_B(p_0) (D^{\ast \mu \nu}_0 (p) +D^{\mu \nu}_0(p)) & \sqrt{n_B(p_0) (1+n_B(p_0))} (D^{\ast \mu \nu}_0 (p) +D^{\mu \nu}_0(p)) \\ \sqrt{n_B(p_0) (1+n_B(p_0))} (D^{\ast \mu \nu}_0 (p) +D^{\mu \nu}_0(p)) & D^{\ast \mu \nu}_0(p)+n_B(p_0) (D^{\ast \mu \nu}_0 (p) +D^{\mu \nu}_0(p)) \end{pmatrix}. \label{temp_prop} \end{equation} The above matrices (\ref{mag_prop}, \ref{temp_prop}) will be used to calculate the retarded/advanced and symmetric self-energies due to the quark loop and gluon loops, respectively, in the next section. \subsection{Real part of retarded gluon self energy in real-time formalism} In the Keldysh representation of the real-time formalism, the evaluation of the real part of the retarded gluon self-energy requires only the real part of the 11-component of the self-energy matrix \begin{equation} {\rm{Re}} \Pi_R(k)={\rm{Re}} \Pi_{11}(k). \end{equation} There are four Feynman diagrams, {\em namely} the tadpole, gluon-loop, ghost-loop and quark-loop diagrams, which contribute to the gluon self-energy. Since only the quark-loop diagram is affected by the presence of the magnetic field in the thermal medium, we first calculate the quark loop in the SMF limit and then obtain the thermal contributions due to the remaining gluon-loop diagrams.
Using the matrix propagator (\ref{mag_prop}) for quarks in the real-time formalism, the $11$-component of the gluon self-energy matrix for the quark-loop (omitting the prefix $11$) can be written as {\small{ \begin{eqnarray} \nonumber\Pi^{\mu\nu}(k) &=& i\frac{g^2} {2}\sum_f\int\frac{{d^2p_\perp}{d^2p_\parallel}}{(2\pi)^4} {\rm{Tr}}\left[\gamma^\mu (1+\gamma^{0}\gamma^{3}\gamma^{5})\left(\gamma^0p_0-\gamma^3p_z+m_f\right)\gamma^\nu (1+\gamma^{0}\gamma^{3}\gamma^{5})\left(\gamma^0q_0-\gamma^3q_z+m_f\right)\right] \nonumber\\ && \times\left[\frac{1}{p^2_\parallel-m^2_f+i\epsilon}+ 2\pi{i}n_F\left(p_0\right)\delta\left(p^2_\parallel-m^2_f\right)\right] e^{-\frac{p^2_\perp}{|q_fB|}}\nonumber \\ && \times\left[\frac{1}{q^2_\parallel-m^2_f+i\epsilon}+ 2\pi{i}n_F\left(q_0\right)\delta\left(q^2_\parallel-m^2_f\right) \right]e^{-\frac{q^2_\perp}{|q_fB|}}, \end{eqnarray} }} where the factor $1/2$ arises due to the trace in color space and the momentum $(p+k)$ is replaced by $q$. Here we use the one-loop running QCD coupling ($g=\sqrt{4 \pi \alpha_s (eB) }$), which, in the strong magnetic field limit, runs exclusively with the magnetic field, because the most dominant scale for quarks is no longer the temperature of the medium but rather the scale associated with the strong magnetic field. Precisely this dependence of the running coupling on the magnetic field alone has recently been explored by Ferrer {\em et al.}, who decompose the momentum into components parallel and perpendicular to the magnetic field~\cite{Ferrer:PRD91_2015}.
Since the momentum integration factorizes into components parallel and perpendicular to the direction of the magnetic field, the component that depends only on the transverse momentum is given by \begin{eqnarray}\label{P.C.G.S.E.} \Pi_\perp (k_\perp) =\frac{\pi|q_fB|}{2} e^{-\frac{k^2_\perp}{2|q_fB|}}~, \end{eqnarray} and the self-energy that depends only on the parallel component of the momentum, $\Pi^{\mu\nu} (k_\parallel)$, is decomposed into vacuum and medium contributions \begin{eqnarray} \label{G.S.E.S.M.F.A.} \Pi_{\parallel}^{\mu\nu}(k_\parallel) \equiv \Pi^{\mu\nu}_{\rm vacuum} (k_\parallel)+\Pi^{\mu\nu}_n(k_\parallel) +\Pi^{\mu\nu}_{n^2}(k_\parallel). \end{eqnarray} The vacuum contribution and the medium contributions, having linear and quadratic dependence on the distribution function, respectively, are given by \begin{eqnarray} \Pi^{\mu \nu}_{\rm{vacuum}}(k_{\parallel}) &=& \frac{ig^2}{2(2\pi)^4} \int dp_0 dp_z L^{\mu\nu}\left[\frac{1}{(p_{\parallel}^2 -m_f^2+i\epsilon) (q_{\parallel}^2-m_f^2+i\epsilon)}\right], \label{G.S.E.V.} \\ \Pi^{\mu \nu}_n(k_{\parallel}) &=& -\frac{g^2} {2(2\pi)^3}\int dp_0 dp_z L^{\mu\nu}\left[n_F(p_0) \frac{ \delta(p_{\parallel}^2-m_f^2)}{(q_{\parallel}^2 -m_f^2+i\epsilon)} +n_F(q_0) \frac{\delta(q_{\parallel}^2-m_f^2)} {(p_{\parallel}^2-m_f^2+i\epsilon)}\right], \label{G.S.E.S.D.} \\ \Pi^{\mu \nu}_{n^2} (k_{\parallel}) &=& -\frac{ig^2}{2(2\pi)^2} \int dp_0 dp_z L^{\mu\nu}\left[n_{F}(p_0)n_{F}(q_0) \delta(p_{\parallel}^2-m_f^2) \delta(q_{\parallel}^2-m_f^2)\right]~, \label{G.S.E.D.D.} \end{eqnarray} where the trace over $\gamma$-matrices, $L^{\mu \nu}$, is \begin{eqnarray} L^{\mu\nu}=8\left[p^\mu_\parallel{q^\nu_\parallel} +p^\nu_\parallel{q^\mu_\parallel}-g^{\mu\nu}_\parallel\left ((p_\parallel\cdot{q}_{\parallel}) -m^2_f\right)\right] ~.\end{eqnarray} Now we calculate the real part of the vacuum contribution (\ref{G.S.E.V.}) as~\cite{Mujeeb:EPJC77'2017} \begin{equation}
{\rm{Re}}~\Pi^{\mu\nu}_{\rm{vacuum}}(k_\parallel)=\left(g_{\parallel}^{\mu\nu} -\frac{k_{\parallel}^{\mu}k_{\parallel}^{\nu}}{k_{\parallel}^2} \right)\Pi(k_\parallel^2), \label{self_vacuum} \end{equation} where the form factor, $\Pi (k_\parallel^2)$, is given by \begin{eqnarray} \Pi (k_\parallel^2)=\frac{g^2}{2\pi^3}\sum_{f} \left[\frac{2m_{f}^2} {k_{\parallel}^2} \left(1-\frac{4m_{f}^2}{k_{\parallel}^2}\right)^{-1/2} \ln \left\lbrace \frac{{\Big(1-\frac{4m_{f}^2} {k_{\parallel}^2}\Big)}^{1/2}+1} {{\Big(1-\frac{4m_{f}^2}{k_{\parallel}^2} \Big)}^{1/2}-1} \right\rbrace +1\right]. \end{eqnarray} Thus, multiplying the transverse momentum dependent part (\ref{P.C.G.S.E.}) by the parallel momentum dependent component (\ref{self_vacuum}) and taking the static limit ($k_0=0$, $k_x, k_y, k_z \rightarrow 0$), the longitudinal component of the vacuum part in the limit of massless flavours becomes \begin{equation} {\rm{Re}}~\Pi^L_{\rm{vacuum}} =-\frac{g^2}{4\pi^2}\sum_{f}|q_{f}B|~, \label{Massless case} \end{equation} whereas for the physical quark masses, it vanishes \begin{equation} \rm{Re}~\Pi^L_{\rm{vacuum}} = 0. ~\label{Massive_case} \end{equation} Next, the real part of the thermal contribution having linear dependence on the distribution function in the static limit for massless quarks can be obtained~\cite{Mujeeb:EPJC77'2017} as \begin{equation}\label{Massless (M.C)} {\rm{Re}}~\Pi^L_n=\frac{g^2}{4\pi^2}\sum_{f} |q_{f}B|-\frac{g^2}{8\pi^2}\sum_{f}|q_{f}B| ~,\end{equation} and for the physical quark masses, it becomes \begin{equation}\label{Massive} {\rm{Re}}~\Pi^L_n=-\frac{g^2}{4\pi^{2}T}\sum_{f}|q_{f}B| \int^\infty_0dp_z \frac{e^{\beta\sqrt{p^2_z+m^2_f}}}{\left(1+e^{\beta\sqrt{p^2_z +m^2_f}}\right)^2}~. \end{equation} The medium contribution having quadratic dependence on the distribution function (\ref{G.S.E.D.D.}) does not yield any contribution to the real part, i.e. \begin{eqnarray} {\rm{Re}}~\Pi^{\mu \nu}_{n^2}(k_{\parallel})=0~.
\end{eqnarray} Thus the vacuum (\ref{Massless case}) and medium contributions (\ref{Massless (M.C)}) are combined to give the longitudinal component due to the quark loop in the limit of massless quarks \begin{equation} {\rm{Re}}~\Pi^L_{\rm {quark~loop}}=-\frac{g^2}{8\pi^2}\sum_{f} |q_{f}B| ~,\label{pi00} \end{equation} which depends on the magnetic field only in the SMF limit ($eB\gg T^2$) and is independent of temperature even in the thermal medium. The above form has also been obtained through different approaches~\cite{Fukushima:PRD93'2016,Bandyopadhyay:PRD94'2016,Mujeeb:EPJC77'2017}. Similarly for the physical quark masses, the vacuum (\ref{Massive_case}) and medium contributions (\ref{Massive}) due to the quark loop are added to give the longitudinal component in the static limit \begin{equation} {\rm{Re}}~\Pi^L_{\rm {quark~loop}}=-\frac{g^2}{4\pi^2T} \sum_{f}|q_{f}B|\int^\infty_0dp_z \frac{e^{\beta\sqrt{p^2_z+m^2_f}}}{\left(1+e^{\beta\sqrt{p^2_z +m^2_f}}\right)^2}~, \label{pi00m} \end{equation} which now depends on both the magnetic field and the temperature. However, it becomes independent of temperature beyond a certain temperature~\cite{Mujeeb:EPJC77'2017}. We will now calculate the retarded/advanced gluon self-energy tensor due to the gluon loops using the $11$-component of the matrix propagator for gluons (\ref{temp_prop}) in a thermal medium. The longitudinal component of the same~\cite{Lata:PRD89'2014} is obtained in HTL perturbation theory as \begin{eqnarray} \Pi^L_{\rm {gluon~loops}}(k)={g^{\prime}}^2 T^2 \left(\frac{k_{0}}{2\textbf{k}}\ln\frac {k_{0}+\textbf{k} \pm i\epsilon}{k_{0}-\textbf{k} \pm i\epsilon}-1\right)~, \label{self_energy_gluon} \end{eqnarray} with the prescriptions $+i\epsilon $ ($ -i\epsilon $) for the retarded and advanced self-energies, respectively.
Here we take $g^\prime=\sqrt{4 \pi \alpha_s^\prime (T)}$ as the one-loop strong running coupling, where the dominant scale for the gluonic degrees of freedom is the temperature, so the renormalization scale is taken as $2 \pi T$. Thus the real part of the longitudinal component due to the gluon loops in the static limit reduces to~\cite{Lata:PRD89'2014} \begin{equation}\label{gluonloop} {\rm{Re}}~\Pi^L_{\rm{ gluon~loops}}=-{g^{\prime}}^2 T^2~. \end{equation} The Debye mass is then obtained from the static limits of the quark-loop (\ref{pi00}) and gluon-loop (\ref{gluonloop}) contributions for massless quarks, \begin{equation} m_{D}^2={g^{\prime}}^2 T^2+\frac{g^2}{8\pi^2} \sum_f |q_{f}B|~. \label{debye_massless} \end{equation} Therefore the collective behaviour of the thermal medium in the presence of the magnetic field is affected both by the temperature and by the strong magnetic field, mainly through the gluon-loop and quark-loop contributions, respectively. Similarly for the physical quark masses, the Debye mass is obtained as \begin{eqnarray} m_{D}^2={g^{\prime}}^2 T^2 +\frac{g^2}{4\pi^2T}\sum_f|q_fB|\int^\infty_0dp_z \frac{e^{\beta\sqrt{p^2_z+m^2_f}}}{\left(1+e^{\beta\sqrt{p^2_z +m^2_f}}\right)^2}~. \label{debye_massive_1} \end{eqnarray} \begin{figure}[h] \begin{center} \includegraphics[width=6.5cm,height=6.5cm]{debye.eps} \caption{Variation of the Debye mass with temperature for different strengths of the magnetic field.} \end{center} \end{figure} To see the competition between the temperature and the magnetic field, we have plotted the Debye mass as a function of temperature for different strengths of the magnetic field in Figure 1. At lower temperatures the magnetic field contributes more to the screening mass than the temperature, whereas, as the temperature increases within the SMF limit ($eB \gg T^2$), the thermal part plays a more dominant role than the magnetic field, unless the magnetic field is sufficiently strong.
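The Debye masses (\ref{debye_massless}) and (\ref{debye_massive_1}) are easy to evaluate numerically. The sketch below uses assumed illustrative values for the couplings, the quark mass and $|q_fB|$ (the running of $g$ and $g^\prime$ is not modeled); in the massless limit the $p_z$ integral equals $T/2$, so the routine must reproduce the closed form (\ref{debye_massless}):

```python
import numpy as np

def debye_mass_sq(T, g, gp, quarks):
    """Debye mass squared, Eq. (debye_massive_1): a gluon-loop term plus a
    p_z integral per flavour. `quarks` is a list of (m_f, |q_f B|) pairs;
    the couplings g, gp are treated as fixed inputs (assumption)."""
    pz = np.linspace(0.0, 60.0 * T, 200001)   # integration grid
    dp = pz[1] - pz[0]
    md2 = gp**2 * T**2                        # gluon-loop contribution
    for m_f, qfB in quarks:
        E = np.sqrt(pz**2 + m_f**2)
        # e^{beta E}/(1+e^{beta E})^2 rewritten as 1/(4 cosh^2(beta E/2))
        f = 0.25 / np.cosh(0.5 * E / T)**2
        integral = dp * (f.sum() - 0.5 * (f[0] + f[-1]))  # trapezoid rule
        md2 += g**2 * qfB / (4.0 * np.pi**2 * T) * integral
    return md2

# Massless check: the integral is T/2, recovering Eq. (debye_massless)
T, g, gp, qfB = 0.2, 2.0, 2.0, 0.3            # illustrative values (GeV units)
massless = debye_mass_sq(T, g, gp, [(0.0, qfB)])
closed = gp**2 * T**2 + g**2 * qfB / (8.0 * np.pi**2)
print(np.isclose(massless, closed, rtol=1e-5))
```

Increasing $m_f$ suppresses the integrand exponentially, which is the origin of the statement that the quark-loop part becomes temperature independent only beyond a certain temperature.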
Therefore, with the Debye mass (\ref{debye_massive_1}), the real part of the retarded resummed gluon propagator for realistic quark masses reads \begin{eqnarray} {\rm{Re}} D^L_R (k_0 \rightarrow 0)=\frac{1}{\textbf{k}^2+m_{D}^2}. \label{resuumed_retarded} \end{eqnarray} \subsection{Imaginary part of retarded gluon self-energy} Similar to the calculation of the real part, the imaginary part of the retarded self-energy is obtained in the real-time formalism as \begin{equation} \rm{Im}~\Pi_R(k_0, {\bf k}) = \frac{\rm{Im} \bar\Pi(k_0, {\bf k})}{\varepsilon(k_0)}, \label{ret_re} \end{equation} where ${\rm{Im}}~\bar\Pi(k_0, {\bf k})$ is derived from the off-diagonal element of the self-energy matrix as \begin{equation} \rm{Im}~\bar\Pi(k_0, {\bf k}) =-\sinh(\beta k_0/2)\rm{Im}~\Pi_{12}(k_0, {\bf k}), \label{re_12} \end{equation} and $\varepsilon(k_0)$ is the sign function. As in the evaluation of the real part, we first evaluate the contribution due to the quark loop and then calculate that of the gluon loops. The off-diagonal element (\ref{propagator_12}) of the propagator matrix (\ref{mag_prop}) for quarks gives the $12$-component of the self-energy matrix \begin{eqnarray} i\Pi_{12}^L (k) &=&-\frac{g^2}{2}\sum_f\int\frac{dp_x dp_y}{(2\pi)^2} e^{-\frac{(p+k)_\perp^2}{|q_fB|}}e^{-\frac{p_\perp^2}{|q_fB|}}\nonumber\\ && \times \int dp_0 dp_z e^{\frac{\beta|p_0 + k_0|}{2}} e^{\frac{\beta|p_0|}{2}} n_F(p_0) n_F(p_0 + k_0) L^{00} \delta\left((p_\parallel + k_\parallel)^2 - m_f^2\right) \delta(p_\parallel^2-m_f^2), \label{pi1200} \end{eqnarray} wherein we use the equality $\sqrt{n_F(p_0)(1-n_F(p_0))}=e^{\frac{\beta p_0}{2}} n_F(p_0)$, and the trace, $L^{00}$, is evaluated as \begin{eqnarray} L^{00} &=& 8\left[p_0(p+k)_0 + p_z(p+k)_z + m_f^2\right].
\label{trace} \end{eqnarray} The magnetic field again facilitates the calculation of the imaginary part by separating the momentum integration into components perpendicular and parallel to the magnetic field, \begin{equation} {\rm{Im}}~\Pi_{12}^L (k) = \frac{g^2}{2}\sum_f {\rm{Im}}~\Pi_\parallel(k_\parallel)~{\rm{Im}}~\Pi_\perp(k_\perp), \label{pi_split} \end{equation} where the transverse component, $\Pi_\perp$, is integrated out as \begin{equation} {\rm{Im}}~\Pi_\perp (k_\perp) = \frac{|q_fB|}{8\pi}e^{-\frac{k_\perp^2}{2|q_fB|}}, \label{pi_perp} \end{equation} and, after performing the $p_0$ integration, the parallel component is given by \begin{eqnarray} {\rm{Im}}~\Pi_\parallel (k_\parallel) \label{pi_parallel} &=& \int\frac{dp_z}{2\omega_p}e^{\frac{\beta|\omega_p|}{2}} e^{\frac{\beta|k_0+\omega_p|}{2}}n(\omega_p)n(k_0 + \omega_p) L^{00}(p_0 = \omega_p)\delta(k_0^2 - k_z^2 + 2k_0\omega_p - 2p_zk_z)\nonumber\\ &+& \int\frac{dp_z}{2\omega_p}e^{\frac{\beta|\omega_p|}{2}} e^{\frac{\beta|k_0-\omega_p|}{2}}n(\omega_p)n(k_0 - \omega_p) L^{00}(p_0 = -\omega_p)\delta(k_0^2 - k_z^2 - 2k_0\omega_p - 2p_zk_z). \label{pi_pz} \end{eqnarray} Thus, in the static limit ($k_0 \rightarrow 0$), the longitudinal component of the imaginary part of the retarded self-energy (\ref{ret_re}) assumes the form \begin{equation} \lim_{k_0 \to 0}~\frac{\rm{Im} \Pi_{R}^L (\textbf{k})}{k_0} =-g^2 \sum_f \frac{2 m_f^2}{T |k_z| E_{k_z/2}} n_F(E_{k_z/2}) \left(1-n_F(E_{k_z/2}) \right) {\rm{Im}}~\Pi_\perp(k_\perp), \label{impir_q00} \end{equation} with $E_{\frac{k_z}{2}} = \sqrt{m^2_f + k_z^2/4}$. In the weak coupling limit, the leading-order contribution in the SMF limit comes from momentum transfers $|\textbf{k}|^2\sim \alpha_s eB$; thus the exponential factor in the transverse component becomes unity, $\exp{(-\frac{k^2_\bot}{2\mid q_fB \mid})} \sim 1$.
Thus the transverse component of the imaginary part of the self-energy is approximated as \begin{equation} {\rm {Im}}~\Pi_\perp \approx \frac{ \mid q_fB\mid}{8\pi}~, \label{perpendicular} \end{equation} and the dispersion relation is simplified too: \begin{equation} E_{\frac{k_z}{2}} \approx \frac{\mid k_z \mid}{2}. \label{energy} \end{equation} Furthermore, using the identity \begin{equation} n_F(E_{\frac{k_z}{2}})\Big[1-n_F(E_{\frac{k_z}{2}})\Big] =\frac{1}{2\big[1+\cosh(\beta E_{\frac{k_z}{2}})\big]}, \end{equation} the imaginary component is rewritten as \begin{equation} \lim_{k_0 \to 0}~\left[ \frac{\rm{Im}~\Pi_{R}^L (\textbf{k})}{\rm k_0} \right]= -\frac{g^2}{4\pi T} \sum_f m^2_f \mid q_fB\mid \frac{1}{k_z^2\big[1+\cosh(\beta E_{\frac{k_z}{2}})\big]}. \label{self_retarded4} \end{equation} Moreover, in the SMF limit the longitudinal component ($\mid k_z \mid$) of the momentum is of the order $(\alpha_s eB)^{1/2}$, which is much smaller than the temperature ($\ll T$). Therefore, the imaginary component of the retarded self-energy due to the quark loop takes the simpler form \begin{equation} \lim_{k_0 \to 0}\Big[\frac{\rm{Im}\Pi^L_R(\textbf{k})}{k_0}\Big]_{\rm{quark~loop}}= -g^2\frac{\sum_f m^2_f\mid q_fB\mid}{8\pi T}\frac{1}{k^2_z}~. \label{self_retarded1} \end{equation} We will now similarly calculate the imaginary part due to the gluon loops from the off-diagonal element of the self-energy matrix, using the off-diagonal element of the gluon propagator matrix (\ref{temp_prop}). However, it is easier to calculate it directly from the imaginary part of the retarded self-energy of the gluon-loop contribution (\ref{self_energy_gluon}).
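Before proceeding, the passage from Eq.~(\ref{self_retarded4}) to Eq.~(\ref{self_retarded1}) can be checked numerically for a single flavour; all parameter values in the sketch below are illustrative assumptions:

```python
import numpy as np

# Assumed illustrative values (GeV units): coupling, temperature,
# quark mass, |q_f B|, and a longitudinal momentum deep in |k_z| << T.
g, T, m_f, qfB = 2.0, 0.3, 0.1, 0.2
kz = 1e-3 * T

# Eq. (self_retarded4) with E_{k_z/2} ~ |k_z|/2 (one flavour)
full = -g**2 / (4.0 * np.pi * T) * m_f**2 * qfB \
       / (kz**2 * (1.0 + np.cosh(kz / (2.0 * T))))

# Eq. (self_retarded1): the |k_z| << T limit, where cosh -> 1
simple = -g**2 * m_f**2 * qfB / (8.0 * np.pi * T * kz**2)

print(np.isclose(full, simple, rtol=1e-5))   # agreement in the SMF limit
```

For $k_z/(2T)=5\times 10^{-4}$ the cosh deviates from unity only at the $10^{-7}$ level, so the two expressions agree to that accuracy.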
Thus, using the identity \begin{equation}\label{identity} \frac{1}{x\pm{y}\pm{i\epsilon}}={\rm{P}}\left(\frac{1} {x\pm{y}}\right)\mp{i\pi{\delta(x\pm{y})}} ~,\end{equation} the imaginary part due to the gluon loops is extracted from (\ref{self_energy_gluon}) as \begin{eqnarray} \lim_{k_0 \to 0}\Big[\frac{\rm{Im}\Pi^L_R(\textbf{k})} {k_0}\Big]_{\rm{gluon~loops}}=-{g^{\prime}}^2\frac{\pi T^2}{2}\frac{1}{\textbf{k}}. \end{eqnarray} Thus the longitudinal component of the imaginary part of the gluon self-energy due to both the quark and gluon loops always factorizes into $k_0$ times $\rm{Im}\Pi^L_R (\bf k)$, so it vanishes in the static limit ($k_0 \to 0$). Therefore, using the factors in (\ref{factor}, \ref{retarded_advanced}), the resummed symmetric propagator (\ref{lon_sym}) in the static limit reduces to \begin{eqnarray} D^{L}_{S}(\textbf{k})&=&[1+2n_{B}(k_0)]~{\rm{sgn}}(k_0) \left(D^{L}_{R}(k)-D^{L}_{A}(k) \right) \nonumber\\ &=& i 4T \frac{{\rm{Im}} \Pi^{L}_{R}(\textbf{k})}{\left[\textbf{k}^2-\rm{Re} \Pi^{L}_{R}(\textbf{k})\right]^2}, \end{eqnarray} which is decomposed into the contributions due to the quark and gluon loops \begin{eqnarray} D^L_S(\textbf{k})&=& D^L_S(\textbf{k})_{\rm{quark ~loop}}+ D^L_S(\textbf{k})_{\rm{gluon~loop}} \end{eqnarray} with \begin{eqnarray} D^L_S(\textbf{k})_{\rm{quark ~loop}} &=& - \frac{ig^2}{2\pi k^2_z}\frac{\sum_f \mid q_fB\mid m^2_f}{({\textbf{k}}^2+m^2_D)^2} \label{resummed_symmetric_quark}\\ D^L_S(\textbf{k})_{\rm{gluon~loop}} &=& \frac{-2i\pi {g^{\prime}}^2 T^3}{\textbf{k}({\textbf{k}}^2+m_D^2)^2}~. \label{resummed_symmetric_gluon} \end{eqnarray} \section{Heavy quark potential} The derivation of the potential between a heavy quark $Q$ and its anti-quark ($\bar Q$) from effective field theory, {\em namely} pNRQCD, may not be feasible because the hierarchy of non-relativistic scales and thermal scales assumed in weak-coupling EFT calculations may not be satisfied.
Even in first-principles QCD studies, data of adequate quality are not yet available from the present lattice correlator studies, so one may use potential models to circumvent these problems. Since the mass of the heavy quark ($m_Q$) is very large, the requirement $m_Q \gg T \gg \Lambda_{QCD}$ is satisfied for the description of the interactions between a pair of heavy quark and anti-quark at finite temperature in the strong magnetic field limit in terms of a quantum mechanical potential. Thus we can obtain the medium-modification to the vacuum potential in the presence of the magnetic field by correcting both its short and long-distance parts with a dielectric function $\epsilon (\textbf{k})$ as \begin{equation} V(r;T,B)=\int\frac{d^3\textbf{k}}{(2\pi)^{3/2}} ({e^{i\textbf{k}\cdot\textbf{r}}-1})\frac{V(\textbf{k})}{\epsilon(\textbf{k})}, \label{pot_defn} \end{equation} where we have subtracted an $r$-independent term to renormalize the heavy quark free energy, which is the perturbative free energy of quarkonium at infinite separation. The Fourier transform, $V(\textbf{k})$, of the Cornell potential is given by \begin{equation} {V}(\textbf{k})=-\frac{4}{3}\sqrt{\frac{2}{\pi}} \frac{\alpha_s}{\textbf{k}^2}-\frac{4\sigma} {\sqrt{2 \pi} \textbf{k}^4}, \label{ft_pot} \end{equation} and the dielectric permittivity, $\epsilon(\mathbf k)$, encodes the effects of the deconfined medium in the presence of the magnetic field, which is going to be calculated next.
\subsection{The complex permittivity for a hot QCD medium in a strong magnetic field} The dielectric permittivity is defined through the static limit of the 11-component of the longitudinal resummed gluon propagator by \begin{equation} \frac{1}{\epsilon (\bf{k})}=\displaystyle {\lim_{k_0 \rightarrow 0}}{\textbf{k}}^{2}D_{11}^{L}(k_{0}, \textbf{k}), \label{dielectric} \end{equation} where the real and imaginary parts of $D^{L}_{11}(\textbf{k})$ are obtained from the retarded (or advanced) and symmetric propagators, respectively, \begin{eqnarray} \rm{Re} D^{L}_{11}(\textbf{k})&=&\rm{Re} D^{L}_{R} (\textbf{k}) \nonumber\\ \rm{Im} D^{L}_{11}(\textbf{k})&=&\frac{1}{2}{\rm{Im}}~ D^{L}_{S} (\textbf{k}) , \label{imaginary_propagator} \end{eqnarray} which in turn give the real and imaginary parts of the dielectric permittivity, respectively. Thus the static limit of the resummed retarded propagator (\ref{resuumed_retarded}) gives the real part of the dielectric permittivity \begin{equation} \frac{1}{{\rm Re}~\epsilon(\bf{k})}=\frac{\textbf{k}^2}{\textbf{k}^2+m_{D}^2}. \label{real_dielectric} \end{equation} Similarly the static limit of the resummed symmetric propagators (\ref{resummed_symmetric_quark},~\ref{resummed_symmetric_gluon}) gives the imaginary part of the dielectric permittivity, due to the quark- and gluon-loop contributions, \begin{eqnarray} \frac{1}{\rm{Im}~{\epsilon (\bf{k})}_{\rm quark~loop}}&=&-\frac{g^2}{4\pi} \sum_f m_f^2 \mid q_fB\mid \frac{{\bf k}^2}{k^2_z({\bf{k}}^2+m_D^2)^2}~, \label{img_dielectric_quark}\\ \frac{1}{{{\rm{Im}}~\epsilon (\bf k)}_{\rm gluon~loop}} &=&-{g^{\prime}}^2\pi T^3 \frac{{\bf{k}}^2}{\bf{k}({{\bf{k}}^2+m_D^2)}^2}~, \label{img_dielectric_gluon} \end{eqnarray} respectively. The real and imaginary parts of the dielectric permittivity will then give the real and imaginary parts of the complex potential, respectively, in the next subsection.
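The screening content of Eq.~(\ref{real_dielectric}) can be checked with a two-line numerical sketch (the input values below are illustrative, not parameters of this work): the inverse real permittivity $\textbf{k}^2/(\textbf{k}^2+m_D^2)$ suppresses the long-wavelength ($\textbf{k}\to 0$) modes and approaches unity at large momentum.

```python
def inv_re_eps(k2, mD2):
    # 1/Re eps(k) = k^2 / (k^2 + m_D^2), Eq. (real_dielectric):
    # Debye screening suppresses the k -> 0 (long-distance) modes
    return k2 / (k2 + mD2)
```

At $\textbf{k}^2 = m_D^2$ the medium halves the interaction, which is the familiar Debye-screening statement in momentum space.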
\subsection{Real and Imaginary Parts of the potential} The real part of the dielectric permittivity (\ref{real_dielectric}) is substituted into the definition (\ref{pot_defn}) to give the real part of the $Q \bar Q$ potential in the presence of a strong magnetic field~\cite{Mujeeb:EPJC77'2017} (with $\hat{r}=rm_{D}$) \begin{eqnarray} \rm{Re} V(r;T,B)&=&-\frac{4}{3}\alpha_s m_{D} \frac{e^{-\hat{r}}}{\hat{r}} +\frac{2\sigma}{m_{D}} \frac{(e^{-\hat{r}}-1)}{\hat{r}} \nonumber\\ &-&\frac{4}{3}\alpha_s m_{D}+\frac{2\sigma}{m_{D}}~, \label{real_potential} \end{eqnarray} where the dependence on temperature and magnetic field enters through the Debye mass. The $r$-independent terms ensure that the in-medium potential $V(r;T,B)$ reduces to the vacuum potential in the $(T, B) \rightarrow 0$ limit; they are, however, required to compute the masses of quarkonium states. The additional effect of the strong magnetic field on the potential in a hot QCD medium is displayed as a function of interparticle distance ($r$) for different strengths of the magnetic field in Figure 2, after excluding the constant terms from (\ref{real_potential}). The solid line represents the potential in a pure thermal medium ({\em i.e.} in the absence of a magnetic field), whereas the dashed and dotted lines denote the effect of strong magnetic fields, 10 and 25 $m_\pi^2$, on the thermal medium, respectively. We have found that the magnetic field ($eB$ = 10 $m_\pi^2$) affects the linear string term more than the Coulomb term; as a result, the overall potential at small and intermediate $r$ becomes less screened than the potential in a pure thermal medium. With a further increase of the magnetic field ({\em i.e.} $eB$ = 25 $m_\pi^2$), however, the potential becomes less attractive than at $eB$ = 10 $m_\pi^2$. For large $r$ the effect of the magnetic field diminishes gradually.
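As a quick consistency check of Eq.~(\ref{real_potential}), the closed form can be coded directly; in the $m_D \to 0$ limit it must reduce to the Cornell potential, which is exactly the role of the $r$-independent subtractions. A minimal Python sketch (the default values of $\alpha_s$ and $\sigma$ are illustrative placeholders, not the couplings adopted in this work):

```python
import math

def re_v(r, m_d, alpha_s=0.3, sigma=0.18):
    # Re V(r;T,B) of Eq. (real_potential); r in GeV^-1, m_d in GeV,
    # sigma (string tension) in GeV^2 -- placeholder values
    rh = r * m_d
    return (-(4.0 / 3.0) * alpha_s * m_d * math.exp(-rh) / rh
            + (2.0 * sigma / m_d) * (math.exp(-rh) - 1.0) / rh
            - (4.0 / 3.0) * alpha_s * m_d
            + 2.0 * sigma / m_d)

def cornell(r, alpha_s=0.3, sigma=0.18):
    # vacuum Cornell potential, recovered from re_v as m_d -> 0
    return -(4.0 / 3.0) * alpha_s / r + sigma * r
```

Expanding the exponentials for small $\hat{r}$ shows the constant terms cancel the $O(m_D^{-1})$ and $O(m_D)$ pieces, leaving $-4\alpha_s/3r + \sigma r + O(m_D)$.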
\begin{figure}[h] \begin{center} \begin{tabular}{c c} \includegraphics[width=6.5cm,height=6.5cm]{rpot1.eps}& \includegraphics[width=6.5cm,height=6.5cm]{ipot.eps}\\ \end{tabular} \caption{Real (left) and imaginary (right) parts of the potential} \end{center} \label{fig2} \end{figure} Similarly the imaginary part of the potential is obtained by plugging the imaginary parts of the dielectric permittivity due to the quark-loop (\ref{img_dielectric_quark}) and gluon-loop (\ref{img_dielectric_gluon}) contributions into the definition of the potential (\ref{pot_defn}). The imaginary component of the potential consists of Coulomb and string terms \begin{equation} \rm{Im}~V(r;T,B) =\rm{Im}~V_C(r;T,B)+\rm{Im}~V_S(r;T,B), \end{equation} where each term is again split into quark-loop ($q$) and gluon-loop ($g$) contributions. We first calculate the Coulomb term due to the quark loop from (\ref{img_dielectric_quark}) \begin{eqnarray} \rm{Im} V_C^q (r;T,B)&=& \int \frac{d^3\textbf{k}}{(2\pi)^{3/2}} \left(e^{i\textbf{k}\cdot\textbf{r}}-1\right)\left(-\frac{4}{3}\sqrt{\frac{2}{\pi}} \frac{\alpha_s}{{\textbf{k}}^2}\right)\left(-\frac {g^2{\textbf{k}}^2} {4\pi k^2_z}\frac{\sum_f \mid q_fB\mid m^2_f}{{({\textbf{k}}^2+m^2_D)}^2} \right)\nonumber\\ &=& \frac{\alpha_sg^2}{3\pi^2} \left(\sum_f \mid q_fB\mid m^2_f\right) I_C, \label{alpha_quark} \end{eqnarray} where the momentum integral $I_C$ is evaluated as \begin{eqnarray} I_C&=&\int_0^{\infty} \frac{dk} {{({\textbf{k}}^2+m^2_D)}^2} \int_{-1}^{1} dx\frac{(e^{ikrx}-1)}{x^2}\nonumber\\ &=&\int_0^{\infty} \frac{dk} {{({\textbf{k}}^2+m^2_D)}^2}\left[2-2\cos(kr)-2kr~Si(kr)\right]\nonumber\\ &\equiv&I_{C1}+I_{C2}+I_{C3}, \end{eqnarray} where \begin{eqnarray} I_{C1}&=&2 \int_0^{\infty} \frac{dk}{{({\textbf{k}}^2+m^2_D)}^2} =\frac{\pi}{2m_D^3}\\ I_{C2}&=&-2 \int_0^{\infty} \frac{\cos kr~dk} {{({\textbf{k}}^2+m^2_D)}^2} =- \left[\frac{\pi e^{-\hat{r}}}{2m_D^3}+ \frac{\hat{r}\pi e^{-\hat{r}}}{2m_D^3} \right] \\ I_{C3}&=&-2r \int_0^{\infty} \frac{dk~k}{{({\textbf{k}}^2+m^2_D)}^2} Si(kr)
\nonumber\\ &=&-2\frac{\hat{r}}{m_D} \int_0^{\infty} \frac{dk~k}{{({\textbf{k}}^2+m^2_D)}^2} \int_0^{kr} dx \frac{\sin x}{x}~, \end{eqnarray} respectively. Similarly the string part of the imaginary potential is \begin{eqnarray} \rm{Im} V_S^q (r;T,B) &=& \int \frac{d^3\textbf{k}}{(2\pi)^{3/2}} \left(e^{i\textbf{k}\cdot\textbf{r}}-1\right)\left(-\frac{4\sigma} {\sqrt{2\pi}{\textbf{k}}^4} \right)\left(-\frac {g^2{\textbf{k}}^2} {4\pi k^2_z}\frac{\sum_f \mid q_fB\mid m^2_f}{{({\textbf{k}}^2+m^2_D)}^2} \right)\nonumber\\ &=& \frac{\sigma g^2}{2\pi^2} \left(\sum_f \mid q_fB\mid m^2_f\right) I_S, \label{sigma_quark} \end{eqnarray} where the integral $I_S$ is evaluated as \begin{eqnarray} I_S &=&\int_0^{\infty} \frac{dk} {{\textbf{k}}^2{({\textbf{k}}^2+m^2_D)}^2} \int_{-1}^{1} dx\frac{(e^{ikrx}-1)}{x^2} \nonumber\\ &=&\int_0^{\infty} \frac{dk} {{\textbf{k}}^2{({\textbf{k}}^2+m^2_D)}^2}\left[2-2\cos(kr)-2kr~Si(kr)\right]\nonumber\\ &\equiv & I_{S1} +I_{S2}, \label{integration_sigma} \end{eqnarray} where $I_{S1}$ and $I_{S2}$ are given by \begin{eqnarray} I_{S1}&=&\int_0^{\infty} \frac{dk} {{\textbf{k}^2{({\textbf{k}}^2+m^2_D)}^2}} \left( 2-2\cos(kr) \right)\nonumber\\ &=&\frac{\pi}{2m_D^5}\left[\hat{r}e^{-\hat{r}}-3(1-e^{-\hat{r}}) +2\hat{r}\right]\\ I_{S2} &=& -2\frac{\hat{r}}{m_D} \int_0^{\infty} \frac{dk} {{{k}{({\textbf{k}}^2+m^2_D)}^2}} \int_0^{kr} \frac{\sin x}{x} dx~. \end{eqnarray} Next we calculate the imaginary part due to the gluon-loop contribution (\ref{img_dielectric_gluon}) for the Coulomb and string terms, respectively~\cite{Lata:PRD89'2014}, as \begin{eqnarray} \rm{Im} V^{g}_C( r;T,B)&=&-\frac{8{\alpha_s}^{\prime} T}{3} \int_0^{\infty} \frac{dz~z}{(z^2+1)^2} \left(1-\frac {\sin{z\hat r}}{z\hat r}\right)\label{alpha_gluon}\\ \rm{Im} V_S^{g}(r;T,B) &=&-\frac{4\sigma T }{m_D^2}\int_0^{\infty} \frac{dz}{z(z^2+1)^2} \left(1-\frac {\sin{z\hat r}}{z\hat r}\right)~, \label{sigma_gluon} \end{eqnarray} where the Debye mass is given by Eq.~(\ref{debye_massive_1}).
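The closed forms quoted for $I_{C1}$ and $I_{C2}$ (the two bracketed terms of $I_{C2}$ combine to $-(\pi/2m_D^3)(1+\hat{r})e^{-\hat{r}}$) are easy to verify numerically. A short Python sketch using a plain composite Simpson rule; the values $m_D = 0.5$ and $r = 2$ (so $\hat{r}=1$) are arbitrary test values, not parameters of this work:

```python
import math

def simpson(f, a, b, n=100000):
    # composite Simpson rule (n must be even)
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return s * h / 3.0

m_d, r = 0.5, 2.0
r_hat = r * m_d

# I_C1 = 2 * Int_0^inf dk / (k^2 + m_D^2)^2 = pi / (2 m_D^3)
i_c1_num = 2.0 * simpson(lambda k: 1.0 / (k * k + m_d * m_d) ** 2, 0.0, 60.0)
i_c1_ana = math.pi / (2.0 * m_d ** 3)

# I_C2 = -2 * Int_0^inf cos(kr) dk / (k^2 + m_D^2)^2
#      = -(pi / (2 m_D^3)) * (1 + r_hat) * exp(-r_hat)
i_c2_num = -2.0 * simpson(lambda k: math.cos(k * r) / (k * k + m_d * m_d) ** 2,
                          0.0, 60.0)
i_c2_ana = -(math.pi / (2.0 * m_d ** 3)) * (1.0 + r_hat) * math.exp(-r_hat)
```

The integrands fall off as $k^{-4}$, so truncating the integration at $k = 60$ introduces only an $O(10^{-6})$ error.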
Thus the equations (\ref{alpha_quark}) and (\ref{alpha_gluon}) give the Coulombic contribution, whereas the equations (\ref{sigma_quark}) and (\ref{sigma_gluon}) give the string contribution \begin{eqnarray} {\rm{Im}}~V_C(r;T,B)&=& \rm{Im} V^{q}_C( r;T,B)+\rm{Im} V^{g}_C( r;T,B)\\ {\rm{Im}}~V_S(r;T,B)&=& \rm{Im} V^{q}_S( r;T,B)+\rm{Im} V^{g}_S( r;T,B) \end{eqnarray} to the imaginary component of the potential, respectively. As for the real part of the potential, we have plotted the imaginary part as a function of the interquark distance in the right panel of Figure 2 to see how it is affected by the additional presence of the magnetic field. In a pure thermal medium (denoted by the solid line), both the Coulomb and string terms involve higher powers of $\hat{r}$ and counteract each other, so the overall magnitude is very small. The strong magnetic field not only reduces the power of $\hat{r}$ in both terms compared to the pure thermal medium, but also makes the Coulomb and string terms contribute additively, so the overall magnitude of the imaginary part becomes larger. This observation ultimately translates into an enhancement of the thermal widths of the resonance states due to the ambient strong magnetic field. \section{Properties of Quarkonia} \subsection{Wavefunction and Binding Energy} To investigate the properties of quarkonia in a strong magnetic field, we first solve the Schr\"odinger equation numerically, employing the real part of the potential (\ref{real_potential}), to see how the eigenstates of the $J/\psi$, $\psi^\prime$ and $\chi_c$ states in a thermal QCD medium are affected by the presence of a strong magnetic field in Figures 3--5, respectively. In the presence of the magnetic field, both the wavefunctions and the probability distributions of quarkonia become more sharply peaked compared to the case without magnetic field.
\begin{figure}[h] \begin{center} \begin{tabular}{c c} \includegraphics[width=6.5cm,height=6.5cm]{jpsi.eps}& \includegraphics[width=6.5cm,height=6.5cm]{prob-jpsi.eps}\\ \end{tabular} \caption{The wavefunction and the radial probability density of the $J/\psi$ state} \end{center} \label{fig3} \end{figure} \begin{figure}[h] \begin{center} \begin{tabular}{c c} \includegraphics[width=6.5cm,height=6.5cm]{psi2s2.eps}& \includegraphics[width=6.5cm,height=6.5cm]{prob-psi2s.eps}\\ \end{tabular} \caption{The wavefunction and the radial probability density of the $\psi^\prime$ state} \end{center} \label{fig4} \end{figure} \begin{figure}[h] \begin{center} \begin{tabular}{c c} \includegraphics[width=6.5cm,height=6.5cm]{chi.eps}& \includegraphics[width=6.5cm,height=6.5cm]{prob-chi.eps}\\ \end{tabular} \caption{The wavefunction and the radial probability density of the $\chi_c$ state} \end{center} \label{fig5} \end{figure} The medium effects encoded in the wavefunctions ($\Phi(r)$) and the corresponding probability distributions show how the average size of a given quarkonium state, $\sqrt{\langle{r_i}^2\rangle}$ =${(\int d \tau ~r^2~{\mid \Phi_i(r) \mid}^2)}^{1/2}$, is affected by a thermal medium in the absence (presence) of a magnetic field in the left (right) panel of Figure 6, respectively. The magnetic field in general causes swelling of all resonances unless the temperature is very large (Figure 7).
\begin{figure}[h] \begin{center} \begin{tabular}{c c} \includegraphics[width=6.5cm,height=6.5cm]{rt.eps}& \includegraphics[width=6.5cm,height=6.5cm]{rtb25.eps}\\ \end{tabular} \caption{The average size ($\sqrt{\langle r^2\rangle}$) of quarkonia in a pure thermal medium (left) and in a thermal medium in the presence of a strong magnetic field (right)} \end{center} \label{fig6} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=6.5cm,height=6.5cm]{rb.eps} \caption{Variation of the size of quarkonia with the magnetic field at a fixed temperature.} \end{center} \label{fig7} \end{figure} Finally we have studied how the binding energies of quarkonia change with the temperature in the absence (presence) of a magnetic field in the left (right) panel of Figure 8, respectively. The immediate observation is that the magnetic field causes the binding energy to decrease more slowly with the temperature, compared to the medium in the absence of a magnetic field. Moreover, the competition between the scales associated with the temperature and the magnetic field affects the binding of the quarkonium states differently, {\em viz.} $J/\psi$ becomes less bound and $\chi_c$ becomes more bound in the presence of the magnetic field. The binding energy, however, also decreases with the magnetic field (Figure 9).
\begin{figure}[h] \begin{center} \begin{tabular}{c c} \includegraphics[width=6.5cm,height=6.5cm]{bet.eps}& \includegraphics[width=6.5cm,height=6.5cm]{betb25.eps} \end{tabular} \caption{The binding energies of quarkonia in a pure thermal medium (left) and in a thermal medium in the presence of a magnetic field (right).} \end{center} \label{fig8} \end{figure} \begin{figure}[h] \begin{center} \includegraphics[width=6.5cm,height=6.5cm]{beb.eps} \caption{Variation of binding energies with the magnetic field} \end{center} \label{fig9} \end{figure} \clearpage \subsection{Thermal Width and Dissociation of Quarkonia} Using first-order perturbation theory, the width ($\Gamma$) has been evaluated numerically by folding the imaginary part of the potential with the eigenstates of a specific quarkonium state in the deconfined medium in the presence of the magnetic field \begin{eqnarray} \Gamma =-2\int_0^\infty \rm{Im}~V(r;B,T) |\Phi_i(r)|^2 d\tau. \label{gammaT} \end{eqnarray} We have thus computed the width as a function of temperature in the absence (presence) of a magnetic field in the left (right) panel of Figure 10, respectively. We have found that in a pure thermal medium (left panel) the width increases with the temperature faster than in the presence of a strong magnetic field (right panel). However, the magnetic field always enhances the widths of the resonances (Figure 11).
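Eq.~(\ref{gammaT}) is a simple radial fold of ${\rm{Im}}\,V$ with the probability density. The sketch below illustrates it with a normalized hydrogen-like 1s wavefunction and a toy ${\rm{Im}}\,V<0$ that saturates at large $r$; both are stand-ins for illustration, not the eigenstates or potential computed in this work:

```python
import math

def gamma_width(im_v, a0=1.0, rmax=50.0, n=20000):
    # Gamma = -2 * Int Im V(r) |Phi(r)|^2 dtau, with dtau = 4 pi r^2 dr,
    # folded with a normalized hydrogen-like 1s state (hypothetical stand-in)
    h = rmax / n
    total = 0.0
    for i in range(1, n + 1):
        r = i * h
        phi2 = math.exp(-2.0 * r / a0) / (math.pi * a0 ** 3)  # |Phi_1s|^2
        total += im_v(r) * phi2 * 4.0 * math.pi * r * r * h
    return -2.0 * total

# toy Im V that saturates at -0.05 for large r
width = gamma_width(lambda r: -0.05 * (1.0 - math.exp(-r)))
```

Since ${\rm{Im}}\,V < 0$, the fold gives $\Gamma > 0$; for this toy profile with $a_0 = 1$ the fold can be done analytically, $\Gamma = 0.1\,(1 - 8/27)$, which the numerical sum reproduces.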
\begin{figure}[h] \begin{center} \begin{tabular}{c c} \includegraphics[width=6.5cm,height=6.5cm]{gammat.eps}& \includegraphics[width=6.5cm,height=6.5cm]{gammatb25.eps} \end{tabular} \caption{Variation of the thermal widths with the temperature of the medium in the absence (left) as well as the presence (right) of a magnetic field} \end{center} \label{fig10} \end{figure} \begin{figure}[h] \begin{center} \begin{tabular}{c } \includegraphics[width=6.5cm,height=6.5cm]{gammab.eps} \end{tabular} \caption{Thermal widths of quarkonia plotted as a function of the magnetic field} \end{center} \label{fig11} \end{figure} Having studied the change of the properties of quarkonia in the presence of a magnetic field, we now investigate the effect of the strong magnetic field on the dissociation of quarkonia using the conservative criterion on the width of a resonance, $\Gamma \ge 2 \rm{Re}~{\rm{B.E.}}$ \cite{Mocsy:PRL99'2007}. We have first estimated the dissociation temperatures of quarkonia in the absence of a magnetic field in Table 1 and then done the same in the presence of magnetic fields in Table 2. We found that the dissociation temperatures increase due to the presence of a strong magnetic field, but with a further increase of the magnetic field the dissociation temperatures decrease. {\em For example}, the $J/\psi$ and $\chi_c$ are dissociated at higher temperatures, 2 $T_c$ and 1.1 $T_c$, at magnetic fields $eB \approx 6 ~{\rm{and}}~ 4~m_\pi^2$, respectively, compared to the values 1.60 $T_c$ and 0.80 $T_c$ in the absence of a magnetic field. However, the $J/\psi$ is dissociated at smaller temperatures, 1.8 $T_c$ and 1.5 $T_c$, for higher magnetic fields, $eB=27~{\rm{and}}~68~m_\pi^2$, respectively. Similarly, for a higher magnetic field, $eB=12~m_\pi^2$, the $\chi_c$ gets dissociated at the critical temperature.
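The criterion $\Gamma \ge 2\,{\rm{Re~B.E.}}$ amounts to locating the first crossing of the width curve and twice the binding-energy curve. A minimal sketch with hypothetical monotone toy profiles for $\Gamma(T)$ and ${\rm{B.E.}}(T)$ (illustrative linear forms, not the curves computed in this work):

```python
def dissociation_temperature(width, binding, t_grid):
    # the state is taken to dissolve at the first T (in units of T_c)
    # where Gamma(T) >= 2 * B.E.(T) -- the conservative criterion
    for t in t_grid:
        if width(t) >= 2.0 * binding(t):
            return t
    return None

# toy profiles: width rises with T, binding energy falls with T
ts = [1.0 + 0.01 * i for i in range(201)]
t_d = dissociation_temperature(lambda t: 0.1 * (t - 1.0),
                               lambda t: 0.2 * (2.0 - t), ts)
```

For these toy profiles the crossing $0.1(T-1) = 0.4(2-T)$ occurs at $T = 1.8\,T_c$, which the grid search recovers to within its step size.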
\begin{table}[h] \begin{center} \begin{tabular}{|l|l|} \hline State & Dissociation Temperature\\ & $T_D$~(in $T_c$) \\ \hline \hline $J/\psi$ & 1.60\\ \hline $\chi_c$ & 0.80\\ \hline $\psi(2S)$ & 0.70\\ \hline \end{tabular} \end{center} \caption{Dissociation temperatures for a thermal medium in the absence of a magnetic field} \end{table} \begin{table}[h] \begin{center} \begin{tabular}{|l|l|} \hline State & Dissociation Temperature (Magnetic field) \\ & $T_D/T_c$ ($eB$~($m^2_\pi$))\\ \hline \hline $J/\psi$ & 2.0 (6.50) \\ & 1.8 (27.0) \\ & 1.5 (68.0)\\ \hline $\chi_c$ & 1.1 (3.7) \\ & 1.0 (12) \\ \hline $\psi(2S)$ & $<1 (<m^2_\pi$)\\ \hline \end{tabular} \end{center} \caption{Dissociation temperatures for a thermal medium in the presence of a magnetic field} \end{table} \section{Conclusion} Noncentral events in ultra-relativistic heavy-ion collisions provide an opportunity to probe the properties of heavy quarkonia in the presence of a strong magnetic field. We exploit this by calculating the bound-state radii, binding energies, thermal widths, etc. of quarkonia in resummed perturbative thermal QCD in the presence of a strong magnetic field, thereby studying the dissociation of quarkonia due to Landau damping. For that purpose, using the Keldysh representation in the real-time formalism, we have first calculated the real and imaginary parts of the retarded gluon self-energy for a deconfined medium in a strong magnetic field by thermalizing the Schwinger proper-time fermion propagator, and then calculated the resummed retarded and symmetric propagators from the Schwinger-Dyson equation. As a result, the Fourier components of both the short- and long-distance parts of the $Q \bar Q$ interaction are modified by the static limit of the resummed propagators, and the inverse Fourier transform gives rise to the real and imaginary parts of the potential in coordinate space.
We have noticed that the long-distance term is more strongly affected by the magnetic field than the short-distance term; as a result, the real part of the potential becomes stronger and the imaginary part becomes larger than in the medium without a magnetic field. We have then studied the quarkonium dissociation by investigating its properties, solving the Schr\"{o}dinger equation numerically with the derived potential to check how the eigenstates and the probability distributions of quarkonia change in the strong magnetic field. With the solutions of the Schr\"{o}dinger equation we have then calculated the average sizes, binding energies and thermal widths of the resonances. We have found that the presence of the strong magnetic field causes swelling for the $J/\psi$ and squeezing for the $\chi_c$. Similarly, the binding decreases for the $J/\psi$ and increases for the $\chi_c$. Moreover, the binding energies decrease very slowly with the temperature of the medium in the presence of the magnetic field, and for a given medium the binding decreases with increasing magnetic field. On the other hand, the presence of the magnetic field causes an increase of the widths of the resonances in a hot QCD medium. The above observations on the change of the properties of quarkonia in a strong magnetic field facilitate the study of the dissociation of quarkonia due to Landau damping, and allow us to quantify the magnetic field at which a specific $Q \bar Q$ state is excited to the continuum, from the intersection of the magnetic field induced thermal width and the (twice) binding energy curve. We have noticed that the presence of a strong magnetic field increases the dissociation temperatures, but they decrease with a further increase of the magnetic field. {\em For example}, the $J/\psi$ and $\chi_c$ are dissociated at higher temperatures, 2 $T_c$ and 1.1 $T_c$, at magnetic fields $eB \approx 6 m_\pi^2$ and $4 m_\pi^2$, respectively, compared to the values 1.60 $T_c$ and 0.8 $T_c$ in the absence of a magnetic field.
\section {Acknowledgement} BKP is thankful to the CSIR (Grant No.03 (1407)/17/EMR-II), Government of India for the financial assistance.
\section{Introduction} Large-scale spectroscopic surveys of Galactic stars, e.g. the Sloan Extension for Galactic Understanding and Exploration (SEGUE; \citealt{Yanny2009}) and the LAMOST Experiment for Galactic Understanding and Exploration (LEGUE; \citealt{Deng2012, Zhao2012}), are opening a new window for the study of the formation and evolution of the Milky Way galaxy in great detail. However, unlike photometric surveys that yield, in general, complete samples of objects to a given limiting magnitude, time-consuming spectroscopic surveys often have to select targets, and are unavoidably affected by various potential target selection effects. Biases arise from the target selection, the observations, the data reduction and the parameter determination processes. To understand the relationship between a spectroscopic sample of stars with reliable estimates of parameters and the parent stellar population, one needs to study and account for the selection function. Many authors have made efforts to characterise the selection function of spectroscopic samples from various completed or on-going surveys. \citet{Cheng2012} determine the selection function of a sample of SEGUE main-sequence turn-off stars. \citet{Schlesinger2012} study and correct for the various selection biases of SEGUE G and K dwarfs. Selection effects are also considered in \citet{Bovy2012} and \citet{Liu2012} for SEGUE G dwarfs for different purposes. \citet{Nidever2014} characterise the selection effects of the Apache Point Observatory Galactic Evolution Experiment (APOGEE; \citealt{Majewski2015}) red clump stars. More recently, \citet{Stonkute2016} discuss the selection function of Milky Way field stars targeted by the Gaia-ESO survey \citep{Gilmore2012}, and \citet{Wojno2016} describe in detail the selection function of the Radial Velocity Experiment (RAVE; \citealt{Steinmetz2006}) survey.
In this paper, we analyse the selection function of the LAMOST Spectroscopic Survey of the Galactic Anticentre (LSS-GAC; \citealt{Liu2014, Liu2015,Yuan2015}). LSS-GAC is a major component of LEGUE. It was initiated in October, 2012, following a year-long Pilot Survey. It aims to observe $\sim$ 3 million stars of all colours and magnitudes of $r$ $\lesssim$17.8\,mag (18.5 for a limited number of fields) in a large (3,400\,deg$^2$) and continuous sky area centred on the Galactic anti-centre (GAC). The survey should allow us to obtain a deeper understanding of the structure, formation and evolution of the Milky Way disk(s), and of the Galaxy as a whole. Data yielded by the LSS-GAC survey are available from the LAMOST official data releases, such as LAMOST DR1 \citep{Luo2015}. The official data releases include stellar spectra and stellar parameters derived with the LAMOST Stellar parameter Pipeline (LASP; \citealt{Wu2011, Wu2014}). In addition, there are public releases of LSS-GAC value-added catalogues, the LSS-GAC DR1 \citep{Yuan2015} and LSS-GAC DR2\footnote{http://lamost973.pku.edu.cn/site/data} \citep{Xiang2016}. The LSS-GAC DR1 contains radial velocities and stellar atmospheric parameters derived with a different stellar parameter pipeline, the LAMOST Stellar Parameter Pipeline at Peking University (LSP3; \citealt{Xiang2015a}), for LAMOST spectroscopic observations between September, 2011 and June, 2013. The catalogue also presents additional information, including multiband photometry and proper motions collected from various databases, as well as extinction, distance and orbital parameters deduced with a variety of techniques. For LSS-GAC DR2, in addition to the above information, $\alpha$-element abundances, C and N abundances, and absolute magnitudes derived from the improved LSP3 \citep{Xiang2016b} are also provided for LAMOST spectroscopic observations between September, 2011 and June, 2014.
In this paper we present a detailed study of the selection function of LSS-GAC based on its most recent data release, LSS-GAC DR2 \citep{Xiang2016}, to facilitate broad and robust usage of this publicly available database. There have already been several studies trying to characterise the selection function of stars targeted by LSS-GAC. \citet{Rebassa2015} have discussed the selection function of a small sample of white dwarfs identified in an early stage of LSS-GAC in a study aimed at determining the mass function of Galactic white dwarfs. \citet{Xiang2015} have carried out a detailed analysis of the selection effects of LSS-GAC F-type turn-off stars that they used to determine metallicity gradients of the Milky Way disk. \citet{Liu2017} take the selection effects of LAMOST K giants into account when deriving the stellar number density distribution. However, a comprehensive analysis of the selection function of LSS-GAC is still lacking. The earlier efforts of \citet{Rebassa2015}, \citet{Xiang2015}, and \citet{Liu2017} all concentrate on specific samples selected from LSS-GAC. In addition, \citet{Liu2017} determine the selection function of stars on the basis of the individual LAMOST plates, which have a field of view (FoV) of $\sim$20\,deg$^2$. This approach is not suitable for LSS-GAC, which selects targets based on boxes of $\sim$ 1\,deg$^2$ in sky area. Given the steep stellar number density gradients with latitude near the Galactic plane, the selection function in different parts of a given LAMOST plate near the plane would be quite different. Furthermore, LAMOST is equipped with 16 spectrographs that have different throughputs, decreasing in general with distance from the field centre \citep{Yuan2015}. Clearly, such variations of the selection function cannot be ignored. \citet{Xiang2015} improve the work by determining the selection function spectrograph by spectrograph.
However in evaluating the selection function, they have combined stars in a given spectrograph targeted by all LAMOST plates that share the same central star. Different plates are usually observed under different weather conditions including transparency, seeing and lunar phase, and are thus likely to have different limiting magnitudes. The selection function is expected to differ significantly amongst different plates. Furthermore, in some rare cases, although two plates target the same field, the sky areas targeted by the individual spectrographs of the two plates actually differ. Thus evaluating the selection function by combining data from the different plates is inappropriate. In this work, we discuss in detail the selection function of spectroscopic measurements of stars catalogued in the LSS-GAC DR2 and give a robust way to evaluate the selection bias, by considering as many effects as possible. Mock data are used to test our technique for the selection function evaluation. The paper is organised as follows. In \S{2} we introduce briefly the LSS-GAC, including the target selection algorithm and the LSS-GAC DR2. In \S{3} we describe how we evaluate the selection function of LSS-GAC. In \S{4} we test our algorithm using mock data. We discuss the applications of our results in \S{5} and summarise in \S{6}. \section{LSS-GAC} \begin{figure*} \centering \includegraphics[width=0.88\textwidth]{starcmd.eps} \caption{Target selection for three LSS-GAC example plates all centred on RA~=~94.39059\degr and Dec~=~35.14920\degr. The grey dots represent all stars in the clean photometric samples of the field. The blue dots represent all those selected and observed objects by the three LSS-GAC plates. The upper and bottom rows are based on the APASS DR9 and XSTPS-GAC photometric catalogues, respectively. The left, middle and right columns show target selection for the VB, B and M plates, respectively. 
The observational dates and IDs of the individual plates are labeled at the top of the three columns.} \label{scmd} \end{figure*} LSS-GAC contains three different components, the main, the M31/M33 and the VB surveys. The main survey aims to observe about 3 million stars in a contiguous sky area towards the GAC (150\degr $< l< $ 210\degr, and $-$30\degr $< b< $30\degr). The M31/M33 survey observes all kinds of interesting targets in the vicinity fields of M31 and M33 within the reach of LAMOST, including supergiants, massive star clusters, planetary nebulae, $\rm H~{\scriptstyle II}$\ regions, as well as background QSOs, galaxies and foreground Galactic stars. The VB survey is designed to observe very bright (VB) stars (9 $< r <$ 14\,mag) in sky areas accessible to LAMOST ($-$10$\degr < {\rm Dec} <$ 60\degr) for times of non-ideal observing conditions, such as bright/grey lunar nights. In this section we will give a brief introduction to the LSS-GAC survey and its most recent data release, LSS-GAC DR2. \citet{Liu2014} introduce the survey design and scientific motivations of LSS-GAC. \citet{Yuan2015} present the target selection and the LSS-GAC DR1. \citet{Liu2015} give a review of the early scientific results. \citet{Xiang2015a, Xiang2015b} and \citet{Xiang2016b} describe the data reduction of LSS-GAC and \citet{Xiang2016} present the LSS-GAC DR2. \subsection{Target selection} Four different types of survey plates, namely, very bright (VB), bright (B), medium bright (M) and faint (F) plates, are designed for LSS-GAC. They are defined by different $r$-band magnitude ranges. Usually, VB plates target stars of $r < 14$\,mag. B, M and F plates target stars of $14 < r \le m_1$\,mag, $m_1 < r \le m_2$\,mag and $m_2 \le r < 18.5$\,mag, respectively. Here $m_1$ and $m_2$ are the border magnitudes separating B, M and F plates and differ slightly for different regions in the sky \citep{Yuan2015}. Typical values of $m_1$ and $m_2$ are 16.3 and 17.8\,mag, respectively.
Except for some VB plates, LSS-GAC targets are selected from the photometric catalogues of the Xuyi Schmidt Telescope Photometric Survey of the Galactic Anticentre (XSTPS-GAC; \citealt{Liu2014, Yuan2015}). XSTPS-GAC surveys an area of approximately 7,000\,deg$^2$ in the GAC area, including the M31/M33 region, using the Xuyi 1.04/1.20m Schmidt Telescope. It collects images in the SDSS $g$, $r$ and $i$ bands. XSTPS-GAC catalogues about one hundred million stars down to a limiting magnitude of $r$ $\sim$ 19.0\,mag (10$\sigma$). The photometric systematic uncertainties of XSTPS-GAC are estimated to be smaller than 0.02\,mag (Yuan et al., in preparation), and the resulting uncertainties in RA and Dec are about 0.1\,arcsec \citep{Zhang2014}. The basic strategy of the LSS-GAC target selection is to uniformly and randomly select stars from the colour-magnitude diagrams \citep{Yuan2015}. A brief summary of the target selection procedure for the LSS-GAC B, M and F plates is as follows: \begin{enumerate} \item The XSTPS-GAC photometric catalogue is used to generate a clean sample of targets for LSS-GAC, excluding stars that are either poorly detected, badly positioned, flagged as galaxies or star pairs, or contaminated by bright neighbours or by the sky background. \item The whole survey area is divided into boxes of 1\,deg side in RA and Dec. For stars in each box, ($r$, $g - r$) and ($r$, $r - i$) Hess diagrams are constructed from the clean sample. Stars of extremely blue colours, $(g - r)$ or $(r - i)$ $<-$0.5\,mag, and of extremely red colours, $(g - r)$ or $(r - i)$ $ >$ 2.5\,mag, are first selected. \item Stars in the remaining colour space are then sorted in magnitude from bright to faint. $m_1$ and $m_2$ are set to the faint-end magnitudes of the first 40 and 80 per cent of the sources, respectively. \item Stars of B, M and F plates are selected and assigned priorities, in batches of 200 stars per deg$^2$, with a Monte Carlo (random) approach.
\item The field centres of the individual LAMOST plates are defined. All LAMOST plates must be centred on bright stars ($\lesssim 8$\,mag) such that the LAMOST active optics can operate. \item For each plate, the SSS software \citep{Luo2015} is used to allocate fibres to the selected stars. \end{enumerate} The target selection of VB plates is slightly different from that of the B, M and F plates. Within the XSTPS-GAC footprint, all stars of $r \le$ 14.0\,mag from XSTPS-GAC and all stars of 9 $\le J \le$ 12.5\,mag from the Two Micron All-Sky Survey (2MASS; \citealt{Skrutskie2006}) are selected as potential targets with equal priorities. Outside the XSTPS-GAC footprint, all stars of 10.0 $\le b1 \le $ 15.0\,mag, or 10.0 $\le b2 \le$ 15.0\,mag, or 9.0 $\le r1 \le$ 14.0\,mag, or 9.0 $\le r2 \le$ 14.0\,mag, or 8.5 $\le i \le$ 13.5\,mag from PPMXL \citep{Roeser2010}, and stars of 9 $\le J \le$ 12.5\,mag from 2MASS, are selected as potential targets with equal priorities. \subsection{Observation, data reduction and the LSS-GAC DR2} The five-year long Phase~I LAMOST Regular Surveys were initiated in October 2012, following the one-year long Pilot Surveys. In each year, a sufficient number of LSS-GAC plates are planned in advance for observation. The main and the M31/M33 survey plates are observed in dark/grey nights. Typically 2 -- 3 exposures are obtained for each plate, with typical integration times per exposure of 600 -- 1200\,s, 1200 -- 1800\,s and 1800 -- 2400\,s for B, M and F plates, respectively, depending on the weather. The seeing varies between 3 -- 4\,arcsec for most plates, with a typical value of about 3.5\,arcsec. The VB plates, typically with 2~$\times$~600\,s exposures, are observed in bright nights or nights of poor observing conditions. In total, 314 plates (194 B + 103 M + 17 F) for the LSS-GAC main survey, 59 plates (38 B + 17 M + 4 F) for the M31/M33 survey and 682 plates for the VB survey had been observed by June, 2014.
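The magnitude splitting (step iii) and the Monte Carlo target draw (step iv) of the target selection procedure above can be sketched in a few lines of Python; the synthetic magnitude list is hypothetical, while the batch size of 200 stars per deg$^2$ follows the text:

```python
import random

def split_magnitudes(r_mags):
    # m1, m2 = faint-end magnitudes of the first 40% / 80% of the
    # sources sorted from bright to faint (step iii of the procedure)
    s = sorted(r_mags)
    n = len(s)
    return s[int(0.4 * n) - 1], s[int(0.8 * n) - 1]

def draw_targets(stars, per_deg2=200, seed=1):
    # step (iv): uniform random (Monte Carlo) draw of targets
    # within one 1-deg x 1-deg colour-magnitude box
    rng = random.Random(seed)
    return rng.sample(stars, min(per_deg2, len(stars)))
```

By construction, roughly 40 per cent of the clean sample falls on B plates, the next 40 per cent on M plates and the remainder on F plates, consistent with the typical $m_1 = 16.3$ and $m_2 = 17.8$\,mag quoted earlier.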
The raw spectra are first processed with the LAMOST 2D pipeline \citep{Luo2012, Luo2015} to extract the 1D spectra. The resultant 1D spectra are then processed with LSP3 to obtain radial velocities, basic atmospheric parameters ($T_{\rm eff},~{\rm log}\,g$ and [Fe/H]) as well as [$\alpha$/Fe], [C/H] and [N/H] abundance ratios, and absolute magnitudes $M_V$ and $M_{K_{\rm s}}$. The resultant parameters serve as the core data of the LSS-GAC DR2 \citep{Xiang2016b}. The most recently published LSS-GAC DR2 contains information derived from 1.8 million spectra of 1.4 million unique stars that have a spectral signal-to-noise ratio (SNR) at 4650\AA\ higher than 10, collected for the LSS-GAC main, M31/M33 and VB surveys between September 2011 and June 2014. LSS-GAC DR2 provides additional information for the individual targets, including the observing conditions, as well as absolute magnitudes, values of interstellar extinction, distances, and orbital parameters derived from the basic parameters using a variety of techniques. \section{The selection function} In this paper, we consider the selection function to be the relation between a spectroscopic sample selected from the LSS-GAC value-added catalogue with robust determinations for (certain) stellar parameters and the underlying (statistically complete to a given limiting magnitude) photometric sample. Generally, the selection effects of a selected LSS-GAC spectroscopic sample are due to the following two parts: (1) the LSS-GAC target selection algorithm, and (2) the observation, data reduction and parameter determination processes. We define the selection function, $S$, as the probability that a star is selected, observed and ends up as a valid entry in the LSS-GAC value-added catalogue.
The selection function $S$ can then be divided into two parts, namely, $S_1$, the probability that a star in a given colour-magnitude bin is selected and observed by LAMOST, and $S_2$, the probability that a LAMOST spectrum of a star in a given colour-magnitude bin is capable of delivering robust stellar parameters. In the current work, we discuss the selection function $S$ of LSS-GAC mainly based on the photometric data of XSTPS-GAC, as most of the LSS-GAC targets are selected from XSTPS-GAC. Limited by the sky coverage and bright star saturation of XSTPS-GAC, some VB stars are selected from PPMXL and 2MASS. For those plates we adopt the AAVSO Photometric All-Sky Survey (APASS; \citealt{Henden2016}) DR9 catalogue to determine the selection function of the spectroscopic measurements. The APASS survey is conducted in five filters, including the Johnson $B$ and $V$, and Sloan $g,~r,~i$ bands. It covers the entire sky and is valid for the magnitude range $7 < r < 17$\,mag. It thus serves as an excellent photometric catalogue for the LSS-GAC VB targets. The APASS DR9 contains photometric data of $\sim$ 61 million measurements covering about 99\,per\,cent of the sky. We remove all the repeated measurements in APASS DR9 (about 7\,per\,cent) and keep only those with the smallest photometric uncertainties for the individual stars. We require that all stars should have detections in the $g$ and $r$ bands in both the XSTPS-GAC and APASS DR9 catalogues. In Fig.~\ref{scmd}, we show the colour-magnitude diagrams (CMDs) for stars targeted by three LSS-GAC plates (one VB, one B and one M), with all stars from the XSTPS-GAC and APASS DR9 catalogues overplotted. The Figure shows clearly how LSS-GAC targets are selected from the photometric catalogues. For all the spectroscopic measurements with reliable parameters released in LSS-GAC DR2, values of the selection function are calculated based on the XSTPS-GAC and the APASS photometric catalogues separately.
For stars falling inside the XSTPS-GAC footprint and having magnitudes in the range of 13.0 $<$ $r$ $<$ 18.0\,mag, the selection function values calculated with the XSTPS-GAC catalogue are adopted. For stars falling outside the XSTPS-GAC footprint and having magnitudes in the range of 7.0 $<$ $r$ $<$ 15.5\,mag, or stars falling inside the XSTPS-GAC footprint but having magnitudes in the range of 7.0 $<$ $r$ $<$ 13.0\,mag, the selection function values calculated with the APASS DR9 catalogue are adopted. \subsection{Selection effect due to target selection} \begin{figure*} \centering \includegraphics[width=0.78\textwidth]{starspect.eps} \caption{Spatial distribution of stars targeted by an example LSS-GAC plate 20140305-GAC094N35B1 (left) and the layout of the 16 rectangular boxes that are used to define the parent photometric samples from the XSTPS-GAC catalogue for the 16 spectrographs of LAMOST (right; see text for details). In the left panel, stars targeted by different spectrographs are plotted with different colours. The spectrograph IDs are labelled. In the right panel, the rectangles are plotted with the same colour as the stars in the spectrographs they represent on the left.} \label{sd} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.88\textwidth]{starspecmd.eps} \caption{Distributions in CMDs of stars targeted by Spectrograph~\#1 of three example plates (blue dots and histograms) and of all stars in the representative rectangular box, for the case of using the APASS (left panel, a VB plate) or the XSTPS-GAC (middle panel, a B plate and right panel, an M plate) photometric catalogues. Red lines in the bottom panels show the grid of bins for $S_1$ evaluation.
The observational dates and IDs of the three example plates are labelled at the top of the three columns.} \label{sacmd} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.68\textwidth]{c1plot.eps} \caption{Distributions of mean values of $S_1$ in CMDs (left panels) and in the Galactic longitude-latitude ($l,~b$) plane (right panels) for stars targeted by all spectrographs of all plates included in the LSS-GAC DR2. The upper and bottom panels represent respectively the results obtained using the APASS and XSTPS-GAC photometric catalogues.} \label{f1d} \end{figure*} We first collect all spectroscopic measurements in LSS-GAC. For those measurements, the selection effects come from the LSS-GAC target selection algorithm, i.e. $S_1$. We calculate $S_1$ for the individual spectrographs of each LSS-GAC plate. In Fig.~\ref{sd}, we show the spatial distribution in the sky of the measurements in different spectrographs of a given LSS-GAC plate. The boundaries of the individual spectrographs are irregular and the areas covered by them differ from each other. To make a robust comparison between the LSS-GAC targeted sample and the parent photometric sample, we consider each plate to be composed of rectangles, roughly centred on each spectrograph. The area of each rectangle is chosen to cover the same area in the sky as the corresponding spectrograph. Thus the stellar distribution in the rectangular area would be the same as that in the corresponding spectrograph, and can be used to calculate $S_1$. The centre of the rectangular area is set to the mean position of all stars targeted in the spectrograph. The size in Declination of the rectangular area is set to 1.1\degr, while the size in Right Ascension is set to $\Omega/(1.1\cos\delta)$, where $\Omega$ is the area of the spectrograph (in units of deg$^2$) and $\delta$ the central Declination of the spectrograph. We show in the right panel of Fig.~\ref{sd} the rectangular areas for the individual spectrographs.
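The rectangle geometry described above is simple to write down; the following minimal sketch (function name is our own) returns the RA and Dec extents of the representative rectangle:

```python
import numpy as np

def rect_size(omega_sq_deg, dec_deg):
    """Representative rectangle for one spectrograph: 1.1 deg in Dec,
    with the RA extent chosen so that the rectangle covers the same
    sky area Omega (in deg^2) as the spectrograph."""
    d_dec = 1.1
    d_ra = omega_sq_deg / (d_dec * np.cos(np.radians(dec_deg)))
    return d_ra, d_dec
```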
A LAMOST plate has a circular FoV of $\sim$20\,deg$^2$, corresponding to a diameter of 5\,deg. Thus the individual spectrographs have an average area of $\sim$ 1.2\,deg$^2$, comparable to the size of the box that we used for target selection (1\degr\ $\times$ 1\degr). According to the LSS-GAC target selection strategy, for the individual spectrographs of LAMOST, the probability that a star in a given colour-magnitude bin is selected and observed by LAMOST, $S_1$, can be calculated as, \begin{equation} S_1= \dfrac{N_{\rm LAMOST}({\rm sp},C,M)} {N_{\rm phot.}({\rm sp},C,M)}, \end{equation} where ${N_{\rm LAMOST}({\rm sp},C,M)}$ and ${N_{\rm phot.}({\rm sp},C,M)}$ are respectively the number of all LAMOST measurements and the number of all photometric stars in a given colour $C$ and magnitude $M$ bin for Spectrograph sp of an LSS-GAC plate. For both the calculation with the XSTPS-GAC and APASS photometric catalogues, $C$ and $M$ correspond to $g-r$ and $r$, respectively. We adopt a colour and magnitude bin-size of $\delta (g-r)$~=~0.25\,mag and $\delta r$~=~0.2\,mag. Fig.~\ref{sacmd} shows the colour and magnitude distributions of all targets observed with LAMOST and of the underlying photometric samples, along with the grid we use to calculate $S_1$, for Spectrograph~\#1 of three example plates. The distributions of averaged values of $S_1$ in the CMD and in the Galactic longitude-latitude $(l, b)$ plane for targets observed with all spectrographs of all plates included in the LSS-GAC DR2 are shown in Fig.~\ref{f1d} for the cases of selection with APASS and selection with XSTPS-GAC. In general, brighter stars have higher values of $S_1$ than fainter ones, and stars of extreme colours have higher values of $S_1$ than those of medium colours. This result is consistent with the strategy of LSS-GAC target selection, i.e., selecting stars uniformly and randomly from the CMDs.
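The per-spectrograph evaluation of $S_1$ above amounts to the ratio of two 2D histograms; a minimal sketch follows (the bin sizes match the text, while the colour and magnitude ranges of the grid are our own assumptions):

```python
import numpy as np

def compute_s1(gr_lamost, r_lamost, gr_phot, r_phot,
               d_colour=0.25, d_mag=0.2):
    """S1 = N_LAMOST / N_phot per (g-r, r) bin for one spectrograph.
    Bin sizes follow the text: 0.25 mag in colour, 0.2 mag in r.
    Bins with no photometric stars are returned as NaN."""
    c_edges = np.arange(-1.0, 3.0 + d_colour, d_colour)
    m_edges = np.arange(7.0, 19.0 + d_mag, d_mag)
    n_lam, _, _ = np.histogram2d(gr_lamost, r_lamost, bins=[c_edges, m_edges])
    n_phot, _, _ = np.histogram2d(gr_phot, r_phot, bins=[c_edges, m_edges])
    with np.errstate(divide='ignore', invalid='ignore'):
        s1 = np.where(n_phot > 0, n_lam / n_phot, np.nan)
    return s1, c_edges, m_edges
```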
Stars of fainter magnitudes or of medium colours are much more numerous than those of brighter magnitudes or of extreme colours, so they have lower probabilities of being observed, i.e. smaller values of $S_1$. Spectrographs (of plates) at high Galactic latitudes have higher average values of $S_1$ than those at lower latitudes. Again, this is simply due to the steep decline of stellar number density (to a given limiting magnitude) with latitude. Note that there exist substantial overlaps between adjacent LSS-GAC plates. In addition, often there are more than two plates targeting the same field, covering exactly the same sky area. Nevertheless, it is extremely important to bear in mind that the values of $S_1$ calculated here are for stars targeted by individual (plate) observations of LAMOST. For any follow-up scientific applications, e.g. to derive the underlying stellar number density of a given age from the LAMOST spectroscopic sample with robust stellar parameter determinations, results from the individual observations can only be combined after correcting for the selection effects. \subsection{Selection effect due to observation, data reduction and parameter determination} \begin{table*} \centering \caption{Probabilities $S_1$, $S_2$ and $S$ for measurements catalogued in the LSS-GAC DR2.} \begin{tabular}{lrrrrrrrrr} \hline \hline Spec\_id & Date & Plate & Spectrograph & RA & Dec & $ S_1$ & $S_2$ & $S$ & Notes$^{a}$ \\ \hline 20110921-PM1-01-003 & 20110921 & PM1 & 01 & 12.90091 & 35.61838 & 0.100 & 0.600 & 0.060 & 1 \\ 20110921-PM1-01-005 & 20110921 & PM1 & 01 & 13.23111 & 35.65357 & 0.154 & 0.833 & 0.128 & 1 \\ 20110921-PM1-01-006 & 20110921 & PM1 & 01 & 12.98870 & 35.79328 & 0.182 & 1.000 & 0.182 & 1 \\ 20110921-PM1-01-007 & 20110921 & PM1 & 01 & 13.24852 & 35.89709 & 0.750 & 1.000 & 0.667 & 1 \\ ... & ... & ... & ... & ... & ... & ... & ... & ... & ...
\\ 20110921-PM1-01-011 & 20110921 & PM1 & 01 & 13.18381 & 35.87553 & 0.062 & 1.000 & 0.062 & 2 \\ 20110921-PM1-01-013 & 20110921 & PM1 & 01 & 12.93216 & 35.62636 & 0.571 & 0.750 & 0.429 & 2 \\ ... & ... & ... & ... & ... & ... & ... & ... & ... & ...\\ \hline \end{tabular}\\ \begin{flushleft} {$^a$ 1: The probabilities calculated with the XSTPS-GAC photometric catalogue. 2: The probabilities calculated with the APASS photometric catalogue.} \end{flushleft} \end{table*} \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{lstplate.eps} \caption{Spatial distributions of stars in the LAMOST focal plane (X, Y). The stars with reliable parameters determined with LSP3 are denoted with red dots. All stars selected and observed by LAMOST are denoted with grey dots. The three panels correspond to three example plates (a VB, a B and an M plate from left to right). The boundaries of the individual spectrographs are delineated by blue lines with IDs marked. The observational dates and IDs of the three plates are labelled on the top of the panels. } \label{lstp} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.88\textwidth]{cmdplate.eps} \caption{CMDs for stars observed with two selected spectrographs of three example plates. In each panel, all stars selected and observed with LAMOST (grey dots) and those with reliable parameters determined with LSP3 (blue dots) are shown. Values of colour $g-r$ and magnitude $r$ are from the XSTPS-GAC catalogue. Red lines delineate the grid of bins for $S_2$ evaluation. The observational dates and IDs of the plates are labelled on the top of the columns. The upper and bottom panels are for Spectrographs~\#15 and \#6 of those plates, respectively.
} \label{cmdsp} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.68\textwidth]{c2plot.eps} \caption{Distribution of averaged values of $S_2$ in the colour and magnitude $(C,~M)$ space (left panels) and in the LAMOST focal plane ($X,~Y$) (right panels) for all spectroscopic measurements with robust stellar parameters in LSS-GAC DR2. The top and bottom panels represent the results obtained using the APASS and XSTPS-GAC catalogues, respectively.} \label{f2d} \end{figure*} $S_1$ represents the bias induced by the LSS-GAC target selection algorithm. Accounting for $S_1$ will eliminate the discrepancy between the number of stars targeted by LAMOST and the number of stars in the parent photometric catalogue. However, there are additional effects that determine whether a spectroscopic measurement of a star targeted by LAMOST ends up in the resultant spectroscopic catalogue with robust parameters. This is related to the quality of the observation, data reduction, and parameter determination. Robust estimates of stellar parameters, including radial velocities and basic atmospheric parameters, plus any additional information, such as values of interstellar extinction and distances, can only be deduced from spectra of sufficient quality. Even for spectra of good quality, the currently developed stellar parameter pipelines (e.g. LSP3) may fail to deliver usable parameters because of a lack of suitable analysis tools. This is the case for extremely hot or cool stars. Thus one needs another quantity to account for this selection effect. The precision of stellar parameters deduced from the spectra is mainly determined by the quality of the spectra. The requirements are however different for different parameters. For example, robust radial velocities can be deduced for LAMOST spectra of $\rm SNR(4650\,\AA)$ $>$ 5, while [$\alpha$/Fe] ratios can only be used for spectra of $\rm SNR(4650\,\AA)$ $>$ 20 \citep{Xiang2016}.
Thus the selection criteria to build a spectroscopic sample with `robust' parameters are different for different applications. In the current work, we consider a commonly used sample selected from the LSS-GAC DR2, following the recommendation of \citet{Xiang2016}. We select stars that have, \begin{enumerate} \item snr\_b $>$ 10 for LAMOST spectra of good quality; \item moondis $>$ 30 to avoid moonlight contamination; \item vr\_flag $\le$ 6 and Teff $>$ 0 for robust radial velocity and atmospheric parameters; \item satflag $=$ 0 to avoid CCD saturation; \item brightflag $=$ 0 to eliminate contamination by nearby bright stars; \item deadfiber $=$ 0 to reject spectra from bad LAMOST fibres. \end{enumerate} The above criteria are the basic requirements for robust radial velocity and basic atmospheric parameters. We compare in Fig.~\ref{lstp} the distributions of stars observed with LAMOST and those with reliable parameters in the LAMOST focal plane ($X,~Y$) for three example plates. A similar comparison for two example spectrographs of the above three example plates is given in Fig.~\ref{cmdsp} in the colour-magnitude ($C,~M$) plane. It is clear that the probabilities of stars with reliable parameters depend on the spectrograph and plate with which they are observed, as well as on the colours and magnitudes of the stars themselves. As described above, whether a LAMOST spectrum is capable of delivering reliable stellar parameters could depend on many factors. The most important factor is of course the SNR of the spectrum, which depends on the brightness of the target, the exposure time and the observation conditions. The individual spectrographs of LAMOST also have different throughputs \citep{Yuan2015}. Finally, the currently implemented version of the LSP3 pipeline uses the blue-arm spectra for parameter estimation.
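The sample cuts listed above translate directly into a boolean mask; the following sketch assumes lower-case column names mirroring the value-added catalogue flags:

```python
import pandas as pd

def robust_parameter_mask(cat: pd.DataFrame) -> pd.Series:
    """Quality cuts recommended for LSS-GAC DR2 (see text);
    the column names here are assumed, not verified against
    the released catalogue."""
    return ((cat['snr_b'] > 10) &
            (cat['moondis'] > 30) &
            (cat['vr_flag'] <= 6) &
            (cat['teff'] > 0) &
            (cat['satflag'] == 0) &
            (cat['brightflag'] == 0) &
            (cat['deadfiber'] == 0))
```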
Since the blue-arm spectra are used, stars of blue colours are more likely to yield spectra of sufficient SNRs, and thus reliable stellar parameters, than stars of red colours, either intrinsically red or heavily reddened by interstellar dust grains. The quantity to account for all the above effects, $S_2$, defined above as the probability that a LAMOST spectrum of a star in a given colour-magnitude bin is capable of delivering robust stellar parameters, can be calculated as, \begin{equation} S_2=\dfrac{N_{\rm PARAM}({\rm sp},C, M)}{N_{\rm LAMOST}({\rm sp},C, M)}, \end{equation} where ${N_{\rm PARAM}({\rm sp},C, M)}$ and ${N_{\rm LAMOST}({\rm sp},C, M)}$ are respectively the number of stars having reliable parameters in a colour-magnitude bin and the number of all stars in the same colour-magnitude bin that are selected and observed by LAMOST in Spectrograph sp of a given plate. Again, for both selection with the XSTPS-GAC and APASS catalogues, $C$ is $g-r$ and $M$ is $r$. The magnitude bin-size is the same ($\delta r$ = 0.2\,mag) as in calculating $S_1$. We adopt however a larger colour bin-size [$\delta (g-r)$ = 0.5\,mag] for calculating $S_2$, as this quantity is less sensitive to the colours of stars. In Fig.~\ref{cmdsp} we show the adopted colour-magnitude grid for two example spectrographs of three illustrative plates. The distributions of averaged values of $S_2$ in the $C$-$M$ space and in the LAMOST focal plane for all spectroscopic measurements with robust stellar parameters in LSS-GAC DR2 are shown in Fig.~\ref{f2d}. Bright and blue stars have higher averaged values of $S_2$ than faint and red stars. The differences in the average values of $S_2$ of the individual spectrographs are clearly visible, although the differences decrease significantly after averaging over all LSS-GAC plates. For example, Spectrographs~\#12 and \#13 near the edge of the LAMOST focal plane have lower averaged values of $S_2$ (i.e.
lower observing efficiencies) than Spectrographs~\#4 and \#8 near the centre of the focal plane. The differences between the average values of $S_2$ of the individual spectrographs are more pronounced for results based on the XSTPS-GAC catalogue than for those based on the APASS catalogue, as the fraction of faint (B/M) plates is larger in the former case. \subsection{The final selection function} If both $S_1$ and $S_2$ were evaluated using the same $C$ and $M$ bin sizes, the final selection function, $S$, would simply be given by the product of $S_1$ and $S_2$. However, as noted above, we use a larger colour bin size when evaluating $S_2$ than when evaluating $S_1$. We therefore denote by $C_1$ and $M_1$ the colour-magnitude bins used to calculate $S_1$, and by $C_2$ and $M_2$ those used to calculate $S_2$. The final selection function, $S$, is given by \begin{equation} S=\dfrac{N_{\rm PARAM}({\rm sp}, C_2, M_2)}{N_{\rm total}({\rm sp}, C_2, M_2)}, \end{equation} where for each spectrograph ${\rm sp}$ and each $C_2$ and $M_2$ bin, $N_{\rm PARAM}({\rm sp}, C_2, M_2)$ is the number of stars having robust parameters and $N_{\rm total}({\rm sp}, C_2, M_2)$ is the total number of stars observed by LAMOST with the target selection function $S_1$ corrected. We thus have \begin{equation} N_{\rm total}({\rm sp}, C_2, M_2)={\sum\limits_{C_1 \in C_2}\sum\limits_{M_1 \in M_2} \frac{N_{\rm LAMOST}({\rm sp},C_1,M_1)}{S_1 ({\rm sp},C_1,M_1)}}. \end{equation} Example values of the selection functions $S_1$, $S_2$ and $S$ thus calculated for spectroscopic measurements in LSS-GAC DR2 are given in Table 1 for the purpose of illustration. The full results are available by contacting the authors (BQC, XWL) and will be included in the next release of the LSS-GAC value-added catalogue, i.e. LSS-GAC DR3 (Huang et al., in preparation).
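The aggregation in the last two equations can be sketched as follows, assuming each coarse $C2$ colour bin groups two fine $C1$ bins (0.5 versus 0.25 mag) while the magnitude bins are shared; the array names and layout are illustrative, not the authors' code:

```python
import numpy as np

def combine_selection_function(n_lamost, s1, n_param, group=2):
    """Final S per coarse (C2, M2) bin: N_PARAM / N_total, where
    N_total sums N_LAMOST/S1 over the fine (C1, M1) sub-bins.
    n_lamost, s1 are on the fine colour grid (axis 0 = colour);
    n_param is on the coarse colour grid."""
    nc, nm = n_lamost.shape
    nc2 = nc // group
    s = np.full((nc2, nm), np.nan)
    for j2 in range(nc2):
        sl = slice(j2 * group, (j2 + 1) * group)
        with np.errstate(divide='ignore', invalid='ignore'):
            n_total = np.nansum(np.where(s1[sl] > 0,
                                         n_lamost[sl] / s1[sl], 0.0),
                                axis=0)
        ok = n_total > 0
        s[j2, ok] = n_param[j2, ok] / n_total[ok]
    return s
```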
\section{Mock data test} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{mockdcreata.eps} \includegraphics[width=0.48\textwidth]{mockdcreatb.eps} \caption{Top panel: distributions in the colour-magnitude plane of all stars in the Besan\c{c}on simulated catalogue (black dots and histograms) and of targets selected using the same target selection algorithm as for LSS-GAC (blue dots and histograms). Selected targets with simulated `SNR' $>$ 10 (see text for details) are represented by red dots and histograms in the bottom panel.} \label{mkdata} \end{figure} \begin{figure} \centering \includegraphics[width=0.48\textwidth]{cmdist2p.eps} \caption{Hess diagram of the mock `photometric' sample (blue scales and contours), compared to that given by the sample with `reliable' parameters after correcting for the selection biases (red contours).} \label{sfmk} \end{figure} \begin{figure*} \centering \includegraphics[width=0.98\textwidth]{fehdist2.eps} \caption{Distributions of metallicity [Fe/H], radial velocity $V_{\rm r}$ and distance $d$ for the mock `photometric' sample (black histograms), the sample with `reliable' parameters (blue histograms), and the sample with `reliable' parameters after correcting for the selection effects (red histograms). } \label{fvd} \end{figure*} In this Section we validate our method using a mock star catalogue. For this purpose, we utilise the Besan\c{c}on stellar population synthesis model \citep{Robin2003} to generate a catalogue centred on the GAC, ($l,~b$)~=~(180\degr, 0\degr). The three-dimensional extinction maps from \citet{Chen2014} are used to add extinction to stars in the catalogue. Taking the simulated catalogue as an observed one, we select targets using the same algorithm as adopted for the LSS-GAC target selection. We artificially define a field centred on ($l,~b$)~=~(180\degr, 0\degr). Within this field, four plates, one VB, two B and one M plate, are generated, containing 4 000, 3 888, 3 914 and 3 950 stars, respectively.
In Fig.~\ref{mkdata} we plot the colours and magnitudes of the selected targets, along with those of all stars in the full `photometric' catalogue. To define a sample of stars with `reliable stellar parameters', we artificially add SNRs to all of the selected targets. To simulate the quality of stellar spectra in real LAMOST observations, we randomly select three plates from LSS-GAC: a VB plate HD213239N421743V01, a B plate GAC101N22B1 and an M plate GAC091N33M1. For each spectrograph of each plate, the distribution of $\rm SNR(4650\,\AA)$\ of stars in the selected plates is fitted as a function of colour $g-r$ and magnitude $r$, as, \begin{equation} {\rm SNR}=a_1+a_2(g-r)+a_3(g-r)^2r+a_4(g-r)r+a_5r+a_6r^2, \end{equation} where $a_1$, $a_2$, ..., and $a_6$ are the coefficients. The fit is then applied to all spectrographs of the simulated plates to assign realistic `SNRs' to the selected and `observed' stars. We ignore here the effects of the dead, saturated or contaminated fibres and the uncertainties induced by the LSP3 pipeline. We assume that the LSP3 pipeline is able to deliver robust stellar parameters for all stars with `SNR' $>$ 10; the sample of stars with `robust' parameters is thus simply defined by this cut. In Fig.~\ref{mkdata}, stars in this sample with `robust' parameters are also overplotted. We now have a `photometric' sample generated with the Besan\c{c}on model, a sample of stars selected and `observed' using the same target selection algorithm as for the LSS-GAC, and a sample with `reliable' parameters defined by artificially adding `SNRs' to the `observed' spectra. The values of the selection function for each star with `reliable' parameters are then calculated following the procedure introduced in the previous Section. The sample with `reliable' parameters is then corrected for selection biases using the calculated selection function.
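The polynomial fit for the assigned `SNRs' above is an ordinary linear least-squares problem in the coefficients $a_1 \ldots a_6$; a sketch follows (function names are our own):

```python
import numpy as np

def _design_matrix(gr, r):
    # Basis functions of the polynomial SNR model in the text:
    # SNR = a1 + a2(g-r) + a3(g-r)^2 r + a4(g-r)r + a5 r + a6 r^2
    return np.column_stack([np.ones_like(r), gr, gr**2 * r,
                            gr * r, r, r**2])

def fit_snr(gr, r, snr):
    """Least-squares fit of SNR(4650 A) vs colour g-r and magnitude r."""
    coeffs, *_ = np.linalg.lstsq(_design_matrix(gr, r), snr, rcond=None)
    return coeffs

def predict_snr(coeffs, gr, r):
    """Evaluate the fitted model on new (g-r, r) values."""
    return _design_matrix(gr, r) @ coeffs
```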
In Fig.~\ref{sfmk}, we compare the resultant colour and magnitude distribution of the sample with `reliable' parameters after correcting for the selection effects with the distribution of the underlying population (i.e. the `photometric' sample). Overall the agreement is quite good. For regions of extremely red colours and faint magnitudes, given the small number of stars with `reliable' parameters, we are not able to perfectly recover the CMD of the underlying stellar population. We have also tested our selection function results for the metallicity, radial velocity and star count distributions. Comparisons of those distributions as given by the sample with `reliable' parameters after correcting for the selection effects and those of the underlying population are shown in Fig.~\ref{fvd}. Overall, the agreement is good. We note that for stars of extreme parameters, such as those of very high metallicities, very high radial velocities, or very large distances, due to the large statistical uncertainties, the two sets of distributions do not match well, but are still consistent with each other. Thus the selection function presented in the current work is a powerful tool for studies of Galactic chemistry and dynamics. \section{Applications of the selection function} \begin{figure} \centering \includegraphics[width=0.45\textwidth]{fehzdist.eps} \caption{Distribution of stellar number density in the plane of metallicity [Fe/H] and height from the Galactic mid-plane $|z|$ for a sample of LAMOST F-type stars. The density is shown on a logarithmic scale, with the peak value normalized to unity. The dashed lines show Eq.~(11) of \citet{Chen2017}.
The top and bottom panels show results before and after applying the selection function corrections, respectively.} \label{mdf} \end{figure} \begin{figure*} \centering \includegraphics[width=0.88\textwidth]{rhogrid.eps} \caption{Stellar density distribution in the $R$-$z$ plane deduced from a sample of LAMOST F-type stars after correcting for selection biases. The colour encodes the mean ln\,$\rho$ value in each $R$-$z$ bin. } \label{ndis} \end{figure*} In this Section we give two simple applications of our selection function. We select a sample of F-type stars from the internal release of LSS-GAC DR2 with effective temperature and surface gravity cuts, 6000 $<$ $T_{\rm eff}$ $<$ 6800\,K and 3.8 $<$ log\,$g$ $<$ 5.0\,dex. The internal release of LSS-GAC DR2 includes all observations from the initiation of the survey up to June 2016. The selected F-type star sample contains 713 016 spectroscopic measurements. \subsection{The metallicity distribution} In Fig.~\ref{mdf}, we plot the distribution of stellar number density in the plane of metallicity [Fe/H] and height from the Galactic mid-plane $|z|$ of this sample for the metallicity and height ranges, $-$1 $< {\rm [Fe/H]} <$ 0.5\,dex and 0 $< |z| < $ 5\,kpc. The peak value of the distribution is normalized to unity. The upper and bottom panels show respectively the results derived before and after applying the selection function corrections deduced in the current work. When plotting the distributions, we have discarded bins containing fewer than 10 stars. For different $|z|$ slices, the variations of peak metallicity as a function of $|z|$ are consistent with the fit given by Eq.~(11) of \citet{Chen2017}, a fit obtained using main-sequence turn-off stars selected from LSS-GAC DR2 \citep{Xiang2015}.
The variations of peak metallicity with $|z|$ are quite similar before and after applying the selection function corrections, suggesting that the LSS-GAC selection function has only a marginal effect on the metallicity peak distributions. This is consistent with the recent work by \citet{Nandakumar2017}. Amongst the individual $|z|$ slices, both the metallicity dispersion and skewness vary. As $|z|$ increases, the metallicity dispersion decreases while the skewness increases, reflecting the star formation and radial migration history of the Galactic thin and thick disks \citep{Sellwood2002, Schonrich2009, Hayden2015, Loenman2016}. A detailed analysis of the Galactic disk metallicity distribution based on the LAMOST main-sequence turn-off stars will be presented in a separate work (Wang et al., in preparation). Compared to the distributions after the selection effect corrections, those before the selection function corrections have smaller dispersions and larger skewness. This is likely caused by the fact that stars further away (of larger $|z|$) are fainter and thus suffer from larger selection effects than nearby ones. \subsection{The stellar number density distribution} Fig.~\ref{mdf} also shows for different metallicity bins how the stellar number density varies with $|z|$. It is clear that the number densities of metal-poor populations decrease more slowly than those of the metal-rich ones; in other words, the metal-poor populations have larger scale heights. There is no doubt that applying the selection function corrections is very important for this type of study. Without the corrections, the derived scale heights will be systematically underestimated. This is again largely due to the fact that stars further away suffer from larger selection effects than nearby ones. With the selection function corrections presented here, one can thus examine the underlying stellar number density distributions using the LAMOST spectroscopic samples.
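The selection-corrected density estimate used in the following example (each star weighted by $1/S$, divided by the volume of its distance bin) can be sketched as follows; the solar-position constants and the simplified coordinate conversion are our own assumptions, not the authors' exact values:

```python
import numpy as np

R_SUN = 8.0    # kpc, assumed Galactocentric distance of the Sun
Z_SUN = 0.025  # kpc, assumed height of the Sun above the mid-plane

def lbd_to_rz(l_deg, b_deg, d_kpc):
    """Convert (l, b, distance) to Galactocentric (R, z);
    a simplified version of the conversion in the text (phi omitted)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    x = R_SUN - d_kpc * np.cos(b) * np.cos(l)
    y = -d_kpc * np.cos(b) * np.sin(l)
    z = Z_SUN + d_kpc * np.sin(b)
    return np.hypot(x, y), z

def number_density(s_values, volume):
    """Selection-corrected density of one distance bin:
    rho_j = sum_i (1/S_ij) / V_j."""
    return np.sum(1.0 / np.asarray(s_values)) / volume
```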
We give an example here using the F-type star sample. From the stars observed in each spectrograph of each plate, one can simply derive the stellar number density using, \begin{equation} \rho_j = \dfrac{\sum\limits_{i}\frac{1}{S_{i,j}}}{V_{j}}, \end{equation} where $i$ is the index of stars in a distance bin of index $j$, and $V_{j}$ is the volume of the $j$th distance bin. We adopt the central $l$ and $b$ of each spectrograph and convert ($l,~b,~d$) into Galactocentric cylindrical coordinates ($R,~z,~\phi$) in a similar way to \citet{Bovy2012} and \citet{Liu2017}. The resultant number density is then averaged in the $R$-$z$ plane. The results are presented in Fig.~\ref{ndis}. The Figure displays a remarkable shape of the Galactic disk, very similar to that seen in an edge-on external disk galaxy, for Galactic radii $R$ between 5 and 16\,kpc. Note that here we have assumed that the photometric sample from which LAMOST targets are drawn is complete. In addition, we have also ignored possible variations of the absolute magnitudes of F-type stars. Any such variations, coupled with the varying interstellar extinction, affect the lower and upper completeness distance limits for the individual lines of sight, effects that one must take into account when studying the Galactic structure \citep{Chen2017}. A more quantitative analysis will be presented in a separate work (Chen et al., in preparation). \section{Summary} In this paper, we have discussed in detail the selection function of the LSS-GAC spectroscopic survey and presented corrections for all spectroscopic measurements with reliable parameters in the LSS-GAC DR2. The selection function determines how representative the final spectroscopic catalogue is of the underlying stellar population of the Milky Way. It is a powerful tool for studies of Galactic chemistry and structure. We divide the selection function into two parts.
The first part, quantified by $S_1$, characterises the LSS-GAC target selection strategy. The LSS-GAC target selection is based on stellar magnitudes and colours, using photometric data from the XSTPS-GAC, supplemented by PPMXL and 2MASS photometry for the VB survey. Based on the photometric data of XSTPS-GAC and APASS DR9, we calculate $S_1$ in the $C$ and $M$ space for each spectrograph of each LSS-GAC plate. The second part, quantified by $S_2$, characterises the selection effects due to the observational quality, data reduction and parameter determination. We select from LSS-GAC DR2 a commonly used sample that contains stars with reliable stellar parameters. Values of $S_2$ of the sample are calculated for each $C$-$M$ bin and for each spectrograph of each plate. The full selection function $S$ can then be calculated from $S_1$ and $S_2$. Example values of the selection function corrections are listed in Table~1. The full results are available upon request by email, and will be included in the next release of the LSS-GAC value-added catalogue. We test our method using mock data. The test shows that the selection function corrections presented here can successfully recover the distributions of colours, magnitudes, metallicities and radial velocities, as well as the number counts, of the underlying stellar populations. Finally we present two simple applications of our deduced selection function corrections. The selection function presented in the current work provides better insight into the properties of LSS-GAC and the resulting value-added catalogue, and can be used to study a variety of problems of the Milky Way galaxy that rely on proper corrections for the selection biases in the LSS-GAC spectroscopic dataset. \section*{Acknowledgements} We thank our anonymous referee for helpful comments that improved the quality of this paper.
This work is partially supported by the National Key Basic Research Program of China 2014CB845700, the China Postdoctoral Science Foundation 2016M590014 and the National Natural Science Foundation of China U1531244. The LAMOST FELLOWSHIP is supported by Special Funding for Advanced Users, budgeted and administered by the Center for Astronomical Mega-Science, Chinese Academy of Sciences (CAMS). This work has made use of data products from the Guoshoujing Telescope (the Large Sky Area Multi-Object Fibre Spectroscopic Telescope, LAMOST). LAMOST is a National Major Scientific Project built by the Chinese Academy of Sciences. Funding for the project has been provided by the National Development and Reform Commission. LAMOST is operated and managed by the National Astronomical Observatories, Chinese Academy of Sciences. This research was made possible through the use of the AAVSO Photometric All-Sky Survey (APASS), funded by the Robert Martin Ayers Sciences Fund. \bibliographystyle{mn2e}
\section{Introduction} Kepler-16 is a well-documented example of a closely separated binary system with a Saturnian planet in a P-type orbit \citep{sla11,doy11}. A P-type orbit means that the planet encircles both stars instead of only one star, with the other star acting as a perturber \citep{dvo82}. Previous results on the existence and orbital properties of planets in binary systems have been given by, e.g., \cite{rag06,rag10} and \cite{roe12}, among others. Detailed information on the abundance of circumstellar planets has been given by \cite{wan14} and \cite{arm14}. So far, eleven circumbinary planets have been discovered by \textit{Kepler}, with Kepler-453b and Kepler-1647b constituting numbers 10 and 11, as reported by \cite{wel15} and \cite{kos16}, respectively. The main purpose of the \textit{Kepler} mission is to identify exoplanets via the transit method near or within the host star's habitable zone (HZ). The lion's share of the stars studied are main-sequence stars of spectral types G, K, and M, with the latter also referred to as red dwarfs. Recent catalogs of stars studied by \textit{Kepler} have been given by \cite{kir16} and \cite{tho17}. Here \cite{tho17} offer the latest results for the general catalog from \textit{Kepler}, as it contains all observed objects, including circumbinary planets, potentially habitable planets, and (most likely) non-habitable planets. On the other hand, the catalog by \cite{kir16} is mostly focused on eclipsing binary systems. Previous theoretical work on circumbinary planets in binary systems has been given by, e.g., \cite{kan13}, \cite{egg13}, \cite{hag13}, \cite{cun14,cun15}, \cite{zul16}, \cite{pop17}, \cite{she17}, and \cite{wan17}, and references therein. These types of studies focus on the formation, orbital stability, secular evolution, and/or environmental forcings pertaining to those systems.
For example, recently, \cite{wan17} presented fitting formulae for the quick determination of the existence of P-type HZs in binary systems. Objects hosted by P-type systems which might be potentially habitable could include exoplanets, exomoons, and exo-Trojans. For Kepler-16, the latter two kinds of objects have been discussed by Quarles et al. (2012), hereafter QMC12. Kepler-16(AB) is a pivotal example of a planet-hosting binary; it is 61 parsecs (199 light years) from Earth (see Table~1); for more detailed information see \cite{doy11}, and references therein. The system consists of the primary star, Kepler-16A, a K-dwarf of about 0.69~$M_\odot$, and the secondary star, Kepler-16B, a red dwarf star. The circumbinary planet of the system is similar to Saturn in mass and density. Kepler-16b has a nearly circular orbit with an eccentricity of approximately 0.007 and a small deviation in orbital inclination relative to that of its host stars, indicating that it may have formed within the same circumbinary disk as the two stars. Although Kepler-16b proves to be an interesting exoplanet, it is considered to be cold, gaseous, and ultimately uninhabitable. However, previous work by QMC12 has focused on the possibility of both Earth-mass exomoons and Trojans, which, if they exist, may be potentially habitable. Among other considerations, we intend to expand upon the work of QMC12 in this article. The structure of our paper is as follows. In Section~2, we report the stellar parameters. A special effort is made to determine the effective temperature of Kepler-16B. Section~3 discusses the HZ of the Kepler-16(AB) binary system in consideration of different types of climate models available in the literature. For tutorial reasons, we also discuss the HZ of Kepler-16A, with Kepler-16B assumed absent. In Section~4, we consider the previous results by QMC12 for Earth-mass moons and Trojans in relation to Kepler-16's HZ.
Furthermore, additional stability simulations based on a modified version of the mercury6 integration package are pursued to explore the possible parameter space of stable objects in the Kepler-16(AB) system. Our summary and conclusions are given in Section~5. \section{Stellar Parameters} Regarding our study, stellar parameters are of pivotal importance for the calculation of stellar HZs as well as for orbital stability simulations of possible exomoons and Trojan objects. The most relevant parameters of the Kepler-16(AB) system have been previously reported by \cite{doy11}, who announced a transiting circumbinary planet observed by the \textit{Kepler} spacecraft. Kepler-16A was identified as a K-type main-sequence star with effective temperature, radius, and mass given as (see Table~1) $4450 \pm 150$~K, $0.6489 \pm 0.0013$~$R_\odot$, and $0.6897 \pm 0.0035$~$M_\odot$, respectively. Here the relative uncertainty is largest for the stellar effective temperature (see Table~2). However, less information has been conveyed for Kepler-16B, which, based on its mass of about 0.20255~$M_\odot$ \citep{doy11}, is identified as a red dwarf. Kepler-16B's effective temperature also needs to be determined in order to compute the HZ of the Kepler-16 binary system. Thus, to determine Kepler-16B's stellar effective temperature, we utilize the mass -- effective temperature relationship of \cite{man13}. They have analyzed moderate-resolution spectra for a set of nearby K and M dwarfs with well-known parallaxes and interferometrically determined radii to define their effective temperatures, among other quantities. They also adopt state-of-the-art PHOENIX atmosphere models, as described therein. Thus, we conclude that the effective temperature of Kepler-16B is $3308 \pm 110$~K (see Fig.~1). Here the uncertainty has been estimated based on the results for similar objects included in the sample.
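Reading a temperature off a mass -- effective temperature relation amounts to interpolating within a calibration table. The sketch below illustrates the idea only; the grid values are hypothetical placeholders, not the actual \cite{man13} calibration:

```python
import numpy as np

# Hypothetical (mass, Teff) calibration points for illustration only --
# these are NOT the Mann et al. (2013) coefficients.
MASS_GRID = np.array([0.10, 0.20, 0.30, 0.45, 0.60, 0.70])              # M_sun (assumed)
TEFF_GRID = np.array([2900.0, 3300.0, 3500.0, 3700.0, 4100.0, 4400.0])  # K (assumed)

def mass_to_teff(mass):
    """Linearly interpolate an effective temperature (K) at the given
    stellar mass (M_sun) from the calibration grid above."""
    return float(np.interp(mass, MASS_GRID, TEFF_GRID))
```

With a real calibration in place of the placeholder grid, evaluating the function at 0.20255~$M_\odot$ would yield the adopted temperature of Kepler-16B.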
From other work, e.g., that by \cite{kir91} and \cite{bar98}, the spectral type of Kepler-16B has been deduced as $\sim$M3~V. Both the effective temperature and radius of Kepler-16B are important for determining the different types of HZs of the Kepler-16(AB) system (see Sect.~4). \section{The Kepler-16 Habitable Zone} A crucial aspect of this study is the evaluation of Kepler-16's HZ. The HZ is a region around a star or a system of stars in which terrestrial planets could potentially have surface temperatures at which liquid water could exist, given a sufficiently dense atmosphere \cite[e.g.,][]{kas93,jon01,und03}. When determining the HZ, both inner and outer limits are calculated according to different types of criteria. The determination of the location of the HZ is significant in the context of theoretical studies as well as for the purpose of planet search missions \cite[e.g.,][and references therein]{lam09,kas14,kal17}. Inner limits previously used for stellar HZs include those set by the recent Venus (RV), the runaway greenhouse effect, and the onset of water loss. Furthermore, outer limits of the stellar HZ have been set by the first CO$_2$ condensation, the maximum greenhouse effect for a cloud-free CO$_2$ atmosphere, and the early Mars (EM) setting. For example, \cite{kas93} describe the runaway greenhouse effect such that the greenhouse phenomenon is enhanced by water vapor, thus promoting surface warming. The latter further increases the atmospheric vapor content, resulting in an additional rise of the planet's surface temperature. Consequently, this will lead to the rapid evaporation of all surface water. On the other hand, the water loss criterion \citep[see, e.g.,][]{und03} means that an atmosphere is warm enough to have a wet stratosphere, from where water is gradually lost to space by atmospheric chemical processes. Table~3 shows the HZ limits for Kepler-16A, treated as a single star, for tutorial reasons.
Here GHZ denotes the general habitable zone, bracketed by the runaway greenhouse and maximum greenhouse criteria, whereas RVEM denotes the kind of HZ defined by the settings of recent Venus and early Mars; this latter type of HZ is also sometimes referred to as the (most) optimistic HZ; see, e.g., \cite{kal17} and references therein. Figure~2 and Tables~3 and 4 convey the results for the various HZ limits as well as for the GHZ and RVEM. The most recent results based on \cite{kop13,kop14}, which indicate updated HZ limits, have been included as well. For the inner and outer HZ limits, they assumed $\mathrm{H}_2\mathrm{O}$ and $\mathrm{CO}_2$ dominated atmospheres, respectively, while scaling the background $\mathrm{N}_2$ atmospheric pressure with the radius of the planet. Moreover, from this climate model, several equations were generated, which correspond to select inner and outer HZ limit criteria. Naturally, most of our study focuses on Kepler-16 as a binary, thus taking into account both Kepler-16A (an orange dwarf) and Kepler-16B (a red dwarf); see Table~1 for data. The computation of the GHZ and RVEM of Kepler-16(AB) follows the work of \cite{cun14,cun15} and \cite{wan17}. Information is given in Fig.~3; here RHZ refers to the so-called radiative habitable zone (applicable to both the GHZ and RVEM), which is based on the planetary climate enforcements set by both stellar components, while deliberately ignoring the orbital stability criterion regarding a possible system planet. Figure~3 indicates the inner and outer RHZ limits, with the inner HZ limit defined as the maximum radial distance of the inner RHZ (red lines) and the outer HZ limit defined as the minimum radial distance of the outer RHZ (blue lines). This approach conveys the HZ region for the GHZ (darkest green) and RVEM (medium green) criteria (see also Table~5). As expected, the RVEM criterion produces a more generous HZ region.
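The conversion from an effective-flux threshold to an HZ distance follows from the inverse-square law, $S_{\rm eff} = L/d^2$, hence $d = \sqrt{L/S_{\rm eff}}$. A minimal sketch is given below; the numerical thresholds and the luminosity are illustrative placeholders only (the published \cite{kop13,kop14} values additionally carry a polynomial correction in stellar $T_{\rm eff}$, omitted here):

```python
import math

def hz_distance(L, S_eff):
    """Distance (AU) at which a planet receives effective stellar flux S_eff
    (in solar-constant units) from a star of luminosity L (solar units):
    S_eff = L / d**2, hence d = sqrt(L / S_eff)."""
    return math.sqrt(L / S_eff)

# Illustrative S_eff thresholds (solar units) -- placeholders, not the
# exact published coefficients.
S_EFF = {"recent Venus": 1.78, "runaway greenhouse": 1.04,
         "maximum greenhouse": 0.36, "early Mars": 0.32}

L_example = 0.15  # assumed example luminosity in L_sun
limits = {name: hz_distance(L_example, s) for name, s in S_EFF.items()}
```

A higher flux threshold yields a smaller distance, so the recent-Venus limit always lies inside the early-Mars limit.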
We also indicate the orbital stability limit (black dashed line) based on \cite{hol99}, referred to as $a_{\rm orb}$. In fact it is found that the widths of the GHZ and RVEM for Kepler-16(AB) are significantly less than for Kepler-16A (single star approach), owing to the additional criterion of orbital stability for possible system planets. Previous work by \cite{mis00} argues that the HZ about a main-sequence star might be further extended if $\mathrm{CO}_2$ cloud coverage is assumed. In the case of the Sun, this assumption would amount to an outer limit of 2.40~AU for the hereupon defined extended habitable zone (EHZ)\footnote{The previous work by \cite{mis00} has been superseded by more recent studies, including those given by \cite{hal09}, \cite{pie11}, \cite{wor13}, and \cite{kit16}; see also summary by \cite{sea13}. For example, \cite{kit16} argued that the heating assumed by \cite{mis00} has been overestimated, thus putting the extension of the outer HZ in question. However, in the following, we will parameterize the outer limit of \cite{mis00}, and the significance of our results will not rely on the full extent of the HZ introduced by them. Moreover, \cite{pie11} argued that planetary HZs could extend to up to 10~AU for single G-type stars (or, say, about 3~AU for single K-type stars, as indicated by their Fig.~1), which is well beyond the outer limit advocated by \cite{mis00}.}. \cite{blo07} have explored the habitability around Gliese~581 with focus on the possible planet GJ~581d. They argue that the RHZ could be further extended if the atmospheric structure is determined by particularly high base pressures. Thus, the outer limit for the EHZ is not very well constrained, but could be parameterized as $\epsilon \sqrt{L}$ with $\epsilon$ in the likely range between 2.0 and 3.0 and $L$ defined as stellar luminosity (in units of solar luminosity). Hence, $\epsilon = 2.4$ corresponds to the value of \cite{mis00}. 
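The $\epsilon \sqrt{L}$ parameterization of the EHZ outer limit translates directly into a one-line helper (the luminosity passed in below is only an example value):

```python
import math

def ehz_outer_limit(L, eps=2.4):
    """Outer limit (AU) of the extended HZ, parameterized as eps * sqrt(L),
    with L in solar luminosities and eps in the likely range 2.0-3.0;
    eps = 2.4 recovers the 2.40 AU solar-case value of Mischna et al. (2000)."""
    return eps * math.sqrt(L)
```

Varying $\epsilon$ between 2.0 and 3.0 then brackets the poorly constrained outer EHZ edge for any adopted system luminosity.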
Results for the EHZ of Kepler-16(AB) are given in Figure~4 and Table~6. As another aspect of this study, concerning the GHZ and RVEM, we have also explored the impact of the observational uncertainties of the stellar luminosities on the inner and outer limits of the RHZs (see Figs.~5 and 6). It is found that the uncertainty in the stellar luminosity ${\Delta}L$ moves the inner and outer limits of both the GHZ and the RVEM by about $\pm$6\%. Our results are summarized in Table~7. Here we also see that the inner limits of both the GHZ and RVEM are set by the additional criterion of orbital stability regarding possible circumbinary planets, referred to by \cite{cun14} as PT habitability. Additionally, it is found that the HZ around Kepler-16A (if treated as a single star) would be significantly more extended than the HZ of Kepler-16(AB). Thus, Kepler-16B notably reduces the prospect of habitability in that system. \section{Stability Investigations for Earth-Mass Exomoons and Trojans} Previously, QMC12 presented exemplary case studies of the orbital stability of Earth-mass objects (i.e., Trojan exoplanets or exomoons) in the Kepler-16(AB) system. Their numerical methods were based on the Wisdom-Holman mapping technique and the Gragg-Burlisch-Stoer algorithm \citep{gra96}. The resulting equations of motion were integrated forward in time for 1 million years using a fixed/initial (WH/GBS) time step. QMC12 showed that, in principle, both Trojan exoplanets and exomoons are able to exist in the Kepler-16(AB) system. Figures 7 and 8 show the results by QMC12 together with the updated system's HZs, i.e., the GHZ, RVEM, and EHZ. It is found that the orbital settings of those objects are consistent with an EHZ (with $\epsilon \lta 2.2$) or with the RVEM if upper limits of the stellar luminosities, consistent with the observational uncertainties, are considered.
In order to better understand the dynamical domain of possible exo-Trojans, we perform an additional 5,000 stability simulations using a modified version of the mercury6 integration package that is optimized for circumbinary systems \citep{cha02}. In these simulations, we adopt the orbital parameters from \cite{doy11} for the binary components and the Saturnian planet. We also consider Earth-mass objects with different initial conditions. Table~8 conveys the initial conditions for exomoon sample cases, which are: the semimajor axis $a$, eccentricity $e$, inclination $i$, argument of periastron $\omega$, and mean anomaly $M$ for each body. A simulation is terminated when the Earth-mass body either crosses the binary orbit or attains a radial distance from the center of mass greater than 100~AU; the latter is viewed as an ejection. The orbital evolution of the four bodies is evaluated on a 10 Myr timescale. The initial orbital elements are chosen using uniform distributions. The initial semimajor axis of the Earth-mass object is selected from values ranging from 0.6875~AU to 0.7221~AU (i.e., $\pm$0.5 Hill radii); furthermore, eccentricities are limited to 0.1 and inclinations are limited to 1$^\circ$. The initial arguments of periastron and mean anomalies are selected randomly between 0$^\circ$ and 360$^\circ$. The statistical distributions of the surviving population are shown in Figure~9 to illustrate possible correlations between parameters. Overall, $\sim$10\% of the simulations (496) are identified as stable (i.e., survived for 10 Myr), as depicted in Figure 10. By delineating the stable (cyan) and unstable (gray) points, it is seen that the stable initial conditions correspond to Trojans and are separated in relative phase from Kepler-16b by $\sim$60$^\circ$ to 90$^\circ$. This also appears in Figure 9 through the distribution of $\lambda^*$, the relative mean longitude. The inclinations of the orbitally stable Earth-mass objects in Fig.
9 remain uniformly distributed and thus are unlikely to affect the overall stability. Figure 11 illustrates the orbital evolution in a rotated-reference frame of two initial conditions taken from Figure 10. The panels of Fig. 11 show the first $\sim$1,000 years of orbital evolution, where the run in the top panel would continue in a Trojan orbit for the 10 Myr simulation time and the other run (bottom panel) evolves in a horseshoe orbit, which quickly becomes unstable. We also found that the eccentricity distribution as obtained prefers values close to circular, whereas the relative mean longitude distribution shows, by a factor of two, more trailing orbits than preceding orbits. \section{Summary and Conclusions} The purpose of our study is to continue investigating the habitable zone as well as the general possibility of Earth-mass exomoons and Trojans in Kepler-16. The binary system Kepler-16(AB) consists of a low-mass main-sequence star, a red dwarf, and a circumbinary Saturnian planet. The temperatures of the two stellar components are given as $4450 \pm 150$~K and $3308 \pm 110$~K, respectively. Previously, QMC12 pursued an exploratory study of this system, indicating that, based on orbital stability considerations, both Earth-mass exomoons and Earth-mass Trojan planets might be possible. The aim of the present study is to offer a more thorough analysis of this system. We found the following results: \bigskip \noindent (1) As previously noted by QMC12, Kepler-16 possesses a circumbinary HZ; its width depends on the adopted climate model. Customarily, these HZs are referred to as GHZ and RVEM; the latter is also sometimes referred to as the optimistic HZ \citep[e.g.,][]{kop13, kal17}. For objects with thick CO$_2$ atmospheres, including clouds, the HZ is assumed to be further extended, thus giving rise to the EHZ as proposed by \cite{mis00}.
\bigskip \noindent (2) Our work confirms earlier simulations by QMC12 that both Earth-mass exomoons and Earth-mass Trojan planets could stably orbit in that system. However, in this study, we adopted longer timescales and also explored the distributions of eccentricities and inclinations of the Earth-mass test objects considered in our study. \bigskip \noindent (3) Exomoons and Trojans, associated with the Saturnian planet, are found to be situated in the lower portion of the EHZ (i.e., $\epsilon \lta 2.2$). A more detailed analysis also implies that the distances of those objects may be consistent with the RVEM (i.e., optimistic HZ) if a relatively high luminosity for the stellar components is assumed (but still consistent with the uncertainty bars) or if the objects are allowed to temporarily leave the RVEM-HZ without losing habitability. The latter property is maintained if habitability is provided by a relatively thick atmosphere \citep{wil02}. \bigskip \noindent (4) For tutorial reasons, we also compared the HZ of the system's primary to that of the binary system. We found that the latter is reduced by 42\% (GHZ) and 48\% (RVEM), despite the increase in the system's total luminosity contributed by the M dwarf. The reason is that for the binary, the RHZ is unbalanced and further reduced by the additional requirement of orbital stability, as pointed out previously \citep[e.g.,][]{egg13,cun14}. \bigskip \noindent (5) Moreover, we pursued new stability simulations for Earth-mass objects while taking into account more general initial conditions. The attained eccentricity distribution prefers values close to circular, whereas the inclination distribution is relatively flat. The distribution of the initial relative phase indicates that the stable solutions are distributed near the co-orbital Lagrangian points, thus increasing the plausibility of the existence of those objects.
Our study shows that the binary system Kepler-16(AB) has a HZ of notable extent, though smaller than implied by the single-star approach, with its extent critically depending on the assumed climate model for the possible Earth-mass Trojan planet or exomoon. Thus, Kepler-16 should be considered a valuable target for future planetary search missions. Moreover, it is understood that comprehensive studies of habitability should take into account additional forcings by planet host stars, such as stellar activity and strong winds, which are expected to impact planetary conditions, as indicated by the analyses of, e.g., \cite{lam07}, \cite{tar07}, \cite{lam09}, \cite{kas14}, and \cite{kal17}. Recent articles about the impact of stellar activity on prebiotic environmental conditions have been given by, e.g., \cite{cun16} and \cite{air17}. \acknowledgments This work has been supported by the Department of Physics, University of Texas at Arlington (UTA). The simulations presented here were performed using the OU Supercomputing Center for Education \& Research (OSCER) at the University of Oklahoma (OU). Furthermore, we would like to thank the anonymous referee for useful suggestions, allowing us to improve the manuscript. \clearpage
\section{Introduction and background} The frequency of queries in web search logs has been found useful in estimating the incidence of \textit{influenza-like illnesses} (ILIs) \cite{ginsberg2009detecting,lampos2015advances,lazer2014parable,santillana2015combining,yang2015accurate}. Current methods use two core discriminative features for ILI estimation: (i) past ILI activity, and (ii) the frequency of queries in web search logs that correlate strongly with past ILI activity. There are two problems with this approach. The \textit{first problem} is that not all queries whose frequency is strongly correlated to ILI activity are necessarily informative with respect to ILIs, and hence discriminative as a feature for ILI estimation. At the most general level, this is an example of the fundamental issue of correlation not being causation. In the case of estimating ILI, this is exacerbated by the seasonal nature of influenza. Indeed, it has been observed that existing methods can identify queries that have a very similar seasonality but are clearly not related to ILI. For example, the query \textit{``high school basketball''} has been found to have a high correlation with ILI activity [8] even though it is obviously unrelated to ILI. The seasonality of high school basketball accounts for this correlation. Queries unrelated to ILI activity will not be useful in the case of irregular ILI activity, e.g. an off-season influenza outbreak. Additionally, changes in, for example, the high school basketball schedule would result in changes in the ILI estimates. The \textit{second problem} is that by using two types of features that are strongly correlated to each other (past ILI activity, and queries whose frequencies are strongly correlated to past ILI activity), we may compromise the diversity in the representations one would expect from the features.
Better estimations may be produced by using features that \textit{complement} each other, regardless of their between-feature correlation. Motivated by the above issues, we propose an alternative approach to selecting queries. Our approach consists of two steps. (1) We model the seasonal variation of ILI activity, and (2) we select queries whose search frequency fits aspects of this seasonality. Specifically, we present two variations of our algorithm: select queries that correlate with our seasonal model of ILI, and select queries that correlate with the residual between the seasonal model and observed ILI rates. Our results are twofold. (i) Experimental evaluation of our seasonal query selection models for ILI estimation against strong recent baselines (with no seasonality) shows that we can achieve performance that is overall more effective (reduced estimation error), and requires fewer queries as estimation features. With respect to error reduction, we see that selecting queries fitting regular seasonal ILI variation is a better strategy than selecting queries fitting ILI outbreaks. (ii) Selecting queries that fit seasonal irregularities results in much more semantically relevant queries. Surprisingly, these queries are not the ones that result in the best predictions. Our main results are: (i) we demonstrate that Google Correlate retrieves many non-relevant queries that are highly correlated with a time series of historic ILI incidence, and that the ILI-related queries are not highly ranked; (ii) re-ranking these queries based on their correlation with a residual signal, i.e. the difference between a seasonal model and historic data, strongly favours ILI-related queries; (iii) the performance of a linear estimator is improved based on the re-ranked queries. To our knowledge, the seasonal examination of ILI activity for query selection in automatic ILI estimation is a novel contribution.
Seasonal variation has, however, been studied for other medical informatics tasks, such as vaccination uptake estimation \cite{www17, HansenLM16}. \section{Problem Statement} The goal is to estimate ILI activity at time $t$, denoted $y_t$, using observed historical ILI activity (reported, e.g., by the Centers for Disease Control and Prevention (CDC) in the US) and query frequencies in web search logs. This is most commonly done by submitting to Google Correlate\footnote{\url{https://www.google.com/trends/correlate}} a file of historical ILI activity, and receiving as output the web search queries (and their frequencies) that are most strongly correlated to the input ILI data. Then, $y_t$ can be estimated with a linear model that uses only web search frequencies \cite{ginsberg2009detecting} as follows: \begin{equation}\label{eq:queries} y_t = \alpha_0 + \sum^n_{i=1} \alpha_i Q_{t,i} + \epsilon, \end{equation} \noindent where $n$ is the number of queries, $Q_{t,i}$ is the frequency of query $i$ in the web search log at time $t$, the $\alpha$s are coefficients, and $\epsilon$ is the estimation error. Including historical ILI activity data can improve the estimations of Eq. \ref{eq:queries}, for instance with an autoregressive model \cite{yang2015accurate}, as follows: \begin{equation}\label{eq:clinical+queries} y_t = \beta_0 + \beta_1 t + \sum^m_{j=1} \beta_{j+1} \cdot y_{t-j} + \sum^n_{i=1} \beta_{i+m+1} \cdot Q_{t,i}+ \epsilon, \end{equation} \noindent where $m$ is the number of autoregressive terms, and the $\beta$s are coefficients to be learned. With $m=52$ and $n=100$, Eq. \ref{eq:clinical+queries} corresponds to the model presented by Yang {\em et al.}~\cite{yang2015accurate}. Most ILI estimation methods that use web search frequencies (exceptions include \cite{DBLP:conf/www/LamposZC17}) use all queries found to be correlated to ILI activity, i.e. in Eq. \ref{eq:queries} and Eq.
\ref{eq:clinical+queries} $n$ corresponds to \textit{all} strongly correlated queries, and query selection is typically left for the model regularisation, for example using lasso regularisation. In the next section we present a novel way of selecting which among these correlated queries to include in the estimation of $y_t$ according to how well they fit the seasonal variation of ILI activity. \section{Seasonal Query Selection}\label{sec:seasonal_query_selection} We reason that among the queries whose frequency is correlated with past ILI activity, some queries may fit the ILI seasonal variation better than others. This is supported by the literature \cite{lazer2014parable}. We further reason that this fit of queries to seasonal ILI variation may not be sufficiently captured by simply measuring the correlation between the frequency of those queries and ILI activity. Based on this, we (i) present two models to represent seasonal variation of ILI activity, and (ii) select queries based on these seasonal models. \subsection{Step 1: Model seasonal ILI variation} \begin{figure} \centering \scalebox{.5}{ \includegraphics[]{model_fits} } \caption{Fit of the Serfling model (Eq. \ref{eq:serf}) and the Yearly Average model (Eq. \ref{eq:avg}) to historical ILI data (described in Section \ref{s:eval}).} \label{fig:serflingfit} \end{figure} We model seasonal variation in two ways. The first model is the Serfling model \cite{serfling1963methods}, chosen because of its simplicity and expressiveness. The Serfling model (Eq. \ref{eq:serf}) uses pairs of sine and cosine terms to model seasonality, and a term linear in time to model general upward or downward trends. 
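The two seasonal models just described can be sketched as a least-squares fit with a sine/cosine/trend design matrix and a per-week average over complete seasons (a minimal illustration assuming weekly data in a NumPy array; the function names are ours):

```python
import numpy as np

def fit_serfling(y, period=52.0):
    """Fit y_t = b0 + b1*t + b2*sin(2*pi*t/period) + b3*cos(2*pi*t/period)
    by ordinary least squares; returns coefficients, fitted values, residual."""
    y = np.asarray(y, dtype=float)
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t / period),
                         np.cos(2 * np.pi * t / period)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    fit = X @ beta
    return beta, fit, y - fit

def yearly_average(y, S=52):
    """Yearly-average (YA) model: mean of the observations falling on the
    same week of the season, taken over all complete seasons."""
    y = np.asarray(y, dtype=float)
    N = len(y) // S                        # number of complete seasons
    return y[:N * S].reshape(N, S).mean(axis=0)
```

The residual returned by `fit_serfling` is exactly the signal used by the residual-correlation query selection discussed below.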
We use this model with weekly data and one yearly cycle (details on data are given in Section \ref{s:eval}), resulting in the following ILI estimation model: \begin{equation}\label{eq:serf} y_t = \beta_0 + \beta_1 t + \beta_2 \sin\left(\frac{2 \pi t}{52}\right) + \beta_3 \cos\left(\frac{2 \pi t}{52}\right) + \epsilon, \end{equation} \noindent where the $\beta$s denote model coefficients and $\epsilon$ the error, i.e. residual. For the second model we use a yearly average (YA). Here the expected value of $y_t$ is calculated as the average value of $N$ seasons of ILI activity data, \begin{equation}\label{eq:avg} \hat{y}_t = \frac{1}{N} \sum_{i=0}^{N-1} y_{(t \text{ mod } S) + i \cdot S}, \end{equation} \noindent where $S$ is the season length in weeks, in our case 52. Fig. \ref{fig:serflingfit} shows the fit of the two models, i.e. the Serfling model (Eq. \ref{eq:serf}) and the YA model (Eq. \ref{eq:avg}) to historical ILI activity data. We see that the Serfling model fits the general seasonality, but not more complex patterns representing higher transmission. It does not, for example, model differences between the start and end of the ILI season. This is better captured by the YA model. \subsection{Step 2: Query selection}\label{ss:qs} Having modelled seasonal ILI variation, the second step is to approximate how well queries fit the seasonal variation of ILI activities modelled by Eq. \ref{eq:serf}-\ref{eq:avg}. We do this in two ways: \textbf{Seasonal correlation.} We compute the Pearson correlation between the query frequencies and the ILI seasonal model, i.e. Eq. \ref{eq:serf} or \ref{eq:avg}. We then select queries that are most strongly correlated to the ILI activity model. \textbf{Residual correlation.} We compute the Pearson correlation between the query frequencies and the residual between the ILI seasonal model and the historical ILI activity. We then select queries that are most strongly correlated to the residual, i.e. 
\textit{unexpected variations in ILI activity} (possible outbreaks). The four query selection methods are denoted (i) Seasonal (Serfling), (ii) Seasonal (YA), (iii) Residual (Serfling), and (iv) Residual (YA). \section{Evaluation} \label{s:eval} \paragraph{\textbf{Experimental Setup}} We experimentally evaluate our seasonality-based query selection methods using two types of data: weekly ILI activity data and Google search frequency data. The ILI activity data is from the US CDC for the period 2004-6-6 to 2015-7-11\footnote{\url{https://gis.cdc.gov/grasp/fluview/fluportaldashboard.html}} (inclusive). The CDC reports this in the form of weighted ILI activity across different US regions. ILI activity for a region corresponds to the number of ILI-related visits to health care providers relative to non-influenza weeks, e.g. an ILI activity of 2 corresponds to twice as many visits as in non-epidemic weeks. We retrieve query search frequencies from Google Correlate with the location set to the US. Specifically, we use the 100\footnote{This is a maximum number set by Google Correlate.} queries that have the highest correlation with the ILI activity from 2004-6-6 to 2009-3-29 according to Google Correlate. Google normalizes the search frequencies for each query to unit variance and zero mean, i.e. we do not know the actual search frequencies. We use the interval 2004-6-6 to 2009-3-29 because it represents a non-epidemic period (it excludes the 2009 pandemic of the H1N1 influenza virus, which caused highly irregular ILI activity). The 100 queries are shown in Tab.~\ref{tabel:ili_queries}. Only 21 of the 100 queries are related to ILI (in bold). We compare our query selection methods (Section \ref{sec:seasonal_query_selection}) to the following three baselines: (Tab. \ref{tab:resultsQuery} baseline i) uses the top-$c$ queries to estimate ILI activity, where the top-$c$ are chosen to minimise the RMSE, i.e. if we use $c+1$ queries the RMSE increases; (Tab.
\ref{tab:resultsQuery} baseline ii) uses \textbf{\textit{all}} 100 queries to estimate ILI activity; (Tab. \ref{tab:resultsQuery} baseline iii) uses no queries, only past ILI activity, i.e. an autoregressive model. For (i) and (ii), the query ranking is determined by Google Correlate. For (iii), two autoregressive models are fitted: one using 3 autoregressive terms \cite{lazer2014parable,yang2015accurate} and one with 52 autoregressive terms \cite{yang2015accurate}. This setup is similar to that of Yang {\em et al.}~\cite{yang2015accurate}. We implement baseline (iii) using Eq. \ref{eq:clinical+queries} where $m$ is set to 3 and 52 terms, respectively, and $n=0$. Similarly to \cite{lazer2014parable,yang2015accurate}, we evaluate estimation performance by reporting the root mean squared error (RMSE) and the Pearson correlation between the estimations and the observed historical ILI activity. For all runs, we use data from 2004-6-1 to 2009-3-29 for training, and data from 2009-4-1 to 2015-7-11 for testing. The training data is used to fit Eq. \ref{eq:serf}-\ref{eq:avg}, and to calculate the correlation scores as described in Section \ref{ss:qs}. Estimations are made in a leave-one-out fashion where data prior to the data point being estimated is used to fit the estimation model. Each model is retrained for every time step using the 104 most recent data points (exactly as in Yang {\em et al.}~\cite{yang2015accurate}). We determine the number of queries $n$ in Eq. \ref{eq:queries} and \ref{eq:clinical+queries} by iteratively adding the next highest ranked query, where the query rank is given either by Google Correlate (for the baselines), or by the four variants of our algorithm, specifically (i) correlate seasonal (Serfling), (ii) correlate seasonal (YA), (iii) correlate residual (Serfling), and (iv) correlate residual (YA). The models are fitted using lasso regression, where the hyper-parameter is found using three-fold cross-validation on the training set. 
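To make the pipeline concrete, the seasonal models of Eq. \ref{eq:serf}-\ref{eq:avg} and the residual-correlation ranking of Section \ref{ss:qs} can be sketched as below. This is a minimal illustration under simplifying assumptions, not our actual implementation: the function and variable names (\texttt{fit\_serfling}, \texttt{yearly\_average}, \texttt{rank\_by\_residual\_correlation}) are ours, and the lasso estimation step is omitted.

```python
import numpy as np

S = 52  # season length in weeks, as in Eq. (serf) and Eq. (avg)

def serfling_design(t):
    """Design matrix for Eq. (serf): intercept, linear trend, one yearly harmonic."""
    t = np.asarray(t, dtype=float)
    return np.column_stack([
        np.ones_like(t),
        t,
        np.sin(2 * np.pi * t / S),
        np.cos(2 * np.pi * t / S),
    ])

def fit_serfling(t, y):
    """Least-squares estimate of the Serfling coefficients beta_0..beta_3."""
    beta, *_ = np.linalg.lstsq(serfling_design(t), y, rcond=None)
    return beta

def yearly_average(y, season=S):
    """Eq. (avg): average ILI activity at each week phase over N full seasons."""
    n_seasons = len(y) // season
    return np.asarray(y)[: n_seasons * season].reshape(n_seasons, season).mean(axis=0)

def rank_by_residual_correlation(queries, residual):
    """Rank query frequency series by |Pearson correlation| with the model residual."""
    scores = {name: abs(np.corrcoef(freq, residual)[0, 1])
              for name, freq in queries.items()}
    return sorted(scores, key=scores.get, reverse=True)
```

For the Seasonal variants, the same ranking function would be applied with the fitted seasonal model values in place of the residual.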
\paragraph{\textbf{Results}} \begin{table*} \begin{tabular}{p{16cm}} \toprule \footnotesize{ florida spring, \textbf{influenza symptoms}, \textbf{symptoms of influenza}, new york yankees spring training, yankees spring training, \textbf{flu incubation}, \textbf{flu incubation period}, \textbf{flu duration}, florida spring training, spring training locations, \textbf{influenza incubation}, florida spring break, spring break dates, \textbf{flu fever}, sox spring training, new york mets spring training, \textbf{bronchitis}, red sox spring, spring training in florida, snow goose, spring break calendar, spring training in arizona, red sox spring training, mlb spring training, \textbf{flu report}, baseball spring training, mariners spring training, wrestling tournaments, spring training, golf dome, \textbf{flu recovery}, city spring, wrestling singlets, spring training sites, boys basketball, \textbf{type a influenza}, yankee spring training, spring training tickets, las vegas march, indoor track, harlem globe, spring break panama city, girls basketball, panama city spring break, cardinals spring training, ny mets spring training, ny yankees spring training, \textbf{flu symptoms}, minnesota twins spring training, concerts in march, spring training map, \textbf{tessalon}, boston red sox spring training, \textbf{flu contagious}, \textbf{symptoms of the flu}, events in march, seattle mariners spring training, singlets, \textbf{influenza contagious}, \textbf{influenza incubation period}, spring break schedule, spring vacation, \textbf{treating the flu}, college spring break, basketball boys, college spring break dates, boys basketball team, \textbf{respiratory flu}, atlanta braves spring training, \textbf{acute bronchitis}, march madness dates, spring break florida, braves spring training, college basketball standings, in march, braves spring training schedule, high school boys basketball, spring break ideas, spring break miami, banff film, addy awards, grapefruit league, 
spring clothing, spring collection, banff film festival, st. louis cardinals spring training, april weather, spring break family, red sox spring training schedule, miami spring break, nj wrestling, spring break getaways, spring break date, high school boys, march concerts, high school basketball, indoor track results, \textbf{tussionex}, globetrotters, orioles spring training}\\ \bottomrule \end{tabular} \caption{The 100 queries retrieved from Google Correlate. We treat queries in bold as ILI-related.} \label{tabel:ili_queries} \end{table*} As noted earlier, Google Correlate identifies the top-100 queries, but only 21 of these are ILI-related. Our four algorithms re-rank the 100 queries. Fig.~\ref{fig:ILIrelevant} plots the number of ILI-related queries as a function of the number of rank-ordered queries. The solid curve is based on the original ranking provided by Google Correlate. We observe that both (i) Seasonal (Serfling) and (ii) Seasonal (YA) re-rank the queries such that, in general, the ILI-related queries are ranked worse. Residual (Serfling) generally performs similarly to or worse than Google Correlate in favouring ILI-related queries. In contrast, Residual (YA) re-ranks the queries such that almost all ILI-related queries are favoured. Of the top-21 queries, 19 are ILI-related. All 21 ILI-related queries are within the top-23; the only two non-related queries in the top-23 are ranked at 19 and 21. Clearly, re-ranking queries based on Residual (YA) favours ILI-related queries much more strongly than Google Correlate or our other three variants. For each ranking of the queries, we select the top-$n$ queries that either minimise the RMSE or maximise the Pearson correlation. This is done for the Linear model of Eq.~\ref{eq:queries} and for the autoregressive model of Eq.~\ref{eq:clinical+queries}; Tab.~\ref{tab:resultsQuery} shows the results. For the Linear model (column 1), we observe that Residual (YA) performs best w.r.t. 
RMSE and Pearson correlation, though the latter difference is not significant. Note that in both cases, (i) the number of queries needed by Residual (YA) is significantly smaller than for the other three variants and (ii) the two baselines performed worse. For the autoregressive models, we observe that the Seasonal (Serfling) model performs best w.r.t. RMSE and Pearson correlation. This is achieved with relatively few queries (5, 9, or 11). However, we note that of the top-5, -9, or -11 queries only 3, 3, or 4, respectively, are ILI-related. In general, autoregressive models perform well when the signal has a strong autocorrelation. However, should the signal deviate strongly from seasonal patterns, it is unlikely that the ILI estimates would remain accurate. \begin{figure} \centering \includegraphics[width=8.5cm]{relevant_queries} \caption{Proportion of ILI-related queries in the top $n$ queries for each of the five ranking methods.} \label{fig:ILIrelevant} \end{figure} \begin{table*} { \begin{tabular}{l l rr rr rr} & & \multicolumn{6}{c}{\textbf{RMSE} \textit{(the lower, the better)}} \\ \toprule & & Linear (Eq. \ref{eq:queries}) & \#q. & Autoregressive 52 (Eq. \ref{eq:clinical+queries}) & \#q. & Autoregressive 3 (Eq. \ref{eq:clinical+queries}) & \#q. \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} Seasonal (Serfling) & & 0.398 & 97 & \textbf{0.280} & 5 & \textbf{0.280} & 11 \\ Residual (Serfling) & & 0.407 & 98 & 0.309 & 98 & 0.312 & 98 \\ Seasonal (YA) & & 0.394 & 96 & 0.311 & 100 & 0.298 & 68 \\ Residual (YA) & & 0.390 & 79 & 0.309 & 49 & 0.310 & 47 \\ Not Seasonal & \textit{baseline i} & 0.413 & 33 & 0.309 & 68 & 0.312 & 46\\ Not Seasonal all q. & \textit{baseline ii} & 0.416 & & 0.310 & & 0.314 &\\ ILI History & \textit{baseline iii} & n/a & & 0.348 & & 0.333 &\\ \\ & & \multicolumn{6}{c}{\textbf{Correlation} \textit{(the higher, the better)}} \\ \toprule & & Linear (Eq. \ref{eq:queries}) & \#q. & Autoregressive 52 (Eq. \ref{eq:clinical+queries}) & \#q. 
& Autoregressive 3 (Eq. \ref{eq:clinical+queries}) & \#q. \\ \cmidrule(lr){3-4} \cmidrule(lr){5-6} \cmidrule(lr){7-8} Seasonal (Serfling) & & 0.948 & 96 & 0.973 & 5 & \textbf{0.974} & 9 \\ Residual (Serfling) & & 0.946 & 98 & 0.968 & 100 & 0.967 & 98 \\ Seasonal (YA) & & 0.948 & 99 & 0.969 & 99 & 0.971 & 68 \\ Residual (YA) & & 0.949 & 77 & 0.969 & 59 & 0.966 & 47 \\ Not Seasonal & \textit{baseline i} & 0.942 & 86 & 0.968 & 68 & 0.967 & 46 \\ Not Seasonal all q. & \textit{baseline ii} & 0.941 & & 0.967 & & 0.967 & \\ ILI History & \textit{baseline iii} & n/a & & 0.959 & & 0.962 &\\ \bottomrule \end{tabular} } \caption{Root mean squared error (RMSE) and Pearson correlation of our seasonal ILI estimation methods and the three baselines. Bold marks the best score. \#q denotes the number of queries used in the estimation.} \label{tab:resultsQuery} \end{table*} \section{Conclusion} The incidence of influenza-like illness (ILI) exhibits strong seasonal variations. These seasonal variations cause Google Correlate to identify a large number of non-relevant queries ($\sim$80\%). Many of the relevant queries are not highly ranked. Estimating the incidence of ILI with non-relevant queries is likely to become problematic when ILI deviates significantly from its seasonal variation. We proposed a new approach to ILI estimation using web search queries. The novelty of our approach lies in re-ranking the queries derived from Google Correlate. We first developed two models of the seasonal variation in ILI. The first is an analytical Serfling model. The second is an empirical model based on yearly averages. Four methods of re-ranking queries were then examined. The first two re-rank the queries based on their correlation with the two seasonal models. The second two re-rank queries based on their correlation with the residual between the seasonal models and historical ILI activity. 
Experimental results showed that re-ranking queries based on Residual (YA) strongly favoured ILI-related queries, whereas re-ranking based on the two seasonal models, Seasonal (Serfling) and Seasonal (YA), led to rankings that were worse than those of Google Correlate. When ILI estimates were based on both queries and autoregression, the best performance was obtained when queries were re-ranked based on Seasonal (Serfling). Future work is needed to determine why, but we reason that (i) autoregressive models perform better when the signal has strong autocorrelation, i.e. is strongly seasonal, and (ii) this strong seasonality was present in our dataset, i.e. there was little deviation from the seasonal models. If, however, strong deviations did arise, we expect that models based on autoregression and on queries re-ranked by correlation with seasonal models would perform much worse. This work complements the use of information retrieval and machine learning methods in the wider area of medical and health informatics \cite{DragusinPLLJW11,DragusinPLLJCHIW13}. \paragraph{Acknowledgments.} Partially supported by Denmark's Innovation Fund (grant no. 110976).
\section{Introduction}\label{sec:intro} Galactic disk evolution implies temporal variations of the disk population and its constituents. Field stars along with star clusters represent the typical disk population, and have been the subjects of investigation into the evolutionary processes in the Milky Way for a long time. Our understanding of disk evolution is largely based on stellar data, coming from studies of the kinematics, abundances and ages of F-K dwarfs \citep[see e.g.][]{jujah10}. Aside from the advantage of the long lifetimes of late-G and K dwarfs, which exceed the age of the disk, using stars leads to a number of disadvantages as they represent the very local disk only up to a few hundred parsecs, and their ages and other evolutionary parameters are of lower accuracy than those of star clusters. On the other hand, the typical lifetime of open clusters is lower than the age of the disk, exceeding it only for initially very massive clusters. This raises a demand for studies of old open clusters, which provide insights into the early epochs of the Milky Way disk's formation and evolution. Thus the investigation of star cluster ages, in conjunction with either the dissolution history of star clusters or with the star formation history of the Milky Way, is of great interest. This is largely connected to the advances in open clusters observations, dating, and compilations of representative samples of open clusters and collection of their data into all-sky catalogues. One can mention for example studies of \citet{wiel71}, \citet{pama86}, \citet{janes88}, \citet{bcd91}, and \citet{clupop} who constructed the local cluster age distributions, and estimated present cluster formation rates (in the range 0.10-0.45 Myr$^{-1}$ kpc$^{-2}$), and typical lifetimes on the order of $100-250$ Myr. \citet{lamea} and \citet{lamgi06} proposed a model to explain the local distribution of open clusters with age. 
\citet{mora13} fitted this model to their constructed age distribution of clusters observed towards the Galactic center. In this paper, we use data from the Milky Way Star Clusters catalog MWSC \citep[][hereafter \citetalias{khea12}]{khea12}, to study cluster ages. Within the MWSC project, \citet[][\citetalias{mwscat}]{mwscat} determined various cluster parameters for 2859 clusters known in the literature, whereas \citet[][\citetalias{newari}]{newari} and \citet[][\citetalias{nchpm}]{nchpm} added 202 newly-discovered open clusters and associations. The full MWSC sample includes clusters with heliocentric distances up to 15\,kpc, with the mode at 2.4\,kpc. The ages and distances in the MWSC survey are based on cluster members with NIR photometric data from the 2MASS catalogue \citep{cat2MASS} that were fitted to the newest Padova isochrones. The basic purpose of our study is to derive the unbiased age distribution of MWSC clusters in the wider Solar Neighbourhood, and to study its variations within a spatial completeness zone extending between the Sagittarius and Perseus spiral arms. Specifically we aim at understanding if the age distributions from the arm areas differ from that in the inter-arm region. We will also fit a simple analytical model of cluster formation in the Galactic disk to the observations, in order to draw conclusions on the consistency of our main assumptions with the temporal variations of the cluster formation rate, and with the main components of the model (cluster initial mass function and a relation between cluster lifetime and the clusters' initial mass) used for model construction. The outline of the paper is as follows: Sect.~\ref{sec:data} gives a short overview of the input data and the cluster parameters obtained within the MWSC survey. In Sect.~\ref{sec:genage} we describe the working samples of data, a method of construction of the unbiased age distribution of the local clusters, and derived general results. 
In Sect.~\ref{sec:spvar} we consider spatial variations of the cluster age distribution in the Galactic plane and along the $Z$-axis. In Sect.~\ref{sec:cluhis} we construct a simple analytical model describing the evolution of open clusters during the last 5 Gyrs, fit it to the observed age distribution of local clusters and draw conclusions on the details of cluster formation. Sect.~\ref{sec:conc} summarises our results. \section{The sample and the data on cluster ages}\label{sec:data} In order to build the age distribution of a given set of objects, one has to select a complete and unbiased list of the objects, which also has to be uniformly dated by an accurate and unbiased method. For galactic star clusters, the MWSC survey is an ideal source, as it suits the requirements mentioned above. It provides a comprehensive sample of star clusters together with a number of well-determined parameters based on uniform photometric and kinematic stellar data gathered from the all-sky catalogues 2MASS \citep{cat2MASS} and PPMXL \citep{ppmxl}. A merger of these catalogues, 2MAst \citepalias[see][for a description of the 2MAst construction]{khea12}, was used to verify clusters from an input list, find new clusters and determine cluster parameters in astrometric and photometric systems that are homogeneous over the whole sky. The full MWSC sample contains 3208 objects: 3061 open and 147 globular clusters. In this study, we concentrate on the subset of open clusters. For all stars within the cluster areas, cluster membership probabilities were determined by using kinematic (proper motions) and photometric (colour magnitude diagrams) selection criteria. The procedure is described in \citetalias{khea12}, and the results are published in the MWSC catalogues \citepalias{mwscat,newari,nchpm}. The cluster ages are based on uniform cluster membership and present-day isochrones including both pre- and post-MS evolutionary stages. 
The ages were determined by fitting cluster member CMDs to isochrones computed using the Padova on-line server CMD2.2\footnote{http://stev.oapd.inaf.it/cgi-bin/cmd} \citepalias[for more details see][]{khea12,mwscat}. At metallicities typical of the Galactic disk, the isochrones only weakly depend on the metallicity. Therefore, only a single set of isochrones, with solar metallicity, was used in the determinations of the MWSC survey. Simultaneously with the cluster ages, the respective distances and reddening values were also determined from the isochrone fitting. As a result, all MWSC objects were provided with the homogeneous data necessary for this study. As shown in a comparison with the literature \citepalias[see][]{mwscat}, our ages are typically accurate within 10\% for clusters older than $\log t=8.2$ (with age $t$ in years; according to Fig.~\ref{fig:his_nage}, these comprise more than 60\% of the total survey) and within 25\% for younger clusters. The distances are accurate within 11\%. \begin{figure}[t] \centering \includegraphics[width=0.99\hsize,clip=]{cluage_fig1.eps} \caption{Distributions of MWSC clusters with age. The total sample is shown with a background (brown) hatched histogram, clusters within individual completeness limits are shown with an intermediate (blue) filled histogram, and those within the general completeness circle are shown with a foreground (green) back-hatched histogram. The vertical bars show Poisson errors. } \label{fig:his_nage} \end{figure} In total we have age determinations for 3061 star clusters and cluster-like objects (compact associations, regular, embedded, remnant and moving clusters), which we hereafter call open clusters for simplicity. The sample includes effectively all clusters previously known from the literature, with the addition of 202 new clusters. The ages of the MWSC clusters cover a considerable fraction of the age of the Galactic disk, spanning between $\log t=6$ and $\log t=9.78$. 
The cluster sample also covers a wide range of galactocentric distances $R_G$, from the Galactic centre to the outskirts of the Galactic disk at about $R_G=15$ kpc. The completeness zone (where we know virtually all clusters) has a radius of about 2 kpc, reaching the Sagittarius and Perseus spiral arms. However, the data completeness is not uniform. The statistics of extremely young and extremely old clusters is still insufficiently known. For example, the density of older clusters in the Solar Neighbourhood is lower than in the outer regions, which implies that a few tens of old clusters within about 1 kpc from the Sun \citepalias{mwscat} are missing. Also the number of the youngest clusters is still uncertain, since many of them may still be obscured by heavy gas-dust clouds in star formation sites. It should also be noted that the MWSC pipeline could not be applied to the nearest clusters, the Ursa Major moving cluster and the Hyades, where 3D motions should be used for member selection. Therefore, these two well-known clusters are not included in our sample. At the youngest ages, our star cluster sample could be biased by the pre-main sequence isochrones, which are less secure than at the MS- and post-MS stages. The age edge effect should also be considered: the used set of isochrones had a lower age limit of $\log t=6.0$, and no ages younger than this could be determined. Thus, this value was artificially assigned to potentially younger clusters. Furthermore, the quality of the determined MWSC parameters strongly depends on the distance to the cluster. At larger distances, due to a fixed faint limit of the apparent magnitude of the survey, one can only observe the tip of the cluster MS/RG-branches, and so the accuracy of the parameters diminishes. \section{Cluster age distribution}\label{sec:genage} The raw age distribution of 3061 MWSC clusters is shown in Fig.~\ref{fig:his_nage} as brown hatched histogram. 
One can see that it is dominated by old clusters with ages $\log t = 8.5\ldots9.5$. The peak is partly due to the logarithmic age scale and partly due to the NIR nature of the survey: clusters with red giants tend to be brighter than their younger counterparts, are observed at larger distances, and thus are more numerous at the limiting magnitude of the survey, as they are collected from larger areas of the disk. \subsection{Data completeness issue}\label{sec:datcompl} As cluster counts show, the MWSC can be classified as a magnitude-limited sample \citepalias[see][for details]{mwscint}. The surface density profile for such a sample can be represented schematically by a flat inner area, where the data incompleteness is low or negligible, and by a long outer tail of gradually decreasing density, which is biased by the survey incompleteness at faint magnitudes. The incompleteness can be quantified in a statistical sense as a measure of the decrease of the observed surface density with respect to the averaged local density \citep[see e.g.][]{mora13}. Note that, as a measure of distance, we hereafter use the Galactic-plane projection $d_{xy}$ of the solarcentric distance $d$. The radius of the flat area $\hat{d}_{xy}$ is then called the completeness limit of the survey. Once it is established, bias-free statistics are gathered within the completeness limit. This approach (which we hereafter call the single-limit approach) is attractive due to its simplicity and has been commonly used since the pioneering work of \citet{wiel71}, but it suffers from a bias for objects that are absolutely fainter or brighter than the clusters typical of the given sample. For example, when applying the single-limit approach to faint objects, which can only be observed near the Sun, one underestimates their density when dividing their counts by the completeness area defined by the common completeness limit. 
In contrast, since the typical distance to bright objects may exceed the completeness limit, one can lose them from the statistics altogether. This is especially important since, for a NIR survey (including the MWSC, where this statement is supported by direct statistics), the brightest objects are, as a rule, the oldest ones, and their loss leads to a bias in the early history of the disk. Therefore, to avoid important biases, which might affect the final distribution, we decided to abandon the single-limit approach and instead apply a strategy used for the construction of the stellar luminosity function, which collects stars of different absolute magnitudes from proportionally extended completeness areas. We refer to this approach as the variable completeness limit concept. Note that this approach represents a development of the single-limit approach, in which individual completeness zones are prescribed to objects from narrow absolute magnitude intervals. This strategy became possible since in \citetalias{mwscint} we determined integrated NIR magnitudes for all MWSC clusters, and built a magnitude-dependent completeness distance scale, together with its relation to galactic longitude. For the absolute integrated magnitude in the $K_S$ passband, $I(M_{K_S})$, this relation averaged over galactic longitudes can be written as \begin{equation} \hat{d}_{xy} = p - q \times I(M_{K_S})\,, \label{eq:dciksrel} \end{equation} with $p=0.80\pm0.05$ and $q=0.42\pm0.02$, where $\hat{d}_{xy}$ is in kpc. As we determined earlier by applying both approaches in \citetalias{mwscat} and \citetalias{mwscint}, the MWSC is generally complete within about 1.8-2.2 kpc from the Sun (we hereafter adopt the lower limit of this interval), where about half of all MWSC open clusters are located (green back-hatched histogram in Fig.~\ref{fig:his_nage}). 
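As a minimal numerical sketch, the individual completeness limits of Eq.~(\ref{eq:dciksrel}) can be evaluated as follows; the function names are illustrative only, and the coefficients are the longitude-averaged values quoted above.

```python
# Sketch of Eq. (dciksrel); P (kpc) and Q (kpc/mag) are the
# longitude-averaged coefficients quoted in the text.
P, Q = 0.80, 0.42

def completeness_distance(i_mks):
    """Individual completeness limit d_xy (kpc) for integrated magnitude I(M_Ks)."""
    return P - Q * i_mks

def within_completeness(d_xy, i_mks):
    """True if a cluster lies inside its own completeness zone."""
    return d_xy <= completeness_distance(i_mks)
```

For example, an intrinsically bright old cluster with $I(M_{K_S})=-8.8$ mag has a completeness limit of about 4.5 kpc, consistent with the maximum completeness distance quoted below for the oldest clusters.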
Their age distribution is still similar to the total one, though the contrast between the peak of older clusters and the younger ones becomes lower (the ratios between the values at the maxima and the shoulders at about $\log t =7.9$ are equal to 3.4 and 2.4, respectively). If one adopts the more flexible variable completeness limit, the corresponding ratio of 3.0 falls in between (see also the blue histogram in Fig.~\ref{fig:his_nage}). As seen in Fig.~\ref{fig:his_nage}, this approach does not change the shape of the distribution strongly, but brings almost 40 percent more clusters (2242 objects) into the statistics, which makes the completeness sample more representative and allows us to look at larger distances from the Sun. \begin{figure}[t] \centering \includegraphics[width=0.99\hsize,clip=]{cluage_fig2.eps} \caption{Distribution of cluster distances with age. All clusters are shown with light (blue) dots, while clusters located within their individual completeness limits computed according to Eq.~(\ref{eq:dciksrel}) are shown with black dots. The general completeness limit for the MWSC sample is given by the horizontal (red) line. A vertical yellow stripe is shown for illustration and indicates an arbitrary $\log t$-box with clusters of the two kinds falling in it. } \label{fig:dist_age} \end{figure} In Fig.~\ref{fig:dist_age} we compare both completeness approaches in the $d_{xy}$ vs. $\log t$ diagram, which has a distinctive U-shaped configuration. It is clear that the most distant clusters belong to either the youngest ($\log t < 7$) or the oldest age group ($\log t \gtrsim 8$). The single completeness limit (despite only including about half of the total objects) seems to be more selective (higher impact of a non-regular lower bound in $d_{xy}$, especially strong at the youngest and oldest ages). In the alternative case one can extend the size of the completeness area by almost a factor of two (especially for older ages, see Fig.~\ref{fig:dist_age}). 
On average, the individual completeness approach allows us to expand the total completeness area to about 3 kpc. However, this causes the upper limit of the completeness zone to become non-uniform with respect to age: at $\log t>8.5$ the completeness distance for the intrinsically brightest clusters reaches $d_{xy}=4.5$ kpc. An additional bias is introduced by the inhomogeneous distribution of MWSC clusters over the sky, which is related to the patchy distribution of interstellar extinction and in particular strongly discriminates against clusters of the Pleiades type (those deprived of bright NIR red giants). This results in a lowered cluster density in the $d_{xy}$ vs. $\log t$ diagram at $\log t < 6.6$ for young clusters associated with dust clouds, and at $\log t\sim 7.4-8.3$ for screened star clusters (preferentially red giant-deficient). This deficiency should be remembered as a source of bias in the resulting age distribution. Until the end of Sec.~\ref{sec:genage} we will use both completeness approaches, in order to ensure that they give similar results in the limiting case of the local clusters, which is also important for comparison with the literature, where almost exclusively the single completeness limit concept is used. \begin{figure}[t] \centering \includegraphics[width=0.99\hsize,clip=]{cluage_fig3.eps} \caption{Comparison of age distributions computed with different approaches to the completeness distance calculation. The distribution computed with individual completeness distances (Eq.~\ref{eq:agedv}) is shown with a histogram, while that computed with a single completeness distance common to all clusters of the survey (Eq.~\ref{eq:agedc}) is shown with green filled circles. The vertical bars show the statistical uncertainty (Poisson errors) of the bins. The red curve illustrates a smoothed histogram. 
} \label{fig:den_age} \end{figure} \subsection{Age distribution construction}\label{sec:method} We define the cluster age distribution $\eta(t)$ as the surface density of objects in a unit interval of age $t$: \begin{equation*} \eta(t)= \frac{1}{S(t)}\,\frac{\mathrm{d} N(t)}{\mathrm{d} t}\,, \end{equation*} where $\mathrm{d} N(t)$ is the number of clusters with ages between $t$ and $t+\mathrm{d} t$ residing within the completeness area $S(t)$. It is related to the more convenient logarithmic age distribution \begin{equation} \nu(t) = \frac{1}{S(t)}\,\frac{\mathrm{d}N(t)}{\mathrm{d}\log t} \label{eq:agedistr} \end{equation} via \begin{equation} \eta(t) = \frac{\log e}{t}\,\nu(t). \label{eq:etanu} \end{equation} If one adopts a single completeness limit $\hat{d}_{xy,0}=1.8$ kpc, valid for clusters of all ages (horizontal line in Fig.~\ref{fig:dist_age}), then $S(t) \equiv S_0=\pi\,\hat{d}_{xy,0}^2$, and Equation (\ref{eq:agedistr}) re-written in discrete form simply reflects the distribution of cluster numbers $\Delta_k N$ within the completeness area (i.e. below the horizontal line): \begin{equation} \nu_k= \frac{1}{S_0}\,\frac{\Delta_k N}{\Delta_k \log t}\,, \label{eq:agedc} \end{equation} where the age step $\Delta_k\log t$ can be variable. In the case of the variable completeness limit, the cluster density is computed as the sum of the partial densities $\varsigma=1/(\pi\,\hat{d}_{xy}^2)$ of clusters located within their proper completeness limits given by Eq.~(\ref{eq:dciksrel}), i.e.\ those with $d_{xy}\leqslant \hat{d}_{xy}$ (black dots in Fig.~\ref{fig:dist_age}): \begin{equation} \nu_k= \frac{1}{\Delta_k \log t}\sum_{i=1}^{\Delta_k N}\varsigma_i = \frac{1}{\pi\Delta_k \log t}\,\sum_{i=1}^{\Delta_k N} \frac{1}{\hat{d}_{xy,i}^2}\,. \label{eq:agedv} \end{equation} Here we sum over the $\Delta_k N$ black dots within the given age interval $\Delta_k \log t$. 
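In a discrete implementation, Eq.~(\ref{eq:agedv}) amounts to a weighted histogram over log-age bins, and Eq.~(\ref{eq:etanu}) converts the result to the linear-age density. A schematic version follows; the array and function names are illustrative only.

```python
import numpy as np

def nu_variable(log_t, d_xy, d_hat_xy, edges):
    """Eq. (agedv): logarithmic age distribution with individual completeness limits.

    Only clusters with d_xy <= d_hat_xy enter the statistics; each contributes
    a partial surface density 1/(pi * d_hat_xy^2) to its log-age bin.
    """
    log_t, d_xy, d_hat = map(np.asarray, (log_t, d_xy, d_hat_xy))
    inside = d_xy <= d_hat
    weights = 1.0 / (np.pi * d_hat[inside] ** 2)
    hist, _ = np.histogram(log_t[inside], bins=edges, weights=weights)
    return hist / np.diff(edges)  # divide by the bin width Delta_k log t

def eta_from_nu(nu, t):
    """Eq. (etanu): eta(t) = (log10(e) / t) * nu(t)."""
    return np.log10(np.e) / t * nu
```

With a constant $\hat{d}_{xy,i}\equiv\hat{d}_{xy,0}$, the weighted histogram reduces to plain counts divided by $S_0$, recovering Eq.~(\ref{eq:agedc}).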
Note that in the case of the constant completeness limit $(\hat{d}_{xy,i}\equiv \hat{d}_{xy,0})$ Eq.~(\ref{eq:agedv}) is naturally reduced to Eq.~(\ref{eq:agedc}). \begin{table}[t] \caption{Comparison of the present open cluster sample with literature data used for cluster age distribution construction.} \label{tab:age_sam} \begin{tabular}{llcrl} \hline\hline \noalign{\smallskip} No&Sample & $\hat{d}_{xy}$ & $N$ & Basic source \\ & & kpc & & \\ \hline \noalign{\smallskip} 1 &Wi71 & 1.0 & 70 & BF71 \\ 2 &PM86 & 1.0 & 116 & Lund3 \\ 3 &BC91\tablefootmark{a} & 2.0 & 94 & Lund5 \\ 4 &La05\tablefootmark{b} & 0.6 & 114 & COCD \\ 5 &Pi06\tablefootmark{c} & 0.85 & 259 & COCD \\ 6 &Mo13\tablefootmark{d} & 3.0\tablefootmark{e} & 143 & ATLASGAL\tablefootmark{f}\\ 7 &Present & 1.0$-$6.0 &2242 & MWSC \\ \hline \end{tabular} \tablebib{ (Wi71) \citet{wiel71}; (PM86) \citet{pama86}; (BC91) \citet{bcd91}; (La05) \citet{lamea}; (Pi06) \citet{clupop}; (Mo13) \citet{mora13};\\ (BF71) \citet{beckfen71}; (Lund3) \citet{lynga3}; (Lund5) \citet{lynga5}; (COCD) \citet{clucat}; (ATLASGAL) \citet{mora13}. } \tablefoot{\tft{a}{Subset of bright clusters $I(M_V)<-4.5$;} \tft{b}{Subset of known clusters;} \tft{c}{\textit{cmp} subset;} \tft{d}{Inner clusters with $|l|\lid60\degr,\,|b|\lid1.5\degr$;} \tft{e}{Distance limit of the representative sample;} \tft{f}{Compilation of 17 lists on the basis of \citet{daml02} catalogue ver.3.1.} } \end{table} The resulting distributions computed with help of Eqs.~(\ref{eq:etanu}), (\ref{eq:agedc}) and (\ref{eq:agedv}) are shown in Fig.~\ref{fig:den_age}. One can see that despite a considerable difference in the numbers of used objects collected from different areas (1359 in the first case and 2242 in the second one), both distributions are very similar, and for most age bins only differ within the statistical uncertainty. 
The difference at the youngest ages is due to the binning effect enhanced by poor statistics within the completeness distance (see Fig.~\ref{fig:dist_age}). However, one can see a small bias at $\log t>9$, where counts based on Eq.~(\ref{eq:agedv}) lead to a slightly enhanced (by a factor of up to about 1.5) cluster density, which is outside the statistical uncertainty. We consider this a consequence of taking into account ``far'' old clusters located beyond the common completeness limit $\hat{d}_{xy,0}$. Due to this effect, and also due to the better representation of remote clusters, which might be important for the study of spatial variations of the age distribution in the wider solar neighbourhood, we will use the approach of a variable completeness distance in the remainder of the paper. \subsection{Comparison with the literature}\label{sec:cmplit} Formerly, ages of Galactic star clusters were accumulated from the individual efforts of researchers analysing their CMDs. From time to time these non-homogeneous data were reduced to a unique scale of ages and catalogued into compiled lists. Thus the appearance of a new collection of cluster ages was usually accompanied by a follow-up study of the Galactic cluster age distribution. Nowadays this process is regularly driven by the appearance of large-scale photometric and/or kinematic surveys, which prompt the generation of new sets of cluster ages. \begin{figure}[t] \centering \includegraphics[width=0.99\hsize,clip=]{cluage_fig4.eps} \caption{Comparison of present (histogram) and literature age distributions. The vertical bars show the statistical uncertainty (Poisson errors). The six lines show age distribution samples from the literature presented in Table~\ref{tab:age_sam}. Different (green) symbols connected with thin lines show results from earlier published data. The filled circles, crosses, and triangles correspond to the Wi71, PM86, and BC91 age distributions, respectively.
The thick (red) lines correspond to more recent cluster age distributions: the dashed line shows the La05 result, the dotted line is constructed from the Pi06 sample, and the solid line corresponds to the Mo13 data. } \label{fig:cmp_litnt} \end{figure} \citet{wiel71} explored two catalogues of open cluster data \citep{beckfen71,lindoff68} and concluded that they give statistically similar age distributions. Here we consider the distribution based on the data of \citet{beckfen71}. The ages were based on the age calibration of \citet{barea69} and data on the colours and spectral classes of the brightest and bluest main-sequence stars of the clusters. The clusters containing stars with spectral classes earlier than B2 were excluded from consideration. The analysis of the spatial distribution has shown that the samples are statistically complete within a cylinder with a radius of 1 kpc. The investigation of \citet{pama86} was based on the later Lund3 catalogue \citep{lynga3}. According to their conclusion, the sample is statistically complete within 1 kpc from the Sun, where it contains 116 objects (see Table~\ref{tab:age_sam}). The ages were also taken from Lund3. Clusters younger than 10 Myr were omitted. The fifth release of the Lund catalogue \citep{lynga5} was employed for building the cluster age distribution by \citet{bcd91}, who used the subset of bright open clusters with integrated magnitudes $I(M_V)<-4.5$ mag. They found that this sub-sample could be regarded as spatially complete within 2 kpc of the Sun. Despite the assurances of completeness for the cluster samples mentioned above, more recent developments have shown that the real number of clusters in the solar neighbourhood is considerably higher than presented by \citet{beckfen71} and the Lund collections.
For example, the COCD catalogue \citep{clucat,newc109}, based on the ASCC-2.5 \citep{asccnir} survey, when re-scaled to a 1 kpc completeness distance, contains about three times more clusters than assumed by the previous studies. The MWSC survey with an average completeness limit of 1.8 kpc contains 400 objects within 1 kpc. Recently, \citet{mora13} considered an age distribution of star clusters from the inner Galactic disk. They have compiled a list of 695 known embedded and optical clusters located within the limits of the sub-millimetre survey ATLASGAL ($|l|\lid60\degr,\,|b|\lid1.5\degr$). They studied the completeness of the constructed sample and found that it is complete within 1 kpc from the Sun, and that it can be regarded as representative within 3 kpc. \begin{figure}[t] \centering \includegraphics[width=0.50\hsize,clip=]{cluage_fig5a.eps} \includegraphics[width=0.44\hsize,clip=]{cluage_fig5b.eps} \caption{Positions of the cluster spatial sub-samples considered. The left panel shows the ``planar'' samples and the right panel shows the ``vertical'' ones. Gray dots show all MWSC clusters, while coloured dots correspond to the completeness samples. Red, magenta, green, blue, and brown dots correspond to the Inner, Outer, Local, Thin-disk, and Thick-disk sub-samples, respectively. Their limits are shown with dotted lines. Yellow dots indicate the rest of the general completeness sample described in Sect.~\ref{sec:genage}. The big plus sign marks the Galactic centre. Thick curves show approximate positions of the spiral arms \citep[as taken from][]{benjam08}. } \label{fig:spat_smp} \end{figure} The aforementioned data samples are summarised in Table~\ref{tab:age_sam}, where we list the identifier of the sample (second column), the adopted completeness distance (third column), the reported number of clusters used for the age distribution construction (fourth column), and the basic source of open cluster data.
The comparison of the above distributions with the present data is shown in Fig.~\ref{fig:cmp_litnt}. The thick curves correspond to the determinations based on recent data. In general they show better agreement with the present determination than the thin curves corresponding to earlier publications, which demonstrate a general underestimation of the cluster density. We attribute this to the insufficient completeness of these samples and to the already mentioned additional selection constraints imposed on the samples. The recent samples show general agreement with the present distribution for ages $\log t<8.7$, and an increasing deficiency at older ages. We attribute this bias to the NIR nature of the MWSC, which allows a better representation of old clusters containing bright red giants (see Sect.~\ref{sec:datcompl} for details). This might also explain the better agreement between our data and those of \citet{mora13}, also based on a survey including infrared data. We note an excess in the Mo13 distribution at young ($\log t<7.3$) ages. Among other reasons this could be caused by an enhanced cluster formation rate in the recent past in the inner Galactic disk. Unfortunately, \citet{mora13} did not provide details of the construction of their age distribution, so we cannot discuss this feature as a possible consequence of their data analysis. Therefore, we postpone the discussion until Sect.~\ref{sec:spvar}, where we consider the issue of spatial variation of cluster age distributions of the MWSC clusters. \begin{figure*}[t] \includegraphics[width=0.69\hsize,clip=]{cluage_fig6.eps} \parbox[b]{0.30\hsize}{ \caption{Comparison of age distributions of various radial samples. The left, middle, and right panels show the Local, Inner, and Outer sample distributions, respectively. The thick red line is a smoothed Local distribution. It is plotted in the middle and right panels for comparison.
The green curve in the left panel is a smoothed age distribution for the entire complete sample shown in Fig.~\ref{fig:den_age}.}\label{fig:dnrgvar} } \end{figure*} Our general conclusion from comparison with the literature is that the agreement of $\nu(t)$ with recent data is satisfactory and the disagreement in the details is understandable. We also interpret the poor agreement with earlier results as a consequence of stronger incompleteness in the input catalogues, aggravated by additional selection constraints. \section{Spatial variations of cluster age distribution}\label{sec:spvar} From Fig.~\ref{fig:dist_age} it is clear that, depending on cluster age, the MWSC sample is complete at solar-centric distances from 2 to 4 kpc. This allows us to trace variations in the cluster age distribution in a wide range of Galactocentric distances from about 6 to 12 kpc (which covers a significant part of the Galactic disk radius), and over the complete extent of the disk in the direction perpendicular to its plane. Taking into account the dependence of the completeness distance on age, we have to note that the complete age range can only be covered for the local solar neighbourhood closer than 2 kpc. The full span of the aforementioned distances is only available for clusters older than $\log t \approx 8.3$. This excludes the data on the radial dependence of the recent cluster formation rate from our consideration, but still allows us to look for radial variations in the deeper history of cluster formation.
\begin{table}[b] \caption{Spatial parameters of cluster samples}\label{tab:ssprm} \begin{tabular}{@{}r@{ }lc@{ }c@{ }r@{}rc@{}r} \hline\hline \noalign{\smallskip} &Sample & $p$ & $q$ &\mc{2}{c}{Range\tablefootmark{a}}& Mean & $N_{obj}$\\ & & & & & & position\tablefootmark{a}& \\ & & kpc &kpc/mag &\mc{2}{c}{kpc} & kpc & \\ \hline \noalign{\smallskip} 1&Complete & 0.80 & $-$0.42 & 4.2, &12.0 & $\;\;$8.6 & 2242 \\ 2&Inner & 1.09 & $-$0.28 & 6.6, &7.3 & $\;\;$7.1 & 254 \\ 3&Local & 0.80 & $-$0.42 & 8.2, &8.9 & $\;\;$8.5 & 467 \\ 4&Outer & 0.56 & $-$0.57 & 10.0, &10.7 & $\;$10.2 & 288 \\ \hline \noalign{\smallskip} 5&Thin disk\tablefootmark{b} & 0.80 & $-$0.42 &$-$0.22,&0.18 & $-0.02$ & 750 \\ 6&Thick disk\tablefootmark{b} & 0.80 & $-$0.42 &$-2.04$,&$-0.43$& $-0.67$ & 95 \\ & & & & 0.39, &1.93 & $\;\;$0.65 & \\ \hline \end{tabular} \tablefoot{\tft{a}{Samples 1-4 are in galactocentric radius, while samples 5,6 are in $Z$-coordinate.} \tft{b}{Local sub-sample.}} \end{table} \subsection[]{Defining the spatial sub-samples}\label{sec:spasam} In order to investigate the spatial stability of the age distribution in the Galactic disk we divided our completeness sample into a few spatially limited groups. Our division represents a compromise between reaching maximum spatial separation of the groups and keeping their populations sufficient for reliable statistics. We have considered ``planar'' and ``vertical'' divisions. Following geometrical considerations, we selected our planar sub-samples in radial rings of given Galactocentric radii $R_G$. We include in the planar groups all clusters regardless of their distance from the Galactic plane. The ``vertical'' groups are separated with respect to their position along the $Z$-axis by horizontal layers parallel to the Galactic disk plane.
As a result, we have constructed five spatial sub-samples characterising the cluster population in areas spanning $R_G\approx7$ to 11 kpc, denoted here as the Inner, Local, Outer, Thin-disk, and Thick-disk sub-samples. Their parameters are shown in Table~\ref{tab:ssprm}, where we also show data for our completeness sample discussed in Sect.~\ref{sec:method} for comparison. In order to keep the general sampling approach we selected only local clusters for the vertical samples. To increase the statistics of Thick-disk clusters the width of the shell was increased to 1.5 kpc, as shown in the right panel of Fig.~\ref{fig:spat_smp}. We should note that an indication of non-isotropic behaviour in the completeness parameters $p$ and $q$ of Eq.~(\ref{eq:dciksrel}), which leads to variations of the completeness distance with Galactic longitude, was already found in \citetalias{mwscint}. As the Inner and Outer sub-samples reside close to the border of the completeness zone, the issue of completeness becomes especially important, so for these groups of clusters we decided to apply the values of the $p$ and $q$ coefficients determined in \citetalias{mwscint} specifically for these directions. This is why the completeness distances do not coincide for the Inner and Outer samples, being shorter towards the Galactic centre, and longer in the opposite direction. For the other sub-samples we used the general values of $p$ and $q$ as shown in Table~\ref{tab:ssprm}. In Table~\ref{tab:ssprm} the radial or vertical limits of the groups, their average $R_G$ and $Z$-coordinate, and the number of included clusters are also provided. In Fig.~\ref{fig:spat_smp} we illustrate the spatial distribution of the constructed samples in the $(X,Y)$ and $(Z,X)$-planes. To give an impression of the covered portion of the Galactic disk we mark the position of the Galactic Centre and the approximate locations of the Galactic spiral arms given by \citet{benjam08}.
As seen from the plot, the Inner and Outer samples roughly coincide with the positions of the Sagittarius and Perseus spiral arms, while the Local sample represents the inter-arm cluster population. \subsection[]{Planar variations}\label{sec:plavar} In Fig.~\ref{fig:dnrgvar} we compare the age distributions of clusters from the three samples with different Galactocentric radii. The Local sample shows an $\eta(t)$ close to the general distribution that was discussed in the previous section. It covers nearly the same range of ages, with the exception of the oldest clusters in the last bin with $\log t \approx 9.7$. We attribute this to the spatial sparseness of the oldest clusters, requiring a considerable extension of the area to collect a sufficient number of these objects. It can also be seen that the number of clusters younger than a few hundred million years significantly (by a factor of 1.5-2) exceeds that of the general distribution. This is related to a bias due to incompleteness at the edge of the completeness zone and will be discussed in more detail below. \begin{figure}[t] \includegraphics[width=\hsize,clip=]{cluage_fig7.eps} \caption{Comparison of age distributions for thin-disk (left) and thick-disk (right) cluster populations shown with filled histograms. The thick red curve, as in Fig.~\ref{fig:dnrgvar}, shows the smoothed local distribution integrating all local shell clusters residing at different $Z$-coordinates.}\label{fig:dndtvert} \end{figure} The Inner clusters show a distribution similar to that of the Local sample, though with some differences. The Inner age distribution shows a deficiency in the moderately young cluster domain of $\log t\approx 7.7-8.7$. For younger ages, both distributions agree well, but in the older age domain $\log t\gtrsim 9$, the Inner distribution shows a considerable excess with respect to the Local one.
The Outer distribution also roughly resembles the basic features of the Local distribution and, similar to the Inner sample, exhibits a dip at moderate ages. However, unlike the Inner clusters, an enhancement at older ages is not observed. Lastly, in contrast to both the Inner and Local distributions, the Outer one shows a deficiency of young clusters with $\log t\approx7$, together with a total absence of the youngest clusters ($\log t<6.2$), which are abundantly present in the Local and Inner samples. All three samples show general agreement of the distributions representing both the inter-arm space and two different spiral arms. The different details observed in the distributions may reflect both different cluster formation histories and sampling biases. However, the excess of older clusters in the Inner sample is unlikely to be a sampling bias; more probably it reflects a higher cluster formation rate in the past in the inner disk. The deficiency of the youngest clusters in the outer disk may be due to a lower present cluster formation activity in the Perseus arm, and/or to more difficult observing conditions for younger clusters behind heavy nearby clouds in the Perseus-Taurus-Auriga region. At the same time, it seems that the cluster formation histories of the Local and Outer samples were similar. We interpret the intermediate-age dip as evidence of the increasing incompleteness among Pleiades-type clusters (those lacking bright stars, and especially red giants) at the edges of the completeness zone, as illustrated by Fig.~\ref{fig:dist_age}. \subsection[]{Vertical variations}\label{sec:vervar} In Fig.~\ref{fig:dndtvert} we show the age distributions of clusters of the ``vertical'' samples. In contrast to the general similarity shown by the ``planar'' samples (see Fig.~\ref{fig:dnrgvar}), the ``vertical'' samples demonstrate a dramatic, but unsurprising, disagreement.
As expected, the thin-disk distribution agrees closely with the ``planar'' samples, while the thick-disk distribution is completely deprived of young clusters ($\log t<8.4$), with intermediate-age objects representing only a small fraction of the thick-disk population. For example, the fraction of objects with $\log t< 9$ is less than 20\% for the thick disk, while for the thin disk it exceeds 90\%. Both distributions complement each other when reproducing the total age distribution of the disk clusters, and can be regarded as representatives of different populations having different formation histories. \section{Cluster formation history}\label{sec:cluhis} In this section we present a simple analytic cluster formation and destruction model in order to discuss the impact of the different input parameters on the observed age distribution. The age distribution of star clusters directly reflects their formation history only in the regime where their lifetimes $\tau$ exceed the look-back time. Since the observed age distribution is monotonically declining, a consequence would be an increasing cluster formation rate (hereafter CFR) in the recent past. Otherwise their present-day distribution is distorted by destruction processes of existing generations. The process is quite similar to that observed in the world of stars, with the exception that stellar lifetimes decrease with mass, while those of star clusters increase. In general the cluster age distribution depends on the CFR, the cluster initial mass function (CIMF) and the cluster lifetime, which is a function of the initial cluster mass and will be parametrised by the cluster lifetime-mass relation (LTMR). Since reliable and unbiased cluster masses are not available to date, the details of the gradual cluster dissolution do not directly enter the present-day age distribution. The LTMR describes the observability of clusters, where we allow for an initial-mass-dependent maximum age of the clusters.
For a proper interpretation of the observed age distributions these dependencies have to be taken into account. Our model has some free parameters, which should be optimised when the model is fit to the empirical age distributions. \subsection{The model}\label{sec:model} Let $\xi(M,T)\,\mathrm{d}M\,\mathrm{d}T$ be the number of clusters with initial masses $M,\,M+\mathrm{d}M$ formed in the time interval $T,\,T+\mathrm{d}T$ \begin{equation*} \xi(M,T) \equiv \frac{\partial^2N}{\partial T\partial M}. \end{equation*} The time $T$ is counted from the moment of formation of the open cluster subsystem of the Galactic disk that is still observable. To be definite we assume that this moment corresponds to the formation of the oldest cluster in the completeness sample with $\log t_{max}=9.68$\footnote{In fact this limit corresponds to the value of the last bin in the age distribution and is slightly lower than the age of the oldest cluster.}. In this scale the present moment of time $T_p$ equals 4.8 Gyr. The mass $M$ corresponds to the initial mass of the cluster; the actual cluster mass normally decreases during cluster evolution due to mass loss driven by various processes. In the literature, starting with the seminal works of \citet{salp55} and \citet{mschm59}, $\xi(M,T)$ is called the ``cluster formation function'' and is typically represented as a product of two independent functions of mass and time \begin{equation*} \xi(M,T) = \psi(T)\,f(M)\,, \end{equation*} where $\psi(T)$ is the CFR, and $f(M)$ is the CIMF. The CIMF is normalised to unity over the whole mass range $[M_{min},M_{max}]$ \begin{equation} \int^{M_{max}}_{M_{min}} f(M)\, \mathrm{d}M = 1\,, \label{eq:norm} \end{equation} and while $\psi(T)$ gives the number of clusters formed per time interval, $f(M)$ weights it with cluster mass.
\begin{figure}[t] \begin{center} \includegraphics[width=0.9\hsize,clip=]{cluage_fig8.eps} \caption{The lifetime-mass relations LTMR used in the model (red) compared to the \citet{lamgi06} relation with 100 $M_\sun$-remnant (blue). The solid red line corresponds to under-filled clusters, the dashed line represents filling Roche lobe models, and the dotted line is for overfilled models with $T_0=500$ Myr. The two thin, vertical lines indicate lower and upper limits of the cluster mass range. The horizontal solid line at $\log \tau=7.8$ illustrates the integration range $[M_t,M_{max}]$ for the filled model.}\label{fig:mltrcmp} \end{center} \end{figure} With these definitions and $\tau(M)$, the lifetime of a cluster with initial mass $M$ (the LTMR), one can easily build a theoretical distribution of cluster ages in terms of the cluster formation history. We take into account that the relation between the moment of cluster formation $T$ and its current age $t$ is $T=T_p-t$. The number $\mathrm{d}N$ of clusters with mass $M,\,M+\mathrm{d}M$ formed at a moment $T,\,T+\mathrm{d}T$ and not dissolved until the present is equal to $\xi(M,T)\,\mathrm{d}M\,\mathrm{d}T$ if $\tau(M) \geqslant t$, and 0 otherwise. For simplicity we use hereafter the notation $M_t$, denoting the solution of the equation $\tau(M)=t$, corresponding to the minimum mass of presently observed clusters with age $t$. For clusters of all masses born at $T,T+\mathrm{d}T$ the observed number is expressed as \begin{equation*} \mathrm{d}N = \int^{M_{max}}_{M_{min}}\xi(M,T)\,\mathrm{d}M\,\mathrm{d}T - \int^{M_t}_{M_{min}}\xi(M,T)\,\mathrm{d}M\,\mathrm{d}T, \end{equation*} where the second integral corresponds to the number of dissolved clusters at the moment $T_p$. 
Changing to cluster ages (= look-back time), the present-day age distribution becomes \begin{eqnarray} \eta(t)=\frac{\mathrm{d}N(t)}{\mathrm{d}t} &=& \int^{M_{max}}_{M_t} \xi(M,T_p-t)\, \mathrm{d}M \nonumber \\ \label{eq:modeta} &=& \psi(T_p-t)\,\int^{M_{max}}_{M_t} f(M)\, \mathrm{d}M. \end{eqnarray} As the age $t$ increases, the lower integration limit $M_t$ increases from $M_{min}$ to $M_{max}$, while the integral decreases from unity to zero. At young ages it is clear that $\eta(t)$ is close to the CFR, but for old clusters it is highly affected by the assumptions on $f(M)$ and $\tau(M)$. \begin{table}[t] \caption{Fitted models and best fit parameters for the CIMF} \label{tab:fitcimf} \begin{tabular}{llllllll} \hline\hline \noalign{\smallskip} Model &$N_{it}$&$N_{fd}$&$\chi^2_n$ &$x_1$ &$\sigma_{x_1}$&$x_2$&$\sigma_{x_2}$\\ \hline \noalign{\smallskip} $u$ & 10 & 36 & 1.343 & 0.39 & 0.18 & 0.54 & 0.05 \\ $f$ & 10 & 36 & 1.403 & 0.63 & 0.15 & 0.24 & 0.05 \\ $o$\tablefootmark{a}& 177 & 33 & 1.524 & - & - & 0.07 & 0.05 \\ \hline \end{tabular} \tablefoot{\tft{a}{One section CIMF.}} \end{table} Equation (\ref{eq:modeta}) fully determines the cluster population model describing the theoretical age distribution. The main components of the model are the cluster initial mass function, the cluster formation rate, and the cluster lifetime, along with their parameters. One can see from Eq.~(\ref{eq:modeta}) that the theoretical $\eta(t)$ depends on the specific representations of $f(M)$, $\psi(T)$ and $\tau(M)$, and on the parameters of these functions. In earlier studies, different approaches were proposed, depending on the purpose of the study and the available data for the components. Below we briefly describe the adopted representation for every component. The theoretical age distribution was fit to the empirical one with the help of the powerful and flexible routine MPFIT from the IDL library of \citet{markwdt09}.
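As a numerical illustration of Eq.~(\ref{eq:modeta}) (a sketch only; the parameter values below are chosen for demonstration and are not the fitted values), the survival integral and the resulting $\eta(t)$ can be evaluated for a single power-law CIMF combined with the power-law lifetime-mass relation adopted below:

```python
import math

# Illustrative parameter values only, NOT the fitted values of the paper.
M_MIN, M_MAX = 2.5, 6.3e4          # CIMF mass range, solar masses
T0, M0, S = 500.0, 250.0, 0.3      # LTMR: tau = T0*(M/M0)**S, in Myr
X = 1.0                            # single CIMF slope, f(M) ~ M**-(X+1)
T_P = 4.8e3                        # present epoch in Myr (log t_max = 9.68)

def cimf_norm():
    """Normalisation constant of f(M) over [M_MIN, M_MAX] (Eq. norm)."""
    return X / (M_MIN ** -X - M_MAX ** -X)

def survival_fraction(t):
    """Integral of f(M) from M_t to M_MAX, where M_t solves tau(M) = t:
    the fraction of clusters born at look-back time t still observable."""
    M_t = M0 * (t / T0) ** (1.0 / S)
    M_t = min(max(M_t, M_MIN), M_MAX)
    return cimf_norm() * (M_t ** -X - M_MAX ** -X) / X

def eta_model(t, psi):
    """Present-day age distribution of Eq. (modeta) for a given CFR psi."""
    return psi(T_P - t) * survival_fraction(t)
```

At young ages the survival fraction is unity, so $\eta(t)$ simply traces the CFR; at ages beyond $\tau(M_{max})$ it drops to zero, as stated in the text.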
For the cluster formation rate CFR we use an exponential function \begin{equation} \psi(T) = \alpha + \beta\exp\left(\gamma \frac{T_p-T}{T_p}\right). \label{eq:cfr} \end{equation} Prior to selecting this particular form for the CFR, we tested various other forms (linear, rational, power law, etc.) and found that the resulting goodness of fit does not differ strongly and there is no clear preference. Nevertheless, they differ in detail, and we decided to present here the form giving a small specific residual $\chi^2_n$. In some regions of the parameter space $\gamma$ is strongly correlated with $\alpha$ and $\beta$, so the iteration does not converge properly. In these cases $\gamma$ is fixed at a few different values and the best-fit results obtained by optimising the parameters $\alpha$ and $\beta$ are compared. Eq.~(\ref{eq:cfr}) reproduces a CFR monotonically decreasing in time if $\beta$ and $\gamma$ are positive. At the initial moment we have $\psi(0)\equiv\psi_0=\alpha+\beta\,\mathrm{e}^\gamma$, while the present-time CFR is equal to $\psi(T_p)\equiv\psi_p=\alpha+\beta$. The average CFR $\psi_a$ can then be expressed as \[ \psi_a = \frac{1}{T_p}\int_0^{T_p} \psi(T)\, \mathrm{d}T = \alpha + \frac{\beta}{\gamma}\,(\mathrm{e}^\gamma-1). \] As a measure of the variations of the CFR we use two ratios: $\psi_0/\psi_p$ and $\psi_a/\psi_p$.
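The quantities $\psi_0$, $\psi_a$, $\psi_p$ and the two diagnostic ratios follow directly from Eq.~(\ref{eq:cfr}); a short numerical sketch (ours, in Python) is:

```python
import math

T_P = 4.8e3  # present epoch in Myr, as adopted in the text

def cfr(T, alpha, beta, gamma):
    """Exponential cluster formation rate of Eq. (cfr)."""
    return alpha + beta * math.exp(gamma * (T_P - T) / T_P)

def cfr_summary(alpha, beta, gamma):
    """Initial, average and present-day CFR plus the diagnostic ratios
    psi_0/psi_p and psi_a/psi_p, using the closed-form average."""
    psi_0 = alpha + beta * math.exp(gamma)
    psi_p = alpha + beta
    psi_a = alpha + beta / gamma * (math.exp(gamma) - 1.0)
    return psi_0, psi_a, psi_p, psi_0 / psi_p, psi_a / psi_p
```

Note that $\psi_p=\alpha+\beta$ is a small difference of two larger numbers, so the ratios are very sensitive to rounding of the tabulated parameters.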
\begin{figure*}[t] \includegraphics[width=0.325\hsize,clip=]{cluage_fig9a.eps} \includegraphics[width=0.325\hsize,clip=]{cluage_fig9b.eps} \includegraphics[width=0.325\hsize,clip=]{cluage_fig9c.eps}\\ \includegraphics[width=0.325\hsize,clip=]{cluage_fig9d.eps} \includegraphics[width=0.325\hsize,clip=]{cluage_fig9e.eps} \includegraphics[width=0.325\hsize,clip=]{cluage_fig9f.eps}\\ \includegraphics[width=0.325\hsize,clip=]{cluage_fig9g.eps} \includegraphics[width=0.325\hsize,clip=]{cluage_fig9h.eps} \includegraphics[width=0.325\hsize,clip=]{cluage_fig9i.eps} \caption{Results of the model fit to the observed age distribution for the local cluster sample for $u$-, $f$-, and $o$-models (from top to bottom). The left column shows the fit results, the middle column shows the respective CFRs, and the right column displays the derived CIMFs. The histogram shows the observed age distribution with Poisson errors indicated by vertical error bars. The violet curves represent the fitted models, where the thick portion indicates the fitted range, and the thinner dotted one extends the derived law. Red curves are the model CFRs, thick blue lines are the CIMFs and thick blue dotted lines represent their initial approximations. The blue horizontal lines indicate the present (solid), initial (dashed), and average (dotted) model CFRs. Vertical lines indicate the mass range limits (red), and selected $M^*$ (blue) value used.}\label{fig:ufores} \end{figure*} In principle the CIMF agrees with the present day mass distribution of very young clusters, before they suffer from mass dissolution. 
A simple representation of the CIMF is given by a broken power law with two sections \begin{equation} f(M) = \frac{\mathrm{d}N}{\mathrm{d}M} = \begin{cases} k_1\,M^{-(x_1+1)} & \text{for $M_{min}\leqslant M < M^*$,}\\ k_2\,M^{-(x_2+1)} & \text{for $M^* \leqslant M \leqslant M_{max}$.} \end{cases} \label{eq:cimf2} \end{equation} The constants $k_1,k_2$ are determined by the continuity (or the jump) at $M^*$ and the normalisation of the CIMF with Eq.~(\ref{eq:norm}). The parameters $x_1$, $x_2$ are determined from the model fit to the observed age distributions, with initial values $x_1=-0.15$ and $x_2=1.0$. The mass ranges are fixed at the following values: $M_{min}=2.5\,M_\sun$, $M_{max}=6.3\,\times\,10^4\,M_\sun$ and $M^*=100\,M_\sun$. \begin{table*}[t] \caption{Best fit parameters for the CFR}\label{tab:fitcfr} \begin{tabular}{lllllllllllllll} \hline\hline \noalign{\smallskip} Model&$\alpha$&$\sigma_\alpha$ &$\beta$ &$\sigma_\beta$ &$\gamma$ &$\sigma_\gamma$ &$\psi_0$&$\sigma_{\psi_0}$&$\psi_a$&$\sigma_{\psi_a}$&$\psi_p$&$\sigma_{\psi_p}$&$\psi_a/\psi_p$ &$\psi_0/\psi_p$\\ \hline \noalign{\smallskip} $u$ & $-0.55$& $0.10$ & $0.57$ & $0.10$ &1.00\tablefootmark{b}& - & $1.00$ & $0.29$ & $0.43$ & $0.20$ & $0.02$ & $0.14$ & $22.2$& $51.7$\\ $f$ & $-0.37$& $0.05$ & $0.40$ & $0.05$ &1.00\tablefootmark{b}& - & $0.72$ & $0.15$ & $0.32$ & $0.10$ & $0.03$ & $0.08$ & $9.8$ & $22.0$\\ $o$ & $0.12$ & $0.02$ & $1.38$\tablefootmark{a}& $0.22$\tablefootmark{a}&31.24 &0.00& $0.63$ & $0.08$ & $0.14$ & $0.02$ & $0.12$ & $0.02$ & $1.1$ & $5.1$ \\ \hline \end{tabular} \tablefoot{\tft{a}{In units of $10^{-14}$; }\tft{b}{Fixed.}} \end{table*} The third input function is the lifetime-mass relation (LTMR). It is well-known from numerical simulations that the LTMR strongly depends on the initial conditions of star clusters after gas removal.
Here we use a simple parametrisation covering the results based on the N-body calculations of \citet{ernstea15}, who studied the dissolution of star clusters of different initial Roche volume filling factors covering a large cluster mass range. They consider cases of under-filled, filled, and overfilled Roche lobes. They have shown that the lifetime scales with a power of the relaxation time, with a decreasing power-law index for larger filling factors. We use here a simple power law of the initial mass \begin{equation*} \tau = T_0 \left(\frac{M}{M_0}\right)^{s}, \end{equation*} with $s=0.9, 0.6, 0.3$ for the under-filling (u), filling (f) and overfilling (o) cases, respectively. The filling case is also very close to the parametrisation of \citet{lamgi06}. The respective relations are shown in Fig.~\ref{fig:mltrcmp} together with the earlier results of \citet{lamgi06}. Since the issue of the Roche volume filling factor is not solved (in particular it is not clear how many clusters follow the extremely compact or the extended models), we will not use the parameters of the LTMR for the optimisation of the model. Instead, we simply compare the fit results for different filling cases with fixed $T_0=200$\,Myr for the under-filling and $T_0=500$\,Myr for the overfilling case at $M_0=250\,M_\sun$. The larger $T_0$ for the overfilling case is necessary to reach a maximum age of a few Gyr for the most massive clusters. \subsection{Results of the model fit}\label{sec:fitreslt} The best fit parameters determined for the three cases of under-filling, filling and overfilling clusters are listed in Tables~\ref{tab:fitcimf} and \ref{tab:fitcfr}. The resulting fits are shown in Fig.~\ref{fig:ufores} (left column).
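The input functions entering the fit, the two-section CIMF normalisation (Eqs.~\ref{eq:norm} and \ref{eq:cimf2}) and the LTMR with its inversion $M_t$, can be summarised in a short numerical sketch (our illustration, not the fitting code; the function names are ours):

```python
import math

def cimf_constants(x1, x2, M_min=2.5, M_star=100.0, M_max=6.3e4):
    """Constants k1, k2 of the two-section CIMF (Eq. cimf2): continuity
    at M_star gives k2 = k1*M_star**(x2-x1), and Eq. (norm) sets the
    overall scale. Assumes x1, x2 != 0."""
    I1 = (M_min ** -x1 - M_star ** -x1) / x1   # integral of M**-(x1+1), section 1
    I2 = (M_star ** -x2 - M_max ** -x2) / x2   # same for section 2
    k1 = 1.0 / (I1 + M_star ** (x2 - x1) * I2)
    return k1, k1 * M_star ** (x2 - x1)

def tau(M, T0, s, M0=250.0):
    """Lifetime-mass relation tau = T0*(M/M0)**s, in Myr."""
    return T0 * (M / M0) ** s

def mass_at_age(t, T0, s, M0=250.0):
    """Inverse of tau(M): minimum initial mass M_t observable at age t."""
    return M0 * (t / T0) ** (1.0 / s)
```

With the overfilling parameters ($s=0.3$, $T_0=500$\,Myr) the most massive clusters reach $\tau(M_{max})\approx2.6$ Gyr, illustrating why the larger $T_0$ is needed in that case.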
For each model, the required number of iterations $N_{it}$, the number of degrees of freedom $N_{fd}$, the specific (per degree of freedom) $\chi^2_n$ parameter describing the goodness of fit, and the slopes of the CIMF with their errors are provided in Table~\ref{tab:fitcimf}. For overfilling clusters, a one-section representation of the CIMF is used with an initial slope $x=1.0$. In Table~\ref{tab:fitcfr} we list the CFR parameters $\alpha$, $\beta$, $\gamma$, the initial $\psi_0$, average $\psi_a$, and present-day $\psi_p$ values, and their ratios. For the $u$- and $f$-models we use the full range of available ages for fitting. However, the $o$-model was fit at $\log t\lesssim 9.3$ since overfilling clusters with older ages should not exist for $M\leqslant M_{max}$ (see Fig.~\ref{fig:mltrcmp}). For the under-filling and filling cases we find a very good representation of the observed age distribution. In both cases we find a strong decrease in the CFR (see Fig.~\ref{fig:ufores}, middle column) and a relatively shallow power law of the CIMF at high cluster masses (Fig.~\ref{fig:ufores}, right panels). We have tested the fits with different values of $\gamma$ in the CFR and $T_0$ in the LTMR, but the results were all very similar. In contrast, the overfilling case does not yield a satisfactory fit with a 2-slope CIMF and fixed $\gamma=1$. With a 1-slope CIMF and free parameter $\gamma$ we find a reasonable fit (after increasing $T_0$ to 500\,Myr). The resulting CFR shows a strong initial peak on top of a roughly constant value that dominates for most of the time. The main reason is that in this case the cluster lifetimes cover only a range of less than 1\,dex, which cannot reproduce the continuous decline of the observed age distribution over 2\,dex with a monotonically declining CFR. The deviation of the fitted CIMF from its initial approximation is large in all three cases. It increases with a decreasing power-law index $s$ of the LTMR, i.e.
with an increasing filling factor of the clusters, leading to a very shallow function at the high mass end. \subsection{Discussion}\label{sec:discus} The simple model for the three input functions, CIMF, CFR, and LTMR, described in the previous sub-section, yields a few fundamental conclusions for the observed present-day cluster sample. \begin{itemize} \item The age distribution alone cannot disentangle the impact of the three input functions. For a better understanding of cluster formation and evolution, a 2-dimensional fit of the mass-age distribution would be helpful. But this requires a parametrisation of the cluster mass evolution $M(t)$, replacing the simple function for the cluster lifetime. \item The fits to the observed age distribution for the different Roche volume filling factors are indistinguishable. However, shallower LTMRs require a sharper peak in the CFR at the oldest ages. \item In the framework of our simple model the CFR is not proportional to the field SFR as derived by \citet{aumebin09} or \citet{jujah10}. \item The large fraction of clusters with intermediate age, combined with the strong decrease above an age of 1\,Gyr, requires a large fraction of high mass clusters, i.e. a shallow CIMF at the high mass end, with $x_2$ significantly smaller than unity. \item We do not find a significant break of the CIMF towards a flatter slope at low masses. One reason could be the extrapolation to very short lifetimes at the low mass end. On the other hand, infant mortality or a significant mass loss due to the expulsion of gas after cluster formation is not taken into account here \citep[see e.g.][for violent relaxation of young clusters]{shukurea17}. \item If the high mass slope of the CIMF is fixed to $x_2=1$, a satisfactory fit of the age distribution cannot be obtained. The basic assumption of a universal CIMF may need to be relaxed. The CIMF could depend on properties of the disc (gas fraction, stability) via a maximum cluster mass.
\end{itemize} For a deeper understanding of cluster formation and evolution an extensive parameter study of the CIMF, the CFR, and the cluster mass evolution (instead of the cluster lifetime) is necessary. The theoretical predictions should then be compared to the 2-dimensional mass-age distribution. From the observational side, biases in terms of incompleteness and cluster mass determinations need to be understood in more detail. \section{Summary and conclusions}\label{sec:conc} In this study we constructed and investigated the age distributions of clusters using data from the all-sky survey of Galactic open clusters MWSC, which provides uniform and accurate ages as well as other relevant parameters such as distances and reddenings. For assembling the distribution, we use a total of 2242 clusters located within the completeness radius of about 2.5 kpc from the Sun. Our sample is one order of magnitude larger than any previous sample used for age distribution analysis. Comparison with the literature shows that earlier results published in the 1980s-1990s strongly underestimate the fraction of evolved clusters with ages $\log t\gtrsim 8$. Recent studies, based on all-sky catalogues, agree better with our data, but still suffer from a lack of clusters older than about 1 Gyr. In order to consider radial variations in the age distribution, we build three radial sub-samples occupying different spatial locations within the completeness zone (the Inner, Local and Outer segments). They show general agreement between the distributions, which represent both the inter-arm space and two different spiral arms (Sagittarius and Perseus). The only prominent distinction is an enhanced fraction of old clusters in the Inner sample compared to the other two. This feature may be a manifestation of a higher cluster formation rate in the past in the inner disk. At the same time, it seems that the cluster formation histories of the Local and the Outer samples were similar in the past.
We also compare two vertical sub-samples (the planar and high-altitude samples, which we associate with the thin- and thick-disk populations, respectively) and find very different distributions. As expected, the thin-disk distribution agrees in general with the ``radial'' samples, though a deficiency of old ($t\gtrsim 1$ Gyr) clusters exists. In contrast, the thick-disk distribution is completely devoid of young clusters ($t< 250$ Myr), and to a large degree also of intermediate-age objects ($t< 1$ Gyr). Nevertheless, the two distributions complement each other, together reproduce the total age distribution of disk clusters, and can be regarded as representatives of different populations with different formation histories. With simple assumptions on the cluster formation history, the cluster initial mass function, and the cluster lifetime, we can reproduce the observed age distribution. The cluster formation rate and the lifetime function are strongly degenerate, which prevents us from disentangling the different formation scenarios. In all cases the cluster formation rate declines strongly with time, and the cluster initial mass function is very shallow at the high mass end. \begin{acknowledgements} This study was supported by Sonderforschungsbereich SFB 881 ``The Milky Way System'' (subproject B5) of the German Research Foundation (DFG) and by Russian Foundation for Basic Research grant 16-52-12027. We acknowledge the use of the Simbad database, the VizieR Catalogue Service and other services operated at the CDS, France, and the WEBDA facility, operated at the Department of Theoretical Physics and Astrophysics of the Masaryk University. We thank the referee for comments and suggestions that helped us improve the paper. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} \label{sec:Introduction} Neutrino physics is entering a new era of precision measurements, following up on the discovery that neutrinos have mass and leptons mix. Neutrino oscillations are a particularly interesting direction by which one can study physics beyond that predicted by the Standard Model (SM) of particle physics. One experiment that will carry the field into this new era is the Deep Underground Neutrino Experiment (DUNE)~\cite{Acciarri:2015uup}, which expects to begin collecting data within the next decade. DUNE is one of several next-generation long-baseline neutrino experiments that have been proposed to continue the quest to precisely measure neutrino oscillations. The experimental goals of DUNE include measuring the neutrino mass ordering, the octant of the atmospheric mixing angle (whether the third neutrino mass eigenstate is composed more of the muon- or the tau-flavor neutrino), and whether there is CP violation in the lepton sector through the phase $\delta$. Existing experiments have begun to make progress towards all of these goals; however, there is not yet a definitive answer for any of them, and none is likely to come before DUNE begins its experimental run. Of key importance for these goals at DUNE is the fact that its long baseline consists of matter with which the neutrinos can interact while travelling, a non-trivial effect that impacts neutrino oscillations in a measurable way. These impacts have been well-studied for several decades~\cite{Wolfenstein:1977ue} and are critical for the physics goals of the experiment (see, e.g., Refs.~\cite{Coloma:2014kca,Nath:2015kjg,Das:2016bwe,He:2016dco,DeRomeri:2016qwo,Das:2017fcz,Kolupaeva:2017nmc}). However, recent discussion has arisen over how well the Earth's matter density is known, and whether the associated uncertainty can impact the ability of DUNE to perform its experimental goals~\cite{Roe:2017zdw}.
In this paper, we address uncertainties in the Earth's matter density profile and the measurement capability of DUNE. We show that, while matter density effects are important for its experimental goals, changes to the neutrino oscillation probabilities induced by modifying the profile in the ways discussed in Ref.~\cite{Roe:2017zdw} will not be measurable at DUNE. Previous works, such as Refs.~\cite{Ohlsson:2001et,Jacobsson:2001zk,Jacobsson:2002nb,Brahmachari:2003bk,Shan:2003vh,Ohlsson:2003ip}, have explored the impact on oscillation probabilities of a changing matter density. Here, we focus specifically on the impact at DUNE. Additionally, we discuss a perturbative method for calculating neutrino oscillations, first introduced in Ref.~\cite{Denton:2016wmg}, and analyze how suitable it is for DUNE. We find that this method is both sufficiently precise for DUNE and several orders of magnitude faster than conventional, more exact methods. This manuscript is organized as follows: in Section~\ref{sec:Oscillations}, we review the framework in which neutrino oscillation probabilities are calculated, as well as how matter density effects impact these probabilities. In Section~\ref{sec:Precision}, we analyze the oscillation probability measurement precision in a number of ways -- in Section~\ref{subsec:Naive}, we perform a na{\"i}ve statistical estimate of this precision. We improve on this estimate in Section~\ref{subsec:OscParams} by analyzing how well DUNE will be able to measure oscillation parameters. In Section~\ref{subsec:MatterDensity}, we explore the change of oscillation probabilities caused by changing the matter density profile's average density and shape, and in Section~\ref{subsec:PerturbativeSensitivity}, we see how precisely the perturbative method discussed can calculate oscillation probabilities. In Section~\ref{sec:Conclusions}, we offer some concluding remarks.
\setcounter{equation}{0} \section{Neutrino Oscillations in Matter} \label{sec:Oscillations} Oscillations between flavor eigenstates of neutrinos occur during propagation due to the difference in masses between mass eigenstates and the sizable mismatch between the two eigenbases. We characterize this mismatch using the PMNS matrix $U$, where $\ket{\nu_\alpha} = U_{\alpha i} \ket{\nu_i}$. Here, Greek indices $\alpha = e, \mu, \tau$ refer to the flavor basis, and Latin indices $i = 1, 2, 3$ refer to the mass basis. Where oscillations are concerned, the matrix $U$ depends on three mixing angles $\theta_{12}$, $\theta_{13}$, and $\theta_{23}$, as well as one CP-violating phase $\delta$. The probability that a neutrino, produced in a flavor-diagonal interaction as a state $\nu_\alpha$, has oscillated into a state $\nu_\beta$ after travelling a distance $L$ is then \begin{equation} P_{\alpha\beta} \equiv \left\lvert \bra{\nu_\beta} U e^{-i H_{ij} L} U^\dagger \ket{\nu_\alpha} \right\rvert^2, \label{eq:Osc} \end{equation} where $H_{ij}$, assumed to be constant, is the Hamiltonian in the mass eigenbasis. This additionally assumes that the neutrinos travel ultrarelativistically. In vacuum, $H_{ij} \equiv 1/(2 E_\nu) \mathrm{diag} \left\lbrace 0, \Delta m_{21}^2, \Delta m_{31}^2\right\rbrace$, where $E_\nu$ is the energy of the neutrino and $\Delta m_{ji}^2 \equiv m_j^2 - m_i^2$ is the neutrino mass-squared splitting. During propagation through Earth, interactions between neutrinos and the electrons, neutrons, and protons induce an effective interaction potential $V$, diagonal in the flavor basis. Interactions with neutrons and protons are identical for all neutrino flavors; however, there is an asymmetry between the interactions of $\nu_e$ with electrons and those of $\nu_{\mu,\tau}$.
Because of this, we write $V_{\alpha\beta} = (a/2 E_\nu) \mathrm{diag}\left\lbrace 1, 0, 0 \right\rbrace,$ where $a = 2\sqrt{2} G_F n_e E_\nu$, $G_F$ is the Fermi constant, and $n_e$ is the number density of electrons along the path of propagation, again assumed here to be constant. Writing $n_e$ in terms of the matter density $\rho$ and the electron fraction $Y_e$, \begin{equation} a \simeq 1.52 \times 10^{-4} \left(\frac{Y_e \rho}{\mathrm{g/cm}^3}\right) \left(\frac{E_\nu}{\mathrm{GeV}}\right) \mathrm{eV}^2. \end{equation} For the remainder of this work, we assume $Y_e = 1/2$. Comparing with the measured mass-squared splittings, $a$ is comparable to $\Delta m_{31}^2$ for GeV-scale $E_\nu$. The propagation Hamiltonian is then modified, $H_{ij} \rightarrow H_{ij} + U_{i\alpha}^\dagger V_{\alpha\beta} U_{\beta j}$. For antineutrino oscillations, the probability is calculated in the same way, but with $U \rightarrow U^*$ and $a \rightarrow -a$. In Eq.~(\ref{eq:Osc}), the term $e^{-i H_{ij} L}$ is the time-evolution of the initial neutrino state as it travels over a distance $L$. As stated above, this assumes that $H_{ij}$ is constant over the entire path and $t = L$. With a varying Hamiltonian, the Schr\"{o}dinger equation\footnote{We note here that the Hamiltonian in Eq.~(\ref{eq:Schrodinger}) is written in the flavor basis, or $H_{\alpha\beta} = U_{\alpha i} H_{ij} U_{j\beta}^\dagger$.} must be solved: \begin{equation} i \frac{\partial}{\partial x} \ket{\nu} = H\ket{\nu}, \quad \ket{\nu} = \left(\begin{array}{c} \nu_e \\ \nu_\mu \\ \nu_\tau \end{array} \right).\label{eq:Schrodinger} \end{equation} Instead of solving this equation for a varying $H$, we treat the matter potential, and therefore the Hamiltonian, as a piecewise-constant function.
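As a quick numerical check of the approximation for $a$ above (an illustrative sketch, not part of the analysis), one can verify that $a$ is indeed of the same order as $\Delta m_{31}^2$ at DUNE-like densities and GeV-scale energies:

```python
# Evaluate a ~ 1.52e-4 * (Y_e * rho / (g/cm^3)) * (E_nu / GeV) eV^2
# for an illustrative DUNE-like density and neutrino energy.

def matter_potential(rho_gcm3, E_GeV, Ye=0.5):
    """Matter potential a in eV^2, using the approximation in the text."""
    return 1.52e-4 * Ye * rho_gcm3 * E_GeV

a = matter_potential(2.845, 2.5)   # rho = 2.845 g/cm^3, E_nu = 2.5 GeV
dm31sq = 2.457e-3                  # eV^2
ratio = a / dm31sq                 # ~0.2: same order as Delta m_31^2
```

The resulting $a \approx 5.4\times 10^{-4}$ eV$^2$ is roughly a fifth of $\Delta m_{31}^2$, confirming that matter effects cannot be treated as negligible at these energies.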
Then, we can apply a series of time-evolution operators to the initial state, arriving at the following oscillation probability: \begin{equation} P_{\alpha\beta} = \left\lvert \bra{\nu_\beta} U \left(\prod_{n =1}^N e^{-i H_{ij}^{(n)} L_n}\right) U^\dagger \ket{\nu_\alpha} \right\rvert^2, \end{equation} where $N$ is the number of divisions taken along the path of propagation, and $H_{ij}^{(n)}$ and $L_n$ are the Hamiltonian and the length of the $n$th division, respectively. In the limit $N\to \infty$, the resulting oscillation probability agrees with that from the Schr\"{o}dinger equation. \section{Sensitivity of DUNE} \label{sec:Precision} In this Section, we discuss the sensitivity of the upcoming Deep Underground Neutrino Experiment~\cite{Acciarri:2015uup} (DUNE) to changes in oscillation probabilities for neutrinos travelling the $1285$ km between Fermilab and the Sanford Underground Research Facility in South Dakota. We will be interested in the capability of the experiment to measure an oscillation probability $P_{\alpha\beta}$, and in the changes in the probability that will be statistically measurable. We will refer to these changes as $|\Delta P_{\alpha\beta}|$. First, we will estimate them using a na{\"i}ve statistical argument, and then by considering the stated neutrino oscillation parameter precision of the experiment. After doing so, we will discuss how changes to the matter density profile along the path of propagation can lead to measurable changes in probability, both by changing the shape and by changing the average density of the profile. Finally, we will consider a perturbative approach and discuss whether it is precise enough for the purposes of DUNE.
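The piecewise-constant product of evolution operators introduced above can be sketched directly. This is a minimal illustration with an arbitrary Hermitian matrix standing in for the physical flavor-basis Hamiltonian, in units with $\hbar = c = 1$; it is not the code used for the comparisons in this Section.

```python
import numpy as np
from scipy.linalg import expm

def propagator(H_layers, L_layers):
    """Ordered product of exp(-i H_n L_n) over piecewise-constant layers.

    H_layers[n] is the flavor-basis Hamiltonian in layer n; L_layers[n]
    is that layer's length. The first layer traversed acts first, i.e.
    rightmost in the matrix product.
    """
    U = np.eye(3, dtype=complex)
    for H, L in zip(H_layers, L_layers):
        U = expm(-1j * H * L) @ U
    return U

# Consistency check: splitting one constant-density layer into 10 pieces
# must reproduce the single-layer result (the layer Hamiltonians commute).
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
H = (A + A.conj().T) / 2                  # arbitrary Hermitian stand-in
one_layer = propagator([H], [1.0])
ten_layers = propagator([H] * 10, [0.1] * 10)
assert np.allclose(one_layer, ten_layers)
```

The oscillation probability follows by sandwiching this propagator between the appropriate flavor projections; for genuinely different layer Hamiltonians the ordering of the product matters, which is why the loop applies each new factor on the left.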
Throughout, we will use specific colors when discussing changes in probabilities induced by a certain effect: we will use \textbf{black} for our na{\"i}ve statistical estimate, \textbf{{\color{paramcol} green}} for changes of oscillation parameters, \textbf{{\color{avgcol} blue}} for changes to the matter density profile's average density, \textbf{{\color{shapecol} red}} for changes to its shape, and \textbf{{\color{pertcol} purple}} for the perturbative method at zeroth order. We will briefly discuss the perturbative method at first order, and will do so in \textbf{{\color{firstcol} pink}}. \subsection{Na{\"i}ve Estimate of Sensitivity} \label{subsec:Naive} Consider a measurement of the oscillation probability $P_{\alpha\beta}$ for a given channel, for neutrinos of some energy $E_\nu$. The number of events $N$ measured at this energy will be \begin{equation} N = N_1 \left(E_\nu; ...\right) \times P_{\alpha\beta}, \end{equation} where $N_1$ is the number of events that would be measured if the oscillation probability were $1$. It is a product of the neutrino flux, cross-section, detection efficiencies, etc. If we assume that the only uncertainty on $N$ is statistical and $N_1$ is well-known, then $\sigma_N = \sqrt{N}$ and $\sigma_N/N = |\Delta P_{\alpha\beta}|/P_{\alpha\beta}$.
We can then substitute and arrive at our desired result, \begin{equation} |\Delta P_{\alpha\beta}| = \sqrt{\frac{P_{\alpha\beta}}{N_1}}.\label{eq:StatPrecision} \end{equation} We see here that the experiment is sensitive to smaller changes $|\Delta P_{\alpha\beta}|$ when the oscillation probability itself is lower\footnote{We note here that for a small enough probability $P_{\alpha\beta} = 1/N_1$, the number of events measured is $1$: assuming only statistical uncertainty, one cannot improve on a measurement of $1$ event.}, and that in order to be sensitive to an order of magnitude lower $|\Delta P_{\alpha\beta}|$, an experiment requires a factor of $100$ larger $N_1$. Using Ref.~\cite{Acciarri:2015uup}, we estimate that for the energy range of interest at DUNE, $E_\nu \simeq 1 - 4$ GeV, $N_1 \simeq 10^3$ for both the appearance and disappearance channels. \begin{figure}[!htbp] \centering \includegraphics[width=0.6\linewidth]{Precisions_StatOnly.pdf} \caption{Measurement precision of a single-bin experiment with only statistical uncertainty as a function of the oscillation probability $P_{\alpha\beta}$. \textbf{{\color{dpcol} Orange}} lines give the precision in terms of the fractional uncertainty $|\Delta P_{\alpha\beta}|/P_{\alpha\beta}$, while solid lines give the precision in terms of $|\Delta P_{\alpha\beta}|$. The right axis, along with annotations, denotes the number of unoscillated events $N_1$ necessary in a bin to attain the given precision.} \label{fig:PrecisionStat} \end{figure} In Fig.~\ref{fig:PrecisionStat}, we display the sensitivity to changes in probability $|\Delta P_{\alpha\beta}|$ for $N_1 = 10^{2}$, $10^{3}$, and $10^{4}$. Additionally, we display in \textbf{{\color{dpcol} orange}} the corresponding fractional uncertainty on the probability measurement, $|\Delta P_{\alpha\beta}|/P_{\alpha\beta}$. For the fractional uncertainty as well, two orders of magnitude larger $N_1$ is necessary to improve the sensitivity by one order of magnitude.
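Eq.~(\ref{eq:StatPrecision}) is simple enough to evaluate directly; a back-of-the-envelope sketch with the $N_1 \simeq 10^3$ quoted above (the probability value $P = 0.05$ is purely illustrative):

```python
# Back-of-the-envelope evaluation of |Delta P| = sqrt(P / N_1).
# P = 0.05 is an illustrative appearance-scale probability;
# the sqrt(5) bin-to-bin improvement factor from the text is applied
# on top of the single-bin result.

def delta_p(P, N1):
    """Smallest statistically resolvable change in probability."""
    return (P / N1) ** 0.5

dp = delta_p(0.05, 1e3)          # ~7.1e-3 for a single bin
dp_binned = dp / 5 ** 0.5        # ~3.2e-3 with the sqrt(5) factor
```

The binned value of roughly $3\times 10^{-3}$ matches the appearance-channel sensitivity quoted in the text.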
This process can be repeated assuming systematic uncertainties on $N_1$. We perform this exercise in Appendix~\ref{sec:SystAppendix}. The results here remain true when including this systematic uncertainty: an improvement of an order of magnitude on $|\Delta P_{\alpha\beta}|$ requires at least two orders of magnitude larger $N_1$ in light of this uncertainty. DUNE will not be measuring oscillation probabilities in a single bin, but across $30$ bins in each of four channels (neutrino and antineutrino appearance and disappearance). If the measured oscillation probability $P_{\alpha\beta}$ were identical across $m$ measurements, one would expect the sensitivity $|\Delta P_{\alpha\beta}|$ to decrease by a factor of $\sqrt{m}$. At DUNE, not only does the probability change across energies, but the number of unoscillated events $N_1$ also decreases away from the energy range of interest. We estimate this bin-to-bin measurement improvement factor to be $\sqrt{5}$ -- the measurement of the probability is made predominantly in one bin, with two bins on either side in $E_\nu$. In Fig.~\ref{fig:ParamsAndNaive}, we display the expected precision\footnote{Here, we use the oscillation parameters discussed below (cf. Table~\ref{tab:Params}) and calculate the oscillation probabilities $P_{\mu e}$ and $P_{\mu\mu}$ as a function of neutrino energy $E_\nu$. We then use the estimated formulas for $|\Delta P_{\alpha\beta}|$ in Eqs.~(\ref{eq:StatPrecision}) and (\ref{eq:SystPrecision}) with $N_1 = 10^{3}$.} assuming $N_1 = 10^{3}$ unoscillated events, including an improvement factor of $\sqrt{5}$. We display this for the appearance and disappearance channel sensitivities, both with and without a 5\% systematic uncertainty on $N_1$. We will compare this na{\"i}ve estimate with the sensitivity to $|\Delta P_{\alpha\beta}|$ that comes from changing oscillation parameters in Section~\ref{subsec:OscParams}.
We note here that the sensitivity to the appearance channel, $|\Delta P_{\mu e}|$, flattens out at $E_\nu \simeq 1.25$ GeV because $P_{\mu e} \simeq 0$ and roughly one event would be measured in this bin. In general, we expect sensitivity to $|\Delta P_{\mu e}| \simeq 3\times 10^{-3}$ (except near $E_\nu = 1.3$ GeV, where the oscillation probability $P_{\mu e} \simeq 0$). For the disappearance channel, at all energies of interest, the sensitivity to $|\Delta P_{\mu\mu}|$ is larger than $2 \times 10^{-3}$. One could improve on these estimates with a more thorough calculation of this bin-to-bin improvement, and also by folding in the true varying $N_1$ as a function of neutrino energy. \subsection{Sensitivity to Oscillation Parameters} \label{subsec:OscParams} In this subsection, we analyze the changes to the neutrino oscillation probabilities that arise when parameters change, and the capability of DUNE to measure these changes. Due to the range of energies at DUNE and its baseline, the experiment will not be sensitive to the solar sector parameters $\Delta m_{21}^2$ or $\theta_{12}$. It will have significant precision in measuring the four remaining oscillation parameters: $\theta_{13}$, $\theta_{23}$, $\Delta m_{31}^2$, and $\delta$. In Table~\ref{tab:Params}, we summarize the expected precision of the experiment in measuring these four parameters, assuming the true values listed, as detailed in Ref.~\cite{Acciarri:2015uup}.
\begin{table}[!htbp] \begin{center} \begin{tabular}{|c||c|c|}\hline Parameter & Physical Value & $1\sigma$ Range \\ \hline \hline $\sin^2\theta_{23}$ & $0.450$ & $\left[ 0.442, 0.458\right]$ \\ \hline $\delta$ & $0$ & $\left[-0.2, 0.2\right]$ \\ \hline & $\pi/2$ & $\left[1.37, 1.77\right]$ \\ \hline $\sin^2\left(2\theta_{13}\right)$ & $0.085$ & $\left[0.080, 0.090\right]$ \\ \hline $\Delta m_{31}^2$ & $2.457 \times 10^{-3}$ eV$^2$ & $\left[ 2.447, 2.467\right] \times 10^{-3}$ eV$^2$ \\ \hline \end{tabular} \caption{Expected measurement precision at DUNE for the parameters of interest, assuming the physical values listed. We note here that the measurement precision of DUNE for $\delta$ is mostly independent of its physical value; however, we list the precision assuming $\delta = 0$ or $\pi/2$ here, as projected by the DUNE collaboration.} \label{tab:Params} \end{center} \end{table} This assumes a total exposure of $300$ kt-MW-years, consistent with experimental expectations. The appearance channel $P_{\mu e}$ (and its CP-conjugate) has sensitivity predominantly to the parameters $\sin^2\theta_{13}$ and $\delta$, while the disappearance channel $P_{\mu\mu}$ has sensitivity to $\sin^2\theta_{23}$ and $\Delta m_{31}^2$. With this in mind, we calculate the oscillation probability for a given channel assuming the physical values listed in Table~\ref{tab:Params}, as well as the oscillation probability with each parameter at its $\pm 1\sigma$ value. We calculate the change in probability between these two and show it in Fig.~\ref{fig:ParamsAndNaive}. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{PlotDeltaP_VaryParams_AndSyst.pdf} \caption{\textbf{Black} lines: na{\"i}ve prediction of the measurement precision of the oscillation probability $|\Delta P_{\alpha\beta}|$ assuming $N_1 = 10^3$ unoscillated events for all energies.
Solid \textbf{black} lines include a 5\% uncorrelated bin-to-bin systematic uncertainty, while dashed \textbf{black} lines are for statistical uncertainties only. We have included a bin-to-bin measurement improvement factor of $\sqrt{5}$ in this na{\"i}ve estimate, as discussed in the text. \textbf{{\color{paramcol} Green}}: change to the oscillation probabilities when the oscillation parameters are varied between their central values and $\pm 1\sigma$ extremes, as given in Table~\ref{tab:Params}. In the left panel, we show the impact on the appearance probability $P_{\mu e}$ for the parameters measured (predominantly) by this channel, $\sin^2\theta_{13}$ (solid) and $\delta$ (dashed). In the right panel, we show the impact on the disappearance probability $P_{\mu\mu}$ and its associated parameters, $\Delta m_{31}^2$ (dot-dashed) and $\sin^2\theta_{23}$ (dotted). We do not display antineutrino probability precisions here, but the result is qualitatively the same.} \label{fig:ParamsAndNaive} \end{figure} We show only the impact of $\sin^2\theta_{13}$ and $\delta$ in the appearance\footnote{The experimental sensitivity to the CP-violating phase $\delta$ comes largely from comparing the neutrino and antineutrino appearance channels. Here, we simply display the change to the neutrino oscillation probability from changing $\delta$, but stress that this is an incomplete picture of the experimental sensitivity.} panel (left) and of $\sin^2\theta_{23}$ and $\Delta m_{31}^2$ in the disappearance panel (right). Additionally, we include the na{\"i}ve estimates with and without the 5\% systematic uncertainty discussed in Section~\ref{subsec:Naive}.
We see here that the na{\"i}ve estimate with a $\sqrt{5}$ bin-to-bin measurement improvement factor comes close\footnote{The fact that the changes in oscillation probability induced by changing parameters are, for some energies, significantly lower than our na{\"i}ve estimate implies that the parameters are being measured where $|\Delta P_{\alpha\beta}|$ is largest, e.g. near $1.6$ GeV for $\Delta m_{31}^2$ in the disappearance channel (Fig.~\ref{fig:ParamsAndNaive}, right panel, dot-dashed line).} to capturing the true sensitivity to oscillation probability changes that comes from changing oscillation parameters. The necessary change to the oscillation probability in order to be measured at DUNE is on the level of $2\times 10^{-3}$ (greater than $2 \times 10^{-3}$) for the appearance (disappearance) channel. \subsection{Matter Density Profile Effects} \label{subsec:MatterDensity} Recently, Ref.~\cite{Roe:2017zdw} studied different models of the Earth's matter density profile and the resulting density as a function of distance between Fermilab and the future location of the DUNE detector in South Dakota. The models discussed in detail are Shen-Ritzwoller~\cite{Shen:2016xxx}, Crustal~\cite{2013EGUGA..15.2658L}, and PEMC~\cite{Pemc:1975xxx}. The author of Ref.~\cite{Roe:2017zdw} cautions that these different matter density models lead to changes in the oscillation probabilities for the energy range of interest at DUNE. \begin{figure}[!htbp] \centering \includegraphics[width=0.5\linewidth]{DensityMaps.pdf} \caption{The density maps considered here, given in Ref.~\cite{Roe:2017zdw} and scaled such that $\rho_{\mathrm{Avg.}} = 2.845$ g/cm$^3$. Each density map is divided into $N = 100$ segments.} \label{fig:DensityMaps} \end{figure} In Fig.~\ref{fig:DensityMaps}, we reproduce the Shen-Ritzwoller, PEMC, and Crustal maps considered in Ref.~\cite{Roe:2017zdw}, all normalized to the same average density $\rho_\mathrm{Avg.} = 2.845$ g/cm$^3$.
The profiles have been divided into $N = 100$ piecewise-constant segments. In this subsection, we consider changes to the oscillation probability due to these different matter density profiles. We separate this discussion into probability differences induced by changes in the density profile's shape and those induced by changes in its average density. First, we calculate the oscillation probabilities with identical oscillation parameters for all three density profiles (with $N=100$ regions), all with $\rho_\mathrm{Avg.} = 2.845$ g/cm$^3$. We then calculate the differences between the probabilities for each pair of density profiles, and show the range of differences obtained by this process in Fig.~\ref{fig:ShapeNorm} in the \textbf{{\color{shapecol} red}} shaded regions. \begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{PlotDeltaP_VaryShapeAndNorm.pdf} \caption{Shaded regions (\textbf{{\color{shapecol} red}}): the range of changes in the oscillation probabilities obtained when changing the shape of the matter density profile, comparing the Crustal map, the Shen-Ritzwoller map, and the PEMC map, as detailed in Ref.~\cite{Roe:2017zdw}. The matter density profiles have been normalized so that $\rho_{\mathrm{Avg.}} = 2.845$ g/cm$^3$. Solid lines (\textbf{{\color{avgcol} blue}}) display the change in the oscillation probabilities when a constant matter density is changed by $\pm 1\%$ of $\rho_\mathrm{Avg.} = 2.845$ g/cm$^3$. The left panel displays the change in the appearance probability $P_{\mu e}$, while the right panel displays the change in the disappearance probability $P_{\mu\mu}$, both for neutrino oscillations.} \label{fig:ShapeNorm} \end{figure} Next, we calculate, for a flat matter density profile, the change in the oscillation probabilities when $\rho_\mathrm{Avg.}$ is varied by $\pm 1\%$ around $2.845$ g/cm$^3$, i.e. within $[2.82, 2.87]$ g/cm$^3$.
The difference between the upper and lower ends of this range is negligible, and we display the resulting difference in probability $|\Delta P_{\alpha\beta}|$ also in Fig.~\ref{fig:ShapeNorm}, as a solid \textbf{{\color{avgcol} blue}} line. We see that a $1\%$ change in the average matter density induces changes to the probability nearly an order of magnitude larger than changes in shape that are $\mathcal{O}(10\%)$ of the average density locally. Moreover, we see that both of these effects generate changes in the probability that are \textit{far} below what is necessary to be measurable at DUNE. Additionally, we see that the impact on the disappearance channel $P_{\mu\mu}$ is lower than that on the appearance channel $P_{\mu e}$ for all energies of interest. This is due to the fact that matter effects impact the appearance channel more significantly than the disappearance channel when comparing with vacuum oscillation probabilities. While the density profiles here are of particular interest for DUNE, we would additionally like to know whether this behavior -- that the impact of changing the average density dwarfs that of changing the shape of the profile -- is generic. In Appendix~\ref{sec:AppendixShapeNorm}, we consider a simple matter density profile that has two free parameters, one that governs the shape of the distribution, and one that governs its average density. We show that in general, a fractional change to the shape leads to probability differences that are five times smaller than those induced by the same fractional change in the average density. Here, we have considered changes to the average density $\rho_\mathrm{Avg.}$ at the level of $1\%$, in agreement with the largest uncertainties on $\rho_\mathrm{Avg.}$ discussed in Ref.~\cite{Roe:2017zdw}. Clearly, uncertainties at this level will have no measurable impact at DUNE.
In order to see how well DUNE can measure $\rho_\mathrm{Avg.}$ without any prior information, we allow it to be a free parameter in a fit in Appendix~\ref{sec:MeasureRho}. There, we see that DUNE requires changes to the average density on the order of $25\%$ to make a measurable impact. We also see that allowing a $1\%$ prior on $\rho_\mathrm{Avg.}$ has no impact on the measurement of any oscillation parameter. Even without a prior, the only parameter measurement that worsens is that of $\delta$, and the effect is small. \subsection{Perturbative Approaches} \label{subsec:PerturbativeSensitivity} Constructing oscillation probabilities for three-neutrino oscillations in the presence of matter has been of interest for several decades~\cite{Zaglauer:1988gz,Sato:1997st,Arafune:1997hd,Minakata:1998bf,Cervera:2000kp,Freund:2001pn,Kimura:2002wd,Akhmedov:2004ny,Blennow:2013rca,Minakata:2015gra,Denton:2016wmg,Li:2016pzm}. Here, we focus on a method~\cite{Denton:2016wmg,Denton:2018hal} specifically developed for calculating oscillation probabilities perturbatively for long-baseline experiments such as DUNE. This approach, which we will refer to as the DMP method, provides a much faster way of calculating a probability than the approach discussed above, which relies on calculating $N$ $3\times 3$ matrix exponentials for each neutrino energy considered. In the DMP method, one calculates changes to the mixing angles $\theta_{13} \to \widetilde{\theta}_{13}$ and $\theta_{12} \to \widetilde{\theta}_{12}$, as well as changes to the mass-splittings $\Delta m_{ji}^2 \to \Delta \widetilde{m}_{ji}^2$. We will be concerned with the zeroth-order expansion of the DMP method, and reproduce the results here for completeness. The modifications depend on a combination of the (unperturbed) mass-splittings, $\Delta m_{ee}^2 \equiv \cos^2\theta_{12} \Delta m_{31}^2 + \sin^2\theta_{12} \Delta m_{32}^2$.
The modifications to the mixing angles then, are \begin{align} \cos{2\widetilde{\theta}_{13}} &= \frac{\left(\cos{2\theta_{13}} - a/\Delta m_{ee}^2\right)}{\sqrt{\left(\cos{2\theta_{13}} - a/\Delta m_{ee}^2\right)^2 + \sin^2 2\theta_{13}}}, \\ \cos{2\widetilde{\theta}_{12}} &= \frac{\left(\cos{2\theta_{12}} - a'/\Delta m_{21}^2\right)}{\sqrt{\left(\cos{2\theta_{12}} - a'/\Delta m_{21}^2\right)^2 + \sin^2 2\theta_{12} \cos^2{\left(\widetilde{\theta}_{13} - \theta_{13}\right)}}}, \end{align} where $a' \equiv a\cos^2\widetilde{\theta}_{13} + \Delta m_{ee}^2 \sin^2{\left(\widetilde{\theta}_{13} - \theta_{13}\right)}$. Both $\widetilde{\theta}_{12}$ and $\widetilde{\theta}_{13}$ are in the range $[0, \pi/2]$. The modified mass-splittings are \begin{align} \Delta \widetilde{m}_{21}^2 &= \Delta m_{21}^2 \sqrt{\left(\cos{2\theta_{12}} - a'/\Delta m_{21}^2\right)^2 + \sin^2 2\theta_{12} \cos^2{\left(\widetilde{\theta}_{13} - \theta_{13}\right)}}, \\ \Delta \widetilde{m}_{31}^2 &= \Delta m_{31}^2 + \frac{1}{2}\left(2a - 3a' + \Delta \widetilde{m}_{21}^2 - \Delta m_{21}^2\right). \end{align} With these perturbative angles and mass-splittings, the zeroth-order probability can be calculated in the DMP scheme\footnote{In Refs.~\cite{Denton:2016wmg,Denton:2018hal}, the mixing matrix used is distinct from the PDG convention in Ref.~\cite{Patrignani:2016xqp}. These differences are not realizable at zeroth order of perturbation.}. The method for calculating higher-order perturbative corrections can be found in Refs.~\cite{Denton:2016wmg,Denton:2018hal}. We can compare the zeroth-order DMP probabilities with those calculated using a proper matrix exponential for a constant matter density and $N=1$ layer in order to characterize how precise the DMP method is. 
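The zeroth-order formulas above translate directly into code; the following is a minimal sketch (the function name, argument conventions, and the sample inputs in the test are ours, not from the original references):

```python
import math

def dmp_zeroth_order(a, th12, th13, dm21sq, dm31sq):
    """Zeroth-order DMP matter-modified mixing angles and mass-splittings.
    a = 2*sqrt(2)*G_F*N_e*E (matter potential term, same eV^2 units as the
    splittings); angles in radians."""
    dm32sq = dm31sq - dm21sq
    dmeesq = math.cos(th12)**2 * dm31sq + math.sin(th12)**2 * dm32sq
    # modified theta_13
    c2t13 = math.cos(2*th13) - a/dmeesq
    denom13 = math.sqrt(c2t13**2 + math.sin(2*th13)**2)
    th13_t = 0.5 * math.acos(c2t13/denom13)          # in [0, pi/2]
    # a' and modified theta_12
    ap = a*math.cos(th13_t)**2 + dmeesq*math.sin(th13_t - th13)**2
    c2t12 = math.cos(2*th12) - ap/dm21sq
    s2t12 = math.sin(2*th12) * math.cos(th13_t - th13)
    denom12 = math.sqrt(c2t12**2 + s2t12**2)
    th12_t = 0.5 * math.acos(c2t12/denom12)          # in [0, pi/2]
    # modified mass-splittings
    dm21sq_t = dm21sq * denom12
    dm31sq_t = dm31sq + 0.5*(2*a - 3*ap + dm21sq_t - dm21sq)
    return th12_t, th13_t, dm21sq_t, dm31sq_t
```

In the vacuum limit $a \to 0$ the modified quantities reduce to the input angles and splittings, which provides a quick sanity check of any transcription.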
\begin{figure} \centering \includegraphics[width=\linewidth]{PlotDeltaP_PertVsFull_New.pdf} \caption{Change in oscillation probabilities between the DMP perturbative method at zeroth-order (\textbf{{\color{pertcol} purple}}) and first-order (\textbf{{\color{firstcol} pink}}) discussed in Section~\ref{subsec:PerturbativeSensitivity} and a matrix-exponential-calculated oscillation probability assuming one layer, both with constant matter density $\rho = 2.845$ g/cm$^3$. The left panel displays differences for appearance channel neutrino oscillation probabilities $P_{\mu e}$, and the right panel displays differences for disappearance channel probabilities $P_{\mu\mu}$.} \label{fig:DMP} \end{figure} The differences in oscillation probability are shown in Fig.~\ref{fig:DMP} in \textbf{{\color{pertcol} purple}}. Additionally, we include the first-order DMP probabilities in \textbf{{\color{firstcol} pink}}. We see that the resulting $|\Delta P_{\alpha\beta}|$, even at zeroth-order is well below the range necessary for detection at DUNE, implying that the zeroth-order DMP approach is sufficient for calculating oscillation probabilities for DUNE. The vertical axis in Fig.~\ref{fig:DMP} extends far lower than those in Figs.~\ref{fig:ParamsAndNaive} and \ref{fig:ShapeNorm} in order to display the first-order precision. In order for DUNE to be sensitive to this level of $|\Delta P_{\alpha\beta}|$, at least two orders of magnitude larger statistics would be necessary, as discussed in Section~\ref{subsec:Naive}. We encourage the use of this approach, as compiled zeroth-order DMP {\sc C++} code can calculate an oscillation probability in $\mathcal{O}(10^{-7})$ s, whereas compiled {\sc C++} code with the full matrix exponential (even for $N=1$ layer of matter) calculates a probability in $\mathcal{O}(10^{-5})$ s. A factor of $100$ faster calculation can drastically reduce computation time for large parameter spaces. 
If one requires the precision demonstrated by the first-order DMP method shown in Fig.~\ref{fig:DMP}, the amount of time to calculate a probability is not significantly longer than at zeroth-order. \setcounter{footnote}{0} \setcounter{equation}{0} \section{Discussion and Conclusions} \label{sec:Conclusions} In this manuscript, we have analyzed the impact of matter effects at the Deep Underground Neutrino Experiment, and shown that the only significant quantity regarding these, for the sake of measuring neutrino oscillation parameters, is the average density $\rho_\mathrm{Avg.}$. We have estimated the sensitivity to differences in oscillation probabilities at DUNE for both appearance and disappearance channels using both a na{\"i}ve statistical approach and analyzing the experiment's sensitivity to oscillation parameters. Additionally, we have shown that differences to the oscillation probability caused by changing the average density within its allowed region (or even inflated significantly) are smaller than those required for DUNE to detect~\cite{Roe:2017zdw}. The perturbative approach in Refs.~\cite{Denton:2016wmg,Denton:2018hal} can calculate oscillation probabilities precisely enough to capture all measurable effects at DUNE. Changes in the matter density profile shape, e.g. the three profiles considered in Ref.~\cite{Roe:2017zdw}, induce changes to the oscillation probability smaller than all of these and will be immeasurable at DUNE. A summary of these scales of $|\Delta P_{\alpha\beta}|$ for both appearance and disappearance channels is shown in Fig.~\ref{fig:Scales}. 
\begin{figure}[!htbp] \centering \includegraphics[width=\linewidth]{Scales_Horizontal.pdf} \caption{A summary of the scales of $|\Delta P_{\mu e}|$ (appearance, top) and $|\Delta P_{\mu\mu}|$ (disappearance, bottom) discussed in this paper: na{\"i}ve sensitivity estimates in Section~\ref{subsec:Naive} (\textbf{black}), differences from parameter changes in Section~\ref{subsec:OscParams} (\textbf{{\color{paramcol} green}}), changes to the matter density profile average density (\textbf{{\color{avgcol} blue}}) and shape (\textbf{{\color{shapecol} red}}) discussed in Section~\ref{subsec:MatterDensity}, and the precision of the zeroth-order (\textbf{{\color{pertcol} purple}}) and first-order (\textbf{{\color{firstcol} pink}}) DMP perturbative approach from Section~\ref{subsec:PerturbativeSensitivity}. We restrict the neutrino energy to be in the range $2$ GeV $< E_\nu <$ $4$ GeV, where the number of unoscillated events is highest for each channel.} \label{fig:Scales} \end{figure} We have also explored the computation time saved by using the perturbative approach as opposed to a more exact calculation; the perturbative calculation is several orders of magnitude faster. Because of this, we encourage the use of such perturbative approaches, as their precision is more than sufficient to capture all detectable oscillation probabilities at DUNE. Briefly, we discuss these results in a broader context. Many studies of upcoming long-baseline neutrino experiments consider the possibility that neutrinos have additional interactions with the matter along the path of propagation, dubbed non-standard neutrino interactions (NSI) (see, e.g., Refs.~\cite{Masud:2015xva,deGouvea:2015ndi,Coloma:2015kiu,Liao:2016hsa,Masud:2016bvp,Masud:2016gcl,Blennow:2016etl,Deepthi:2016erc,Liao:2016orc,Ghosh:2017lim,Farzan:2017xzy,Deepthi:2017gxg} for discussions of NSI at DUNE). These scenarios are testable at DUNE, in large part due to the matter effects discussed in this manuscript.
Because these NSI alter the interaction potential discussed in Section~\ref{sec:Oscillations}, the DMP perturbative expansion cannot be used here. Other perturbative methods exist for these scenarios, such as that detailed in Ref.~\cite{Kikuchi:2008vq}; however, they do not offer the same level of precision as the DMP method. Regardless of the calculation method considered for NSI, we still note that the effects due to changing matter density profile shape are subdominant to any measurable impacts at DUNE. In addition to the parameter degeneracies discussed in Appendix~\ref{sec:MeasureRho}, further degeneracies will exist in studying NSI if one considers a changing average matter density (particularly $\rho_\mathrm{Avg.}$ vs. $\epsilon_{ee}$). In Appendix~\ref{sec:SystAppendix}, we explored the modifications to our na{\"i}ve sensitivity estimate in light of systematic uncertainties. We saw that, as expected, systematic uncertainties only make measurements more difficult, and that at least two orders of magnitude more events are necessary to improve a measurement by an order of magnitude. We analyzed a simple matter density profile model in Appendix~\ref{sec:AppendixShapeNorm} in order to explore whether, in general, changes to a matter density profile shape matter far less than changes to the average density. We found this to be the case, in agreement with the discussion in Section~\ref{subsec:MatterDensity}. Additionally, we found that changes to the average density on the order of $\pm 25\%$ of its true value are required to make measurable changes to the oscillation probability at DUNE. Finally, we explored the capability of DUNE to measure $\rho_\mathrm{Avg.}$ assuming a constant matter density profile in Appendix~\ref{sec:MeasureRho}.
The results here agreed with those in Appendix~\ref{sec:AppendixShapeNorm}: DUNE will be able to independently measure $2.5$ g/cm$^3$ $\lesssim \rho_\mathrm{Avg.} \lesssim 3.5$ g/cm$^3$ at a $1\sigma$ level, and this statement is true regardless of the true value of $\delta$. We also saw that the only oscillation parameter whose measurement is impacted by a free parameter $\rho_\mathrm{Avg.}$ is $\delta$, and even then the effect is not large. \section*{Acknowledgements} The work of KJK is supported in part by DOE grant \#DE-SC0010143. We acknowledge the use of the Quest computing cluster at Northwestern University for a portion of this research. KJK thanks the Neutrino Physics Center at Fermilab for providing support during the completion of this work. This manuscript has been authored by Fermi Research Alliance, LLC under Contract No. DE-AC02-07CH11359 with the U.S. Department of Energy, Office of Science, Office of High Energy Physics. This project has received funding/support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 690575. This project has received funding/support from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 674896.
\section{Introduction} Let $G$ be a connected reductive algebraic group over $\mathbb{Q}$ and $\Gamma \subset G(\mathbb{Q})$ an arithmetic subgroup. For an irreducible finite-dimensional complex representation $M$ of $G(\mathbb{R})$, the group cohomology $H^*(\Gamma, M)$ provides a concrete realization of some automorphic forms that are of number-theoretic interest. For example, if $G = SL_2$, and $\Gamma$ is a congruence subgroup of level $N$, the well-known Eichler-Shimura isomorphism exhibits $H^1(\Gamma, \mathbb{C})$ as the span of modular forms of $\Gamma$ of weight $2$. But these cohomology groups $H^\bullet(\Gamma, M)$ `capture' only {\it some} automorphic forms and in fact, almost all automorphic forms do not appear in them; nevertheless, those that do appear have number-theoretic significance, partly justifying their study. \\ \noindent Henceforth, throughout this section, let $G = GL_n$, and $K_\infty = O(n) Z(\mathbb{R})$ be the maximal compact modulo center subgroup of $G(\mathbb{R})$, and let $K_f \subset G(\ade_{\textrm{fin}})$ be an open compact subgroup that is neat [Definition \ref{neat_subgroup_defn}] (see \S \ref{adelic_setup} for notation). Let $(\rho_\lambda, M_\lambda)$ be the highest weight $G(\mathbb{R})$-module associated to the dominant integral weight $\lambda$ such that its central character $\omega_\lambda$ is a type of an algebraic Hecke character (see \S \ref{algebraic_Hecke_character_defn} for definition). \\ \noindent The group cohomology $H^\bullet(\Gamma, M_\lambda)$ is known to be isomorphic to the sheaf cohomology $H^\bullet(S_{K_f}, \til{M_\lambda})$ of the adelic locally symmetric space $S_{K_f} := G(\mathbb{Q}) \expandafter\@gobble\string\\ G(\mathbb{A})/K_\infty K_f$ with coefficients in the local system $\til{M_\lambda}$ \textit{derived} from $M_\lambda$, allowing one to understand the former in terms of the latter.
The space $S_{K_f}$ is not compact in general, and has a compactification called the Borel-Serre compactification $\bar{S}_{K_f}$ (see \ref{Borel_Serre_compactification}), equipped with an inclusion $\iota : S_{K_f} \hookrightarrow \bar{S}_{K_f}$ that is a homotopy equivalence and the canonical restriction $r : \bar{S}_{K_f} \to \partial \bar{S}_{K_f}$ onto the boundary $\partial \bar{S}_{K_f} = \bar{S}_{K_f} \setminus S_{K_f}$: $$ S_{K_f} \xrightarrow{i} \bar{S}_{K_f} \xrightarrow{r} \partial \bar{S}_{K_f} $$ The coefficients on these spaces are given by the short exact sequence of sheaves $$ 0 \to i_{!} \til{M_\lambda} \xrightarrow{i} i_* \til{M_\lambda} \xrightarrow{r} i_* \til{M_\lambda}/ i_{!} \til{M_\lambda} \to 0 $$ where $i_{!} \til{M_\lambda}$ is the sheaf extended by zero from $S_{K_f}$ to $\bar{S}_{K_f}$, and the quotient $i_* \til{M_\lambda}/ i_{!} \til{M_\lambda}$ is the sheaf $i_* \til{M_\lambda}$ restricted to the boundary extended by zero to $\bar{S}_{K_f}$. Accordingly, there is a fundamental long exact sequence \begin{equation*} \ldots \to H^{k-1}( \partial \bar{S}_{K_f}, \til{M_\lambda})\to H^k_c(S_{K_f}, \til{M_\lambda}) \xrightarrow{i^k} H^k(S_{K_f}, \til{M_\lambda}) \xrightarrow{r^k} H^{k+1}(\partial \bar{S}_{K_f}, \til{M_\lambda}) \to \ldots \end{equation*} where the \textit{cohomology with compact supports} $ H^\bullet_c(S_{K_f}, \til{M_\lambda}) := H^\bullet(\bar{S}_{K_f}, i_{!}\til{M_\lambda}) $. The main goal of this article is to give an explicit description, in the case where $G = GL_n$ with $n$ prime, of the \textit{inner cohomology} $$ H^\bullet_{!}(S_{K_f}, \til{M_\lambda}) := \operatorname{Img}( H^\bullet_c(S_{K_f}, \til{M_\lambda}) \xrightarrow{i^\bullet} H^\bullet(S_{K_f}, \til{M_\lambda})).
$$ \noindent In this paper, we use the `approximation' given by Borel and Garland \cite{BG}, namely the homomorphism of Hecke modules, which \textit{surjects} onto the subspace of $H^\bullet(S_{K_f}, \til{M})$ called the square-integrable cohomology $H^\bullet_{(2)}(S_{K_f}, \til{M})$ (see Definition \ref{square_integrable_cohomology}): \begin{equation*}\label{intro_BG_map} H^\bullet(\mathfrak{g}, K_\infty, L^2(G(\mathbb{Q})\expandafter\@gobble\string\\ G(\mathbb{A})/K_f, \omega_\lambda^{-1}) \otimes M_\lambda) \overset{\phi^\bullet_{BG}}{\longtwoheadrightarrow} H^\bullet_{(2)}(S_{K_f}, \til{M_\lambda}) \end{equation*} where the coefficient system $L^2(G(\mathbb{Q})\expandafter\@gobble\string\\ G(\mathbb{A})/K_f, \omega_\lambda^{-1}) \otimes M_\lambda$ of the Lie algebra cohomology is well-understood (see \S \ref{Langlands_spectral_decomposition}), thanks to the spectral decomposition of Langlands, which has a refinement due to M{\oe}glin and Waldspurger in the case of our interest, namely for $G = GL_n$. Together with the strong multiplicity-one result of Jacquet and Shalika one obtains a satisfactory description of the domain of $\phi^\bullet_{BG}$ and thereby of its image, namely the square-integrable cohomology $H^\bullet_{(2)}(S_{K_f}, \til{M_\lambda})$, which contains the inner cohomology $H^\bullet_{!}(S_{K_f}, \til{M_\lambda})$ (see below \ref{intro_filtration_cohomology}). \\ \noindent Let $\mathcal{H}_f := \mathcal{C}_c(G(\ade_{\textrm{fin}}) // K_f, \mathbb{C})$ be the Hecke algebra of $K_f$-bi-invariant compactly supported complex-valued functions $\phi : G(\ade_{\textrm{fin}}) \to \mathbb{C}$ with the algebra structure given by convolution.
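For completeness, the convolution product on $\mathcal{H}_f$ is the usual one, with the Haar measure on $G(\ade_{\textrm{fin}})$ normalized, as is customary, so that $K_f$ has volume $1$:

```latex
% Convolution product on the Hecke algebra \mathcal{H}_f:
\[
  (\phi_1 * \phi_2)(g) \;=\;
  \int_{G(\ade_{\textrm{fin}})} \phi_1(h)\, \phi_2(h^{-1}g)\, dh,
  \qquad g \in G(\ade_{\textrm{fin}}),
\]
% where dh gives K_f volume 1; the characteristic function of K_f
% is then the identity element of \mathcal{H}_f.
```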
The space $$ L^2(\omega_\lambda^{-1}) := L^2(G(\mathbb{Q})\expandafter\@gobble\string\\ G(\mathbb{A})/K_f, \omega_\lambda^{-1}), $$ is a $G(\mathbb{R}) \times \mathcal{H}_f$-module, and is the direct sum of the discrete spectrum $L^2_{\textrm{disc}}$ which is the maximal closed subspace spanned by irreducible $G(\mathbb{R}) \times \CalH_f$-modules, and its orthogonal complement called the {\it continuous spectrum} $L^2_{\textrm{cont}}(\omega_\lambda^{-1})$. The discrete spectrum contains the {\it cuspidal spectrum} $L^2_{\textrm{cusp}}(\omega_\lambda^{-1})$, and there is a natural inclusion whose image is called the \textit{cuspidal cohomology} \begin{equation*} H^\bullet_{\textrm{cusp}}(S_{K_f}, \til{M_\lambda}) := \operatorname{Img}(H^\bullet(\mathfrak{g}, K_\infty, L^2_{\textrm{cusp}}(\omega_\lambda^{-1}) \otimes M_\lambda) \hookrightarrow H^\bullet(S_{K_f}, \til{M_\lambda})). \end{equation*} The full cohomology $H^\bullet(S_{K_f}, \til{M_\lambda})$ has the following filtration as $\mathcal{H}_f$-modules: \begin{equation}\label{intro_filtration_cohomology} H^\bullet_\textrm{cusp}(S_{K_f}, \til{M_\lambda}) \subset H^\bullet_{!}(S_{K_f}, \til{M_\lambda}) \subset H^\bullet_{(2)}(S_{K_f}, \til{M_\lambda}) \subset H^\bullet(S_{K_f}, \til{M_\lambda}). \end{equation} Since the cuspidal cohomology is well-understood, namely via the inclusion above it is spanned by the cuspidal automorphic forms in $L^2_{\textrm{cusp}}(\omega_\lambda^{-1})$, it is natural to study the quotient $\mathcal{H}_f$-module $$ H^\bullet_{!/ \textrm{cusp}}(S_{K_f}, \til{M_\lambda}) := H^\bullet_{!}(S_{K_f}, \til{M_\lambda}) \Big/ H^\bullet_{\textrm{cusp}}(S_{K_f}, \til{M_\lambda}). $$ In other words, we give an explicit description of the inner cohomology classes $H^{\bullet}_{!}(S_{K_f}, \til{M_\lambda})$ that are not cuspidal in the case where $G = GL_n$, with $n$ a prime number; in the particular case of primes $n =2,3$ the description is \textit{even} simpler.
The main results of the article are as follows: \\ \noindent Let $\textrm{Coh}_{\infty}(G, \lambda)$ be the set of isomorphism classes of essentially-unitary irreducible representations $\CalV_{\pi_\infty}$ of $G(\mathbb{R})$ with nontrivial $(\mathfrak{g},K_\infty)$-cohomology with coefficients in $M_\lambda$, and let $\textrm{Coh}_{(2)}(G, K_f, \lambda)$ be the set of isomorphism classes of absolutely-irreducible $\mathcal{H}_f$-modules $\pi_f$ for which there exists a $\pi_\infty \in \textrm{Coh}_\infty(G,\lambda)$ such that $\operatorname{Hom}_{G(\mathbb{R}) \times \mathcal{H}_f}(V_{\pi_\infty} \otimes V_{\pi_f}, V_{(2)}(\omega_\lambda^{-1})) \neq 0$, and let \begin{equation*}\label{analysis_2_residual_part_of_Borel_Garland_map} \textrm{Res}_{f}(\lambda) := \bigoplus_{ \substack{\pi_f \in \textrm{Coh}_{(2)}(G, K_f, \lambda) \\ \textrm{type}(\pi_f) = \omega_\lambda^{1/n}} } \pi_f. \end{equation*} \begin{thm}\label{intro_main_thm} Assume that $n$ is a prime number. Then the quotient module $H^\bullet_{!/\textrm{cusp}}(S_{K_f}, \til{M_\lambda})$ vanishes if $\til{M_\lambda}$ is not isomorphic to the constant sheaf $\mathbb{C}$. So, suppose otherwise, i.e. $\til{M_\lambda} \cong \mathbb{C}$, and let $S^0 = \Set{ 2l-1 | 1 < l \leq n, \; l \trm{ odd }}$; then \begin{enumerate} \item for $n =2,3$, the module $H^\bullet_{!/\textrm{cusp}}(S_{K_f}, \mathbb{C}) = 0$, and \item for all primes $n \geq 5$, \begin{equation}\label{possible_cases} H^k_{!/\textrm{cusp}}( S_{K_f}, \mathbb{C}) \cong \begin{cases} 0 & \trm{ for } k \not \in S^0, \\ \ker( r^k|_{\Phi^k_{BG}(\textrm{Res}_f(\lambda))}) & \trm{ for } k \in S^0. \end{cases} \end{equation} \end{enumerate} \end{thm} \noindent The paper is organized as follows. In \S \ref{basic_setup} we recall the notion of adelic locally symmetric space $S_{K_f}$, the structure of the sheaf $\til{M_\lambda}$ on it defined by $M_\lambda$, and the notion of algebraic Hecke characters.
In \S \ref{cohomology_of_arithmetic_groups} we recall the cohomology of arithmetic groups, and give the definitions that are directly relevant to our article. We discuss the Hecke module structure of cohomology groups, and the associated fundamental isomorphism with the $(\mathfrak{g},K_\infty)$-cohomology or relative Lie-algebra cohomology. In \S \ref{decomposing_cohomology} we recall the coarse decomposition of the space $L^2(G(\mathbb{Q})\expandafter\@gobble\string\\ G(\mathbb{A})/K_f, \omega_\lambda^{-1})$ due to Langlands into various subspaces, and a finer one also due to Langlands, refined further by M{\oe}glin and Waldspurger in our case of interest, namely the $GL_n$ case. Finally, in \S \ref{main_result} we determine the contribution of the residual spectrum to the inner cohomology, and prove Theorem \ref{intro_main_thm}. \section*{Acknowledgements} \noindent It is a great pleasure to thank A.~Raghuram for suggesting the question, and Dipendra Prasad for helpful discussions. \section{Notation} The notation $G$ always denotes the general linear group $GL_n$ defined over $\mathbb{Q}$. Consider the inclusions $G \supset P \supset B = T U \supset T \supset Z$ of subgroups all defined over $\mathbb{Q}$, where $P$ is a parabolic subgroup, $B$ the standard Borel subgroup of upper triangular matrices, $T$ the maximal torus of diagonal matrices, $U$ the unipotent subgroup of strict upper triangular matrices, and $Z$ the center of $G$. We call a parabolic subgroup of $G$, such as $P$, {\it standard} if it contains $B$. For a $\mathbb{Q}$-algebra $A$, let $G(A)$ denote the group of $A$-valued points of $G$, and $G_A$ the extension of scalars of $G$ from $\mathbb{Q}$ to $A$. The dimension of a subgroup $K$ of $G$ is denoted by $\dim K$, and its $\mathbb{Q}$-rank by $\operatorname{rank} K$. The notation $G^\circ$ denotes the connected component of the identity of $G$, and $\pi_0(G(\mathbb{R}))$ is the group of connected components of $G(\mathbb{R})$.
The notation $N_G(K)$ denotes the normalizer of $K$ in $G$. The Lie algebra of $G$ is denoted by $\mathfrak{g}$, and its universal enveloping algebra by $\mathfrak{U}(\mathfrak{g})$. \section{Basic setup}\label{basic_setup} \subsection{Adelic setup}\label{adelic_setup} \noindent Let $\mathbb{A} = \mathbb{R} \times (\prod^{\prime}_p\mathbb{Q}_p) = \mathbb{A}_\infty \times \ade_{\textrm{fin}}$ be the ring of adeles over $\mathbb{Q}$, where $\mathbb{A}_\infty = \mathbb{R}$ is the {\it archimedean} component, and $\ade_{\textrm{fin}} = \prod^{\prime}_p \mathbb{Q}_p$ is the {\it nonarchimedean} component, which is the restricted direct product of the local fields $\mathbb{Q}_p$ as $p$ runs through the set of finite primes. Then $G(\mathbb{A}) := G(\mathbb{R}) \times G(\ade_{\textrm{fin}}) := G_\infty \times G_f$. Fix a subgroup $K_\infty = O(n) Z(\mathbb{R}) = O(n) Z(\mathbb{R})^\circ$, the maximal compact modulo center subgroup. The {\it symmetric space} associated to the pair $(G_\infty, K_\infty)$ is the quotient space $X_{\operatorname{Sym}}:= G_\infty/K_\infty$. Let $\Gamma \subset G(\mathbb{Q})= GL_n(\mathbb{Q})$ be an arithmetic subgroup, i.e. for all congruence subgroups $\Gamma'$ the intersection $\Gamma \cap \Gamma'$ is of finite index in both $\Gamma$ and $\Gamma'$. Suppose $\Gamma$ has no torsion; then its natural action on $X_{\operatorname{Sym}}$ by left multiplication is properly discontinuous and free, resulting in a locally symmetric space $\Gamma \expandafter\@gobble\string\\ X_{\operatorname{Sym}}$. \\ \noindent Let $\rho : G \to GL(M)$ be a finite-dimensional complex rational representation of $G$. It defines a local system $\ul{M}$ of complex vector spaces on $\Gamma \expandafter\@gobble\string\\ X_{\operatorname{Sym}}$, and one has $$ H^\bullet(\Gamma \expandafter\@gobble\string\\ X_{\operatorname{Sym}}, \ul{M}) \cong H^\bullet(\Gamma, M) .
$$ The cohomology on the left hand side is computed with the aid of the de Rham complex (and that on the right is the ordinary group cohomology). Passing further on to the adelic setup so as to bring in the results of automorphic representations, let $K_f \subset G(\ade_{\textrm{fin}})$ be a compact open subgroup, and consider the following construction, wherein the action of $G(\mathbb{Q})$ is by left multiplication and all the maps are the canonical projections (see \cite[Chapter 3]{Ha1}): \[ \begin{tikzcd}\label{role_change_diagram} X_{\operatorname{Sym}} \times G(\ade_{\textrm{fin}}) \ar{r}{\pi'} \ar{d}{\Pi'} &X_{\operatorname{Sym}} \times G(\ade_{\textrm{fin}})/K_f \ar{d}{\pi} \\ G(\mathbb{Q}) \expandafter\@gobble\string\\ \Big( X_{\operatorname{Sym}} \times G(\ade_{\textrm{fin}}) \Big) \ar{r}{\Pi} & G(\mathbb{Q})\expandafter\@gobble\string\\ \Big( X_{\operatorname{Sym}} \times G(\ade_{\textrm{fin}})/K_f \Big) \end{tikzcd} \] \vspace{0.5cm} \noindent Let $ S_{K_f} := G(\mathbb{Q})\expandafter\@gobble\string\\ \Big( X_{\operatorname{Sym}} \times G(\ade_{\textrm{fin}})/K_f \Big) = G(\mathbb{Q}) \expandafter\@gobble\string\\ G(\mathbb{A})/K_\infty K_f$, called the {\it adelic locally symmetric space}. It can be equipped with the coefficient sheaf $\til{M}$, obtained from the representation $(\rho,M)$, whose sections on an open set $V \subset S_{K_f}$ are the set $\til{M}(V)$ of locally constant functions $s: \pi^{-1}(V) \to M$ satisfying $$ s(\gamma u) = \rho(\gamma) s(u), \; \; \trm{ for all } \gamma \in G(\mathbb{Q}), \; u = (x_\infty K_\infty, g_f K_f) \in \pi^{-1}(V). $$ \begin{remark}Eventually, we view sheaf cohomology groups $H^\bullet(S_{K_f}, \til{M_\lambda})$ as Hecke modules; see \S \ref{Hecke_action}. In particular these groups should be equipped with the $\Gamma$-action on the left or equivalently the $K_f$-action on the right.
Informally speaking, this is analogous to the strong approximation theorem that aids in trading the transformation property of modular forms on $SL(2,\mathbb{R})$ under the left-action of $SL(2,\mathbb{Z})$ for the transformation property of automorphic forms on $SL(2,\mathbb{R})$ under the right-action of the maximal compact subgroup $SO(2,\mathbb{R})$. \end{remark} Consider the natural inclusion $\til{M} \hookrightarrow \til{M} \otimes \ade_{\textrm{fin}}$, and given a section $s \in \til{M}(V)$ associate a map $s_1 : \pi'^{-1} ({\pi^{-1}(V)}) \to \til{M} \otimes \ade_{\textrm{fin}}$ defined by $$ s_1(x_\infty, g_f) := g_f^{-1} s(x_\infty K_\infty , g_f K_f) $$ where $g_f$ acts on the second factor of $M \otimes \ade_{\textrm{fin}}$. Evidently $s_1(\gamma (x_\infty, g_f)) = s_1(x_\infty, g_f)$ for all $\gamma \in G(\mathbb{Q})$, so that $s_1$ factors through the map $$ s_2 : G(\mathbb{Q}) \expandafter\@gobble\string\\ \Big( G(\mathbb{R})/K_\infty \times G(\ade_{\textrm{fin}}) \Big) \to M \otimes \ade_{\textrm{fin}}, $$ and defines a sheaf $\til{M} \otimes \ade_{\textrm{fin}}$ on the space $S_{K_f}$. Alternatively, since $\Pi^{-1}(V) = \Pi'(\pi'^{-1} \circ \pi^{-1}(V))$, we obtain a sheaf $\til{M \otimes \ade_{\textrm{fin}}}$ whose sections on an open set $V \subset S_{K_f}$ are the set $\til{M \otimes \ade_{\textrm{fin}}}(V)$ of locally constant functions $s: \Pi^{-1}(V) \to M\otimes \ade_{\textrm{fin}}$ satisfying $ s(x_\infty K_\infty, g_f k_f) = k_f^{-1} s(x_\infty K_\infty, g_f)$, for all $x_\infty \in G_\infty$, $g_f \in G_f$, $k_f \in K_f$. In summary, the sheaf $\til{M} \otimes \ade_{\textrm{fin}}$ defined in terms of the $G(\mathbb{Q})$-action on $M$ on the left is identified with the sheaf $\til{M \otimes \ade_{\textrm{fin}}}$ defined in terms of the natural right action of $K_f$ on $M \otimes \ade_{\textrm{fin}}$ on the right.
\subsection{Topological structure of $S_{K_f}$:}\label{topological_structure_of_lss} The quotient $G(\mathbb{Q}) \expandafter\@gobble\string\\ G(\ade_{\textrm{fin}})/K_f$ of $G(\ade_{\textrm{fin}})/K_f$ under the natural action of $G(\mathbb{Q})$ is a finite set $\Set{g_f^{1}, g_f^{2}, \ldots, g_f^{l}}$, and a connected component of $S_{K_f}$ is of the form $$ X_i := G(\mathbb{A})^{\circ} (\epsilon, g_f^{(i)}) K_f/ K_\infty K_f $$ where $\epsilon \in \pi_0(G(\mathbb{R}))$. Let $\Gamma_i \subset G(\mathbb{Q})$ be its stabilizer, which is an arithmetic subgroup of $G(\mathbb{Q})$. Then we have $S_{K_f} = \coprod_{i=1}^l \Gamma_i \expandafter\@gobble\string\\ X_i$ (see \cite[\S 1.1]{Ha2}). \begin{defn}\label{neat_subgroup_defn} The subgroup $K_f$ is said to be neat if the $\Gamma_i$ are torsion free. \end{defn} \begin{remark} The sheaf cohomology groups $H^\bullet(S_{K_f}, \til{M})$ are known to be isomorphic to a finite direct sum of cohomology groups of the form $H^\bullet(\Gamma \expandafter\@gobble\string\\ G(\mathbb{R})/K_\infty, \til{M})$ for appropriate arithmetic subgroups $\Gamma \subset G(\mathbb{Q})$, under mild restrictions on both $S_{K_f}$ and $\til{M}$. If the stabilizers $\Gamma_i$ have no torsion, then they act freely, so that the connected components are locally symmetric. This is true in our case of interest, i.e. $G = GL_n/\mathbb{Q}$. Indeed, the stabilizer $\Delta$ of a point $g = (g_\infty, g_f^{i}) K_\infty K_f$ in $\Gamma_i$ is a congruence subgroup in the connected component of the unit group $\Set{1, -1}$ of the center $Z(\mathbb{Q}) = \mathbb{Q}^\times$, hence trivial.
But if we consider groups over an arbitrary number field $F \neq \mathbb{Q}$, then we have to pass to the action of $\Gamma/\Delta$ above to get a locally symmetric space, since the unit group $\mathcal{O}_F^\times$ is nontrivial as a consequence of Dirichlet's unit theorem; accordingly we have to consider the group cohomology $H^\bullet(\Gamma_i, M) := H^\bullet( ( \Gamma_i/\Delta_i) \expandafter\@gobble\string\\ X_i, \til{M} )$. \end{remark} \subsection{Sheaf structure on $S_{K_f}$}\label{sheaf_str} \noindent The group of rational characters $X^*(T) := \operatorname{Hom}(T, \mathbb{G}_m)$ of the maximal torus $T$ is a free abelian group of rank $n$. It is equipped with the standard basis $e_i: \textrm{diag}(t_1, \ldots, t_n) \mapsto t_i$. The structure of $X^*(T)$ is more transparent if we pass to $X^*(T) \otimes_\mathbb{Z} \mathbb{Q}$ and consider the fundamental basis associated to the standard basis. The {\it fundamental weights} $\gamma_i \in X^*(T)_\mathbb{Q}$ are characterized by the conditions that they act on the center $Z$ by $z \mapsto z^i$, and that they satisfy the following relations: for all $1 \leq i \leq n-1$, and $1 \leq j \leq n-1$, $$ 2 \ip{\gamma_i, e_j - e_{j+1}}/\ip{e_j-e_{j+1}, e_{j}- e_{j+1}} = \delta_{ij}. $$ In particular, the determinant character $\delta := e_1 + \ldots + e_n$ spans $X^*(Z)_\mathbb{Q}$, and the set $\Set{\gamma_1, \ldots, \gamma_{n-1}, \delta}$ is a basis of $X^*(T)_\mathbb{Q}$ called the {\it fundamental basis of $T$.} A weight $\gamma = \sum_{i=1}^{n-1} a_i \gamma_i + d \delta$ is said to be {\it integral} if $a_i \in \mathbb{Z}$, $n d \in \mathbb{Z}$ and $nd \equiv \sum_{i=1}^{n-1} i(a_i -1) \pmod{n}$. An integral weight is said to be {\it dominant} if, in addition, the coefficients $a_i\geq 0$. \\ \noindent Suppose that the representation $M$ is absolutely irreducible.
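Before proceeding, we record a concrete description of the fundamental weights just introduced (our illustration; one checks it directly against the two defining conditions above):

```latex
% The weight gamma_i = e_1 + ... + e_i acts on the center by
% z I_n -> z^i, and pairs with the simple roots as
% 2<gamma_i, e_j - e_{j+1}>/<e_j - e_{j+1}, e_j - e_{j+1}> = delta_{ij}.
\[
  \gamma_i \;=\; e_1 + e_2 + \cdots + e_i, \qquad 1 \leq i \leq n-1 .
\]
% For n = 2 this reads gamma_1 = e_1 and delta = e_1 + e_2, so that
% \{\gamma_1, \delta\} is indeed a basis of X^*(T)_\mathbb{Q}.
```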
By highest-weight theory, $M_\mathbb{C}$ is isomorphic to $M_\lambda \otimes \mathbb{C}$ where $M_\lambda$ is the highest weight module associated to the dominant integral weight $\lambda \in X^*(T)_\mathbb{Q}$. Throughout this article, we consider the restriction $$ \rho_\lambda := \rho(\mathbb{C})|_{G(\mathbb{R})} : G(\mathbb{R}) \to GL(M_\lambda \otimes \mathbb{C}). $$ Henceforth, we work exclusively with the module $M_\lambda \otimes \mathbb{C}$, so, for ease of notation, we drop the second factor $\mathbb{C}$ and simply write $M_\lambda$. Now, with this abuse of notation, consider the associated sheaf $\til{M}_\lambda$ on the adelic locally symmetric space $S_{K_f}$ [\ref{adelic_setup}]. Let $\omega_\lambda: Z(\mathbb{R}) \to \mathbb{C}^\times$ be the central character of $\rho_\lambda$; it is given by $\omega_\lambda(z) = z^{nd}$, where $d$ is the coefficient of the determinant character $\delta$ in the expression of the character $\lambda : T(\mathbb{R}) \to \mathbb{C}^\times$ in the fundamental basis. Then $\omega_\lambda(-I_n) = -1 \; \textrm{or} \; 1$. Suppose it is $-1$ and consider the stalk $\til{(M_\lambda)}_x$ for some $x \in S_{K_f}$: $$ \til{(M_\lambda)}_x = \Set{ s_x : \pi^{-1}(x) \to M_\lambda | s_x(\gamma \cdot u)= \rho_\lambda(\gamma) s_x(u), \; \; \gamma \in G(\mathbb{Q}), u \in \pi^{-1}(x)} $$ Since the representative section $s$ of $s_x$ is locally constant, the germ $s_x$ is constant, hence $$ s(u) = s(-I_n \cdot u) = \omega_\lambda(-I_n) s(u) = - s(u), \; \; u \in \pi^{-1}(x), $$ forcing the stalk $(\til{M_\lambda})_x$ to be trivial, whence the sheaf $\til{M_\lambda}$ is also trivial; note that here we used the `thickened' aspect of $K_\infty$, namely that $K_\infty$ is $O(n) \mathbb{R}^*$ rather than just $O(n)$ (the latter choice is also considered in the literature).
Therefore, for $\til{M_\lambda}$ to be not identically zero, we must restrict our attention to representations $\rho_\lambda$ whose central characters $\omega_\lambda : Z(\mathbb{R}) \to \mathbb{C}^\times$ satisfy $\omega_\lambda(-I_n) = 1$. This implies in particular that $\omega_\lambda$ is determined by its values on the connected component of the identity $Z(\mathbb{R})^\circ \cong \mathbb{R}^\times_{>0}$. In other words, the central character $\omega_\lambda$ is the type of an algebraic Hecke character; see below, and also \cite[\S 2.5]{Ha2}. \subsection{Algebraic Hecke characters}\label{algebraic_Hecke_characters} \begin{defn}\label{algebraic_Hecke_character_defn} An \textit{algebraic Hecke character of a torus $S$ of type $\gamma \in X^*(S)_\mathbb{Q}$ defined over $\mathbb{Q}$} is a continuous group homomorphism $\phi : S(\mathbb{Q}) \backslash S(\mathbb{A}) \to \mathbb{C}^\times$ such that $\phi|_{S(\mathbb{R})^\circ} = \gamma_\infty^{-1}|_{S(\mathbb{R})^\circ}$, where $\gamma_\infty : S(\mathbb{R}) \hookrightarrow S(\mathbb{C}) \to \mathbb{C}^\times$. \end{defn} \noindent Applying the definition to our situation, keeping in view the assumption that $\omega_\lambda(- I_n) = 1$, we see that $\omega_\lambda: Z(\mathbb{R}) \to \mathbb{C}^\times$ is the type of an algebraic Hecke character $\phi : Z(\mathbb{Q}) \backslash Z(\mathbb{A}) \to \mathbb{C}^\times$ such that $\phi|_{Z(\mathbb{R})} = \omega_{\lambda}^{-1}$.
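\noindent For instance, for $n = 2$ and a dominant integral weight $\lambda = a \gamma_1 + d \delta$, we have $\omega_\lambda(z) = z^{2d}$, so the condition $\omega_\lambda(-I_2) = (-1)^{2d} = 1$ amounts to $2d$ being even, i.e. $d \in \mathbb{Z}$; by the integrality congruence $2d \equiv a - 1 \pmod{2}$, this happens precisely when $a$ is odd.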
The center $Z(\mathbb{A}) \cong \mathbb{A}^\times \cong \mathbb{Q}^\times \times \mathbb{R}^\times_{>0} \times \widehat{\mathbb{Z}}^\times$, and therefore $\phi$ is determined by its `finite part' $\phi_f : \widehat{\mathbb{Z}}^\times \to \mathbb{C}^\times$ (its infinite part is given by $\omega_\lambda$), which has finite order because $\widehat{\mathbb{Z}}^\times$ is compact and totally disconnected, and therefore must factor through a map $(\mathbb{Z}/N\mathbb{Z})^\times \to \mathbb{C}^\times$ for some positive integer $N$; the least such $N$ is called the {\it conductor} of the character $\phi$. Consequently, the algebraic Hecke characters (in our situation) of type $\omega_\lambda$ are parametrized by primitive Dirichlet characters. \subsection{Summary}\label{summary_basic_setup} \noindent Let us summarize the assumptions about the principal objects of our study: $G:= GL_n$, $K_\infty = O(n) \mathbb{R}^*$, and $ S_{K_f} = G(\mathbb{Q}) \backslash G(\mathbb{A})/K_\infty K_f$ is the adelic locally symmetric space attached to the pair $(G_\infty, K_\infty)$ and to a choice of neat compact open subgroup $K_f \subset G(\ade_{\textrm{fin}})$. We study the sheaf cohomology groups $H^\bullet(S_{K_f}, \til{M_\lambda})$, where $(\rho_\lambda, M_\lambda)$ is the highest weight $G(\mathbb{R})$-module associated to a dominant integral weight $\lambda$ such that its central character $\omega_\lambda$ is the type of an algebraic Hecke character. \section{Cohomology of arithmetic groups} \label{cohomology_of_arithmetic_groups} \noindent In this section we recall several notions related to sheaf cohomology. The reader may refer to \cite[Chapter 2]{Ha} for the formal properties of sheaf cohomology, and to \cite[Chapter 2]{Ha1} for the interpretation of these groups as Hecke modules in our context. \subsection{Hecke action}\label{Hecke_action} \noindent The groups $H^\bullet(S_{K_f}, \til{M_\lambda})$ are functorial with respect to $K_f$.
Indeed, passing to a smaller compact open subgroup $K_f' \subset K_f$ (which is necessarily of finite index) yields a surjective map $ \pi_{K_f, K_f'}: S_{K_f'} \to S_{K_f} $ with finite fibers, and hence a map on cohomology $$ \pi_{K_f, K_f'}^\bullet : H^\bullet(S_{K_f}, \til{M_\lambda}) \to H^\bullet(S_{K_f'}, \til{M_\lambda}). $$ The family $ \{ H^\bullet(S_{K_f}, \til{M_\lambda}), \pi_{K_f, K_f'}^\bullet \}$ indexed by $K_f$ is a directed system with direct limit $$ H^\bullet(S^G, \til{M_\lambda}) = \varinjlim_{K_f} H^\bullet(S_{K_f}, \til{M_\lambda}). $$ The limit $H^\bullet(S^G, \til{M_\lambda})$ has a natural action of $\pi_0(G(\mathbb{R})) \times G(\ade_{\textrm{fin}})$ by right multiplication; for $(k_\infty, x_f) \in \pi_0(G(\mathbb{R})) \times G(\ade_{\textrm{fin}})$, the induced multiplication map $ m_{(k_\infty,x_f)} : S_{K_f} \overset{\sim}{\longrightarrow} S_{x_f^{-1} K_f x_f}$ is an isomorphism such that $(m_{(k_\infty, x_f)})_*(\til{M_\lambda}) \cong \til{M_\lambda}$, hence passing to the limit yields the desired action. The cohomology with fixed level $K_f$ is recovered by taking the $K_f$-invariants under this action: $ H^\bullet(S_{K_f}, \til{M_\lambda}) = H^\bullet(S^G, \til{M_\lambda})^{K_f}$. \\ \noindent Let $\mathcal{H}_f := \mathcal{C}_c(G(\ade_{\textrm{fin}}) // K_f, \mathbb{C})$ be the \textit{Hecke algebra} of $K_f$-bi-invariant compactly supported functions $h : G(\ade_{\textrm{fin}}) \to \mathbb{C}$, with the algebra structure given by convolution: $$ (h_1 \ast h_2)(g_f) = \mathop{\int}_{G(\ade_{\textrm{fin}})} h_1(x_f) h_2(x_f^{-1}g_f) dx_f, $$ where the Haar measure $dx_f$ is normalized so that $K_f$ has unit volume. Clearly the characteristic function $\chi_{K_f}$ is the identity element of $\mathcal{H}_f$.
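\noindent Indeed, the bi-invariance of any $h \in \mathcal{H}_f$ and the normalization $\operatorname{vol}(K_f) = 1$ give $$ (\chi_{K_f} \ast h)(g_f) = \mathop{\int}_{G(\ade_{\textrm{fin}})} \chi_{K_f}(x_f) h(x_f^{-1}g_f) dx_f = \mathop{\int}_{K_f} h(x_f^{-1}g_f) dx_f = h(g_f), $$ and similarly $h \ast \chi_{K_f} = h$.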
The action of the group $G(\ade_{\textrm{fin}})$ induces an action of $\mathcal{H}_f$ on the cohomology $H^\bullet(S_{K_f}, \til{M_\lambda})$ by $$ T_h(v) = \mathop{\int}_{G(\ade_{\textrm{fin}})} h(x_f) (x_f \cdot v) dx_f, \; \; v \in H^\bullet(S_{K_f}, \til{M_\lambda}), $$ which is in fact a finite sum: if $K_f' \subset K_f$ is the stabilizer of $v$, necessarily of finite index, then $$ T_h(v) = [K_f : K_f'] \sum_{a_f} \sum_{\xi_f \in G_f/K_f'} c_{a_f} \chi_{K_f a_f K_f}(\xi_f) (\xi_f \cdot v), $$ which is $K_f$-invariant. Therefore $T_h(v) \in H^\bullet(S_{K_f}, \til{M_\lambda})$, and since \begin{align*} T_{h_1 \ast h_2}(v) &= \mathop{\int}_{G(\ade_{\textrm{fin}})} (h_1 \ast h_2)(x_f) (x_f \cdot v) dx_f \\ &= \mathop{\int}_{G(\ade_{\textrm{fin}})} \mathop{\int}_{G(\ade_{\textrm{fin}})}h_1(y_f) h_2(y_f^{-1} x_f) dy_f (x_f \cdot v) dx_f \\ &= \mathop{\int}_{G(\ade_{\textrm{fin}})} h_1(y_f ) \, y_f \cdot \Big(\mathop{\int}_{G(\ade_{\textrm{fin}})} h_2(z_f) (z_f \cdot v) dz_f \Big) dy_f \qquad (\textrm{substituting } x_f = y_f z_f)\\ &= \mathop{\int}_{G(\ade_{\textrm{fin}})} h_1(y_f) \; \Big( y_f \cdot T_{h_2}(v) \Big) dy_f = T_{h_1}(T_{h_2}(v)), \end{align*} the map $\mathcal{H}_f \to \operatorname{End}_\mathbb{C}(H^\bullet(S_{K_f}, \til{M_\lambda}))$ given by $h \mapsto T_h$ is a representation of the Hecke algebra $\mathcal{H}_f$, which is in fact finite-dimensional since $H^\bullet(S_{K_f}, \til{M_\lambda})$ is a finite-dimensional complex vector space (see the de Rham isomorphism \eqref{de_Rham_isomorphism}). \newcommand{\hbul}{H^\bullet(S_{K_f}, \til{M_\lambda})} \subsection{Borel-Serre compactification}\label{Borel_Serre_compactification} \noindent We now turn to the topological aspects of $S_{K_f}$ (with $K_f$ neat) that will yield some more information about $\hbul$. In general, the space $S_{K_f}$ is not compact.
In fact, the adelic locally symmetric space associated to a connected reductive group over $\mathbb{Q}$ is compact if and only if the group is anisotropic over $\mathbb{Q}$, i.e. has no proper parabolic subgroups defined over $\mathbb{Q}$ \cite[Page 277]{Bo}. In particular, in our case, namely $GL_n/\mathbb{Q}$, the space $S_{K_f}$ is not compact. Borel and Serre constructed a compactification $\bar{S}_{K_f}$ of $S_{K_f}$ by `adding' the boundary $ \partial \bar{S}_{K_f} := \bigcup_{P} \partial_P \bar{S}_{K_f}$, where $P$ runs through the (finitely many) {\it standard} representatives of $G(\mathbb{Q}) $-conjugacy classes of proper $\mathbb{Q}$-parabolic subgroups; the {\it Borel-Serre compactification} $\bar{S}_{K_f} = S_{K_f} \cup \partial \bar{S}_{K_f}$ is a compact manifold with corners and $\dim \partial \bar{S}_{K_f} = \dim \bar{S}_{K_f} - 1$ (see \cite{BS}). The Borel-Serre compactification $\bar{S}_{K_f}$ is equipped with an inclusion $i : S_{K_f} \hookrightarrow \bar{S}_{K_f}$ that is a homotopy equivalence, and a canonical restriction $r : \bar{S}_{K_f} \to \partial \bar{S}_{K_f}$: $$ S_{K_f} \xrightarrow{i} \bar{S}_{K_f} \xrightarrow{r} \partial \bar{S}_{K_f}. $$ The coefficients on these spaces are related by the canonical short exact sequence of sheaves $$ 0 \to i_{!} \til{M_\lambda} \to i_* \til{M_\lambda} \to i_* \til{M_\lambda}/ i_{!} \til{M_\lambda} \to 0, $$ where $i_{!} \til{M_\lambda}$ is the sheaf extended by zero from $S_{K_f}$ to $\bar{S}_{K_f}$, and the quotient $i_* \til{M_\lambda}/ i_{!} \til{M_\lambda}$ is the restriction of $i_* \til{M_\lambda}$ to the boundary, extended by zero to $\bar{S}_{K_f}$.
\begin{defn} The \textit{cohomology with compact supports}, or compactly supported cohomology, is defined by $$ H^\bullet_c(S_{K_f}, \til{M_\lambda}) := H^\bullet(\bar{S}_{K_f}, i_{!}(\til{M_\lambda})); $$ the image of the natural map $i^\bullet : H^\bullet_c(S_{K_f}, \til{M_\lambda}) \to H^\bullet(S_{K_f}, \til{M_\lambda})$ is called the {\it inner or interior cohomology} and denoted $H^\bullet_{!}(S_{K_f}, \til{M_\lambda})$. The cohomology $H^\bullet(\partial \bar{S}_{K_f}, \til{M_\lambda})$ is called the {\it boundary cohomology}. \end{defn} \noindent The short exact sequence of sheaves yields the following fundamental long exact sequence, equipped with $\mathcal{H}_f$-action \cite[Chapter 3]{Ha1}: \begin{equation}\label{Borel_Serre_fundamental_exact_sequence} \ldots \to H^{k-1}( \partial \bar{S}_{K_f}, \til{M_\lambda})\to H^k_c(S_{K_f}, \til{M_\lambda}) \xrightarrow{i^k} H^k(S_{K_f}, \til{M_\lambda}) \xrightarrow{r^k} H^{k}(\partial \bar{S}_{K_f}, \til{M_\lambda}) \to \ldots \end{equation} \vspace{0.5cm} \begin{notation}\label{notation} We introduce the following notation for ease of reference: $H^\bullet_{?}(S_{K_f}, \til{M_\lambda})$, where the symbol $?$ takes values in the set $\Set{ \trm{`empty'}, c, !, \partial}$. For example, by $H^\bullet_\partial(S_{K_f}, \til{M_\lambda})$ we mean $H^\bullet(\partial \bar{S}_{K_f}, \til{M_\lambda})$, and likewise for the other symbols. Let us note further that by the symbol $? = $ `empty' we mean $H^\bullet(S_{K_f}, \til{M_\lambda})$, i.e. the full or ordinary cohomology. When the coefficient system $\til{M_\lambda}$ is clear from the context, we further abbreviate $H^\bullet_{?}(S_{K_f}, \til{M_\lambda})$ to $H^\bullet_{?}$. \end{notation} \begin{remark}\label{degree_0_compact_cohomology} Note that the beginning of the fundamental exact sequence \eqref{Borel_Serre_fundamental_exact_sequence} is $$ 0 \to H_c^0 \to H^0 \to H^0_\partial \to \ldots; $$ in particular, observe that the map $H_c^0 \to H^0$ is an injection, due to the fact that the global sections functor is left exact.
\end{remark} \subsection{Relative Lie algebra cohomology}\label{relative_Lie_algebra_cohomology} \noindent Consider the \textit{de Rham complex}, the resolution of the sheaf $\til{M}_\lambda$ by the sheaves of $M_\lambda$-valued smooth forms $\Omega^\bullet(S_{K_f}, M_\lambda)$ on $S_{K_f}$. The groups $H^\bullet(S_{K_f}, \til{M_\lambda})$ are computed through the de Rham complex with the aid of the \textit{de Rham isomorphism} \begin{equation}\label{de_Rham_isomorphism} H^\bullet(S_{K_f}, \til{M_\lambda}) \cong H^\bullet(\Omega^\bullet(S_{K_f}, \til{M}_\lambda)^\Gamma). \end{equation} As an aside, let us note that this interpretation of sheaf cohomology in terms of de Rham cohomology implies that Poincar\'{e} duality holds for the sheaf cohomology groups as well: with $d := \dim S_{K_f}$ and $\til{M_\lambda}^\vee$ the sheaf dual to $\til{M_\lambda}$, for all $0 \leq i \leq d$ there exists a nondegenerate pairing \begin{equation}\label{Poincare_duality} H^i(S_{K_f}, \til{M_\lambda}) \times H^{d-i}_c(S_{K_f}, \til{M_\lambda}^\vee) \to \mathbb{C}. \end{equation} \vspace{0.5cm} \noindent The de Rham cohomology also has an interpretation in terms of $(\mathfrak{g},K_\infty)$-cohomology, which we briefly recall now. The coefficient space $ V(\omega_\lambda^{-1}) := \mathcal{C}^\infty(G(\mathbb{Q})\backslash G(\mathbb{A})/K_f, \omega_\lambda^{-1})$, which is the space of smooth functions $\phi : G(\mathbb{A}) \to \mathbb{C}$ satisfying \begin{equation}\label{transformation_law} \phi(g_0 z_\infty g_\infty k_f) = \omega_\lambda^{-1}(z_\infty) \phi(g_\infty), \; \; g_0 \in G(\mathbb{Q}), g_\infty \in G_\infty, k_f \in K_f, z_\infty \in Z_\infty, \end{equation} is equipped with the $G(\mathbb{A})$-action by right translation, which upon differentiation (in the $g_\infty$-variable) yields a $\mathfrak{g}$-action.
Hence we see that $V(\omega_\lambda^{-1})$ is a $(\mathfrak{g},K_\infty) \times G(\ade_{\textrm{fin}})$-module, hence also a $(\mathfrak{g},K_\infty) \times \mathcal{H}_f$-module, where the Hecke algebra acts by convolution. Let $V(\omega_\lambda^{-1})^{(K_\infty)} \subset V(\omega_\lambda^{-1})$ be the subspace of $K_\infty$-finite vectors. It is a sub-$(\mathfrak{g},K_\infty)$-module. The {\it $(\mathfrak{g},K_\infty)$-cohomology, or relative Lie algebra cohomology}, is defined as the cohomology of the complex $ \operatorname{Hom}_{K_\infty}(\wedge^\bullet(\mathfrak{g}/\mathfrak{k}), V(\omega_\lambda^{-1})^{(K_\infty)} \otimes M_\lambda)$, where the action of $K_\infty$ on the exterior powers $\wedge^\bullet \mathfrak{g}/\mathfrak{k}$ is the one induced from the adjoint representation of $K_\infty$ on $\mathfrak{g}/\mathfrak{k}$ (see \cite[Chapter I]{BW}). The relation of the $(\mathfrak{g},K_\infty)$-cohomology to the sheaf cohomology is based on the following canonical isomorphism of complexes, which is compatible with the action of the Hecke algebra: \begin{equation}\label{de_Rham_Lie_algebra_isomorphism} \Omega^\bullet_?(S_{K_f}, \til{M_\lambda}) \cong \operatorname{Hom}_{K_\infty}(\wedge^\bullet(\mathfrak{g}/\mathfrak{k}), V_?(\omega_\lambda^{-1})\otimes M_\lambda), \; \; \; \;\; \; \textrm{ where } ? = \textrm{empty}, c, \end{equation} where $V_c(\omega_\lambda^{-1})$ consists of the compactly supported functions in $V(\omega_\lambda^{-1})$.
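\noindent For orientation, in degree zero the complex reads $\operatorname{Hom}_{K_\infty}(\wedge^0(\mathfrak{g}/\mathfrak{k}), V(\omega_\lambda^{-1})^{(K_\infty)} \otimes M_\lambda) = \big( V(\omega_\lambda^{-1})^{(K_\infty)} \otimes M_\lambda \big)^{K_\infty}$, and the kernel of the differential consists precisely of the vectors annihilated by $\mathfrak{g}$, so that $H^0(\mathfrak{g},K_\infty, V(\omega_\lambda^{-1})^{(K_\infty)} \otimes M_\lambda)$ is the space of $(\mathfrak{g},K_\infty)$-invariants (cf. \cite[Chapter I]{BW}).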
Isomorphisms \eqref{de_Rham_isomorphism} and \eqref{de_Rham_Lie_algebra_isomorphism} hint at the analysis of $V(\omega_\lambda^{-1})$ and, more generally, of the Hilbert space obtained from its completion with respect to a suitable norm: \begin{defn}\label{square_integrable_function} The subspace of {\it square-integrable} functions $$ V_{(2)}(\omega_\lambda^{-1}) := \mathcal{C}^\infty_2(G(\mathbb{Q}) \backslash G(\mathbb{A})/K_f, \omega_\lambda^{-1}) \subset V(\omega_\lambda^{-1}) $$ is the subset of $f \in V(\omega_\lambda^{-1})$ satisfying \begin{equation}\label{square_integrable_defn} \mathop{\int}_{G(\mathbb{Q}) Z(\mathbb{R})^\circ \backslash G(\mathbb{A})} \abs{(Uf)(g)}^2 \abs{\omega_\lambda(g)}^2 dg < \infty \end{equation} for all elements $U \in \mathfrak{U}(\mathfrak{g})$. Its completion with respect to the norm defined by \eqref{square_integrable_defn} is denoted by $L^2(G(\mathbb{Q})\backslash G(\mathbb{A})/K_f,\omega_\lambda^{-1})$. \end{defn} \begin{defn}\label{square_integrable_cohomology} The \textit{square-integrable cohomology} $H^\bullet_{(2)}(S_{K_f}, \til{M_\lambda})$ is the sub-$\mathcal{H}_f$-module of $H^\bullet(S_{K_f}, \til{M_\lambda})$ consisting of those cohomology classes with a square-integrable representative \textit{that is also a closed form} in $\Omega^\bullet(S_{K_f}, \til{M_\lambda})$ (see \eqref{de_Rham_isomorphism}); for details about the motivation for this definition, see \cite[Chapter 3]{Ha1}. \end{defn} \noindent Finally, and clearly, we have the following filtration, as $\mathcal{H}_f$-modules, of the full cohomology: \begin{equation}\label{basic_filtration} H^\bullet_{!} \subset H^\bullet_{(2)} \subset H^\bullet; \; \; \trm{ see } \ref{notation}. \end{equation} \begin{remark} Notice that $H^\bullet_{c}$, and hence $H^\bullet_{!}$, is defined by geometric means, while $H^\bullet_{(2)}$ is defined by analytic means.
In the next section we define the \textit{cuspidal cohomology} by algebraic means. \end{remark} \section{Decomposing cohomology} \label{decomposing_cohomology} \subsection{Langlands spectral decomposition}\label{Langlands_spectral_decomposition} \noindent Consider the Hilbert space $ L^2(\omega) := L^2(G(\mathbb{Q}) \backslash G(\mathbb{A})/K_f, \omega) $ (see \eqref{transformation_law} and Definition \ref{square_integrable_function}). It is a $G(\mathbb{R}) \times \mathcal{H}_{f}$-module, where $G(\mathbb{R})$ acts by unitary transformations and $\CalH_{f}$ acts by right convolution. Due to Langlands \cite{La}, the space $L^2(\omega)$ is the direct sum of the \textit{discrete spectrum} $L^2_{\textrm{disc}}(\omega)$ and the {\it continuous spectrum} $L^2_{\textrm{cont}}(\omega)$, where $L^2_{\textrm{disc}}(\omega)$ is the maximal closed subspace spanned by irreducible $G(\mathbb{R}) \times \CalH_f$-modules, and $L^2_{\textrm{cont}}(\omega)$ is its orthogonal complement. A representation occurring in $L^2_{\textrm{disc}}(\omega)$ will be called {\it discrete}. The discrete spectrum contains the {\it cuspidal spectrum} $L^2_{\textrm{cusp}}(\omega)$, namely the closed subspace spanned by functions $f \in L^2_{\textrm{disc}}(\omega)$ such that the integral over $U(\mathbb{Q}) \backslash U(\mathbb{A})$ of $f$, and of all its right-translates under $G(\mathbb{A})$, vanishes, where $U$ is the unipotent radical of any proper parabolic subgroup, and the measure is normalized so that $U(\mathbb{Q}) \backslash U(\mathbb{A})$ has unit volume. The orthogonal complement of $L^2_{\textrm{cusp}}(\omega)$ in $L^2_{\textrm{disc}}(\omega)$ is called the {\it residual spectrum} $L^2_{\textrm{res}}(\omega)$. The decomposition of the discrete spectrum into cuspidal and residual spectrum has a refinement in the $GL_n$ case due to Langlands \cite{La} and to M{\oe}glin and Waldspurger \cite{MW}.
The description of these results involves several notions, but we recall only those that are directly relevant to this article; for more details the reader may refer to the articles of Arthur \cite{Ar}, \cite{Ar1}. According to Borel and Casselman \cite[\S 4]{BC}, the contribution of the continuous spectrum to the $(\mathfrak{g},K_\infty)$-cohomology is trivial. Therefore we may, and do, restrict our attention to the discrete spectrum henceforth. \subsection{Residual spectrum}\label{residual_spectrum} \noindent We use these notions in our special case summarized in \S \ref{summary_basic_setup}. First, consider the decomposition of $L^2_{\textrm{disc}}(\omega_\lambda^{-1})$ indexed by the central characters $\omega : \mathbb{Q}^\times \backslash \mathbb{A}^\times \to \mathbb{C}^\times$ of type $\omega_\lambda$: $$ L^2_{\textrm{disc}}(\omega_\lambda^{-1}) = \bigoplus_{\substack{\omega: \mathbb{Q}^\times \backslash \mathbb{A}^\times \to \mathbb{C}^\times \\ \omega_\infty = \omega_\lambda^{-1}}} L^2_{\textrm{disc}}(\omega). $$ We now analyse the structure of a summand $L^2_{\textrm{disc}}(\omega)$. Consider the set of tuples $(L,W)$, where $L = GL(N_1) \times \ldots \times GL(N_m)$ is a \textit{standard} Levi subgroup of a (standard) parabolic subgroup, and $W = W_1 \otimes \ldots \otimes W_m$ is an irreducible subspace of the space of cuspidal automorphic representations of $L(\mathbb{Q}) \backslash L(\mathbb{A})$. Two such tuples $(L,W)$ and $(L',W')$ are defined to be equivalent if there exists an $m$-tuple $\underline{s} = (s_1, \ldots, s_m)$ of complex numbers such that the representation defined by $(L',W')$ is conjugate (by an element of $L(\mathbb{A})$) to the one defined by $(L,W[\underline{s}])$, where $$ W[\underline{s}] = W_1[s_1] \otimes \ldots \otimes W_m[s_m], \; \; \textrm{ with } W_i[s_i]= \Set{ \phi \abs{\det}^{s_i} | \; \phi \in W_i}.
$$ \vspace{0.5cm} \noindent Let $\Xi$ be the set of equivalence classes of such tuples; the {\it cuspidal support} of $\Psi \in \Xi$ is the set of all tuples in the equivalence class (see \cite[Lemma 6]{Ar}). Let $\Xi^\circ$ be the subset of those equivalence classes $\Psi \in \Xi$ that contain an element $(L,W)$ satisfying $$ N_1 = \ldots = N_m \; \; \textrm{and} \; \; W_1 = \ldots = W_m. $$ The assumption that the central character of the representation $(L,W)$ is equal to $\omega$ implies that there is precisely one such element $(L,W)$ in the equivalence class $\Psi$ (see \cite[\S 1]{MW}). Due to Langlands \cite{La}, we have the following decomposition: \begin{equation}\label{spectral_decomposition} L^2_{\textrm{disc}}(\omega) = \bigoplus_{\Psi \in \Xi} L^2_{\textrm{disc}}(\omega)_\Psi, \end{equation} which has a refinement, due to M{\oe}glin--Waldspurger \cite{MW}, in the $GL_n$ case. \begin{thm}\label{mw_thm} (M{\oe}glin, Waldspurger) Let $\Psi \in \Xi$. Then \begin{enumerate} \item If $\Psi \not \in \Xi^\circ$, then $L^2(\omega)_\Psi \cap L^2_{\textrm{disc}}(\omega) = 0$. \item If $\Psi \in \Xi^\circ$, then $L^2(\omega)_\Psi \cap L^2_{\textrm{disc}}(\omega)$ is irreducible and isomorphic to the unique irreducible quotient of the induced representation $I(W,\underline{s})$, with $\underline{s} = ( \frac{m-1}{2}, \ldots, \frac{1-m}{2})$, obtained by normalized parabolic induction from $L$ to $G(\mathbb{A})$ of the representation $W_1[s_1]\otimes \ldots \otimes W_m[s_m]$. \end{enumerate} \end{thm} \begin{cor}\label{mw_cor} (M{\oe}glin, Waldspurger) Let $W$ be a representation of $T/\mathbb{Q}$, so that the representation $(T,W)$ is of the form $\mu_1 \otimes \ldots \otimes \mu_n$, where each $\mu_i : \mathbb{Q}^\times \backslash \mathbb{A}^\times \to \mathbb{C}^\times$ is a Hecke character over $\mathbb{Q}$. Let $\Psi \in \Xi$ be the equivalence class containing $(T,W)$.
Then, \begin{enumerate} \item only representations of the form $\mu \otimes \ldots \otimes \mu$ with $\mu^n =\omega$, in the equivalence class $\Psi$, contribute to the residual spectrum $L^2_{\textrm{res}}(\omega)$; \item $L^2_{\textrm{disc}}(\omega) \cap L^2(\omega)_{\Psi}$ is isomorphic to the space spanned by the representation $\pi = \otimes^\prime_p \pi_p$, where $\pi_p = \mu_p \circ \det$ for all places $p$. \end{enumerate} \end{cor} \noindent In summary, for $G = GL_n$ only the indexing set $\Xi^\circ$ is relevant in the direct sum \eqref{spectral_decomposition}: \begin{equation}\label{MW_spectral_decomposition} L^2_{\textrm{disc}}(\omega) = \bigoplus_{\Psi \in \Xi^\circ} L^2_{\textrm{disc}}(\omega)_\Psi \cong L^2_{\textrm{cusp}}(\omega) \bigoplus \underbrace{\Big( \bigoplus_{\substack{\mu: \mathbb{Q}^\times \backslash \mathbb{A}^\times \to \mathbb{C}^\times \\ \mu^n = \omega}} \mathop{\otimes^\prime}_{p \leq \infty} (\mu_p \circ \det) \Big)}_{L^2_{\textrm{res}}(\omega)}. \end{equation} Our eventual main goal is to understand the contribution of $L^2_{\textrm{res}}(\omega)$ to the inner cohomology with the aid of the Borel--Garland map $\Phi^\bullet_{BG}$; see \eqref{Borel_Garland_map} below. \subsection{Cuspidal cohomology}\label{filtration_cohomology} Let $V_{\textrm{cusp}}(\omega_\lambda^{-1}) := \mathcal{C}^\infty_{\textrm{cusp}}(G(\mathbb{Q}) \backslash G(\mathbb{A})/K_f, \omega_\lambda^{-1})$ be the subspace of smooth cusp forms in $V(\omega_\lambda^{-1})$; smooth in the archimedean component, and locally constant in the nonarchimedean components. The {\it cuspidal cohomology} is defined by $$ H^\bullet_{\textrm{cusp}}(S_{K_f}, \til{M_\lambda}) := H^\bullet(\mathfrak{g}, K_\infty, V_{\textrm{cusp}}(\omega_\lambda^{-1}) \otimes M_\lambda).
$$ Now consider the filtration of $V(\omega_\lambda^{-1})$: \begin{equation}\label{space_filtration} V_{\textrm{cusp}}(\omega_\lambda^{-1}) \subset V_{(2)}(\omega_\lambda^{-1}) \subset V (\omega_\lambda^{-1}). \end{equation} Let $\textrm{Coh}_{\infty}(G, \lambda)$ be the set of isomorphism classes of essentially-unitary (unitary up to a twist by a central character) irreducible representations $\CalV_{\pi_\infty}$ of $G(\mathbb{R})$ with nontrivial $(\mathfrak{g},K_\infty)$-cohomology with coefficients in $M_\lambda$. By a result of Harish-Chandra, this set is finite. For $\CalV_{\pi_\infty} \in \textrm{Coh}_\infty(G, \lambda)$, put $V_{\pi_\infty} = (\CalV_{\pi_\infty})^{(K_\infty)}$, the space of $K_\infty$-finite vectors, which is a $(\mathfrak{g},K_\infty)$-module, and consider the following spaces of homomorphisms (see \cite[\S 3.2.3]{HR}): \begin{equation*} \begin{split} W_{\pi_\infty} &:= \operatorname{Hom}_{G(\mathbb{R})}(V_{\pi_\infty}, V(\omega_\lambda^{-1})) \\ W_{\pi_\infty \otimes \pi_f}^{(2)} &:= \operatorname{Hom}_{G(\mathbb{R}) \times \mathcal{H}_f}(V_{\pi_\infty} \otimes V_{\pi_f}, V_{(2)}(\omega_\lambda^{-1})) \\ W_{\pi_\infty \otimes \pi_f}^{\textrm{cusp}} &:= \operatorname{Hom}_{(\mathfrak{g},K_\infty) \times \mathcal{H}_f}(V_{\pi_\infty} \otimes V_{\pi_f}, V_{\textrm{cusp}}(\omega_\lambda^{-1})) \end{split} \end{equation*} Informally, the dimensions of these spaces give the multiplicities (or \textit{weights}, whence the notation $W$) of the representations given by the respective domains in their respective target spaces. \\ \noindent Let $\textrm{Coh}_{(2)}(G, K_f, \lambda)$, resp. $\textrm{Coh}_{\textrm{cusp}}(G, K_f, \lambda)$, be the set of isomorphism classes of absolutely-irreducible $\mathcal{H}_f$-modules $\pi_f$ for which there exists a $\pi_\infty \in \textrm{Coh}_\infty(G,\lambda)$ such that $W_{\pi_\infty \otimes \pi_f}^{(2)} \neq 0$, resp. $W_{\pi_\infty \otimes \pi_f}^{\textrm{cusp}} \neq 0$.
From \eqref{space_filtration}, it follows that $$ \textrm{Coh}_{\textrm{cusp}}(G, K_f, \lambda) \subset \textrm{Coh}_{(2)}(G, K_f, \lambda) \subset \textrm{Coh}_\infty(G, \lambda). $$ Due to Jacquet--Shalika \cite{JS}, we have $\dim W_{\pi_\infty \otimes \pi_f}^{\textrm{cusp}} \leq 1$. On the other hand, note that Theorem \ref{mw_thm} of M{\oe}glin--Waldspurger implies $\dim W_{\pi_\infty \otimes \pi_f}^{(2)} \leq 1$.\\ \noindent Finally, we have the following result of Borel and Garland \cite{BG} that `approximates' the square-integrable cohomology $H^\bullet_{(2)}(S_{K_f}, \til{M_\lambda})$, namely that there is a \textit{surjective map} $\Phi_{BG}^\bullet$ of $\mathcal{H}_f$-modules (\cite[\S 3.2.3]{HR}): \begin{equation}\label{Borel_Garland_map} \begin{split} \mathop{\bigoplus}_{\pi_\infty \in \textrm{Coh}_\infty(G, \lambda)} \mathop{\bigoplus}_{\pi_f \in \textrm{Coh}_{(2)}(G, K_f, \lambda)} &W_{\pi_\infty \otimes \pi_f}^{(2)} \otimes H^\bullet(\mathfrak{g},K_\infty, V_{\pi_\infty} \otimes M_\lambda) \otimes V_{\pi_f} \xrightarrow{\Phi_{BG}^\bullet} \\ &H^\bullet_{(2)}(S_{K_f}, \til{M_\lambda}). \end{split} \end{equation} In particular the square-integrable cohomology, and hence its subspace the inner cohomology, are semisimple as $\mathcal{H}_f$-modules. On the other hand, due to Borel \cite{Bo1}, there is a canonical map of $\mathcal{H}_f$-modules, which is an isomorphism: \begin{equation}\label{Borel_cusp_map} \begin{split} \bigoplus_{\pi_\infty \in \textrm{Coh}_\infty(G, \lambda)} \bigoplus_{\pi_f \in \textrm{Coh}_{\textrm{cusp}}(G, K_f, \lambda)}&W_{\pi_\infty \otimes \pi_f}^{\textrm{cusp}} \otimes H^\bullet(\mathfrak{g},K_\infty, V_{\pi_\infty} \otimes M_\lambda) \otimes V_{\pi_f} \longrightarrow \\ &H^\bullet_{\textrm{cusp}}(S_{K_f}, \til{M_\lambda}). \end{split} \end{equation} \noindent Therefore, comparing \eqref{Borel_cusp_map} and \eqref{Borel_Garland_map}, we see that the cuspidal cohomology is contained in the inner cohomology \cite[\S 3.2.3]{HR}.
Finally, taking into account the filtration \eqref{basic_filtration}, we obtain the following refinement of the filtration of the full cohomology $H^\bullet$: \begin{equation}\label{filtration} H^\bullet_\textrm{cusp} \subset H^\bullet_{!} \subset H^\bullet_{(2)} \subset H^\bullet. \end{equation} The main object of study in this article is the quotient $\mathcal{H}_f$-module $$ H^\bullet_{!/\textrm{cusp}} := H^\bullet_{!}(S_{K_f}, \til{M_\lambda}) \Big/ H^\bullet_{\textrm{cusp}}(S_{K_f}, \til{M_\lambda}). $$ \section{Main result}\label{main_result} \noindent We establish the main result of this article in this section. As always, we work in the setup of \S \ref{summary_basic_setup}. In this section, we impose an additional hypothesis, \textit{which is crucial for the results of this article}, namely that \textit{$n \geq 2$ is a prime number}. To emphasize further: we suppose that the rank of $G = GL_n$, which is $n$, is a prime number.\\ \begin{prop} Assume the hypotheses of \S \ref{summary_basic_setup}. Suppose that $n$ is a prime number. Then \begin{equation}\label{residual_decomposition} L^2_{\textrm{res}}(\omega_\lambda^{-1}) = \Big( \omega_\lambda^{-1/n} \circ \det \Big) \bigotimes \Big( \bigoplus_{\pi: \textrm{type}(\pi_f) = \omega_\lambda^{1/n}} \pi_f \Big). \end{equation} \end{prop} \begin{proof} The proper standard $\mathbb{Q}$-parabolic subgroups of $G$ are in one-to-one correspondence with the nontrivial partitions of $n$: the partition $n = n_1 + \ldots + n_r$, with summands $n_i \geq 1$, corresponds to the parabolic subgroup with standard Levi subgroup $L = GL_{n_1} \times \ldots \times GL_{n_r}$. Since $n$ is a prime number, there is a unique nontrivial partition $n = n_1 + \ldots + n_r$ with $n_1 = \ldots = n_r$, namely the one with all $n_i = 1$ (for composite $n$, say $n = 4$, one would also have $2 + 2$; this is where primality is crucial): it corresponds to the standard Borel subgroup $B$.
Accordingly, the set $\Xi^\circ$ consists of equivalence classes with unique representatives $(B, W)$, where $W$ is a one-dimensional representation of the torus $T$ (see Theorem \ref{mw_thm}). Hence, given a central character $\omega$ of $G$ of type $\omega_\lambda^{-1}$, the direct summand $L^2_{\textrm{res}}(\omega)$ in $L^2_{\textrm{res}}(\omega_\lambda^{-1})$ is spanned by the one-dimensional automorphic representation $\pi(\omega) = \mu \circ \det$, where $\mu$ is a Hecke character, necessarily unitary, such that $\mu^n = \omega$; see \eqref{MW_spectral_decomposition}: \begin{equation*} L^2_{\textrm{res}}(\omega_\lambda^{-1}) = \bigoplus_{\substack{\omega :\mathbb{Q}^\times \backslash \mathbb{A}^\times \to \mathbb{C}^\times \\ \omega_\infty = \omega_\lambda^{-1}}} \pi(\omega) = \Big( \omega_\lambda^{-1/n} \circ \det \Big) \bigotimes \Big( \bigoplus_{\pi: \textrm{type}(\pi_f) = \omega_\lambda^{1/n}} \pi_f \Big). \end{equation*} \noindent The last equality follows from the fact that any algebraic Hecke character of a given type is uniquely determined by its finite component (see Definition \ref{algebraic_Hecke_character_defn}). \end{proof} The cuspidal cohomology is contained in the inner cohomology \eqref{filtration} and injects into the square-integrable cohomology \eqref{Borel_cusp_map}. On the other hand, since the cuspidal cohomology is the image of the cuspidal spectrum, which is disjoint from the residual spectrum, it follows that the non-cuspidal inner cohomology classes are obtained from the residual spectrum and are contained in the image of the map \eqref{Borel_Garland_map}.
With the aid of the residual decomposition \eqref{residual_decomposition}, we deduce that the inner cohomology classes in $H^\bullet_{(2)} \supset H^\bullet_{!} \supset H^\bullet_{\textrm{cusp}}$ {\it that are not cuspidal} must be contained in the image of $ H^\bullet(\mathfrak{g},K_\infty, L^2_{\textrm{res}}(\omega_\lambda^{-1}) \otimes M_\lambda)$ under the Borel--Garland map $\Phi^\bullet_{BG}$. \begin{remark} Note that the map \eqref{Borel_Garland_map} is surjective onto $H^\bullet_{(2)}$, and it is not necessarily the case that $H^\bullet_{!} = H^\bullet_{(2)}$. Also, it is not necessarily the case that the image of $\textrm{Res}_f(\lambda)$ generates the inner cohomology classes that are not cuspidal. All we know at the moment is that it generates the square-integrable cohomology with the aid of the Borel--Garland map $\Phi^{\bullet}_{BG}$. \\ \end{remark} \vspace{0.5cm} \noindent Before we prove the main result we establish some elementary results. Let $\pi$ be an automorphic representation of $G(\mathbb{A})$; its archimedean component $\pi_\infty$ can be identified with a $(\mathfrak{g},K_\infty)$-module on which the center $Z(\mathfrak{U}(\mathfrak{g}))$ of $\mathfrak{U}(\mathfrak{g})$ acts by scalars, and the resulting map $Z(\mathfrak{U}(\mathfrak{g})) \to \mathbb{C}$ is called the infinitesimal character of $\pi_\infty$. \begin{lem}\label{Wigner_lemma} The $(\mathfrak{g},K_\infty)$-cohomology $H^\bullet(\mathfrak{g},K_\infty, (\omega_\lambda^{-1/n} \circ \det ) \otimes M_\lambda)$ is nontrivial only if $M_\lambda \cong \omega_\lambda^{1/n} \circ \det$. In that case $$ H^\bullet(\mathfrak{g},K_\infty, (\omega_\lambda^{-1/n} \circ \det ) \otimes M_\lambda) \cong H^\bullet(\mathfrak{g},K_\infty,\mathbb{C}).
$$ \end{lem} \begin{proof} Wigner's lemma \cite[Chapter I, Corollary 4.2]{BW} gives a necessary condition for the nonvanishing of the factor $H^\bullet(\mathfrak{g},K_\infty, (\omega_\lambda^{-1/n} \circ \det ) \otimes M_\lambda)$, namely that the representations $(\omega_\lambda^{-1/n} \circ \det)$ and $M_{\lambda}^\vee$ have the same infinitesimal character, forcing the representation $\rho_\lambda : G(\mathbb{R}) \to GL(M_\lambda)$ to be $\omega_\lambda^{1/n} \circ \det$; in other words, the coefficients of the cohomology $H^\bullet(\mathfrak{g},K_\infty, (\omega_\lambda^{-1/n} \circ \det ) \otimes M_\lambda)$ must be the trivial module $\mathbb{C}$. \end{proof} \begin{lem}\label{Haefliger_lemma} Let $S^0 = \Set{ 2l -1 | 1 < l \leq n, \; l \trm{ odd }}$. Then $$ H^\bullet(\mathfrak{g},K_\infty, \mathbb{C}) \cong H^\bullet(SU(n)/SO(n),\mathbb{C}) \cong \bigwedge^{*}[ \Set{\xi_i}_{i \in S^0}], $$ the exterior algebra over $\mathbb{C}$ generated by symbols $\xi_i$ indexed by $S^0$. \end{lem} \begin{proof} The $(\operatorname{\mathfrak{gl}}_n,O(n))$-cohomology $H^{\bullet}(\operatorname{\mathfrak{gl}}_n, O(n), \mathbb{C})$ is isomorphic to the exterior algebra over $\mathbb{C}$ generated by one element in each degree $2k-1$, with $k$ odd and $k \leq n$ (see \cite[Page 28]{Hae}). 
In our case, where instead of $O(n)$ we took $K_\infty = O(n) Z(\mathbb{R})^\circ$, the exterior algebra $H^\bullet(\mathfrak{g},K_\infty, \mathbb{C})$ has no generators in degree $1$, because \begin{align*} H^\bullet(\mathfrak{g},O(n), \mathbb{C}) &= H^\bullet(U(n)/O(n),\mathbb{C}) = H^\bullet(U(1) \times SU(n)/SO(n),\mathbb{C}) \\ &= H^\bullet(U(1), \mathbb{C}) \otimes H^\bullet(SU(n)/SO(n), \mathbb{C}) \\ &= H^\bullet(U(1),\mathbb{C}) \otimes H^\bullet(\operatorname{\mathfrak{gl}}_n, K_\infty, \mathbb{C}) \end{align*} and the first factor $H^\bullet(U(1),\mathbb{C})$, the cohomology of the circle, is trivial in all degrees except $0$ and $1$, where it is isomorphic to $\mathbb{C}$; it accounts for $H^1(\operatorname{\mathfrak{gl}}_n, O(n), \mathbb{C})$ and must be excluded to obtain the desired summand $H^\bullet(\operatorname{\mathfrak{gl}}_n, K_\infty, \mathbb{C})$. \end{proof} \vspace{0.5cm} \noindent Before we proceed further let us recall results of Li and Schwermer \cite[Propositions 5.2 and 5.8]{LS} which are adapted to our needs after changing the notation found therein. First let us recall that $\dim G(\mathbb{R}) = n^2$, $\dim O(n) = n(n-1)/2$, $\operatorname{rank} G(\mathbb{R}) = n$, and $\operatorname{rank} O(n) = \lfloor n/2 \rfloor$. Consider a variant $X_{\operatorname{Sym}}^\prime = G(\mathbb{R})/O(n)$ of the symmetric space $X_{\operatorname{Sym}}$. Then $\dim X_{\operatorname{Sym}}^\prime = {n+1 \choose 2}$. Let \begin{equation}\label{cusp_interval_endpoints} a(n) := \frac{1}{2}\Bigg( {n+1 \choose 2} -\Big(\frac{n+1}{2}\Big) \Bigg) \; \; , b(n) := \frac{1}{2} \Bigg( {n+1 \choose 2} + \Big(\frac{n+1}{2} \Big) \Bigg). \end{equation} \noindent Consider the following \textit{integer} intervals, where $\dim X_{\operatorname{Sym}} = {n+1 \choose 2} - 1$: $$ I := [0, \dim X_{\operatorname{Sym}}], \; \; I_! 
:= (0, a(n)), \; \; I_{\textrm{cusp}} := [a(n),b(n)], \; \; I_{\textrm{irr}} := (b(n), \dim X_{\operatorname{Sym}}) $$ where we use the usual interval notation, understood to denote sets of \textit{integers}; for instance, $I_{!}$ is the set of all integers $k$ that are strictly greater than $0$ and strictly less than $a(n)$. Note that $I$ is the disjoint union: $$ I = \Set{0, \dim X_{\operatorname{Sym}}} \cup I_{!} \cup I_{\textrm{cusp}} \cup I_{\textrm{irr}}. $$ \begin{remark} The notation hints at the final theorem of our paper: the interval $I_{!}$ hints that the inner cohomology classes that are not cuspidal occur only in this interval, while $I_{\textrm{irr}}$ hints that the interval in question is `irrelevant'. \end{remark} \begin{thm}\label{Li_Schwermer} (Li and Schwermer) \cite[Propositions 5.2(ii) and 5.8]{LS} \begin{enumerate} \item\label{cuspidal_LS} $H^k_{\textrm{cusp}}(S_{K_f}, \til{M_\lambda})=0$ for $k \in I \setminus I_{\textrm{cusp}}$. \item\label{restriction_LS} The restriction $r^k : H^k(S_{K_f}, \til{M_\lambda}) \to H^k(\partial S_{K_f}, \til{M_\lambda})$ is an isomorphism for $k > b(n)$; in particular for all $k \in I_{\textrm{irr}}$. \end{enumerate} \end{thm} \noindent Consider the following table containing the intervals where cuspidal cohomology may be nontrivial, for primes $n =2,3,5,7,11$, where $a(n),b(n)$ are defined as in \eqref{cusp_interval_endpoints}, and $S^0$ is the subset of the interval $I= [0, \dim X_{\operatorname{Sym}}]$ defined by $$ S^0:= \Set{ 2l -1 | 1 < l \leq n, \; l \trm{ odd } } $$ given by the conclusion of Lemma \ref{Haefliger_lemma}. 
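For later use, note that the endpoints \eqref{cusp_interval_endpoints} admit simple closed forms; the following routine simplification serves as a worked check of the values tabulated below.

```latex
% Simplification of \eqref{cusp_interval_endpoints},
% using {n+1 \choose 2} = n(n+1)/2:
\begin{equation*}
a(n) = \frac{1}{2}\left( \frac{n(n+1)}{2} - \frac{n+1}{2} \right)
     = \frac{n^2-1}{4},
\qquad
b(n) = \frac{1}{2}\left( \frac{n(n+1)}{2} + \frac{n+1}{2} \right)
     = \frac{(n+1)^2}{4}.
\end{equation*}
```

For instance, $(a(5),b(5)) = (6,9)$, $(a(7),b(7)) = (12,16)$, and $(a(11),b(11)) = (30,36)$.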
\begin{table}\label{cusp_intervals} \caption{Inner cohomology degrees} \begin{tabular}{| c | c | c | c |} \hline $n$ & $\dim X_{\operatorname{Sym}} = {n+1 \choose 2} -1$ & $I_{\textrm{cusp}} = [a(n),b(n)]$ & $S^0$ \\ \hline $2$ & $2$ & $[3/4,9/4] = \Set{1,2}$ & $\emptyset$\\ \hline $3$ & $5$ & $[2,4]$ & $\Set{5}$ \\ \hline $5$ & $14$ & $[6,9]$ & $\Set{5,9}$ \\ \hline $7$ & $27$ & $[12,16]$ & $\Set{5,9,13}$ \\ \hline $11$ & $65$ & $[30,36]$ & $\Set{5,9,13,17,21}$\\ \hline \end{tabular} \end{table} \begin{lem}\label{deg_0_n} The following isomorphisms hold: \begin{align*} H^{\dim X_{\operatorname{Sym}}}_{!/\textrm{cusp}}(S_{K_f}, \til{M_\lambda}) &= H^{\dim X_{\operatorname{Sym}}}_{!}(S_{K_f}, \til{M_\lambda}) \\ &\cong H^{\dim X_{\operatorname{Sym}}}_{(2)}(S_{K_f}, \til{M_\lambda}) = H^{\dim X_{\operatorname{Sym}}} (S_{K_f}, \til{M_\lambda}) \\ & \cong H^0_{!}(S_{K_f}, \til{M_\lambda}) \cong H^0_{c}(S_{K_f}, \til{M_\lambda}) \\ & \cong H^{0}_{!/\textrm{cusp}}(S_{K_f}, \til{M_\lambda}). \end{align*} \end{lem} \begin{proof} The boundary cohomology $H^{\dim X_{\operatorname{Sym}}}_{\partial}$ in degree $\dim X_{\operatorname{Sym}}$ is trivial: since $\dim \partial \bar{S}_{K_f} = \dim X_{\operatorname{Sym}} -1$, the de Rham cohomology is trivial in all degrees strictly greater than $\dim \partial \bar{S}_{K_f}$, and the claim follows from the de Rham isomorphism \eqref{de_Rham_isomorphism}. Therefore the restriction map $r^{\dim X_{\operatorname{Sym}}}$, and hence the composition $r^{\dim X_{\operatorname{Sym}}} \circ \Phi^{\dim X_{\operatorname{Sym}}}_{BG}$, is the zero map. From the definition of the inner cohomology we then have \begin{equation}\label{temp_equation} H^{\dim X_{\operatorname{Sym}}}_{!}= H^{\dim X_{\operatorname{Sym}}}_{(2)}= H^{\dim X_{\operatorname{Sym}}}. 
\end{equation} On the other hand, by Poincar\'{e} duality \eqref{Poincare_duality}, namely the existence of a nondegenerate pairing $ H^0_c \times H^{\dim X_{\operatorname{Sym}}} \to \mathbb{C}, $ it follows that $H^0_c \cong H^{\dim X_{\operatorname{Sym}}}$. First consider the case where the prime $n \geq 3$. From Table \ref{cusp_intervals}, we see that $0, \dim X_{\operatorname{Sym}} \not \in I_{\textrm{cusp}}$. By Theorem \ref{Li_Schwermer}\eqref{cuspidal_LS}, $H^0_{\textrm{cusp}} =H^{\dim X_{\operatorname{Sym}}}_{\textrm{cusp}} = 0$. On the other hand, the map $i^0$ is injective, therefore $H^0_{!/\textrm{cusp}} = H^0_{!} \cong H^0_c$ (see Remark \ref{degree_0_compact_cohomology}). Now consider the case where the prime $n = 2$. Again, from Table \ref{cusp_intervals}, we see that $2 \in I_{\textrm{cusp}}$ but $0 \not \in I_{\textrm{cusp}}$. Consider the Borel-Serre long exact sequence at degree $2$, \begin{center} \begin{tikzcd} & & H^2(\mathfrak{g},K_\infty, \mathbb{C}) \otimes \textrm{Res}_{f}(\lambda) \arrow{d}{\Phi_{BG}^2} \arrow[dashed]{dr}{j_2} & & & \\ \ldots \arrow{r} & H^2_c \arrow{r}{i^2} & H^2 \arrow{r}{r^2} & H^2_{\partial} \arrow{r}{i^{3}} & H^{3}_c \arrow{r} &\ldots \end{tikzcd} \end{center} By Lemma \ref{Haefliger_lemma}, $H^2(\mathfrak{g},K_\infty, \mathbb{C}) = 0$, therefore $H^2_{!} = 0$, and we now run through the remaining argument as in the $n \geq 3$ case already considered above. \end{proof} \begin{prop}\label{n_all} \begin{enumerate} \item \label{prime_2_3} For primes $n = 2$ and $n= 3$, $ H^\bullet_{!/\textrm{cusp}}(S_{K_f}, \til{M_\lambda}) = 0. $ \item \label{prime_5_7} For the prime $n = 5$, resp. $n = 7$, $H^k_{!/\textrm{cusp}}(S_{K_f}, \til{M_\lambda}) =0 $ for all integers $k \in I_{\textrm{cusp}} \cup I_{\textrm{irr}} \cup \Set{0}$ except perhaps for $k = 9$, resp. $k = 13$. 
\item \label{prime_11} For all primes $n \geq 11$, $H^k_{!/\textrm{cusp}}(S_{K_f}, \til{M_\lambda}) =0 $ for all integers $k \in I_{\textrm{cusp}} \cup I_{\textrm{irr}} \cup \Set{0}$. \end{enumerate} \end{prop} \begin{proof} By Lemma \ref{Wigner_lemma} we may (and do) restrict our attention to the constant coefficient system. By Lemma \ref{Haefliger_lemma}, $H^k_{!/\textrm{cusp}} = 0$ for all $k \not \in S^0$. First we prove assertion \ref{prime_11}. From Table \ref{cusp_intervals} it is clear that $I_{\textrm{cusp}} \cap S^0 = \emptyset$ for the primes $n \geq 11$, and in fact these are precisely the primes for which this happens. Indeed, the maximal element of $S^0$ is $2n-1$, corresponding to $l = n$, and the minimal element of $I_{\textrm{cusp}}$ is $a(n)$ (see \eqref{cusp_interval_endpoints}). Since $a(n) = (n^2-1)/4$, the inequality $2n-1 < a(n)$ is equivalent to $n^2 - 8n + 3 > 0$, which holds precisely for the integers $n \geq 8$, and hence for the primes $n \geq 11$. Now, it follows from Theorem \ref{Li_Schwermer}\eqref{cuspidal_LS} and the definition of the inner cohomology that $H^k_{!/\textrm{cusp}} = 0$ for all $k \in I_{\textrm{cusp}} \cup I_{\textrm{irr}}$. As for the vanishing of $H^0_{!/\textrm{cusp}}$, note that by Theorem \ref{Li_Schwermer}\eqref{restriction_LS} the restriction map $r^k: H^k(\bar{S}_{K_f}, \mathbb{C}) \to H^k(\partial \bar{S}_{K_f}, \mathbb{C})$ is an isomorphism for all $k > b(n)$. On the other hand, since $\dim \partial \bar{S}_{K_f} = \dim \bar{S}_{K_f} - 1 = \dim X_{\operatorname{Sym}} - 1$, it follows that $H^{\dim X_{\operatorname{Sym}}}_\partial$, \textit{hence} $H^{\dim X_{\operatorname{Sym}}}$, is trivial. But then by Lemma \ref{deg_0_n}, $H^{0}_{!/\textrm{cusp}}$ is also trivial. Now consider assertion \ref{prime_2_3}. For the prime $n =2$, the set $S^0$ is empty, and so by Lemma \ref{Haefliger_lemma} the inner cohomology vanishes in degrees $k = 1, 2$, and hence by Lemma \ref{deg_0_n} in degree $k = 0$ too. 
The argument for assertion \ref{prime_2_3} for the prime $n=3$, and for assertion \ref{prime_5_7}, is the same as that of the proof of assertion \ref{prime_11} above, provided the exceptions mentioned in assertion \ref{prime_5_7} are excluded from the argument. \end{proof} \begin{remark}\label{Notation_Haefliger} \begin{enumerate} \item Proposition \ref{n_all} implies that we may restrict our attention to the interval $I_{!}$ to find inner cohomology classes that are not cuspidal, justifying the notation $!$ in $I_{!}$. \item The subset $S^0$ may be viewed as the subset of those integers in $I$ for which the inner cohomology \textit{may} be nontrivial, so that for integers in the complement in $I$ of this set $S^0$ the inner cohomology is trivial. This also explains the choice of notation $S^0$ with $0$ in the superscript position. \end{enumerate} \end{remark} \noindent Now we prove the main result of the article. Let \begin{equation*}\label{analysis_2_residual_part_of_Borel_Garland_map} \textrm{Res}_{f}(\lambda) := \bigoplus_{ \substack{\pi_f \in \textrm{Coh}_{(2)}(G, K_f, \lambda) \\ \textrm{type}(\pi_f) = \omega_\lambda^{1/n}} } \pi_f. \end{equation*} The Borel-Serre fundamental long exact sequence \eqref{Borel_Serre_fundamental_exact_sequence} at degree $d$, displayed below together with the Borel-Garland map, is a useful illustration of the following theorem. \begin{center} \textit{Approximation by the Borel-Garland map}\\[6pt] \begin{tikzcd} & & H^d(\mathfrak{g},K_\infty, \mathbb{C}) \otimes \textrm{Res}_{f}(\lambda) \arrow{d}{\Phi_{BG}^d} \arrow[dashed]{dr}{j_d} & & & \\ \ldots \arrow{r} & H^d_c \arrow{r}{i^d} & H^d \arrow{r}{r^d} & H^d_{\partial} \arrow{r}{i^{d+1}} & H^{d+1}_c \arrow{r} &\ldots \end{tikzcd} \end{center} \begin{thm}\label{main_thm} Assume that $n$ is a prime number. The quotient module $H^\bullet_{!/\textrm{cusp}}(S_{K_f}, \til{M_\lambda})$ vanishes if $\til{M_\lambda}$ is not isomorphic to the constant sheaf $\mathbb{C}$. So, suppose otherwise, i.e. 
$\til{M_\lambda} \cong \mathbb{C}$, and let $S^0 = \Set{ 2l-1 | 1 < l \leq n, \; l \trm{ odd }}$, then \begin{enumerate} \item for primes $n =2,3$, the module $H^\bullet_{!/\textrm{cusp}}(S_{K_f}, \mathbb{C}) = 0$, and \item for all primes $n \geq 5$, \begin{equation}\label{possible_cases} H^k_{!/\textrm{cusp}}( S_{K_f}, \mathbb{C}) \cong \begin{cases} 0 & \trm{ for } k \not \in S^0, \\ \ker\big( r^k|_{\Phi^k_{BG}(\textrm{Res}_f(\lambda))}\big) & \trm{ for } k \in S^0. \end{cases} \end{equation} \end{enumerate} \end{thm} \begin{proof} The first assertion and the first part of the second assertion follow from Proposition \ref{n_all}. The second part of the second assertion is a consequence of Proposition \ref{residual_decomposition} and the definition of the inner cohomology. \end{proof} \begin{remark} As a final remark, we note that some of the conclusions of Theorem \ref{main_thm} \textit{could} be made more precise at the cost of simplicity. For instance, in the case $n = 5$, it follows from the facts that $I_{\textrm{cusp}} = [ 6, 9]$ and $S^0 = \Set{5,9}$ that $H^5_{!/\textrm{cusp}} = H^5_{!}$, because $H^5_{\textrm{cusp}} = 0$, and similarly for the other cases too. \end{remark} \section*{References} \bibliographystyle{plainurl} \section{Introduction} \file{elsarticle.cls} is a thoroughly re-written document class for formatting \LaTeX{} submissions to Elsevier journals. The class uses the environments and commands defined in the \LaTeX{} kernel without any change in the signature so that clashes with other contributed \LaTeX{} packages such as \file{hyperref.sty}, \file{preview-latex.sty}, etc., will be minimal. \file{elsarticle.cls} is primarily built upon the default \file{article.cls}. 
This class depends on the following packages for its proper functioning: \begin{enumerate} \item \file{pifont.sty} for openstar in the title footnotes; \item \file{natbib.sty} for citation processing; \item \file{geometry.sty} for margin settings; \item \file{fleqn.clo} for left aligned equations; \item \file{graphicx.sty} for graphics inclusion; \item \file{txfonts.sty} optional font package, if the document is to be formatted with Times and compatible math fonts; \item \file{hyperref.sty} optional packages if hyperlinking is required in the document. \end{enumerate} All the above packages are part of any standard \LaTeX{} installation. Therefore, the users need not be bothered about downloading any extra packages. Furthermore, users are free to make use of \textsc{ams} math packages such as \file{amsmath.sty}, \file{amsthm.sty}, \file{amssymb.sty}, \file{amsfonts.sty}, etc., if they want to. All these packages work in tandem with \file{elsarticle.cls} without any problems. \section{Major Differences} Following are the major differences between \file{elsarticle.cls} and its predecessor package, \file{elsart.cls}: \begin{enumerate}[\textbullet] \item \file{elsarticle.cls} is built upon \file{article.cls} while \file{elsart.cls} is not. 
\file{elsart.cls} redefines many of the commands in the \LaTeX{} classes/kernel, which can possibly cause surprising clashes with other contributed \LaTeX{} packages; \item provides preprint document formatting by default, and optionally formats the document as per the final style of models $1+$, $3+$ and $5+$ of Elsevier journals; \item some easier ways for formatting \verb+list+ and \verb+theorem+ environments are provided while people can still use the \file{amsthm.sty} package; \item \file{natbib.sty} is the main citation processing package which can comprehensively handle all kinds of citations and works perfectly with \file{hyperref.sty} in combination with \file{hypernat.sty}; \item long title pages are processed correctly in preprint and final formats. \end{enumerate} \section{Installation} The package is available at the author resources page at Elsevier (\url{http://www.elsevier.com/locate/latex}). It can also be found on any of the nodes of the Comprehensive \TeX{} Archive Network (\textsc{ctan}), one of the primary nodes being \url{http://www.ctan.org/tex-archive/macros/latex/contrib/elsevier/}. Please download \file{elsarticle.dtx}, which is a composite file with documentation, and \file{elsarticle.ins}, which is the \LaTeX{} installer file. Compiling \file{elsarticle.ins} with \LaTeX{} produces the class file, \file{elsarticle.cls}, by stripping off all the documentation from the \verb+*.dtx+ file. The class may be moved or copied to a place, usually \verb+$TEXMF/tex/latex/elsevier/+, or a folder which will be read by \LaTeX{} during document compilation. The \TeX{} file database needs to be updated after moving/copying the class file. Usually, we use commands like \verb+mktexlsr+ or \verb+texhash+ depending upon the distribution and operating system. 
\section{Usage}\label{sec:usage} The class should be loaded with the command: \begin{vquote} \documentclass[<options>]{elsarticle} \end{vquote} \noindent where the \verb+options+ can be the following: \begin{description} \item [{\tt\color{verbcolor} preprint}] default option which formats the document for submission to Elsevier journals. \item [{\tt\color{verbcolor} review}] similar to the \verb+preprint+ option, but increases the baselineskip to facilitate an easier review process. \item [{\tt\color{verbcolor} 1p}] formats the article to the look and feel of the final format of model 1+ journals. This is always single column style. \item [{\tt\color{verbcolor} 3p}] formats the article to the look and feel of the final format of model 3+ journals. If the journal is a two column model, use the \verb+twocolumn+ option in combination. \item [{\tt\color{verbcolor} 5p}] formats for model 5+ journals. This is always of two column style. \item [{\tt\color{verbcolor} authoryear}] author-year citation style of \file{natbib.sty}. If you want to add extra options of \file{natbib.sty}, you may use the options as comma delimited strings as arguments to the \verb+\biboptions+ command. An example would be: \end{description} \begin{vquote} \biboptions{longnamesfirst,angle,semicolon} \end{vquote} \begin{description} \item [{\tt\color{verbcolor} number}] numbered citation style. Extra options can be loaded with the\linebreak \verb+\biboptions+ command. \item [{\tt\color{verbcolor} sort\&compress}] sorts and compresses the numbered citations. For example, citation [1,2,3] will become [1--3]. \item [{\tt\color{verbcolor} longtitle}] if front matter is unusually long, use this option to split the title page across pages with the correct placement of title and author footnotes on the first page. \item [{\tt\color{verbcolor} times}] loads \file{txfonts.sty}, if available in the system, to use Times and compatible math fonts. \item[] All options of \file{article.cls} can be used with this document class. 
\item[] The default options loaded are \verb+a4paper+, \verb+10pt+, \verb+oneside+, \verb+onecolumn+ and \verb+preprint+. \end{description} \section{Frontmatter} There are two types of frontmatter coding: \begin{enumerate}[(1)] \item each author is connected to an affiliation with a footnote marker; hence all authors are grouped together and affiliations follow; \item authors of same affiliations are grouped together and the relevant affiliation follows this group. An example coding of the first type is provided below. \end{enumerate} \begin{vquote} \title{This is a specimen title\tnoteref{t1,t2}} \tnotetext[t1]{This document is a collaborative effort.} \tnotetext[t2]{The second title footnote which is a longer longer than the first one and with an intention to fill in up more than one line while formatting.} \end{vquote} \begin{vquote} \author[rvt]{C.V.~Radhakrishnan\corref{cor1}\fnref{fn1}} \ead{cvr@river-valley.com} \author[rvt,focal]{K.~Bazargan\fnref{fn2}} \ead{kaveh@river-valley.com} \author[els]{S.~Pepping\corref{cor2}\fnref{fn1,fn3}} \ead[url]{http://www.elsevier.com} \end{vquote} \begin{vquote} \cortext[cor1]{Corresponding author} \cortext[cor2]{Principal corresponding author} \fntext[fn1]{This is the specimen author footnote.} \fntext[fn2]{Another author footnote, but a little more longer.} \fntext[fn3]{Yet another author footnote. Indeed, you can have any number of author footnotes.} \address[rvt]{River Valley Technologies, SJP Building, Cotton Hills, Trivandrum, Kerala, India 695014} \address[focal]{River Valley Technologies, 9, Browns Court, Kennford, Exeter, United Kingdom} \address[els]{Central Application Management, Elsevier, Radarweg 29, 1043 NX\\ Amsterdam, Netherlands} \end{vquote} The output of the above TeX source is given in Clips~\ref{clip1} and \ref{clip2}. The header portion or title area is given in Clip~\ref{clip1} and the footer area is given in Clip~\ref{clip2}. 
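For orientation, the pieces above can be assembled into a minimal compilable skeleton; the class options, names, and labels here are placeholders for illustration only (the \verb+frontmatter+ environment is the standard elsarticle wrapper for the title area).

```latex
\documentclass[3p,twocolumn]{elsarticle}

\begin{document}

\begin{frontmatter}

\title{This is a specimen title}

\author[rvt]{C.V.~Radhakrishnan}
\ead{cvr@river-valley.com}
\address[rvt]{River Valley Technologies, Trivandrum, Kerala, India 695014}

\begin{abstract}
A short specimen abstract.
\end{abstract}

\begin{keyword}
specimen keyword \sep another keyword
\end{keyword}

\end{frontmatter}

The body of the article goes here.

\end{document}
```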
\vspace*{6pt} \deforange{blue!70} \src{Header of the title page.} \includeclip{1}{132 571 481 690}{els1.pdf} \deforange{orange} \deforange{blue!70} \src{Footer of the title page.} \includeclip{1}{122 129 481 237}{els1.pdf} \deforange{orange} \pagebreak Most of the commands such as \verb+\title+, \verb+\author+, \verb+\address+ are self explanatory. Various components are linked to each other by a label--reference mechanism; for instance, title footnote is linked to the title with a footnote mark generated by referring to the \verb+\label+ string of the \verb=\tnotetext=. We have used similar commands such as \verb=\tnoteref= (to link title note to title); \verb=\corref= (to link corresponding author text to corresponding author); \verb=\fnref= (to link footnote text to the relevant author names). \TeX{} needs two compilations to resolve the footnote marks in the preamble part. Given below are the syntax of various note marks and note texts. \begin{vquote} \tnoteref{<label(s)>} \corref{<label(s)>} \fnref{<label(s)>} \tnotetext[<label>]{<title note text>} \cortext[<label>]{<corresponding author note text>} \fntext[<label>]{<author footnote text>} \end{vquote} \noindent where \verb=<label(s)>= can be either one or more comma delimited label strings. The optional arguments to the \verb=\author= command holds the ref label(s) of the address(es) to which the author is affiliated while each \verb=\address= command can have an optional argument of a label. In the same manner, \verb=\tnotetext=, \verb=\fntext=, \verb=\cortext= will have optional arguments as their respective labels and note text as their mandatory argument. The following example code provides the markup of the second type of author-affiliation. 
\begin{vquote} \author{C.V.~Radhakrishnan\corref{cor1}\fnref{fn1}} \ead{cvr@river-valley.com} \address{River Valley Technologies, SJP Building, Cotton Hills, Trivandrum, Kerala, India 695014} \end{vquote} \begin{vquote} \author{K.~Bazargan\fnref{fn2}} \ead{kaveh@river-valley.com} \address{River Valley Technologies, 9, Browns Court, Kennford, Exeter, UK.} \end{vquote} \begin{vquote} \author{S.~Pepping\fnref{fn1,fn3}} \ead[url]{http://www.elsevier.com} \address{Central Application Management, Elsevier, Radarweg 43, 1043 NX Amsterdam, Netherlands} \end{vquote} \begin{vquote} \cortext[cor1]{Corresponding author} \fntext[fn1]{This is the first author footnote.} \fntext[fn2]{Another author footnote, this is a very long footnote and it should be a really long footnote. But this footnote is not yet sufficiently long enough to make two lines of footnote text.} \fntext[fn3]{Yet another author footnote.} \end{vquote} The output of the above TeX source is given in Clip~\ref{clip3}. \vspace*{12pt} \deforange{blue!70} \src{Header of the title page.} \includeclip{1}{132 491 481 690}{els2.pdf} \deforange{orange} The frontmatter part has further environments such as abstracts and keywords. These can be marked up in the following manner: \begin{vquote} \begin{abstract} In this work we demonstrate the formation of a new type of polariton on the interface between a .... \end{abstract} \end{vquote} \begin{vquote} \begin{keyword} quadruple exiton \sep polariton \sep WGM \PACS 71.35.-y \sep 71.35.Lk \sep 71.36.+c \end{keyword} \end{vquote} \noindent Each keyword shall be separated by a \verb+\sep+ command. \textsc{pacs} and \textsc{msc} classifications shall be provided in the keyword environment with the commands \verb+\PACS+ and \verb+\MSC+ respectively. \verb+\MSC+ accepts an optional argument to accommodate future revisions, e.g., \verb=\MSC[2008]=. 
The default is 2000.\looseness=-1 \section{Floats} {Figures} may be included using the \verb+\includegraphics+ command, with or without its several options to further control the graphic. \verb+\includegraphics+ is provided by \file{graphic[s,x].sty} which is part of any standard \LaTeX{} distribution. \file{graphicx.sty} is loaded by default. \LaTeX{} accepts figures in the postscript format while pdf\LaTeX{} accepts \file{*.pdf}, \file{*.mps} (metapost), \file{*.jpg} and \file{*.png} formats. pdf\LaTeX{} does not accept graphic files in the postscript format. The \verb+table+ environment is handy for marking up tabular material. If users want to use \file{multirow.sty}, \file{array.sty}, etc., to fine control/enhance the tables, they are welcome to load any package of their choice and \file{elsarticle.cls} will work in combination with all loaded packages. \section[Theorem and ...]{Theorem and theorem-like environments} \file{elsarticle.cls} provides a few shortcuts to format theorems and theorem-like environments with ease. In all these commands, the options that can be used with the standard \verb+\newtheorem+ command work in exactly the same manner. \file{elsarticle.cls} provides three commands to format theorem or theorem-like environments: \begin{vquote} \newtheorem{thm}{Theorem} \newtheorem{lem}[thm]{Lemma} \newdefinition{rmk}{Remark} \newproof{pf}{Proof} \newproof{pot}{Proof of Theorem \ref{thm2}} \end{vquote} The \verb+\newtheorem+ command formats a theorem in \LaTeX's default style with italicized font, bold font for theorem heading and theorem number at the right hand side of the theorem heading. It also optionally accepts an argument which will be printed as an extra heading in parentheses. \begin{vquote} \begin{thm} For system (8), consensus can be achieved with $\|T_{\omega z}$ ... \begin{eqnarray}\label{10} .... 
\end{eqnarray} \end{thm} \end{vquote} Clip~\ref{clip4} shows how some text enclosed in the above code looks: \vspace*{6pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newtheorem}} \includeclip{2}{1 1 453 120}{jfigs.pdf} \deforange{orange} The \verb+\newdefinition+ command is the same in all respects as its\linebreak \verb+\newtheorem+ counterpart except that the font shape is roman instead of italic. Both \verb+\newdefinition+ and \verb+\newtheorem+ commands automatically define counters for the environments defined. \vspace*{12pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newdefinition}} \includeclip{1}{1 1 453 105}{jfigs.pdf} \deforange{orange} The \verb+\newproof+ command defines proof environments with upright font shape. No counters are defined. \vspace*{6pt} \deforange{blue!70} \src{{\ttfamily\color{verbcolor}\expandafter\@gobble\string\\ newproof}} \includeclip{3}{1 1 453 65}{jfigs.pdf} \deforange{orange} Users can also make use of \verb+amsthm.sty+ which will override all the default definitions described above. \section[Enumerated ...]{Enumerated and Itemized Lists} \file{elsarticle.cls} provides extended list processing macros which make the usage a bit more user-friendly than the default \LaTeX{} list macros. With an optional argument to the \verb+\begin{enumerate}+ command, you can change the list counter type and its attributes. \begin{vquote} \begin{enumerate}[1.] \item The enumerate environment starts with an optional argument `1.', so that the item counter will be suffixed by a period. \item You can use `a)' for alphabetical counter and `(i)' for roman counter. \begin{enumerate}[a)] \item Another level of list with alphabetical counter. \item One more item before we start another. \begin{enumerate}[(i)] \item This item has roman numeral counter. \item Another one before we close the third level. \end{enumerate} \item Third item in second level. 
\end{enumerate} \item All list items conclude with this step. \end{enumerate} \end{vquote} \vspace*{12pt} \deforange{blue!70} \src{List -- Enumerate} \includeclip{4}{1 1 453 185}{jfigs.pdf} \deforange{orange} Further, the enhanced list environment allows one to prefix a string like `step' to all the item numbers. Take a look at the example below: \begin{vquote} \begin{enumerate}[Step 1.] \item This is the first step of the example list. \item Obviously this is the second step. \item The final step to wind up this example. \end{enumerate} \end{vquote} \deforange{blue!70} \src{List -- enhanced} \includeclip{5}{1 1 313 83}{jfigs.pdf} \deforange{orange} \vspace*{-18pt} \section{Cross-references} In electronic publications, articles may be internally hyperlinked. Hyperlinks are generated from proper cross-references in the article. For example, the words \textcolor{black!80}{Fig.~1} will never be more than simple text, whereas the proper cross-reference \verb+\ref{tiger}+ may be turned into a hyperlink to the figure itself: \textcolor{blue}{Fig.~1}. In the same way, the words \textcolor{blue}{Ref.~[1]} will fail to turn into a hyperlink; the proper cross-reference is \verb+\cite{Knuth96}+. Cross-referencing is possible in \LaTeX{} for sections, subsections, formulae, figures, tables, and literature references. \section[Mathematical ...]{Mathematical symbols and formulae} Many physical/mathematical sciences authors require more mathematical symbols than the few that are provided in standard \LaTeX. A useful package for additional symbols is the \file{amssymb} package, developed by the American Mathematical Society. This package includes such oft-used symbols as $\lesssim$ (\verb+\lesssim+), $\gtrsim$ (\verb+\gtrsim+) or $\hbar$ (\verb+\hbar+). Note that your \TeX{} system should have the \file{msam} and \file{msbm} fonts installed. If you need only a few symbols, such as $\Box$ (\verb+\Box+), you might try the package \file{latexsym}. 
Another point which would require authors' attention is the breaking up of long equations. When you use \file{elsarticle.cls} for formatting your submissions in the \verb+preprint+ mode, the document is formatted in single column style with a text width of 384pt or 5.3in. When this document is formatted for final print and if the journal happens to be a double column journal, the text width will be reduced to about 224pt for $3+$ double column and $5+$ journals. All the nifty fine-tuning in equation breaking done by the author goes to waste in such cases. Therefore, authors are requested to check this problem by typesetting their submissions in final format as well, just to see if their equations are broken at appropriate places, by changing the relevant options in the document class loading command, which is explained in section~\ref{sec:usage}, \nameref{sec:usage}. This allows authors to fix any equation breaking problem before submission for publication. \file{elsarticle.cls} supports formatting the author submission in different types of final format. This is further discussed in section \ref{sec:final}, \nameref{sec:final}. \section{Bibliography} Three bibliographic style files (\verb+*.bst+) are provided --- \file{elsarticle-num.bst}, \file{elsarticle-num-names.bst} and \file{elsarticle-harv.bst} --- the first one for the numbered scheme, the second for the numbered scheme with new options of \file{natbib.sty} and the last one for the author-year scheme. In \LaTeX{} literature, references are listed in the \verb+thebibliography+ environment. Each reference is a \verb+\bibitem+ and each \verb+\bibitem+ is identified by a label, by which it can be cited in the text: \verb+\bibitem[Elson et al.(1996)]{ESG96}+ is cited as \verb+\citet{ESG96}+. \noindent In connection with cross-referencing and possible future hyperlinking it is not a good idea to collect more than one literature item in one \verb+\bibitem+. 
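As an illustration of the label mechanism just described, a \verb+thebibliography+ environment with a single entry might look as follows (the bibliographic data is a placeholder built from the citation shown above):

```latex
\begin{thebibliography}{9}
% One literature item per \bibitem; the optional argument supplies
% the author-year text used by \citet and \citep.
\bibitem[Elson et al.(1996)]{ESG96}
  J.~Elson et al., Specimen title of the cited work,
  Specimen Journal 1 (1996) 1--10.
\end{thebibliography}
```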
The so-called Harvard or author-year style of referencing is enabled by the \LaTeX{} package \file{natbib}. With this package the literature can be cited as follows: \begin{enumerate}[\textbullet] \item Parenthetical: \verb+\citep{WB96}+ produces (Wettig \& Brown, 1996). \item Textual: \verb+\citet{ESG96}+ produces Elson et al. (1996). \item An affix and part of a reference: \verb+\citep[e.g.][Ch. 2]{Gea97}+ produces (e.g. Governato et al., 1997, Ch. 2). \end{enumerate} In the numbered scheme of citation, \verb+\cite{<label>}+ is used, since \verb+\citep+ or \verb+\citet+ has no relevance in the numbered scheme. The \file{natbib} package is loaded by \file{elsarticle} with \verb+numbers+ as the default option. You can change this to the author-year (Harvard) scheme by adding the option \verb+authoryear+ to the class loading command. If you want to use more options of the \file{natbib} package, you can do so with the \verb+\biboptions+ command, which is described in section~\ref{sec:usage}, \nameref{sec:usage}. For details of the various options of the \file{natbib} package, please take a look at the \file{natbib} documentation, which is part of any standard \LaTeX{} installation. \subsection*{Displayed equations and double column journals} Many Elsevier journals print their text in two columns. Since the preprint layout uses a larger line width than such columns, the formulae are too wide for the line width in print. Here is an example of an equation (see equation 6) which is perfect in a single column preprint format: \bigskip \setlength\Sep{6pt} \src{See equation (6)} \deforange{blue!70} \includeclip{4}{134 391 483 584}{els1.pdf} \deforange{orange} \noindent When this document is typeset for publication in a model 3+ journal with double columns, the equation will overlap the second column text matter if the equation is not broken at the appropriate location.
\vspace*{6pt} \deforange{blue!70} \src{See equation (6) overprints into second column} \includeclip{3}{61 531 532 734}{els-3pd.pdf} \deforange{orange} \pagebreak \noindent The typesetter will try to break the equation, but the break point need not be to the author's liking and may, as it happens, even be semantically incorrect. Therefore, authors may check their submissions for the incidence of such long equations and break the equations at the correct places so that the final typeset copy will be as they wish. \section{Final print}\label{sec:final} The authors can format their submission to the page size and margins of their preferred journal. \file{elsarticle} provides four class options for this purpose. This does not mean, however, that these options can emulate the exact page layout of the final print copy. \lmrgn=3em \begin{description} \item [\texttt{1p}:] $1+$ journals with a text area of 384pt $\times$ 562pt or 13.5cm $\times$ 19.75cm or 5.3in $\times$ 7.78in, single column style only. \item [\texttt{3p}:] $3+$ journals with a text area of 468pt $\times$ 622pt or 16.45cm $\times$ 21.9cm or 6.5in $\times$ 8.6in, single column style. \item [\texttt{twocolumn}:] should be used along with 3p option if the journal is $3+$ with the same text area as above, but double column style. \item [\texttt{5p}:] $5+$ with text area of 522pt $\times$ 682pt or 18.35cm $\times$ 24cm or 7.22in $\times$ 9.45in, double column style only. \end{description} The following pages show clippings of different parts of the title page of different journal models typeset in the final format. Models $1+$ and $3+$ will have the same look and feel in the typeset copy when presented in this document. That is also the case with the double column $3+$ and $5+$ journal article pages. The only difference will be the wider text width of the higher models.
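The four options above are passed in the class loading command; a minimal sketch showing the variants (only one should be active at a time):

```latex
%% 1+ journals, single column only:
\documentclass[1p]{elsarticle}

%% 3+ journals, single column:
%\documentclass[3p]{elsarticle}

%% 3+ journals, double column:
%\documentclass[3p,twocolumn]{elsarticle}

%% 5+ journals, double column only:
%\documentclass[5p]{elsarticle}
```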
Therefore we will look at the different portions of a typical single column journal page and that of a double column article in the final format. \vspace*{2pc} \begin{center} \hypertarget{bsc}{} \hyperlink{sc}{ {\bf [Specimen single column article -- Click here]} } \vspace*{2pc} \hypertarget{bsc}{} \hyperlink{dc}{ {\bf [Specimen double column article -- Click here]} } \end{center} \newpage \vspace*{-2pc} \src{}\hypertarget{sc}{} \deforange{blue!70} \hyperlink{bsc}{\includeclip{1}{121 81 497 670}{els1.pdf}} \deforange{orange} \newpage \src{}\hypertarget{dc}{} \deforange{blue!70} \hyperlink{bsc}{\includeclip{1}{55 93 535 738}{els-3pd.pdf}} \deforange{orange} \end{document}
\section{Introduction} \label{sec:intro} Cepheus~A\ is a region of massive star formation within our Galaxy. Its radio continuum image consists of about 16 compact thermal cores, many of which are associated with embedded heating sources in the form of newly formed O and B stars. These sources were first identified by \citet{HughesWouterloot84} and are numbered with the prefix HW. The distance to the complex has been determined to be $700\pm40$~pc both by VLBI parallax measurement of the continuum emission from HW9 \citep{Dzib11} and of methanol masers associated with HW2 \citep{Moscadelli09}. HW2 is the dominant energy source in the complex. Its continuum emission arises from an elongated structure (see Fig.~\ref{fig:f1}), which has been identified as a thermal jet with an outflow velocity of 480~km~s$^{-1}$\ \citep{Curiel06}. Another structure, perpendicular to the jet, is a disk of dust and molecular gas \citep{Patel05}. A system of water masers, whose components are spread over an area about $0.5''$ in extent, is associated with this disk \citep{Torrelles98}. Another important source is HW3, which lies about $3''$ south of HW2. The radio continuum emission shows four distinct cores, all probably associated with newly formed B stars \citep{Hughes95}. Most of the water masers associated with HW3d define a highly collimated outflow centered on HW3dii \citep{Chibueze12}. Of particular interest to this study is the source HW3diii, which lies about $0.5''$ east of HW3dii. The morphology of the HW2 and HW3 regions is shown in Fig.~\ref{fig:f1}. For a general discussion of the physics of cosmic masers, see \citet{Gray12}. \begin{figure}[hb!] \epsscale{0.6} \plotone{JMfig1.pdf} \caption{The central part of the star-forming region Cepheus~A. The contours show the extent of the continuum components taken from the 1.3~cm VLA image [adapted from \citet{Torrelles98}].
The nomenclature is based on the original identification of about 16 continuum radio sources marking the sites of newly formed massive stars by \citet{HughesWouterloot84}. The dots mark the positions of masers (labeled by their velocities) whose positions were found by analysis of the relative fringe rates derived from these observations. The coordinate origin is the center of HW2/R4: ${\rm RA}=22^{\rm h}56^{\rm m}17.977^{\rm s}$, ${\rm dec}=61^{\rm d}45'49.37''$ (2000). The relative alignment of the masers and continuum is accurate to about $\pm0.2''$ (see text). At a distance of 700~pc, $1''$ corresponds to $1.05\times10^{16}$~cm.} \label{fig:f1} \end{figure} \begin{figure}[ht!] \epsscale{0.5} \plotone{JMfig2.pdf} \caption{$(u,v)$ plane coverage of the 40-minute observation of Cepheus~A\ on 18~Nov 2012 in millions of wavelengths.} \label{fig:f2} \end{figure} We present in this paper our measurements of the maser emission from Cepheus~A\ made with an unprecedented resolution (at the time of observations) of 66~microarcseconds ($\mu$as) on a baseline of 3.3 Earth diameters (ED). These are among the earliest results from a VLBI experiment that incorporates the RadioAstron satellite radio telescope (SRT). More recently, observations with baselines up to 10~ED on other galactic masers and up to 26.7~ED on extragalactic masers have been presented in conference proceedings \citep{Sobolev18, Shakhvorostova18, Baan18}. The only other reported detection of an H$_2$O maser with a space VLBI experiment was of the very bright maser in the Orion-KL region, but with a projected baseline shorter than an Earth diameter \citep{Kobayashi00}. The properties of the SRT, which was launched in 2011, are described by \citet{Kardashev13} and \citet{RadioAstron18}. The SRT operates at frequencies of 22, 5, 1.6, and 0.3~GHz. The receiving element is a 10-m parabolic dish, whose aperture efficiency is about 10\% at 22~GHz. The local oscillator phase is controlled by an onboard hydrogen maser.
There are four baseband channels: two subbands of 16 MHz in each sense of circular polarization. These signal streams were digitally sampled with one-bit quantization, transmitted to Earth and recorded for later processing at the VLBI center in Moscow. \section{Observations} \label{sec:observ} The observations were made in a single 40-minute period from 12:00~UT to 12:40~UT on 18~Nov 2012. The data were blocked into four segments of 600-second duration each. The actual observation time on each segment was 570~seconds. The VLBI array consisted of the SRT and ground-based telescopes at Yebes, Spain (Ys); Noto, Italy (Nt); and Zelenchukskaya, Russian Federation (Zc). The diameters of these telescopes are 40, 32, and 32~m, respectively. Over the 40-minute observation, the $(u,v)$ coordinates of the SRT--Ys baseline changed from (1.36,~2.60) to (1.63,~2.89) in units of Giga-wavelengths. The corresponding fringe spacings changed from 70 to 62~$\mu$as\ (corresponding to 0.049 and 0.043~AU, or 7.3 and $6.4\times10^{11}$~cm, respectively). The mean position angle of the space--Earth baseline was $28^\circ$. The $(u,v)$ coverage for the full 40 minutes is shown in Fig.~\ref{fig:f2}. The data were correlated using the Astro Space Center (ASC) software correlator \citep{Likhachev17}, but only two spectral subsets of the data, which contained all known spectral components, were retained (one 8-MHz subband in each polarization). The post-correlation data reduction, including fringe fitting, was carried out with the PIMA calibration package \citep{Petrov11}. Most subsequent analysis was carried out with new ad~hoc software suitable for space VLBI data. The processing configuration provided 1024 channels, resulting in a channel spacing of 7.81~kHz, corresponding to 0.105~km~s$^{-1}$. The final processing was completed after the determination of the best orbital parameters for the SRT, which were accurate to 500~m in position and 0.02~m~s$^{-1}$ in velocity \citep{Stepanyants17}.
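The channel spacing and fringe spacing quoted above follow directly from the correlator setup and the $(u,v)$ coordinates; a sketch of the arithmetic (the 22.235~GHz H$_2$O rest frequency is an assumption, as it is not stated explicitly in the text):

```python
import math

C_KMS = 2.99792458e5   # speed of light, km/s
F0_HZ = 22.235e9       # H2O maser rest frequency, Hz (assumed)

# An 8-MHz subband split into 1024 channels:
chan_hz = 8e6 / 1024                  # -> 7812.5 Hz = 7.81 kHz
chan_kms = C_KMS * chan_hz / F0_HZ    # -> ~0.105 km/s per channel

# Fringe spacing 1/|B| for the SRT--Ys baseline at the start of the run,
# with (u, v) = (1.36, 2.60) gigawavelengths:
b_wavelengths = math.hypot(1.36e9, 2.60e9)
rad_to_uas = math.degrees(1.0) * 3600e6   # radians -> microarcseconds
fringe_uas = rad_to_uas / b_wavelengths   # -> ~70 microarcseconds
```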
\begin{figure}[ht!] \epsscale{0.7} \plotone{JMfig3.pdf} \caption{The total power spectrum (average of the RCP and LCP spectra) from the first 600-second segment of observations at the Yebes telescope. No off-source reference spectrum was available, so a polynomial baseline was fit to the signal-free parts of the spectrum and removed. The velocity is with respect to the local standard of rest (LSR). $V({\rm LSR})=0$~km~s$^{-1}$\ corresponds to $V({\rm heliocentric})=-7.5$~km~s$^{-1}$. On the 3.3 ED baselines between the SRT and ground stations, fringes were detected only on the --16.9 and 0.6~km~s$^{-1}$\ features.} \label{fig:f3} \end{figure} \begin{deluxetable}{ccccl}[ht!] \tablecaption{Positions of H$_2$O masers in Cepheus~A\tablenotemark{a} \label{tab:t1}} \tablecolumns{4} \tablenum{1} \tablewidth{0pt} \tablehead{ \colhead{}& \colhead{} & \colhead{} & \colhead{}&\colhead{~Continuum} \\ \colhead{Velocity (lsr)} & \colhead{$\Delta$ RA} & \colhead{$\Delta$ dec} & \colhead{Flux density} & \colhead{association} \\ \colhead{(km~s$^{-1}$)} & \colhead{($''$)} & \colhead{($''$)} & \colhead{(Jy)} &\colhead{~} \\ } \startdata \phn\phn0.6 & 2.29 & --3.33 & 580 & \phn\phn HW3diii \\ \phn--9.7 & 1.64 & --3.10 & 800 & \phn\phn HW3dii \\ --14.8 & 0.05 & \phn0.07 & \phn55 & \phn\phn HW2 \\ --16.2 & 0.01 & \phn0.01 & 130 & \phn\phn HW2 \\ --16.9 & 0\phd\phn\phn & \phn0\phd\phn\phn & 152 & \phn\phn HW2 \\ \enddata \tablenotetext{a}{Relative position accuracy is $\pm0.2''$.} \end{deluxetable} \section{Results} \label{sec:results} The total power spectrum obtained from the Yebes data is shown in Fig.~\ref{fig:f3}. Strong fringes were detected on all three ground baselines but only on the space baseline SRT--Ys. Weaker detections were achieved on the other space baselines but were not used in this analysis. The sensitivity of the cross power spectra was limited by the coherence time of the interferometer, which was about 100~seconds. 
We measured the fringe rates on the three ground baselines of the spectral features at 0.6, --9.7, --14.8, and --16.2~km~s$^{-1}$, all with respect to the feature at --16.9~km~s$^{-1}$. We used the task FRMAP in the Astronomical Image Processing System (AIPS) described by \citet{Walker81} and \citet{Thompson17} to find the relative feature positions from their relative fringe rates. Each relative fringe rate localized the relative position of the feature to a line in RA--dec space. Although the hour angle spread provided by the 40-minute observation was small, the three ground-based baselines provided a good spread in position angle such that accurate relative coordinates were obtained with an uncertainty of $\pm0.02''$ in each coordinate. The positions are listed in Table~\ref{tab:t1}. However, it is difficult to align the masers with the continuum. We have placed the --16.9~km~s$^{-1}$\ feature near the center of the outflow in HW2. The absolute positions of 39 masers associated with HW2 in 1995 were reported by \citet{Torrelles98}. None of these velocity components can be reliably associated with our detections. However, most of the strong components identified in 1995 were within $\pm0.3''$ of the center of~HW2. In particular, components near our velocity commonly appear in maser complex R4 \citep{Torrelles11}. We adopt $\pm0.2''$ as our alignment accuracy. \begin{figure}[ht!] \epsscale{0.5} \plotone{JMfig4.pdf} \caption{Spectrum of the feature near 0.6~km~s$^{-1}$\ observed at the Pushchino Observatory from 3~Aug 2012 to 21~Aug 2013. Epochs of observation are shown by the dotted horizontal lines. Note the drift in the central velocity.} \label{fig:f4} \end{figure} \begin{figure}[ht!] \epsscale{0.85} \plotone{JMfig5.pdf} \caption{Closeup of the 0.6~km~s$^{-1}$\ feature. The dots are the total power spectrum obtained from the Yebes telescope data (see full spectrum in Fig.~\ref{fig:f3}). The smooth line is a Gaussian profile fitted to the data.
The segmented straight line is the difference between the RCP and LCP total power spectra after removing a gain factor. The scale of the difference spectrum has been multiplied by a factor of 20. The absence of any significant signal indicates that the magnetic field is less than 120~mG.} \label{fig:f5} \end{figure} \begin{figure}[ht!] \epsscale{0.8} \plotone{JMfig6_new.pdf} \caption{The cross power spectrum (dots) from the SRT--Ys baseline data for the first 600-second block in LCP. The visibility phase and amplitude data are shown in the top and bottom plots, respectively. A complex two-component Gaussian model was fit to these data. This model is shown by the solid line in the top plot (phase), with the velocity range marked by the vertical lines within which the signal-to-noise ratio is adequate, and by curve (c) in the bottom plot (amplitude). Curve (b) is the scalar sum of the two spectral components, and curve (a) is the total flux density reduced in scale by a factor of four for comparison.} \label{fig:f6} \end{figure} Fringes on the SRT--ground baselines were detected only on the features at --16.9 and 0.6~km~s$^{-1}$\ (the detection threshold is about 2~Jy). The one at --16.9~km~s$^{-1}$, which is associated with HW2, had a fringe visibility amplitude of only about 0.02. We focused our analysis on the strong isolated feature at 0.6~km~s$^{-1}$. Routine monitoring of the spectrum at the Pushchino Observatory indicates that most features persist for about a year. In particular, the feature at 0.6~km~s$^{-1}$\ appeared between 30~Aug 2012 and 20~Sept 2012 and disappeared between 1~March 2013 and 18~July 2013 (see Fig.~\ref{fig:f4}). Except for this time range, no features near this velocity were detected during the monitoring observations from 10~Oct 2010 to 14~Aug 2014. The rms noise level was typically 5~Jy. Thus, we can assign the feature a lifetime of $8\pm3$~months. The total power spectrum from our observations at Yebes is shown in Fig.~\ref{fig:f5}.
A single Gaussian profile fit to the Stokes I spectral data (RCP~+~LCP)/2 gives the parameters: amplitude = $580\pm3$~Jy, velocity = $0.58\pm0.01$~km~s$^{-1}$, and width (full width at half-maximum, FWHM) = $0.672\pm0.005$~km~s$^{-1}$. To search for circular polarization, we calculated the Stokes V profile via the formula $V=S({\rm RCP})-a\times S({\rm LCP})$. The parameter $a$ accounts for the small unknown gain difference between the two polarizations and was chosen to minimize the mean square deviation between $S$(RCP) and $S$(LCP). A longitudinal component of the magnetic field in the maser medium will shift the profiles slightly in frequency. In this case, the $V$ profile has a distinctive shape proportional to the derivative of the total intensity profile. This is an anti-symmetric ``S''-shaped curve. The magnitude of the curve, $V_{\rm max}$, is related to the longitudinal component of the magnetic field by the equation \citep{FiebigGusten89} \begin{equation} V_{\rm max}/I_{\rm max}=13.4\times10^{-6}B/\Delta v~~, \end{equation} \begin{deluxetable}{cccC}[ht!] \tablecaption{Visibility components of the 0.6~km~s$^{-1}$\ feature \label{tab:t2}} \tablecolumns{4} \tablenum{2} \tablewidth{0pt} \tablehead{ \colhead{Velocity} & \colhead{$S$} & \colhead{$\Delta v$} & \colhead{${T_B}^*$} \\ \colhead{(km~s$^{-1}$)} & \colhead{(Jy)} & \colhead{(km~s$^{-1}$)} & \colhead{(K)} } \startdata 0.895 & 43 & 0.47 & 1.5\times10^{14} \\ 0.355 & 77 & 0.47 & 3\phd\phn\times10^{14} \\ \enddata \tablenotetext{*}{Lower limit.} \end{deluxetable} \begin{figure}[ht!] \epsscale{0.8} \plotone{JMfig7.pdf} \caption{The relative phase between the 0.895 and 0.355~km~s$^{-1}$\ subcomponents as a function of time during the 40-minute observation. The data have been coherently averaged to 4~minutes. The relative phase and relative phase drift over the observation of $45^\circ$ can be used to constrain the separation of the components.
The orbit specification for RadioAstron of 0.02~m~s$^{-1}$ would allow a maximum of $\pm2^\circ$ of the observed phase shift to be caused by the change in the baseline error. AGN observations near the time of these observations suggest that the actual error is about four times smaller.} \label{fig:f7} \end{figure} \noindent where $B$ is the line-of-sight magnetic field strength in mGauss (mG) and $\Delta v$ is the line width in~km~s$^{-1}$. We assumed the Zeeman parameters for the strongest hyperfine component of the 22~GHz transition with $\Delta v=0.672$~km~s$^{-1}$. There is no hint of a Zeeman signature at the level of 1.4~Jy ($V_{\rm max}/I_{\rm max}<2.4\times10^{-3}$). Hence, the line-of-sight component of the magnetic field strength is less than about 120~mG. For comparison, \citet{Vlemmings06} measured the magnetic fields in about 30 features in Cepheus~A, mostly in the HW2 region, and found them typically to be in the range of 100--600~mG. The vector-averaged cross power spectrum of the 0.6~km~s$^{-1}$\ feature is shown in Fig.~\ref{fig:f6}. The spectrum shows two components with a sharp change in phase between them. This is a clear indication of a double source structure. We fit a double Gaussian profile to the complex cross power spectrum. The parameters of this fit are listed in Table~\ref{tab:t2}. We were not able to obtain a stable three-component Gaussian fit to the total power spectrum. However, we believe the total power associated with the two components cannot be significantly greater than the cross power amplitudes or they would be clearly visible in the total power spectrum (see fitted profile in Fig.~\ref{fig:f5}). Hence, we assign both of them visibility amplitudes of greater than 0.8, and hence sizes of less than 15~$\mu$as, which leads to the lower limits of the brightness temperature in Table~\ref{tab:t2}.
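The 120~mG bound quoted above is obtained by inverting equation~(1) for $B$; a sketch of that step, with the numbers from the text:

```python
# Invert V_max/I_max = 13.4e-6 * B / dv for B (mG), assuming the Zeeman
# coefficient given in the text for the strongest 22 GHz hyperfine component.
ZEEMAN_COEFF = 13.4e-6

v_over_i_limit = 2.4e-3   # upper limit on |V_max/I_max| (1.4 Jy / 580 Jy)
dv = 0.672                # FWHM line width, km/s

b_limit_mg = v_over_i_limit * dv / ZEEMAN_COEFF   # -> ~120 mG
```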
Note that the normalized fringe visibility can be accurately determined because the total power spectrum can be measured with both the SRT and Ys telescope. In this case, the fringe visibility is simply the cross power spectrum divided by the geometric mean of the total power spectra in raw correlator units. Individual values of system equivalent flux densities (SEFD) from a~priori measurements are not needed. The fraction of flux (Jy~km~s$^{-1}$) in the cross power spectrum is $0.13\pm0.02$ of the 0.6~km~s$^{-1}$\ complex. This fraction is the ratio of the integrals of curve~b and curve~a in Fig.~\ref{fig:f6}. To further investigate the structure of the 0.6~km~s$^{-1}$\ component, we examined the cross power spectra on the three ground-only baselines. A careful calibration of the cross power spectra with the associated autocorrelation spectra on a minute-by-minute basis shows that the normalized fringe visibilities are 0.83, 0.61, and 0.53 for Ys--Nt, Nt--Zc, and Ys--Zc baselines of length 115, 138, and 228~M$\lambda$, respectively (see Fig.~\ref{fig:f2}). As mentioned above, the statistical uncertainty in these estimates is small because the system temperatures and telescope collecting areas drop out of the calculation, but the visibilities could be underestimated because of local oscillator coherence loss factors. These visibilities can be modeled approximately by a circular Gaussian disk of diameter (FWHM) of $400\pm15$~$\mu$as\ and flux density of 580~Jy. We refer to this structure as a halo. Note that we could not determine the registration between the halo and the compact double structure. The visibilities vs.\ baseline length and a cartoon of the maser components are shown in Fig.~\ref{fig:f9}. The phase difference between the two components of the 0.6~km~s$^{-1}$\ feature is about $125^\circ$ at the midpoint of the observations or about 0.35 of the fringe spacing, or 24~$\mu$as. 
If the features were aligned along the direction of maximum resolution at a PA of $28^\circ$ (see Fig.~\ref{fig:f2}), then they would be spaced by 24~$\mu$as. This is the minimum possible spacing. The actual separation and position angle can be estimated by the change in the relative phase of the features over the observations, which is $43^\circ$ (see Fig.~\ref{fig:f7}). The maximum contribution to this relative phase due to a change in instrumental delay caused by a baseline error is $\pm2^\circ$ \citep{Stepanyants17}. We are thus able to calculate the phase difference at the beginning of the observation to be $102\pm10^\circ$ and the phase difference at the end of the observation to be $145\pm10^\circ$. The position offset and its PA can be determined by the two $(u,v)$ plane measurements, as shown in Fig.~\ref{fig:f8}. The baseline rotates by only about $3^\circ$, but this is sufficient to determine the offset to be $160\pm35$~$\mu$as\ at a PA of $113\pm5^\circ$. This corresponds to a projected velocity gradient of 4~km~s$^{-1}$~AU$^{-1}$. \begin{figure}[ht!] \epsscale{0.8} \plotone{JMfig8.pdf} \caption{The offset between the 0.90 and 0.36~km~s$^{-1}$\ subcomponents determined from relative phase measurements on the SRT--Ys baseline at 12:00 (red line) and 12:40 UT (black line). Each measurement constrains the relative position to a line in position space.} \label{fig:f8} \end{figure} The question arises as to whether the size estimates of the components could be affected by interstellar scattering. The angular broadening of images due to the turbulent interstellar medium can be estimated from the NE2001 model of \citet{CordesLazio03}. For the Galactic longitude of $109.8^\circ$ and latitude of $2.1^\circ$, the integrated effect over 700~pc at 22.2~GHz is 7~$\mu$as. Hence, scattering could only have a small effect on our measurements. \begin{figure}[ht!]
\epsscale{1.2} \plotone{JMfig9_new.pdf} \caption{The fringe visibility amplitude for the 0.6~km~s$^{-1}$\ feature on the three ground baselines vs.\ baseline length. The solid line is a model of a circular Gaussian halo of 400~$\mu$as\ angular diameter (FWHM) containing 96\% of the integrated flux density, plus an unresolved component to account for unresolved flux at large projected baselines. Inset: A cartoon of the maser emission from the 0.6~km~s$^{-1}$\ feature. The small components are modeled on the SRT--Ys baseline data, which show two subcomponents separated by 160~$\mu$as\ at a PA of $113^\circ$. This PA corresponds to the axis of the flow from HW3dii. About 13\% of the integrated flux density is in the subcomponents. The subcomponents are shown centered on the halo, but this relative alignment is unknown.} \label{fig:f9} \end{figure} \section{Discussion} \label{sec:discuss} Three properties of the 0.6~km~s$^{-1}$\ feature clearly distinguish it from the other Cepheus~A\ features detected in our observations: (1) the doublet structure of the feature, revealing itself only on the space--ground baselines, (2) the unusual value of the radial velocity (i.e., a value not prominently represented in the cluster of masers near HW2 and HW3dii), and (3) its strong variability with nonlinear drift in velocity with time (see Fig.~\ref{fig:f4}). As discussed in the first subsection below, the existence of the doublet structure can have a spectroscopic explanation, but evidence also exists that the real explanation is astrophysical in nature, as discussed in the subsequent subsections. The most likely explanation of the structure is that it results from turbulence on a variety of scales up to 400~$\mu$as. The two peaks we detected may be simply the emission peaks on the principal scale that the SRT--ground baselines are sensitive to, i.e., tens to hundreds of~$\mu$as.
In order to understand the physical nature of the 0.6~km~s$^{-1}$\ feature, we need to determine the type of astrophysical object associated with it. Results presented in Table~\ref{tab:t1} and Fig.~\ref{fig:f1} show that emission in this feature comes from the area around the compact HII region HW3diii. Maps by \citet{Chibueze12} show that the widespread maser features corresponding to the outflow in this area have proper motion velocities around 10~km~s$^{-1}$. The features with the other velocities, including more redshifted ones, are located in the turbulent central maser cluster. The presence of a circumstellar disk or envelope around a young stellar object (YSO) can explain the relatively wide range of velocities observed. Results obtained by \citet{Chibueze12} provide strong support for the presence of a massive YSO in the region. The very close location of the most redshifted maser feature to that of the most blueshifted one makes the disk hypothesis more likely. Turbulent motions in the form of evolving 3D vortices (eddies) are characteristic of the environment of massive YSOs. The turbulence is introduced at the largest scales determined by the boundary conditions and dissipates at the smallest scales determined by viscosity. The turbulence is generated in the circumstellar disks around massive YSOs, where it plays a decisive role in the mixing of material, momentum transfer, and other processes important for the disk structure and evolution. Excellent theoretical examples of turbulent vortex formation in accretion disks can be found in, e.g., \citet{Meheut10} and \citet{KurBisKay14}. Unfortunately, manifestations of a turbulent vortex in maser emission from accretion disks are much less studied observationally. The main problem is the difficulty of associating maser sources with their locations in the disks.
This has been addressed only in a few cases [e.g., \citet{Gallimore03} for the R4 maser arc near Cepheus~A\ HW2, \citet{Sanna17} for the Cepheus~A\ HW2 disk, and \citet{Sanna15} for G023.01-00.41]. In contrast, in the outflows from massive YSOs, the largest and intermediate scales of turbulence are well traced by water maser observations [see the papers on W49N by \citet{Walker84} and \citet{Gwinn94a} and more recent papers on the nearby sources Cepheus~A\ and W75N \citep{Uscanga10} and W3IRS5 \citep{Imai02}]. In their consideration of the maser data on the turbulence in the flows from the massive YSOs, \citet{Strelnitski02} proposed that ``the maser hot spots originate at the sites of ultimate dissipation of highly supersonic turbulence.'' This assertion finds support in a well-ordered spatio-kinematical pattern in the small-scale water maser features reported in Cepheus~A\ HW2 by \citet{Uscanga03}, in W75N by \citet{Uscanga05}, and in W49N by \citet{Gwinn94b}. Observations of \citet{Uscanga05} suggested microstructure with a size of about 1~AU. This structure had a short lifetime, supposedly on the order of a month. The available information on the known examples of structures with an ``eddy-like'' spatio-kinematical pattern has the form of snapshots and contains no evolutionary information, although the other maser structures in Cepheus~A\ HW2 show persistence on the time scales of years \citep{Torrelles01}. \citet{Uscanga05} speculated that these short-lived kinds of spatio-kinematical microstructures are ``either produced by fluid instabilities within the shocked material or correspond to nearly round cloudlets (turbulent eddies?) in the ambient medium.'' In the sections below, we discuss a spectroscopic origin for our observations as well as three dynamical phenomena that may explain them.
\subsection{Spectroscopic Origin: Hyperfine Splitting} \label{sec:hyperfine} The velocity separation of 0.54~km~s$^{-1}$\ between the components in Fig.~\ref{fig:f6} is close to the velocity separations of the H$_2$O hyperfine splittings of 0.45~km~s$^{-1}$\ between the $F=7$--6 and $F=6$--5 transitions and 0.58~km~s$^{-1}$\ between $F=6$--5 and $F=5$--4. However, there are several problems with this spectroscopic hypothesis. First, the components of the double-peaked spectrum in Fig.~\ref{fig:f6} are not spatially coincident, so they would have to be associated with different hyperfine components. Second, the $F=7$--6 hyperfine transition has the lowest frequency (and $F=5$--4 the highest) of the three strong transitions \citep{Kukolich69}, while the strength order is $F=7$--6, $F=6$--5, then $F=5$--4 \citep{DeguchiWatson86}. We would therefore expect the strongest peak at the lowest frequency (most positive Doppler shift) in our spectrum, but the opposite is seen in Fig.~\ref{fig:f6}. Moreover, all three of the hyperfine transitions introduced above have comparable line strengths [see Fig.~1 of \citet{DeguchiWatson86}], so in the case of hyperfine intensity anomalies a triplet spectrum would be expected rather than a doublet. We think this explanation is unlikely because it would require some complicated combination of hyperfine-specific pumping and/or competitive gain effects to generate the observed spectrum. \subsection{Keplerian Rotation} \label{sec:keplerian} For the first dynamical interpretation, the maser hot spots might (see Fig.~\ref{fig:f9}) be amplifying along chords (i.e., filaments) in the plane of a Keplerian disc, orbiting a protostellar or protoplanetary object, viewed approximately edge-on. In this case, the filaments responsible for the emission are displaced radially by 80~$\mu$as\ ($8.4\times10^9$~m) from the center.
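Under this geometry the disc parameters follow from $M=rv^2/G$; a sketch of the arithmetic, taking the rotational velocity to be half the observed 0.54~km~s$^{-1}$\ splitting:

```python
import math

G = 6.674e-11        # gravitational constant, SI
M_EARTH = 5.972e24   # Earth mass, kg

r = 8.4e9            # radial displacement of the filaments, m (80 uas at 700 pc)
v = 0.54e3 / 2.0     # rotational velocity, m/s (half the component separation)

mass = r * v**2 / G                           # -> ~9.1e24 kg, ~1.5 Earth masses
period_days = 2 * math.pi * r / v / 86400.0   # -> ~2300 days
```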
A rotational velocity equal to 0.27~km~s$^{-1}$, half the velocity separation of the components, gives a central mass, $M=rv^2/G$, of $9.1\times10^{24}$~kg, or approximately 1.5 Earth masses. The orbital period would be 2300 days, which is much longer than our monitoring period. A very large maser depth (negative optical depth) is possible in a disc if the number density is close to the maximum for strong collisional pumping of the 22~GHz transition: $n=2\times10^{10}$~cm$^{-3}$ at $T_K=750$~K in largely dust-free gas \citep{Gray16}. Under these conditions, a 1\% inversion with an ortho-H$_2$O abundance of $3\times10^{-5}$ yields a gain coefficient of $1.05\times10^{-8}$~m$^{-1}$, and therefore a maser depth well above the level needed to achieve saturation. Under this hypothesis, the splitting of the 0.6~km~s$^{-1}$\ feature can be explained by rotation of the planetary object around the massive YSO in the region. \subsection{A Pair of Approximately Spherical Clouds} \label{sec:2clouds} The second dynamical interpretation is that the 0.6~km~s$^{-1}$\ maser emission results from the partial overlap, along the line of sight, of a pair of approximately spherical clouds. This alignment could be random, although it is much more likely that the objects are related. The clouds may have a very large relative velocity, provided that the dominant component lies in the plane of the sky. The relative velocity along the line of sight needs to be comparable to the Doppler-broadened line width, which is the same in both clouds. If this is the case, radiation at some frequencies will be amplified along the line of sight through a medium that combines material from both clouds. If the centers of the clouds pass close to each other along the line of sight, the likely result is a maser flare; see, for example, \citet{Lekht09}.
The object we observe in Cepheus~A\ would, in this scenario, be either a pre- or postflare object, depending on whether the clouds are approaching, or separating from, their minimum line-of-sight separation. Multi-epoch observations would be necessary to test this model via proper motion analysis. At any frequency in the spectrum of the overlapping clouds, a ray amplifying through the overlapping region will pass through an optical depth $\tau_1$ of material from the first cloud and $\tau_2$ from the second cloud, with a resulting spectrum as shown, for example, in Fig.~9 of \citet{Lekht09}. We note that the differently shifted central response frequencies of the two clouds imply that the greatest optical depth, at a particular frequency, does not in general correspond to the greatest combined path length through the clouds, even if the lengths are otherwise identical. Comparison with our Fig.~\ref{fig:f6}, lower panel, suggests that our pair of clouds would be somewhat less overlapped than the Lekht et~al.\ examples. Also, a model with only a pair of approximately spherical clouds does not naturally explain the variability pattern of the Pushchino monitoring. A more realistic model may involve nonspherical clouds or more overlapping clouds. In fact, this brings us close to the turbulence hypothesis discussed in the next subsection, but without a pronounced turbulent vortex. \subsection{Structures in a Turbulent Flow} \label{sec:turbulence} In the third dynamical model, we consider the case of turbulent vortices shed from the dense gas formation. Vortex formation, shedding, and evolution in the flow over a dense obstacle are widely discussed in the literature [e.g., \citet{Lienhard11, Blewins90, Loytsansky70}]. The turbulent motions have different regimes that are described by a set of dimensionless numbers (criteria). The corresponding regime of unsteady flow is usually characterized by the Strouhal number, $St$.
Expressed in observational parameters, it is equal to the ratio of the characteristic scale, $R$, to the product of a characteristic speed, $v$, and characteristic time, $\tau$, i.e., $St=R/(\tau v)$. (Note that the Strouhal number is sometimes defined as the reciprocal, $St^{-1}$.) This number represents the ratio of the local velocity derivative to the convective derivative in the Navier--Stokes equation. Thus, this number describes the ability of the flow to form persistent turbulent vortices. The basic property of this criterion is that the Strouhal number $St$ has values from about 0.2 to about 0.3 for a wide range of Reynolds numbers, $Re$ [see the textbooks mentioned above, the report by \citet{Roshko54}, and the relatively recent experimental study by \citet{Shi11} and theoretical study by \citet{Ponta04}]. In order to facilitate discussion of our observations, we write $St$ as \begin{equation} St=57R_{\rm AU}/(\tau_m v_{\rm kms})~~, \label{eq:eqstrouhal} \end{equation} where $\tau_m$ is the time in months, $v_{\rm kms}$ is the velocity in km~s$^{-1}$, and $R_{\rm AU}$ is the spatial scale in astronomical units. Pushchino monitoring shows that the 0.6~km~s$^{-1}$\ feature in Cepheus~A\ at the time of our observations experienced a rather strong flare, which is not likely to be periodic. Figure~\ref{fig:f4} shows that the flare lasted for about eight months and had two peaks at slightly different velocities. These peaks may correspond either to the full cycle of a single vortex rotation or to the formation of two different vortices. Under the single-vortex hypothesis, the two maser spots correspond to two edges of the vortex. To estimate the Strouhal number, we adopt $\tau_m=16\pm6$, twice the lifetime of the 0.6~km~s$^{-1}$\ maser flare.
We doubled the lifetime because (1) the full cycle of rotation implies that the emission peak returns to the same velocity and (2) the arc in the position-velocity dependence of the 0.6~km~s$^{-1}$\ feature (see Fig.~\ref{fig:f4}) suggests that it lasts for about half of a full cycle. Further, we assume that the component velocity difference of 0.54~km~s$^{-1}$\ corresponds to the velocity difference between the edges of the vortex. Under this assumption, the characteristic velocity should be half of this value, i.e., $v_{\rm kms}=0.27$, and the measured separation of the two components, $R_{\rm AU}=0.11$ (160~$\mu$as), should be about the vortex diameter. The resulting Strouhal number is $St\sim1.5$, which is outside the normal range even for very high Reynolds numbers \citep{Green95, Schewe83}. Hence, we consider the hypothesis of a single turbulent vortex to be unlikely. In the two-vortex interpretation, each maser spot represents a vortex that forms in the wake of an obstacle in an outflow (von~K\'arm\'an street vortices). In our case, the line between the maser spots corresponds well to the axis of the outflow observed by \citet{Chibueze12}, and we consider this outflow as the progenitor of vortex formation. It is possible that the obstacle is associated with HW3diii. Successive vortices in the street rotate in opposite senses. The density of the vortices decreases with distance from the obstacle, so the dense gas responsible for the bright maser emission is present only in close proximity to the obstacle, and we therefore observe only the first two vortices.
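The unit conversion in Eq.~(\ref{eq:eqstrouhal}) and the Strouhal estimates for the two vortex hypotheses can be checked numerically. This is a minimal sketch using the values quoted in the text; the two-vortex numbers ($\tau_m\sim2$ months, $v\sim10$~km~s$^{-1}$) are those adopted for the vortex-street model.

```python
AU = 1.496e11                # astronomical unit, m
MONTH = 30.44 * 86400.0      # mean month, s
KMS = 1.0e3                  # km/s in m/s

# St = R/(tau v) expressed in AU, months, and km/s gives the quoted factor of 57:
factor = AU / (MONTH * KMS)
print(f"conversion factor = {factor:.1f}")   # ~56.9, quoted as 57

def strouhal(R_AU, tau_m, v_kms):
    """Eq. (eqstrouhal): St = 57 R_AU / (tau_m v_kms)."""
    return 57.0 * R_AU / (tau_m * v_kms)

# Single-vortex hypothesis: R = 0.11 AU, tau = 16 months, v = 0.27 km/s
print(f"single vortex: St = {strouhal(0.11, 16.0, 0.27):.2f}")  # ~1.45, outside 0.2-0.3

# Vortex street: shedding period ~2 months, flow speed ~10 km/s, same scale
print(f"vortex street: St = {strouhal(0.11, 2.0, 10.0):.2f}")   # ~0.31, plausible
```

The contrast between the two values is what drives the argument: the single-vortex numbers land well outside the canonical $St\approx0.2$--$0.3$ range, while the vortex-street numbers fall inside it.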
Vortex shedding has the following phases: (1) formation of one vortex with a component of velocity toward the flow axis on one side of the obstacle, (2) formation of another vortex with a component of its velocity toward the flow axis on the other side of the obstacle (at which stage we observe two dense vortices moving toward the flow axis from opposite sides), (3) the vortices approach the flow axis and start moving along the flow (in the meantime, a new vortex starts forming). When the obstacle is not symmetric, the vortices formed on one side of the obstacle can be denser, bigger, and, hence, brighter in maser emission. This model is consistent with the Pushchino monitoring results in Fig.~\ref{fig:f4} under the hypothesis that strong flares correspond to vortex formation, and we observe these structures moving along the flow axis. We should then observe two scales: the larger scale corresponds to vortex separation, or obstacle size (about 0.11~AU in our case), and the smaller scale, to two vortices with opposing rotation that manifest themselves at the highest angular resolution (our observed unresolved structures). Turbulence would therefore dissipate on scales much smaller than 0.11~AU in this region. Temporally, the period of the vortex shedding will correspond to half of the time difference between the strong flares, so about two months for the data in Fig.~\ref{fig:f4}; its characteristic velocity is about 10~km~s$^{-1}$, from typical proper motions measured by \citet{Chibueze12}, and the characteristic size is about 0.11~AU. These parameters give $St=0.3$, a plausible value for a turbulent flow in the interstellar medium. The hypothesis of a pair of turbulent vortices formed by an obstacle in the flow is therefore consistent with both the RadioAstron and Pushchino data from Fig.~\ref{fig:f4}. \section{Conclusion} \label{sec:conclusion} We have investigated the structure of a single maser ``spot" in the Cepheus~A\ region. 
We found that the maser spot had a total extent of about 400~$\mu$as. It is threaded by a magnetic field of less than 120~mG. The substructure is undoubtedly complex, but it includes two prominent structures separated by 160~$\mu$as, which contain about 13\% of the flux. The high contrast suggests that they may be unsaturated lines of sight. They may correspond to a pair of turbulent eddies shed by an obstacle in a flow, i.e., a~K\'arm\'an vortex street with a Strouhal number of about 0.3, to objects bound in orbit by a planetary-size mass, or to individual filaments or overlapping spherical clouds. We note that the current study lacks information on the intermediate baselines, which are essential for accurate image recovery. Involvement of the High-Sensitivity Array (HSA) or the full VLBA in observations of Cepheus~A\ in combination with RadioAstron would help to elucidate whether we have resolved the smallest scale of the turbulence, which is a basic parameter for understanding the evolution and structure of the interstellar medium of star-forming regions. Observations of flares should be conducted at intervals of a few months to determine their temporal and spatial characteristics. \bigskip We thank Vladimir Kostenko, Vyacheslav Avdeev and Pyotr Voitsik for help with correlation and calibration issues and Mark Reid for helpful suggestions on the manuscript. The RadioAstron project is led by the Astro Space Center of the Lebedev Physical Institute of the Russian Academy of Sciences and the Lavochkin Scientific and Production Association under a contract with the Russian Federal Space Agency, in collaboration with partner organizations in Russia and other countries. Partly based on observations performed with radio telescopes of IAA RAS (Institute of Applied Astronomy of the Russian Academy of Sciences). Partly based on observations with the Noto telescope operated by the Istituto di Radioastronomia di Bologna.
Partly based on observations with the 40-m radio telescope of the Yebes Observatory of the IGN (National Geographic Institute of Spain). Technical support was received from the National Space Facilities Control and Test Center. Results of optical position measurements of the Spektr-R spacecraft (the platform for the RadioAstron Space Radio Telescope) by the global MASTER Robotic Net, ISON collaboration, and the Kourovka Astronomical Observatory of the UrFU (Ural Federal University) were used for spacecraft orbit determination in addition to mission facilities. This work was supported in part by the Ministry of Education and Science (the basic part of the State assignment, RK no. AAAA-A17-117030310283-7) and by the Act no. 211 of the Government of the Russian Federation, agreement 02.A03.21.0006. \facilities{RadioAstron Space Radio Telescope (Spectr-R), Yebes Radio Observatory (National Geographic Institute of Spain), Noto Radio Observatory (Bologna Institute of Radio Astronomy), and Zelenchukskaya Radio Observatory} \software{ASC software correlator \citep{Likhachev17}, PIMA \citep{Petrov11}, AIPS \citep{vanMoorsel96}} \vfill\eject
\section{Fourier transforms of $\kappa(z)$ and $\kappa_z(z)$} \label{app:FT_kappa} We consider \begin{eqnarray} && {\kappa}_{q,\omega}(z) = \frac{i\omega+1/\tau}{2{\bar v}_M} \int_0^{\pi/2} d\theta \frac{\sin(\theta)}{\cos(\theta)} \exp\left[ -\frac{i\omega +1/\tau}{{\bar v}_M \cos(\theta)} |z| \right] J_0\big[q |z| \tan(\theta)\big] - \delta(z) ~, \end{eqnarray} and we calculate its Fourier transform. Defining ${\tilde \omega} = \omega - i/\tau$, we get \begin{eqnarray} { \kappa}_{q,\omega}(q_z) &=& - \left\{ \frac{{\tilde \omega}}{2{\bar v}_M\sqrt{q_{z}^2+q^2}} \ln\left[ \frac{{\tilde \omega} -{\bar v}_M\sqrt{q_z^2+q^2}}{{\tilde \omega} + {\bar v}_M\sqrt{q_z^2+q^2}} \right] + 1 \right\} ~. \end{eqnarray} Note that $\kappa_{q,\omega}(q_z) \to 0$ as $q,q_z \to 0$ (as required by gauge invariance). Moreover, expanding the function $\nu_{0,M} \kappa_{q,\omega}(q_z)/2$ for small $q,q_z$ we get \begin{eqnarray} \nu_{0,M} \kappa_{q,\omega}(q_z) \to \frac{n_M}{m_M} \frac{q^2+q_z^2}{\omega^2} ~, \end{eqnarray} as expected, since this function is nothing but the small-$q$ expansion of the density-density response function of the metal. At the same time, for ${\tilde \omega} \to 0$ \begin{eqnarray} \nu_{0,M} \kappa_{q,\omega}(q_z) \to -1 - i\frac{\pi{\tilde \omega}}{2{\bar v}_M\sqrt{q_{z}^2+q^2}} ~. \end{eqnarray} The small-frequency and -momentum behavior of the density-density response function $\nu_{0,M} \kappa_{q,\omega}(q_z)$ agrees with that of a 3DEG.~\cite{Giuliani_and_Vignale} Finally, in the limit $\tau\to\infty$, \begin{eqnarray} \kappa_{q,\omega}(q_z) = - \Bigg\{ \frac{\omega}{2{\bar v}_M\sqrt{q_{z}^2+q^2}} \ln\Bigg| \frac{\omega -{\bar v}_M\sqrt{q_z^2+q^2}}{\omega + {\bar v}_M\sqrt{q_z^2+q^2}} \Bigg| + 1 \Bigg\} - i \frac{\pi \omega}{2{\bar v}_M\sqrt{q_{z}^2+q^2}} \Theta\big[{\bar v}_M^2 (q_z^2+q^2)- \omega^2\big] ~. 
\end{eqnarray} Similarly, we consider \begin{eqnarray} && {\kappa}_{q,\omega}^z(z) = \frac{1}{2} \int_0^{\pi/2} d\theta \sin(\theta) \exp\left[ -\frac{i\omega +1/\tau}{{\bar v}_M \cos(\theta)} |z| \right] J_0\big[q z \tan(\theta)\big] {\rm sign}(z) ~, \end{eqnarray} whose Fourier transform reads \begin{eqnarray} { \kappa}^z_{q,\omega}(q_z) &=& -\frac{i q_z}{q_z^2 + q^2} \kappa_{q,\omega}(q_z) ~. \end{eqnarray} \section{Case $p=1$: the real and imaginary parts of the potential for small $q$} We consider Eq.~(\ref{eq:Z_p_1}), which we rewrite as \begin{eqnarray} \label{eq:plasmon_final_app} Z_{q,\omega} = \frac{2 q}{\pi} \int_{0}^{+\infty} \frac{dq_z}{q_z^2+q^2-q_{{\rm TF},M}^2\kappa_{q,\omega}(q_z)} ~, \end{eqnarray} using the symmetry properties of $\kappa_{q,\omega}(q_z)$. Setting $\omega = c_{\rm p} q$, in the limit $q\to 0$ we get \begin{eqnarray} \kappa_{q,\omega}(q_z) \to -1 - i \frac{\pi \omega}{2 {\bar v}_M q_z} \Theta({\bar v}_M^2 q_z^2 + {\bar v}_M^2 q^2 - \omega^2) ~, \end{eqnarray} which allows us to calculate \begin{eqnarray} \label{eq:plasmon_final_app_Re} Z_{q,\omega} &\to& \frac{2q}{\pi} \int_{0}^{+\infty} \frac{dq_z}{q_z^2 + q_{{\rm TF},M}^2} \left[ 1 - i \frac{\pi q_{{\rm TF},M}^2 \omega}{2 {\bar v}_M q_z} \frac{\Theta({\bar v}_M^2 q_z^2 + {\bar v}_M^2 q^2 - \omega^2)}{q_z^2 + q_{{\rm TF},M}^2} \right] \nonumber\\ &\to& \frac{q}{q_{{\rm TF},M}} \left[ 1 + i \frac{\omega}{2 v q_{{\rm TF},M}} \ln \left(\frac{\omega^2 - {\bar v}_M^2 q^2}{{\bar v}_M^2 q_{{\rm TF},M}^2}\right) \right] ~. \end{eqnarray} \section{Case $p=0$} In this case the Laplace equation for the electrostatic potential becomes \begin{eqnarray} \label{eq:app_poisson_p0} (\partial_z^2 - q^2) {\bar \phi}_{q,\omega}(z) &=& -q_{{\rm TF}, M}^2 \left[ \int_{0}^\infty d\zeta \kappa_{q,\omega}(z-\zeta) {\bar \phi}_{q,\omega}(\zeta) + \kappa^z_{q,\omega}(z) {\bar \phi}_{q,\omega}(0^-) \right] ~, \end{eqnarray} for $z<0$. 
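As a numerical sanity check on the closed form of $\kappa_{q,\omega}(q_z)$ derived in Appendix~\ref{app:FT_kappa}, the expression can be compared with its small-momentum expansion $\kappa_{q,\omega}(q_z)\simeq {\bar v}_M^2(q_z^2+q^2)/(3{\tilde\omega}^2)$, which reproduces the quoted density-density limit once multiplied by the 3DEG density of states $\nu_{0,M}=3n_M/(m_M{\bar v}_M^2)$. The parameter values below are illustrative assumptions, not taken from the text.

```python
import cmath

# Illustrative parameters (assumptions): v_M = 1, omega = 1, 1/tau = 0.01
v = 1.0
omega_t = 1.0 - 0.01j          # tilde-omega = omega - i/tau

def kappa(a, w=omega_t):
    """Closed form: kappa = -( w/(2 v a) * ln[(w - v a)/(w + v a)] + 1 ),
    with a = sqrt(qz^2 + q^2)."""
    return -(w / (2 * v * a) * cmath.log((w - v * a) / (w + v * a)) + 1)

# Small-momentum expansion: kappa ~ v^2 a^2 / (3 w^2); the ratio tends to 1.
for a in (1e-1, 1e-2, 1e-3):
    ratio = kappa(a) / (v**2 * a**2 / (3 * omega_t**2))
    print(a, abs(ratio))
```

The ratio approaches unity as $a\to0$ (the leading correction is of relative order $3a^2/5$), confirming that the logarithm's expansion reproduces the quoted $(q^2+q_z^2)/\omega^2$ behavior.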
This equation can be solved by using the Wiener-Hopf method as we detail in what follows, similarly to the problem of the anomalous skin effect treated in Ref.~\onlinecite{Reuter_procroyal_1948}. Let us first define the functions \begin{eqnarray} && f(z) = \Theta(-z) \phi_{q,\omega}(z) ~, \nonumber\\ && g(z) = q_{{\rm TF},M}^2 \Theta(z) \left[ \int_{0}^\infty d\zeta \kappa_{q,\omega}(z-\zeta) {\bar \phi}_{q,\omega}(\zeta) + \kappa^z_{q,\omega}(z) {\bar \phi}_{q,\omega}(0^-) \right] ~, \end{eqnarray} which allow us to rewrite Eq.~(\ref{eq:app_poisson_p0}) as \begin{eqnarray} \label{eq:WH_1} g(z) = (\partial_z^2 - q^2) f(z) + q_{{\rm TF},M}^2 \left[ \int_{-\infty }^\infty d\zeta \kappa_{q,\omega}(\zeta-z) f(\zeta) + \kappa^z_{q,\omega}(z) f(0) \right] ~. \end{eqnarray} Hereafter $f(0) \equiv f(0^-)$. Similarly for $f'(0) \equiv \partial_z f(z)\big|_{z\to 0^-}$. The form of Eq.~(\ref{eq:WH_1}) is suitable to apply the Wiener-Hopf technique. We will shortly take its Fourier transform. For a generic function $A(z)$, we define it as \begin{eqnarray} A(q_z) &\equiv& {\cal F}[A(z)](q_z) \nonumber\\ &=& \int_{-\infty}^{\infty} dz e^{iq_z z} A(z) ~. \end{eqnarray} We will extend $q_z$ to the whole complex plane, and make use of theorems of complex analysis. For the reader's convenience, we recall them here:~\cite{Noble_WH_book} \begin{enumerate} \item If $A(z)$ is such that $|A(z)|< c_1 e^{a z}$ for $z\to +\infty$, and $|A(z)| < c_2 e^{b z}$ for $z\to -\infty$, with $b>a$, then its Fourier transform $A(q_z)$ is analytic in the strip $a < \mathop {\rm Im}(q_z) < b$. \item Given $a$ and $b$, with $a<b$, if two functions $A(q_z)$ and $B(q_z)$ are analytic, respectively, for $\mathop {\rm Im}(q_z) > a$ and $\mathop {\rm Im}(q_z)<b$, and satisfy $A(q_z) = B(q_z)$ for $a< \mathop {\rm Im}(q_z) < b$, then there exists a unique function $C(q_z)$, analytic everywhere, which coincides with $A(q_z)$ [$B(q_z)$] for $\mathop {\rm Im}(q_z) > a$ [$\mathop {\rm Im}(q_z)<b$]. 
\item Given ${\tilde A}$ and $p\geq 0$ constants (with $p$ an integer), if $A(q_z)$ is an integral function such that $|A(q_z)| \leq {\tilde A} |q_z|^p$ for $|q_z| \to \infty$, then $A(q_z)$ is a polynomial of degree $\leq p$. \item Assume $A(q_z)$ to be an analytic function in the strip $a<\mathop {\rm Im}(q_z)<b$ such that, for $q_z$ in the strip and $|\Re e(q_z)|\to \infty$, $|A(q_z)| < {\tilde A} |q_z|^{-p}$ (with $p>0$ and ${\tilde A}$ a constant). Then $A(q_z)$ can be written as \begin{eqnarray} \label{eq:splitting_sum} A(q_z) = A_+(q_z) + A_-(q_z) ~, \end{eqnarray} where $A_+(q_z)$ is regular for $\mathop {\rm Im}(q_z)>a$ and $A_-(q_z)$ is regular for $\mathop {\rm Im}(q_z) < b$, and \begin{eqnarray} A_\pm(q_z) = \pm \int_{i c_\pm -\infty}^{i c_\pm + \infty} \frac{d{\tilde q}_z}{2\pi i} \frac{A(q_z)}{{\tilde q}_z-q_z} ~, \end{eqnarray} for any $c_+$ and $c_-$ such that $a<c_+<\mathop {\rm Im}(q_z)<c_-<b$. \item Assume $B(q_z)$ to be analytic and {\it different from zero} in the strip $a<\mathop {\rm Im}(q_z)<b$ and such that, for $q_z$ in the strip and $|\Re e(q_z)|\to \infty$, $|B(q_z)| \to 1$. Then $B(q_z)$ can be written as \begin{eqnarray} \label{eq:splitting_product} B(q_z) = \frac{B_+(q_z)}{B_-(q_z)} ~, \end{eqnarray} where $B_+(q_z)$ is regular for $\mathop {\rm Im}(q_z)>a$ and $B_-(q_z)$ is regular for $\mathop {\rm Im}(q_z) < b$, and \begin{eqnarray} \label{eq:def_B_plus_minus} B_\pm(q_z) = \exp\left[ \int_{i c_\pm -\infty}^{i c_\pm + \infty} \frac{d{\tilde q}_z}{2\pi i} \frac{\ln B(q_z)}{{\tilde q}_z-q_z} \right] ~, \end{eqnarray} for any $c_+$ and $c_-$ such that $a<c_+<\mathop {\rm Im}(q_z)<c_-<b$. This theorem is a corollary of the previous one, when the latter is applied to the function $A(q_z) = \ln B(q_z)$. 
\end{enumerate} Accordingly, we now observe that the Fourier transforms of the functions $\kappa_{q,\omega}(z)$ and $\kappa_{q,\omega}^z(z)$ [{\it i.e.} $\kappa_{q,\omega}(q_z)$ and $\kappa_{q,\omega}^z(q_z)$] have the same analytic properties, {\it i.e.} they are analytic in the strip $|\mathop {\rm Im} q_z| < {\bar q}_z$, where ${\bar q}_z = \min\big[q,|\mathop {\rm Im} \sqrt{(\omega+i/\tau)^2-q^2}|\big]$. By definition, $g(q_z)$ is regular for $\mathop {\rm Im} q_z > -{\bar q}_z$, while $f(q_z)$ is analytic for $\mathop {\rm Im} q_z < {\bar q}_z$. Let us consider the Fourier transform of $\partial_z^2 f(z)$ which, taking into account the discontinuities at $z=0$, reads \begin{eqnarray} {\cal F}[\partial_z^2 f(z)] = -q_z^2 f(q_z) + i q_z f(0) - f'(0) ~. \end{eqnarray} The function ${\cal F}[\partial_z^2 f(z)]$ must be bounded for $|q_z|\to \infty$. Therefore, \begin{eqnarray} \label{eq:asymptotics_f} f(q_z) \to i\frac{f(0)}{q_{z}} - \frac{f'(0)}{q_z^2} ~, \end{eqnarray} for $|q_z|\to \infty$. We now take the Fourier transform of Eq.~(\ref{eq:WH_1}) in the strip $|\mathop {\rm Im}(q_z)| < {\bar q}_z$. It reads \begin{eqnarray} \label{eq:WH_2} g(q_z) = -(q_z^2 + q^2) f(q_z) + i q_z f(0) - f'(0) + q_{{\rm TF},M}^2 \left[ \kappa_{q,\omega}(q_z) f(q_z) + \kappa^z_{q,\omega}(q_z) f(0) \right] ~, \end{eqnarray} which is rewritten as \begin{eqnarray} \label{eq:WH_3} g(q_z) - [ q_{{\rm TF},M}^2 \kappa^z_{q,\omega}(q_z) + i q_z ] f(0) + f'(0) = - \big[q_z^2 + q^2 - q_{{\rm TF},M}^2 \kappa_{q,\omega}(q_z) \big] f(q_z) ~. \end{eqnarray} We now want to apply the theorem of Eq.~(\ref{eq:splitting_product}) to the right-hand side of Eq.~(\ref{eq:WH_3}). 
In order to do so, we have to consider the roots of the equation \begin{eqnarray} \label{eq:WH_4} q_z^2 + q^2 - q_{{\rm TF},M}^2 \kappa_{q,\omega}(q_z) = 0 ~. \end{eqnarray} Since $\kappa_{q,\omega}(q_z)$ is even in $q_z$, the solutions of Eq.~(\ref{eq:WH_4}) in the strip $|\mathop {\rm Im} q_z| < {\bar q}_z$ are denoted by $\pm q_{z,i}$ ($i = 1,\ldots,r$). Without loss of generality, we assume that $\mathop {\rm Im}(q_{z,i}) \geq 0$ for all $i = 1,\ldots,r$. We assume $\mathop {\rm Im}(q_{z,i})\neq 0$ for all $i = 1,\ldots,r$, and we order the roots such that $\mathop {\rm Im}(q_{z,1})< \ldots<\mathop {\rm Im}(q_{z,r})$. We define \begin{eqnarray} && P(q_z) = \left\{ \begin{array}{ll} (q_z^2 - q_{z,1}^2) \cdots (q_z^2 - q_{z,r}^2) & {\rm if}~ r\neq 0 \\ 1 & {\rm if}~ r=0 \end{array} \right. ~, \nonumber\\ && \tau(q_z) = \frac{(q_z^2 + {\bar q}_z^2)^{r-1}}{P(q_z)} \big[ (q_z^2 + q^2) - q_{{\rm TF},M}^2 \kappa_{q,\omega}(q_z) \big] ~. \end{eqnarray} Note that with this definition $\tau(q_z)$ is analytic and different from zero in the entire strip $-{\bar q}_z<\mathop {\rm Im}(q_z)<{\bar q}_{z}$, and goes as $\tau(q_z) \to 1$ for $|q_z| \to \infty$. Therefore we can apply the result of Eq.~(\ref{eq:splitting_product}) and decompose it as $\tau(q_z) = \tau_+(q_z)/\tau_-(q_z)$, where [Eq.~(\ref{eq:def_B_plus_minus})] \begin{eqnarray} \label{eq:tau_pm_def} \tau_\pm(q_z) = \exp \left[ \int_{i c_\pm - \infty}^{i c_\pm + \infty} \frac{d{\tilde q}_z}{2\pi i} \frac{\ln \tau({\tilde q}_z)}{{\tilde q}_z-q_z} \right] ~, \end{eqnarray} where $|c_\pm|<{\bar q}_z$. $\tau_+(q_z)$ [$\tau_-(q_z)$] is independent of the choice of $c_+$ ($c_-$). Now, sending $c_+\to -{\bar q}_z$ ($c_-\to {\bar q}_z$), we obtain that the so defined $\tau_+(q_z)$ [$\tau_-(q_z)$] is regular and bounded for $\mathop {\rm Im} q_z > -{\bar q}_z$ ($\mathop {\rm Im} q_z < {\bar q}_z$). 
We now define \begin{eqnarray} \label{eq:Phi_pm_def} \Phi_\pm(q_z) = (q_z \pm i {\bar q}_z)^{\mp(r-1)} \tau_{\pm}(q_z) ~, \end{eqnarray} and we note that $\Phi_+(q_z)$ [$\Phi_-(q_z)$] is still a regular function for $\mathop {\rm Im} q_z > -{\bar q}_z$ ($\mathop {\rm Im} q_z < {\bar q}_z$). With this definition Eq.~(\ref{eq:WH_3}) becomes \begin{eqnarray} \label{eq:WH_5} \frac{P_-(q_z) f(q_z)}{\Phi_-(q_z)} + \frac{ g(q_z) - i q_z f(0) + f'(0) }{P_+(q_z)\Phi_+(q_z)} - \frac{q_{{\rm TF},M}^2 \kappa^z_{q,\omega}(q_z) f(0)}{P_+(q_z) \Phi_+(q_z)} = 0 ~. \end{eqnarray} Here we introduced \begin{eqnarray} P_\pm(q_z) = (q_z \pm q_{z,1})\cdots(q_z \pm q_{z,r}) ~, \end{eqnarray} whose inverse is analytic in the upper (lower) half of the complex plane. We now note that the first term of Eq.~(\ref{eq:WH_5}) is regular for $\mathop {\rm Im} (q_z) < {\bar q}_z$, while the second one is regular for $\mathop {\rm Im} (q_z) > -{\bar q}_z$. The third term, however, is regular only in the strip $-q_{z,1}<\mathop {\rm Im}(q_z) <{\bar q}_z$. The aim now is to split it into two terms $\Psi_+(q_z)$ and $\Psi_-(q_z)$, using Eq.~(\ref{eq:splitting_sum}). Therefore, we define \begin{eqnarray} \Psi_+(q_z) + \Psi_-(q_z) = \frac{\kappa^z_{q,\omega}(q_z)}{P_+(q_z) \Phi_+(q_z)} ~, \end{eqnarray} where $\Psi_-(q_z)$ is regular for $\mathop {\rm Im} (q_z) < {\bar q}_z$ and $\Psi_+(q_z)$ is regular for $\mathop {\rm Im} (q_z) > -q_{z,1}$ [or $\mathop {\rm Im} (q_z) > -{\bar q}_{z}$, if Eq.~(\ref{eq:WH_4}) has no zeros]. The two functions are defined as \begin{eqnarray} \Psi_\pm(q_z) = \pm \int_{i c_\pm -\infty}^{i c_\pm + \infty} \frac{d{\tilde q}_z}{2\pi i} \frac{\kappa^z_{q,\omega}({\tilde q}_z)}{({\tilde q}_z-q_z)P_+(q_z) \Phi_+(q_z)} ~, \end{eqnarray} where $-q_{z,1}< c_+ < \mathop {\rm Im}(q_z) < c_- < {\bar q}_z$. Note that the integrand goes to zero faster than $1/|{\tilde q}_z|$ for $|{\tilde q}_z| \to \infty$, and the hypotheses of the theorem are therefore satisfied. 
With this definition Eq.~(\ref{eq:WH_5}) is re-arranged as \begin{eqnarray} \label{eq:WH_6} \frac{P_-(q_z) f(q_z)}{\Phi_-(q_z)} - q_{{\rm TF},M}^2 f(0) \Psi_-(q_z) = q_{{\rm TF},M}^2 f(0) \Psi_+(q_z) -\frac{ g(q_z) - i q_z f(0) + f'(0) }{P_+(q_z)\Phi_+(q_z)} ~. \end{eqnarray} Now the left-hand side is regular for $\mathop {\rm Im} (q_z) < {\bar q}_z$, while the right-hand side is regular for $\mathop {\rm Im} (q_z) > -q_{z,1}$, and they coincide on the strip $-q_{z,1}< \mathop {\rm Im} (q_z) < {\bar q}_z$. Therefore, together they define a function $\psi(q_z)$ analytic in the whole complex plane. Note that $g(q_z) \sim 1/|q_z|$ and $\Psi_\pm(q_z) \sim 1/|q_z|$ for $|q_z| \to \infty$, while $\tau_{\pm}(q_z)$ are bounded. The left-hand side goes as $q_z^0$ as $|q_z| \to \infty$; being entire and bounded, $\psi(q_z)$ is therefore equal, by Liouville's theorem, to a constant $Q$ to be determined. Therefore \begin{eqnarray} \label{eq:WH_7} f(q_z) = \frac{\Phi_-(q_z)[Q + q_{{\rm TF},M}^2 f(0) \Psi_-(q_z)]}{P_-(q_z)} ~. \end{eqnarray} The overall constant is determined from the large-$q_z$ behavior of $f(q_z)$ given in Eq.~(\ref{eq:asymptotics_f}), which implies that $Q = i f(0)$, and therefore \begin{eqnarray} \label{eq:WH_8} f(q_z) = if(0) \frac{\Phi_-(q_z)[1 - i q_{{\rm TF},M}^2 \Psi_-(q_z)]}{P_-(q_z)} ~. \end{eqnarray} We now calculate $Z_{q,\omega} \equiv qf(0)/f'(0)$. The value of $f'(0)$ is obtained by taking the following limit: \begin{eqnarray} \label{eq:f_prime_def} f'(0) &=& \lim_{q_z\to \infty} \big[ i q_z f(0) -q_z^2 f(q_z) \big] \nonumber\\ &=& i f(0) \lim_{q_z\to \infty} \left[ q_z - q_z \frac{\tau_-(q_z) (1 - i {\bar q}_z/q_z)^{1-r} [1 - i q_{{\rm TF},M}^2 \Psi_-(q_z)]}{\displaystyle\prod_{i=1}^r (1 + q_{z,i}/q_z)} \right] \nonumber\\ &=& i f(0)\big[i {\nu'} + i \nu +i (1-r) {\bar q}_z + q_{z,1} + \ldots + q_{z,r} \big] ~, \end{eqnarray} where we set $\tau_-(q_z) = 1 - i \nu/q_z$ and we used that $\Psi_-(q_z) = {\nu}'/(q_{{\rm TF},M}^2 q_z)$ for $|q_z| \to \infty$. 
$\nu$ is determined from \begin{eqnarray} \label{eq:nu_def} \nu &=& -\int_{-\infty}^{\infty}\frac{dq_z}{2\pi} \ln\tau(q_z) \nonumber\\ &=& - \int_{0}^{\infty}\frac{dq_z}{2\pi} \ln\big[\tau(q_z)\tau(-q_z)\big] \nonumber\\ &=& -\int_{0}^{\infty}\frac{dq_z}{\pi} \ln\left[ \frac{(q_z^2 + {\bar q}_z^2)^{r}}{P(q_z)} \frac{q_z^2 + q^2 - q_{{\rm TF},M}^2 \kappa_{q,\omega}(q_z)}{q_z^2 + {\bar q}_z^2} \right] ~, \end{eqnarray} where we took $c_-\to 0^+$ in Eq.~(\ref{eq:tau_pm_def}). Therefore, the integral on the last line of Eq.~(\ref{eq:nu_def}) is intended to be evaluated for $q_z$ infinitesimally above the real axis. In a similar way (we send $c_\pm\to 0^\mp$ in the integration boundaries) \begin{eqnarray} \nu' = q_{{\rm TF},M}^2 \int_{-\infty}^{+\infty} \frac{d{\tilde q}_z}{2\pi i} \frac{\kappa^z_{q,\omega}({\tilde q}_z)}{P_+(q_z) \Phi_+(q_z)} ~. \end{eqnarray} Finally, we get \begin{eqnarray} Z_{q,\omega} = - \frac{q}{{\nu'} + \nu + (1-r) {\bar q}_z -i(q_{z,1} + \ldots + q_{z,r})} ~. \end{eqnarray} \end{document}
\section{Introduction} Let $G$ be a permutation group on a finite set $\O$ of size $n$. A subset of $\O$ is said to be a {\it base} for $G$ if its pointwise stabilizer in $G$ is trivial. The minimal size of a base for $G$ is denoted by $b(G)$ (or sometimes $b(G,\O)$ if we wish to emphasize the action). It is easy to see that $|G| \le n^{b(G)}$, so that $b(G) \ge \frac{\log |G|}{\log n}$. A well-known conjecture of Pyber \cite{P} asserts that there is an absolute constant $c$ such that if $G$ is primitive on $\O$, then $b(G) < c\, \frac{\log |G|}{\log n}$. Following substantial contributions by a number of authors, the conjecture was finally established in \cite{Mar} in the following form: there is an absolute constant $C$ such that for every primitive permutation group $G$ of degree $n$, \begin{equation}\label{bd} b(G) < 45\, \frac{\log |G|}{\log n} + C. \end{equation} To obtain a more explicit, usable bound, one would like to reduce the multiplicative constant 45 in the above, and also estimate the constant $C$. Most of the work in \cite{Mar} was concerned with affine groups contained in $AGL(V)$, acting on the set of vectors in a finite vector space $V$ (since the conjecture had already been established for non-affine groups elsewhere). For these, one needs to bound the base size for a linear group $G \le GL(V)$ that acts irreducibly on $V$. One source for the undetermined constant $C$ in the bound (\ref{bd}) comes from a key result in this analysis, namely Proposition 2.2 of \cite{LSbase}, in which quasisimple linear groups are handled. 
This result says that there is a constant $C_0$ such that if $G$ is a quasisimple group acting irreducibly on a finite vector space $V$, then either $b(G) \le C_0$, or $G$ is a classical or alternating group and $V$ is the natural module for $G$; here by the natural module for an alternating group $A_m$ over $\F_{p^e}$ ($p$ prime) we mean the irreducible ``deleted permutation module" of dimension $m-\d(p,m)$, where $\d(p,m)$ is 2 if $p|m$ and is 1 otherwise. This result played a major role in the proof of Pyber's conjecture for primitive linear groups in \cite{LSbase, LSbase2}, which was heavily used in the final completion of the conjecture in \cite{Mar}. The main result in this paper shows that the constant $C_0$ just mentioned can be taken to be 6. Recall that for a finite group $G$, we denote by $E(G)$ the subgroup generated by all quasisimple subnormal subgroups of $G$. Also write $V_d(q)$ to denote a $d$-dimensional vector space over $\F_q$. \begin{theorem} \label{main} Let $V = V_d(q)$ ($q=p^e$, $p$ prime) and $G \leq GL(V)$, and suppose that $E(G)$ is quasisimple and absolutely irreducible on $V$. Then one of the following holds: \begin{itemize} \item[{\rm (i)}] $E(G) = {\rm Alt}_m$ and $V$ is the natural ${\rm Alt}_m$-module over $\F_q$, of dimension $d = m-\d(p,m)$; \item[{\rm (ii)}] $E(G) = Cl_d(q_0)$, a classical group with natural module of dimension $d$ over a subfield $\F_{q_0}$ of $\F_q$; \item[{\rm (iii)}] $b(G)\le 6$. \end{itemize} \end{theorem} This result has been used in \cite{HLM} to improve the bound (\ref{bd}), replacing the multiplicative constant 45 by 2, and the constant $C$ by 24. With substantially more effort, it should be possible to reduce the constant 6 in part (iii) of the theorem, and work on this by the first author is in progress. 
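As a toy illustration of the definitions above (not part of the argument), the base size $b(G)$ of a small permutation group can be computed by brute force; the inequality $b(G) \ge \log|G|/\log n$ is already visible for $S_4$ in its natural action.

```python
from itertools import combinations
from math import log

def generate_group(gens, n):
    """Close a set of permutations (tuples with g[i] = image of i) under composition."""
    group = set(gens) | {tuple(range(n))}
    frontier = set(group)
    while frontier:
        new = set()
        for g in frontier:
            for h in gens:
                gh = tuple(g[h[i]] for i in range(n))  # apply h, then g
                if gh not in group:
                    new.add(gh)
        group |= new
        frontier = new
    return group

def base_size(group, n):
    """Smallest k such that some k-subset of points has trivial pointwise stabilizer."""
    for k in range(n + 1):
        for pts in combinations(range(n), k):
            if sum(1 for g in group if all(g[p] == p for p in pts)) == 1:
                return k
    return n

# S_4 in its natural action on 4 points, generated by a transposition and a 4-cycle
S4 = generate_group([(1, 0, 2, 3), (1, 2, 3, 0)], 4)
print(len(S4), base_size(S4, 4))   # 24 3  (b(S_n) = n-1 in the natural action)
print(log(len(S4)) / log(4))       # ~2.29 <= b(G) = 3, illustrating b(G) >= log|G|/log n
```

Any two points of $\{1,\ldots,4\}$ are fixed by the transposition of the remaining two, while fixing three points forces the fourth, so $b(S_4)=3$ in this action.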
\section{Preliminary lemmas} If $G$ is a finite classical group with natural module $V$, we define a {\it subspace action} of $G$ to be an action on an orbit of subspaces of $V$, or, in the case where $G = Sp_{2m}(q)$ with $q$ even, the action on the cosets of a subgroup $O_{2m}^\pm (q)$. \begin{lem}\label{sporex} Let $G$ be an almost simple group with socle $G_0$, and suppose $G$ acts transitively on a set $\O$. \begin{itemize} \item[{\rm (i)}] If $G_0$ is exceptional of Lie type, or sporadic, then $b(G)\le 7$, with equality only if $G = M_{24}$. \item[{\rm (ii)}] If $G_0$ is classical, and the action of $G$ on $\O$ is primitive and not a subspace action, then $b(G)\le 5$, with equality if and only if $G = U_6(2).2$, $\O = (G:U_4(3).2^2)$. \end{itemize} \end{lem} \pf Part (i) follows from \cite[Corollary 1]{BLS} and \cite[Corollary 1]{BOW}. Part (ii) is \cite[Theorem 1.1]{Bur}. \hal For a simple group $G_0$, and $1\ne x \in {\rm Aut}(G_0)$, define $\a(x)$ to be the minimal number of $G_0$-conjugates of $x$ required to generate the group $\langle G_0, x\ra$, and define \[ \a(G_0) = {\rm max}\,\left(\a(x)\,:\,1\ne x \in {\rm Aut}(G_0) \right). \] \begin{lem}\label{gusa} Let $G_0 = Cl_n(q)$, a simple classical group over $\F_q$ with natural module of dimension $n$. Then one of the following holds: \begin{itemize} \item[{\rm (i)}] $\a(G_0) \le n$; \item[{\rm (ii)}] $G_0 = PSp_n(q)$ ($q$ even) and $\a(G_0) \le n+1$; \item[{\rm (iii)}] $G_0 = L_2(q)$ and $\a(G_0) \le 4$; \item[{\rm (iv)}] $G_0 = L_3(q)$ and $\a(G_0) \le 4$; \item[{\rm (v)}] $G_0= L_4^\e(q)$ and $\a(G_0) \le 6$; \item[{\rm (vi)}] $G_0 = PSp_4(q)$ and $\a(G_0) \le 5$; \item[{\rm (vii)}] $G_0 = L_2(9),\,U_3(3)$ or $L_4^\e(2)$. \end{itemize} \end{lem} \pf This is \cite[3.1 and 4.1]{GS}. 
\hal To state the next result, let $\bar G$ be a simple algebraic group over an algebraically closed field $K$ of characteristic $p$, and let $V=V(\lambda)$ be an irreducible $K\bar G$-module of $p$-restricted highest weight $\lambda$. Let $\Phi$ be the root system of $\bar G$, with fundamental roots $\a_1,\ldots ,\a_l$, and let $\lambda_1,\ldots ,\lambda_l$ be corresponding fundamental dominant weights. Denote by $\Phi_S$ (resp. $\Phi_L$) the set of short (resp. long) roots in $\Phi$, and if all roots have the same length, just write $\Phi_S=\Phi$, $\Phi_L=\emptyset$. Let $W = W(\Phi)$ be the Weyl group, and for $\a \in \Phi$ let $U_\a = \{u_\a(t):t\in K\}$ be a corresponding root subgroup. Now let $\mu$ be a dominant weight of $V=V(\lambda)$, write $\mu = \sum_{j=1}^l c_j\lambda_j$, and let $\Psi = \langle \a_i : c_i=0\ra$ be a subsystem of $\Phi$. Define \[ r_\mu = \frac{|W:W(\Psi)|\cdot |\Phi_S\setminus \Psi_S|}{2|\Phi_S|},\;\; r_\mu' = \frac{|W:W(\Psi)|\cdot |\Phi_L\setminus \Psi_L|}{2|\Phi_L|} \] (the latter only if $\Phi_L\ne \emptyset$). Let \[ s_\lambda = \sum_{\mu} r_\mu,\;\; s_\lambda' = \sum_{\mu} r_\mu'\;(\hbox{ if }\Phi_L\ne \emptyset), \] where each sum is over the dominant weights $\mu$ of $V(\lambda)$. For $g \in \bar G\setminus Z(\bar G)$ and $\g \in K^*$, let $V_\g(g) = \{v \in V : vg = \g v\}$, and write ${\rm codim}V_\g(g) = \dim V - \dim V_\g(g)$. \begin{lem}\label{lawth} Let $V = V(\lambda)$ as above. \begin{itemize} \item[{\rm (i)}] If $g \in \bar G\setminus Z(\bar G)$ is semisimple, and $\g \in K^*$, then ${\rm codim}V_\g(g) \ge s_\lambda$. \item[{\rm (ii)}] If $\a \in \Phi_S$, then ${\rm codim}V_1(u_\a(1)) \ge s_\lambda$. \item[{\rm (iii)}] If $\Phi_L\ne \emptyset$ and $\b \in \Phi_L$, then ${\rm codim}V_1(u_\b(1)) \ge s_\lambda'$. \item[{\rm (iv)}] For any non-identity unipotent element $u \in \bar G$, we have ${\rm codim}V_1(u) \ge {\rm min}(s_\lambda,s_\lambda')$. \end{itemize} \end{lem} \pf Parts (i)-(iii) are \cite[Prop. 2.2.1]{law}. 
For part (iv), note that \cite[Cor. 3.4]{GM} shows that $\dim V_1(u)$ is bounded above by the maximum of $\dim V_1(u_\a(1))$ and $\dim V_1(u_\b(1))$; hence (iv) follows from (ii) and (iii). \hal For $\bar G$ of type $D_5$ or $D_6$ and $V$ a spin module for $\bar G$, we shall need the following sharper result. Note that the root system $D_n$ has two subsystems of type $A_1^2$ (up to conjugacy in the Weyl group); with the usual labelling of fundamental roots, we denote these by $(A_1^2)^{(1)} = \langle \a_1,\a_3 \rangle$ and $(A_1^2)^{(2)} = \langle \a_{n-1},\a_n \rangle$. \begin{lem}\label{d56} Let $\bar G = D_n$ with $n\in \{5,6\}$, and let $V=V(\lambda)$ be a spin module for $\bar G$ with $\lambda = \lambda_n$ or $\lambda_{n-1}$. Let $s \in \bar G\setminus Z(\bar G)$ be a semisimple element, and $u \in \bar G$ a unipotent element of order $p$. \begin{itemize} \item[{\rm (i)}] Suppose $n=6$. Then ${\rm codim}V_\g(s) \ge 12$ for any $\g \in K^*$; and ${\rm codim}V_1(u) \ge 12$ provided $u$ is not a root element. \item[{\rm (ii)}] Suppose $n=5$. \begin{itemize} \item[{\rm (a)}] Then ${\rm codim}V_\g(s) \ge 8$ for any $\g\in K^*$, provided $C_{\bar G}(s)'\ne A_4$; and if $C_{\bar G}(s)'= A_4$, then ${\rm codim}V_\g(s) \ge 6$. \item[{\rm (b)}] Provided $u$ is not a root element and also does not lie in a subsystem subgroup $(A_1^2)^{(1)}$, we have ${\rm codim}V_1(u) \ge 8$. \end{itemize} \end{itemize} \end{lem} \pf For semisimple elements $s$, we follow the method of \cite[Section 8]{law} (originally in \cite{ken}). Let $\Psi$ be a subsystem of the root system $\Phi$ of $\bar G$, and define an equivalence relation on the set of weights of $V(\lambda)$ by saying that two weights are related if their difference is a sum of roots in $\Psi$. Call the equivalence classes $\Psi$-{\it nets}. Now define $\Phi_s = \{\a \in \Phi\,|\,\a(s)=1\}$, the root system of $C_{\bar G}(s)$.
If $\Phi_s \cap \Psi = \emptyset$, then any two weights in a given $\Psi$-net that differ by a root in $\Psi$ correspond to different eigenspaces for $s$. The subsystem $\Phi_s$ is contained in a proper subsystem spanned by a subset of the nodes of the extended Dynkin diagram of $\bar G$. Suppose $\Phi_s \ne A_{n-1}$. Then it is straightforward to check that there is a subsystem $\Psi$ that is $W$-conjugate to $(A_1^2)^{(2)}$ such that $\Phi_s\cap \Psi = \emptyset$. For this $\Psi$ there are $2^{n-2}$ $\Psi$-nets of size $2$, and so it follows from the observation in the previous paragraph that ${\rm codim}V_\g(s) \ge 2^{n-2}$ for any $\g \in K^*$. Now suppose $\Phi_s = A_{n-1}$. Here there is a subsystem $\Psi$ that is $W$-conjugate to $(A_1^2)^{(1)}$ such that $\Phi_s\cap \Psi = \emptyset$. For this $\Psi$ there are $2^{n-5}$ (resp. $2^{n-3}$, $2^{n-3}$) $\Psi$-nets of size 4 (resp. 2,1), and hence ${\rm codim}V_\g(s) \ge 2^{n-4}+2^{n-3}$ for any $\g \in K^*$. This lower bound is 12 when $n=6$, and 6 when $n=5$. This proves (i) and (ii) for semisimple elements. Now consider unipotent elements $u \in \bar G$ of order $p$. Assume first that $p$ is odd. Recall that the Jordan form of a unipotent element $u \in D_n$ on the natural module determines a partition $\lambda$ of $2n$ having an even number of parts of each even size; moreover, each such partition corresponds to a single conjugacy class, except when all parts of $\lambda$ are even, in which case there are two classes, interchanged by a graph automorphism of $D_n$ (see \cite[Chapter 3]{LSbk}). Denote by $u_\lambda$ (and by $u_\lambda,u_\lambda'$ for the exceptional partitions) representatives of the unipotent classes in $\bar G$. By \cite[\S 4]{Spalt}, if $\mu,\lambda$ are partitions and $\mu < \lambda$ in the usual dominance order, then $u_\mu$ lies in the closure of the class $u_\lambda^{\bar G}$ (or $u_\lambda'^{\bar G}$).
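The net counts in the semisimple case above can be confirmed by brute-force enumeration. The following sketch (an illustration only, not part of the proof) encodes a half-spin weight $\frac{1}{2}(\pm e_1\pm\cdots\pm e_n)$ of $D_n$ as a $\pm 1$ sign vector with an even number of minus signs, and sorts the weights into $\Psi$-nets via an invariant that is constant precisely on cosets of $\mathbb{Z}\Psi$.

```python
# Illustrative check of the Psi-net counts for the half-spin module of D_n
# (n = 5, 6).  A half-spin weight is encoded as a +/-1 sign vector with an
# even number of minus signs.  Two weights lie in the same Psi-net iff
# their difference is an integer combination of the roots in Psi; for the
# two subsystems used above this is captured by an explicit "key".
from itertools import product
from collections import Counter

def halfspin_weights(n):
    return [s for s in product((1, -1), repeat=n) if s.count(-1) % 2 == 0]

def net_sizes(weights, key):
    classes = Counter(key(s) for s in weights)
    return Counter(classes.values())  # {net size: number of nets}

for n in (5, 6):
    W = halfspin_weights(n)
    assert len(W) == 2 ** (n - 1)

    # Psi = <alpha_1, alpha_3> (alpha_1 = e1 - e2, alpha_3 = e3 - e4):
    # the sums s1+s2, s3+s4 and the remaining coordinates are invariant.
    sizes = net_sizes(W, lambda s: (s[0] + s[1], s[2] + s[3]) + s[4:])
    assert sizes == Counter({4: 2 ** (n - 5), 2: 2 ** (n - 3), 1: 2 ** (n - 3)})

    # Each size-4 net contributes at least 2 to the codimension of any
    # eigenspace, and each size-2 net at least 1:
    bound = 2 * sizes[4] + sizes[2]   # = 2^{n-4} + 2^{n-3}
    assert bound == (12 if n == 6 else 6)

    # Psi = <alpha_{n-1}, alpha_n> (alpha = e_{n-1} -/+ e_n): the first
    # n-2 coordinates and the product of the last two are invariant.
    sizes = net_sizes(W, lambda s: s[:n - 2] + (s[n - 2] * s[n - 1],))
    assert sizes == Counter({2: 2 ** (n - 2)})
print("net counts verified for n = 5, 6")
```

The asserted counts reproduce exactly the figures $2^{n-2}$, and $2^{n-5},2^{n-3},2^{n-3}$, quoted in the proof.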
Suppose $u$ is not a root element, and also is not in a subsystem subgroup $(A_1^2)^{(1)}$ when $n=5$. Then it follows from the above that the closure of $u^{\bar G}$ contains $u' = u_\mu$ with $\mu = (3,1^{2n-3})$ or $(2^4,1^{2n-8})$, the latter only if $n=6$. Moreover, ${\rm codim} V_1(u) \ge {\rm codim} V_1(u')$ (see the proof of \cite[3.4]{GM}). If $\mu = (3,1^{2n-3})$, then $u'$ lies in the $B_1$ factor of a subgroup $B_1\times B_{n-2}$ of $\bar G$, and the restriction of $V$ to this subgroup is given by \cite[11.15(ii)]{LSbk}; it follows that $u'$ acts on $V$ with Jordan form $J_2^{2^{n-2}}$, giving the conclusion in this case. And if $\mu = (2^4,1^4)$ with $n=6$, then $u'$ is in $(A_1^2)^{(1)}$, which is contained in a subsystem $A_4$, and the restriction of the spin module $V$ to $A_4$ is given by \cite[11.15(i)]{LSbk}; the lower bound on ${\rm codim}V_1(u')$ in (i) follows easily from this. It remains to consider unipotent involutions with $p=2$. The conjugacy classes of these in $\bar G$ are described in \cite[\S 7]{AS} (alternatively in \cite[Chapter 6]{LSbk}). Adopting the notation of \cite{AS}, representatives are $a_l,c_l$ ($l$ even, $2\le l\le n$), and also $a_6'$ in $D_6$ (which is conjugate to $a_6$ under a graph automorphism). These are regular elements of Levi subsystem subgroups $S$, as follows: \[ \begin{array}{l|ccccccc} u & a_2&c_2&a_4&c_4&a_6&a_6'&c_6 \\ \hline S & A_1 & (A_1^2)^{(2)} & (A_1^2)^{(1)} & A_1 (A_1^2)^{(2)} & (A_1^3)^{(1)} &(A_1^3)^{(2)}& A_1^4 \end{array} \] where $(A_1^3)^{(1)} = \langle \a_1,\a_3,\a_5\rangle$ and $(A_1^3)^{(2)} = \langle \a_1,\a_3,\a_6\rangle$. The restrictions $V\downarrow S$ can be worked out using \cite[11.15]{LSbk}, from which we calculate $\dim C_V(u)$ for all the representatives: \[ \begin{array}{r|ccccccc} u & a_2&c_2&a_4&c_4&a_6&a_6'&c_6 \\ \hline \dim C_V(u),\,n=5 & 12&8&10&8&-&-&- \\ \hline \dim C_V(u),\,n=6 & 24 & 16& 20 & 16 & 20 & 16 & 16 \end{array} \] The conclusion of the lemma follows.
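As a quick arithmetic cross-check of the table above (illustrative only): with $\dim V = 2^{n-1}$, the codimensions $\dim V - \dim C_V(u)$ meet the bounds of the lemma once the excluded classes are set aside. Here $a_2$ is the root-element class, and for $n=5$ the class $a_4$ lies in a subsystem subgroup $(A_1^2)^{(1)}$.

```python
# Cross-check of the p = 2 fixed-point table above: codim C_V(u) meets the
# bounds of the lemma once the excluded classes (a_2; and also a_4 when
# n = 5) are set aside.
fixed_points = {
    5: {'a2': 12, 'c2': 8, 'a4': 10, 'c4': 8},
    6: {'a2': 24, 'c2': 16, 'a4': 20, 'c4': 16, 'a6': 20, "a6'": 16, 'c6': 16},
}
for n, row in fixed_points.items():
    dim_V = 2 ** (n - 1)                      # dimension of the spin module
    codim = {u: dim_V - d for u, d in row.items()}
    excluded = {'a2'} if n == 6 else {'a2', 'a4'}
    target = 12 if n == 6 else 8
    assert all(c >= target for u, c in codim.items() if u not in excluded)
    # the excluded classes genuinely fail the bound, so the exclusions in
    # the statement of the lemma are necessary
    assert all(codim[u] < target for u in excluded)
print("codimension bounds confirmed")
```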
\hal \section{Bases for some subspace actions}\label{basesub} Let $G = Cl(V)$ be a simple symplectic, unitary or orthogonal group over $\F_q$, with natural module $V$ of dimension $n$. For $r < n$, denote by ${\mathcal N}_r$ an orbit of $G$ on the set of non-degenerate $r$-subspaces of $V$. The main result of this section gives an upper bound for the base size of the action of $G$ on ${\mathcal N}_r$ when $r$ is very close to $\frac{n}{2}$: \begin{thm}\label{nrbase} Let $G_0 = PSp_n(q)\,(n\ge 6)$, $PSU_n(q)\,(n\ge 4)$ or $P\O^\e_n(q)\,(n\ge 7,\,q \hbox{ odd})$, and let $G$ be a group with socle $G_0$ such that $G \le PGL(V)$, where $V$ is the natural module for $G_0$. Define \[ r = \left\{\begin{array}{l} \frac{1}{2}\left(n-(n,4)\right), \hbox{ if }G_0 = PSp_n(q), \\ \frac{1}{2}\left(n-(n,2)\right), \hbox{ if }G_0 = PSU_n(q) \hbox{ or }P\O^\e_n(q). \end{array} \right. \] Then $b(G,{\mathcal N}_r)\le 5$. \end{thm} Theorem \ref{nrbase} will follow quickly from the following result. The deduction is given in Section \ref{ded}. \begin{thm} \label{7/30} Let $G$ and $r$ be as in Theorem $\ref{nrbase}$, and let $H$ be the stabilizer in $G$ of a non-degenerate $r$-subspace in ${\mathcal N}_r$. Let $x \in G$ be an element of prime order. Then one of the following holds: \begin{itemize} \item[{\rm (i)}] $\frac{\log |x^G \cap H|}{\log |x^G|}< \frac{1}{2}+\frac{7}{30}$; \item[{\rm (ii)}] $G_0 = PSp_8(q)$ and $x$ is a unipotent element with Jordan form $(2,1^6)$. \end{itemize} \end{thm} Our proof is modelled on that of \cite[Thm. 1.1]{tim3}, where a similar conclusion is obtained for the action of $G$ on the set of pairs $\{U, U^\perp \}$ of non-degenerate $n/2$-spaces. \subsection{Proof of Theorem \ref{7/30}} We shall give a proof of the theorem just for the case where $G_0$ is a symplectic group $PSp_n(q)$. The proofs for the orthogonal and unitary groups run along entirely similar lines. We begin with a lemma on the corresponding algebraic groups. 
Let $K = \bar \F_q$ and $\bar{G} = PSp_n(K)$, and let $V=V_n(K)$ be the underlying symplectic space. As in Theorem \ref{7/30}, write $r=\frac{1}{2}\left(n-(n,4)\right) = \frac{1}{2}n-m$, where $m = \frac{1}{2}(n,4)$. Let $\bar H$ be the stabilizer in $\bar G$ of a non-degenerate $r$-subspace, so that $\bar{H} = (Sp_{n/2-m}(K) \times Sp_{n/2+m}(K))/\{\pm I\}$. Write $p = {\rm char}(K)$. When $p=2$, the classes of involutions in $\bar G$ are determined by \cite{AS}: for any odd $l\le n/2$, there is one class with Jordan form of type $(2^l,1^{n-2l})$, with representative denoted by $b_l$; and for any nonzero even $l\le n/2$ there are two such classes, with representatives denoted by $a_l,c_l$. These are distinguished by the fact that $(v,va_l)=0$ for all $v\in V$. \begin{lem}\label{myprop} With the above notation, if $x$ is an element of prime order in $\bar H$, then $\dim (x^{\bar{G}} \cap {\bar{H}}) \le N_x$, where $N_x$ is given in Table $\ref{algtable}$. In the table, $l_0$ is the multiplicity of the eigenvalue 1 in the action of $x$ on $V$, and $a_i$ is the number of Jordan blocks of size $i$ in the Jordan form of $x$. \end{lem} \begin{table}[h!] 
\centering \begin{tabular}{|c|c|} \hline Type of element $x$ & $N_x$ \\ \hline semisimple of odd prime order & $\frac{1}{2} \dim x^{\bar{G}} + \frac{1}{4}(n-l_0) +m^2$ \\ semisimple involutions & $\left(\frac{1}{2} + \frac{2}{n}\right) \dim x^{\bar{G}}$ \\ unipotent of odd prime order & $\frac{1}{2} \dim x^{\bar{G}} + \frac{1}{4}(n- \sum_{i \textrm{ odd}} a_i) +m^2$ \\ unipotent involutions of types $b_l$, $c_l$ & $\left( \frac{1}{2}+\frac{2m+1}{n+2}\right) \dim x^{\bar{G}}$ \\ unipotent involutions of type $a_l$ & $\left(\frac{1}{2}+\frac{3m}{2n}\right)\dim x^{\bar{G}}$ \\ \hline \end{tabular} \caption{Bounds on $\dim (x^{\bar{G}} \cap {\bar{H}})$ for elements $x$ of prime order.} \label{algtable} \end{table} \pf Denote by $V_1$ and $V_2=V_1^\perp$ the $(n/2-m)$- and $(n/2+m)$-dimensional subspaces of $V$ preserved by ${\bar{H}}$. First suppose $x \in {\bar{H}}$ is a semisimple element of odd prime order $r$. Let $\omega$ be a primitive $r$th root of unity and let $\ell_i$ be the multiplicity of $\omega^i$ $(0 \leq i \leq r-1)$ as an eigenvalue of $x$ in its action on $V$. We further define $y_{ij}$ to be the multiplicity of $\omega^i$ as an eigenvalue of $x$ in its action on $V_j$. Note that $\ell_i = y_{i1} + y_{i2}$. Then \[ \dim x^{\bar{G}} = \frac{n^2+n}{2} - \left(\frac{\ell_0}{2} + \frac{1}{2}\sum_{i=0}^{r-1} \ell_i^2\right), \] and furthermore, \[ \begin{array}{ll} \dim (x^{\bar{G}} \cap {\bar{H}}) & = \dim x^{\bar{H}} \leq \frac{n^2+2n}{4}+m^2 - (\frac{1}{2} \ell_0 + \frac{1}{4}\sum_{i=0}^{r-1} \ell_i^2) \\ & = \frac{1}{2}\dim x^{\bar{G}} + \frac{1}{4}(n-\ell_0)+m^2 \\ & \leq \left( \frac{1}{2}+\frac{1}{n+2}\right) \dim x^{\bar{G}} + m^2. \end{array} \] Now suppose that $x$ is a semisimple involution. Here $C_{\bar G}(x)^0$ is the image modulo $\{\pm I\}$ of either $GL_{n/2}(K)$ or $Sp_l(K) \times Sp_{n-l}(K)$, for some even $l\le n/2$.
In the first case, $\dim x^{\bar{G}} = n^2/4+n/2$ and so \[ \dim (x^{\bar{G}} \cap {\bar{H}}) = \dim x^{\bar{H}} = \frac{1}{2} \dim x^{\bar{G}} +\frac{n}{4} +\frac{m^2}{2} \leq \left(\frac{1}{2}+\frac{1}{n}\right) \dim x^{\bar{G}} + \frac{m^2-1}{2} \leq \left(\frac{1}{2}+\frac{2}{n}\right) \dim x^{\bar{G}}. \] Now consider the second case, where $C_{\bar G}(x)^0 = Sp_l(K) \times Sp_{n-l}(K)$. Here $x$ is ${\bar{G}}$-conjugate to $[ -I_l, I_{n-l}]$, and $\dim x^{\bar{G}}= nl - l^2 = l(n-l)$. For $j=1,2$, the restriction of $x$ to $V_j$ is $Sp(V_j)$-conjugate to $[ -I_{l_j}, I_{\dim V_j-l_j}]$ for some even integer $l_j \geq 0$. Noting that $l = l_1+l_2$, we then have \[ \dim(x^{\bar{G}} \cap {\bar{H}}) = l_1(\frac{n}{2}-m-l_1) + l_2(\frac{n}{2}+m-l_2) \leq \frac{1}{2} \dim x^{\bar{G}} + m(l_2-l_1) \leq\left( \frac{1}{2}+\frac{2}{n}\right) \dim x^{\bar{G}}. \] Now suppose that $x$ is a unipotent element of odd prime order $r=p$ and that $x$ has Jordan form on $V$ corresponding to the partition $(r^{a_r}, \dots , 1^{a_1}) \vdash n$. We have two further partitions $(r^{b_r}, \dots , 1^{b_1}) \vdash n/2 -m$ and $(r^{c_r}, \dots , 1^{c_1}) \vdash n/2 +m$ associated to $x$ because it preserves $V_1$ and $V_2$. Notice that $a_i = b_i+c_i$. By \cite[1.10]{LLS}, \[ \dim x^{\bar{G}} = \frac{ n^2+n}{2} - \frac{1}{2}\sum_{i=1}^r \Big(\sum_{k=i}^r a_k \Big)^2-\frac{1}{2} \sum_{i \textrm{ odd}} a_i. \] Hence, using \cite[p.698]{tim3}, we have \[ \dim (x^{\bar{G}} \cap {\bar{H}}) \leq \frac{1}{2} \dim x^{\bar{G}} + \frac{1}{4}(n- \sum_{i \textrm{ odd}} a_i) +m^2 \leq \left( \frac{1}{2}+\frac{1}{n+2}\right) \dim x^{\bar{G}} + m^2. \] Finally, we consider the case where $x$ is a unipotent involution. First suppose that $x$ is ${\bar{G}}$-conjugate to either $b_l$ or $c_l$ (as described in the preamble to the lemma). Then \cite[1.10]{LLS} implies that $\dim x^{\bar{G}} = l(n-l+1)$.
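The middle inequality in the semisimple-involution computation above is an exact identity in disguise: doubling both sides, one finds $l(n-l) + 2m(l_2-l_1) - 2\big[l_1(\frac{n}{2}-m-l_1) + l_2(\frac{n}{2}+m-l_2)\big] = (l_1-l_2)^2 \ge 0$. A brute-force sweep (illustrative only, over a finite range of parameters) confirms this.

```python
# Brute-force confirmation (illustrative) of the inequality
#   l1*(n/2 - m - l1) + l2*(n/2 + m - l2) <= (1/2)*l*(n - l) + m*(l2 - l1)
# used for semisimple involutions above; the doubled difference equals
# (l1 - l2)^2 exactly, for every choice of parameters.
for n in range(6, 41, 2):                 # n even, as in the symplectic case
    for m in (1, 2):
        half = n // 2
        for l1 in range(0, half - m + 1, 2):
            for l2 in range(0, half + m + 1, 2):
                l = l1 + l2
                lhs = 2 * (l1 * (half - m - l1) + l2 * (half + m - l2))
                rhs = l * (n - l) + 2 * m * (l2 - l1)
                assert rhs - lhs == (l1 - l2) ** 2
print("inequality verified on the sampled range")
```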
Let $x$ act on $V_i$ with associated partition $(2^{l_i}, 1^{c_i-l_i})$ for $i=1,2$, where $c_1 = n/2-m$ and $c_2=n/2+m$. Then \[ \dim (x^{\bar{G}} \cap {\bar{H}})= \dim x^{\bar{H}} \leq \frac{1}{2} \dim x^{\bar{G}} + \frac{l}{2}+m(l_2-l_1) \leq \left( \frac{1}{2}+\frac{2m+1}{n+2}\right) \dim x^{\bar{G}}. \] Lastly, if $x$ is ${\bar{G}}$-conjugate to $a_l$ for some $2\leq l \leq n/2$, then by \cite[1.10]{LLS}, $\dim x^{\bar{G}} = l(n-l)$. By the definition of an $a$-type involution, if $y \in x^{\bar{G}} \cap {\bar{H}}$ fixes a subspace $V_i$, then the restriction of $y$ to $V_i$ is conjugate to $a_{l_i}$ for some even integer $l_i \geq 0$. Therefore \[ \dim (x^{\bar{G}} \cap {\bar{H}}) = \dim x^{\bar{H}} \leq \frac{1}{2}\dim x^{\bar{G}} +m(l_2-l_1) \] and we determine that $l_2-l_1 < \frac{3l(n-l)}{2n}$, so \[ \dim (x^{\bar{G}} \cap {\bar{H}})= \dim x^{\bar{H}} \leq \left(\frac{1}{2}+\frac{3m}{2n}\right)\dim x^{\bar{G}}. \] This completes the proof of the lemma. \hal Now we embark on the proof of Theorem \ref{7/30}, considering in turn the various types of elements $x$ of prime order in the symplectic group $G$. We shall frequently use the notation for such elements given in \cite[\S 3.4]{bg}. Our approach in general is to find a function $\kappa(n)$ such that \begin{equation}\label{inkap} \frac{\log |x^G \cap H|}{\log |x^G|}< \frac{1}{2}+\kappa(n), \end{equation} where $\kappa(n)<\frac{7}{30}$ except possibly for some small values of $n$; these small values are then handled separately, usually by direct computation. \begin{lem}\label{ssodd} The conclusion of Theorem $\ref{7/30}$ holds when $x$ is a semisimple element of odd order. \end{lem} \pf Suppose $x \in H$ is a semisimple element of odd prime order $r$. Let $\mu = (\ell, a_1, \dots, a_k)$ be the tuple associated to $x$ (as defined in \cite[Definition 3.27]{tim2}), and define $i$ to be the smallest natural number such that $r \mid q^i-1$.
According to \cite[3.30]{tim2} this means that \[ |C_G(x)| = \left\{\begin{array}{l} |Sp_l(q)|\prod_{j=1}^k |GL_{a_j}(q^i)|,\;i \hbox{ odd} \\ |Sp_l(q)|\prod_{j=1}^k |GU_{a_j}(q^{i/2})|,\;i \hbox{ even}. \end{array} \right. \] Let $d$ be the number of non-zero $a_j$, and further define $e$ to be equal to 1 or 2 when $i$ is even or odd respectively. By Lemma \ref{myprop} and adapting the argument given in \cite[p.720]{tim3}, we have \begin{equation}\label{inn1} |x^G \cap H| < \left(\frac{n-l}{di}+1\right)^{d/e} 2^{d(e-1)} q^{\tfrac{1}{2} \dim x^{\bar{G}} + \tfrac{1}{4}(n-\ell)+m^2}. \end{equation} Furthermore, \cite[3.27]{tim2} implies that \begin{equation}\label{inn2} |x^G| \geq \frac{1}{2} \left( \frac{q}{q+1} \right)^{d(2-e)} q^{\dim x^{\bar{G}}}, \end{equation} and \cite[3.33]{tim2} gives the lower bound \begin{equation}\label{inn3} \dim x^{\bar{G}} \geq \frac{1}{2}(n^2+n-l^2-l-\frac{1}{ei}(n-l-i(d-e))^2-i(d-e)). \end{equation} First suppose $m=1$ (so that $n\equiv 2\hbox{ mod }4$). Then (\ref{inn1})--(\ref{inn3}) imply that the inequality (\ref{inkap}) holds with $\kappa(n) = \frac{3}{n}+\frac{1}{n+1}$. Note that $\kappa(n)< 7/30$ for $n\geq 18$. For $n=6,10,14$, we must either adjust our value of $\kappa(n)$ or compute $|x^G\cap H|$ and $|x^G|$ explicitly, since here $\frac{3}{n} + \frac{1}{n+1}>7/30$. For $n=14$, we find that (\ref{inkap}) holds with $\kappa(n) = 7/30$ for all choices of $(l,i,d)$ except $(l,i,d) = (0,1,2)$. In the latter case, $H =(Sp_8(q)\times Sp_6(q))/\{\pm I\}$ and $|C_G(x)| = |GL_{a_1}(q)|\, |GL_{a_2}(q)|$ with $a_1+a_2=7$. Hence \[ |x^G\cap H| = \sum_{b_i\le a_i,b_1+b_2=4} |Sp_8(q):GL_{b_1}(q) \times GL_{b_2}(q)| \cdot |Sp_6(q):GL_{a_1-b_1}(q) \times GL_{a_2-b_2}(q)|, \] and explicit computation gives $\log |x^G \cap H| /\log |x^G| <\frac{1}{2}+\frac{7}{30}$.
For $n=10$, (\ref{inkap}) holds with $\kappa(n)=7/30$ for all valid choices of $(l,i,d)$ except $(l,i,d)=(0,1,2)$ or $(0,1,4)$, and again explicit calculations as above give $\log |x^G \cap H| /\log |x^G| < \frac{1}{2}+\frac{7}{30}$. Finally, for $n=6$, we find that $\log |x^G \cap H| /\log |x^G| < \frac{1}{2}+\frac{7}{30}$ for all choices of $x$ with associated parameters $(l,i,d)$. Now suppose $m=2$. Then (\ref{inn1})--(\ref{inn3}) imply that (\ref{inkap}) holds with $\kappa(n) = \frac{79}{20(n+1)}$ (when $e=1$), and with $\kappa(n) =\frac{22}{5(n+2)}$ (when $e=2$). We have $\kappa(n) < \frac{7}{30}$ for $n\ge 20$. For $n<20$, explicit calculations of $|x^G\cap H|$ as above yield the conclusion. \hal \begin{lem}\label{ssinvol} The conclusion of Theorem $\ref{7/30}$ holds when $x$ is a semisimple involution. \end{lem} \pf Suppose that $x \in H$ is a semisimple involution. Denote by $s$ the codimension of the largest eigenspace of $x$ on $V = V_n(K)$. According to \cite[3.37]{tim2}, $|C_G(x)|$ is equal to $|Sp_s(q)|\,|Sp_{n-s}(q)|$, $|Sp_{n/2}(q)|^2.2$, $|Sp_{n/2}(q^2)|.2$ or $|GL_{n/2}^\e(q)|.2$, with $s<\frac{n}{2}$ in the first case, and $s=\frac{n}{2}$ in the latter three cases. Suppose $x$ is as in one of the first two cases. Adapting the analogous argument given in \cite[p.720]{tim3}, we deduce that \[ |x^G \cap H | < 4\left(\frac{q^2+1}{q^2-1} \right) q^{\frac{s(n-s)}{2} - m(1-m)},\;\; |x^G| > \frac{1}{2}q^{s(n-s)} \] (the constant $\frac{1}{2}$ in the second inequality should be replaced by $\frac{1}{4}$ when $s=\frac{n}{2}$). These bounds imply that (\ref{inkap}) holds with \[ \kappa(n) = \left\{\begin{array}{l} \frac{2}{n}, \hbox{ if }s<\frac{n}{2}, m=1 \\ \frac{3}{n+1}, \hbox{ if }s<\frac{n}{2}, m=2 \\ \frac{3}{2n}, \hbox{ if }s=\frac{n}{2}, n\ge 12. \end{array} \right. \] For $n\ge 12$ we have $\kappa(n)< \frac{7}{30}$, giving the conclusion. 
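The assertion that each candidate value of $\kappa(n)$ above lies below $\frac{7}{30}$ once $n\ge 12$ is elementary; a quick exact-arithmetic sweep (illustrative only) confirms it, and shows the threshold cannot be lowered to $n=10$.

```python
# The three candidate kappa(n) values for semisimple involutions all lie
# below 7/30 once n >= 12; each expression is decreasing in n, so checking
# a range of even n is enough to illustrate the claim.
from fractions import Fraction

for n in range(12, 201, 2):
    kappas = [Fraction(2, n), Fraction(3, n + 1), Fraction(3, 2 * n)]
    assert max(kappas) < Fraction(7, 30)
# the threshold is needed: at n = 10 the middle expression already fails
assert Fraction(3, 11) > Fraction(7, 30)
print("kappa bounds verified for even n in [12, 200]")
```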
And for smaller values of $n$, we obtain the conclusion by explicit calculation of the values of $|x^G\cap H|$ and $|x^G|$. Next suppose $|C_G(x)| = |Sp_{n/2}(q^2)|.2$. Then $|x^G| > \frac{1}{4}q^{n^2/4}$ by \cite[3.37]{tim2}. If $\frac{n}{4}$ is even then $x^G\cap H= \emptyset$, so assume $\frac{n}{4}$ is odd. An argument analogous to that at the top of p.722 of \cite{tim3} for this case gives $|x^G\cap H| < \frac{1}{4}q^{(n^2/8)+2}$. These bounds imply that (\ref{inkap}) holds with $\kappa(n) = \frac{2}{n}$, and this is less than $\frac{7}{30}$ for all $n\ge 12$. Finally, suppose that $|C_G(x)| = |GL_{n/2}^\e(q)|.2$. Again \cite[3.37]{tim2} and arguments of \cite[p.722]{tim3} give \[ |x^G| > \frac{1}{4}\left(\frac{q}{q+1}\right) q^{\frac{1}{4}n(n+2)},\;\; |x^G\cap H| < \frac{1}{4} q^{\frac{n^2}{8}+\frac{n}{2}+\frac{m^2}{2}}. \] Hence (\ref{inkap}) holds with $\kappa(n) = \frac{5}{2n}$, which is less than $\frac{7}{30}$ for $n>10$, and for $n\le 10$ we obtain the conclusion as usual by explicit calculation of $|x^G\cap H|$ and $|x^G|$. \hal \begin{lem}\label{unipodd} The conclusion of Theorem $\ref{7/30}$ holds when $x$ is a unipotent element of odd order. \end{lem} \pf Let $x \in H$ be a unipotent element of order $p$, and suppose $p$ is odd. Let the Jordan form of $x$ on $V$ correspond to the partition $\lambda \vdash n$. By Lemma \ref{myprop}, \begin{equation} \label{unidim} \dim{x^{\bar{H}}} \leq \frac{1}{2} \dim x^{\bar{G}} + \frac{1}{4}(n-e)+m^2, \end{equation} where $e$ is the number of odd parts in $\lambda$. \noindent \textbf{Case $\lambda = (k^{n/k})$} Since $k$ must divide both $n/2-m$ and $n/2+m$, we have $k=2$ or 4 (the latter only if $m=2$). Arguing as at the bottom of p.722 of \cite{tim3}, we have $\dim x^{\bar{G}} \geq \tfrac{1}{4}n(n+2)$, and also \[ |x^G| > \frac{q}{q+1} q^{\dim x^{\bar{G}}},\;\; |x^G \cap H| = |x^H| < 4 q^{\dim x^{\bar{H}}} \leq 4 q^{\frac{1}{2} \dim x^{\bar{G}}+ \frac{1}{4}(n-e)+m^2}. 
\] These bounds imply that (\ref{inkap}) holds with $\kappa(n)=\frac{3}{n+1}$, which is less than $\frac{7}{30}$ for $n\ge 14$. As usual, for smaller values of $n$ we obtain the result by explicit computation of $|x^G\cap H|$ and $|x^G|$. \vspace{2mm} \noindent \textbf{Case $\lambda = (2^j, 1^{n-2j})$, $n-2j>0$} First suppose that $j=1$. Then $|x^G| > \frac{1}{4} q^n$ and $|x^G \cap H | < q^{n/2 +m} + q^{n/2-m}$. This implies that $\frac{\log |x^G \cap H|}{\log |x^G|}< \frac{1}{2}+\frac{7}{30}$ for all values of $n\ge 6$ except $n=8$. The case $n=8$ is the exception in part (ii) of Theorem \ref{7/30}. Next suppose that $j=2$. Here $|x^G| > \frac{1}{4(q+1)} q^{2n-1}$. Since the two Jordan blocks of size 2 can lie in the two different subspaces $V_1$ and $V_2$, or in the same one, we have \[ |x^G \cap H| < q^{(n-2m)/2 + (n+2m)/2} + 2q^{n-4+m(m-1)} + 2q^{n+m(m-1)}. \] Hence (\ref{inkap}) holds with $\kappa(n)=\frac{3}{n+1}$, which is less than $\frac{7}{30}$ for $n\ge 12$. For smaller values of $n$ we obtain the conclusion by explicit computations of $|x^G\cap H|$ and $|x^G|$. Finally, assume $j \geq 3$ (and so $n\geq 8$ since $n-2j>0$). The number of ways to distribute the $j$ Jordan blocks of size 2 amongst the subspaces $V_1,V_2$ is at most $j+1$. Then, adapting the analogous bound in \cite[p.723]{tim3} and making use of Lemma \ref{myprop}, we have \[ |x^G \cap H| < 4(j+1) q^{ \dim x^{\bar{G}}/2+ j/2+m^2} \] and as in \cite[p.723]{tim3}, we have $|x^G|> \frac{1}{4} q^{\dim x^{\bar{G}}} = \frac{1}{4} q^{j(n-j+1)}$. This yields (\ref{inkap}) with $\kappa(n) = \frac{4}{n+2}$, which is less than $\frac{7}{30}$ for $n\ge 16$. As usual, smaller values of $n$ are handled by direct computation. \vspace{2mm} \noindent \textbf{Case $\lambda = (k^{a_k}, \dots , 2^{a_2}, 1^l)$, $k \le n/2+m$} In the computations below, we adapt the arguments on p.723 of \cite{tim3}. Let $d$ be the number of non-zero $a_i$.
Then \[ |x^G| > \frac{1}{2^{d+1}} \left( \frac{q}{q+1}\right)^d q^{\dim x^{\bar{G}}}. \] If $d=1$ then $\lambda = (k^{(n-l)/k}, 1^l)$, and we can take $k>2$ by the previous case. By \cite[1.10]{LLS}, we have \[ \dim x^{\bar{G}} = \frac{n^2}{2}+ \frac{n}{2} - \frac{l(n-l)}{k}-\frac{l^2}{2}-\frac{1}{2k}(n-l)^2-\frac{l}{2}-\frac{\alpha}{2k}(n-l), \] where $\alpha$ is zero if $k$ is even and one if $k$ is odd. Arguing as in \cite[p.723]{tim3} we also have \[ |x^G \cap H| < \left(\frac{n-l}{k} +1\right) 2^2 q^{\dim x^{\bar{G}}/2+ (n-l)(1-\alpha/k)/4 +m^2}. \] These bounds imply (\ref{inkap}) with $\kappa(n) = \frac{3}{n-3}$, which is less than $\frac{7}{30}$ for $n\ge 16$, and smaller values of $n$ are handled by explicit computation. Now suppose that $d \geq 2$. By \cite[p. 723]{tim3}, \[ \dim x^{\bar{G}}\geq \frac{1}{4}n^2 + \frac{1}{4}(d^2-d+2) - \frac{1}{16}d^4-\frac{1}{24} d^3 + \frac{3}{16}d^2-\frac{1}{3}d-\frac{1}{4}l^2-\frac{1}{2}, \] and adapting the analogous bound given in \cite[p.723]{tim3} and referring to Lemma \ref{myprop}, we have \[ |x^G\cap H| < 4^d \left(\frac{n/2-d^2/4+d/4-l/2-1}{d} +1 \right)^d q^{\tfrac{1}{2}\dim x^{\bar{G}} + (n-l)/4+m^2}. \] These bounds give (\ref{inkap}) with $\kappa(n)=\frac{4}{n}$, which is less than $\frac{7}{30}$ for $n\ge 18$, and smaller values of $n$ are handled by explicit computation. \hal \begin{lem}\label{unipinv} The conclusion of Theorem $\ref{7/30}$ holds when $x$ is a unipotent involution. \end{lem} \pf Let $p=2$, and recall the description of the involution class representatives $a_l,b_l,c_l$ of $G$ in the preamble to Lemma \ref{myprop}. First assume that $x$ is conjugate to $a_l$ for some even integer $l$ with $2\leq l \leq n/2$. If $l=2$, then by \cite[1.10]{LLS} and \cite[Proposition 3.9]{tim2} we have \begin{equation} \label{l=2} |x^G \cap H| < 2 q^{2(n/2-m-2)} + 2q^{2(n/2+m-2)}.
\end{equation} If $l\geq 4$ then we may adapt the analogous equation in \cite[p.723]{tim3} and obtain \[ |x^G \cap H| < (\frac{l}{2}+1) 2^2 q^{(\frac{1}{2} + \frac{3m}{2n}) l(n-l)}. \] Furthermore, for all $l$, by \cite[p.723]{tim3} \[ |x^G| > \frac{1}{2} q^{l(n-l)}. \] These bounds imply that $\frac{\log |x^G \cap H|}{\log |x^G|}< \frac{1}{2}+\frac{7}{30}$, provided $n\ge 14$ when $l=2$, and $n\ge 24$ when $l\ge 4$. Smaller values of $n$ can be dealt with by explicit computation of $|x^G\cap H|$ and $|x^G|$. Now suppose that $x$ is conjugate to either a $b_l$- or $c_l$-type involution. If $l=1$ then by \cite[1.10]{LLS} and \cite[Proposition 3.9]{tim2} \begin{equation} |x^G \cap H| < q^{n/2-m}+ q^{n/2+m}, \label{l=1} \end{equation} and if $l=2$, then \begin{equation} \label{l=22} |x^G\cap H| < q^n + q^{2(n/2-m-1)}+ q^{2(n/2+m-1)}. \end{equation} If $l\geq 3$, then by adapting the analogous argument in \cite[p.724]{tim3}, we deduce \[ |x^G \cap H|< 4\left(\frac{q^2+1}{q^2-1} \right) (q^{\tfrac{1}{2}\dim x^{\bar{G}}+2m-1}+q^{\tfrac{1}{2}\dim x^{\bar{G}}+m-1}) +4 \left( \frac{q^2+1}{q^2-1}\right) q^{\tfrac{1}{2} \dim x^{\bar{G}} + l/2+ m} \] where $\dim x^{\bar{G}} = l(n-l+1)$. Lastly, \cite[p. 724]{tim3} gives \[ |x^G| > \frac{1}{2} q^{l(n-l+1)}. \] As usual, these bounds imply that $\frac{\log |x^G \cap H|}{\log |x^G|}< \frac{1}{2}+\frac{7}{30}$ for $n\ge 14$, and explicit computations give the same conclusion for smaller values of $n$. \hal This completes the proof of Theorem \ref{7/30}. \subsection{Deduction of Theorem \ref{nrbase}}\label{ded} The deduction of Theorem \ref{nrbase} from Theorem \ref{7/30} proceeds along the lines of the proof of \cite[1.1]{Bur}. First we shall require a small extension of \cite[Prop. 2.2]{Bur}. For a finite group $G$, define \[ \eta_G(t) = \sum_{C\in \mathcal{C}} |C|^{-t} \] where $\mathcal{C}$ is the set of conjugacy classes of elements of prime order in $G$.
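To illustrate the definition of $\eta_G(t)$ with a toy example (using $A_5$, which is of course not one of the groups of Theorem \ref{nrbase}): $A_5$ has classes of prime-order elements of sizes $15$, $20$, $12$, $12$, and $\eta_{A_5}(t)$ is a decreasing function of $t$ since every class has size greater than $1$.

```python
# Toy illustration of eta_G(t) = sum over prime-order classes C of |C|^(-t),
# computed for G = A_5 (an example only; A_5 is not among the groups in the
# theorem).  A_5 has one class of involutions (size 15), one of 3-cycles
# (size 20) and two classes of 5-elements (size 12 each).
def eta(class_sizes, t):
    return sum(c ** (-t) for c in class_sizes)

a5_prime_order_classes = [15, 20, 12, 12]
value = eta(a5_prime_order_classes, 1 / 3)
assert 1.6 < value < 1.7           # eta_{A_5}(1/3) is well above 1
# eta decreases as t grows, since every class size exceeds 1
assert eta(a5_prime_order_classes, 1.0) < value
print(round(value, 3))
```

In the lemma that follows, the point is precisely that for the groups at hand $\eta_G(\frac{1}{3})$ falls below $1$.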
\begin{lem}\label{zeta} Let $G$ be a finite classical group as in Theorem $\ref{nrbase}$, with $n\ge 6$. \begin{itemize} \item[{\rm (i)}] Then $\eta_G(\frac{1}{3}) < 1$. \item[{\rm (ii)}] Let $G = PGSp_8(q)$. Then $\eta_G(\frac{1}{3}) < 0.396$. \end{itemize} \end{lem} \pf (i) This is \cite[Prop. 2.2]{Bur}. (ii) We compute the sizes of the conjugacy classes with each centraliser type using \cite[Table B.7]{bg}, and bound the number of classes with each centraliser type using the same arguments as those given in the proof of \cite[Lemma 3.2]{Bur}. The result follows from these computations. \hal We also need to cover separately the two cases of Theorem \ref{nrbase} for dimensions less than 6. \begin{lem}\label{u4u5} Theorem $\ref{nrbase}$ holds for $G_0 = PSU_4(q)$ or $PSU_5(q)$. \end{lem} \pf Consider the first case. Here $G = PGU_4(q)$, acting on ${\mathcal N}_1$, the set of non-degenerate 1-spaces. Let $v_1,\ldots ,v_4$ be an orthonormal basis of the natural module for $G$. If $q$ is odd, then $\langle v_1\R$, $\langle v_2\R$, $\langle v_3\R$, $\langle v_1+v_2+v_3+v_4\R$ is a base for the action of $G$; and if $q$ is even, then $\langle v_1\R$, $\langle v_2\R$, $\langle v_3\R$, $\langle v_1+v_2+v_3 \R$, $\langle v_2+v_3+v_4\R$ is a base. Now let $G = PGU_5(q)$ acting on ${\mathcal N}_2$. Let $v_1,\ldots ,v_5$ be an orthonormal basis. Any element of $G$ that fixes the three non-degenerate 2-spaces $\langle v_1,v_2\R$, $\langle v_2,v_3\R$ and $\langle v_3,v_4\R$ also fixes $\langle v_1,v_5\R$ and $\langle v_4,v_5\R$ (as these are $\langle v_2,v_3, v_4\R^\perp$ and $\langle v_1,v_2, v_3\R^\perp$), hence fixes all the 1-spaces $\langle v_1\R,\ldots ,\langle v_5\R$. Hence adding two further non-degenerate 2-spaces intersecting in $\langle v_1+ \cdots +v_5\R$ to the first three gives a base of size 5.
\hal \vspace{2mm} {\it Proof of Theorem \ref{nrbase}.} Let $G,r$ be as in the statement of Theorem \ref{nrbase}, and let $H$ be the stabilizer of a non-degenerate $r$-subspace in ${\mathcal N}_r$. In view of Lemma \ref{u4u5}, we may assume that the dimension $n\ge 6$. For a positive integer $c$, let $Q(G,c)$ be the probability that a randomly chosen $c$-tuple of elements of ${\mathcal N}_r$ does not form a base for $G$. Then \begin{equation}\label{qgc} Q(G,c) \leq \sum_{x \in X} |x^G| \left( \frac{{\rm fix}_{{\mathcal N}_r}(x)}{|{\mathcal N}_r|}\right)^c = \sum_{x \in X} |x^G| \left( \frac{|x^G\cap H|}{|x^G|}\right)^c, \end{equation} where $X$ is a set of conjugacy class representatives of the elements of $G$ of prime order. Clearly $G$ has a base of size $c$ if and only if $Q(G,c)<1$. Assume for the moment that $G_0 \ne PSp_8(q)$. Then by Theorem \ref{7/30} we have \[ \frac{|x^G\cap H|}{|x^G|} < |x^G|^{-\frac{1}{2}+\frac{7}{30}} \] for all elements $x \in G$ of prime order. Hence it follows from (\ref{qgc}) that \[ Q(G,5) < \sum_{x \in X} |x^G|^{1 +5\left(-\frac{1}{2}+\frac{7}{30}\right)} = \eta_G(1/3). \] Therefore by Lemma \ref{zeta}(i), $G$ has a base of size 5, as required. It remains to consider the case where $G_0 = PSp_8(q)$. Here Theorem \ref{7/30} gives $\frac{|x^G\cap H|}{|x^G|}< |x^G|^{-\frac{1}{2}+\frac{7}{30}}$ for all elements $x\in G$ of prime order, except when $x$ is a unipotent element with Jordan form $(2, 1^6)$. In the latter case $|x^G| = q^8-1$ and $|x^G\cap H| = q^6+q^2-2$. Hence \[ Q(G,5) < \eta_G(1/3) + (q^8-1)\left(\frac{q^6+q^2-2}{q^8-1}\right)^{5}, \] and this is less than 1 for all $q$, by Lemma \ref{zeta}(ii). \vspace{2mm} This completes the proof of Theorem \ref{nrbase}. \section{Proof of Theorem \ref{main}} Assume the hypotheses of Theorem \ref{main}. Thus $G \le GL(V) = GL_d(q)$, and $E(G)$ is quasisimple and absolutely irreducible on $V$. Then the group $Z:=Z(G)$ consists of scalars, and $G/Z$ is almost simple.
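The final numerical claim in the proof of Theorem \ref{nrbase} above, for $G_0 = PSp_8(q)$, can be checked directly: with $|x^G| = q^8-1$ and $|x^G\cap H| = q^6+q^2-2$ as stated, the exceptional term behaves like $q^{-2}$, so it is decreasing in $q$ and already small at $q=2$. The following numerical sweep is an illustration over a finite range of $q$, not a proof for all $q$.

```python
# Direct check (illustrative) of the exceptional PSp_8(q) term: with
# |x^G| = q^8 - 1 and |x^G cap H| = q^6 + q^2 - 2, the contribution
# (q^8 - 1) * ((q^6 + q^2 - 2)/(q^8 - 1))^5, added to the bound
# eta_G(1/3) < 0.396 from the zeta lemma, stays below 1.
def exceptional_term(q):
    cls, fix = q ** 8 - 1, q ** 6 + q ** 2 - 2
    return cls * (fix / cls) ** 5

for q in range(2, 200):        # covers all prime powers q < 200
    assert exceptional_term(q) + 0.396 < 1
# the term is roughly fix^5 / cls^4 ~ q^(-2), hence decreasing in q,
# so q = 2 is the worst case on any range
assert exceptional_term(2) > exceptional_term(3) > exceptional_term(4)
print("Q(PGSp_8(q), 5) bound below 1 on the sampled range")
```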
Let $G_0$ be the socle of $G/Z$. \begin{lem}\label{sporadexcep} If $G_0$ is exceptional of Lie type or sporadic, then $b(G) \le 6$. \end{lem} \pf Pick $v \in V\setminus 0$, and consider the action of $G$ on the orbit $\D = v^G$. By Lemma \ref{sporex}(i), if $G_0 \ne M_{24}$ then there exist $Z$-orbits $\d_1,\ldots ,\d_6$ such that $G_{\d_1\cdots \d_6} \le Z$. Hence $b(G) \le 6$. The case where $G_0=M_{24}$ is taken care of in Remark \ref{m24} below. \hal \begin{lem}\label{alter} Theorem $\ref{main}${\rm (i)} or {\rm (iii)} holds if $G_0$ is an alternating group. \end{lem} \pf This follows from \cite[Theorem 1.1]{FOS}. \hal \vspace{4mm} In view of the previous two lemmas, we can suppose from now on that $G_0$ is a classical simple group. Assume that \begin{equation}\label{b8} b(G) \ge 7. \end{equation} We aim to show that conclusion (ii) of Theorem \ref{main} must hold. By the above assumption, the dimension $d\ge 7$, and also every element of $V^{6}$ is fixed by some element of prime order in $G\setminus Z$, and so \begin{equation}\label{v7} V^{6} = \bigcup_{g \in {\mathcal P}} C_{V^{6}}(g), \end{equation} where ${\mathcal P}$ denotes the set of elements of prime order in $G\setminus Z$. Now $|C_{V^{6}}(g)| = |C_{V}(g)|^{6}$, and \begin{equation}\label{alph} \dim C_V(g) \le \left\lfloor (1-\frac{1}{\a(g)}) \dim V \right \rfloor, \end{equation} where $\a(g)$ is as defined in the preamble to Lemma \ref{gusa} (strictly speaking, it is $\a(gZ)$ for $gZ \in G/Z$). Writing $\a = \a(G_0)$, it follows that \[ |V|^{6} = q^{6d} \le |{\mathcal P}|\,q^{6\lfloor d(1-\frac{1}{\a}) \rfloor}. \] Since $|G| = |Z|\,|G/Z| \le (q-1)\,|{\rm Aut}(G_0)|$, we therefore have \begin{equation}\label{crude} q^{6\lceil d/\a \rceil } \le |{\mathcal P}| < |G| \le (q-1)\,|{\rm Aut}(G_0)|. 
\end{equation} \begin{rem}\label{m24} {\rm Using (\ref{crude}) we can handle the case $G_0=M_{24}$ as follows, completing the proof of Lemma \ref{sporadexcep}: we have $\a(M_{24}) \le 4$ by \cite[2.4]{good}, so (\ref{crude}) yields $\frac{6}{4}d < \log_2|M_{24}|$, hence $d\le 18$. By \cite{HM}, this forces $d=11, q=2$, so $G = M_{24} < GL_{11}(2)$. Here $V$ or $V^*$ is a quotient of the binary Golay code of length 24, dimension 12, by a trivial submodule, and we see from \cite[p.94]{atlas} that there is a $G$-orbit on $V$ of size 276 or 759 on which $G$ acts primitively. The base sizes of these actions of $M_{24}$ are less than 7, by \cite{BOW}, and the conclusion follows.} \end{rem} Let $q=p^a$, where $p$ is prime. The analysis divides naturally, according to whether or not the underlying characteristic of $G_0$ is equal to $p$ -- that is, whether or not $G_0$ is in the set ${\rm Lie}(p)$. \begin{lem}\label{liepdash} Under the above assumption $(\ref{b8})$, $G_0$ is not in ${\rm Lie}(p')$. \end{lem} \pf Suppose $G_0 \in {\rm Lie}(p')$. Lower bounds for $d = \dim V$ are given by \cite{LaS, SZ}, and the values of $\a$ by Lemma \ref{gusa}. Plugging these into (\ref{crude}) (and also using the fact that $d\ge 7$), we see that $G_0$ must be one of the following: \[ \begin{array}{l} PSp_4(3),\, PSp_4(5),\,Sp_6(2),\,PSp_6(3),\,PSp_8(3), \, PSp_{10}(3),\\ U_3(3),\, U_4(3),\, U_5(2), \\ \O_7(3),\, \O_8^+(2). \end{array} \] At this point we use \cite{HM}, which gives the dimensions and fields of definition of all the irreducible projective representations of the above groups of dimension up to 250. Combining this information with (\ref{crude}) leaves just the following possibilities: \[ \begin{array}{|l|l|l|} \hline G_0 & d & q \\ \hline U_5(2) & 10 & 3 \\ U_4(3) & 20 & 2 \\ Sp_6(2) & 7,8& q\le 11 \\ & 14 & 3 \\ \O_8^+(2) & 8 & q\le 29 \\ \hline \end{array} \] Consider first $G_0=U_5(2)$. 
Here $G = \langle -I\rangle \times U_5(2).2 < GL_{10}(3)$, and the Brauer character of this representation of $G$ is given in \cite{atlas}. From this we can read off the dimensions of the fixed point spaces of $3'$-elements of prime order. These are as follows, using Atlas notation: \[ \begin{array}{r|c|c|c|c|c} g & 2A,-2A & 2B,-2B & 2C,-2C & 5A & 11AB \\ \hline \dim C_V(g) & 2,8 & 6,4 & 5,5 & 2 & 0 \\ \end{array} \] Also $\a\le 5$ by Lemma \ref{gusa}, so (\ref{alph}) gives $\dim C_V(g)\le 8$ for all elements $g\in G$ of order 3. At this point, the inequality $|V|^6 \le \sum_{g\in {\mathcal P}} |C_V(g)|^6$ implied by (\ref{v7}) gives \[ 3^{60} \le |2A|\cdot (3^{12}+3^{48}) + |2B|\cdot (3^{24}+3^{36}) + |2C|\cdot (3^{30}+3^{30}) + |5A|\cdot 3^{12} + |3ABCDEF|\cdot 3^{48}, \] where $|2A|$ denotes the size of the conjugacy class of $2A$-elements, and so on. This is a contradiction. This method works for all the cases in the above table, except $(G_0,d,q) = (\O_8^+(2),8,3)$; in this case the crude inequality $|V|^6 \le \sum_{g\in {\mathcal P}} |C_V(g)|^6$ implied by (\ref{v7}) does not yield a contradiction. Here we have $G \le 2.O_8^+(2) < GL(V) = GL_8(3)$. Observe that $O_8^+(2)$ has a subgroup $N = S_3\times O_6^-(2)$, and $N$ is the normalizer of $\langle x\rangle$, where $x$ is an element of order 3. Then $C_V(x) \ne 0$, and $N$ must fix a 1-space in $C_V(x)$. Moreover, we compute that the minimal base size of $O_8^+(2)$ acting on the cosets of $N$ is equal to 4. It follows that there are four 1-spaces in $V$ whose pointwise stabilizer in $G$ is contained in $Z$. Hence $b(G) \le 4$ in this case. \hal \vspace{2mm} In view of the previous lemmas, from now on we may assume that $G_0 = Cl_n(q_0)$, a classical group over a field $\F_{q_0}$ of characteristic $p$. Recall that $G \le GL(V) = GL_d(q)$ and $G_0 = {\rm soc}(G/Z)$. The next lemma identifies the possible highest weights for $V$ as a module for the quasisimple classical group $E(G)$.
\begin{lem}\label{liep} Suppose as above that $G_0 = Cl_n(q_0)$, a classical group in ${\rm Lie}(p)$. Then $\F_{q_0}$ is a subfield of $\F_q$, and one of the following holds: \begin{itemize} \item[(1)] $V = V(\lambda)$, where $\lambda$ is one of the following high weights (listed up to automorphisms of $G_0)$: \[ \lambda_1,\,\lambda_2,\,2\lambda_1,\,\lambda_1+p^i\lambda_1,\,\lambda_1+p^i\lambda_{n-1} \,(i>0) \] (the last one only for $G_0 = L_n^\e(q_0)$); \item[(2)] $G_0 = L_n^\e(q_0)\,(n\ge 3)$, $V = V(\lambda_1+\lambda_{n-1})$; \item[(3)] $G_0 = L_n(q_0)\,(7\le n\le 21)$ and $V = V(\lambda_3)$; \item[(4)] $G_0 = L_6^\e(q_0)$ and $V = V(\lambda_3)$; \item[(5)] $G_0 = L_8^\e(q_0)$ and $V=V(\lambda_4)$; \item[(6)] $G_0 = PSp_6(q_0)$ and $V = V(\lambda_3)$ ($p$ odd); \item[(7)] $G_0 = PSp_8(q_0)$ and $V = V(\lambda_3)$ ($p$ odd) or $V(\lambda_4)$ ($p$ odd); \item[(8)] $G_0 = PSp_{10}(q_0)$ and $V=V(\lambda_3)$ ($p=2$); \item[(9)] $G_0 = P\O_n^\e (q_0)\,(7\le n\le 20,\,n\ne 8)$ and $V$ is a spin module. \end{itemize} \end{lem} \pf Assume first that $q_0>q$. Then by \cite[5.4.6]{KL}, there is an integer $s\ge 2$ such that $q_0 = q^s$ and $d=m^s$, where $m$ is the dimension of an irreducible module for $E(G)$. Note that $m\ge n$ (by the minimal choice of $n$). By (\ref{crude}), \[ q^{6m^s/\a} \le (q-1)\,|{\rm Aut}(Cl_n(q^s))|. \] Lemma \ref{gusa} shows that $\a \le n+2$ (excluding the small groups in Lemma \ref{gusa}(vii)), and hence \[ q^{6m^s/(n+2)} \le (q-1)\,|{\rm Aut}(Cl_n(q^s))| < (q-1)\,q^{s(n^2-1)}\,(2s\log_pq). \] Since $m\ge n$, it follows from this that $s=2$ and \[ m^2 < \frac{(n+2)(2n^2+1)}{6}. \] Now using \cite{Lu}, we deduce that $m=n$ and so \[ E(G) \le SL_n(q^2) < SL_{n^2}(q). \] As in \cite[p.104]{LSbase}, we see that there is a vector $v$ such that $E(G)_v \le SU_n(q)$. By Lemma \ref{sporex}, the base size of an almost simple group with socle $L_n(q^2)$ acting on the cosets of a subgroup containing $U_n(q)$ is at most 4. 
Hence there are 1-spaces $\d_1,\ldots,\d_4$ whose pointwise stabilizer in $G$ is contained in $Z$, and so $b(G) \le 4$ in this case. This contradicts our initial assumption that $b(G)\ge 7$. Hence we may assume now that $q_0\le q$, so that $\F_{q_0}$ is a subfield of $\F_q$ by \cite[5.4.6]{KL}. Now (\ref{crude}) gives \begin{equation}\label{abd} d < \frac{\a}{6}\left(1+\log_q |{\rm Aut}(G_0)|\right). \end{equation} Noting that apart from the case where $G_0 = P\O_8^+(q_0)$, we have $|{\rm Out}(G_0)| \le q$, it now follows using Lemma \ref{gusa} that $d < N$, where $N$ is as defined in Table \ref{Ndef}. In the last row of the table, $\d$ is $\log_q6$ if $G_0 = P\O_8^+(q_0)$, and is 0 otherwise. \begin{table}[h] \caption{}\label{Ndef} \[ \begin{array}{|l|l|} \hline G_0 & N \\ \hline L_n^\e(q_0) & \frac{1}{6}(n+2)(1+n^2),\;n\le 4 \\ & \frac{1}{6}n(1+n^2),\;n> 4 \\ PSp_n(q_0),\,n\ge 4 & \frac{1}{6}(n+1)\left(2+\frac{1}{2}n(n+1)\right), \;n>4 \\ P\O_n^\e(q_0),\,n\ge 7 & \frac{1}{6}n\left(2+\frac{1}{2}n(n-1)\right)+\d \\ \hline \end{array} \] \end{table} Now applying the bounds in \cite{Lu} (and also the improved bound for type $A$ in \cite{mart}), we see that with one possible exception, one of the cases (1)-(9) in the conclusion holds. The possible exception is $G_0 = L_4^\e(q_0)$ with $p=3$ and $V=V(\lambda_1+\lambda_2)$, of dimension 16. But in this case $G$ does not contain a graph automorphism of $G_0$ (since the weight $\lambda_1+\lambda_2$ is not fixed by a graph automorphism), and so \cite[4.1]{GS} implies that we can take $\a=4$ in (\ref{abd}), and this rules out this case. \hal \begin{lem}\label{3to8} Under the above assumption $(\ref{b8})$, $G_0$ is not as in $(3)-(9)$ of Lemma $\ref{liep}$. \end{lem} \pf Suppose $G_0$ is as in (3)--(9) of Lemma \ref{liep}. First we consider the actions of the simple algebraic groups $\bar G$ over $K = \bar \F_q$ corresponding to $G_0$ on the $K\bar G$-modules $\bar V = V \otimes K= V_{\bar G}(\lambda)$. 
Define \[ M_\lambda = {\rm min}\left\{{\rm codim}V_\g(g)\, |\, \g \in K^*,\,g \in \bar G\setminus Z(\bar G) \right\}. \] By Lemma \ref{lawth}, a lower bound for $M_\lambda$ is given by ${\rm min}(s_\lambda,s_{\lambda'})$, and simple calculations give the following lower bounds: \[ \begin{array}{|l|l|l|} \hline \bar G & \lambda & M_\lambda \ge \\ \hline A_n\,(n\ge 5) & \lambda_3 & \frac{1}{2}(n-1)(n-2) \\ A_7 & \lambda_4 & 20 \\ C_3 & \lambda_3\,(p>2) & 4 \\ C_4 & \lambda_3\,(p>2) & 12 \\ & \lambda_4\,(p>2) & 13 \\ C_5 & \lambda_3\,(p=2) & 24 \\ D_n\,(n\ge 5) & \lambda_{n-1},\lambda_n & 2^{n-3} \\ B_n\,(n\ge 3) & \lambda_n & 2^{n-2} \\ \hline \end{array} \] Apart from cases (4) and (5) of Lemma \ref{liep}, the group $G/Z$ is contained in $\bar G/Z$; in cases (4) and (5), a graph automorphism of $\bar G$ may also be present. Thus excluding (4) and (5), we see that (\ref{v7}) gives \begin{equation}\label{mbd} q^{6M_\lambda} \le |G|. \end{equation} The bounds for $M_\lambda$ in the above table now give a contradiction, except when $\bar G = D_n\,(n\le 6)$ or $B_n\,(n\le 5)$. We now consider the cases $\bar G = D_n\,(n\le 6)$ or $B_n\,(n\le 5)$. Since $B_{n-1}(q) < D_n(q) < GL(V)$, it suffices to deal with $\bar G = D_6, D_5$ or $B_3$. Suppose $G_0 = D_6^\e(q_0)$ with $\F_{q_0} \subseteq \F_q$. By Lemma \ref{d56}(i), for any element $g\in G$ that is not a scalar multiple of a root element, we have ${\rm codim}C_V(g) \ge 12$; and for root elements $u$, from the above table we have ${\rm codim}C_V(u) \ge 8$. The number of root elements in $G_0$ is less than $2q^{18}$. Hence (\ref{v7}) gives \[ |V|^6 = q^{32\times 6} \le 2q^{18}(q-1)\cdot q^{24\times 6} + |G|q^{20\times 6}, \] which is a contradiction. Now suppose $G_0 = D_5^\e(q_0)$. We perform a similar calculation, using Lemma \ref{d56}(ii). The number of semisimple elements $s$ of $G$ for which $C_{\bar G}(s)'=A_4$ is at most $|Z|\cdot (q-1)|D_5^\e (q):A_4^\e (q).(q-1)| < 2q^{22}$. 
The number of root elements in $G_0$ is less than $2q^{14}$, and the number of unipotent elements in the class $(A_1^2)^{(1)}$ is less than $2q^{20}$ (these have centralizer in $D_5^\e(q)$ of order $q^{14}|Sp_4(q)|(q-\e)$, see \cite[Table 8.6a]{LSbk}). Moreover, the total number of unipotent elements is at most $q^{40}$. Hence (\ref{v7}) together with Lemma \ref{d56}(ii) gives \[ q^{16\times 6} \le 2(q^{14}+q^{20})(q-1) q^{12\times 6} + q^{40}(q-1) q^{8\times 6} + 2q^{22} q^{10\times 6} + |G|q^{8\times 6}. \] This is a contradiction. Next consider $G_0 = B_3(q)$. In the action on the spin module $V$, there is a vector $v$ with stabilizer $G_2(q)$ in $B_3(q)$. Hence $b(G)\le 4$ in this case, by Lemma \ref{sporex}(ii). It remains to handle the cases (4), (5), where $G$ may contain graph automorphisms of $\bar G$. For $G_0 = L_6^\e(q)$ or $L_8^\e(q)$, the conjugacy classes of involutions in the coset of a graph automorphism are given by \cite[\S 19]{AS} for $q$ even, and by \cite[4.5.1]{GLS} for $q$ odd. It follows that the number of such involutions is less than $2q^{21}$ or $2q^{36}$ in case (4) or (5), respectively. For such an involution $g$, by (\ref{alph}) we have $\dim C_V(g) \le 16$ or 60, respectively. All other elements of prime order in $G$ lie in $\bar G\,Z$, hence have fixed point space of codimension at least $M_\lambda$. Hence we see that (\ref{v7}) gives \[ |V|^6 = \left\{\begin{array}{l} q^{20\times 6} \le |G|\cdot q^{14\times 6} + 2q^{21}\cdot q^{16\times 6}, \hbox{ in case (4)}, \\ q^{70\times 6} \le |G|\cdot q^{50\times 6} + 2q^{36}\cdot q^{60\times 6}, \hbox{ in case (5)}. \end{array} \right. \] Both of these yield contradictions. This completes the proof of the lemma. \hal \begin{lem}\label{case2} The group $G_0$ is not as in $(2)$ of Lemma $\ref{liep}$. \end{lem} \pf Here $G_0 = L_n^\e(q_0)$ with $n\ge 3$, and $V = V(\lambda_1+\lambda_{n-1})$. Suppose first that $\e=+$. 
Then $G \le PGL_n(q)\,Z$, and $V$ can be identified with $T/T_0$, where \[ T = \{A \in M_{n\times n}(q)\,:\,{\rm Tr}(A) = 0\}, \;T_0 = \{\lambda I_n : n\lambda = 0\}, \] and the action of $GL_n(q)$ is by conjugation. By \cite{St}, we can choose $X,Y \in SL_{n-1}(q_0)$ generating $SL_{n-1}(q_0)$. Define \[ A = \begin{pmatrix}X& 0 \\ 0 & -{\rm Tr}(X) \end{pmatrix}, \; B = \begin{pmatrix}Y& 0 \\ 0 & -{\rm Tr}(Y) \end{pmatrix}. \] Then $GL_n(q)_{A,B} \le \{ {\rm diag}(\lambda I_{n-1}, \mu)\}$, and hence $b(G) \le 4$. Now suppose $\e=-$, so that $G \le PGU_n(q)\,Z$, where we take $GU_n(q) = \{g \in GL_n(q^2) : g^Tg^{(q)}=I\}$. Then we can identify $V$ with the $\F_q$-space $S$ modulo scalars, where \[ S = \{ A \in M_{n\times n}(q^2)\,:\,{\rm Tr}(A) = 0,\,A^T = A^{(q)} \}, \] with $GU_n(q)$ acting by conjugation. As in \cite[p.104]{LSbase}, there is a vector $A \in V$ such that $GU_n(q)_A \le N_r$, where $N_r$ is the stabilizer of a non-degenerate $r$-space and $r = \frac{1}{2}n$ or $\frac{1}{2}(n-(n,2))$. In the first case, the base size of $PGU_n(q)$ acting on ${\mathcal N}_r$ is at most 5, by Lemma \ref{sporex}(ii) (since in this case $N_r$ is contained in a non-subspace subgroup of type $GU_{n/2}(q) \wr S_2$); and the same holds in the second case, by Theorem \ref{nrbase}. It follows that $b(G)\le 5$, contradicting our assumption (\ref{b8}). \hal \vspace{2mm} The proof of Theorem \ref{main} is completed by the following lemma. \begin{lem}\label{case1} If $G_0$ is as in $(1)$ of Lemma $\ref{liep}$, then conclusion {\rm (ii)} of Theorem $\ref{main}$ holds. \end{lem} \pf Here $G_0 = Cl_n(q_0)$, and $V = V(\lambda)$ with $\lambda = \lambda_1,\,\lambda_2,\,2\lambda_1,\,\lambda_1+p^i\lambda_1$ or $\lambda_1+p^i\lambda_{n-1}$. If $\lambda = \lambda_1$, then $d=n$ and $E(G) = Cl_d(q_0)$ is as in part (ii) of Theorem \ref{main}. Now consider $\lambda = \lambda_2$. Here we argue as in the proof of \cite[2.2]{LSbase} (see p.102).
If $V = \wedge^2W$ where $W$ is the natural module for $Cl_n(q_0)$ (with scalars extended to $\F_q$), then the argument provides a vector $v \in V$ such that $SL(W)_v = Sp(W)$, and so application of Lemma \ref{sporex}(ii) gives $b(G) \le b(SL(W)/Sp(W)) \le 5$. Otherwise, $V$ is equal to $(\wedge^2W)^+$ (which is $f^\perp$ or $f^\perp/\langle f\ra$ in the notation of \cite[p.103]{LSbase}), and the argument gives \[ b(G) \le b(Sp_{2k}(q),{\mathcal N}_{r}), \] where ${\mathcal N}_{r}$ is the set of non-degenerate subspaces of dimension $r$ and $r = \frac{1}{2}n$ or $\frac{1}{2}(n-(n,4))$. As before, Lemma \ref{sporex}(ii) (in the first case) and Theorem \ref{nrbase} (in the second) now give $b(G)\le 5$. The case where $\lambda= 2\lambda_1$ is similar to the $\lambda_2$ case, arguing as in \cite[p.103]{LSbase}. Note that $p$ is odd here. If $G_0$ is not an orthogonal group, then $E(G) \le SL(W)$ acting on $V=S^2W$, and there is a vector $v$ such that $SL(W)_v = SO(W)$; hence $b(G) \le b(SL(W)/SO(W)) \le 5$, by Lemma \ref{sporex}(ii). And if $G_0$ is orthogonal, then $V = (S^2W)^+$ (of dimension $\dim S^2W-\d$, $\d\in \{1,2\}$), and we see as in the previous case that $b(G) \le b(O_{2k}(q),{\mathcal N}_{r})$ with $r = \frac{1}{2}\left(n-(n,2)\right)$. Hence Theorem \ref{nrbase} gives $b(G)\le 5$ again. Finally, suppose $\lambda = \lambda_1+p^i\lambda_1$ or $\lambda_1+p^i\lambda_{n-1}$. Here as in \cite[p.103]{LSbase}, we have $E(G) \le SL(W)=SL_n(q)$ acting on $V = W\otimes W^{(p^i)}$ or $W\otimes (W^*)^{(p^i)}$. We can think of the action of $SL(W)$ on $V$ as the action on $n\times n$ matrices, where $g \in SL(W)$ sends \[ A \to g^TAg^{(p^i)} \hbox{ or }g^{-1}Ag^{(p^i)}. \] Hence we see that the stabilizer of the identity matrix $I$ is contained in $SU_n(q^{1/2})$ or $SL_n(q^{1/r})$ for some $r>1$, and so as usual Lemma \ref{sporex}(ii) gives $b(G) \le 5$. \hal \vspace{4mm} This completes the proof of Theorem \ref{main}.
\section{Supplementary Material for ``Embedded Topological Insulators''} The supplementary materials collected here contain technical details that were omitted from the main text. In Sec.~I we provide a short proof that restriction, or partial tracing of states in real space, preserves global symmetries. In Sec.~II we describe in detail the various isolated Embedded Topological Insulator (ETI) models appearing in Fig.~2 of the main text. In Sec.~III we provide further details regarding the situation of two ETIs of opposite topological indices placed in proximity to each other and quantify their interaction through the quantum mutual information measure. Next in Sec.~IV we present detailed model information regarding the ETI crystal and its defects as discussed in the main text. Then in Sec.~V we give further introductory details regarding the conventional classification of topological insulators using spectral projectors onto occupied states. Lastly, in Sec.~VI we supply a short supplement regarding the intricacies of computing the winding number topological index for chiral symmetric wires. \\\\ \twocolumngrid \section{I. Restriction Preserves Global Symmetries} \label{supp:restrict} Implicitly assumed in the main text is the fact that the entanglement-isolated spectral projector $\Pi_\mathcal{R}$ maintains the global symmetries of the system. In this supplementary section we provide a short proof of this fact. The global symmetry implemented by the linear operator $g$ acts on the total Hamiltonian as $[H,g]_{\pm}=0$. Whether the commutator $[\cdot,\cdot]_-$ or anti-commutator $[\cdot,\cdot]_+$ is used depends on the relevant symmetry, which is either commuting or anti-commuting. The latter case is relevant to charge-conjugation and chiral-sublattice symmetries. The occupied and unoccupied states are respectively degenerate under the flat band Hamiltonian $H'=\mathbbm{1}-2P$ with eigenvalues $-1$ (occupied) or $+1$ (unoccupied).
The flat band Hamiltonian also has the symmetry $[\mathbbm{1}-2P,g]_\pm=0$ because a commuting symmetry maps each state to another with the same eigenvalue under $H$ or $H'$, while an anticommuting symmetry maps each state to one with the opposite eigenvalue under $H$ or $H'$. A real-space bipartition into $\mathcal{R}$ and its complement $\mathcal{S}=\mathcal{R}^c$ splits the total Hilbert space into $\mathcal{H}= \mathcal{H}_\mathcal{R}\oplus\mathcal{H}_\mathcal{S}$. We can then divide the flat band Hamiltonian $H'$ into three contributions \begin{align} H'= H'_\mathcal{R} + H'_\mathcal{S} + H'_{\mathcal{RS}} \end{align} where the first two terms have support in $\mathcal{H}_\mathcal{R}$ and $\mathcal{H}_\mathcal{S}$ respectively. The last term represents the coupling between $\mathcal{R}$ and $\mathcal{S}$. Because $g$ is a global (on-site) symmetry, it cannot change the support of each term in $H'$; therefore, it is not only a symmetry of $H'$, but also a symmetry of each term individually, {\it i.e.},\ \begin{equation*} [H'_\mathcal{R},g]_\pm = [H'_\mathcal{S},g]_\pm = [H'_\mathcal{RS},g]_\pm = 0. \end{equation*} The flat band Hamiltonian restricted to $\mathcal{R}$ is given through the restricted single-particle correlator $\rho_\mathcal{R}$, \begin{equation*} H'_\mathcal{R} = \mathbbm{1}_\mathcal{R}-2\rho_\mathcal{R}. \end{equation*} By the same argument that shows the Hamiltonian $H$ and its flattened counterpart $H'$ have the same symmetry, the flattened counterpart of $H'_\mathcal{R}$ will have the same symmetry, or \begin{equation*} [g,\mathbbm{1}_\mathcal{R}-2\Pi_\mathcal{R}]_\pm=0 \end{equation*} where $\Pi_\mathcal{R}$ is the flattened form of $\rho_\mathcal{R}$. This proves our assertion that restriction and subsequent flattening preserve global symmetries. \section{II. Isolated Embedded Topological Insulators: Models and Numerics} \label{supp:All_Isolated_ETIs} \subsection{IIA.
Embedded Chiral Symmetric Wires in 2D} \label{supp:ETI_CW} \begin{figure} \boxed{\includegraphics[width=0.28\textwidth]{BDI_wires.pdf}} \caption{(color online) A two-orbital model of a 1D wire with chiral symmetry from the BDI class. The dots 1 (red) and 2 (blue) label the different orbital states. The dashed box denotes the unit-cell and the arrows the direction of the imaginary-valued nearest neighbor hopping matrix elements. We take the unit-cell to be of length 1.} \label{fig:BDI_wire} \end{figure} In this section we discuss one of the simplest Embedded Topological Insulators (ETIs), which consists of a topological chiral symmetric wire embedded in a trivial 2D environment. This symmetry class is commonly associated with a bipartite lattice, and Hamiltonians that respect this symmetry have no matrix elements between orbitals of the same sublattice type. More precisely, if $H$ is the 1-body Hamiltonian and $S$ is the chiral charge operator that is $+\mathbbm{1}$ on one sublattice type but $-\mathbbm{1}$ on the other, then $H$ has chiral symmetry whenever $\{H,S\}=0$. We remark that in one dimension the chiral classes AIII, BDI and CII are special in that they are the only symmetry classes that support strong topological insulators outside of the superconducting Bogoliubov-de Gennes (BdG) ensembles. In particular, the 1D AIII and BDI classes carry $\mathbb{Z}$-valued invariants while CII has $2\mathbb{Z}$-valued invariants from Kramers degeneracy. We begin with a minimal 1D wire model with BDI chiral symmetry as defined in Fig.~\ref{fig:BDI_wire}. We have chosen the BDI class for reasons of simplicity. Time-reversal symmetry is realized as $T^2=1$ and charge-conjugation by complex conjugation $P=K$.
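For readers who wish to experiment numerically, the $\mathbb{Z}$-valued chiral winding invariant is easy to evaluate on a grid. The sketch below is our own illustration for a generic two-band chiral Hamiltonian $H(k)=\begin{psmallmatrix}0 & h(k)\\ h^*(k) & 0\end{psmallmatrix}$ in the fully periodic gauge, with the hypothetical SSH-type parametrization $h(k)=t_2+t_1 e^{ik}$ chosen so that $|t_1|>|t_2|$ is the topological range, matching the convention of the wire model in this section; the half-angle gauge subtleties are deferred to the discussion in Sec.~VI.

```python
import numpy as np

def winding_number(t1, t2, nk=2001):
    """Winding of h(k) = t2 + t1*exp(ik) around the origin, i.e. the
    Z-valued invariant of the chiral Hamiltonian H = [[0, h], [h*, 0]]."""
    k = np.linspace(0.0, 2.0 * np.pi, nk)
    h = t2 + t1 * np.exp(1j * k)
    # accumulate the phase advance of h(k) across the Brillouin zone
    dphi = np.diff(np.angle(h))
    # fold jumps across the branch cut of angle() back into (-pi, pi]
    dphi = (dphi + np.pi) % (2.0 * np.pi) - np.pi
    return int(round(dphi.sum() / (2.0 * np.pi)))

print(winding_number(1.0, 0.5))  # topological range |t1| > |t2|: 1
print(winding_number(0.5, 1.0))  # trivial range |t1| < |t2|: 0
```

The same phase-accumulation routine applies to any gapped chiral model once its off-diagonal block $h(k)$ is known.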
Written in a two-component basis, the Bloch Hamiltonian takes the form \begin{align} H_\text{\tiny1D}(k_x) = (t_1+t_2) \sigma^x \sin (\tfrac{k_x}{2}) - (t_1 - t_2) \sigma^y\cos (\tfrac{k_x}{2}) \end{align} in the gauge with quasi-periodic Bloch eigenstates $|u_n(k_x + 2\pi)\rangle = \mathrm{e}^{- i 2\pi \hat{r}_x} |u_n(k_x)\rangle$ where $\hat{r}_x=\tfrac{1}{2}\sigma^z$ is the orbital position operator within the unit-cell. The Hamiltonian maintains chiral symmetry with $S=\sigma^z$ and time-reversal symmetry with $T = K \sigma^z$. The strong index $\nu \in \mathbb{Z}$ which counts the number of zero energy modes at an open boundary is~\cite{ryu_topological_2010,mondragon-shem_topological_2014} \begin{align} \nu = \int \frac{{\mathrm{d}} k_x}{\pi}\sum_{n} \langle u_{n}(k_x)| i\,S\,\nabla_{k_x} u_n(k_x)\rangle \end{align} where the sum extends over occupied Bloch bands and the covariant derivative is $\nabla_{k_x} = \partial_{k_x} + i \hat{r}_x$. In practice, $\nu$ is more easily determined from its property as a winding number. More details of this and other subtleties regarding gauge and unit-cell choices are discussed in the Supplementary Section VI. In this model the parameter range $|t_1| > |t_2|$ leads to a strong topological phase with $\nu = 1$ and otherwise a trivial one when $|t_1| < |t_2|$. We will term the strong topological phase a topological chiral symmetric wire. \begin{figure} \boxed{\includegraphics[width=0.3\textwidth]{BDI_stacked_wires.pdf}} \caption{(color online) The square lattice system constructed from stacking 1D chiral symmetric wires. The dashed square denotes the boundary of the unit-cell and there are 4 orbitals in a unit-cell labeled by the different colored dots. 
The real parameters $t_3,t_4$ are the intra-unit-cell and inter-unit-cell vertical hoppings, respectively.} \label{fig:BDI_stacked_wires} \end{figure} A 2D model with chiral symmetry is constructed by stacking 1D wires into a 2D square lattice as shown in Fig.~\ref{fig:BDI_stacked_wires}. Because of the requirements of chiral symmetry, the unit-cell has to be enlarged to contain 4 orbitals with two 1D `sub-wires'. The chiral charge operator $S$ then alternates its sign between sub-wires within the unit-cell, {\it i.e.},\ $S=\sigma^z \otimes \sigma^z$. The Bloch Hamiltonian for this model is \begin{align} \label{eqn:CI_Ham} H_\text{\tiny2D}({\bf k}) &= t^+_{12} (\mathbbm{1}\otimes \sigma^x) \sin (\tfrac{k_x}{2}) -t^-_{12} (\mathbbm{1}\otimes \sigma^y)\cos (\tfrac{k_x}{2}) \nonumber \\ &+t^+_{34} (\sigma^x\otimes\mathbbm{1}) \cos(\tfrac{k_y}{2}) +t^-_{34} (\sigma^y\otimes \mathbbm{1}) \sin(\tfrac{k_y}{2}) \end{align} where $t^\pm_{ij} = t_i \pm t_j$. We make this model \emph{trivial} by trivializing each wire: limiting to parameter ranges such that $|t_1|<|t_2|$ and $|t_3|,|t_4|< \text{min}(|t_1|,|t_2|)$. \begin{figure} \includegraphics[width=3.5cm]{ETI_CW.pdf} \caption{(color online) Schematic of an embedded chiral topological wire. The dark red line denotes the location of an `impurity layer' containing a single topological wire of the type in Fig.~\ref{fig:BDI_wire} with parameters $|t'_1| > |t'_2|$, surrounded by a trivial insulator model of the type in Fig.~\ref{fig:BDI_stacked_wires} with $|t_1|<|t_2|$ and $|t_4|<|t_3|$. The dots denote zero modes that are expected to be present and localized at the open boundaries.} \label{fig:ETI_CW} \end{figure} \begin{figure} \includegraphics[width=8cm]{energy_wfunc.pdf} \caption{(color online) Exact diagonalization results for a square system with linear dimensions $L_x=L_y=36$ and containing a single embedded topological chiral wire and open boundaries along $x$.
Parameters are $(t_1,t_2,t_3,t_4) = (0.5,1.0,0.1,0.1)$ for the trivial square lattice environment and $(t_1',t_2') = (1.0,0.5)$ for the single impurity wire that is the embedded chiral topological wire. (a) Energy spectrum with two zero energy modes (open red circles). (b) Wavefunction amplitude distribution for a zero mode. The dashed (dark blue) line denotes the location of the impurity layer.} \label{fig:energy_wfunc} \end{figure} \begin{figure} \includegraphics[width=4cm]{ETI_ECW_ES.pdf} \caption{(color online) The entanglement spectrum obtained by restricting to just the single embedded topological chiral wire. Computing the strong invariant $\nu$ from these entanglement spectrum bands yields $\nu=1$ in agreement with the appearance of physical zero modes in the inhomogeneous system.} \label{fig:ETI_ECW_ES} \end{figure} To construct an embedded topological chiral wire system, we replace a single trivial sub-wire within one unit-cell with hopping parameters $t_1',t_2'$ such that $t_1'>t_2'$. Schematically the system we have in mind is shown in Fig.~\ref{fig:ETI_CW}. Numerical exact diagonalization of this inhomogeneous system leads to the energy spectrum in Fig.~\ref{fig:energy_wfunc}(a) which clearly shows the presence of localized zero energy modes as demonstrated in Fig.~\ref{fig:energy_wfunc}(b). Finally, the entanglement spectrum of a closed system (Fig.~\ref{fig:ETI_ECW_ES}) is gapped, with the expected non-trivial band topology with $\nu=1$ derived from the entanglement band wavefunctions. \subsection{IIB. Embedded Chern Insulator in 3D} \label{supp:ETI_CI} \begin{figure}[ht] \includegraphics[width=4.4cm]{ETI_CI.pdf} \caption{(color online) Schematic of the embedded Chern Insulator (ECI). The shaded (red) `impurity' layer is the ECI sandwiched by trivial layers. 
Arrows denote the direction of the chiral edge currents for open boundaries in the $x,y$ directions.} \label{fig:ETI_CI} \end{figure} \begin{figure} \includegraphics[width=4.2cm]{ETI_CI_BS.pdf} \caption{(color online) Energy band structure of an ECI crystal infinite in $x$ but with finite dimensions $L_y=L_z=20$, with PBC in $z$ and OBC in $y$. Parameters are $(m,m',\delta_z)=(-0.5,0.5,0.1)$. Thick bands (blue) are the 3D bulk bands, thin lines (blue) are 2D-like bands that localize near the CI layer. Dashed lines (red) are the topological edge modes.} \label{fig:ETI_CI_BS} \end{figure} In this section we describe in further detail the Embedded Topological Insulator (ETI) formed by a 2D Chern Insulator (CI) embedded in a trivial 3D environment. This is yet another example of a co-dimension-1 ETI, but in 3D. The 2D CI falls under Altland-Zirnbauer class-$A$ and has a $\mathbb{Z}$-valued invariant. A schematic of such an inhomogeneous crystal is shown in Fig.~\ref{fig:ETI_CI}; the CI is marked as an ``impurity layer'' in an otherwise-pristine crystal formed by stacking two-dimensional layers. In the pristine 3D environment the two-orbital model Bloch Hamiltonian is taken to be\cite{bernevig-hughes_book} \begin{align} H({\bf k}) = &\sin k_x \sigma^x + \sin k_y \sigma^y + \left(2-m-\cos k_x -\cos k_y \right)\sigma^z \nonumber \\ &+ 2 \delta_z \cos k_z\, \mathbbm{I}_2 \end{align} where $\delta_z$, $m$ are real parameters. The coupling between $xy$-layers in the $+z$ direction is quantified by $\delta_z$. When $\delta_z=0$, each 2D $xy$-layer is a representative model of a CI whenever $0<m<2$ (Ch$_1=+1$) or $2<m<4$ (Ch$_1=-1$), and is otherwise trivial (Ch$_1=0$) when $m<0$ or $m>4$. We set $m<0$ for the trivial bulk but let $0 < m'<2$ in the CI layer (see Fig.~\ref{fig:ETI_CI}). \begin{figure} \includegraphics[width=4.0cm]{ETI_CI_PSI2.pdf} \caption{(color online) The wavefunction distribution of a localized chiral edge mode.
Dashed (red) line denotes the location of the CI layer and $L_y=L_z=48$ with fixed $k_x\approx 0$.} \label{fig:ETI_CI_PSI2} \end{figure} For small values of $\delta_z$, typically $|\delta_z| < |m|,|m'|$, the total system retains its energy gap between conduction and valence bands, while remaining adiabatically connected to the decoupled limit $\delta_z=0$. This precise qualification of adiabatic continuity is revealed through the disentangling method that we have implemented to identify the embedded Chern Insulator (ECI). Figure~\ref{fig:ETI_CI_BS} shows an open boundary energy band structure of the ECI with the tell-tale chiral edge modes, confirming that the total system is topologically non-trivial. The wavefunction weight $|\psi|^2$ of a chiral edge mode is shown in Fig.~\ref{fig:ETI_CI_PSI2} and verifies the localization on the edge near the CI layer. Tracing out all $xy$-layers save the one containing the ECI produces the gapped entanglement band structure shown in Fig.~\ref{fig:ETI_CI_ES}, which is topologically non-trivial with Ch$_1=+1(-1)$ in its lower (upper) band. We note that tracing over other individual layers or collections of layers produces a similarly gapped entanglement band structure but with zero Ch$_1$ unless the CI layer is included. \begin{figure} \includegraphics[width=3.8cm]{ETI_CI_ES.pdf} \caption{(color online) The gapped entanglement spectrum band structure of the isolated CI layer with $L_z=20$ and infinite in $x,y$. Numerical calculation of the TKNN number from just the lower occupied band gives Ch$_1=1$. } \label{fig:ETI_CI_ES} \end{figure} \subsection{IIC. Embedded Topological Vortex in 3D} \label{supp:ETI_V} \begin{figure} \includegraphics[width=4.4cm]{ETI_KW.pdf} \caption{(color online) Schematic of an embedded topological vortex within a trivial superconductor with spin-orbit coupling. The bulk superconductor is formed by introducing s-wave pairing to a time-reversal invariant strong topological insulator.
The (red) line is the vortex within the trivial superconductor.} \label{fig:ETI_V} \end{figure} \begin{figure} \includegraphics[width=6.5cm]{ETI_V_BS.pdf} \caption{(color online) The [001] surface energy BdG band structure of a trivial superconductor formed from a strong time-reversal invariant topological insulator with uniform s-wave superconducting pairing. The model parameter values are $(t,m_0,M,E_F,\Delta)=(0.5,-1.0,2.5,0.3,0.1)$. Inset shows that the topologically protected surface Dirac cones at the $\Gamma$ point are gapped out by the superconducting pair potential.} \label{fig:ETI_V_BS} \end{figure} In this section we describe the model of the embedded topological vortex (ETV) that arises as a topological defect in a three-dimensional s-wave superconductor (SC) with spin-orbit coupling (Fig.~\ref{fig:ETI_V}). Specifically, the model is the one introduced in Ref.~\onlinecite{hosur2011majorana} and can best be described as a time-reversal invariant strong topological insulator with s-wave pairing. The mean-field Bogoliubov-de Gennes (BdG) Hamiltonian has the following form \begin{align} H=\frac{1}{2} \sum_\mathbf{k} \Psi^\dagger_\mathbf{k} \mathcal{H}_\text{BdG}(\mathbf{k}) \Psi_\mathbf{k} \end{align} with \begin{subequations} \begin{align} &\Psi^\dagger_\mathbf{k} = (c^\dagger_{\alpha\uparrow\mathbf{k}}, \; c^\dagger_{\alpha\downarrow\mathbf{k}}, \; c_{\alpha\downarrow\mathbf{-k}},\; -c_{\alpha\uparrow\mathbf{-k}}), \quad \alpha = 1,2\\ &\mathcal{H}_\text{BdG}(\mathbf{k}) = \begin{pmatrix} H_\text{STI}(\mathbf{k})-E_F & \Delta \\ \Delta^* & E_F- H_\text{STI}(\mathbf{k}) \end{pmatrix}, \end{align} \end{subequations} where $E_F$ is the chemical potential, $\alpha$ labels the two orbitals, and $H_\text{STI}(\mathbf{k})$ is the Bloch Hamiltonian of the strong time-reversal invariant topological insulator.
$H_\text{STI}$ describes a 2-orbital tight-binding model on a cubic lattice with spin-1/2 electrons and is given by \begin{subequations} \begin{align} H_\text{STI}(\mathbf{k}) &= (\vec{d}(\mathbf{k})\cdot \vec{\sigma})\otimes \tau^x + m(\mathbf{k})\,\mathbbm{1}\otimes \tau^z \\ d_i (\mathbf{k}) &= 2t \, \sin k_i, \quad i = x,y,z \\ m(\mathbf{k}) &= M + m_0 \sum_{i=x,y,z} \cos k_i \end{align} \end{subequations} with $t,M,m_0$ being real-valued parameters. The Pauli matrices $\tau^\alpha$ act on the orbital degrees of freedom while $\sigma^i$ act on spin. The non-superconducting strong topological insulator (STI) phase corresponds to the parameter range $-3 < \frac{M}{m_0}< -1$ and is due to spin-orbit coupling. Time-reversal symmetry is manifested by the condition \begin{align} (\sigma^y\otimes \mathbbm{1})\, {H_\text{STI}(-\mathbf{k})}^* \,(\sigma^y\otimes \mathbbm{1}) = H_\text{STI}(\mathbf{k}). \end{align} \begin{figure} \includegraphics[width=8cm]{ETI_MZM.pdf} \caption{(color online) Majorana zero modes (MZMs) bound to the ends of a vortex in the trivial D-class superconductor with spin-orbit coupling. Numerical results are from exact diagonalization of a $40\times 40 \times 40$ crystal with open boundaries in $x,y,z$ and a vortex line (dashed red line) placed at the center of the $x,y$ plane according to Eqn.~(\ref{eqn:vortex_delta}). Model parameter values are the same as in Fig.~\ref{fig:ETI_V_BS} with $\Delta_0 = 0.1$ and $\xi = 2$. (a) Energy spectrum of the total system near zero frequency. Red dots denote the pair of MZMs localized on the top and bottom surfaces. (b) Wavefunction distribution of an MZM strongly localized on the top surface, with a side view in (d) and integrated layer density in (c).} \label{fig:MZM} \end{figure} In the pristine case, the superconducting order parameter $\Delta$ is taken to be a constant.
Due to the existence of superconducting pair correlations, the normally present topologically protected surface Dirac cones are gapped out, as shown in Fig.~\ref{fig:ETI_V_BS}, whenever $E_F$ lies in the normal phase band gap. Moreover, in the absence of vortices the surface energy bands remain gapped even when the normal phase becomes metallic, i.e., when $E_F$ lies in the conduction or valence bands. Thus the bulk 3D SC as described by $\mathcal{H}_\text{BdG}$ is a trivial superconductor within symmetry class DIII. However, the introduction of a vortex degrades the symmetry down to class D by breaking time-reversal, and can lead to surface Majorana zero modes (MZMs) for a finite range of parameters.\cite{fu2008superconducting,hosur2011majorana} For our purposes, we have followed Ref. \onlinecite{hosur2011majorana} and introduced a vortex line defect in 3D by hand. More precisely, we let $\Delta(\mathbf{r})$ vary in real space according to the following exponentially relaxing form \begin{align} \Delta(\mathbf{r}) = \Delta_0\, \mathrm{e}^{i \phi(\mathbf{r})}\, \left[1-e^{-\rho(\mathbf{r})/\xi}\right] \label{eqn:vortex_delta} \end{align} with $\mathbf{r}=(x,y,z)$, $\tan \phi(\mathbf{r}) = (y-y_0)/(x-x_0)$ and $\rho(\mathbf{r})^2 =\nobreak (x-x_0)^2+(y-y_0)^2$. The nodal line is situated at $(x,y)=(x_0,y_0)$. The SC coherence length is $\xi$ and $\Delta_0$ is the bulk SC order parameter strength. Shown in Fig.~\ref{fig:MZM}(a) is the energy spectrum of the model with a vortex line. It contains a pair of Majorana zero modes (MZMs) localized on the top and bottom surfaces where the vortex penetrates and leaves the bulk SC. The results are obtained from numerical Lanczos exact diagonalization of a large $40\times 40 \times 40$ crystal with open boundary conditions. The MZMs are stable topological boundary modes for a finite range of parameters but become unstable when the chemical potential is sufficiently deep inside the conduction band.
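A sketch of the vortex profile of Eqn.~(\ref{eqn:vortex_delta}) as it would be evaluated on the lattice; the default nodal-line position $(x_0,y_0)=(19.5,19.5)$ is a hypothetical plaquette-centered choice for a $40\times 40$ cross-section, not a value quoted in the text:

```python
import numpy as np

def vortex_delta(x, y, Delta0=0.1, xi=2.0, x0=19.5, y0=19.5):
    """Pair potential Delta(r) = Delta0 e^{i phi(r)} [1 - e^{-rho(r)/xi}]:
    the phase phi winds once around the nodal line at (x0, y0) while the
    amplitude relaxes to Delta0 over the coherence length xi."""
    rho = np.hypot(x - x0, y - y0)
    phi = np.arctan2(y - y0, x - x0)
    return Delta0 * np.exp(1j * phi) * (1.0 - np.exp(-rho / xi))
```

Placing $(x_0,y_0)$ off the integer lattice sites avoids evaluating the ill-defined phase exactly on the nodal line, where the amplitude vanishes in any case.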
This proceeds as a topological transition within the vortex line through the proliferation of gapless Caroli-de Gennes-Matricon modes into the bulk.\cite{hosur2011majorana} Due to the symmetries of the problem, the vortex line defect can best be understood as a topological ``Kitaev wire'' embedded within a bulk trivial SC. \begin{figure} \includegraphics[width=5cm]{ETI_V_ES.pdf} \caption{Entanglement spectrum from isolating a $9\times 9$ square region centered on the vortex nodal line in a $40 \times 40$ sized SC. The entanglement spectrum is gapped, as emphasized by the inset, which is a blow-up of the small-gap region with individual data points highlighted. Calculation of the Zak-Berry phase $\theta_B$ from the `occupied' entanglement eigenstates gives a value of $\pi$, indicating a topologically non-trivial superconducting ETI wire in class D.} \label{fig:ETI_V_ES} \end{figure} Next we apply the disentangling method to validate our interpretation of the SC vortex as being a one dimensional ETI in class D. To do so, we consider a $40 \times 40$ cross-section that is infinite and translationally symmetric in the $+z$ direction but with open boundaries in the $xy$-plane. The nodal line of the vortex is placed at the center, with the same model parameters as in Figs. \ref{fig:ETI_V} and \ref{fig:MZM}. Shown in Fig.~\ref{fig:ETI_V_ES} is the resulting entanglement spectrum obtained by tracing over all sites except a $9 \times 9$ square region (in the $xy$-plane) centered at the nodal line. It shows a gapped entanglement spectrum with a minimal gap at $k_z=0$ that is quite small but still present, as highlighted by the accompanying inset. Symmetry class D in one dimension is classified by a $\mathbb{Z}_2$ index.
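For free fermions, such an entanglement spectrum and the Zak-Berry phase quoted in Fig.~\ref{fig:ETI_V_ES} can be obtained by diagonalizing the spectral projector restricted to the region $\mathcal{R}$ at each $k_z$. The sketch below uses hypothetical names and assumes that the number of $\lambda>1/2$ `occupied' entanglement bands is constant across the gapped spectrum:

```python
import numpy as np

def entanglement_bands(P_k, region):
    """Entanglement 'occupation' bands from the restricted projector
    rho_R(k) = [P(k)]_RR.  P_k: (Nk, N, N) spectral projectors on a
    k_z grid; region: indices of the sites/orbitals kept in R."""
    rho = P_k[:, region][:, :, region]     # restrict rows and columns to R
    lam, vec = np.linalg.eigh(rho)         # eigenvalues lie in [0, 1]
    return lam, vec

def zak_berry_phase(lam, vec):
    """Discretized Wilson-loop (Zak-Berry) phase of the lambda > 1/2
    entanglement bands, assuming their number is the same at every k."""
    Nk = lam.shape[0]
    prod = 1.0 + 0.0j
    for i in range(Nk):
        j = (i + 1) % Nk                   # periodic k grid
        occ_i = vec[i][:, lam[i] > 0.5]
        occ_j = vec[j][:, lam[j] > 0.5]
        prod *= np.linalg.det(occ_i.conj().T @ occ_j)
    return np.angle(prod)                  # theta_B modulo 2*pi
```

The returned phase is defined modulo $2\pi$; a value of $\pi$ signals the non-trivial class-D wire.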
Topologically non-trivial class-D wires can be detected through the sign of the Pfaffian associated with the antisymmetric matrix representation of the Hamiltonian written in the Majorana operator basis.\cite{kitaev2001unpaired} Equivalent to the Pfaffian $\mathbb{Z}_2$ index is the Zak-Berry phase (modulo $2\pi$) or polarization\cite{budich2013equivalent} which is more straightforward to compute. Specifically, the Zak-Berry phase, or Entanglement Berry phase\cite{fukui15_disen} as it is known in this context, has the expression \begin{align} \theta_B = \int_{-\pi}^\pi \mathrm{d}k_z \; \text{tr} \,\mathcal{A}_z(k_z) \end{align} where \begin{align} [\mathcal{A}_z(k_z)]_{mn} = \langle \lambda^{(\mathcal{R})}_m(k_z) | i \nabla_{k_z} \lambda^{(\mathcal{R})}_n(k_z) \rangle \end{align} is the non-Abelian Berry connection derived from the `occupied' entanglement eigenstates $\{|\lambda_n\rangle : \lambda_n > 1/2\}$ of the restricted projector $\rho_\mathcal{R}$. Here $\mathcal{R}$ is the square region that contains the vortex nodal line. For a trivial wire, $\theta_B = 0$ mod $2\pi$ and when non-trivial $\theta_B = \pi$ mod $2\pi$. For practical purposes, it is best to carry out the computation of $\theta_B$ using a discretized grid of $k_z$ points and the discretized gauge-invariant Wilson loop expression \begin{subequations} \begin{align} \theta_B &= \text{Im}\,\log \prod_{i=1}^{N_\text{grid}} \det \mathcal{U}_i \\ [\mathcal{U}_i]_{mn} &= \langle \lambda^{(\mathcal{R})}_m (k_i)|\lambda^{(\mathcal{R})}_n(k_{i+1})\rangle. \end{align} \end{subequations} For the data shown in Fig.~\ref{fig:ETI_V_ES}, a uniform grid of 80 points yields a value of $\theta_B =\pi$ confirming the presence of an ETI in agreement with the observation of Majorana zero modes in Fig.~\ref{fig:MZM}. \section{III.
Two Embedded Topological Insulators and Quantum Mutual Information} \label{supp:two_ETIs} In this supplemental section we consider in more detail the effect of having two finitely separated ETIs, and the net effect this has on surface states. This is then quantified in the bulk by the use of the quantum mutual information. \subsection{IIIA. Mutual Information} The mutual information is a measure based on the entanglement entropy $S_A$, which for free fermions has the expression\cite{peschel2003calculation} \begin{align*} S_A = -\sum_{i}\left[ \lambda^{(A)}_i \log_2 \lambda^{(A)}_i+(1-\lambda^{(A)}_i) \log_2(1-\lambda^{(A)}_i) \right] \end{align*} for a region $A$. Here $\{\lambda_i^{(A)}\}$ are the set of eigenvalues of the 1-body reduced density matrix $\rho_A$ in region $A$. The mutual information is given as \begin{align} I_2[A;B] = S_A + S_B - S_{A\cup B} \end{align} and acts as a bound for connected correlation functions involving operators in $A$ and $B$. Specifically, the following connected correlation function is bounded\cite{wolf2008area} as \begin{align} &\langle \mathscr{A} \otimes \mathscr{B}\rangle_c^2 \leq 2 \Vert \mathscr{A} \Vert^2 \Vert \mathscr{B} \Vert^2 I_2[A;B] \end{align} where $\mathscr{A}$ and $\mathscr{B}$ are any bounded (many-body) operators with support in $A$ and $B$ respectively, and with operator norms $\Vert \mathscr{A} \Vert,\Vert \mathscr{B} \Vert$. Specializing to Gaussian states and the 1-body fermionic correlator yields \begin{align} |\langle c^\dagger_\alpha c_\beta \rangle|^4 \leq 2 I_2 [A;B] \label{eqn:hopping_bound} \end{align} where $\alpha,\beta$ are orbitals located in $A$ and $B$ respectively such that $\{c_\alpha, c^\dagger_\beta\} = \delta_{\alpha \beta}$. Whenever the momentum $\mathbf{k}$ is a good quantum number, the entanglement entropies and mutual information $I_2[A;B]$ may also be $\mathbf{k}$-resolved. The latter yields a `mutual information band structure' $I_2[A;B](\mathbf{k})$.
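The entropy and mutual-information formulas above translate directly into code; a minimal free-fermion sketch with hypothetical names, where the eigenvalues are clipped away from $0$ and $1$ for numerical stability:

```python
import numpy as np

def entanglement_entropy(lam, eps=1e-12):
    """Free-fermion entanglement entropy (in bits) from the eigenvalues
    lam of the one-body reduced density matrix; eigenvalues are clipped
    to [eps, 1 - eps] so that 0*log(0) terms stay finite."""
    lam = np.clip(np.asarray(lam, dtype=float), eps, 1.0 - eps)
    return float(-np.sum(lam * np.log2(lam) + (1 - lam) * np.log2(1 - lam)))

def mutual_information(lam_A, lam_B, lam_AB):
    """I2[A;B] = S_A + S_B - S_{A u B}, non-negative by subadditivity."""
    return (entanglement_entropy(lam_A) + entanglement_entropy(lam_B)
            - entanglement_entropy(lam_AB))
```

A single mode with $\lambda = 1/2$ carries exactly one bit of entropy; two such modes in $A$ and $B$ purifying each other give $I_2[A;B] = 2$.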
In practice, we will take $A$ and $B$ to be two different ETIs embedded in the same environment, where $I_2[A;B]$ will track the amount of `hybridization' between $A$ and $B$. A small amount of mutual information between them would imply relatively weak coupling and hence relatively weakly coupled surface states with small energy gaps, if present. \begin{figure} \includegraphics[width=4.6cm]{ETI_CI_double.pdf} \caption{(color online) Schematic of the two embedded Chern Insulators (ECIs). The lower (red) and upper (blue) impurity layers are ECIs of opposite topological invariants Ch$_1=\pm 1$ and $d_z$ denotes the number of trivial layers between them.} \label{fig:ETI_CI_double} \end{figure} \begin{figure} \includegraphics[width=4.2cm]{ETI_CI_BS_double_top.pdf} \caption{(color online) Energy band structure of two ECIs (Fig.~\ref{fig:ETI_CI_double}) with $d_z =4$ within a crystal of finite dimensions $L_y=L_z=20$ with PBC in $z$ but OBC in $y$. Parameters are $(m,m',m'',\delta_z)=(-2,1,3,0.8)$ such that the two ECIs have opposite Chern numbers. Dashed (red) lines are gapless topological edge modes that cross at $k_x=0,\pi$ and localize on the different CI layers.} \label{fig:ETI_CI_BS_double_top} \end{figure} \subsection{IIIB. Two Embedded Chern Insulators} Now we consider a situation where there are two closely located Embedded Chern Insulators (ECIs). A schematic of the composite system is shown in Fig.~\ref{fig:ETI_CI_double} where the two CIs are taken to have opposite Chern numbers Ch$_1=\pm 1$. The model that we use is the one in Supplementary Section IIB with some modifications, which can be made in two different ways leading to different types of surface modes. But viewed as an entire 2D band insulator (in $xy$), the composite system is classified as being topologically trivial (Ch$_1=-1+1=0$). Nevertheless, surface states are present as in the previous example, and in the large-$d_z$ limit they are effectively independent topological chiral edge modes.
\begin{figure} \includegraphics[width=0.48\textwidth]{ETI_CI_double_dz_composite.pdf} \caption{(color online) Energy band structure data of two ECIs (Fig.~\ref{fig:ETI_CI_double}) with parameters $(m,m',m'')= (-2,1,1)$ at varying distance $d_z$ and interlayer coupling $\delta_z$. The CIs are taken to be time-reversed partners of each other giving opposite Chern numbers, and edge states (red dashed lines) centered at $k_x=0$. The system has finite dimensions $L_y=L_z=20$ with PBC in $z$ but OBC in $y$. (a) Gapped edge states when $d_z=0$ and $\delta_z=0.4$. (b) Near gapless edge states when $d_z=8$ and $\delta_z=0.4$. (c) Energy gaps at $k_x=0$ on a log scale for varying $d_z$ and $\delta_z$, limited by finite-size effects and machine precision. }\label{fig:ETI_CI_double_dz_composite} \end{figure} First, we denote the two mass parameters for the different CI layers in Fig.~\ref{fig:ETI_CI_double} as $m'$ for the bottom (red) and $m''$ for the top (blue) layers respectively. Taking $0<m'<2$ and $2<m''<4$ with $d_z \neq 0$ leads to the energy band structure shown in Fig.~\ref{fig:ETI_CI_BS_double_top} with open $y$ boundaries. There are oppositely dispersing gapless chiral edge modes centered at $k_x=0,\pi$. Moreover, these modes are localized on the different impurity layers as expected and remain gapless independent of the distance $d_z$ so long as the bulk energy gap remains open. A second way to obtain a Ch$_1=-1$ embedded Chern Insulator is to use a time-reversed partner of the Ch$_1=+1$ model. This leads to the energy band structure of Fig.~\ref{fig:ETI_CI_double_dz_composite} with surface states centered around $k_x=0$ only. In this instance, an energy gap develops in the surface states (Fig.~\ref{fig:ETI_CI_double_dz_composite}(a)) that rapidly decays with increasing distance (Fig.~\ref{fig:ETI_CI_double_dz_composite}(b,c)).
We note that there is an even-odd effect with respect to $d_z$, where for odd finite $d_z$ the energy gap goes to zero in the infinite $L_y$ limit. \begin{figure} \includegraphics[width=0.4\textwidth]{ETI_CI_ES_double_dz_composite.pdf} \caption{(color online) Mutual information measures of the two ECIs related by time-reversal. Parameters are $(m,m',m'',\delta_z)=(-2,1,1,0.4)$ with $L_z=24$. (a) Schematic of the two ECIs labeled here as regions $A$ and $B$. (b) The gapped entanglement spectrum band structure from restriction to the $A$ ECI with $d_z=0$. The lower band has Chern number Ch$_1=+1$. (c) Mutual information $I_2[A;B](\mathbf{k})$ band structure between the two ECIs at $d_z=0$. (d) The mutual information $I_2[A;B]$ averaged over the Brillouin zone at varying $d_z$.} \label{fig:ETI_CI_ES_double_dz_composite} \end{figure} Next, shown in Fig.~\ref{fig:ETI_CI_ES_double_dz_composite}(b) is the entanglement spectrum band structure for a single ECI layer in the case that the two ECIs are adjacent with $d_z=0$. The associated mutual information band structure between ECIs is shown in Fig.~\ref{fig:ETI_CI_ES_double_dz_composite}(c). Note the maxima in $I_2[A;B](\mathbf{k})$ near $k_x = \pm\pi/2$ that correspond to the emergence of the surface states from the bulk [See Fig~\ref{fig:ETI_CI_double_dz_composite}(b)]. Crucially, the entanglement spectrum bands are gapped with quantized Chern number Ch$_1=+ 1$ even when $d_z=0$, thus confirming the ECI status. However, as is evident in Fig.~\ref{fig:ETI_CI_double_dz_composite}(a), the edge-state spectrum is gapped, meaning that the surface states do not enjoy any topological protection because of edge-mode coupling. This perfectly illustrates that non-trivial quantized topological invariants derived from entanglement bands are \emph{no guarantee} of the integrity of the topologically protected surface modes.
Physically, this is because the surface states may still be gapped without closing the bulk gap if there are available surface states from compensating ETIs. Only in the case of sufficiently separated ETIs do we expect to recover non-trivial topological surface states, as exemplified in Fig.~\ref{fig:ETI_CI_double_dz_composite}(b,c). This picture is further supported by the decay in mutual information between ECIs as shown in Fig.~\ref{fig:ETI_CI_ES_double_dz_composite}(d). However, it is interesting to note that the exponential decay lengths of the surface energy gap and the averaged mutual information differ by about a factor of 2, with the energy gap decaying slower. We attribute this to the surface states being more delocalized than the bulk states; the mutual information is sensitive only to the latter. Nevertheless the mutual information -- as computed in this way -- may still serve as a useful quantitative guide for the amount of coupling between ETIs utilizing only bulk information. Finally, for the sake of completeness, the individual contributions that go into the mutual information band structure $I_2[A;B]$ of Fig.~\ref{fig:ETI_CI_ES_double_dz_composite}(c) are shown in Fig.~\ref{fig:I2_all}. \begin{figure} \includegraphics[width=0.45\textwidth]{entropy_mut_info.pdf} \caption{The breakdown of the different entanglement entropy bands (a) $S[AB](\mathbf{k})$, (b) $S[A](\mathbf{k})$, (c) $S[B](\mathbf{k})$ that make up the mutual information band structure $I_2[A;B](\mathbf{k})= S[A](\mathbf{k})+S[B](\mathbf{k})-S[AB](\mathbf{k})$.} \label{fig:I2_all} \end{figure} \section{IV. Stacking Faults and Partial Dislocations} \label{supp:defects} The stacking fault and partial dislocation defect models in Fig.~\ref{fig:stacking} are derived from a pristine ETI crystal of stacked CIs, termed $A$ and $B$ type.
The ETI crystal has the Bloch Hamiltonian \begin{align*} H(\mathbf{k}) & = \sin k_x (\sigma^z \otimes \sigma^x) + \sin k_y (\mathbbm{1} \otimes \sigma^y) \nonumber \\ & +[2-m_1-\cos k_x - \cos k_y] (\mathbbm{1} \otimes \sigma^z) \nonumber \\ & + [\gamma_z + \delta_z \cos k_z] (\sigma^x \otimes \mathbbm{1}) - m_2 (\sigma^z \otimes \sigma^z) \end{align*} where $m_1,m_2,\gamma_z$ and $\delta_z$ are model parameters. Physically, each CI subsystem within the unit cell is modeled on the CIs given in Section IIB or Eqn.~(\ref{eqn:CI_Ham}) but `time-reversed' relative to one another. Moreover, we include differences in their on-site $m$ parameters. Namely, $m_1$ is their average $m$-parameter while $m_2$ quantifies their difference. Lastly, `inter CI' hopping is quantified by $\gamma_z$ and $\delta_z$, which are the intra-unit-cell and inter-unit-cell couplings respectively. From this Bloch Hamiltonian, the real-space form of the Hamiltonian can be obtained by an inverse Fourier transform, performed only on the $y,z$ directions which are no longer periodic in the presence of the defects. The defected systems are derived by altering this hybrid real-space ($y,z$) and momentum-space ($k_x$) pristine ETI crystal model Hamiltonian. Conventionally, we take periodic boundary conditions in the $+z$ direction but open boundary conditions in $+y$ for the stacking fault, and periodic boundary conditions in $+y$ for the partial dislocation. Thus only the stacking fault has an exposed $yz$ surface. The stacking fault is created in real space by the removal of a $B$ layer and rejoining the resulting exposed $A$ layers, {\it i.e.},\ the Volterra construction. The stacking fault spectrum in Fig.~\ref{fig:stacking}(d) was created with the parameters $m_1=1$, $m_2=-0.1$, $\gamma_z=0.05$, and $\delta_z=0.5$. The partial dislocation is created by truncating half of a $B$ layer in the $y$-direction and rejoining the now-exposed portion of the $A$ layers.
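As a starting point for such defect constructions, the pristine Bloch Hamiltonian above can be assembled as a $4\times 4$ matrix; a minimal sketch in which the function name is illustrative and the default parameters are the stacking-fault values quoted in this section:

```python
import numpy as np

s0 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def h_eti_crystal(k, m1=1.0, m2=-0.1, gz=0.05, dz=0.5):
    """4-band Bloch Hamiltonian of the pristine A/B stacked-CI ETI crystal,
    term by term as written in the text (first kron factor = sigma, second = CI)."""
    kx, ky, kz = k
    return (np.sin(kx) * np.kron(sz, sx)
            + np.sin(ky) * np.kron(s0, sy)
            + (2 - m1 - np.cos(kx) - np.cos(ky)) * np.kron(s0, sz)
            + (gz + dz * np.cos(kz)) * np.kron(sx, s0)
            - m2 * np.kron(sz, sz))
```

The real-space defect Hamiltonians would then follow by inverse Fourier transforming this matrix in $y,z$ and removing or rejoining layers as described above.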
The partial dislocation spectrum in Fig.~\ref{fig:stacking}(f) was created with the parameters $m_1=1$, $m_2=0$, $\gamma_z=0.5$, and $\delta_z=0.25$. The topological edge states remain robust to different model parameter values and the manner in which the rejoining operation is performed. \section{V. Classification of Topological Insulators by Spectral Projectors} \label{supp:P} In this supplemental section, we describe, for the benefit of readers unfamiliar with spectral projectors, the calculation of topological indices or invariants using $P$. Beginning with the gapped Bloch Hamiltonian $H({\bf k})$ for translationally invariant insulators, $P({\bf k})$ can be expressed as \begin{align*} P({\bf k}) = \oint_\gamma \frac{\mathrm{d} z}{i 2\pi} \; \frac{1}{z + E_F - H({\bf k})} \end{align*} where the contour $\gamma$ encloses the negative real axis in an anti-clockwise fashion. Assuming that $H({\bf k})$ is analytic, a modified form of the Paley-Wiener theorem\cite{kuchment2016overview} leads to the exponential decay of $P$ in real space \begin{align*} P({\bf R},{\bf R}') &= \int_\text{BZ} \frac{\mathrm{d}^d k}{|\mathcal{C}^*|} P({\bf k}) \mathrm{e}^{i{\bf k} \cdot({\bf R}-{\bf R}')} \nonumber \\ |P({\bf R},{\bf R}')| &< A\, \mathrm{e}^{-|{\bf R}-{\bf R}'|/\xi} \end{align*} as $|{\bf R}-{\bf R}'| \rightarrow \infty$ for some $A,\xi > 0$. Here ${\bf R}$ and ${\bf R}'$ are lattice site positions. This is just the well known statement that non-interacting band gap insulators have exponentially bounded correlations. Now the way that $P$ defines a vector bundle $\mathbb{V}_\text{occ}$ of occupied Bloch states over the BZ is expressed as \begin{align*} v \in \mathbb{V}_\text{occ} \Leftrightarrow P({\bf k}) |v({\bf k})\rangle = |v({\bf k})\rangle \quad \forall\; {\bf k} \in \text{BZ}.
\end{align*} Depending on the symmetry and dimension of the BZ, the computation of the relevant topological indices,\cite{chiu2016classification} which we have denoted by $c([P])$ in the main text, is carried out either with $P$ directly or by selecting a frame bundle from $P$ with a specific gauge choice. An example of the former case is the 2D Chern number in symmetry class A, which has the following gauge-invariant expression \begin{align*} \text{Ch}_1 = \frac{i}{2\pi} \int_\text{BZ} \; \text{tr} \left[ P({\bf k}) \;dP({\bf k}) \wedge dP({\bf k}) \right] \in \mathbb{Z}. \end{align*} A second example, which uses a gauge choice of Bloch basis for convenience, is the computation of the strong index for a time-reversal invariant insulator in 3D in class AII. This requires selecting a globally smooth orthonormal basis of Bloch states $\{u_1,u_2,\ldots u_m \}$ that span $ \mathbb{V}_\text{occ}$, which can always be done because of time-reversal symmetry. Using the sewing matrix, which has the expression \begin{align*} w_{\alpha \beta}({\bf k}) = \langle u_\alpha (-{\bf k}) | T u_\beta({\bf k}) \rangle \end{align*} where $T$ is the time-reversal operator squaring to $-1$, the strong $\mathbb{Z}_2$ index is then given as \begin{align*} \nu = \prod_{{\bf K}_i} \frac{\text{Pf} \;w({\bf K}_i)}{\sqrt{\text{det} \;w({\bf K}_i)}} \end{align*} where the product extends over all time-reversal invariant momenta. Alternatively the strong index may be computed from the non-gauge-invariant Chern-Simons integral derived from the specific choice of $\{u_n\}$. Also, for the chiral classes in odd dimensions, decomposing $P$ into chiral eigenspaces can be used to compute winding numbers that are the topological invariants of those classes. \section{VI.
Strong Topological Indices of Chiral Symmetric Wires} \label{supp:nu_cal} In this appendix section, we describe computational aspects related to the determination of the strong topological index $\nu \in \mathbb{Z}$ that classifies chiral symmetric wires. But first we clarify some subtleties regarding the choice of plane-wave basis, unit cells and the $k$-space connection which are involved in computing $\nu$. There are two popular conventions for the plane-wave basis used to define the periodic part of the Bloch wavefunction. Moreover, the chiral symmetric topological classes (AIII,BDI,CII) are sensitive to the particular choice of basis for reasons that we shall explain below. The most natural choice of basis follows from the continuum definition adapted to the tight-binding lattice and is given by the following infinite sum of real-space kets \begin{align} |{\bf k}, {\bf r}_\alpha \rangle = \sum_{{\bf R} \in \Lambda} \mathrm{e}^{i {\bf k} \cdot ({\bf R} + {\bf r}_\alpha)} | {\bf R} + {\bf r}_\alpha \rangle \end{align} where $\Lambda$ is the Bravais lattice in $d$-dimensions, ${\bf r}_\alpha$ is the position of the $\alpha$-th tight-binding orbital within the unit cell and $\alpha$ varies over the different orbitals. Bloch states are then expressed as \begin{align} |\Psi_{{\bf k}}\rangle = \int_\text{BZ}\frac{{\mathrm{d}}^d k}{|\mathcal{C}^*|} \sum_{\alpha} u_{\alpha}({\bf k})|{\bf k},{\bf r}_\alpha\rangle \end{align} with $|\mathcal{C}^*|$ being the volume of the 1st Brillouin zone (BZ). This choice is natural because the plane-wave states $|{\bf k},{\bf r}_\alpha\rangle$ transform as a representation of the Euclidean group $\mathbb{E}^d$ as \begin{align} |{\bf k},{\bf r}_\alpha\rangle \rightarrow |M\cdot{\bf k},M\cdot{\bf r}_\alpha + {\bf a}\rangle, \quad M\in \text{O}(d), \; {\bf a}\in \mathbb{R}^d.
\end{align} This is because the basis is aware of the orbital locations within the unit cell and hence transforms as a representation of the space group of the crystal lattice. Moreover, the coefficients of the Bloch eigenstates $u_\alpha({\bf k})$ are independent of the origin of the unit cell in this basis. This is due to the fact that $H({\bf k})$ only depends on hopping distances between orbital sites in this basis. However the basis is not periodic in the BZ with $|{\bf k} + {\bf G}, {\bf r}_\alpha\rangle = \mathrm{e}^{i {\bf G}\cdot {\bf r}_\alpha} |{\bf k},{\bf r}_\alpha\rangle$ that requires $u_\alpha({\bf k}+{\bf G})=\mathrm{e}^{-i {\bf G}\cdot {\bf r}_\alpha} u_\alpha({\bf k})$, where ${\bf G} \in \Lambda^*$ is a reciprocal lattice vector. The other competing choice of Bloch basis is \begin{align} \widetilde{|{\bf k},{\bf r}_\alpha\rangle} =\sum_{{\bf R} \in \Lambda} \mathrm{e}^{i {\bf k} \cdot {\bf R}} | {\bf R} + {\bf r}_\alpha \rangle = \mathrm{e}^{-i {\bf k} \cdot {\bf r}_\alpha}|{\bf k},{\bf r}_\alpha\rangle \end{align} which is BZ periodic but has less appealing transformation properties. In this case, information regarding the orbital positions has been transformed away. In fact, the two competing bases are related by the non-regular gauge transformation $\widetilde{|{\bf k},{\bf r}_\alpha\rangle} = \mathrm{e}^{-i {\bf k} \cdot {\bf r}_\alpha}|{\bf k},{\bf r}_\alpha\rangle$, in which the phase function is not BZ periodic. The reason this matters is because the intra-unit-cell orbital configuration is absolutely crucial to understanding and classifying the chiral symmetric classes. For example, in the Su-Schrieffer-Heeger (SSH) chain\cite{su1979solitons} -- which is in the BDI class -- the choice of unit cell that bisects a dimerized bond leads to a topologically non-trivial chain, whereas the complementary choice of unit cell which includes whole dimers is trivial.
Hence, the choice of unit cell needs to be reflected in the $k$-space covariant derivative $\nabla_{\bf k}$ which is used to determine $\nu$. The appropriate definition of the covariant derivative turns out to be \begin{align*} \nabla_{\bf k} = \partial_{\bf k} + i\, \hat{{\bf r}} \end{align*} and is such that the Berry connection computed with either choice of basis is identical. Note that $\hat{{\bf r}}$ is the orbital position operator such that $\hat{{\bf r}}|{\bf k},{\bf r}_\alpha\rangle = {\bf r}_\alpha |{\bf k},{\bf r}_\alpha\rangle$ and is zero in the $\widetilde{|{\bf k},{\bf r}_\alpha\rangle}$ basis. Also, $i\nabla_{\bf k}$ is just the unit cell position operator $\hat{{\bf R}}$ expressed in the plane-wave basis. Now, the chiral topological insulator classes (AIII,BDI,CII) in 1D are classified by the topological index $\nu$, which can be expressed as a chiral (skew) polarization~\cite{ryu_topological_2010,mondragon-shem_topological_2014} \begin{align} \nu = \frac{1}{\pi}\int {{\mathrm{d}} k}\sum_{n} \langle u_{n}(k)| i\,S\,\nabla_k u_n(k)\rangle \in \mathbb{Z} \end{align} where $|u_n(k)\rangle = \sum_{\alpha} u_{n,\alpha}(k) |\alpha\rangle$ are the Bloch eigenstates and the $n$ sum extends over occupied $E_n(k)$ bands. The operator $S$ acts on orbital degrees of freedom and is the chiral grading operator that anti-commutes with the Bloch Hamiltonian. Because of the identification of $i\nabla_{k}$ with $\hat{R}$, this expression of $\nu$ has the interpretation of a 1D polarization weighted by $S$. However there is an alternative interpretation of $\nu$ that is more useful in revealing the origin of its quantization. First, rewriting the Bloch Hamiltonian in blocks of definite $S$ chirality yields the following block off-diagonal form \begin{align} H(k) = \sum_{n=1}^N E_n(k) \begin{pmatrix} 0_N & q_n(k)^\dagger \\ q_n(k) & 0_N \end{pmatrix} \end{align} for a system of $2N$ energy bands.
Here the first $N$ orbital states are $S=+1$ definite while the remaining $N$ are $S=-1$, and $0_N$ is the $N\times N$ zero matrix. The sub-matrices $q_n(k)$ are defined as \begin{align} [q_n(k)]_{\alpha \dot{\alpha}} = [u_{n\uparrow}(k)]_\alpha[u_{n\downarrow}(k)]_{\dot\alpha}^*,\quad \alpha,\dot\alpha=1,\ldots N \end{align} where \begin{align} u_{n\uparrow}(k) &= \tfrac{1}{\sqrt{2}}\left( u_n(k) +S u_n(k) \right) \\ u_{n\downarrow}(k) &= \tfrac{1}{\sqrt{2}}\left( u_n(k) -S u_n(k) \right) \end{align} are projections of the Bloch eigenstates $u_n(k)$ with positive energy $E_n(k)>0$ onto definite $S$ chirality. A gapped $H(k)$ guarantees that there is always a well-defined separation between the negative and positive eigenstates and hence a smooth $q_n(k)$. It can be shown that the chiral winding is equivalently expressed as \begin{align} \nu = \frac{1}{2\pi}\int {\mathrm{d}} k \, \text{Tr}\left[ Q(k)^\dagger i \nabla_{k} Q(k) \right] \label{eqn:nu} \end{align} where $Q(k) = \sum_{n=1}^N q_n(k) \in \text{U}(N)$ is unitary. Now the use of the covariant derivative $\nabla_k = \partial_k + i \hat{r}$ is absolutely necessary because the matrix $Q(k)$ is not BZ periodic. Nevertheless, transforming to the $\widetilde{|k,r_\alpha\rangle}$ basis with the transformation \begin{align*} \tilde{Q}(k) = D(k)^\dagger Q(k)D(k) \end{align*} with $D(k)= \mathrm{e}^{ik \, \hat{r}}$ yields a $\tilde{Q}(k)$ that is unitary and BZ periodic. Then $\nu$ is \begin{align} \nu = \frac{1}{2\pi}\int {\mathrm{d}} k \, \text{Tr}\left[ \tilde{Q}(k)^\dagger i \partial_{k} \tilde{Q}(k) \right] \end{align} which measures the winding of the phase of $\det\, \tilde{Q}(k) \in \text{U}(1)$ around the 1D BZ. It is manifestly quantized due to the BZ periodicity of $\tilde{Q}(k)$.
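Numerically, this winding can be read off from the accumulated phase increments of $\det\tilde{Q}(k)$ on a discrete $k$ grid. A minimal sketch with hypothetical names, applied to the SSH chain where $\tilde{Q}(k)$ reduces to the $1\times 1$ phase of the off-diagonal block, using the unit-cell convention in which $t_2$ is the inter-cell hopping:

```python
import numpy as np

def winding_nu(Qtilde):
    """Winding number of det(Qtilde(k)) around the 1D BZ from accumulated
    phase increments on a periodic k grid (Qtilde: (Nk, N, N) unitaries)."""
    det = np.linalg.det(Qtilde)
    dphase = np.angle(np.roll(det, -1) / det)   # increments in (-pi, pi]
    return int(np.rint(dphase.sum() / (2 * np.pi)))

# SSH chain, unit cell bisecting the strong bond: Qtilde(k) is the 1x1
# phase of the off-diagonal block q(k) = t1 + t2 e^{ik}
k = np.linspace(0.0, 2.0 * np.pi, 101)[:-1]     # periodic grid, endpoint dropped
t1, t2 = 0.3, 1.0                               # inter-cell hopping dominates
q = t1 + t2 * np.exp(1j * k)
nu = winding_nu((q / np.abs(q)).reshape(-1, 1, 1))
```

The grid must be fine enough that each phase increment stays below $\pi$; the sum of increments then telescopes to an exact multiple of $2\pi$, which is the source of the quantization in practice.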
In practical numerical experiments, the determination of $\nu$ proceeds not by computing the integral in (\ref{eqn:nu}) but by plotting the phase of $\det\, \tilde{Q}(k)$ as a function of $k$ and counting the number of times it winds around the origin. \end{document}
\section{Introduction} \label{SectionI} Positronium (Ps) is the hydrogen-like bound state of an electron and its antiparticle, the positron. Ps can exist in two different ground states: the singlet state p-Ps, rapidly decaying into two photons with a lifetime of \SI{0.125}{\nano\second}, and the triplet state o-Ps, annihilating in 3$\gamma$ photons with a lifetime of \SI{142}{\nano\second} in vacuum \cite{PositroniumReview}. Being a purely leptonic two-body system, this short-lived atom constitutes a privileged opportunity for high-precision studies of Quantum Electrodynamics (QED) for bound states, to test the validity of accurate QED correction calculations and to measure with high accuracy the fine structure constant $ \alpha $, the Rydberg constant $ R_\infty $ and the electron/positron mass $ m_e $ \cite{Karshenboim}. As a matter of fact, the Ps energy levels can be calculated as a perturbative series in powers of $ \alpha $ with very high precision, only limited by the knowledge of the fundamental constants \cite{QEDexpansion}. Hence, Ps offers a great advantage with respect to analogous calculations and experiments on hydrogen, for example, because in this case QED predictions require a precise knowledge of the proton finite-size effects \cite{Karshenboim,Karshenboim2} which are increasingly important in H spectroscopy. Some experimental tests are based on precision laser spectroscopy \cite{Karshenboim2}, in particular exploiting the two-photon Doppler-free transition \otS{} -- \tS{} between long-lived triplet o-Ps energy levels. These experiments can be realized because of the metastability of the \tS{} state, which decays, in a field-free region, by annihilation in \SI{1.14}{\micro\second} (the lifetime is increased by a factor of eight compared to the ground state due to the decrease in the overlap of the positron--electron wave-function \cite{Charlton}) with the consequence of a very narrow natural radiative width. 
First observations of this optical excitation were performed by Chu \emph{et al.} \cite{Chu} by using pulsed lasers on Ps atoms exiting a metallic surface. In improved experiments using continuous-wave lasers, the transition frequency was measured with a precision of 2.6 ppb, sufficient to provide a test for $ o(\alpha^4) $ QED corrections \cite{Fee}. To verify calculations to higher order, further experiments based on advanced Ps synthesis starting from a continuous positron beam transported towards a silica converter \cite{Alberola}, and using an enhanced laser excitation system, have been developed: new accurate determinations of the \otS{} -- \tS{} transition frequency are expected from a combination of Ps detection techniques, also using the observation of the \tS{} state annihilation \cite{Crivelli}. Other QED tests are based upon the determination of the Ps $ n = 2 $ fine structure splitting, in particular using microwaves to induce electric-dipole transitions between the metastable \tS{} state (produced by impinging positron beams on metal surfaces \cite{metalsurf}) and the \tP{} sublevels \cite{Hatamian}. All these Ps experiments can benefit from an efficient and clean production of the \tS{} metastable state, alternative to the usual methods of Ps excitation via a two-photon process or by direct excited-state emission from Ps converters. Another important research field in which Ps plays a privileged role is the study of matter-antimatter gravitational interactions. A direct measurement of the gravitational free fall of antimatter atoms, or matter-antimatter neutral systems like Ps, is foreseen as a test for speculative models aiming to describe the observed asymmetry between matter and antimatter in the Universe by a gravitational asymmetry \cite{Nieto}.
Specifically, for pursuing this objective, measurements made on Ps atoms are complementary to other experiments, either proposed \cite{Gbar, AlphaG} or already running \cite{Kellerbauer, Aegis}, employing antihydrogen; in the former case one expects that the possible absence of free fall would be a clear signature of an ``antigravity'' component in the gravitational interaction violating the weak equivalence principle. Experimental proposals currently under evaluation are based on detecting the free fall of Ps atoms, either by guiding a Rydberg Ps beam towards a position-sensitive detector with electrostatic potentials \cite{MillsCassidy} or by accurately measuring the vertical displacement of the atoms' trajectory with a matter-wave optical interferometer \cite{Oberthaler} or with a matter-wave mechanical interferometer in a novel Talbot-Lau configuration for increased compactness \cite{Sala}. Whichever experimental layout will be found most suitable, it will in any case be necessary to prepare a sample of Ps atoms in a long-lived state, otherwise the rapid annihilation would quickly reduce their useful number, making it very difficult, for instance, to distinguish an interferometric pattern against the background. Laser excitation to long-lived Rydberg levels was the first solution studied to overcome the lifetime limitation \cite{MillsCassidy, Aegis}. This promising route, however, has some potential limitations. It requires an experimental apparatus essentially free from residual fields (and field gradients) in the free-fall region, because of the high electrical polarizability of Rydberg states and their significant ionization rate due to the motional Stark effect induced by magnetic fields. Moreover, laser excitation and subsequent spontaneous optical decay populate a large number of sublevels, making the control of such detrimental effects difficult.
An alternative way to produce long-lived samples of Ps atoms is indeed laser excitation of the \tS{} metastable level. A collimated beam of metastable Ps atoms has been shown to be extremely useful for improving inertial sensitivity in proposed matter-wave interferometric layouts \cite{Oberthaler, Sala}. Recently, the production of Ps atoms in the \tS{} state by single-photon excitation in the presence of a static electric field was demonstrated \cite{AlonsoHoganCassidy}. This result was achieved because of the Stark mixing between S and P sublevels induced by an electric field of a few \si{\kilo\volt\per\centi\meter}, which allows the \tS{} states to be transiently populated as well by a laser pulse resonant with the (electric-dipole-allowed) \otS{} -- \tP{} transition. Subsequently, to avoid the rapid radiative decay of the mixed states towards the ground state, the electric field was adiabatically reduced to zero, finally leaving a beam of Ps metastable atoms. In this experiment a $ 6.2\% $ production efficiency relative to the number of formed Ps atoms was reported, consistent with the expected losses due to the electric field switching time. Another possible route for the production of the Ps metastable state, which is conceptually simpler and ideally free from this drawback, is based on the fact that the desired \tS{} states can be conveniently populated by spontaneous radiative decay from o-Ps atoms laser-excited from the ground state to the $ n = 3 $ level (specifically to the \ttP{} states). This decay competes with the radiative decay to the triplet ground state \otS{}, and in fact the expected value for the branching ratio of the spontaneous radiative decay \ttP{} -- \tS{} (in the absence of magnetic and electric fields) amounts to 12\% \cite{Villa,Caravita}. For comparison, the theoretical maximum efficiency of \tS{} production by the two-photon Doppler-free excitation process was determined to be $ 17.6\% $ \cite{Haas}.
The pathway proposed above was experimentally demonstrated in the hydrogen system \cite{Harvey}, where a beam of $ 10^6 $ metastable atoms/s was obtained. Note that scaling considerations between the hydrogen atom and Ps lead to the conclusion that their branching ratios must be identical. Laser excitation of the \ttP{} Ps level manifold by UV pulses was recently achieved \cite{NEqual3} in the framework of the AEgIS experimental program devoted to antihydrogen synthesis for gravitational studies \cite{Aegis}. In this work we used the same apparatus to demonstrate the feasibility of producing a source of metastable or long-lived Ps atoms. The present experiment was performed in a dedicated chamber where both guiding magnetic and electric fields were present. The detection technique is based on the analysis of the single-shot positron annihilation lifetime spectroscopy (SSPALS) spectrum \cite{Spectroscopy,TrapBasedBeam}. A simple illustration of the excitation and de-excitation processes involved in our experiment is shown in Fig.~\ref{simpl_scheme} (see Section~\ref{SectionIV} for a more detailed discussion). Ps atoms produced in the triplet o-Ps ground state are excited by a nanosecond UV laser pulse to the $n = 3$ manifold, whose sublevels are mixed by the presence of the guiding electric field. The excited atoms then follow three main de-excitation paths: (i) a first fraction of atoms decays spontaneously back to the \otS{} triplet ground state in a few tens of nanoseconds, where they resume the normal ground-state annihilation path; (ii) a second fraction of atoms quenches in the $\SI{25}{\milli\tesla}$ magnetic field present in the experimental chamber and decays to the \ooS{} singlet ground state where they promptly annihilate; (iii) a third fraction decays to the metastable \tS{} state, as discussed above.
\begin{figure}[h!tp] \centering \includegraphics[width=0.8 \linewidth]{schema8.png} \caption{Partial energy level diagram of Ps with the $ n = 3 $ laser excitation (solid arrow) and subsequent de-excitation processes (dotted arrows) at play in our experiment (see text).} \label{simpl_scheme} \end{figure} In the case of the first decay channel no significant change in the overall lifetime of the Ps sample is introduced by the laser excitation. Consequently, the shape of the corresponding annihilation signal in SSPALS spectra is left almost unvaried by the presence of the laser. The second decay channel causes only an immediate loss of a small fraction of Ps atoms, occurring simultaneously with the laser shot, and a consequent relative reduction of the population of Ps atoms in \otS{} at later times. This causes a small signal reduction in SSPALS spectra (of the order of $ 2 - 3 \, \% $, see \cite{NEqual3}) but does not alter the signal shape by introducing delayed annihilation signals. The third decay channel is the one of interest here for obtaining long-lived metastable Ps atoms. In the absence of an electric field, this state does not have first-order dipole-allowed radiative decay paths and it only annihilates, with the long annihilation lifetime of \SI{1.14}{\micro\second} \cite{Charlton}. Hence this would constitute a long-lived component present in the cloud of Ps excited atoms, and in the SSPALS spectrum one would observe a decrease of the annihilation signal immediately after the laser shot, followed by an increase at much later times when the Ps atoms hit the walls of the experimental chamber.
In the presence of an electric field (as in our experimental setup), Stark mixing considerably reduces the lifetime of Ps in the \tS{} state, as its lifetime is affected by the partial mixing with the \tP{} states which can radiatively decay back to \otS{} (the spontaneous radiative decay lifetime of \tS{} in our field configuration is indeed $ \sim \SI{105}{\nano\second} $, see the detailed discussion in Section \ref{SectionIV}). However, even if its lifetime is reduced by the presence of the fields, Ps in the \tS{} state still constitutes a long-lived component, compared to the triplet ground state \otS{}, which alters SSPALS spectra at later times. A novel analysis technique of SSPALS data has been developed to highlight even the slightest modifications of these spectra induced by the presence of a small fraction of long-lived Ps states. This novel technique can be of general interest for data analysis in similar experiments. In the following sections, we describe the experimental apparatus, present the statistical technique used for the accurate and reliable analysis of the results, and compare them with a simple rate equation model describing radiative and annihilation decays well after the laser excitation. \section{Experimental methods} \label{SectionII} The system used to perform the present experiment is described in detail elsewhere \cite{NEqual3, BunchingSystem, PositronBunching}. Briefly, positrons emitted by $\beta^+$ decay of a $\SI{9}{mCi}$ $\ ^{22}\text{Na}$ source were slowed by a solid Ne moderator \cite{SolidNeonModerator} to a kinetic energy of a few \si{\electronvolt}, trapped and cooled in a Surko-style trap through the use of buffer gas \cite{PlasmaAndTraps}.
The cooling efficiency of the Ne moderator decreases with aging: it peaks when the moderator is freshly grown (\emph{i.e.}, heated, evaporated and then regenerated by condensation of gaseous Ne) and then declines at an initial rate of several percent of efficiency per hour during the first few hours \cite{NeModerator}. Subsequently positrons were moved to a second trap (accumulator) where several pulses from the first trap were stored. There the positron plasma was radially compressed using the rotating-wall technique \cite{PositronCompression} and then extracted by fast pulses of the electric potential on the trap electrodes in the form of $ \SI{20}{\nano\second} $ bunches of $ \sim 3 \cdot 10^7 $ positrons at \SI{100}{\electronvolt} axial energy. The cloud was then transported to a magnetic-field-free region where it was further compressed in time \cite{PositronTimeBunching} using a 24-electrode buncher \cite{PositronBunching} to about $ \SI{7}{\nano\second} $ and accelerated onto a nanochanneled silicon target with a final kinetic energy of $\SI{3.3}{keV}$. In the target, $ e^+ $ were efficiently converted into o-Ps and emitted into vacuum \cite{NCP, CryoNCP}. A calibrated CsI detector and a microchannel plate (MCP) with a phosphor screen, set in place of the target, were used to characterize the number and the spot size of positrons impinging on the target. It was estimated that $ 30-40\% $ of the positrons released from the accumulator hit the target in a spot with a full width at tenth maximum $\leq\SI{4}{\milli\meter}$, the rest being lost in the transfer. Two symmetric coils generating a magnetic field perpendicular to the target were used to increase the positron transport efficiency onto the target.
The described experiments were performed while keeping the target at room temperature in a \SI{25}{\milli\tesla} magnetic field environment and in the presence of an electric field of around \SI{300}{\volt\per\centi\meter}, mostly parallel to the magnetic field in the laser excitation region. This field was produced by the last electrode of the buncher, which acts as an electrostatic lens. As mentioned above, Ps annihilations were monitored using the SSPALS technique. A $20 \times 25 \times 25$~\si{\milli\meter} lead tungstate (PbWO$_4$) scintillator \cite{TrapBasedBeam} coupled to a Hamamatsu R11265-100 photomultiplier tube (PMT) was placed $\SI{40}{\milli\meter}$ above the target to record photons emitted by positron-electron annihilations. To enhance the resolution at the longest decay times, the signal from the PMT was split and sent to two channels of a $\SI{2.5}{GHz}$ oscilloscope with high ($\SI{100}{mV/division}$) and low ($\SI{1}{V/division}$) gain. The joined data from the two channels give the SSPALS spectra shown in Fig.~\ref{Background} when $e^+$ are bunched on the surface of the MCP (no Ps formation; detector response) and on the target (Ps formation). In the presence of Ps formation, SSPALS spectra present a prompt peak and a tail. The prompt peak, in a region up to $\SI{100}{\nano\second}$ from the positron implantation, is due to $2\gamma$ annihilations of $e^+$. The tail is dominated by the Ps decay in vacuum, and is therefore proportional to the time derivative of the $n=1$ Ps population, $-dN_1/dt$. In the inset, the signal given by the high gain channel on the target is reported. Calibration measurements have shown that the detector performs linearly in the dynamic range of the high gain channel but can show slightly nonlinear behavior in the peak region within the dynamic range of the low gain channel.
\begin{figure}[htp] \centering \includegraphics[width=\linewidth]{Background.png} \caption{SSPALS spectra measured in the absence of Ps formation (black line, average of 338 shots), and in the presence of Ps formation (gray line, average of 159 single shots), normalized to the peak height. In the inset, the SSPALS spectrum acquired on the target with the high gain channel is shown. The time origin is taken at the maximum of the prompt peak.} \label{Background} \end{figure} For the present measurements a first UV laser pulse was used for Ps excitation to the $n = 3$ energy levels. A second IR laser pulse was also employed for selective photoionization of the excited atoms. The laser setup is described in detail in Refs. \cite{TwoStepExcitation} and \cite{LaserApparatus}, and was previously used to demonstrate $n=3$ excitation of Ps \cite{NEqual3}. With respect to that setup, here the laser system was improved to deliver about 2.5 times the energy in the UV. The UV pulse energy was kept above \SI{80}{\micro\joule}, peaking at about \SI{130}{\micro\joule} in optimal conditions (measured outside the experimental chamber, the $5 \%$ absorption of the viewport not considered). The wavelength of the UV laser was monitored during the measurements to verify that unavoidable thermal drifts induced by prolonged operation of the pump laser did not alter the wavelength setting on the resonance of the $1\text{S} \rightarrow 3\text{P}$ transition ($\lambda = 205.045 \pm 0.005~\si{\nano\meter}$) \cite{NEqual3}. It had a horizontal polarization (\emph{i.e.} polarization perpendicular to the sample), an asymmetric, nearly Gaussian temporal profile with a full-width at half maximum (FWHM) of \SI{1.5}{\nano\second}, and a Gaussian-like spectral profile with $ \sigma_{UV} = 2 \pi \cdot \SI{48}{\giga\hertz} $. Here the increased UV energy made it possible to sacrifice about $20\%$ of it by adding a telescopic system to control the beam spot size and shape.
The spatial intensity distribution was almost Gaussian before entering the last telescope, where the two lenses were used to significantly reduce the spot size and to make it astigmatic so that most of the energy ends up in front of the active region of the target (see Fig.~\ref{intensity}). The effective size of the UV beam used during the measurement was about 3.0--3.5~\si{\milli\meter} FWHM in both the horizontal and vertical directions. The second, intense infrared (IR) laser pulse at $\SI{1064}{\nano\meter}$ was simultaneously delivered to the experimental chamber to selectively photoionize o-Ps in the $n =3$ excited state. This horizontally polarized pulse had an energy of $\SI{50}{\milli\joule}$ and a temporal FWHM of $\SI{6}{\nano\second}$. It was superimposed on the UV pulse both in time, with a precision of $ < \SI{1}{\nano\second}$ obtained by using an optical delay line, and in space, by increasing its size so as to completely cover the excitation pulse area (top-hat profile of $\geq \SI{20}{\milli\meter}$ diameter). Both beams were aligned on the target region by monitoring their position with a CCD camera on a \macor screen placed inside the vacuum region, a few \si{\centi\meter} away from the target, rotated $45 \degree$ downwards so as to face the camera, which was placed on a $45 \degree$ angled viewport. A mutual synchronization of positrons and laser pulses with a time resolution of $\SI{2}{\nano\second}$ and a jitter of less than $\SI{600}{\pico\second}$ was obtained by a custom field-programmable gate array (FPGA) synchronization device (see \cite{LaserApparatus}). The time delay between the prompt positron annihilation peak and the laser pulses was set to $\SI{16}{\nano\second}$.
\begin{figure}[htp] \centering \includegraphics[width=0.45 \linewidth]{SpectrumA.png} \includegraphics[width=0.45 \linewidth]{SpectrumB.png} \caption{Intensity distribution in front of the target as observed on the \SI{25.4}{\milli\meter} \macor screen rotated by $45 \degree$ downwards, by a camera on a $45 \degree$ angled viewport (with respect to the beam direction) and orthogonal to the screen. The beam has an elliptical shape and an asymmetric distribution of light due to the passage through the astigmatic telescope. The left panel shows the intensity distribution without the telescope. The right panel shows the intensity distribution with the lens tilted by about $10 \degree$, showing the increased intensity in front of the active area. The exposure settings of the acquisition camera were kept the same.} \label{intensity} \end{figure} \section{Measurements and analysis of the results} \label{SectionIII} Two different sets of measurements were performed. The first one was carried out by simultaneously firing the UV and IR lasers. The goal was to verify that the excitation of Ps to the $n = 3$ level manifold and the following photoionization only induce a proportional decrease of the o-Ps population decaying into three $ \gamma $ rays and no significant increase of the annihilations at later times. The second set of measurements was performed by firing only the UV laser in order to populate the long-lived \tS{} state and observe whether an excess of signal was present at later times, induced by its decay (the annihilation channel (iii) discussed in the Introduction). The comparison between the averaged SSPALS spectra acquired with lasers off and with both UV+IR lasers on is reported in Fig.~\ref{SSPALS_UVIR}. Photoionization of the \ttP{} state dissociates the Ps, and the free positrons are quickly accelerated back toward the last negative electrode of our setup, where they annihilate in a few \si{\nano\second}.
Therefore these positrons do not contribute to the delayed annihilations in the SSPALS spectrum. The comparison between the averaged SSPALS spectra acquired with lasers off and with only the UV laser on is shown in Fig.~\ref{SSPALS_UV}. With reference to the annihilation dynamics described in the Introduction, the quenching of the excited states in the \SI{25}{\milli\tesla} magnetic field (channel (ii)) and, moreover, the decay to the metastable \tS{} state (channel (iii)), are expected to induce a decrease of the Ps annihilations immediately after the UV shot, and an increase at later times with respect to the SSPALS spectra with both lasers off (see also the inset in Fig.~\ref{SSPALS_UV}). The measurement was carried out by regenerating the Ne moderator and then alternating shots with the UV laser on and shots with the laser off, until the moderator aging caused the positron beam intensity to decrease enough to justify the growth of a new moderator (usually when the signal reached about 75 \% of the initial value). The positron accumulation time limits the repetition rate to about one shot every three minutes; each sequence of shots taken between two successive moderator regenerations is typically composed of between 16 and 25 shots. A total of $ 179 $ shots with the UV laser on, $ 159 $ shots with both the UV and IR lasers on and $ 338 $ with both lasers off were acquired. \begin{figure}[htp] \centering \includegraphics[width=\linewidth]{SSPALS_UVIR.png} \caption{SSPALS spectra of Ps into vacuum with lasers off in gray and UV+IR lasers on ($205.045 + 1064 \SI{}{\nano\meter}$) in black, normalized to the peak height. Each spectrum is the average of 159 single shots. The laser pulses were shot $\SI{16}{\nano\second}$ after the prompt positron annihilation peak.
The time origin is taken at the maximum of the prompt peak.} \label{SSPALS_UVIR} \end{figure} \begin{figure}[h!tp] \centering \includegraphics[width=\linewidth]{SSPALS_UV.png} \caption{SSPALS spectra of Ps into vacuum with lasers off in gray and UV laser on ($\SI{205.045}{\nano\meter}$) in black, normalized to the peak height. Each spectrum is the average of 179 single shots. The laser pulses were shot $\SI{16}{\nano\second}$ after the prompt positron annihilation peak. The detail of the difference of the two SSPALS spectra between $70$ and $\SI{300}{\nano\second}$ from the prompt peak is reported in the inset. The time origin is taken at the maximum of the prompt peak.} \label{SSPALS_UV} \end{figure} A way to quantify the amount of delayed annihilations induced by the presence of the laser is to compute the relative difference of the areas of SSPALS spectra with/without laser in selected time windows. We adopt here the same analysis methodology already demonstrated for detecting long-lived Rydberg states of Ps \cite{NEqual3}, based on the calculation of the $ S $ parameter, defined as $ S_i = (\Area_i^\off - \Area_i^\on) / \Area_i^\off $, where $ \Area_i^\on $ is the area under the spectrum in a selected time window of a single shot when the laser is on and $ \Area_i^\off $ is the area of the following shot in the same time interval when the laser is blocked. This definition of $S_i$ implicitly assumes the same positron beam intensity for both shots, or a proper normalization of the spectra, because the fraction of emitted Ps is expected to be proportional to the number of implanted positrons. Any quantity proportional to the positron beam intensity can provide a valid normalization tool. The prompt peak height is a frequently adopted choice \cite{NEqual3, PeakH}.
For the present case, however, we cannot rely on it, as we require a higher precision in normalizing the spectra than allowed by the level of linearity of the scintillator detector in the prompt peak (as mentioned before). To circumvent this difficulty, we used only the portion of the SSPALS spectrum acquired at high gain (therefore avoiding any detector saturation, see the inset in Fig.~\ref{Background}) to compute a suitable normalization factor. This required the development of a novel normalizing technique for SSPALS spectra. We will name it \emph{detrending} for its conceptual similarity with the homonymous technique used in signal analysis \cite{DetrendingDNA}. The technique developed here was specifically optimized for the analysis of SSPALS measurements where two distinct classes of time-interleaved measurements are present, in order to make the most efficient use of the a-priori knowledge about the laser status to reduce the error in estimating the normalization factor. A detailed and general formulation of our technique can be found in the Appendix. The technique consists firstly in computing the area under the SSPALS spectrum in a suitable time window, then pairing it with the time elapsed from the last moderator regeneration to the time the specific shot had been acquired. The resulting data series is well fitted by a second-order polynomial function to model the moderator aging, which is the most significant source of positron intensity variation as a function of time. For each moderator regeneration two polynomial fits are performed: the first fitting only the points acquired with the laser on, the second only the points acquired with the laser off. The average of the two resulting fitted polynomials is a model of the evolution of the shot intensity in time and provides the necessary normalization factors.
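As a minimal illustration, the detrending step can be sketched as follows; the function and variable names are ours (not those of the actual analysis code), and one call handles the shots of a single moderator regeneration cycle:

```python
import numpy as np

def detrend_areas(t, area, laser_on):
    """Normalize per-shot SSPALS areas for moderator aging (illustrative sketch).

    t        : time since the last moderator regeneration for each shot
    area     : raw area under the SSPALS spectrum for each shot
    laser_on : boolean flag per shot (True = laser fired)
    """
    t, area, laser_on = map(np.asarray, (t, area, laser_on))
    # Fit a second-order polynomial separately to laser-on and laser-off shots.
    p_on = np.polyfit(t[laser_on], area[laser_on], 2)
    p_off = np.polyfit(t[~laser_on], area[~laser_on], 2)
    # The average of the two fitted polynomials models the intensity drift.
    trend = 0.5 * (np.polyval(p_on, t) + np.polyval(p_off, t))
    # Dividing by the trend yields the normalized areas A_i^on / A_i^off.
    return area / trend
```

With interleaved shots, the average of the two fits cancels, to first order, the systematic offset between the laser-on and laser-off populations while still tracking the aging trend.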
To obtain the $\Area_i^\on$ and $\Area_i^\off$ parameters we divided the measured areas by the value of the average polynomial evaluated at the time each shot was acquired. For our analysis of the SSPALS spectra, two regions were chosen in which the experimental curve was integrated and the $ S_i $ parameter was computed. The first region was chosen so that it does not intersect the prompt peak (which is also implied by the previous condition of the high-gain channel not saturating) and so that most of the produced positronium would still be freely expanding in the chamber without hitting the walls. The range from \SI{70}{\nano\second} to \SI{350}{\nano\second} from the peak satisfies these conditions. Indeed, according to measurements performed on Rydberg Ps in the same chamber, after positron implantation with the same energy and in the same target, the interaction with the walls begins around \SI{350}{\nano\second} after the prompt peak \cite{NEqual3}. The second region was chosen to lie contiguous to the first one, ranging from \SI{350}{\nano\second} to \SI{500}{\nano\second}, where the signal approaches the noise level (see Fig.~\ref{Areas}). \begin{figure}[htp] \centering \includegraphics[width=\linewidth]{Areas2.png} \caption{SSPALS spectrum of a single shot acquisition with the UV laser on as seen by the high gain channel (continuous black curve). The region around the origin of the horizontal scale where the oscilloscope signal saturates corresponds to the prompt peak. The areas selected for the analysis are highlighted (see text).} \label{Areas} \end{figure} The resulting $S$ parameter obtained considering all of the acquired spectra is computed as \begin{equation}% S ~=~ \frac{\Area_\off - \Area_\on}{\Area_\off} \, , \label{SPDefinition} \end{equation} \noindent where $\Area_\on$ and $\Area_\off$ are the averages of the values of $\Area_i^\on$ and $\Area_i^\off$ respectively.
The uncertainty $\Delta S$ of the $S$ parameter, calculated with the described detrending technique, can be derived by using error propagation theory (see Eq.~\ref{ErrorPropagation}) starting from $\Area_\on$ and $\Area_\off$ and their respective uncertainties $\sigma_\on$ and $\sigma_\off$ (see Eq.~\ref{SigmaDefinition} in the Appendix). \begin{equation} \Delta S ~=~ \sqrt{~\frac{\sigma_\on^2}{\Area_\off^2} + \frac{\Area_\on^2 \cdot \sigma_\off^2}{\Area_\off^4}~} \, . \label{ErrorPropagation} \end{equation} \noindent Notice that in the whole procedure we avoided any background subtraction. Indeed, we expect the background to be due to the annihilations of Ps inside the nanochannels, to reemitted positrons and to the response of the detector to the positron burst, and therefore to be proportional to the shot intensity. As detailed in the Appendix, it is counterproductive to attempt a background subtraction under these premises. The results of the detrending technique applied to the acquired data in the \emph{first} and \emph{second} regions of the spectra are shown in Fig.~\ref{UVIRFinal} and Fig.~\ref{UVFinal}. The former figure contains data taken with both the UV and IR lasers on, while the latter is obtained with the UV laser only. The scatter plot points correspond to single shots: their horizontal coordinate is given by the time elapsed between the moderator regeneration and the moment the shot was acquired, while their vertical coordinate corresponds to the measured $\Area_i^{\on/\off}$ for each shot. Blue points correspond to shots acquired with the laser off, red points to shots acquired with the laser on. \begin{figure*}[htbp] \centering \includegraphics[width=0.49 \textwidth]{UVIR_Final_First.png} \includegraphics[width=0.49 \textwidth]{UVIR_Final_Second.png} \caption{Detailed analysis for the run in which both the UV and the IR lasers were used.
Each sample represents the normalized area (\emph{i.e.} $\Area^\on_i$ or $\Area^\off_i$) for a single shot. Full circles represent laser-on shots, empty squares represent laser-off shots. The sideways Gaussian curves represent an estimate of the distribution of the $\Area^{\on/\off}_i$ as described in the text. In the insets the computed values of the $S$ parameter are given.} \label{UVIRFinal} \end{figure*} \begin{figure*}[htbp] \centering \includegraphics[width=0.49 \textwidth]{UV_Final_First.png} \includegraphics[width=0.49 \textwidth]{UV_Final_Second.png} \caption{Detailed analysis for the run in which only the UV laser was used. Each sample represents the normalized area (\emph{i.e.} $\Area^\on_i$ or $\Area^\off_i$) for a single shot. Full circles represent laser-on shots, empty squares represent laser-off shots. The sideways Gaussian curves represent an estimate of the distribution of the $\Area^{\on/\off}_i$ as described in the text. In the insets the computed values of the $S$ parameter are given.} \label{UVFinal} \end{figure*} \begin{table*} \begin{tabular}{ccc} \textbf{First region} & \hspace{25mm} & \textbf{Second region} \\ \hline \result{0.926}{0.005}{1.074}{0.005}{13.8}{0.6} & \textbf{UV + IR} & \result{0.919}{0.011}{1.081}{0.012}{14.9}{1.4} \\ \result{0.989}{0.003}{1.011}{0.004}{2.2}{0.5} & \textbf{UV} & \result{1.001}{0.009}{0.999}{0.009}{0.0}{1.3} \end{tabular} \caption{Summary of the results of the detrending analysis for the selected areas.} \label{ResultTable} \end{table*} The two functions plotted sideways in the leftmost part of each graph are Gaussian curves centered on the averages of the $\Area_i^\on$ and $\Area_i^\off$ sets of points, and having the variances of the respective sample distributions. Hence they represent an estimate of the distribution of $\Area_{\on/\off}$ under the assumption that they are normally distributed.
The thick horizontal lines also mark the average values $\Area_\on$ and $\Area_\off$, and the dashed envelopes around them range over $\pm \sigma_\on$ and $\pm \sigma_\off$ respectively. The results of the data analysis are reported in Table~\ref{ResultTable}. They show an $S$ parameter in the \emph{first} region when both lasers are shot (see Fig.~\ref{UVIRFinal}) that is consistent, within the experimental uncertainty, with the observed $S$ in the \emph{second} region. As mentioned previously, when both lasers are fired a fixed fraction of the produced Ps is removed immediately after the prompt peak and the remaining population retains the original lifetime. The expected behavior, \emph{i.e.} the ratio between the two curves being constant in the SSPALS time frame and not affected by the interaction with the chamber walls, is confirmed. The UV laser data show a $ 2.2\% $ reduction of the annihilation rate in the \emph{first} region (see Fig.~\ref{UVFinal}), which can be consistently explained by the early stages of the excitation/de-excitation processes, mainly the magnetic quenching. If this were the only phenomenon affecting the Ps lifetime, the same value of the $S$ parameter should also be observed in the \emph{second} region, since these processes (as photoionization in the UV+IR case) remove a fraction of the Ps immediately after the prompt peak but do not affect the lifetime of the remaining fraction. On the contrary, the experimental data show an $S$ value in the \emph{second} region which, given the current precision, is incompatible with that measured in the \emph{first} region with a likelihood ratio \cite{Likelihood} of $90\%$. This variation is instead compatible with the presence of a long-lived fraction of Ps which contributes to the annihilation signal. This $S$ difference is attributable to the presence of the (partially mixed) metastable \tS{} fraction in the Ps beam.
The same conclusion holds when changing the integration intervals by some tens of nanoseconds. \section{Modelling} \label{SectionIV} To quantitatively support the interpretation of these results, we formulate here a simplified rate equation model which describes the long-time evolution of the (number) populations of Ps atoms in the relevant energy levels, to elucidate the complex dynamics of the optical and annihilation decays after laser excitation. As discussed in the Introduction, the presence of the fields in the experimental chamber has noteworthy consequences on the optical transition patterns and on the optical decay and annihilation lifetimes of excited Ps atoms. An accurate study of a Ps atom in these conditions can be done with the help of a simulation code which performs, for each $n$ manifold, the diagonalization of the full interaction Hamiltonian in arbitrary electric and magnetic fields \cite{Villa,Caravita}, using the same numerical methodology introduced in \cite{ReducedLifetime}. This code calculates the modified Ps energy levels, the generalized Einstein coefficients for the optical transitions, the sublevel lifetimes and the radiative decay branching ratios. \begin{figure}[h!tp] \centering \includegraphics[width=\linewidth]{schema9.png} \caption{Sketch of the relevant energy level structure of a Ps atom after $n = 3$ laser excitation, as discussed in the text. The spontaneous optical decay (red continuous arrows) and annihilation (green dashed arrows) lifetimes are indicated. The small ellipses indicate the calculated branching ratios for the decay of the mixed $n = 3$ sublevels. The large circle encloses the long--lived energy levels and the annihilation patterns considered for the reduced rate equation model.} \label{scheme} \end{figure} Generally speaking, the relevant effects of the fields in our experimental conditions can be summarized as follows (with reference to the energy level diagram shown in Fig.~\ref{scheme}).
The presence of the \SI{25}{\milli\tesla} magnetic field induces some singlet--triplet mixing between states with identical magnetic quantum number $m$, hence some $n = 2$ and $n = 3$ triplet substates can optically decay towards the singlet ground state \ooS{}, subsequently annihilating with the short lifetime of \SI{125}{ps}. This effect is known as \emph{magnetic quenching} \cite{PositroniumReview}. On the other hand, the presence of the (average) \SI{300}{\volt \per \centi\meter} electric field induces a Stark effect on the excited Ps atoms, with a mixing between substates belonging to different orbital quantum numbers. This Stark mixing is stronger for the $n = 3$ manifold, where for most substates the S, P or D character of the wavefunctions is completely lost and no known quantum numbers are adequate for their spectroscopic description. The transition probabilities from the $n = 3$ sublevels towards those of the $n = 2$ and $n = 1$ manifolds can be calculated as suitable linear combinations of the transition probabilities between unperturbed sublevels, which obey electric dipole selection rules. Conversely, in the case of the $n = 2$ manifold the mixing is small to moderate, and the substate wavefunctions mainly retain their unperturbed S or P character, with only a small contribution coming from the wavefunction of the other orbital quantum number. Thus, as done implicitly throughout this paper, we can reasonably designate them with the unperturbed spectroscopic symbols. It is important to note, as mentioned in the Introduction, that the small P contribution contained in the partially mixed \tS{} states determines a spontaneous decay towards the ground state \otS{}, with emission of one optical photon, otherwise forbidden by electric dipole selection rules. 
In order to investigate the production of long--lived Ps in such a (partially mixed) \tS{} state, particular attention was devoted to calculating the \textit{effective optical lifetime} of the whole Ps cloud in our experimental conditions, \emph{i.e.} its characteristic exponential decay time associated with the average spontaneous radiative decay rate \tS{}--\otS{} in the presence of the electric field of our setup. If the electric field were uniform, the \tS{}--\otS{} spontaneous radiative decay rate $ r_{opt} $ would be constant. If, however, the field is not uniform (as in our case), the atom's survival probability can still be approximated (provided the non-uniformities are not too severe) with a negative exponential law where $ r_{opt} $ is the average spontaneous radiative decay rate over the atom's flight trajectory. The average spontaneous radiative decay rate of the whole Ps cloud $ \langle r_{opt} \rangle $ is thus obtained by averaging over all possible trajectories within the cloud, and the effective optical lifetime is given by $ \tau_{opt} = 1 / \langle r_{opt} \rangle $. The calculation of $ \langle r_{opt} \rangle $ in our geometry was performed as follows. A detailed electric field map of our experimental chamber \cite{NEqual3,PositronBunching} was calculated using SIMION \cite{Simion} in the plane orthogonal to the laser propagation axis (see Fig.~\ref{MetaLifetime}, left panel). Using the values of $ r_{opt} $ calculated with the simulation code \cite{Villa,Caravita} as a function of the electric field, a spatial map of the optical lifetime was obtained from the electric field map (see Fig.~\ref{MetaLifetime}, right panel). Finally, the average spontaneous radiative decay rate of the whole \tS{} Ps cloud was calculated by means of a 2D Monte Carlo in the plane orthogonal to the laser propagation axis. 
Ps was assumed to be emitted isotropically from the target with an axial velocity of $ 1.0 \times 10^5 \, \si{\meter\per\second} $, in agreement with the Doppler velocimetry survey reported in \cite{NEqual3} (conducted in similar experimental conditions with the same target). The trajectory of each atom was assumed to be a straight line at an angle $ \theta $ with respect to the target's normal. A first averaging of the spontaneous radiative decay rate was performed along each trajectory to obtain $ r_{opt} $ for each value of $ \theta $. Finally, an averaging over $ \theta $ between $ -\pi/2 $ and $ \pi/2 $ (assuming a uniform emission distribution from the nanochannel converter) gave $ \langle r_{opt} \rangle $. This choice is in line with the outcome of \cite{NEqual3} using the same target, where a broad distribution of transverse velocities, consistent with an isotropic Ps emission, was observed. \begin{figure*}[htp] \centering \includegraphics[width=0.48 \textwidth]{final_efieldmap.png} \includegraphics[width=0.48 \textwidth]{final_lifetimemap.png} \caption{Electric field magnitude in the real experimental geometry (left panel) and optical lifetime of the $ 2^3S $ state (right panel; values above \SI{600}{\nano\second} have been clamped for better visual readability). The target position and the direction of the positron beam are reported. For the detailed geometry of the chamber see \cite{NEqual3,PositronBunching}.} \label{MetaLifetime} \end{figure*} The effective optical lifetime of the partially mixed \tS{} levels was estimated to be $ \tau_{opt} = \SI{105}{\nano\second} $, essentially due to the increased probability of optical de-excitation as the atoms approach the last electrostatic lens of the bunching system. 
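The trajectory-averaging procedure described above can be sketched in a few lines. In this illustration the decay-rate field `r_of_position` is a made-up stand-in for the map derived from the calculated electric field (the real map comes from SIMION and the diagonalization code), so the printed lifetime is not the experimental value; only the averaging logic follows the text.

```python
import math
import random

def r_of_position(x_mm, y_mm):
    """Hypothetical local decay rate in 1/ns (NOT the real field map):
    a weak baseline that grows toward an electrode placed near x = 60 mm."""
    return 1.0 / 600.0 + (1.0 / 80.0) * math.exp((x_mm - 60.0) / 10.0)

def rate_along_trajectory(theta, v_mm_per_ns=0.1, t_max=500.0, steps=2000):
    """Average r_opt along a straight line leaving the target at angle theta
    from the normal (v = 1.0e5 m/s = 0.1 mm/ns, as in the text)."""
    total = 0.0
    for i in range(steps):
        t = t_max * (i + 0.5) / steps
        x = v_mm_per_ns * t * math.cos(theta)
        y = v_mm_per_ns * t * math.sin(theta)
        total += r_of_position(x, y)
    return total / steps

# Average over emission angles theta in [-pi/2, pi/2], uniform distribution.
rng = random.Random(0)
rates = [rate_along_trajectory(rng.uniform(-math.pi / 2, math.pi / 2))
         for _ in range(500)]
mean_rate = sum(rates) / len(rates)
tau_opt = 1.0 / mean_rate  # effective optical lifetime, ns
print(f"effective optical lifetime ~ {tau_opt:.0f} ns")
```

With the real field map substituted for the toy one, the same double average (along each trajectory, then over $\theta$) yields the quoted $\tau_{opt} = \SI{105}{\nano\second}$.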
Other results derived from our calculations, relevant for the construction of a reliable reduced rate equation model (with reference to Fig.~\ref{scheme}), are: (1) the optical lifetime of the $n = 3$ manifold of mixed sublevels amounts to \SI{35}{\nano\second} on average; (2) the optical lifetime of the (partially mixed) \tP{} levels is of the same order as that of the corresponding unperturbed levels, resulting in \SI{3.4}{\nano\second}. Moreover, we observe that the averaged branching ratio of the metastable \tS{} production from the $n = 3$ mixed sublevels can be estimated at around 10\%, while for the \tP{} state production it is 12\%. The remaining 78\% can be attributed to the spontaneous optical decay towards the ground state \otS{} ($\sim$61\%) and to population losses towards rapidly annihilating singlet states, mainly due to magnetic quenching ($\sim 17 \%$, see also \cite{NEqual3}). From these observations it is clear that the long--lived fraction of the excited Ps states determining the long--time behavior of the SSPALS spectra in our experiment is essentially composed of \tS{} states. This suggests approximately separating the complex population dynamics of excitation plus optical and annihilation decays into two successive parts, as schematically pictured in Fig.~\ref{scheme}. The first part of the population dynamics starts with the arrival of the laser pulse and lasts a few tens of nanoseconds. The $n = 3$ manifold is first populated with efficiency $ \eta_3 \sim 14\% $, according to a weighted average of the photo--ionization experiment data listed in Tab.~1 (and assuming $ \sim 100 \% $ ionization efficiency \cite{NEqual3}). 
Subsequently, the three processes enumerated in the Introduction take place: (i) rapid optical decay towards the triplet ground state \otS{}; (ii) rapid annihilation decay due to magnetic quenching with efficiency $ \eta_q $, representing a net population loss; (iii) spontaneous decay towards the long--lived \tS{} states with relative efficiency $ \eta_m $. These last two quantities will be considered as fitting parameters in the rate equation model. The second part of the population dynamics starts some tens of nanoseconds after the laser pulse and is dominated by: (a) optical decays of the populated \tS{} states towards the triplet ground state with lifetime $\tau_{opt} = $ \SI{105}{\nano\second}; (b) annihilation decays of \tS{} states into three $\gamma $ photons with lifetime $\tau_2 = $ \SI{1.14}{\micro\second}; (c) annihilation decays of \otS{} with lifetime $\tau_1 =$ \SI{142}{\nano\second}. A simplified rate equation model of the populations' evolution in the time interval relevant to the experimental data analysis can thus be formulated by disregarding the complex sublevel dynamics right after the laser pulse, and focusing only on the longer time scale level dynamics. Naming $ N_1(t)$ and $ N_2(t)$ the number populations of Ps atoms in the \otS{} and \tS{} states, respectively, and introducing $N_0(t)$ as the number population of annihilated Ps atoms, the rate equations which describe the free decay dynamics are: \begin{equation} \begin{aligned} \frac{d N_2}{d t} &= -\frac{N_2(t)}{\tau_2} - \frac{N_2(t)}{\tau_{opt}} \\ \frac{d N_1}{d t} &= -\frac{N_1(t)}{\tau_1} + \frac{N_2(t)}{\tau_{opt}} \\ \frac{d N_0}{d t} &= \frac{N_1(t)}{\tau_1} + \frac{N_2(t)}{\tau_2} \, . \end{aligned} \label{EqModel} \end{equation} This set of differential equations can be integrated in a straightforward manner by setting proper initial conditions. 
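The rate equations above can be integrated numerically in a few lines. The following minimal sketch uses the lifetimes quoted in the text and, for illustration, the best-fit efficiencies; the integration window (400 ns) and the forward-Euler step are placeholders, not the intervals used in the actual analysis.

```python
# Lifetimes from the text (all times in ns).
TAU2 = 1140.0    # 2^3S -> 3 gamma annihilation lifetime
TAUOPT = 105.0   # effective 2^3S -> 1^3S optical lifetime
TAU1 = 142.0     # 1^3S annihilation lifetime

def evolve(n1, n2, t_end, dt=0.01):
    """Forward-Euler integration of the free-decay rate equations.
    Returns (N1, N2, N0) at t_end, with N0(t0) = 0."""
    n0, t = 0.0, 0.0
    while t < t_end:
        dn2 = (-n2 / TAU2 - n2 / TAUOPT) * dt
        dn1 = (-n1 / TAU1 + n2 / TAUOPT) * dt
        dn0 = (n1 / TAU1 + n2 / TAU2) * dt
        n1, n2, n0 = n1 + dn1, n2 + dn2, n0 + dn0
        t += dt
    return n1, n2, n0

eta3, etam, etaq = 0.14, 0.148, 0.15   # efficiencies quoted in the text

# Laser on: a fraction eta3*etam is parked in the long-lived 2^3S state,
# and eta3*etaq is lost to quenching right after the prompt peak.
on = evolve(1.0 - eta3 * etam - eta3 * etaq, eta3 * etam, 400.0)
# Laser off: all atoms start in the 1^3S ground state.
off = evolve(1.0, 0.0, 400.0)
print("annihilated fraction, laser on :", round(on[2], 3))
print("annihilated fraction, laser off:", round(off[2], 3))
```

The laser-on curve annihilates less within the window, since part of the population sits in the long-lived \tS{} state; in the actual analysis, $N_0$ integrated over the first and second regions yields the model $S$ values fitted to the data.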
In the case with laser on, following the above discussion, these are $N_2^\on (t_0) = \eta_3 \times \eta_m $ and $N_1^\on (t_0) = 1 - \eta_3 \times \eta_m - \eta_3 \times \eta_q $. In the case of laser off, where all atoms are assumed to be in the \otS{} state, they are simply $N_1^\off (t_0) = 1$ and $N_2^\off (t_0) = 0$. The initial time of the integration $t_0$ was chosen at $ (16+35) \, \si{\nano\second} $ after the prompt peak to let the $ n = 3 $ population decay to $ \sim 1 / e $ of its initial value, \emph{i.e.} to the point where the processes of the initial excitation dynamics can start being neglected (as discussed above). Note that in both cases we can arbitrarily set $N_0(t_0) = 0$ because the SSPALS spectra are determined by the derivative $d N_0 / d t$. Estimated values of the $ S $ parameter were obtained by solving numerically Eqs.~\ref{EqModel} and integrating the two sets of solutions with laser on and laser off over the same time intervals used for the experimental data analysis (\emph{i.e.} the first and second region, see Fig.~\ref{Areas}). To obtain the \tS{} production efficiency, the $S$ estimates were fitted to the corresponding experimental results obtained with the UV laser only, using $ \eta_m $ and $ \eta_q $ as free parameters and setting the excitation efficiency to the measured value $ \eta_3 = 14\% $, as discussed before. The outcome of the fitting procedure was verified by varying the integration starting time in the selected range. The results were found consistent within 5\% and well within their statistical uncertainties. \begin{table}[h] \centering \begin{tabular*}{0.9\columnwidth}{@{\extracolsep{\fill}}lll} \multicolumn{3}{c}{\textbf{Relative efficiencies}} \\ \hline $ \eta_m $ & (\tS{} branching eff.) & $ \ \ ( 14.8 \pm 9.4)\% $ \\ $ \eta_q $ & (quenching eff.) & $ \ \ (15.0 \pm 3.4)\% $ \\ $ \eta_3 \times \eta_m $ & (overall \tS{} prod. eff.) & $ \ \ (2.1 \pm 1.3) $\% \end{tabular*} \caption{Results obtained by fitting the rate equation model (Eqs. 
\ref{EqModel}) to the experimental data reported in Tab. \ref{ResultTable}. The best--fit parameter values for the $ \mathrm{3 \rightarrow 2} $ metastable production efficiency $ \eta_m $ and the quenching probability $ \eta_q $ are those obtained with $t_0 = (16+35) \, \si{\nano\second} $. The overall metastable production efficiency $ \eta_3 \times \eta_m $ is also reported.} \label{FitResultTable} \end{table} The values of $ \eta_m $ and $ \eta_q $ obtained from the best fit are reported in Table \ref{FitResultTable}. The estimated \tS{} production efficiency $ \eta_m \simeq (14.8 \pm 9.4)$\% is found statistically compatible with the expected $ 10\% $. The overall efficiency in exciting triplet ground-state Ps atoms to \tS{} in the present setup is finally obtained by multiplying the two efficiencies, $ \eta_3 \,\times\, \eta_m = (2.1 \,\pm\, 1.3)\% $. The number of Ps atoms in the \tS{} level produced per $ e^+ $ bunch is obtained from the estimated intensity of our $ e^+ $ source, which delivers on the target $ \sim 1.3 \cdot 10^7 $ $ e^+ $ every \SI{180}{s} (see \cite{PositronBunching}), multiplied by the Ps conversion efficiency of our target ($ \sim 35\% $, see \cite{PositronBunching,NCP}). The estimated production rate of Ps atoms in the \tS{} level is thus $ \sim 1.0 \cdot 10^5 $ every \SI{180}{s}. \section{Conclusions} We have studied the excitation of positronium to the long-lived \tS{} state by spontaneous radiative decay from the \ttP{} level manifold, excited by a UV laser pulse. The experiment was performed in a dedicated chamber and in the presence of a guiding $\SI{25}{\milli\tesla}$ magnetic field and an average $\SI{300}{\volt\per\centi\meter}$ electric field. 
The presence of the fields caused mixing between the sublevels of each $n$--manifold, with the consequence of inducing an optical decay of the populated \tS{} states (otherwise stable against one--photon radiative decay), finally shortening the lifetime of these states from $\SI{1.14}{\micro\second}$ to $\SI{105}{\nano\second}$. Ps atoms in these excited states nevertheless constituted a longer--lived component that was observed by means of single--shot positronium annihilation lifetime spectra (SSPALS). The evidence of the successful metastable state production was obtained with a novel analysis technique of SSPALS data, able to identify very small deviations from the reference spectra obtained without an exciting laser pulse. The experimental results were fitted with a rate equation model which describes the long--time evolution of the populations of the relevant Ps states after the $n = 3$ excitation event. Annihilation and decay rates were obtained using an exact calculation code of Ps energy levels and optical transitions in arbitrary electric and magnetic fields. The observed \tS{} state production efficiency relative to the amount of produced Ps was evaluated to be $ \eta_3 \times \eta_m = (2.1 \pm 1.3)$\%. This production efficiency is about 1/3 of the $ 6.2 \% $ recently obtained by Stark mixing between S and P sublevels during \otS{} -- \tP{} laser excitation \cite{AlonsoHoganCassidy}. However, for the Stark mixing method, the rate at which the electric field can be switched off (necessary to avoid the rapid radiative decay of the P component in the mixed state) limits the amount of long--lived \tS{} Ps atoms that survive the electric field switching. Conversely, the method demonstrated here is ideally free from this drawback, as it can be realized in the absence of any electric field inducing sublevel mixing and the subsequent radiative decay losses. 
A further advantage of our method is that it could allow, in a field-free environment, reaching high \tS{} production efficiencies. This can be realized, for example, by increasing the length of the \otS{}--\ttP{} laser pulse to $ > \SI{10}{\nano\second} $ and selecting a bandwidth that optimally covers the transition Doppler profile. After some iterations of laser excitation and spontaneous radiative decay, indeed, a very high fraction of the initial Ps atoms would be pumped to the \tS{} state. Another possibility could be to add an extra IR laser at \SI{1312.2}{\nano\meter} to directly pump the \ttP{} -- \tS{} transition. Ideally, this would make \otS{} -- \ttP{} -- \tS{} an equally--populated three--level system (assuming no extra losses and saturation of the laser pulses) that could lead to up to $ 10 \% $ excitation efficiency with the present UV bandwidth, corresponding to a fourfold gain with respect to the current setup and starting to be competitive with direct two-photon excitation (17.6\%, see \cite{Haas}), yet not requiring intense narrow-band lasers. Thus, the production of Ps atoms in the \tS{} state by spontaneous radiative decay from the \ttP{} manifold in the absence of external fields seems to be a promising alternative for obtaining this metastable Ps state, potentially leading to higher production efficiency. Measurements of the \ttP{} -- \tS{} decay in an electric-field-free environment are planned in order to verify the suitability of this method.
\section{Introduction} New evidence for the existence of massive black holes ($> 10^4$ M$_{\odot}$) near the Galactic center \citep[][but see also \citet{ravi17}]{oka17,tsuboi17} raises fundamental questions about the formation of supermassive black holes (SMBHs) and their evolution within galaxies. Combined with the growing number of observed offset and dual active galactic nuclei \citep[e.g.][]{comerford11,comerford14,barrows16,barrows17}, this establishes the idea that non-central SMBHs may be relatively common and potentially observable not only in the Milky Way (MW) but also in other galaxies. Previous studies have suggested that the orbital decay experienced by SMBHs accreted onto a galaxy through minor mergers can be stalled, creating a population of SMBHs that fail to reach the galactic center within a Hubble time \citep{G94,schneider02,volonteriOffCenBH05,bellovaryBH10,tremmel15,tremmel18,dosopoulou17,dvorkin17}. \citet{tremmel18} find that only a fraction of mergers involving MW-mass halos result in the formation of close SMBH pairs. Rather, many galaxy mergers (and the majority of minor mergers) result in SMBHs that are deposited on wide orbits following the disruption of their host galaxy during the interaction. Because these wandering SMBHs often come from smaller galaxies, their masses are likely close to their initial mass. Due to their connection to a previous population of satellite galaxies, future observations of wandering SMBHs may provide important insight into how often, at what mass, and in what halos SMBHs are formed. The {\sc Romulus} cosmological simulations \citep{tremmel17} are uniquely capable of tracking the orbital evolution of SMBHs within their host galaxies to sub-kpc accuracy \citep{tremmel15,tremmel18}. SMBHs are also seeded in the early Universe based on gas properties. This allows SMBHs to exist in smaller halos and at earlier times compared with more common approaches. 
In this Letter we use the {\sc Romulus25} simulation to self-consistently predict the average number of wandering SMBHs and their dynamics within MW-mass galaxies. In \S2 we discuss the simulations, the sub-grid physics implemented for SMBHs, and our halo selection criteria. In \S3 we discuss our results predicting the population of wandering SMBHs in MW-mass galaxies, which are summarized in \S4. \section{The Simulations} {\sc Romulus25} is a $25^3$ Mpc$^3$ uniform volume simulation run with a $\Lambda$CDM cosmology following the most recent results from Planck \citep[$\Omega_0=0.3086$, $\Lambda=0.6914$, h$=0.67$, $\sigma_8=0.77$;][]{planck16}, a Plummer equivalent force softening of $250$ pc (a $350$ pc spline force softening is used), and mass resolution for dark matter and gas of $3.39 \times 10^5$ and $2.12 \times 10^5$ M$_{\odot}$ respectively. The simulation was run using the new Tree + SPH code, {\sc ChaNGa} \citep{changa15}, including models for a cosmic UV background, star formation, `blastwave' supernovae (SN) feedback, low temperature metal cooling \citep{wadsley04,wadsley08,Stinson06,shen10}, as well as a novel implementation of SMBH formation, growth and dynamics \citep{tremmel15,tremmel17}. {\sc Romulus25} reproduces $z=0$ empirical relations between halo mass, galaxy stellar mass, and SMBH mass. It also results in realistic cosmic star formation and SMBH growth histories \citep{tremmel17}. SMBHs of mass $10^6$ M$_{\odot}$ are seeded from gas in rapidly collapsing, pristine regions capable of quickly producing a very massive black hole. More than 85\% of SMBHs are seeded within the first Gyr of the simulation without \textit{a priori} assumptions regarding their halo occupation. This results in SMBHs being seeded in $10^8-10^{10}$ M$_{\odot}$ halos, with their occupation evolving such that only $\sim10\%$ of $10^{10}$ M$_{\odot}$ halos host SMBHs at $z = 0$. 
This seeding method allows for a more complete census of SMBHs throughout each galaxy's merger history. Once seeded, SMBHs are allowed to grow by accreting gas via a modified Bondi-Hoyle prescription that utilizes the local \textit{resolved} kinematics of gas to account for angular momentum support. Free parameters associated with sub-grid models for SMBH and stellar physics were constrained by an extensive parameter optimization. For more details we refer the reader to \citet{tremmel17}. Crucial to this work, the orbital evolution of SMBHs in {\sc Romulus25} is tracked down to sub-kpc scales by utilizing the sub-grid model presented in \citet{tremmel15} to account for unresolved dynamical friction. This method has been explicitly shown to result in realistic SMBH orbital evolution \citep{tremmel15,tremmel18}. SMBHs form close pairs when they are within two softening lengths ($0.7$ kpc) of one another and are considered relatively bound ($\frac{1}{2}|\Delta \textbf{v}|^2 < \Delta \textbf{a} \cdot \Delta \textbf{r}$, where $\Delta \textbf{v}$, $\Delta \textbf{a}$, and $\Delta \textbf{r}$ are the relative velocity, acceleration, and distance vectors between the two SMBHs). When this occurs, the individual kinematics are no longer followed and the two SMBHs act as a single object with the sum of the two masses and the same total momentum. We therefore consider SMBHs in the central $0.7$ kpc of the halo to be central SMBHs and all others to be `wanderers'. While this work has been motivated by recent claims of massive, wandering black holes in the Milky Way, our simulations are not tuned to reproduce these results, nor do we model SMBHs smaller than $10^6$ M$_{\odot}$. Rather, the orbital evolution of SMBHs in {\sc Romulus25} is purely a prediction of the simulation. We use the Amiga Halo Finder \citep{knollmann09} to extract individual halos from the volume. 
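The close-pair criterion described above (separation below two softening lengths, and relative kinetic energy per unit mass below the work term $\Delta \textbf{a} \cdot \Delta \textbf{r}$) is easy to sketch in post-processing. The helper below is our own illustration, not the simulation's API; units must simply be consistent between the velocity and acceleration-times-distance terms.

```python
# Sketch of the SMBH close-pair criterion (illustrative only).
# dr: relative position vector (kpc), dv: relative velocity,
# da: relative acceleration; the kinetic and work terms must use
# consistent units for the comparison to be meaningful.

def forms_close_pair(dr, dv, da, softening=0.35):
    """True when the two SMBHs are within two softening lengths and
    'relatively bound': (1/2)|dv|^2 < da . dr."""
    sep = sum(x * x for x in dr) ** 0.5
    kinetic = 0.5 * sum(v * v for v in dv)
    work = sum(a * r for a, r in zip(da, dr))
    return sep < 2 * softening and kinetic < work

# Toy example: close, slowly moving pair accelerating toward each other.
bound = forms_close_pair(dr=(0.5, 0.0, 0.0), dv=(0.1, 0.0, 0.0),
                         da=(1.0, 0.0, 0.0))
# Same kinematics but a 1 kpc separation exceeds the 0.7 kpc threshold.
too_far = forms_close_pair(dr=(1.0, 0.0, 0.0), dv=(0.1, 0.0, 0.0),
                           da=(1.0, 0.0, 0.0))
print(bound, too_far)
```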
SMBH positions are taken relative to the center of the halo and their velocities relative to the center of mass velocity of the halo's inner 1 kpc. We define `Milky Way-like' to include halos with total mass between $5\times10^{11}$ and $2\times10^{12}$ M$_{\odot}$. {\sc Romulus25} contains 26 such halos at $z=0$, excluding all satellites. Halos in this mass range are large enough to have rich merger histories and substantial populations of wandering SMBHs. Smaller galaxies have quieter merger histories and fewer wandering SMBHs, while there are far fewer larger galaxies in {\sc Romulus25}. The similarity to the MW and Andromeda is also convenient because, due to their proximity, they may be promising initial laboratories for wandering SMBH studies. The stellar masses of these halos are similar to that of the Milky Way ($\sim 10^{10.15}-10^{10.85}$ M$_{\odot}$) after applying a factor to account for observational limitations \citep{munshi13}. We focus on SMBHs within the inner 10 kpc of halos, as this represents a region dominated by the central galaxy (for reference, the MW disk is $\sim10$ kpc in radius). \section{The Population of Wandering SMBHs in Milky Way-mass Halos} \begin{figure} \centering \includegraphics[trim=15mm 5mm 10mm 20mm, clip, width=90mm]{cum_frac_number_bhs_10kpc_reverse.pdf} \caption{{\sc Wandering SMBHs near the center of MW-mass Halos}. The cumulative fraction of MW-mass halos in {\sc Romulus25} as a function of the number of SMBHs they host, including central SMBHs. All halos host at least one SMBH within 10 kpc from halo center, but the majority host more than that. The black line represents the entire sample, the blue dashed line the sub-sample without major mergers since $z=0.75$, and the orange dot-dash line the sub-sample visually categorized as having a disk morphology. Hosting several wandering SMBHs is the norm for MW-mass halos (only 1 of the 26 in {\sc Romulus25} does not have any). 
The number of wandering SMBHs is insensitive to morphology or recent merger history.} \label{number_bhs} \end{figure} \begin{figure*} \centering \includegraphics[trim=15mm 17mm 20mm 15mm, clip, width=150mm]{h38_wanderingbhs_arrows_white_final.pdf} \includegraphics[trim=15mm 17mm 20mm 15mm, clip, width=150mm]{h44_wanderingbhs_arrows_white_final.pdf} \includegraphics[trim=15mm 0mm 20mm 15mm, clip, width=150mm]{h48_wanderingbhs_arrows_white_final.pdf} \caption{{\sc Spatial Distribution and Velocities of Wandering SMBHs}. Stellar images of three galaxies from side-on (left) and face-on (right) orientations relative to the galactic disk. Galaxies were chosen from our parent sample of 26 galaxies in MW-mass halos to be most similar to the MW in terms of having a disk-dominated morphology and lacking any major (1:4 or larger) mergers since $z = 0.75$. Each pixel is colored based on emission in the U (blue), V (green), and J (red) bands, assuming a Kroupa IMF. Circles indicate the positions of the SMBHs and the arrows the direction and magnitude of their velocity relative to the galactic center. The wandering SMBHs have orbits with random inclination and eccentricity and are generally not in the galactic disk.} \label{galaxy_images} \end{figure*} We find a total of 316 SMBHs, including central SMBHs and excluding any within satellite halos, residing within the virial radius of 26 MW-mass halos in {\sc Romulus25}, an average of $12.2 \pm 8.4$ SMBHs per halo and an average of $5.1 \pm 3.3$ SMBHs within 10 kpc of halo center, again including central SMBHs (133 total; errors are standard deviation). All but one of the 26 halos host a single central SMBH within the inner 0.7 kpc of the galaxy, with the remaining halo's most central SMBH residing approximately 1 kpc from halo center. Of the 108 offset SMBHs that exist within 10 kpc from halo center, 90\% entered the inner halo ($D < 10$ kpc) more than 2 Gyr previously. 
Many have existed within their current host since the first few Gyr of the simulation. These are not SMBHs with orbits actively decaying toward the center, but SMBHs on long lived, kpc-scale orbits within their host galaxy. In Figure~\ref{number_bhs} we plot the cumulative fraction of MW-mass halos hosting different numbers of SMBHs within their central 10 kpc, including central SMBHs. We select a subset of MW-mass halos that have had no mergers of total mass ratio greater than $0.25$ since $z = 0.75$ (16 halos) and another subset with central galaxies visually inspected to have a disk morphology at $z=0$ (12 halos). Wandering SMBHs are commonplace in MW-mass galaxies, with only one of 26 halos hosting just a single SMBH in its inner 10 kpc. The morphology or recent merger history of the galaxy does not affect the number of wandering SMBHs, consistent with the fact that many wanderers entered the galaxy at early times and are disconnected from its recent evolution. Figure~\ref{galaxy_images} shows synthetic images of stars in three example galaxies, all of which are disk dominated and have had no recent major mergers, similar to our current model of the MW. SMBH orbits are generally not within the galactic disk and have a wide range of inclinations and eccentricities. Out of the non-central ($D > 0.7$ kpc) SMBHs within 10 kpc of halo center in disk dominated galaxies, only $20\pm7\%$ exist within 30 degrees of the plane of the disk (see Figure~\ref{bh_angles}). If the orientations were random, with the polar angle relative to the disk plane, $\phi$, drawn from a distribution flat in $\sin(\phi)$, 50\% of the SMBHs would lie in this region, making this result significant at more than $4\sigma$. A significant fraction of these SMBHs have large velocities perpendicular to the disk, indicating that they are only passing through the region. This is consistent with previous work. 
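The 50\% isotropic expectation quoted above follows from the fact that, for directions uniform on the sphere, the fraction within an angle $\phi_0$ of a plane is $\sin(\phi_0)$, so $\sin(30^\circ) = 0.5$. A quick Monte Carlo check (ours, purely illustrative):

```python
import math
import random

# For an isotropic direction, z = cos(polar angle from the pole) is uniform
# on [-1, 1], and the angle from the disk plane is phi = arcsin(|z|).
# The fraction with phi < 30 deg is therefore sin(30 deg) = 0.5.
rng = random.Random(42)
n = 200_000
hits = 0
for _ in range(n):
    z = rng.uniform(-1.0, 1.0)
    phi = math.asin(abs(z))
    if phi < math.radians(30.0):
        hits += 1
frac = hits / n
print(f"fraction within 30 deg of the plane: {frac:.3f}")  # ~0.500
```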
Mergers aligned with the galactic disk are more likely to deposit SMBHs closer to the galactic center \citep{callegariBH09,callegari11}. Further, the disk is denser with stars and gas, providing more efficient dynamical friction. \begin{figure} \centering \includegraphics[trim=10mm 7mm 10mm 30mm, clip, width=90mm]{angle_distribution.pdf} \caption{{\sc SMBH Locations Within Disk Galaxies}. The distribution of polar angles relative to halo center and the plane of the galactic disk for wandering SMBHs ($0.7 < D < 10$ kpc) in MW-mass disk galaxies from {\sc Romulus25} (blue line). The dashed black line represents the expectation from randomly sampling a unit sphere. In {\sc Romulus25} wandering SMBHs show a clear preference to exist out of the disk plane.} \label{bh_angles} \end{figure} The masses of wandering SMBHs are often very close to their initial mass, with over half of those within 10 kpc from halo center having grown by less than 70\% of their initial mass. The gas supply declines quickly away from the galactic center, making accretion more difficult to sustain. Still, some have grown by several times their initial mass, with 8\% having grown by over a factor of 10. Most of these SMBHs gain their mass through mergers and accretion early in the simulation, before they are accreted onto their current host. Others were once central SMBHs that got perturbed outward and replaced. In rare cases, wandering SMBHs are able to grow significantly while on orbits that take them periodically close to the galactic center. Figure~\ref{bh_velocities} shows the radial and tangential velocities of the SMBHs within 10 kpc of halo center in units of their host halo's maximum circular velocity, which varies between $\sim150-300$ km/s for our sample. The velocities are taken relative to the center of mass velocity of the halo's central kpc. The black line represents a total velocity of $\sqrt{2}v_{max}$, which is roughly equal to the escape velocity of the galaxy. 
Many of the SMBHs lie within this line. In all but two of the galaxies (which both show signs of disruption) the radius of maximum circular velocity is much less than 10 kpc (generally 1-3 kpc). This implies that most SMBHs are bound to the central galaxy, while the 16\% that lie outside of this region are bound to the halo on larger scales and have more eccentric orbits. Some of these wandering SMBHs may be surrounded by dense nuclear star clusters with typical masses of 10$^{6-7}$ M$_{\odot}$ and effective radii of $\sim10$ pc \citep{wehner06,ferrarese06b,scottGraham13,scottGraham13b}. {\sc Romulus25} cannot resolve such detailed structures, but were they to exist they would effectively increase the dynamical mass of the SMBHs and potentially cause them to sink more efficiently. To test this effect, we follow the method used in \citet{Barausse12} and approximate the sinking timescale for each wandering SMBH using the formula derived in \citet{BinneyTremaine}. \begin{equation} \mathrm{t}_{\mathrm{df}} \sim \left( \frac{19\, \mathrm{Gyr}}{\ln\Lambda} \right ) \left( \frac{r_i}{5\, \mathrm{kpc}} \right ) \left ( \frac{\sigma}{200\, \mathrm{km/s}} \right ) \left( \frac{10^8\, \mathrm{M}_{\odot}}{m_{\mathrm{bh}}} \right ) \end{equation} \noindent We assume $\Lambda \sim b_{max}/b_{min}$ with the maximum impact parameter, $b_{max}$, equal to the initial radius of the SMBH orbit, $r_i$. We take the velocity dispersion of each halo to follow the empirical relation from \citet{oser12}: \begin{equation} \sigma \sim 190 \left ( \frac{\mathrm{M}_{\star}}{10^{11}\, \mathrm{M}_{\odot}} \right )^{0.2} (1+z)^{0.44} \end{equation} Using Eq. (1) with $r_i$ equal to the final radial distance of each SMBH from galaxy center, we estimate the dynamical friction timescale at $z = 0$ for each wandering SMBH in the inner 10 kpc of their host halo when a factor of 10 is added to their mass (a high estimate of the ratio of nuclear star cluster to SMBH mass). 
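Eqs. (1) and (2) can be evaluated directly for a representative case. The numbers below (a MW-like stellar mass of $5\times10^{10}$ M$_{\odot}$ at $z=0$, a $10^6$ M$_{\odot}$ SMBH boosted by a factor of 10 for a putative nuclear star cluster, starting at $r_i = 5$ kpc) are illustrative inputs chosen by us, not values taken from the simulation output:

```python
import math

def sigma_kms(mstar_msun, z):
    """Eq. (2): velocity dispersion (km/s) from the Oser et al. relation."""
    return 190.0 * (mstar_msun / 1e11) ** 0.2 * (1.0 + z) ** 0.44

def t_df_gyr(r_i_kpc, sigma, m_bh_msun, b_min_kpc=0.01):
    """Eq. (1): Binney & Tremaine sinking timescale in Gyr, with
    ln(Lambda) = ln(b_max / b_min), b_max = r_i, b_min = 10 pc
    (the characteristic nuclear-star-cluster size)."""
    ln_lambda = math.log(r_i_kpc / b_min_kpc)
    return (19.0 / ln_lambda) * (r_i_kpc / 5.0) \
        * (sigma / 200.0) * (1e8 / m_bh_msun)

sigma = sigma_kms(5e10, 0.0)       # MW-like stellar mass at z = 0
t = t_df_gyr(5.0, sigma, 1e7)      # 10^6 Msun SMBH with a 10x NSC boost
print(f"sigma ~ {sigma:.0f} km/s, t_df ~ {t:.0f} Gyr")
```

Even with the tenfold mass boost, the sinking timescale for this example exceeds a Hubble time, consistent with the conclusion that most of these wanderers would remain stalled on kpc-scale orbits.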
We apply a minimum impact parameter, $b_{min}$, of $10$ pc to represent the characteristic size of nuclear star clusters. While this significantly decreases the timescale estimated by Eq. (1), 65\% of these wandering SMBHs still have a sinking timescale longer than the total time they have spent at $D < 10$ kpc (50\% for $D < 5$ kpc). While this is a simplistic estimate, it indicates that our results are robust to the existence of unresolved, massive stellar components around SMBHs. The orbits of SMBH pairs are not followed at separations closer than 0.7 kpc (see \S2), but we test whether it is likely that unresolved wandering SMBHs exist at smaller scales using Eqs. (1) and (2). We take the initial orbital radius, $r_i$, to be the smallest resolved separation, 0.7 kpc. Because Eq. (2) is fit only to galaxies at $z<2$, we evaluate the equation with $z = \mathrm{min}(z_{pair},2)$, where $z_{pair}$ is the redshift of close pair formation. We take $b_{max}$ to be 0.7 kpc and $b_{min}$ to be the 90 degree deflection radius of the SMBH. The stellar mass is taken directly from the pair's host galaxy at the appropriate redshift. Whether calculated with host properties at $z_{pair}$ or at $z=0$, the results of Eqs. (1) and (2) predict that each central close pair should form a bound binary well before a Hubble time. Because {\sc Romulus25} does not include lower mass black holes that would have much longer binary formation timescales, these results are not in tension with recent claims for black holes with mass $\sim10^4$ M$_{\odot}$ on sub-kpc orbits in the MW \citep{oka17,tsuboi17}. \begin{figure} \centering \includegraphics[trim=5mm 3mm 10mm 22mm, clip, width=90mm]{velocity_scatter_10kpc.pdf} \caption{{\sc Distribution of Wandering SMBH Velocities}. The radial velocity direction and magnitude versus the total tangential velocity magnitude for wandering SMBHs ($D>0.7$ kpc) in the inner 10 kpc of MW-mass halos. 
Both velocities are given relative to the maximum circular velocity of their host halo, $v_{max}$. The solid black lines represent $\sqrt{2}v_{max}$. The orbits of wandering SMBHs have random eccentricities and the majority are bound to the central galaxy.} \label{bh_velocities} \end{figure} Figure~\ref{distances} plots the median cumulative distribution of SMBH distances in MW-mass halos. Many wandering SMBHs exist at $R>10$ kpc. These SMBHs have grown even less than those closer to halo center, with a median mass of only 1.07 times their initial seed mass and 70\% having grown by less than a factor of 2. Again, there is no clear dependence on galaxy morphology or recent merger history. \section{Summary} We present a prediction from a large-scale cosmological simulation for the population of wandering SMBHs in MW-mass halos using 26 halos extracted from the {\sc Romulus25} simulation. {\sc Romulus25} is uniquely able to track the evolution of SMBH orbits during and following galaxy mergers while self-consistently accounting for the changing kinematics and structure of their host galaxy \citep{tremmel15,tremmel17,tremmel18}. \begin{figure} \centering \includegraphics[trim=17mm 5mm 10mm 20mm, clip, width=90mm]{Distance_Distribution_new.pdf} \caption{{\sc Distance Distribution for Wandering SMBHs}. The median cumulative number of wandering SMBHs as a function of distance from halo center for MW-mass halos. Shaded regions span the 25th and 75th percentiles. The colors and styles are the same as Figure~\ref{number_bhs}.} \label{distances} \end{figure} We predict that MW-mass halos often host several wandering SMBHs on kpc-scale orbits regardless of merger history or morphology. Wandering SMBHs are unlikely to be found in galactic disks, a result that is consistent with previous works \citep{callegariBH09,callegari11}.
The majority of wandering SMBHs have grown little since their seeding, though some have grown substantially, with growth mostly occurring at early times prior to their arrival in their final host. Previous works have shown how SMBHs are commonly deposited on wide, long-lived orbits within galaxies as a result of the disruption of their host during a galaxy merger \citep{Yu02,callegariBH09,callegari11,dosopoulou17,tremmel18}. Without any stellar core to assist in their orbital decay, the SMBHs will remain on kpc-scale orbits for long periods of time. The merger history of MW-mass halos, including our own, has included many minor mergers more likely to lead to tidal disruption and the deposition of `naked' SMBHs within the galaxy \citep{tremmel18}. Such a population of wandering SMBHs in MW-mass halos has been predicted both by semi-analytic models \citep[e.g.][]{volonteriOffCenBH05,dvorkin17} and by cosmological simulations \citep{bellovaryBH10,volonteri16}. An improvement over previous cosmological simulations, {\sc Romulus25} is able to accurately track the orbital evolution of SMBHs using a well-tested technique that accounts for their orbital decay due to unresolved dynamical friction \citep{tremmel15}. SMBHs are seeded in very specific environments at the centers of proto-galaxies at high redshift. The results presented here self-consistently incorporate galaxy merger histories as well as SMBH occupation and dynamical evolution with fewer \textit{a priori} assumptions and in more detail compared to previous cosmological simulations and semi-analytical models. {\sc Romulus25} does not include the effect of three-body interactions, nor does it include prescriptions for gravitational recoil events resulting from SMBH mergers. Both may contribute further to the population of wandering SMBHs in galaxies \citep[e.g.][]{volonteriOffCenBH05,blecha16}. Observing wandering SMBHs in massive galaxies can provide unique constraints on SMBH formation and early growth.
This may be possible in the near future if these SMBHs retain a bound stellar population around them, if they interact with nearby stars and result in tidal disruption events, or if they accrete gas and appear as an ultra-luminous X-ray source or off-center active galactic nucleus. We will explore the observable nature of wandering SMBHs in future work. \\ \section*{Acknowledgments} FG, TQ and MT were partially supported by NSF award AST-1514868. MT gratefully acknowledges support from the YCAA Prize Postdoctoral Fellowship. AP was supported by the Royal Society. This research is part of the Blue Waters sustained-petascale computing project supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993) and the state of Illinois. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. This work is also part of a PRAC allocation supported by the National Science Foundation (award number OCI-1144357). MV acknowledges funding from the European Research Council under the European Community's Seventh Framework Programme (FP7/2007-2013 Grant Agreement no. 614199, project `BLACK'). The analysis was done using the software packages Pynbody \citep{pynbody} and TANGOS \citep{tangos}. The authors thank Jillian Bellovary, Priyamvada Natarajan, and Angelo Ricarte for a thorough reading of the manuscript and useful discussion. \bibliographystyle{apj}
\section{Introduction} \label{sec:intro} Humans navigate the world through five senses: taste, touch, smell, hearing, and sight. We sometimes rely on a single sense and sometimes on several. For computer systems, the optical sensor is perhaps the most essential one, as it captures information much like human eyes do. Cameras are widely used for public safety and services in hospitals, shopping malls, streets, etc. On the other hand, booming use of other sensors is seen in many IoT applications due to the advances in wireless communications and MEMS. In this work, we would like to raise one fundamental question: how can we improve the perceptivity of computer systems by integrating multiple sensors? More specifically, we are interested in fusing video and inertial sensor data to achieve person identification (PID), as is shown in Fig.~\ref{fig:scene:labeled}. \begin{figure} \centering \subfigure[]{ \label{fig:scene:ZhongDing01} \begin{minipage}[b]{0.2\textwidth} \centering \includegraphics[width=1.4in]{./Figures/ZhongDing01.pdf} \end{minipage}} \subfigure[]{ \label{fig:scene:labeled} \begin{minipage}[b]{0.2\textwidth} \centering \includegraphics[width=1.4in]{./Figures/4639ID.pdf} \end{minipage}} \caption{Scenes where biological features are difficult to extract. \label{fig:scene}} \end{figure} Efficient PID is the first step toward surveillance, home security, person tracking, no-checkout supermarkets, and human-robot conversation. Traditional PID technologies are usually based on capturing biological features such as face, voice, teeth, fingerprint, DNA, and iris~\cite{FaceRecognition, tooth, iris}. However, these techniques require intimate user information, cumbersome registration and training processes, and user cooperation. Also, relying on optical sensors implies high environmental dependency (lighting, obstacles, resolution, viewing angle, etc.), making them unsuitable for many public sites.
A scene captured in a construction site is shown in Fig.~\ref{fig:scene:ZhongDing01}, where workers must wear helmets and masks to protect themselves from falling objects and toxic gases. A top view of a courtyard is shown in Fig.~\ref{fig:scene:labeled}. Clearly, recognizing biological features is difficult in such scenarios. Some other recognition approaches are based on wireless signals, but require active participation by users~\cite{WirelessRecognition01, WirelessRecognition02}. The ID-Match method proposed in~\cite{IDMatchRFID} integrates computer vision via depth camera and UHF RFID. It is capable of recognizing individuals walking in groups while wearing RFID tags, thus enabling human-robot interaction. However, this method is handicapped by short range, and all the users need to carry extra RFID tags. In this work, we propose a practical, effective and convenient PID system by combining computer vision and inertial sensor data, since almost everyone carries a smartphone and almost every smartphone has inertial sensors inside. The main workflow of our PID system is shown in Fig.~\ref{fig:IdentificationFlow}. From video data, a set $O = \{o_{1}, o_{2}, ...\}$ of human objects and their comparable features are retrieved. Similarly, from inertial sensors, a set $S = \{s_{1}, s_{2}, ...\}$ of inertial data and their comparable features are retrieved. Then, the similarity score of each $o_i$ and each $s_j$ is calculated. By analyzing all the similarity scores, the pairing between $O$ and $S$ is derived, which leads to the PID result. Inertial sensors are widely used to derive the carrier's motions, paths, and physical activities. They are standard modules in current smartphones. On the other hand, we can get motions, traces, and physical activities of people from videos. When people perform different types of activities, it is easy to pair an object with a sensor. But when people perform the same activity at the same time, identification becomes difficult.
Therefore, this work focuses on the situation where all the people in the camera view are walking. The contributions of this work are as follows. First, we develop a practical, low-cost, and robust PID system. Second, our solution integrates two types of popular sensors. Third, our matching method focuses on walking person identification (WPID) to show the robustness of a PID system that combines video and inertial sensor data. \begin{figure}[t] \centering \includegraphics[width=8.6cm]{./Figures/IdentificationFlow.pdf} \caption{Data fusion workflow. \label{fig:IdentificationFlow}} \end{figure} The rest of this paper is structured as follows. \Sec{sec:method} introduces our PID system and WPID method. Performance evaluation results are presented in \Sec{sec:simu}. Conclusions are drawn in \Sec{sec:conclu}. \section{PROPOSED WALKING PERSON IDENTIFICATION} \label{sec:method} We consider an environment in Fig.~\ref{fig:IdentificationFlow} with a video camera and multiple users. The data collected from both camera and smartphones is sent to a server for PID purposes. Our PID system has four software modules as shown in Fig.~\ref{fig:SystemArchitecture4}. The video feature extraction module retrieves human objects and walking traces from a sequence of video frames. The acceleration (Acc) feature extraction module retrieves walking information from acceleration data. The similarity scoring module compares the walking features from both data sources and assigns them similarity scores. The object-ID pairing module couples human objects with smartphones based on the similarity scores. \begin{figure}[t] \centering \includegraphics[width=7.0cm]{./Figures/SystemArchitecture4.pdf} \caption{Our PID architecture. \label{fig:SystemArchitecture4}} \end{figure} \subsection{Video Feature Extraction Module} The human object retrieval sub-module processes each frame to extract the objects that are recognized as human. It is directly realized by YOLO~\cite{YOLO, YOLO9000, ObjectDetection}.
For each frame, YOLO outputs a set $O$ of human objects represented by bounding boxes, and each bounding box is a rectangle inside which YOLO recognizes a human object. The $i$th bounding box of $O$ is denoted by $o_{i}$ and its center, width, and height are denoted by $o_{i}.c$, $o_{i}.w$, and $o_{i}.h$, respectively. Examples are shown in Fig.~\ref{fig:boxforwalking}. The trace-finding sub-module connects the human objects of adjacent video frames to form continuous traces, where a trace is a sequence of human objects that are regarded as the same person. Efficient object tracking algorithms are available in~\cite{Bewley2016_sort, 7045866, 6909555, Yang, 7410891}, but we design a lightweight tracing method based on movement limitation. Generally, human running speed is less than $15$ km/h. Assuming a frame rate of $30$ frames per second (fps), in most cases, a person cannot move over $0.1$ of their height between two consecutive frames. Based on this assumption, each trace has a \textit{search range} to find its human object in the next frame. The result is a set of traces connecting human objects across consecutive frames. \begin{figure}[t] \centering \includegraphics[width=8.6cm]{./Figures/boxforwalking.pdf} \caption{The changes of bounding boxes during walking. \label{fig:boxforwalking}} \end{figure} After trace-finding, the step feature extraction sub-module retrieves walking-related features from each trace. Fig.~\ref{fig:boxforwalking} shows two sequences of frames of two human objects. Suppose our camera has a downward viewing angle. Person 1 walks along a vertical line. When he steps forward, his bounding box becomes taller. When he brings his feet together, his bounding box becomes shorter. Person 2 walks along a horizontal line. When he steps forward, his bounding box becomes wider. When he brings his feet together, his bounding box narrows down. As a result, the changes of $o_{i}.h/o_{i}.w$ over time are regarded as step patterns. We use $t_{i}$ to denote the ratio-feature of the $i$th trace.
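The trace-finding and ratio-feature steps above can be sketched as follows; the box representation `(cx, cy, w, h)` and the greedy nearest-box matching are our own simplifications of the described search-range idea, not the released system:

```python
def extend_traces(traces, boxes):
    """Attach each new-frame box to the trace whose last box lies within
    the search range (0.1 of the person's height per frame, as assumed
    in the text); unmatched boxes start new traces."""
    unmatched = list(boxes)
    for trace in traces:
        cx, cy, w, h = trace[-1]
        best = None
        for b in unmatched:
            dist = ((b[0] - cx) ** 2 + (b[1] - cy) ** 2) ** 0.5
            if dist <= 0.1 * h and (best is None or dist < best[0]):
                best = (dist, b)
        if best is not None:
            trace.append(best[1])
            unmatched.remove(best[1])
    traces.extend([b] for b in unmatched)
    return traces

def ratio_feature(trace):
    """Per-frame height/width ratio t_i used as the step pattern."""
    return [h / w for (_, _, w, h) in trace]
```

For instance, a box that moves only a few centimetres between frames is appended to the existing trace, while a box far away spawns a new trace.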
Fig.~\ref{fig:steppatternextraction} shows the ratio-features extracted from two persons, who make $6$ and $5$ strides in $100$ frames, respectively. We also mark the ground truth of strides in the graph. As can be seen, the ratio-feature represents human step patterns well. \begin{figure}[t] \centering \includegraphics[width=8.6cm]{./Figures/5258_5358steppattern.pdf} \caption{Ratio-features of walking traces. \label{fig:steppatternextraction}} \end{figure} \subsection{Acc Feature Extraction Module} In this work, each user carries a smartphone with our application (app) installed; the phone can be put in a pocket or simply held in the hand. Our software only collects acceleration from the inertial sensor. Since activity recognition from inertial sensor data has been intensively studied, we simply adopt existing solutions~\cite{Senorinmobilephone, sensorbetter}. The sensor data $\hat{a_{i}}$ from the $i$th device is a sequence of acceleration magnitudes after removing direction. Since most of the energy captured by the accelerometer in association with human movements is below 15 Hz~\cite{mathie2003monitoring}, we remove the high-frequency components from $\hat{a_{i}}$. $\hat{a_{i}}$ is low-pass filtered by a $10$th order Butterworth filter with a 15 Hz cut-off frequency~\cite{SensorForStep}. Further, since our frame rate is 30 fps, the sampling frequency of $\hat{a_{i}}$ is decreased to 30 samples per second. After these steps, we get $a_{i}$ as the step feature. \subsection{Similarity Scoring Module} After retrieving step features from video data and sensor data, we want to answer the following question: How similar is ratio-feature sequence $t_{i}$ to Acc sequence $a_{j}$? The similarity between $t_{i}$ and $a_{j}$ is denoted by $Sim$. In this work, we try to match the extremum positions of two sequences while ignoring their exact values. First, we conduct an extremum detection to find local maximum/minimum points with a window of length $d$.
For example, when we set $d=10$, we traverse all the points and compare each with its $10$ most adjacent points. If the value of a point is larger/smaller than those of all its $10$ most adjacent points, the point is recorded as a maximum/minimum point. In our experiments, we set $d=10$. A maximum point is marked as $1$, a minimum point is marked as $-1$, and the rest are marked as $0$. This process transforms $t_{i}$ and $a_{j}$ into ternary sequences: $\widetilde{t_{i}}$ and $\widetilde{a_{j}}$. The similarity score between $\widetilde{t_{i}}$ and $\widetilde{a_{j}}$ is defined as: \begin{equation} \centering \label{eq:method2} Sim(\widetilde{t_{i}},\widetilde{a_{j}}) = \frac{n}{\sum_{x} dif(\widetilde{t_{i}}[x],\widetilde{a_{j}})}. \end{equation} $n$ is the number of extremums in $\widetilde{t_{i}}$ and $dif(\widetilde{t_{i}}[x],\widetilde{a_{j}})$ is defined as: \begin{equation} \centering \label{eq:dif} dif(\widetilde{t_{i}}[x],\widetilde{a_{j}}) =\left\{ \begin{array}{lr} 0, & \widetilde{t_{i}}[x]=0;\\ |y-x|, & \widetilde{t_{i}}[x] \neq 0~ and~ y~exists;\\ 1.5\times d,& otherwise.\\ \end{array} \right. \end{equation} Here, we scan each ternary value $\widetilde{t_{i}}[x]$ of $\widetilde{t_{i}}$. If $\widetilde{t_{i}}[x]=0$, then $dif(\widetilde{t_{i}}[x],\widetilde{a_{j}})$ returns $0$. If $\widetilde{t_{i}}[x] \neq 0$, $dif(\widetilde{t_{i}}[x],\widetilde{a_{j}})$ traverses $\widetilde{a_{j}}$ in the range $x-d$ to $x+d$. $y$ is the position in the search range nearest to $x$ with $\widetilde{a_{j}}[y] = \widetilde{t_{i}}[x]$. If such $y$ exists in the search range, $dif(\widetilde{t_{i}}[x],\widetilde{a_{j}})$ returns $|y-x|$; if not, $1.5\times d$ is returned. Dividing $n$ by the sum of these differences gives the similarity score between $\widetilde{t_{i}}$ and $\widetilde{a_{j}}$.
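A sketch of the extremum marking and scoring just defined; we interpret the window as $d$ neighbours on each side, and return infinity when every extremum aligns exactly (a corner case the definition leaves open), both of which are our assumptions:

```python
def ternary(seq, d=10):
    """Mark each strict local max as +1 and local min as -1, comparing
    against the d nearest neighbours on each side; everything else is 0."""
    out = [0] * len(seq)
    for x, v in enumerate(seq):
        nbrs = [seq[k] for k in range(max(0, x - d), min(len(seq), x + d + 1))
                if k != x]
        if nbrs and v > max(nbrs):
            out[x] = 1
        elif nbrs and v < min(nbrs):
            out[x] = -1
    return out

def similarity(t, a, d=10):
    """Sim between two ternary sequences: the number of extrema in t
    divided by the summed offsets to the nearest matching extremum in a
    (1.5 * d when no match exists within the window)."""
    n, total = 0, 0
    for x, mark in enumerate(t):
        if mark == 0:
            continue
        n += 1
        near = [abs(y - x) for y in range(max(0, x - d), min(len(a), x + d + 1))
                if a[y] == mark]
        total += min(near) if near else 1.5 * d
    return float('inf') if total == 0 else n / total
```

A perfectly phase-locked pair of sequences therefore scores arbitrarily high, while misaligned or missing extrema drag the score down.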
Let $P_{f}$ be the Object-ID pairing result until frame $f$. The pairing problem is now formulated as a linear sum assignment problem (LSAP)~\cite{assignment}: \begin{equation} \centering \label{eq:pairing} \max\sum_{i\in O}{\sum_{j\in S} sim_{ij}p_{ij}}, \end{equation} $sim_{ij}$ is the similarity score between the $i$th human object in $O$ and the $j$th sensor in $S$, and the assignment constraints are: \begin{equation*} \centering \label{eq:constraints} \begin{split} \sum_{i\in O}p_{ij} \leq 1 & ~~~~~~~~\forall j\in S, \\ \sum_{j\in S}p_{ij} \leq 1 & ~~~~~~~~\forall i\in O, \\ p_{ij}\in \{0,1\} & ~~~~~~~~\forall i\in O, j\in S. \end{split} \end{equation*} We use the Hungarian algorithm to solve this problem. $p_{ij} = 1$ means that human object $i$ is paired to sensor $j$; $p_{ij} = 0$ means that human object $i$ is not paired to sensor $j$. In our work, each frame $f$ can have a pairing result $P_{f}$, and we call the pairing result at this stage the \texttt{Raw Pair} stage result. However, in practice, the identification result is unstable if we rely only on the \texttt{Raw Pair} stage result. For example, we may identify a person as Sansa in one frame, as Jack in the next frame, and as Lucy in the frame after that. This problem, which is especially serious when the trace of a person is still short, makes the result noisy and hard to interpret. To address this, we propose a \texttt{Refined Pair} stage. In the \texttt{Refined Pair} stage, the identification result of a trace depends not just on $P_{f}$, but on $P_{1}$ through $P_{f}$. Let $RP_{f}$ be the result generated in the \texttt{Refined Pair} stage for frame $f$. Let $RSim$ be a two-dimensional array, and the value of $rsim_{ij}$ is the number of times that object $i$ has been paired to sensor $j$.
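Both the raw scores $sim_{ij}$ and the counts $rsim_{ij}$ just introduced feed a maximum-score assignment. A brute-force sketch for small instances (our illustration, assuming no fewer sensors than objects; in practice a Hungarian-algorithm implementation such as `scipy.optimize.linear_sum_assignment` scales far better):

```python
from itertools import permutations

def best_pairing(sim):
    """Maximize the total score over one-to-one object/sensor pairings.
    sim[i][j] is the score of pairing object i with sensor j; assumes
    len(sim) <= len(sim[0]) so every object gets a sensor."""
    n_obj, n_sen = len(sim), len(sim[0])
    best_total, best_pairs = float('-inf'), None
    for perm in permutations(range(n_sen), n_obj):
        total = sum(sim[i][j] for i, j in enumerate(perm))
        if total > best_total:
            best_total, best_pairs = total, list(enumerate(perm))
    return best_pairs
```

With two objects and two sensors, the pairing that maximizes the summed scores is returned as a list of `(object, sensor)` tuples.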
The refined pairing problem can be formulated as an LSAP: \begin{equation} \centering \label{eq:rpairing} \max\sum_{i\in O}\sum_{j\in S} rp_{ij} \log_{2}{(1+rsim_{ij})}, \end{equation} subject to: \begin{equation*} \centering \label{eq:rconstraints} \begin{split} \sum_{i\in O}rp_{ij} \leq 1 & ~~~~~~~~\forall j\in S, \\ \sum_{j\in S}rp_{ij} \leq 1 & ~~~~~~~~\forall i\in O, \\ rp_{ij}\in \{0,1\} & ~~~~~~~~\forall i\in O, j\in S. \end{split} \end{equation*} Different from $sim_{ij}$, $rsim_{ij}$ is the number of pairing times. $sim_{ij}$ is small, but $rsim_{ij}$ can be very large if the trace of human object $i$ is long. The logarithmic function, shown in Eq.~\ref{eq:rpairing}, is used to weaken the impact of the length of traces on pairing. As with $p_{ij}$, $rp_{ij} = 1$ means that human object $i$ is paired to sensor $j$; $rp_{ij} = 0$ means that human object $i$ is not paired to sensor $j$. \section{Performance Evaluation}\label{sec:simu} We have developed a prototype system with one video camera and multiple mobile devices. The camera is a Logitech webcam with a resolution of $640\times 480$. To prove that our solution is not device-dependent, we have tried different models of smartphones, including Redmi Note 4X, ASUS ZenFone 3, HTC 10 Evo. The server is a personal computer with an Intel(R) Core(TM) i7-3770 CPU and an NVIDIA GeForce GT 620 graphics card. All devices used in our system are synchronized by the same network time server. We conduct a number of experiments on our WPID method. The average speed of our tracing and WPID method at the different pairing stages is around 120 fps. Thus, our WPID method at both stages consumes only modest server resources. To show the robustness of our WPID method, experiments are carried out under different areas and viewing angles. A downward viewing angle in an outdoor area is set up as shown in Fig.~\ref{fig:variable:3}, and a horizontal viewing angle in an indoor area is set up as shown in Fig.~\ref{fig:variable:4}.
During our experiments, all the persons carry smartphones in their pockets or hands and wander around freely in their own styles. As shown in Fig.~\ref{fig:variable}, our WPID method can work under different areas, different viewing angles, different ways of carrying the smartphones, and different walking styles. The following statistics are all for the case of two persons, and the result for each condition is generated from at least 2000 continuous frames. \begin{figure} \centering \subfigure[]{ \label{fig:variable:3} \begin{minipage}[b]{0.2\textwidth} \centering \includegraphics[width=1.4in]{./Figures/3_EX.pdf} \end{minipage}} \subfigure[]{ \label{fig:variable:4} \begin{minipage}[b]{0.2\textwidth} \centering \includegraphics[width=1.4in]{./Figures/4_EX.pdf} \end{minipage}} \caption{Some correctly identified results under different viewing angles and different spaces. \label{fig:variable}} \end{figure} To measure the accuracy of our WPID method, let $O$ be the number of persons that have appeared in front of the camera up to the latest frame. Let $N^{ID}_{i}$ be the number of frames in which the $i$th person is identified by our program, and $N^{CD}_{i}$ be the number of frames in which the $i$th person is correctly identified by our program. We define the correct identification rate $R_{cd}$ as: \begin{equation} \centering \label{eq:ridrate} R_{cd} = \frac{\sum_{i=1}^{O}N^{CD}_{i}}{\sum_{i=1}^{O}N^{ID}_{i}}. \end{equation} Let $TL$ be the length of time that a person is continuously detected by YOLO. If $TL$ is too small, the extracted sequence is too short to be considered a step pattern. As a result, we set a threshold $TS$ for $TL$. Only when the lengths of two sequences are both larger than $TS$ do we run our matching process. By setting $TS$ from $0.33$ to $4$ seconds, Table~\ref{table:rid} shows the $R_{cd}$ at the two stages. From Table~\ref{table:rid}, increasing $TS$ increases $R_{cd}$ in most cases. However, the improvement is modest.
Also, the \texttt{Refined} stage achieves better performance than the \texttt{Raw} stage, especially in the visual stability of the results. \begin{table}[!t] \centering \caption{The correct identification rates with different $TS$.} \label{table:rid} \begin{tabular}{|c|c|c|c|c|c|}\hline \backslashbox{Stage}{TS/s} & 0.33\quad & 1\quad & 2\quad & 3\quad & 4 \\ \hline \texttt{Raw} & 0.69 & 0.74 & 0.75 & 0.75 & 0.75\\ \hline \texttt{Refined} & 0.71 & 0.74 & 0.76 & 0.76 & 0.76 \\ \hline \end{tabular} \end{table} \section{CONCLUSIONS}\label{sec:conclu} We propose a new PID system by combining optical and inertial sensors. We design a lightweight tracking algorithm, a WPID method, and two pairing stages. When people perform different activities, it is easy to identify persons by comparing behaviors extracted from video and inertial sensor data. Hence, the most challenging part of our system is identifying persons who perform the same activity at the same time. In this work, we design a WPID method to identify walking persons to show the robustness and potential of our PID system. We conduct extensive experiments to validate the above claims. Results show that the correct identification rate of our WPID method can reach 76\% within 2 seconds. \bibliographystyle{IEEEbib}
\section*{ACKNOWLEDGEMENT} CR and AC thank the U.S. Army International Technology Center Atlantic for financial support (Grant No. W911NF-14-1-0315). This work has partially been supported by the CNR-SPIN Seed Project No. B52F17001370005. AM acknowledges support from the ``Rita Levi Montalcini'' programme for the recruitment of young researchers. CR and EP acknowledge support from the project NANOPREPAINT - PAR FSC Abruzzo 2007-2013 - Linea di Azione I.1.1.a.
\section{Introduction}\label{sec:intro} Dense subgraph extraction lies at the core of graph data mining. The problem and its variants have been intensively studied. Most of the existing studies focus on finding the densest subgraph in one network. For example, polynomial time algorithms and efficient approximation algorithms are devised to find the subgraph with maximum average degree~\cite{goldberg1984finding}. There are also quadratic programming methods for extracting subgraphs with high graph affinity density~\cite{liu2013fast,pavan2007dominant}. In many real-world applications, there is often more than one kind of relation among the objects studied. Thus, it is common to have more than one graph describing the same set of objects, with each graph capturing one kind of relation. As a result, an interesting contrast data mining problem arises. Given two graphs sharing the same set of vertices, what is the subgraph such that the gap between its density in the two graphs is the largest? We call such a subgraph the \textbf{Density Contrast Subgraph} (\textbf{DCS}). \updates{To demonstrate the power of DCS, consider the task of surveying and summarizing the trends of an area, say data mining research. Such a task is practical and common for technical writers, academic researchers and graduate students, among many others. Based on a database of published data mining papers, how can we detect trends from the database automatically? Angel~\textit{et~al.}~\cite{angel2012dense} proposed to build a keyword association graph from the input text data, and identify stories/topics via groups of densely-connected keywords from it. For example, applying the method of~\cite{angel2012dense} on data mining papers we may find a topic ``scalable tensor factorization'', because the words ``scalable'', ``tensor'' and ``factorization'' often co-occur in papers.
However, directly extracting dense subgraphs corresponding to densely-connected keywords from a keyword association graph like~\cite{angel2012dense} may not help us detect trends effectively. For example, in our experiments, if we just extract dense subgraphs from the graph indicating pairwise keywords association strength in the titles of data mining papers published in the last 10 years, we find topics ``time series'' and ``feature selection''. But these two topics have been intensively investigated ever since and do not present a new trend. In the recent 10 years, according to the graph density measure on the data we have, the topic ``time series'' even cooled down a little bit.} \updates{To detect trends effectively, we take two keyword association graphs into consideration and apply DCS algorithms. Besides the keyword association graph based on papers published recently, we also need the other keyword association graph derived from the papers published in early years. Those groups of keywords whose connection strengths are much tighter in the recent keyword association graph than in the early keyword association graph are identified as trends in data mining. In our experiments we obtained results like ``social networks'', ``matrix factorization'' and ``unsupervised feature selection''. These topics all became popular only in recent years. } \updates{DCS can also be applied to detecting current anomalies against historical data. Specifically, we can build a weighted graph where the edge weights are our expectation of how tightly the vertices are connected to each other, which can be derived from, for example, historical data. Then, we observe the current pairwise connection strength of vertices, and build another weighted graph based on our observations. We apply DCS on these two weighted graphs. 
Some typical application scenarios include detecting emerging traffic hotspot clusters, emerging communities in social networks, and money launderer dark networks.} \nop{ One application of DCS is time-evolving graph analysis. For example, consider the co-author networks. If we have two co-author networks, one is the network with co-authorship in the most recent five years, and the other is the network with co-authorship more than five years ago. What is the emerging co-author group where the collaboration among the authors has been enhanced the most in the last five years? What is the disappearing co-author group whose collaboration has faded away the most in the past five years? Both questions can be answered by finding density contrast subgraphs in the two co-author networks, if we model the strength of collaboration among a group of authors as the density of their induced subgraph in a co-author network. Another interesting application of DCS is analyzing content-based social networks, where there are naturally two graphs about social network users. The first one is a social graph, where edges between users represent social ties. The other graph is a user topic similarity graph, which can be easily derived from the abundant user content data in the content-based social network. The edges represent the similarity between users' content. In general, two users who generate very similar content may not have to have strong social ties. At the same time, according to the homophily phenomenon, users tightly connected to each other socially may have a good chance to share some common content topics. By mining density contrast subgraphs in the two graphs, we can find outlier communities: a group of users whose content similarity connections are much denser than their connections in the social graph or a group of users who are socially densely connected but dissimilar to each other in their generated contents.
Those outlier communities may lead to business opportunities, such as friend recommendation in social networks. Clearly, finding dense subgraphs in one graph and then checking the density of the counterparts in the other graph is not effective since, due to the homophily phenomenon, the chance is high that the corresponding subgraphs in both graphs are dense simultaneously. } In this paper, we study the Density Contrast Subgraph problem under two widely adopted graph density measures, average degree and graph affinity. One may notice that for both density measures, we may form a ``difference'' graph, where the weight of each edge is the difference of this edge's weights in the two graphs. However, this does not mean that traditional densest subgraph extraction methods can be applied to find density contrast subgraphs. In traditional densest subgraph problems, edge weights are always positive. In the difference graph of the density contrast subgraph problem, we may have negative edge weights. The existence of negative edge weights changes the nature of densest subgraph finding substantially. For example, finding the densest subgraph with respect to average degree in a graph with only positive edge weights is polynomial time solvable~\cite{goldberg1984finding}, and has an efficient 2-approximation algorithm~\cite{charikar2000greedy}, whereas if the graph has negative edge weights, the problem becomes NP-hard and also hard to approximate, as will be proved in Section~\ref{sec:ad}. To tackle the Density Contrast Subgraph problem, we make several technical contributions. We prove the computational hardness of finding DCS under the two density measures of average degree and graph affinity. For the average degree measure, we also prove it is hard to approximate within a factor of $O(n^{1-\epsilon})$. An efficient $O(n)$-approximation algorithm is then developed to solve this problem.
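To make the difference graph concrete, a toy sketch in our own notation (graphs as dictionaries from edge tuples `(u, v)`, `u < v`, to weights); edges missing from one graph contribute weight 0, which is exactly how negative weights arise:

```python
def difference_graph(g_a, g_b):
    """Edge weights w_A(e) - w_B(e) over the union of the edge sets."""
    edges = set(g_a) | set(g_b)
    return {e: g_a.get(e, 0.0) - g_b.get(e, 0.0) for e in edges}

def average_degree(graph, nodes):
    """Average weighted degree of the subgraph induced by `nodes`:
    twice the total induced edge weight divided by |nodes|."""
    w = sum(wt for (u, v), wt in graph.items() if u in nodes and v in nodes)
    return 2.0 * w / len(nodes)
```

In the example below, the vertex set $\{1,2\}$ has a higher average degree in the difference graph than $\{1,2,3\}$, because the negative edge $(2,3)$ drags the average down; this is the effect that breaks the classical positive-weight algorithms.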
The DCS problem under the graph affinity measure is also NP-hard, and corresponds to a non-concave Quadratic Programming (QP) problem. For this problem, we first devise an efficient 2-Coordinate Descent algorithm that is guaranteed to converge to a KKT point. Based on the 2-Coordinate Descent algorithm, we give a constructive proof that edges with negative weights cannot appear in a DCS with respect to graph affinity. Using our construction, we can further improve a KKT point solution to a positive clique solution. A smart initialization heuristic is proposed to reduce the number of initializations for our iterative algorithm, which in experiments brings us speedups of 1--3 orders of magnitude. Extensive empirical studies are conducted to demonstrate the effectiveness and efficiency of our algorithms. The rest of the paper is organized as follows. We review the related work in Section~\ref{sec:related}. In Section~\ref{sec:pre}, we briefly introduce the two density measures used in our work, average degree and graph affinity, and formulate the Density Contrast Subgraph problem. In Section~\ref{sec:ad}, we give our solutions to the DCS problem under the measure of average degree. In Section~\ref{sec:ga}, we tackle the DCS problem under the graph affinity measure. We report the experimental results in Section~\ref{sec:exp} and conclude the paper in Section~\ref{sec:con}. \section{Related Work}\label{sec:related} Dense subgraph extraction is a key problem in both algorithmic graph theory and graph mining applications~\cite{tsourakakis2013denser, angel2012dense, mitzenmacher2015scalable, khuller2009finding, fratkin2006motifcut}. One of the most popular definitions of subgraph density is the average degree. Intensive studies have been conducted on finding a subgraph with the maximum average degree in one single graph~\cite{bhattacharya2015space, epasto2015efficient, goldberg1984finding, charikar2000greedy}. 
Goldberg~\cite{goldberg1984finding} first proposed a polynomial time algorithm based on maximum flow. Charikar~\cite{charikar2000greedy} described a simple greedy algorithm which has an approximation ratio of 2. Besides average degree, graph affinity, which is a quadratic function $\textbf{x}^\top A \textbf{x}$ of a subgraph embedding $\textbf{x} \in \triangle^n$, is also widely adopted as a measure of subgraph density~\cite{liu2013fast,pavan2007dominant,chu2015alid,wang2016tradeoffs}. Motzkin and Straus~\cite{motzkin1965maxima} proved that, for unweighted graphs, maximizing graph affinity is equivalent to finding the maximum clique in the graph. Pavan and Pelillo~\cite{pavan2007dominant} first proposed an algorithm based on replicator dynamics to find local maxima of the quadratic function $\textbf{x}^\top A \textbf{x}$ on the simplex $\triangle^n$. Liu~\textit{et~al.}~\cite{liu2013fast} proposed a highly efficient algorithm called SEA (see Appendix) to solve the problem, where the core idea is to use a shrink-and-expand strategy to accelerate the process of finding Karush-Kuhn-Tucker (KKT) points. Wang~\textit{et~al.}~\cite{wang2016tradeoffs} discussed the trade-off between the graph affinity density and subgraph size in extracting dense subgraphs. Please note that the existing work on maintaining dense subgraphs on temporal graphs~\cite{angel2012dense,bhattacharya2015space, epasto2015efficient, bahmani2012densest} cannot solve our problem, although two consecutive snapshots of a temporal graph can be regarded as a special case of the input to DCS. The studies in~\cite{angel2012dense, bhattacharya2015space, epasto2015efficient, bahmani2012densest} all extract dense subgraphs from the latest snapshot, where for a valid input there are no edges with negative weights. The algorithms in~\cite{bhattacharya2015space, epasto2015efficient} can only deal with unweighted graphs. 
In our problem, mining DCS from two graphs is equivalent to mining dense subgraphs from a ``difference graph'', which may have negative edge weights. We show in Section~\ref{sec:ad} that, when the density measure is average degree, the existence of negative edge weights makes our DCS problem NP-hard and hard to approximate. This is dramatically different from extracting the densest subgraph with respect to average degree~\cite{bhattacharya2015space, epasto2015efficient, bahmani2012densest}, which is polynomial time solvable and has an efficient 2-approximation algorithm. For the graph affinity density measure, \cite{angel2012dense} considers a general definition of subgraph density where edge density, the discrete version of graph affinity, is used as a special case. However, \cite{angel2012dense} maintains all subgraphs whose density is greater than a threshold, and only subgraphs with size (\#vertices) smaller than $N_{max}=5$ are considered. Thus, although mining DCS with respect to graph affinity can be reduced to finding the densest subgraph in one single graph (the ``difference'' graph), the techniques in~\cite{angel2012dense} still cannot be used. Mining dense subgraphs from multiple networks to find ``coherent'' dense subgraphs also attracts much research interest~\cite{hu2005mining,li2012pattern,kelley2005systematic,wu2016mining}. For example, Wu~\textit{et~al.}~\cite{wu2016mining} investigated the problem of finding a subgraph that is dense in one conceptual network and also connected in a physical network. All these studies focus on finding ``coherent'' dense subgraphs in multiple graphs. \nop{ Mining dense subgraphs from multiple networks also attracts much research interest. Hu~\textit{et~al.}~\cite{hu2005mining} and Li~\textit{et~al.}~\cite{li2012pattern} studied how to find coherent dense subgraphs whose edges are not only densely connected but also frequently occur in multiple gene co-expression networks. 
Finding co-dense subgraphs that exist in multiple gene co-expression or protein interaction networks were investigated in~\cite{kelley2005systematic,pei2005mining}. Recently, Wu~\textit{et~al.}~\cite{wu2016mining} investigated the problem of finding a subgraph that is dense in one conceptual network and also connected in a physical network. All these studies focus on finding ``coherent'' dense subgraphs in multiple graphs. Finding subgraphs that contrast in graph density measures from multiple graphs remains untouched.} \nop{ Another line of related research is contrast data mining, which aims at discovering patterns and models that manifest drastic differences between data sets. Dong and Li~\cite{dong1999efficient} introduced emerging patterns to capture useful contrasts between data classes. Ting and Bailey~\cite{ting2006mining} proposed algorithms to find the minimal contrast subgraph, which is a graph pattern appears in one graph but not in the other graph, and all of its proper subgraphs are either shared by or not contained in the two graphs. Yang~\textit{et~al.}~\cite{yang2014mining} studied the problem of detecting the most frequently changing subgraph in two consecutive snapshots of a time-evolving graph. The difference between~\cite{yang2014mining} and our work is that, rather than subgraph density, \cite{yang2014mining} adopts the maximum number of independent paths (maximum flow) to measure how much a subgraph changes in two consecutive snapshots. } Another line of related research is contrast graph mining, which aims at discovering subgraphs that manifest drastic differences between graphs. 
\updates{Wang~\textit{et~al.}~\cite{wang2008spatial} and Gionis~\textit{et~al.}~\cite{gionis2015bump} studied how to find anomalous subgraphs that contrast with the rest of a single graph.} Ting and Bailey~\cite{ting2006mining} proposed algorithms to find the minimal contrast subgraph, which is a graph pattern that appears in one graph but not in the other graph, such that all of its proper subgraphs are either shared by or not contained in the two graphs. Yang~\textit{et~al.}~\cite{yang2014mining} studied the problem of detecting the most frequently changing subgraph in two consecutive snapshots of a time-evolving graph. The major difference between these studies and our work is that none of them adopts subgraph density as the measure for mining contrast subgraphs. \nop{ Cadena~\textit{et~al.}~\cite{cadena2016dense} investigated how to extract the subgraph whose total edge weight deviates from its expected total edge weight the most, which is the work closest to ours in literature. Although the total edge weight of a subgraph is related to our density measures, average degree and graph affinity\footnote{The total edge weight of a subgraph is the numerator of this subgraph's average degree and edge density, which is often regarded as the discrete version of graph affinity~\cite{liu2013fast,wang2016tradeoffs}.}, these three measures are still quite different from each other. Thus, properties of and solutions to the problem in~\cite{cadena2016dense} and our problems are very different. } \updates{The work closest to ours in the literature is~\cite{cadena2016dense}. In~\cite{cadena2016dense}, Cadena~\textit{et~al.} investigated how to extract the subgraph whose total edge weight deviates from its expected total edge weight the most. 
The total edge weight is related to the density measures adopted in our work, average degree and graph affinity, since the total edge weight of a subgraph is the numerator of this subgraph's average degree and edge density, which is often regarded as the discrete version of graph affinity~\cite{liu2013fast,pavan2007dominant,wang2016tradeoffs}. However, these three measures are still quite different from each other. Thus, properties of the problem in~\cite{cadena2016dense} and our problems are very different. } \section{Preliminaries}\label{sec:pre} In this section, we introduce several essential concepts in our discussion and formulate the Density Contrast Subgraph problem. For readers' convenience, Table~\ref{tab:notation} lists the frequently used notations. \begin{table}[t] \centering \begin{tabular}{|p{30mm}|p{50mm}|} \hline Notation & Description \\ \hline $G=\langle V,E,A\rangle$ & An undirected and weighted graph, where each edge $(u,v) \in E$ is associated with a positive weight $A(u,v)$\\ \hline $G(S)=\langle V,E(S),A(S) \rangle$ & The induced subgraph of $S$ in graph $G$ \\ \hline $W(S)$ & The total degree of $S$ in graph $G$. $W(S)=\sum_{(u,v) \in E(S)}{A(u,v)}$ \\ \hline $\triangle^n$ & A simplex. $\triangle^n=\{\textbf{x} \mid \sum_{i=1}^n{x_i}=1, x_i \geq 0\}$ \\ \hline $\textbf{x} \in \triangle^n$ & An embedding of a subgraph, $x_u$ denotes the participation of $u$ in this subgraph\\ \hline $S_{\textbf{x}}$ & Support set of $\textbf{x}$. $S_{\textbf{x}}=\{u \mid x_u > 0\}$\\ \hline $G_1=\langle V,E_1,A_1 \rangle$, $G_2=\langle V,E_2,A_2 \rangle$ & Inputs of our Density Contrast Subgraph problem \\ \hline $G_D=\langle V,E_D,D \rangle$ & The difference graph between $G_2$ and $G_1$, where $D=A_2-A_1$ and $E_D=\{(u,v) \mid D(u,v) \neq 0 \}$\\ \hline $G_{D^+}=\langle V,E_{D^+},D^+ \rangle$ & The ``positive'' part of $G_D$, where $D^+(i,j)=\max\{D(i,j),0\}$ and $E_{D^+}=\{(u,v) \mid D(u,v) > 0 \}$\\ \hline $N_D(i)$ & The set of $i$'s neighbors in $G_D$. 
$N_D(i)=\{j \mid D(i,j) \neq 0\}$ \\ \hline $W_D(i;G_D(S))$ & The degree of vertex $i$ in the induced subgraph $G_D(S)$. $W_D(i;G_D(S))=\sum_{j \in N_D(i) \cap S}{D(i,j)}$ \\ \hline \end{tabular} \caption{Frequently used notations.} \label{tab:notation} \end{table} \subsection{Measures of Graph Density} An undirected and weighted graph is represented by $G=\langle V,E,A \rangle$, where $V$ is a set of vertices, $E$ is a set of edges and $A$ is an affinity matrix. Since $G$ is undirected, if $(u,v) \in E$ then $(v,u) \in E$. Denote by $n=|V|$ the number of vertices and $m=|E|$ the number of edges. $A$ is an $n \times n$ symmetric matrix. The entry $A(u,v)>0$ denotes the weight of the edge $(u,v)$, and $A(u,v)=0$ if $(u,v) \notin E$. Given an undirected graph $G=\langle V,E,A \rangle$ and a subset of vertices $S$, the induced subgraph of $S$ is denoted by $G(S)=\langle S,E(S),A(S) \rangle$, where $E(S)=\{(u,v) \mid (u,v) \in E \wedge u \in S \wedge v \in S\}$, and $A(S)$ is the submatrix of $A$ in which only the rows and columns of vertices in $S$ are present. Average degree is a widely investigated graph density measure. Given an undirected graph $G=\langle V,E,A \rangle$ and a set of vertices $S$, the total degree of the induced subgraph $G(S)$ is $W(S)=\sum_{(u,v) \in E(S)}{A(u,v)}$. The \textbf{average degree} of the induced subgraph $G(S)$ is defined by \begin{equation}\label{eq:ad} \rho(S)=\frac{W(S)}{|S|}=\frac{1}{|S|}\sum_{u \in S}{\sum_{(u,v) \in E(S)}{A(u,v)}}=\frac{1}{|S|}\sum_{u \in S}{W(u;G(S))} \end{equation} where $W(u;G(S))=\sum_{(u,v) \in E(S)}{A(u,v)}$ is $u$'s degree in $G(S)$. Graph affinity is another popularly adopted graph density measure. In graph affinity, a subgraph is represented by a \textbf{subgraph embedding} in the standard simplex $\triangle^n=\{\textbf{x} \mid \sum_{i=1}^n{x_i}=1, x_i \geq 0\}$. For a subgraph embedding $\textbf{x}=[x_1,x_2,...,x_n]$, the entry $x_u$ indicates the participation importance of vertex $u$ in the subgraph. 
Denote by $S_{\textbf{x}}=\{u \mid x_u > 0\}$ the \textbf{support set} of $\textbf{x}$. The \textbf{graph affinity} density of a subgraph embedding $\textbf{x} \in \triangle^n$ is defined by \begin{equation}\label{eq:ga} f(\textbf{x})=\textbf{x}^{\top}A\textbf{x}=\sum_{i=1}^{n}\sum_{j=1}^n{x_ix_jA(i,j)}=\sum_{(u,v) \in E}{x_ux_vA(u,v)} \end{equation} In traditional dense subgraph mining problems, when the density measure is average degree, the densest subgraph is often large in size~\cite{angel2012dense}, while if the density measure is graph affinity, the support set of the densest subgraph embedding is normally small~\cite{wang2016tradeoffs}. \subsection{Mining Density Contrast Subgraph} Given two undirected graphs $G_1=\langle V,E_1,A_1 \rangle$ and $G_2=\langle V,E_2,A_2 \rangle$, we want to find a subgraph such that its density in $G_2$ minus its density in $G_1$ is large. Similar to traditional dense subgraph mining, we are most interested in the subgraph whose density difference is the greatest among all subgraphs. Thus, the \textbf{Density Contrast Subgraph} (\textbf{DCS}) problem can be formulated as an optimization problem. Specifically, if the density measure is the average degree, the optimization problem is \begin{equation}\label{eq:max_ad} \max_{S \subseteq V}{\rho_2(S)-\rho_1(S)=\frac{W_2(S)}{|S|}-\frac{W_1(S)}{|S|}} \end{equation} where $W_1(S)$ and $W_2(S)$ are the total degrees of $S$ in $G_1$ and $G_2$, respectively. We call Eq.~\ref{eq:max_ad} the problem of \textbf{Density Contrast Subgraphs with respect to Average Degree} (\textbf{DCSAD}). If we adopt graph affinity as the density measure, the optimization problem then becomes \begin{equation}\label{eq:max_ga} \max_{\textbf{x} \in \triangle^n}{f_2(\textbf{x})-f_1(\textbf{x})=\textbf{x}^{\top}A_2\textbf{x}-\textbf{x}^{\top}A_1\textbf{x}} \end{equation} We call Eq.~\ref{eq:max_ga} the problem of \textbf{Density Contrast Subgraphs with respect to Graph Affinity} (\textbf{DCSGA}). 
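As a sanity check, both density measures are straightforward to compute directly from the affinity matrix. The following sketch uses a toy graph of our own (not an example from this paper); since $(u,v) \in E$ implies $(v,u) \in E$, summing over ordered pairs matches the convention for $W(S)$ above.

```python
# Toy undirected graph on 4 vertices: a symmetric affinity matrix, zero diagonal.
A = [[0, 2, 1, 0],
     [2, 0, 3, 0],
     [1, 3, 0, 1],
     [0, 0, 1, 0]]

def average_degree(A, S):
    """rho(S) = W(S)/|S|: total degree of the induced subgraph over |S|.
    Summing A[u][v] over ordered pairs in S counts each undirected edge
    in both directions, as in the paper's definition of E."""
    S = list(S)
    return sum(A[u][v] for u in S for v in S) / len(S)

def graph_affinity(A, x):
    """f(x) = x^T A x for an embedding x on the standard simplex."""
    n = len(x)
    return sum(x[i] * A[i][j] * x[j] for i in range(n) for j in range(n))

S = {0, 1, 2}
print(average_degree(A, S))          # edges (0,1),(0,2),(1,2) give 12/3 = 4.0
x = [1/3, 1/3, 1/3, 0.0]             # uniform embedding with support set S
print(graph_affinity(A, x))          # 12/9, roughly 1.333
```

On this instance the densest set under average degree is $\{0,1,2\}$, while the uniform embedding of the same set attains the quadratic-form density.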
\updates{Note that, similar to maximizing graph affinity in $G$, which is often used to maximize the edge density ($\frac{W(S)}{|S|^2}$)~\cite{wang2016tradeoffs}, we can also solve \textbf{DCSGA} to find a subgraph whose edge density gap between $G_2$ and $G_1$ ($\frac{W_D(S)}{|S|^2}$) is maximized. To convert the solution $\textbf{x}$ to a set of vertices $S$, we simply set $S=S_{\textbf{x}}$.} Note that to find the subgraph maximizing the absolute value of the density difference, besides solving Eq.~\ref{eq:max_ad} or Eq.~\ref{eq:max_ga}, we also need to solve $\max_{S \subseteq V}{\rho_1(S)-\rho_2(S)}$ or $\max_{\textbf{x} \in \triangle^n}{f_1(\textbf{x})-f_2(\textbf{x})}$. \begin{figure} \centering \includegraphics[width=.2\textwidth]{G.pdf} \caption{An Example of the Difference Graph} \label{fig:exp} \end{figure} A nice property shared by Eq.~\ref{eq:max_ad} and Eq.~\ref{eq:max_ga} is that the objective equals the density of $S$'s induced subgraph (or of $\textbf{x}$, if using graph affinity as density) in a ``difference graph'' between $G_2$ and $G_1$. Given $G_1=\langle V,E_1,A_1 \rangle$ and $G_2=\langle V,E_2,A_2 \rangle$, the \textbf{difference graph} $G_D=\langle V,E_D,D \rangle$ is the graph associated with the affinity matrix $D=A_2-A_1$. Thus, $E_D=\{(u,v) \mid D(u,v) \neq 0 \}$. We also define the graph that contains only the edges with positive weights as $G_{D^+}=\langle V, E_{D^+}, D^+ \rangle$, where $D^+(i,j)=\max\{D(i,j),0\}$ and $E_{D^+}=\{(u,v) \mid D(u,v) > 0 \}$. Fig.~\ref{fig:exp} gives an example of $G_1$, $G_2$, $G_D$ and $G_{D^+}$. It is easy to verify that Eq.~\ref{eq:max_ad} is equivalent to \begin{equation}\label{eq:DCS_ad} \max_{S \subseteq V}{\rho_D(S)=\frac{W_D(S)}{|S|}} \end{equation} where $W_D(S)$ is the total degree of $S$ in $G_D$. 
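The equivalence between Eq.~\ref{eq:max_ad} and Eq.~\ref{eq:DCS_ad} is easy to verify numerically. Below is a minimal sketch on a toy pair of graphs of our own (not the pair in Fig.~\ref{fig:exp}); note that the difference graph carries a negative edge weight.

```python
# Two toy graphs on the same vertex set, given as symmetric weight matrices.
A1 = [[0, 1, 0],
      [1, 0, 2],
      [0, 2, 0]]
A2 = [[0, 3, 1],
      [3, 0, 1],
      [1, 1, 0]]

n = len(A1)
# Difference graph G_D: D = A2 - A1. Entries may be negative,
# e.g. D[1][2] = 1 - 2 = -1 here.
D = [[A2[i][j] - A1[i][j] for j in range(n)] for i in range(n)]

def avg_degree(M, S):
    S = list(S)
    return sum(M[u][v] for u in S for v in S) / len(S)

S = {0, 1, 2}
lhs = avg_degree(A2, S) - avg_degree(A1, S)   # rho_2(S) - rho_1(S)
rhs = avg_degree(D, S)                        # rho_D(S) in the difference graph
assert abs(lhs - rhs) < 1e-12
print(rhs)                                    # 4/3: S gained density from G_1 to G_2
```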
Also, Eq.~\ref{eq:max_ga} is equivalent to \begin{equation}\label{eq:DCS_ga} \max_{\textbf{x} \in \triangle^n}{f_D(\textbf{x})=\textbf{x}^{\top}D\textbf{x}} \end{equation} The major difference between finding dense subgraphs in a difference graph $G_D$ and the traditional dense subgraph detection problems is that a difference graph may contain negative edge weights. In Sections~\ref{sec:ad} and~\ref{sec:ga} we analyze how negative edge weights affect the properties of, and the algorithms for, dense subgraph mining problems. Also, from Eq.~\ref{eq:DCS_ad} and Eq.~\ref{eq:DCS_ga} we can see that the optimal value is positive if and only if the matrix $D$ has at least one positive entry, that is, the difference graph has at least one edge with positive weight. If $D$ does not have positive entries, the optimal values of Eq.~\ref{eq:DCS_ad} and Eq.~\ref{eq:DCS_ga} are both 0, the optimal $S$ for Eq.~\ref{eq:DCS_ad} contains one single vertex, and the optimal $\textbf{x}$ for Eq.~\ref{eq:DCS_ga} has one entry that equals 1 and all other entries 0. \subsection{Why not a Density Ratio?} Instead of the density difference, why don't we consider the density ratio, i.e., $\frac{\rho_2(S)}{\rho_1(S)}$ or $\frac{f_2(\textbf{x})}{f_1(\textbf{x})}$, as the objective? The reason is that the density ratio sometimes is not well-defined or has trivial solutions. Consider a single vertex $u$ as a subgraph. Its densities in $G_1$ and $G_2$ are both 0, so the density ratio is the undefined expression $\frac{0}{0}$. Also, in Fig.~\ref{fig:exp}, the edge $(v_1,v_2)$ has density ratio $+\infty$ since it only appears in $G_2$ but not in $G_1$. \subsection{Generalization of the Difference Graph} In Sections~\ref{sec:ad} and~\ref{sec:ga} we introduce DCS finding algorithms that can take any weighted graph as input, where the weight of an edge can be positive or negative. 
Thus, the definition of the difference graph of $G_1=\langle V,E_1,A_1 \rangle$ and $G_2=\langle V,E_2,A_2 \rangle$ is not restricted to the graph whose affinity matrix is $A_2-A_1$. For example, we can set the difference graph to $G_D=\langle V,E_D,D=A_2-\alpha A_1\rangle$; maximizing $\rho_D(S)$ (or $f_D(\textbf{x})$) is then equivalent to finding $S$ (or $\textbf{x}$) such that $\rho_2(S) \geq \alpha \rho_1(S)$ (or $f_2(\textbf{x}) \geq \alpha f_1(\textbf{x})$) and $\rho_2(S)-\alpha \rho_1(S)$ (or $f_2(\textbf{x})-\alpha f_1(\textbf{x})$) is maximized. This is similar to the optimal $\alpha$-quasi-clique problem~\cite{tsourakakis2013denser}. Also, when one edge in the difference graph has a weight much heavier than all the other edges, such an edge by itself is very likely to be the optimal subgraph. To avoid this, for edges with too heavy weights in $G_D$, we can adjust their weights so that they are not too much heavier than the other edge weights in $G_D$. The extracted DCS then usually becomes larger in size. \section{DCS with respect to Average Degree}\label{sec:ad} In this section, we first explore some key properties of the \textbf{DCSAD} problem. Then, we devise an efficient greedy algorithm with \updates{a data-dependent} ratio. \subsection{Complexity and Approximability} Like the traditional dense subgraph discovery problem, \textbf{DCSAD} prefers ``connected'' subgraphs, of course, in the difference graph $G_D$. \begin{property}\label{prp:connect} Let $G_D$ be the difference graph of $G_1$ and $G_2$. For any $S \subseteq V$, if $G_D(S)$ is not a connected subgraph, then there exists a set $S' \subseteq S$ such that $G_D(S')$ is connected and the density difference $\rho_D(S') \geq \rho_D(S)$. \end{property} \begin{proof} Without loss of generality, we assume $G_D(S)$ has two connected components $G_D(S_1)$ and $G_D(S_2)$, where $S_1 \cup S_2=S$ and $S_1 \cap S_2=\emptyset$. 
Clearly, $W_D(S)=W_D(S_1)+W_D(S_2)$ because $G_D(S_1)$ and $G_D(S_2)$ are isolated from each other. So we have $\rho_D(S)=\frac{W_D(S)}{|S|}=\frac{|S_1|}{|S|}\rho_D(S_1)+\frac{|S_2|}{|S|}\rho_D(S_2)$, which means $\rho_D(S)$ is a convex combination of $\rho_D(S_1)$ and $\rho_D(S_2)$. Thus, $\rho_D(S) \leq \max\{\rho_D(S_1),\rho_D(S_2)\}$. \end{proof} Traditional dense subgraph discovery with respect to average degree can be solved in polynomial time~\cite{goldberg1984finding}, and has an efficient 2-approximation algorithm~\cite{charikar2000greedy}. Unfortunately, our problem does not have the same computational properties. \begin{theorem}\label{th:ad_hard} The \textbf{DCSAD} (Eq.~\ref{eq:DCS_ad}) problem is NP-hard. \end{theorem} \begin{proof} We prove this by a reduction from the maximum clique problem, which is known to be NP-hard. Given an instance of the maximum clique problem, which is an undirected and unweighted graph $G=\langle V,E \rangle$, we build two graphs $G_1$ and $G_2$ as the input of the \textbf{DCSAD} problem. Let $E_1=\{(u,v) \mid (u,v) \in V \times V \wedge u \neq v \wedge (u,v) \notin E\}$. We set $G_1=\langle V,E_1,A_1 \rangle$ and for every edge $(u,v) \in E_1$, we set the weight $A_1(u,v)=|E|+1$. We set $G_2=\langle V,E_2,A_2 \rangle$ where $E_2=E$. For every edge $(u,v) \in E_2$, we set the weight $A_2(u,v)=1$. Clearly, building $G_1$ and $G_2$ can be done in polynomial time w.r.t.\ the size of $G$. It is obvious that for any $S \subseteq V$, the density difference ${\frac{W_2(S)}{|S|}-\frac{W_1(S)}{|S|}} < 0$ if $G_1(S)$, the induced subgraph of $S$ in $G_1$, contains at least one edge in $E_1$. Thus, the optimal $S$ must satisfy that $G_1(S)$ does not contain any edge in $E_1$. By the definition of $E_1$, $G_2(S)$, the induced subgraph of $S$ in $G_2$, is then a clique. So the optimal density difference is $|S|-1$, where $S$ is the maximum clique in $G_2$. 
Because $G_2$ and $G$ are actually the same, the optimal density difference of $G_2$ and $G_1$ is at least $k-1$ if and only if $G$ contains a clique with at least $k$ vertices. Due to the NP-hardness of the maximum clique problem, the \textbf{DCSAD} problem is also NP-hard. \end{proof} The \textbf{DCSAD} problem is not only NP-hard but also hard to approximate under reasonable complexity assumptions. \begin{corollary}\label{th:ad_approx} Assuming \textbf{P}$\neq$\textbf{NP}, the \textbf{DCSAD} problem (Eq.~\ref{eq:DCS_ad}) cannot be approximated within $O(n^{1-\epsilon})$ for any $\epsilon > 0$. \end{corollary} \begin{proof} We reuse the reduction in the proof of Theorem~\ref{th:ad_hard}. We already proved that the optimal density difference is $k-1$, where $k$ is the size of the maximum clique in $G$. Also, it is easy to see that if a \textbf{DCSAD} algorithm returns a value $k'-1$ such that $\frac{k-1}{k'-1} \leq \beta$, there is a $k'$-clique in $G$. Since $k \geq k'$, $\frac{k}{k'} \leq \frac{k-1}{k'-1} \leq \beta$. Thus, if \textbf{DCSAD} can be approximated within $\beta$, so can the maximum clique problem. It is known that the maximum clique problem cannot be approximated within $O(n^{1-\epsilon})$ for any $\epsilon > 0$, assuming \textbf{P}$\neq$\textbf{NP}. Thus, if \textbf{P}$\neq$\textbf{NP}, the \textbf{DCSAD} problem (Eq.~\ref{eq:DCS_ad}) cannot be approximated within $O(n^{1-\epsilon})$ for any $\epsilon > 0$. \end{proof} \subsection{Greedy Algorithms} \nop{\todo{This section is hard to follow. It may be good if we can briefly describe the intuition of the major ideas informally before we jump into the detailed algorithms. Your algorithm names carry a prefix ``New''. One may wonder what this ``New'' means. You may want to either clarify or remove the word.}} Although \textbf{DCSAD} cannot be approximated within $O(n^{1-\epsilon})$, an $O(n)$ approximation is easy to achieve. 
We have two cases. \begin{enumerate} \item If there are no edges with positive weights in $G_D$, apparently any $S$ that contains only a single vertex is an optimal solution to the \textbf{DCSAD} problem, and the optimal density difference is 0. \item If $G_D$ has at least one edge with positive weight, $S=\{u,v\}$ is an $O(n)$-approximation solution, where $(u,v)=\arg\max_{(u,v) \in E_D}{D(u,v)}$. The reason is as follows. For any $S' \subseteq V$, $\rho_D(S')$ must be no greater than the density of an $n$-clique where every edge's weight is $D(u,v)$. Such an $n$-clique has density $(n-1)D(u,v)$. Note that $\rho_D(S)=D(u,v)$. Thus, $\frac{\max_{S' \subseteq V}{\rho_D(S')}}{\rho_D(S)} \leq n-1 = O(n)$. \end{enumerate} Utilizing the above results, and inspired by the greedy approximation algorithm (shown in Algorithm~\ref{alg:greedy}) for the traditional dense subgraph discovery problem~\cite{charikar2000greedy}, we devise an $O(n)$-approximation algorithm, the DCSGreedy algorithm (Algorithm~\ref{alg:ad}), which also has \updates{a data-dependent} ratio. The idea of Algorithm~\ref{alg:ad} is to generate multiple potentially good solutions and pick the best one. As discussed above, when $G_D$ has positively weighted edges, the edge $(u,v)$ with the maximum weight is a candidate solution since it is $\frac{1}{n-1}$-optimal. The Greedy algorithm may also generate a good solution, although for the \textbf{DCSAD} problem no approximation ratio better than $O(n^{1-\epsilon})$ can be guaranteed for any $\epsilon>0$. Thus, we run Algorithm~\ref{alg:greedy} on $G_D$ to generate $S_1$. We also run Algorithm~\ref{alg:greedy} on $G_{D^+}$ to get $S_2$, because not only may $S_2$ be a better solution, but $\rho_{D^+}(S_2)$, the average degree of $S_2$ in $G_{D^+}$, also helps us derive \updates{a data-dependent} ratio for Algorithm~\ref{alg:ad}, which will be shown in Theorem~\ref{th:online_ratio}. 
In line~\ref{line:cc} of Algorithm~\ref{alg:ad}, $CC_D(S)$ is the set of connected components of $G_D(S)$, where a connected component is represented by a set of vertices. Line~\ref{line:cc} is for refining the solution $S$ obtained at line~\ref{line:max} when $G_D(S)$ is not connected, since \textbf{DCSAD} prefers ``connected'' subgraphs. \begin{theorem}\label{th:online_ratio} The $S$ returned by Algorithm~\ref{alg:ad} has \updates{a data-dependent} ratio of $\frac{2\rho_{D^+}(S_2)}{\rho_D(S)}$, where $S_2$ is the set in line~\ref{line:greedy2} of Algorithm~\ref{alg:ad}. \end{theorem} \begin{proof} It is known that $\rho_{D^+}(S_2)$ is a 2-approximation of the maximum density in $G_{D^+}$~\cite{charikar2000greedy}. For any $S' \subseteq V$, clearly $\rho_D(S') \leq \rho_{D^+}(S')$. Thus, the maximum density in $G_D$ is at most $2\rho_{D^+}(S_2)$ and the $S$ returned by Algorithm~\ref{alg:ad} has \updates{a data-dependent} ratio of $\frac{2\rho_{D^+}(S_2)}{\rho_D(S)}$. \end{proof} \nop{ We analyze the time complexity of Algorithm~\ref{alg:ad}. Suppose $|V|=n$, $|E_1|=m_1$ and $|E_2|=m_2$. To build the difference graph $G_D$, we first sort adjacency lists of $G_1$ and $G_2$, which can be done in $O((m_1+m_2)\log{n}+n)$ time. Then for a vertex $u$, we use a merge sort to build its adjacency list in $G_D$ in $O(|N_1(u)|+|N_2(u)|)$ time, where $N_1(u)$ and $N_2(u)$ are the sets of $u$'s neighbors in $G_1$ and $G_2$, respectively. Thus, building $G_D$ can be finished in $O((m_1+m_2)\log{n}+n)$ time. Finding the maximum edge weight can be done in $O(m_1+m_2)$ time since $G_D$ has at most $m_1+m_2$ edges. For executing $\textbf{Greedy}(G_D)$, the bottleneck is Line~\ref{line:pick} of Algorithm~\ref{alg:greedy}, picking the vertex with smallest degree. Since $G_D$ is a weighted graph and edges weights are normally not uniform, the data structure in~\cite{charikar2000greedy} does not apply here. 
We adopt a segment tree~\cite{bentley1977solutions} to store the current degrees of vertices in $S_1$. The initialization of the segment tree takes $O(n\log{n})$ time. Then the vertex $i$ that has the smallest degree can be retrieved in $O(1)$ time. When removing $i$ from $S_1$, we update the degrees of the vertices still in $S_1$ that are affected by removing $i$, as well as the segment tree. This can be done in $O(|N_D(i)|\log{n})$ time, where $N_D(i)$ is the set of $i$'s neighbors in $G_D$. So $\textbf{Greedy}(G_D)$ can be finished in $O((m_1+m_2+n)\log{n})$ time. Apparently $\textbf{Greedy}(G_{D^+})$ can also be done in $O((m_1+m_2+n)\log{n})$ time because $G_{D^+}$ has fewer edges than $G_D$. Lines~\ref{line:cc_start} and~\ref{line:cc} obviously can be done in $O(m_1+m_2+n)$ time. Thus, in total Algorithm~\ref{alg:ad} can be efficiently implemented in $O((m_1+m_2+n)\log{n})$ time.} We analyze the time complexity of Algorithm~\ref{alg:ad}. Suppose $|V|=n$, $|E_1|=m_1$ and $|E_2|=m_2$. The difference graph $G_D$ can be built in $O((m_1+m_2)\log{n}+n)$ time, if we first sort the adjacency lists of $G_1$ and $G_2$, and then merge them to build $u$'s adjacency list in $G_D$ for each $u \in V$. Finding the maximum edge weight can be done in $O(m_1+m_2)$ time since $G_D$ has at most $m_1+m_2$ edges. Running the Greedy algorithm on a graph $G=\langle V,E,A \rangle$ can be finished in $O((|E|+|V|)\log{|V|})$ time, if we adopt a segment tree~\cite{bentley1977solutions} to store the current degrees of vertices in $S_1$. Thus, $\textbf{Greedy}(G_D)$ and $\textbf{Greedy}(G_{D^+})$ together can be done in $O((m_1+m_2+n)\log{n})$ time. Lines~\ref{line:cc_start} and~\ref{line:cc} obviously can be done in $O(m_1+m_2+n)$ time. Thus, in total Algorithm~\ref{alg:ad} can be efficiently implemented in $O((m_1+m_2+n)\log{n})$ time. 
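For illustration, the peeling procedure of Algorithm~\ref{alg:greedy} can be sketched in a few lines. This toy version (our own sketch, not the paper's implementation) uses a linear scan instead of a segment tree, so it runs in $O(n^2+nm)$ rather than the bound above, and it follows the paper's convention that $W(S)$ sums each undirected edge in both directions.

```python
def greedy_peel(adj):
    """Charikar-style peeling on a weighted graph that may carry negative
    edge weights: repeatedly drop the vertex of smallest weighted degree
    and return the best vertex set seen. adj: vertex -> {neighbor: weight}."""
    alive = set(adj)
    deg = {u: sum(adj[u].values()) for u in adj}
    total = sum(deg.values())              # W(S): each edge counted twice
    best_set, best_rho = set(alive), total / len(alive)
    while len(alive) > 1:
        i = min(alive, key=deg.get)        # vertex with the smallest degree
        alive.remove(i)
        total -= 2 * deg[i]                # i's incident edges leave W(S) twice
        for j, w in adj[i].items():
            if j in alive:
                deg[j] -= w
        rho = total / len(alive)
        if rho > best_rho:
            best_rho, best_set = rho, set(alive)
    return best_set, best_rho

# Toy difference graph: a heavy positive pair plus a negatively attached vertex.
adj = {'a': {'b': 5, 'c': -2},
       'b': {'a': 5},
       'c': {'a': -2}}
print(greedy_peel(adj))   # ({'a', 'b'}, 5.0): the negative vertex is peeled first
```

Note that the full DCSGreedy wrapper (the maximum-weight edge candidate, the separate run on $G_{D^+}$, and the connected-component refinement) is omitted here for brevity.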
\begin{algorithm}[t] \TitleOfAlgo{\textbf{Greedy}} \caption{Greedy Algorithm.} \label{alg:greedy} \KwIn{$G=\langle V,E,A \rangle$} \KwOut{$S$} \begin{algorithmic}[1] \STATE $S \leftarrow V$, $S_1 \leftarrow V$ \WHILE {$|S_1| \geq 1$} \IF {$\frac{W(S_1)}{|S_1|} > \frac{W(S)}{|S|}$} \STATE $S \leftarrow S_1$ \ENDIF \STATE $i \leftarrow \arg\min_{j \in S_1}{W(j;G(S_1))}$ \label{line:pick} \STATE $S_1 \leftarrow S_1 \setminus \{i\}$ \ENDWHILE \RETURN $S$ \end{algorithmic} \end{algorithm} \begin{algorithm}[t] \TitleOfAlgo{\textbf{DCSGreedy}} \caption{DCSGreedy algorithm for solving \textbf{DCSAD}.} \label{alg:ad} \KwIn{$G_1=\langle V,E_1,A_1 \rangle$, $G_2=\langle V,E_2,A_2 \rangle$} \KwOut{$S$, and \updates{a data-dependent} ratio $\beta$} \begin{algorithmic}[1] \STATE Build the difference graph $G_D=\langle V,E_D,D \rangle$ \IF {$G_D$ does not have edges with positive weights} \STATE Randomly pick a vertex $v$ \RETURN $S \leftarrow \{v\}$ \ENDIF \STATE $(u,v) \leftarrow \arg\max_{(u,v) \in E_D}{D(u,v)}$ \STATE $S \leftarrow \{u,v\}$, $S_1 \leftarrow \textbf{Greedy}(G_D)$ \label{line:greedy1}, $S_2 \leftarrow \textbf{Greedy}(G_{D^+})$ \label{line:greedy2} \STATE $S \leftarrow \arg\max_{S' \in \{S, S_1, S_2\}}\frac{W_D(S')}{|S'|}$ \label{line:max} \IF {$G_D(S)$ is not connected}\label{line:cc_start} \STATE $S \leftarrow \arg\max_{S' \in CC_D(S)}{\frac{W_D(S')}{|S'|}}$ \label{line:cc} \ENDIF \RETURN $S$ and $\beta \leftarrow \frac{2\rho_{D^+}(S_2)}{\rho_D(S)}$ \end{algorithmic} \end{algorithm} \section{DCS with respect to Graph Affinity}\label{sec:ga} In this section, we first explore several properties of the \textbf{DCSGA} problem. Then, we devise a Coordinate-Descent algorithm which is guaranteed to converge to a KKT point. We also propose a refinement step to further improve a KKT point solution. Since \textbf{DCSGA} is non-concave, normally we need multiple initializations to find a good solution. 
To reduce the number of initializations, we utilize a smart initialization heuristic. Combining the Coordinate-Descent algorithm, the refinement step and the smart initialization heuristic together, we obtain our NewSEA algorithm for the \textbf{DCSGA} problem. \subsection{Properties} We first show that, like the \textbf{DCSAD} problem, \textbf{DCSGA} also prefers connected subgraphs in the difference graph $G_D$. \begin{property} Let $G_D=\langle V,E_D,D \rangle$ be the difference graph of $G_1$ and $G_2$. For any $\textbf{x} \in \triangle^n$ such that $f_D(\textbf{x})=\textbf{x}^\top D \textbf{x} \geq 0$, if $G_D(S_{\textbf{x}})$ is not connected, where $S_{\textbf{x}}$ is the support set of $\textbf{x}$, then there exists $\textbf{x}'$ whose support set $S_{\textbf{x}'} \subseteq S_{\textbf{x}}$, such that $G_D(S_{\textbf{x}'})$ is connected and $f_D(\textbf{x}') \geq f_D(\textbf{x})$. \end{property} \begin{proof} Without loss of generality, we assume $G_D(S_{\textbf{x}})$ has two connected components $G_D(S_1)$ and $G_D(S_2)$, where $S_1 \cup S_2=S_{\textbf{x}}$ and $S_1 \cap S_2=\emptyset$. We decompose $\textbf{x}$ as $\textbf{x}=\textbf{y}+\textbf{z}$, where $S_{\textbf{y}}=S_1$ and $S_{\textbf{z}}=S_2$. Because $S_1$ and $S_2$ are two connected components in $G_D(S_{\textbf{x}})$, we have $\textbf{y}^\top D \textbf{z}=0$. Thus, $\textbf{x}^\top D \textbf{x}=(\textbf{y}+\textbf{z})^\top D (\textbf{y}+\textbf{z}) = \textbf{y}^\top D \textbf{y} + \textbf{z}^\top D \textbf{z}$. Let $\textbf{y}'=\frac{\textbf{y}}{|\textbf{y}|_1}$ and $\textbf{z}'=\frac{\textbf{z}}{|\textbf{z}|_1}$. Clearly, $\textbf{y}' \in \triangle^n$ and $\textbf{z}' \in \triangle^n$, so both $\textbf{y}'$ and $\textbf{z}'$ are subgraph embeddings. Thus, we have $f_D(\textbf{x})=\textbf{x}^\top D \textbf{x}=|\textbf{y}|_1^2f_D(\textbf{y}')+|\textbf{z}|_1^2f_D(\textbf{z}')$.
Since $\textbf{x}^\top D \textbf{x} \geq 0$ and both $|\textbf{y}|_1^2$ and $|\textbf{z}|_1^2$ are non-negative, $\max\{f_D(\textbf{y}'),f_D(\textbf{z}')\} \geq 0$. Also $|\textbf{y}|_1+|\textbf{z}|_1=|\textbf{x}|_1=1$, so $|\textbf{y}|_1^2+|\textbf{z}|_1^2 \leq 1$. We get that $f_D(\textbf{x}) \leq (|\textbf{y}|_1^2+|\textbf{z}|_1^2)\max\{f_D(\textbf{y}'),f_D(\textbf{z}')\} \leq \max\{f_D(\textbf{y}'),f_D(\textbf{z}')\}$. Taking $\textbf{x}'$ to be whichever of $\textbf{y}'$ and $\textbf{z}'$ attains the maximum gives a connected subgraph embedding with $f_D(\textbf{x}') \geq f_D(\textbf{x})$. \end{proof} \textbf{DCSGA} is a standard Quadratic Programming (QP) problem, and QP in general is NP-hard. We now prove that \textbf{DCSGA} itself is NP-hard. \begin{theorem} The \textbf{DCSGA} (Eq.~\ref{eq:DCS_ga}) problem is NP-hard. \end{theorem} \begin{proof} Consider an undirected and unweighted graph $G$ whose adjacency matrix is $A$, where the entries of $A$ are either 0 or 1. It is known that maximizing $\textbf{x}^\top A \textbf{x}$ s.t. $\textbf{x} \in \triangle^n$ is NP-hard, because the optimum is $1-\frac{1}{k}$, where $k$ is the size of the maximum clique of $G$~\cite{motzkin1965maxima}. Given an arbitrary undirected and unweighted graph $G$, we create a corresponding instance of the \textbf{DCSGA} problem by building $G_1$ as a graph without any edges and setting $G_2=G$. Clearly, for any $\textbf{x} \in \triangle^n$, we have $\textbf{x}^\top A \textbf{x}=\textbf{x}^\top D \textbf{x}$, where $D$ is the affinity matrix of the difference graph between $G_2$ and $G_1$. Thus, this simple reduction proves that the \textbf{DCSGA} problem is also NP-hard. \end{proof} \subsection{The SEACD Algorithm}\label{sec:seacd} Since $\textbf{DCSGA}$ is an NP-hard QP, we employ local search algorithms to find good solutions. Because the density difference $\textbf{x}^\top D \textbf{x}$ is normally non-concave, we seek $\textbf{x}$ that satisfies the Karush-Kuhn-Tucker (KKT) conditions~\cite{boyd2004convex}, which are necessary conditions for local maxima.
It is easy to derive that, if $\textbf{x}$ is a KKT point of the \textbf{DCSGA} problem, it should satisfy \begin{equation}\label{eq:KKT} \nabla_uf_D(\textbf{x})=2(D\textbf{x})_u \begin{cases} =\lambda~~x_u>0 \\ \leq \lambda~~x_u=0 \end{cases} \forall u \in V \end{equation} where $\nabla_uf_D(\textbf{x})$ is the partial derivative with respect to $x_u$, and $(D\textbf{x})_u$ is the $u$-th entry of the vector $D\textbf{x}$. Since $\textbf{x} \in \triangle^n$, when Eq.~\ref{eq:KKT} holds, we have $f_D(\textbf{x})=\sum_{u \in V}{x_u(D\textbf{x})_u}=\frac{\lambda}{2}$. The condition in Eq.~\ref{eq:KKT} is also equivalent to \begin{equation}\label{eq:KKT_eq} \max_{k:x_k<1}{\nabla_kf_D(\textbf{x})} \leq \min_{k:x_k>0}{\nabla_kf_D(\textbf{x})} \end{equation} The Shrink-and-Expansion (SEA\footnote{The details of the SEA algorithm are given in the Appendix.}) algorithm in~\cite{liu2013fast} utilizes a replicator dynamic to solve the problem that maximizes $\textbf{x}^\top A \textbf{x}$ s.t. $\textbf{x} \in \triangle^n$, where $A$ is the affinity matrix of an undirected graph. Although $D$ in Eq.~\ref{eq:DCS_ga} can also be regarded as an affinity matrix, unfortunately the SEA algorithm cannot be directly applied to our problem. This is because the replicator dynamic can only deal with non-negative matrices, while in our problem the matrix $D$ may have negative entries. Thus, we devise a 2-Coordinate Descent algorithm to solve Eq.~\ref{eq:DCS_ga}. In every iteration of the 2-Coordinate Descent algorithm, we pick only two variables $x_i$ and $x_j$, and fix the remaining $n-2$ variables. We adjust the values of $x_i$ and $x_j$ to increase the objective $f_D(\textbf{x})$ without violating the simplex constraint. Suppose $x_i+x_j=C$, and let $b_i=\sum_{a \in N_D(i),a \neq j}{D(a,i)x_a}$, $b_j=\sum_{a \in N_D(j),a \neq i}{D(a,j)x_a}$, where $N_D(i)$ is the set of $i$'s neighbors in $G_D$.
We adjust $x_i$ and $x_j$ by solving a simple optimization problem involving only one variable, since $x_j$ always equals $C-x_i$ when the remaining $n-2$ variables are fixed. Specifically, the optimization problem is \begin{equation}\label{eq:CD} \begin{split} &\text{max}~~~\frac{1}{2}f_D(\textbf{x})=g(x_i)=b_ix_i+b_j(C-x_i)+D(i,j)x_i(C-x_i)+Cnst \\ &\text{s.t.}~~~~0 \leq x_i \leq C \end{split} \end{equation} where $Cnst$ is a constant independent of $x_i$ and $x_j$. Eq.~\ref{eq:CD} can be solved analytically. There are two cases. \begin{enumerate} \item $D(i,j)=0$, which means $i$ and $j$ are not adjacent in the difference graph $G_D$. Then $g(x_i)=(b_i-b_j)x_i+b_jC+Cnst$. Obviously we should set $x_i=C$ if $b_i > b_j$, and set $x_i=0$ if $b_i < b_j$. We do not adjust $x_i$ or $x_j$ if $b_i=b_j$. \item $D(i,j) \neq 0$, which means $i$ and $j$ are adjacent in $G_D$. We have $g(x_i)=-D(i,j)x_i^2+Bx_i+b_jC+Cnst$, where $B=D(i,j)C+b_i-b_j$. Let $r=\frac{B}{2D(i,j)}$. If $0 \leq r \leq C$, we set $x_i=\arg\max_{x \in \{0,r,C\}}{g(x)}$. If $r<0$ or $r>C$, we set $x_i=\arg\max_{x \in \{0,C\}}{g(x)}$. \end{enumerate} To pick $x_i$ and $x_j$ for an iteration, we exploit the partial derivatives. We pick $i=\arg\max_{k:x_k<1}{\nabla_kf_D(\textbf{x})}$ and $j=\arg\min_{k:x_k>0}{\nabla_kf_D(\textbf{x})}$. If $\nabla_if_D(\textbf{x}) \leq \nabla_jf_D(\textbf{x})$, which means we have reached a KKT point, the algorithm stops. The 2-Coordinate Descent algorithm is guaranteed to converge to a stationary point, which is equivalent to a KKT point because the constraint $\textbf{x} \in \triangle^n$ in Eq.~\ref{eq:DCS_ga} is linear~\cite{boyd2004convex}. Picking $x_i$ and $x_j$ at the beginning of every iteration clearly can be done in $O(n)$ time, but $O(n)$ may still be too costly for large graphs. Thus, to further improve the efficiency of our algorithm, we adopt the strategy of the Shrink-and-Expansion algorithm.
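As an illustration, the closed-form solution of the two cases above can be written as a small Python helper; the function name and signature are ours, not the paper's.

```python
def best_xi(b_i, b_j, C, Dij, x_i_cur):
    """One-variable subproblem of the 2-Coordinate Descent step:
    maximize g(x) = b_i*x + b_j*(C - x) + Dij*x*(C - x) over 0 <= x <= C.

    x_i_cur is the current value of x_i, returned unchanged in the tie
    case.  Returns the new x_i; x_j is then C - x_i.
    """
    def g(x):
        return b_i * x + b_j * (C - x) + Dij * x * (C - x)

    if Dij == 0:
        # g is linear in x: move to the better endpoint; no change on ties
        if b_i > b_j:
            return C
        if b_i < b_j:
            return 0.0
        return x_i_cur
    # g(x) = -Dij*x^2 + B*x + const with B = Dij*C + b_i - b_j
    candidates = [0.0, C]
    r = (Dij * C + b_i - b_j) / (2.0 * Dij)  # stationary point of g
    if 0.0 <= r <= C:
        candidates.append(r)
    return max(candidates, key=g)
```

When $D(i,j)<0$, $g$ is strictly convex, so the maximum is always attained at an endpoint; including the stationary point $r$ among the candidates is then harmless.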
We define a \textbf{local KKT point} on $S \subseteq V$ as a point $\textbf{x} \in \triangle^n$ that satisfies the following conditions: \begin{equation}\label{eq:KKT_local} \begin{split} &x_u=0~~\text{if}~~u \notin S \\ &\nabla_uf_D(\textbf{x})=2(D\textbf{x})_u \begin{cases} =\lambda~~x_u>0 \\ \leq \lambda~~x_u=0\\ \end{cases} \forall u \in S \\ & \lambda=2f_D(\textbf{x}) \end{split} \end{equation} where the major difference from Eq.~\ref{eq:KKT} is that only the vertices in $S \subseteq V$ are considered. It is also equivalent to \begin{equation}\label{eq:KKT_local_eq} \max_{k \in S:x_k<1}{\nabla_kf_D(\textbf{x})} \leq \min_{k \in S:x_k>0}{\nabla_kf_D(\textbf{x})} \end{equation} The 2-Coordinate Descent algorithm is guaranteed to converge to a local KKT point on $S$ when we keep $x_u=0$ for every $u \notin S$ and involve $x_u$ in iterations only when $u \in S$. Algorithm~\ref{alg:seacd} shows our method. We start with an initial embedding $\textbf{x} \in \triangle^n$. Line~\ref{line:shrink} is the Shrink stage: after calling the 2-Coordinate Descent algorithm, the support set of $\textbf{x}$ may shrink because some originally positive entries $x_i$ may be set to 0. Line~\ref{line:expansion} is the start of the Expansion stage. We first enlarge $S$ by adding to $S$ the vertices whose partial derivatives are greater than $\lambda=2f_D(\textbf{x})$, and then perform exactly the same Expansion operation as the original SEA algorithm~\cite{liu2013fast} (see Appendix). If $Z$ in Line~\ref{line:expansion} is empty, the current $\textbf{x}$ is already a KKT point satisfying the conditions in Eq.~\ref{eq:KKT} and the SEA iterations stop.
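The stopping test of Eq.~\ref{eq:KKT_local_eq} amounts to comparing two extreme partial derivatives. Below is a minimal Python sketch, under the assumptions that $D$ is given as a dense list-of-lists matrix and that a small tolerance replaces exact equality; both are our simplifications, not the paper's implementation.

```python
def is_local_kkt(D, x, S, tol=1e-9):
    """Stopping test of the Shrink stage (Eq. KKT_local_eq): on the set S,
    the largest partial derivative over entries that can still grow must
    not exceed the smallest one over positive entries.

    D: symmetric affinity matrix (list of lists);
    x: point on the simplex whose support is contained in S.
    """
    n = len(x)
    grad = [2.0 * sum(D[u][v] * x[v] for v in range(n)) for u in range(n)]
    up = [grad[u] for u in S if x[u] < 1]    # entries that could increase
    down = [grad[u] for u in S if x[u] > 0]  # entries that could decrease
    if not up:  # x is concentrated on a single vertex of S
        return True
    return max(up) <= min(down) + tol
```

A production implementation would of course use the sparse adjacency structure of $G_D$ and maintain the gradient incrementally, as described in the complexity analysis below.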
\begin{algorithm}[t] \TitleOfAlgo{\textbf{SEACD}} \caption{Coordinate Descent SEA Algorithm.} \label{alg:seacd} \KwIn{$G_D$, an initial embedding $\textbf{x} \in \triangle^n$} \KwOut{$\textbf{x}$} \begin{algorithmic} \STATE $S \leftarrow S_{\textbf{x}}$ \WHILE {\textbf{true}} \STATE Use the 2-Coordinate Descent algorithm and take $\textbf{x}$ as the initial value to find a local KKT point $\textbf{x}^{new}$ on $S$ \label{line:shrink} \STATE $\textbf{x} \leftarrow \textbf{x}^{new}$ \STATE $S \leftarrow \{v \mid x_v > 0\}$, $\lambda \leftarrow 2f_D(\textbf{x})$ \STATE $Z \leftarrow \{i \mid \nabla_if_D(\textbf{x}) > \lambda, i \in V\}$ \label{line:expansion} \IF {$Z=\emptyset$} \STATE \textbf{break} \ENDIF \STATE Do the SEA Expansion operation on $S \cup Z$ to adjust $\textbf{x}$ \label{line:sea_expansion} \STATE $S \leftarrow S_{\textbf{x}}$ \ENDWHILE \label{line:sea_end} \RETURN $\textbf{x}$ \end{algorithmic} \end{algorithm} Like the original SEA algorithm~\cite{liu2013fast}, the SEACD algorithm converges to a KKT point. \begin{theorem}\label{th:converge} The SEACD algorithm (Algorithm~\ref{alg:seacd}) is guaranteed to converge to a KKT point. \end{theorem} We analyze the computational cost of Algorithm~\ref{alg:seacd}. It is worth noting that, to run Algorithm~\ref{alg:seacd} efficiently, the initial embedding $\textbf{x}$ should have a small support set, so that during the execution of Algorithm~\ref{alg:seacd}, $S$ and $Z$ in the while loop are normally small sets. In the Shrink stage, each iteration needs $O(|S|)$ time to pick $x_i$ and $x_j$, $O(1)$ time to adjust $x_i$ and $x_j$, and $O(|N_D(i)|+|N_D(j)|)$ time to update the partial derivatives of the vertices affected by adjusting $x_i$ or $x_j$. $S$ is usually a small set and $|N_D(i)|+|N_D(j)|$ is often a small number since real-world graphs are normally sparse. Thus, the cost of each iteration of the Shrink stage is low.
In Line~\ref{line:expansion} of the Expansion stage, we only need to check the partial derivatives of the vertices that have at least one neighbor in $S$, since the partial derivatives of all other vertices are $0$. Thus, the cost of Line~\ref{line:expansion} is $O(\sum_{v \in S}{|N_D(v)|})$. Line~\ref{line:sea_expansion} is the same as the Expansion operation of the SEA algorithm, whose cost is $O(\sum_{v \in S \cup Z}{|N_D(v)|})$~\cite{liu2013fast}. Since both $S$ and $Z$ are normally small sets, the cost of one SEA iteration (one Shrink stage + one Expansion stage) is low. \subsection{Refining a KKT Point Solution}\label{sec:refine} After a KKT point solution is reached, we may further improve the solution. We call a clique in $G_D$ a \textbf{positive clique} if all its edge weights are positive, and $\textbf{x} \in \triangle^n$ a \textbf{positive clique solution} if $G_D(S_{\textbf{x}})$ is a positive clique. Utilizing the 2-Coordinate Descent algorithm, we give a construction that refines a KKT point $\textbf{x}$ for which $G_D(S_{\textbf{x}})$ is not a positive clique into a better solution. \begin{theorem}\label{th:clique} For any KKT point $\textbf{x}$, let $S_{\textbf{x}}=\{v \mid x_v>0\}$. If $G_D(S_{\textbf{x}})$ is not a positive clique, we can find a $\textbf{y}$ such that $G_D(S_{\textbf{y}})$ is a positive clique and $f_D(\textbf{y}) \geq f_D(\textbf{x})$, where $S_{\textbf{y}}=\{v \mid y_v>0\}$ and $S_{\textbf{y}} \subseteq S_{\textbf{x}}$. \end{theorem} \begin{proof} Suppose $\textbf{x}$ is a KKT point and $G_D(S_{\textbf{x}})$ is not a positive clique. We pick $i$ and $j$ from $S_{\textbf{x}}$ such that $D(i,j) \leq 0$. If $D(i,j)=0$, since $\nabla_if_D(\textbf{x})=\nabla_jf_D(\textbf{x})$ (both equal $\lambda$ because $x_i>0$ and $x_j>0$), we have $(D\textbf{x})_i=(D\textbf{x})_j$, which means $\sum_{a \in N_D(i)}{D(a,i)x_a}=\sum_{a \in N_D(j)}{D(a,j)x_a}$. Thus, $b_i=b_j$ in Eq.~\ref{eq:CD} and we have $g(x_i)=b_jC+Cnst$. Note that $b_jC$ is independent of $x_i$ and $x_j$, as long as $x_i+x_j=C$.
We set $x_i=C$ and $x_j=0$ to remove vertex $j$ from the current subgraph, and the objective $f_D(\textbf{x})$ remains the same. If $D(i,j)<0$, we solve the optimization problem in Eq.~\ref{eq:CD}. Apparently $g(x_i)$ is a convex function of $x_i$ because $-D(i,j)>0$. To maximize the objective $g(x_i)$, we should set $x_i^{new}=\arg\max_{x \in \{0,C\}}{g(x)}$. Thus, after solving Eq.~\ref{eq:CD}, either $x_i$ or $x_j$ becomes 0 and the objective $f_D(\textbf{x})$ is improved. Therefore, if $G_D(S_{\textbf{x}})$ is not a positive clique, we can always remove one vertex $i$ (by setting $x_i=0$) that is incident to an edge with negative weight or is not adjacent to all other vertices in $S_{\textbf{x}}$, while keeping the objective non-decreasing. Suppose after removing this vertex we get $\textbf{y}$. We use the 2-Coordinate Descent algorithm to adjust $\textbf{y}$ to a local KKT point on $S_{\textbf{y}}$, and obviously the objective $f_D(\textbf{y})$ does not decrease. If $G_D(S_{\textbf{y}})$ is still not a positive clique, we repeat the above procedure of removing one vertex and adjusting to a local KKT point. During this process, the support set shrinks whenever the current solution is not a positive clique solution. Since the support set cannot shrink forever (it must contain at least one vertex), we eventually reach a positive clique solution $\textbf{y}$ such that $S_{\textbf{y}} \subseteq S_{\textbf{x}}$. Moreover, during the process of reaching $\textbf{y}$, the objective is non-decreasing. Thus, we have $f_D(\textbf{y}) \geq f_D(\textbf{x})$. \end{proof} Since an optimal $\textbf{x}$ must be a KKT point, Theorem~\ref{th:clique} implies that there exists a solution $\textbf{y} \in \triangle^n$ such that $\textbf{y}$ is an optimal solution to Eq.~\ref{eq:DCS_ga} and $G_D(S_{\textbf{y}})$ is a positive clique in $G_D$. Note that a positive clique in $G_D$ is a clique in $G_{D^+}$. Thus, we can run Algorithm~\ref{alg:seacd} directly on $G_{D^+}$ instead of $G_D$ to get a solution $\textbf{x}$. If $G_{D^+}(S_{\textbf{x}})$ is not a clique in $G_{D^+}$, we use the construction in the proof of Theorem~\ref{th:clique} to find a new solution $\textbf{y}$ whose $G_{D^+}(S_{\textbf{y}})$ is a clique. Algorithm~\ref{alg:refine} shows the construction, where we do not consider the case $D^+(i,j)<0$ since $D^+$ only has non-negative entries. Since the edges with negative weights can be ignored, it seems we could run the original SEA algorithm~\cite{liu2013fast} on $G_{D^+}$ directly to find the DCS. However, SEA in~\cite{liu2013fast} is not guaranteed to return a positive clique solution $\textbf{x}$. If $G_{D^+}(S_{\textbf{x}})$ is not a clique, $G_{D}(S_{\textbf{x}})$ may have some edges with negative weights and $\textbf{x}$ is definitely not an optimal solution. This is because, according to the proof of Theorem~\ref{th:clique}, if $D(i,j)<0$ where $x_i>0$ and $x_j>0$, we can solve the optimization problem in Eq.~\ref{eq:CD} over $x_i$ and $x_j$ to further improve the objective. Therefore, we still need our refinement step (Algorithm~\ref{alg:refine}).
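To make the construction concrete, the merging loop at the core of the refinement step can be sketched in Python as follows. The function name and the dictionary-based representation are ours, and the interleaved local-KKT adjustment of Algorithm~\ref{alg:refine} is omitted in this sketch.

```python
def merge_until_clique(adj_pos, y):
    """One ingredient of the Refinement step: while the support of y does
    not induce a clique in G_{D+}, pick a non-adjacent pair (u, v) in the
    support and merge v's weight into u (y_u += y_v, y_v = 0).

    adj_pos: dict mapping u to the set of u's neighbors in G_{D+};
    y: dict mapping vertices to simplex weights.  The local-KKT
    re-adjustment between merges is omitted here.
    """
    while True:
        supp = [u for u, w in y.items() if w > 0]
        # find a pair in the support that is not an edge of G_{D+}
        pair = next(((u, v) for i, u in enumerate(supp)
                     for v in supp[i + 1:] if v not in adj_pos[u]), None)
        if pair is None:
            return y  # supp(y) now induces a clique in G_{D+}
        u, v = pair
        y[u] += y[v]
        y[v] = 0.0
```

Each merge strictly shrinks the support, so the loop terminates after at most $|S_{\textbf{y}}|-1$ iterations; in the full algorithm, the 2-Coordinate Descent pass after each merge keeps the objective non-decreasing.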
\begin{algorithm}[t] \TitleOfAlgo{\textbf{Refinement}} \caption{Refining a KKT point.} \label{alg:refine} \KwIn{$G_{D^+}$, a KKT point $\textbf{x}$} \KwOut{$\textbf{y}$} \begin{algorithmic}[1] \STATE $\textbf{y} \leftarrow \textbf{x}$ \WHILE {$G_{D^+}(S_\textbf{y})$ is not a clique} \STATE Pick $u$ and $v$ such that $(u,v)$ is not an edge in $G_{D^+}$ \STATE $y_u \leftarrow y_u+y_v$, $y_v \leftarrow 0$ \STATE Use the 2-Coordinate Descent algorithm and take $\textbf{y}$ as the initial value to find a local KKT point $\textbf{y}^{new}$ on $S_{\textbf{y}}$ \STATE $\textbf{y} \leftarrow \textbf{y}^{new}$ \ENDWHILE \RETURN $\textbf{y}$ \end{algorithmic} \end{algorithm} Always returning a positive clique solution as the DCS is one advantage of adopting graph affinity as the density measure, since the returned DCS has very good interpretability: from $G_1$ to $G_2$, the connection between every pair of vertices in the DCS is enhanced. Please note that, although there exists an optimal solution $\textbf{x}$ such that $G_D(S_{\textbf{x}})$ is a positive clique, this does not mean that maximum clique finding algorithms like~\cite{rossi2014fast} can be applied to solve the \textbf{DCSGA} problem. The major reason is that $G_D$ in the \textbf{DCSGA} problem is a weighted graph, while maximum clique finding algorithms deal with unweighted graphs. \noindent \textbf{Advantages of the Coordinate-Descent SEA} With the help of the refinement step (Algorithm~\ref{alg:refine}), the original SEA algorithm~\cite{liu2013fast} works for the \textbf{DCSGA} problem. However, our Coordinate-Descent SEA algorithm has some advantages over the original SEA algorithm. The correctness of the Expansion operation (see Appendix) depends on a local KKT point being reached in the Shrink stage.
Thus, when implementing the Shrink stage, the correct convergence condition should be $\max_{k \in S:x_k<1}{\nabla_kf_D(\textbf{x})}-\min_{k \in S:x_k>0}{\nabla_kf_D(\textbf{x})} \leq \epsilon$, where $S$ is the set of vertices on which we try to find a local KKT point, and $\epsilon$ is the precision parameter. However, the original SEA~\cite{liu2013fast} adopts $f_D(\textbf{x})-f_D(\textbf{x}^{old}) \leq \epsilon$ as the convergence condition, where $\textbf{x}$ and $\textbf{x}^{old}$ are the solutions after and before a Shrink iteration of the replicator dynamic. In Section~\ref{sec:exp}, we will show that such a convergence condition may fail to reach a local KKT point and, as a result, the objective $f_D(\textbf{x})$ may even be reduced in the following Expansion stage. Moreover, even when the convergence condition of the Shrink stage is correctly set, the replicator dynamic of the original SEA~\cite{liu2013fast} converges much more slowly than the coordinate-descent method, especially on dense graphs. Since our algorithm can also deal with graphs that have only positive edge weights, it is also a competitive solution to the traditional graph affinity maximization problem. \subsection{Smart Initializations of $\textbf{x}$} One remaining problem is how to choose the initial embedding $\textbf{x}$ for running Algorithm~\ref{alg:seacd}. Since the \textbf{DCSGA} problem is non-concave, we adopt the strategy of multiple initializations, that is, we run Algorithm~\ref{alg:seacd} multiple times with different initial embeddings. The best solution generated in all runs is returned as the final solution. In the interest of efficiency of the SEACD algorithm, as discussed in Section~\ref{sec:seacd}, an initial embedding $\textbf{x}$ should have a small support set. One simple way of initialization is to set $\textbf{x}=\textbf{e}_u$, where in $\textbf{e}_u$, only the $u$-th entry is 1 and all other entries are 0.
The original SEA algorithm employs this simple method and uses every vertex $u \in V$ to set the initial embedding~\cite{liu2013fast}. Thus, in~\cite{liu2013fast}, the SEA algorithm is called $n=|V|$ times. For large graphs, $O(n)$ initializations are clearly very time-consuming. We adopt a smart heuristic to reduce the number of initializations. The major idea is to first find an upper bound $\mu_u$ for each $u \in V$, where $\mu_u$ bounds $\textbf{x}^{\top}D\textbf{x}$ for any $\textbf{x} \in \triangle^n$ such that $x_u>0$ and $G_{D^+}(S_{\textbf{x}})$ is a clique. Then we only use the vertices with large upper bounds for initializations. Define the \textbf{ego net} of $u$ in $G_{D^+}$ as $G_{D^+}(T_u)$, where $T_u$ is the set containing $u$ and all of $u$'s neighbors in $G_{D^+}$. Let $w_u=\max_{i \in T_u \vee j \in T_u}{D^+(i,j)}$, the maximum weight over edges with at least one endpoint in $T_u$. Clearly, $w_u$ is an upper bound of the maximum edge weight in $u$'s ego net. We compute $w_u$ for every $u \in V$ in $O(|E_{D^+}|)$ time. \begin{theorem}\label{th:bound} For any $u \in V$, $\textbf{x}^{\top}D\textbf{x} \leq \frac{(k-1)w_u}{k}$, where $\textbf{x} \in \triangle^n$ and $G_{D^+}(S_{\textbf{x}})$ is a $k$-clique containing $u$, and $w_u$ is an upper bound of the maximum edge weight in $G_{D^+}(T_u)$, the ego net of $u$ in $G_{D^+}$. \end{theorem} \begin{proof} Suppose for $\textbf{x} \in \triangle^n$, $G_{D^+}(S_{\textbf{x}})$ is a $k$-clique containing $u$. Thus, $\textbf{x}^{\top}D^+\textbf{x}=\sum_{(i,j) \in E_{D^+}(S_{\textbf{x}})}{x_ix_jD^+(i,j)}=\textbf{x}^{\top}D\textbf{x}$. Since $G_{D^+}(S_{\textbf{x}})$ is a $k$-clique containing $u$, for any $(i,j) \in E_{D^+}(S_{\textbf{x}})$, $D(i,j) \leq w_u$. Therefore, $\textbf{x}^{\top}D\textbf{x} \leq w_u\sum_{(i,j) \in E_{D^+}(S_{\textbf{x}})}{x_ix_j}$.
When $G_{D^+}(S_{\textbf{x}})$ is a $k$-clique, $\sum_{(i,j) \in E_{D^+}(S_{\textbf{x}})}{x_ix_j}=\left(\sum_{v \in S_{\textbf{x}}}{x_v}\right)^2-\sum_{v \in S_{\textbf{x}}}{x_v^2}=1-\sum_{v \in S_{\textbf{x}}}{x_v^2} \leq 1-\frac{1}{k}=\frac{k-1}{k}$, where the sum over $E_{D^+}(S_{\textbf{x}})$ counts both orientations of each edge, and the last inequality holds because $\sum_{v \in S_{\textbf{x}}}{x_v^2} \geq \frac{1}{k}$ by the Cauchy-Schwarz inequality, with equality when all $k$ entries equal $\frac{1}{k}$. Thus, $\textbf{x}^{\top}D\textbf{x} \leq \frac{(k-1)w_u}{k}$. \end{proof} Based on Theorem~\ref{th:bound}, assuming $k_u$ is the size of the maximum clique in $G_{D^+}$ that contains $u$, $\textbf{x}^{\top}D^{+}\textbf{x}$ is no more than $\frac{(k_u-1)w_u}{k_u}$, where $\textbf{x} \in \triangle^n$, $x_u>0$ and $G_{D^+}(S_{\textbf{x}})$ is a clique; this holds for cliques of any size $k \leq k_u$ because the bound $\frac{(k-1)w_u}{k}$ is non-decreasing in $k$ when $w_u \geq 0$. Although computing $k_u$ for every $u \in V$ is NP-hard, it is easy to find an upper bound of $k_u$, which is $\tau_u+1$, where $\tau_u$ is the core number of $u$ in $G_{D^+}$~\cite{rossi2014fast}. Thus, we use $\mu_u=\frac{\tau_uw_u}{\tau_u+1}$ as the upper bound of the affinity of a clique in $G_{D^+}$ that contains $u$. Note that computing $\tau_u$ for every $u \in V$ can be done in $O(|E_{D^+}|)$ time~\cite{rossi2014fast}. We sort all vertices in $V$ in descending order of $\mu_u$, and then use this order of vertices to initialize $\textbf{x}$. Suppose we have tried some vertices and $\textbf{y}$ is the current best solution. Then all vertices $v$ such that $\mu_v \leq f_D(\textbf{y})$ will not be used to initialize $\textbf{x}$. In such a case, normally we only need a small number of initializations. Note that when we use a vertex $u$ to initialize $\textbf{x}$, it is not guaranteed that, after running the SEACD algorithm and the Refinement algorithm, a solution $\textbf{x}$ with $x_u>0$ is returned; it is possible that $x_u=0$ in the returned solution. Thus, our method for reducing the number of initializations is not a pruning technique, but a heuristic. In Section~\ref{sec:exp} we show that our smart initialization heuristic is very effective and, in our experiments, it never impaired the quality of the final solution compared to trying all vertices for initialization.
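A minimal Python sketch of this initialization ordering is given below; it computes $w_u$, the core numbers $\tau_u$, and $\mu_u=\frac{\tau_u w_u}{\tau_u+1}$. The quadratic-time peeling is our simplification for brevity; a bucket queue yields the $O(|E_{D^+}|)$ bound cited above.

```python
def init_order(n, edges):
    """Order vertices by the upper bound mu_u = tau_u * w_u / (tau_u + 1).

    n: number of vertices; edges: list of (u, v, w) with w > 0, i.e. the
    edges of G_{D+}.  tau_u is u's core number, w_u an upper bound on the
    maximum edge weight in u's ego net.  Returns (order, mu), where order
    lists vertices by decreasing mu_u.
    """
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))

    # w_u: max weight over edges with at least one endpoint in T_u
    w_max = [max((w for _, w in adj[u]), default=0.0) for u in range(n)]
    w_ego = list(w_max)
    for u in range(n):
        for v, _ in adj[u]:
            w_ego[u] = max(w_ego[u], w_max[v])

    # core numbers via minimum-degree peeling (simple O(n^2) sketch)
    deg = [len(adj[u]) for u in range(n)]
    removed = [False] * n
    core = [0] * n
    k = 0
    for _ in range(n):
        u = min((v for v in range(n) if not removed[v]), key=lambda v: deg[v])
        k = max(k, deg[u])
        core[u] = k
        removed[u] = True
        for v, _ in adj[u]:
            if not removed[v]:
                deg[v] -= 1

    mu = [core[u] * w_ego[u] / (core[u] + 1) for u in range(n)]
    order = sorted(range(n), key=lambda u: -mu[u])
    return order, mu
```

The main loop of NewSEA then walks `order` and stops as soon as $\mu_u$ drops below the objective of the best solution found so far.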
\medskip Combining all results in this section, we propose the NewSEA algorithm shown in Algorithm~\ref{alg:new_sea}. \begin{algorithm}[t] \TitleOfAlgo{\textbf{NewSEA}} \caption{The NewSEA algorithm for solving \textbf{DCSGA}.} \label{alg:new_sea} \KwIn{$G_{D^+}$} \KwOut{$\textbf{y}$} \begin{algorithmic}[1] \STATE $\textbf{y} \leftarrow \textbf{0}$ \STATE Compute $w_u$, $\tau_u$ for every $u \in V$ \STATE Compute $\mu_u=\frac{\tau_uw_u}{\tau_u+1}$ for every $u \in V$ \STATE Sort $V$ in descending order of $\mu_u$ \FOR {$u \in V$} \IF {$\mu_u \leq f_D(\textbf{y})$} \STATE \textbf{break} \ENDIF \STATE Set $\textbf{x}$ such that $\textbf{x}_u=1$ and $\textbf{x}_v=0$ for all $v \neq u$ \STATE $\textbf{x} \leftarrow \textbf{SEACD}(G_{D^+},\textbf{x})$ \STATE $\textbf{x} \leftarrow \textbf{Refinement}(G_{D^+},\textbf{x})$ \IF {$f_D(\textbf{x})> f_D(\textbf{y})$} \STATE $\textbf{y} \leftarrow \textbf{x}$ \ENDIF \ENDFOR \RETURN $\textbf{y}$ \end{algorithmic} \end{algorithm} \section{Experiments}\label{sec:exp} In this section, we report a series of experiments to verify the effectiveness and efficiency of our algorithms. \subsection{Algorithms and Datasets in Experiments} For the \textbf{DCSAD} problem, we tested our DCSGreedy algorithm, the Greedy algorithm on $G_D$ (denoted by \textbf{$G_D$ only}) and the Greedy algorithm on $G_{D^+}$ (denoted by \textbf{$G_{D^+}$ only}). For the \textbf{DCSGA} problem, we tested our NewSEA algorithm, our SEACD algorithm plus the Refinement step but without our smart initializations heuristic (denoted by \textbf{SEACD+Refine}), and the original SEA algorithm~\cite{liu2013fast} plus the Refinement step (denoted by \textbf{SEA+Refine}). All \textbf{DCSGA} algorithms were run on $G_{D^+}$ directly. 
The convergence condition of the Shrink stage in NewSEA and SEACD+Refine was set to $\max_{k \in S:x_k<1}{\nabla_kf_D(\textbf{x})}-\min_{k \in S:x_k>0}{\nabla_kf_D(\textbf{x})} \leq 10^{-2}\cdot\frac{1}{|S|}$, where $S$ is the current set on which we want to reach a local KKT point. This convergence condition is often too difficult to achieve for the replicator dynamic in the Shrink stage of SEA+Refine, because the replicator dynamic converges too slowly. Thus, for the Shrink stage in SEA+Refine, the convergence condition was that the improvement of the objective $f_D(\textbf{x})$ be less than $10^{-6}$ after one iteration. As pointed out in Section~\ref{sec:refine}, this convergence condition actually is not sufficient for reaching a local KKT point. Thus, in our experiments, the Shrink stage of SEA+Refine sometimes could not converge to a local KKT point and, as a result, errors occurred in the following Expansion stage: the objective $f_D(\textbf{x})$ was even reduced after expansion. We list the statistics of all data sets used in our experiments in Table~\ref{tab:gd_sta}. The ``Weighted'' setting means that we built $G_D$ as $G_2-G_1$ directly. In some graphs there are several edges with weights significantly greater than those of the other edges; these edges make the DCS with respect to graph affinity a very small subgraph, sometimes even a single edge. Thus, to limit the influence of this small number of overly heavy edges, we also tried the Discrete setting, where we assign the edges in $G_D$ discrete weights such that the maximum weight is not much greater than the other edge weights. Details of how edge weights are set in the Discrete setting and of the ``$G_D$ Type'' are given for each task in the rest of this section.
\begin{table*} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline Data & Setting & $G_D$ Type & n & $m^+$ & $m^-$ & Max $w$ & Min $w$ & Average $w$ \\ \hline DBLP & Weighted & Emerging & 22,572 & 61,703 & 61,551 & 46 & -100 & -0.015 \\ \hline DBLP & Weighted & Disappearing & 22,572 & 61,551 & 61,703 & 100 & -46 & 0.015 \\ \hline DBLP & Discrete & Emerging & 22,572 & 21,367 & 61,551 & 2 & -2 & -0.518 \\ \hline DBLP & Discrete & Disappearing & 22,572 & 61,551 & 21,367 & 2 & -2 & 0.518 \\ \hline DM & --- & Emerging & 9,890 & 140,705 & 67,541 & 1.988 & -5.997 & 0.0007 \\ \hline DM & --- & Disappearing & 9,890 & 67,541 & 140,705 & 5.997 & -1.988 & -0.0007 \\ \hline Wiki & --- & Consistent & 116,836 & 762,999 & 1,264,872 & 9.619 & -12.46 & -0.474 \\ \hline Wiki & --- & Conflicting & 116,836 & 1,264,872 & 762,999 & 12.46 & -9.619 & 0.474 \\ \hline Movie & --- & Interest$-$Social & 55,710 & 338,524 & 914,292 & 1 & -1 & -0.46 \\ \hline Movie & --- & Social$-$Interest & 55,710 & 914,292 & 338,524 & 1 & -1 & 0.46 \\ \hline Book & --- & Interest$-$Social & 55,710 & 124,027 & 918,925 & 1 & -1 & -0.762\\ \hline Book & --- & Social$-$Interest & 55,710 & 918,925 & 124,027 & 1 & -1 & 0.762\\ \hline DBLP-C & Weighted & --- & 1,282,461 & 2,538,746 & 2,359,487 & 400 & -186 & 0.188 \\ \hline DBLP-C & Discrete & --- & 1,282,461 & 2,538,746 & 2,359,487 & 2 & -2 & -0.013 \\ \hline Actor & Weighted & --- & 382,219 & 15,038,083 & 0 & 216 & 1 & 1.101\\ \hline Actor & Discrete & --- & 382,219 & 15,038,083 & 0 & 10 & 1 & 1.098\\ \hline \end{tabular} \caption{Statistics of the Difference Graphs in Experiments (\rm{$n$ represents \#vertices, $m^+$ is \#edges with positive weights and $m^-$ is \#edges with negative weights. ``Max $w$'' is the maximum edge weight while ``Min $w$'' is the minimum one. We also report the average edge weight in the column of ``Average $w$''. ``Setting'' and ``$G_D$ Type'' denote how the difference graph was built.
``$G_D$ Type'' denotes which graph is used as $G_1$ and which is used as $G_2$.} )} \label{tab:gd_sta} \end{table*} \subsection{Finding Emerging and Disappearing Co-author Groups} We applied DCS to find emerging/disappearing co-author groups from co-author networks. We adopted the DBLP dataset (\url{https://static.aminer.org/lab-datasets/citation/dblp.v8.tgz}) and extracted all papers published in the top conferences according to the CS Ranking website (\url{http://csrankings.org/}). Based on these papers, we built two co-author graphs. The first graph $G_1=\langle V,E_1,A_1 \rangle$ contains the co-authorships before the year 2010, and the second one $G_2=\langle V,E_2,A_2 \rangle$ contains the co-authorships from 2010 to 2016. For an edge linking two authors in a co-author graph, the weight is the number of papers written by the two authors together. To build the difference graph $G_D=\langle V,E_D,D \rangle$, we tried two settings, the Weighted setting and the Discrete setting. In the Weighted setting, we set $D(u,v)=A_2(u,v)-A_1(u,v)$, which is the standard setting of the DCS problem. In the Discrete setting, the entries of $D$ are set to discrete values. Specifically, if $A_2(u,v)-A_1(u,v) \geq 5$, which means $u$ and $v$ have at least 5 more co-authored papers in $G_2$ than in $G_1$, we set $D(u,v)=2$. If $2 \leq A_2(u,v)-A_1(u,v)<5$, we set $D(u,v)=1$. If $-4<A_2(u,v)-A_1(u,v)<0$, we set $D(u,v)=-1$. If $A_2(u,v)-A_1(u,v) \leq -4$, we set $D(u,v)=-2$. The two different settings of $G_D$ normally lead to different DCS. Running our DCS algorithms on the $G_D$ described above, under either the Weighted or the Discrete setting, what we find is the emerging co-author group whose strength (density) of collaboration was enhanced after 2010. Thus, the type of $G_D$ described above is called Emerging. We also wanted to mine the disappearing co-author group whose collaboration strength was weakened the most after 2010.
Therefore, we tried another type of $G_D$, the Disappearing $G_D$, which was obtained by flipping the sign of the weight of each edge in the Emerging $G_D$. It turned out that, under the same $G_D$ and the same density measure, all algorithms found the same group of authors. We list all co-author groups obtained in Table~\ref{tab:group}. If a group is found under the graph affinity measure, the weight (in the simplex) of each author is also given. We give a short note on the affiliation and research interests of each group. Table~\ref{tab:group_info} reports the groups found under different settings and density measures. For the average degree measure, we also report the approximation ratio $\frac{2\rho_{D^+}(S_2)}{\rho_D(S)}$. For each group, we report its density differences under the two measures. Note that, for $\textbf{x}$ under the graph affinity measure, its average degree is $\frac{W_D(S_{\textbf{x}})}{|S_{\textbf{x}}|}$. We also report the edge density difference, defined as $\frac{W_D(S)}{|S|^2}$, of each co-author group, since edge density can be regarded as a discrete version of graph affinity. The results show that the research topics of the emerging groups are machine learning and security, both of which are hot topics in recent years. As to the disappearing groups, compilers and software systems are relatively mature areas of computer science, and, regarding the three Japanese robotics research groups, Japanese researchers have recently been publishing fewer papers in international conferences than they did before. \begin{table*} \centering \begin{tabular}{|p{130mm}|c|} \hline List of Authors & Note \\ \hline Feiping Nie(0.4428), Heng Huang(0.462), Chris H. Q. Ding(0.0230), Hua Wang(0.0717) & UTA Machine Learning \\ \hline Lorrie Faith Cranor(0.1428), Nicolas Christin(0.1428), Blase Ur(0.1428), Richard Shay(0.1428), Saranga Komanduri(0.1428), Michelle L.
Mazurek(0.1428), Lujo Bauer(0.1428) & CMU Privacy \& Security \\ \hline Kensuke Harada, Kiyoshi Fujiwara, Fumio Kanehiro, Hirohisa Hirukawa, Shuuji Kajita, Kenji Kaneko & Japan Robotics 1 \\ \hline Toshio Fukuda(0.5), Fumihito Arai(0.5) & Japan Robotics 2 \\ \hline Fumio Kanehiro(0.1428), Shuuji Kajita(0.1428), Kenji Kaneko(0.1428), Kensuke Harada(0.1428), Kiyoshi Fujiwara(0.1428), Hirohisa Hirukawa(0.1428), Mitsuharu Morisawa(0.1428) & Japan Robotics 3 \\ \hline Monica S. Lam, Katherine A. Yelick, Alok N. Choudhary, Michael L. Scott, James C. Browne, Marina C. Chen, Rudolf Eigenmann, Dennis Gannon, Charles Koelbel, Wei Li 0015, Thomas J. LeBlanc, David A. Padua, Constantine D. Polychronopoulos, Sanjay Ranka, Ian T. Foster, Carl Kesselman, Geoffrey Fox, Tomasz Haupt, Allen D. Malony, Janice E. Cuny, Joel H. Saltz, Alan Sussman & Compiler \& Software System \\ \hline \end{tabular} \caption{Co-author Groups} \label{tab:group} \end{table*} \begin{table*} \centering \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|} \hline Setting & $G_D$ Type & Density & Co-author Group & \#Authors & \tabincell{c}{Positive\\Clique?} & \tabincell{c}{Ave. 
Degree\\Difference} & \tabincell{c}{Approx.\\Ratio} & \tabincell{c}{Graph Affinity\\Difference} & \tabincell{c}{Edge Density\\Difference ($\frac{W_D(S)}{|S|^2}$)} \\ \hline Weighted & Emerging & Average Degree & \tabincell{c}{UTA Machine\\Learning} & 4 & Yes & 81.5 & 2 & --- & 20.375 \\ \hline Weighted & Emerging & Graph Affinity & \tabincell{c}{UTA Machine\\Learning} & 4 & Yes & 81.5 & --- & 23.167 & 20.375 \\ \hline Weighted & Disappearing & Average Degree & \tabincell{c}{Japan\\Robotics 1} & 6 & Yes & 143 & 2 & --- & 23.833 \\ \hline Weighted & Disappearing & Graph Affinity & \tabincell{c}{Japan\\Robotics 2} & 2 & Yes & 50 & --- & 50 & 50 \\ \hline Discrete & Emerging & Average Degree & \tabincell{c}{CMU Privacy \\\& Security} & 7 & Yes & 12 & 2 & --- & 1.714 \\ \hline Discrete & Emerging & Graph Affinity & \tabincell{c}{CMU Privacy \\\& Security} & 7 & Yes & 12 & --- & 1.714 & 1.714 \\ \hline Discrete & Disappearing & Average Degree & \tabincell{c}{Compiler \&\\Software System} & 22 & Yes & 21.45 & 2 & --- & 0.975 \\ \hline Discrete & Disappearing & Graph Affinity & \tabincell{c}{Japan\\Robotics 3} & 8 & Yes & 14 & --- & 1.714 & 1.714\\ \hline \end{tabular} \caption{Information of Co-Author Groups} \label{tab:group_info} \end{table*} \subsection{\updates{Mining Emerging and Disappearing Data Mining Topics}} \updates{Using the same DBLP dataset, we extracted titles of papers published in some famous Data Mining venues including KDD, ICDM, SDM, PKDD, PAKDD, TKDE, TKDD and DMKD. Similar to~\cite{angel2012dense}, we built keyword association graphs from the paper titles. Unlike~\cite{angel2012dense}, we tried to identify emerging and disappearing data mining topics during 2008-2017, compared to the time period 1998-2007. Thus, we split all paper titles in two parts according to their publication years, and built two keyword association graphs $G_1$ (for 1998-2007) and $G_2$ (for 2008-2017). 
We removed all stop words and used the remaining words in these paper titles as keywords. The edge weights of $G_1$ and $G_2$ were set based on the pairwise co-occurrences of keywords as suggested by~\cite{angel2012dense}. Specifically, for an edge between two keywords, we set its weight as 100 times the percentage of paper titles containing both keywords. Statistics of the difference graphs can be found in Table~\ref{tab:gd_sta} (the DM dataset).} \updates{This time, again, all DCSGA algorithms found the same emerging topic \textbf{\{social (0.5), networks (0.5)\}} and the same disappearing topic \textbf{\{mining (0.12), association (0.44), rules (0.44)\}}. Our DCSGreedy algorithm for solving DCSAD also found the disappearing topic \{mining, association, rules\}. We skip the emerging topic w.r.t.\ the average degree measure, because DCSGreedy found a large set of 38 keywords, which lacks interpretability. Since a research topic/story often has only a few keywords, the graph affinity measure, which prefers small and densely connected subgraphs, is more appropriate for this task than the average degree. In~\cite{angel2012dense}, Angel~\textit{et~al.} also suggested using small and dense subgraphs for identifying stories in text data. } \updates{To further demonstrate the effectiveness of applying DCS to identifying emerging/disappearing research topics, we also display the top results returned by our SEACD+Refinement algorithm. Recall that this algorithm performs an initialization from every vertex in $G_D$ and returns multiple positive cliques in $G_D$. We removed the duplicate cliques and the cliques that are subgraphs of other found cliques. We list the top-5 positive cliques with the highest graph affinity difference found by SEACD+Refinement in Table~\ref{tab:DM_EmDis}. } \updates{From the results, we can see that our DCSGA algorithms are very effective.
Social networks, matrix factorization, semi-supervised learning and unsupervised feature selection all became hot topics only in recent years, and they were not that popular in earlier years. Moreover, due to demand from industry and the growth of computational power, large scale has become one of the most important concerns in data mining research. As for the disappearing topics, association rule mining, support vector machines, inductive logic programming and intrusion detection are all relatively mature research topics which were mainly investigated in earlier years. ``Knowledge discovery'' used to be a popularly adopted term when data mining arose as a research area.} \updates{In addition, we also report the top-5 topics in $G_1$ and $G_2$ in Table~\ref{tab:DM_G1G2}. Since the average degree measure prefers large subgraphs and is not well suited to identifying topics/stories, we do not report the top topics w.r.t.\ average degree. The aim of displaying such results is to show the necessity of applying DCS to find emerging/disappearing topics. If we mine emerging/disappearing topics in only one graph, as~\cite{angel2012dense} does, the results may not be effective. For example, if we only consider $G_2$ to mine emerging topics, we would find \{time (0.5), series (0.5)\} and \{feature (0.5), selection (0.5)\}. However, \{time (0.5), series (0.5)\} and \{feature (0.5), selection (0.5)\} were hot topics before 2008, so they were not emerging topics during 2008-2017.
The topic \{time (0.5), series (0.5)\} even cooled down in the last ten years, since its graph affinity density dropped from 1.185 (in $G_1$) to 1.049 (in $G_2$) according to our calculation.} \begin{table} \centering \begin{tabular}{|c|c|c|} \hline \multirow{2}*{Rank} & \multicolumn{2}{|c|}{Keyword Set/Topic} \\ \cline{2-3} & Emerging & Disappearing\\ \hline 1 & \{social (0.5), networks (0.5)\} & \tabincell{c}{\{mining (0.12), association (0.45),\\ rules (0.43)\}} \\ \hline 2 & \{large (0.5), scale (0.5)\} & \{knowledge (0.5), discovery (0.5)\} \\ \hline 3 & \{matrix (0.5), factorization (0.5)\} & \tabincell{c}{\{support (0.39), vector (0.38),\\ machines (0.23)\}} \\ \hline 4 & \tabincell{c}{\{semi (0.45), supervised (0.45),\\ learning (0.1)\}} & \tabincell{c}{\{logic (0.36), inductive (0.26),\\ programming (0.38)\}} \\ \hline 5 & \tabincell{c}{\{unsupervised (0.34), feature (0.29),\\ selection (0.27)\}} & \{intrusion (0.5), detection (0.5)\} \\ \hline \end{tabular} \caption{\updates{Top 5 Emerging/Disappearing Topics w.r.t.\ Graph Affinity}} \label{tab:DM_EmDis} \end{table} \begin{table} \centering \begin{tabular}{|c|c|c|} \hline \multirow{2}*{Rank} & \multicolumn{2}{|c|}{Keyword Set/Topic} \\ \cline{2-3} & $G_1$ (1998-2007) & $G_2$ (2008-2017)\\ \hline 1 & \{time (0.5), series (0.5)\} & \{social (0.5), networks (0.5)\} \\ \hline 2 & \tabincell{c}{\{support (0.41), vector (0.41),\\ machines (0.18)\}} & \{time (0.5), series (0.5)\} \\ \hline 3 & \{feature (0.5), selection (0.5)\} & \{large (0.5), scale (0.5)\} \\ \hline 4 & \{decision (0.5), trees (0.5)\} & \{feature (0.5), selection (0.5)\} \\ \hline 5 & \{nearest (0.5), neighbor (0.5)\} & \tabincell{c}{\{semi (0.46), supervised (0.47),\\ learning (0.07)\}} \\ \hline \end{tabular} \caption{\updates{Top 5 Topics w.r.t.\ Graph Affinity}} \label{tab:DM_G1G2} \end{table} \nop{ \begin{table} \centering \begin{tabular}{|c|c|c|c|c|} \hline \multirow{2}*{Rank} & \multicolumn{2}{|c|}{$G_1$ (1998-2007)} &
\multicolumn{2}{|c|}{$G_2$ (2008-2017)} \\ \cline{2-5} & Keywords Set/Topic & \tabincell{c}{Graph\\Affinity} & Keywords Set/Topic & \tabincell{c}{Graph\\Affinity} \\ \hline 1 & \{time (0.5), series (0.5)\} & 1.00 & \{social (0.5), networks (0.5)\} & 1.00 \\ \hline 2 & \tabincell{c}{\{support (0.41), vector (0.41),\\ machines (0.18)\}} & 1.00 & \{time (0.5), series (0.5)\} & 1.00 \\ \hline 3 & \{feature (0.5), selection (0.5)\} & 1.00 & \{large (0.5), scale (0.5)\} & 1.00 \\ \hline 4 & \{decision (0.5), trees (0.5)\} & 1.00 & \{feature (0.5), selection (0.5)\} & 1.00 \\ \hline 5 & \{nearest (0.5), neighbor (0.5)\} & 1.00 & \tabincell{c}{\{semi (0.46), supervised (0.47),\\ learning (0.07)\}} & 1.00 \\ \hline \end{tabular} \caption{\updates{Top 5 Topics w.r.t.\ Graph Affinity}} \label{tab:DM_G1G2} \end{table} } \subsection{Efficiency Comparison} Due to space limitations, we focus on the running time of the $\textbf{DCSGA}$ algorithms, since all $\textbf{DCSAD}$ algorithms have quasi-linear time complexity $O((m_1+m_2+n)\log{n})$, and are efficient and scalable in practice. Besides the above DCS mining tasks, to compare the efficiency of the algorithms, we also employed several other data sets whose statistics can be found in Table~\ref{tab:gd_sta}. For details on how these datasets were generated and on the experiments conducted on them, please refer to the Appendix. \nop{ Besides the above DCS mining tasks, to compare the efficiency of the algorithms, we also employed two large data sets, DBLP-C and Actor. The DBLP-C data set contains timestamped co-authorship records. We split all co-authorship records into two almost even parts by setting a specific timestamp as the separation timestamp. Then the two parts were used to build two co-author graphs $G_1$ and $G_2$, where the weight of an edge between two vertices (authors) is the number of collaborations.
Similar to the Emerging/Disappearing co-author group mining task, we adopted the Weighted and Discrete settings to build the difference graph $G_D$. The actor data set is a collaboration network of actors, where the weight of an edge between two vertices (actors) is the number of collaborations. We directly used this actor collaboration network as a difference graph, since as pointed out in Section~\ref{sec:refine}, our \textbf{DCSGA} algorithms are also competitive solutions to traditional graph affinity maximization on weighted graphs. For the Actor difference graph, we also tried the Weighted setting and the Discrete setting, where in the Discrete setting we set edge weights $D(u,v)=10$ if $D(u,v)$ originally was greater than 10. The statistics of DBLP-C and Actor difference graphs can be found in Table~\ref{tab:gd_sta}. Table~\ref{tab:large_ga} reports the DCS found, where all $\textbf{DCSGA}$ algorithms again found the same DCS every time. } \begin{table*}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline Data & Setting & $G_D$ Type & NewSEA & \tabincell{c}{SEACD+\\Refine} & \tabincell{c}{SEA+\\Refine} & \tabincell{c}{\#Errors\\in SEA} \\ \hline DBLP & Weighted & Emerging & 0.05 & 3.2 & 14.3 & 1 \\ \hline DBLP & Weighted & Disappearing & 0.05 & 3.2 & 13.7 & 1 \\ \hline DBLP & Discrete & Emerging & 0.06 & 2.9 & 7.3 & 2 \\ \hline DBLP & Discrete & Disappearing & 0.06 & 2.9 & 6.8 & 0 \\ \hline DM & --- & Emerging & 0.35 & 14.1 & 185.3 & 0 \\ \hline DM & --- & Disappearing & 0.21 & 6.9 & 36.3 & 0 \\ \hline Wiki & --- & Consistent & 56.6 & 452 & 36121 & 80 \\ \hline Wiki & --- & Conflicting & 23.8 & 110 & 7703 & 211 \\ \hline Movie & --- & \tabincell{c}{Interest$-$\\Social} & 16.3 & 29.6 & 580.6 & 1 \\ \hline Movie & --- & \tabincell{c}{Social$-$\\Interest} & 23.1 & 32.7 & 404.8 & 1 \\ \hline Book & --- & \tabincell{c}{Interest$-$\\Social} & 2.02 & 14.5 & 53.2 & 0 \\ \hline Book & --- & \tabincell{c}{Social$-$\\Interest} & 20.9 & 32.7 & 397 & 0 \\ \hline DBLP-C 
& Weighted & --- & 2.01 & 8054 & 23090 & 118 \\ \hline DBLP-C & Discrete & --- & 12.3 & 7678 & 22837 & 131 \\ \hline Actor & Weighted & --- & 2.3 & 2249 & 73671 & 321 \\ \hline Actor & Discrete & --- & 155 & 2574 & 124132 & 4419 \\ \hline \end{tabular} \caption{Running time in seconds.} \label{tab:time} \end{table*} Table~\ref{tab:time} reports the running time of each $\textbf{DCSGA}$ algorithm on each data set. Since we set different convergence conditions for the Shrink stage of each algorithm, one may wonder whether the convergence condition for SEA+Refine is too strict, making SEA+Refine less efficient than the other two algorithms. Thus, we also report the number of errors made by SEA+Refine in the Expansion stages. Note that the errors in Expansion are caused by the Shrink stage failing to reach a local KKT point. From Table~\ref{tab:time} we find that the SEA+Refine algorithm often made mistakes in the Expansion stage, which means the convergence condition for the Shrink stage of SEA+Refine is still too loose to achieve a local KKT point. It is worth noting that the two algorithms using our coordinate descent algorithm in the Shrink stage, NewSEA and SEACD+Refine, never made mistakes in the Expansion stage. We also find that our NewSEA algorithm is often much faster than the other two algorithms. Note that the only difference between NewSEA and SEACD+Refine is the smart initialization heuristic. Compared to SEACD+Refine, the smart initialization heuristic sometimes brings a speed-up of 3 orders of magnitude. Moreover, SEACD+Refine is always faster than SEA+Refine, sometimes 80 times faster. It seems that when the input $G_{D^+}$ is sparse, SEACD+Refine and SEA+Refine are close in efficiency. When $G_{D^+}$ becomes denser, the gap in efficiency gets larger. The Expansion error rate (defined by $\frac{\text{\#Errors in SEA}}{n}$) seems correlated with how dense $G_{D^+}$ is.
The results are shown in Fig.~\ref{fig:avrng}, where $m^+/n$ measures how dense $G_{D^+}$ is, and $m^+$ is the number of edges in $G_{D^+}$. \begin{figure}[t] \centering \subfigure[Speed-Up]{ \includegraphics[width=.17\textwidth]{SpeedUp.pdf} } \subfigure[Errors of SEA]{ \includegraphics[width=.17\textwidth]{Errors.pdf} } \caption{SpeedUp of SEACD+Refine and Errors in Expansions of SEA+Refine} \label{fig:avrng} \end{figure} \subsection{\updates{Comparison with EgoScan~\cite{cadena2016dense}}} \updates{Both DCSAD and DCSGA are new problems that were not discussed in the literature before, and this paper focuses on algorithmic solutions to the two problems, so there are no very suitable baselines for our algorithms. However, in this section, we still compare our DCS mining algorithms with the EgoScan algorithm in~\cite{cadena2016dense}, which is the work closest to ours in the literature. The objective of EgoScan is to maximize $W_D(S)$ subject to $S \subseteq V$ on the difference graph $G_D$. } \updates{We ran the EgoScan algorithm\footnote{We thank the authors of~\cite{cadena2016dense} for providing us the code of EgoScan.} on the datasets used in our experiments. Unfortunately, since EgoScan uses a Semi-Definite Programming (SDP) solver as a frequently called subroutine, and the SDP solver is very slow and consumes too much memory when the ego nets of vertices are large (thousands of vertices or more), we only got results on the 4 DBLP co-author difference graphs that we used to find emerging/disappearing co-author groups. For the 4 graphs, EgoScan always took more than 100 seconds to finish. For other datasets, either EgoScan could not finish running in one day or the memory (16GB) of our machine was not enough for the SDP solver. The high computational cost is actually one drawback of applying EgoScan in practice.} \updates{We display the results of running EgoScan on the DBLP co-author data.
Since all co-author groups found by EgoScan have at least 44 authors, we cannot list all the authors. We only show statistics of these co-author groups. Comparing Table~\ref{tab:egoscan_sta} with Table~\ref{tab:group_info}, which shows statistics of the author groups found by our DCS algorithms, we find that our DCS algorithms are much better than EgoScan in finding DCS w.r.t.\ average degree and edge density. Moreover, the subgraphs found by EgoScan are all large, even larger than the subgraphs found by our DCSGreedy algorithms.} \begin{table*}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|} \hline Setting & $G_D$ Type & \#Authors & \#Edges & Positive Clique? & Ave. Degree Difference & Edge Density Difference ($\frac{W_D(S)}{|S|^2}$) \\ \hline Weighted & Emerging & 82 & 473 & No & 26.95 & 0.3287 \\ \hline Weighted & Disappearing & 59 & 311 & No & 45.39 & 0.7693 \\ \hline Discrete & Emerging & 44 & 124 & No & 7.46 & 0.1694 \\ \hline Discrete & Disappearing & 80 & 527 & No & 13.8 & 0.1725 \\ \hline \end{tabular} \caption{\updates{Statistics of Co-Author Groups (Subgraphs) Found by EgoScan}} \label{tab:egoscan_sta} \end{table*} \updates{We also compare our DCS algorithms with EgoScan in finding subgraphs w.r.t.\ the total edge weight difference $W_D(S)$, which is shown in Table~\ref{tab:wd_comp}. Note that the total edge weight difference of a solution $\textbf{x}$ returned by our NewSEA algorithm is defined as $W_D(S_{\textbf{x}})$.
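For reference, the density measures compared in these tables can be sketched as follows (a hedged sketch: we assume $D$ is stored as a symmetric matrix and that $W_D(S)$ sums $D$ over all ordered pairs in $S$, a convention under which graph affinity with uniform weights on $S$ coincides with the edge density difference $\frac{W_D(S)}{|S|^2}$; the paper's exact counting convention may differ by a factor of $2$):

```python
import numpy as np

def w_d(D, S):
    """Total edge weight difference W_D(S): sum of D over ordered pairs in S."""
    idx = np.array(sorted(S))
    return float(D[np.ix_(idx, idx)].sum())

def average_degree_diff(D, S):
    """Average degree difference rho_D(S) = W_D(S) / |S|."""
    return w_d(D, S) / len(S)

def edge_density_diff(D, S):
    """Edge density difference W_D(S) / |S|^2."""
    return w_d(D, S) / len(S) ** 2

def graph_affinity_diff(D, x):
    """Graph affinity difference x^T D x for a weight vector x on the simplex."""
    x = np.asarray(x, dtype=float)
    return float(x @ D @ x)
```

Setting $x$ to the uniform vector on $S$ in `graph_affinity_diff` recovers `edge_density_diff` under this convention, which is one way to see edge density as a discrete version of graph affinity.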
Under the evaluation metric of total edge weight difference, EgoScan performs much better than our DCS algorithms.} \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|} \hline Setting & $G_D$ Type & DCSGreedy & NewSEA ($W_D(S_{\textbf{x}})$) & EgoScan \\ \hline Weighted & Emerging & 326 & 326 & 2210 \\ \hline Weighted & Disappearing & 858 & 100 & 2678 \\ \hline Discrete & Emerging & 84 & 84 & 328 \\ \hline Discrete & Disappearing & 472 & 112 & 1104 \\ \hline \end{tabular} \caption{\updates{Total edge weight difference ($W_D(S)$) of Co-Author Groups Found by DCS algorithms and EgoScan}} \label{tab:wd_comp} \end{table} \updates{Tables~\ref{tab:egoscan_sta}, \ref{tab:wd_comp} and~\ref{tab:group_info} show that DCS w.r.t.\ different measures could be very different. We have the following rough suggestions for deciding which measure to use in practice: (1) if users prefer a small DCS and good interpretability, we should take graph affinity as the density measure and apply our NewSEA algorithm, since it always returns a positive clique where, for every pair of vertices, their connection in $G_2$ is tighter than their connection in $G_1$; (2) if users prefer a medium-sized subgraph, then average degree should be the measure and we apply our DCSGreedy algorithm; (3) if users want an even larger subgraph, total edge weight may be the suitable measure, because such a measure seems to encourage even bigger subgraphs than average degree.} \section{Conclusion}\label{sec:con} In this paper, we studied the Density Contrast Subgraph problem, which has interesting applications in practice. Two widely adopted graph density measures, average degree and graph affinity, were considered. We proved the hardness of the DCS problem under the two measures, and devised algorithms that work well in practice for finding DCS under both density measures.
We reported a series of experiments on both real and synthetic datasets and demonstrated the effectiveness and efficiency of our algorithms. There are some interesting future directions. For example, our methods are based on graph density, but density sometimes cannot reflect how ``dissimilar'' a subgraph looks in two graphs. Thus, how to extract subgraphs that are dissimilar in two graphs with respect to some graph similarity measure~\cite{koutra2013deltacon} is interesting. Also, our methods mine only the single DCS with the greatest density difference; how to mine multiple subgraphs with large density differences is another interesting direction. \bibliographystyle{abbrv}
\section{Introduction} Let $X$ be a smooth projective variety over $\mathbb{C}$, and let $A^i(X):=CH^i(X)_{\mathbb{Q}}$ denote the Chow groups of $X$ (i.e. the groups of codimension $i$ algebraic cycles on $X$ with $\mathbb{Q}$--coefficients, modulo rational equivalence). Let $A^i_{hom}(X)$ (and $A^i_{AJ}(X)$) denote the subgroup of homologically trivial (resp. Abel--Jacobi trivial) cycles. The field of algebraic cycles is something of a mathematician's goldmine, with a wealth of open questions lying around for the picking. Many of these open questions are special cases or variants of Bloch's conjecture: \begin{conjecture}[Bloch \cite{B}]\label{bloch1} Let $X$ be a smooth projective variety of dimension $n$. Let $\Gamma\in A^n(X\times X)$ be such that \[ \Gamma_\ast=0\colon\ \ \ H^i(X,\mathcal O_X)\ \to\ H^i(X,\mathcal O_X)\ \ \ \forall i>0\ .\] Then \[ \Gamma_\ast=0\colon\ \ \ A^n_{hom}(X)\ \to\ A^n(X)\ .\] \end{conjecture} \begin{conjecture}[Bloch \cite{B}]\label{bloch2} Let $X$ be a smooth projective variety of dimension $n$. Assume that \[ H^i(X,\mathcal O_X)=0\ \ \ \forall i>0\ .\] Then \[ A^n_{}(X)\cong\mathbb{Q}\ .\] \end{conjecture} The ``absolute version'' (conjecture \ref{bloch2}) is obtained from the ``relative version'' (conjecture \ref{bloch1}) by taking $\Gamma$ to be the diagonal. Conjecture \ref{bloch2} is famously open for surfaces of general type (cf. \cite{PW}, \cite{V8}, \cite{Gul} for some recent progress). Let us now suppose that $X$ is a hyperk\"ahler variety (i.e., a projective irreducible holomorphic symplectic manifold \cite{Beau0}, \cite{Beau1}), say of dimension $2m$. Suppose there exists a non--symplectic automorphism $\sigma\in\aut(X)$ of order $k>m$. 
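To see why such an automorphism kills the cohomology of the structure sheaf, here is a sketch of the standard computation (under the usual assumption that $\sigma^\ast$ multiplies the symplectic form $\omega$ by a primitive $k$th root of unity; recall that for a hyperk\"ahler variety one has $H^{2r}(X,\mathcal O_X)=\mathbb{C}\cdot\bar{\omega}^r$ for $0\le r\le m$, and $H^i(X,\mathcal O_X)=0$ for $i$ odd). The automorphism $\sigma$ then acts on $H^{2r}(X,\mathcal O_X)$ as multiplication by a $k$th root of unity $\zeta_r$, with $\zeta_r\not=1$ for $1\le r\le m<k$, and so
\[ \Bigl( \sum_{t=1}^k \sigma^t\Bigr){}_\ast= \Bigl(\sum_{t=1}^k (\zeta_r)^t\Bigr)\cdot\ide ={\zeta_r\bigl((\zeta_r)^k-1\bigr)\over \zeta_r-1}\cdot\ide=0\colon\ \ \ H^{2r}(X,\mathcal O_X)\ \to\ H^{2r}(X,\mathcal O_X)\ .\]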
This implies that \[ \bigl( \sigma+\sigma^2 + \ldots +\sigma^k\bigr){}_\ast=0\colon\ \ \ H^i(X,\mathcal O_X)\ \to\ H^{i}(X,\mathcal O_X)\ \ \ \forall i>0\ .\] Conjecture \ref{bloch1} (applied to the correspondence $\Gamma=\sum_{j=1}^k \Gamma_{\sigma^j}\in A^{2m}(X\times X)$, where $\Gamma_f$ denotes the graph of an automorphism $f\in\aut(X)$) then predicts the following: \begin{conjecture}\label{conjhk} Let $X$ be a hyperk\"ahler variety of dimension $2m$. Let $\sigma\in\aut(X)$ be an order $k$ non--symplectic automorphism, and assume $k>m$. Then \[ \begin{split} \bigl( \sigma+\sigma^2 + \ldots +\sigma^k\bigr){}_\ast&=0\colon\ \ \ A^{2m}_{hom}(X)\ \to\ A^{2m}(X)\ ;\\ \bigl( \sigma+\sigma^2 + \ldots +\sigma^k\bigr){}_\ast&=0\colon\ \ \ A^{2}_{AJ}(X)\ \to\ A^{2}(X)\ .\\ \end{split} \] \end{conjecture} (Here, the second statement follows from the first by applying the Bloch--Srinivas argument \cite{BS}.) In \cite{nonsymp3}, this conjecture was verified for one family of hyperk\"ahler fourfolds, given by Fano varieties of lines on certain cubic fourfolds with an order $3$ non--symplectic automorphism. There exist three other families of cubic fourfolds with a polarized order $3$ automorphism inducing a non--symplectic automorphism on the Fano varieties (the four families are presented in \cite[Examples 6.4, 6.5, 6.6 and 6.7]{BCS}). The aim of the present note is to treat these $3$ remaining families. A first result is that conjecture \ref{conjhk} is true for one of the families: \begin{nonumbering}[=theorem \ref{main0}] Let $Y\subset\mathbb{P}^5(\mathbb{C})$ be a smooth cubic defined by an equation \[ f(X_0,\ldots,X_4) + (X_5)^3=0\ ,\] where $f$ is homogeneous of degree $3$. Let $\sigma_Y\in\aut(Y)$ be the order $3$ automorphism induced by \[ \begin{split} \mathbb{P}^5(\mathbb{C})\ &\to\ \mathbb{P}^5(\mathbb{C})\ ,\\ [X_0:\ldots:X_5]\ &\mapsto\ [X_0:X_1:X_2:X_3: X_4:\nu X_5]\\ \end{split}\] (where $\nu$ is a $3$rd root of unity).
Let $X=F(Y)$ be the Fano variety of lines in $Y$, and let $\sigma\in\aut(X)$ be the non--symplectic automorphism induced by $\sigma_Y$. Then \[ (\ide +\sigma+\sigma^2)_\ast \ A^4_{hom}(X)=0\ .\] \end{nonumbering} Theorem \ref{main0} is proven using the theory of finite--dimensional motives \cite{Kim}. The argument is similar to that of \cite{nonsymp3}. For the two other families, we prove a weak version of conjecture \ref{conjhk}: \begin{nonumbering}[=theorem \ref{main}] Let $(Y,\sigma_Y)$ be one of the following: \noindent (a) $Y\subset\mathbb{P}^5(\mathbb{C})$ is a smooth cubic fourfold defined by an equation \[ f(X_0,X_1,X_2)+ g(X_3,X_4)+ (X_5)^3 + X_5\bigl( X_3\ell_1(X_0,X_1,X_2)+X_4\ell_2(X_0,X_1,X_2)\bigr)=0\ ,\] where $f,g$ are homogeneous polynomials of degree $3$ and $\ell_1,\ell_2$ are linear forms. $\sigma_Y\in\aut(Y)$ is the order $3$ automorphism induced by \[ \begin{split} \mathbb{P}^5(\mathbb{C})\ &\to\ \mathbb{P}^5(\mathbb{C})\ ,\\ [X_0:\ldots:X_5]\ &\mapsto\ [X_0:X_1:X_2:\nu X_3: \nu X_4:\nu^2 X_5]\\ \end{split}\] (where $\nu$ is a $3$rd root of unity). \noindent (b) $Y\subset\mathbb{P}^5(\mathbb{C})$ is a smooth cubic fourfold defined by an equation \[ \begin{split} X_2 f(X_0,X_1)+ X_3 g(X_0,X_1)+ (X_4)^2\ell_1(X_0,X_1) + X_4X_5\ell_2(X_0,X_1)+&\\ (X_5)^2\ell_3(X_0,X_1)+ X_4 h(X_2,X_3) +X_5 k(X_2,X_3)&=0\ ,\\ \end{split}\] where $f,g,h,k$ are homogeneous polynomials of degree $2$ and $\ell_1,\ell_2,\ell_3$ are linear forms. $\sigma_Y\in\aut(Y)$ is the order $3$ automorphism induced by \[ \begin{split} \mathbb{P}^5(\mathbb{C})\ &\to\ \mathbb{P}^5(\mathbb{C})\ ,\\ [X_0:\ldots:X_5]\ &\mapsto\ [X_0:X_1:\nu X_2:\nu X_3: \nu^2 X_4:\nu^2 X_5]\\ \end{split}\] (where $\nu$ is a $3$rd root of unity). Let $X=F(Y)$ be the Fano variety of lines in $Y$, and let $\sigma\in\aut(X)$ be the non--symplectic automorphism induced by $\sigma_Y$.
Then \[ \begin{split} (\ide +\sigma+\sigma^2)_\ast \ A^2_{(2)}(X)&=0\ ,\\ (\ide +\sigma+\sigma^2)_\ast \ A^4_{(2)}(X)&=0\ .\\ \end{split} \] \end{nonumbering} Here, the notation $A^\ast_{(\ast)}(X)$ refers to the {\em Fourier decomposition\/} of the Chow ring of $X$ as constructed by Shen--Vial \cite{SV}, which plays an important role in this note. Theorem \ref{main} is proven by using Voisin's method of ``spread'', as developed in \cite{V0}, \cite{V1}. The argument is similar to that of \cite{LFu2} (which dealt with symplectic automorphisms on Fano varieties of cubic fourfolds), and that of \cite{inv}, \cite{inv2} (which dealt with anti--symplectic involutions on Fano varieties of cubic fourfolds). I have not been able to prove the full conjecture \ref{conjhk} for the two families of theorem \ref{main}; the missing piece concerns the action on $A^4_{(4)}(X)$ (cf. remark \ref{problem} for discussion). As an immediate corollary, we find that Bloch's conjecture \ref{bloch2} is verified for the quotient of the Fano varieties in one family: \begin{nonumberingc}[=corollary \ref{triv}] Let $(X,\sigma)$ be as in theorem \ref{main0}, and let $Z:=X/<\sigma>$ be the quotient. Then \[ A^4_{hom}(Z)=0\ .\] \end{nonumberingc} Another corollary is the following property of the Chow ring of these Fano varieties (reminiscent of the Chow ring of $K3$ surfaces \cite{BV}): \begin{nonumberingc}[=corollary \ref{ring}] Let $X$ and $\sigma$ be as in theorem \ref{main0} or theorem \ref{main}. Let $a\in A^3(X)$ be a $1$--cycle of the form \[ a=\displaystyle\sum_{i=1}^r b_i\cdot D_i\ \ \ \in A^3(X)\ ,\] where $b_i\in A^2(X)^\sigma$ and $D_i\in A^1(X)_{}$. Then $a$ is rationally trivial if and only if $a$ is homologically trivial. \end{nonumberingc} \vskip0.6cm \begin{convention} In this article, the word {\sl variety\/} will refer to a reduced irreducible scheme of finite type over $\mathbb{C}$. A {\sl subvariety\/} is a (possibly reducible) reduced subscheme which is equidimensional. 
{\bf All Chow groups will be with rational coefficients}: we will denote by $A_j(X)$ the Chow group of $j$--dimensional cycles on $X$ with $\mathbb{Q}$--coefficients; for $X$ smooth of dimension $n$ the notations $A_j(X)$ and $A^{n-j}(X)$ are used interchangeably. The notations $A^j_{hom}(X)$, $A^j_{AJ}(X)$ will be used to indicate the subgroups of homologically trivial, resp. Abel--Jacobi trivial cycles. For a morphism $f\colon X\to Y$, we will write $\Gamma_f\in A_\ast(X\times Y)$ for the graph of $f$. The contravariant category of Chow motives (i.e., pure motives with respect to rational equivalence as in \cite{Sc}, \cite{MNP}) will be denoted $\mathcal M_{\rm rat}$. We will write $H^j(X)$ to indicate singular cohomology $H^j(X,\mathbb{Q})$. Given an automorphism $\sigma\in\aut(X)$, we will write $A^j(X)^\sigma$ (and $H^j(X)^\sigma$) for the subgroup of $A^j(X)$ (resp. $H^j(X)$) invariant under $\sigma$. \end{convention} \section{Preliminaries} \subsection{Quotient varieties} \label{ssquot} \begin{definition} A {\em projective quotient variety\/} is a variety \[ Z=X/G\ ,\] where $X$ is a smooth projective variety and $G\subset\hbox{Aut}(X)$ is a finite group. \end{definition} \begin{proposition}[Fulton \cite{F}]\label{quot} Let $Z$ be a projective quotient variety of dimension $n$. Let $A^\ast(Z)$ denote the operational Chow cohomology ring. The natural map \[ A^i(Z)\ \to\ A_{n-i}(Z) \] is an isomorphism for all $i$. \end{proposition} \begin{proof} This is \cite[Example 17.4.10]{F}. \end{proof} \begin{remark} It follows from proposition \ref{quot} that the formalism of correspondences goes through unchanged for projective quotient varieties (this is also noted in \cite[Example 16.1.13]{F}). We may thus consider motives $(Z,p,0)\in\mathcal M_{\rm rat}$, where $Z$ is a projective quotient variety and $p\in A^n(Z\times Z)$ is a projector. 
For a projective quotient variety $Z=X/G$, one readily proves (using Manin's identity principle) that there is an isomorphism \[ h(Z)\cong h(X)^G:=(X,\Delta_G,0)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ ,\] where $\Delta_G$ denotes the idempotent ${1\over \vert G\vert}{\sum_{g\in G}}\Gamma_g$. \end{remark} \subsection{Finite--dimensional motives} We refer to \cite{Kim}, \cite{An}, \cite{J4}, \cite{MNP} for the definition of finite--dimensional motive. An essential property of varieties with finite--dimensional motive is embodied by the nilpotence theorem: \begin{theorem}[Kimura \cite{Kim}]\label{nilp} Let $X$ be a smooth projective variety of dimension $n$ with finite--dimensional motive. Let $\Gamma\in A^n(X\times X)_{}$ be a correspondence which is numerically trivial. Then there is $N\in\mathbb{N}$ such that \[ \Gamma^{\circ N}=0\ \ \ \ \in A^n(X\times X)_{}\ .\] \end{theorem} Actually, the nilpotence property (for all powers of $X$) could serve as an alternative definition of finite--dimensional motive, as shown by Jannsen \cite[Corollary 3.9]{J4}. Conjecturally, any variety has finite--dimensional motive \cite{Kim}. We are still far from knowing this, but at least there are quite a few non--trivial examples. \subsection{Spread} \begin{lemma}[Voisin \cite{V0}, \cite{V1}]\label{projbundle} Let $M$ be a smooth projective variety of dimension $n+1$, and $L$ a very ample line bundle on $M$. Let \[ \pi\colon \mathcal Y\to B\] denote a family of hypersurfaces, where $B\subset\vert L\vert$ is a Zariski open. Let \[ p\colon \widetilde{\mathcal Y\times_B \mathcal Y}\ \to\ \mathcal Y\times_B \mathcal Y\] denote the blow--up of the relative diagonal. Then $\widetilde{\mathcal Y\times_B \mathcal Y}$ is Zariski open in $V$, where $V$ is a projective bundle over $\widetilde{M\times M}$, the blow--up of $M\times M$ along the diagonal. \end{lemma} \begin{proof} This is \cite[Proof of Proposition 3.13]{V0} or \cite[Lemma 1.3]{V1}. 
The idea is to define $V$ as \[ V:=\Bigl\{ \bigl((x,y,z),\sigma\bigr) \ \vert\ \sigma\vert_z=0\Bigr\}\ \ \subset\ \widetilde{M\times M}\times \vert L\vert\ .\] The very ampleness assumption ensures $V\to\widetilde{M\times M}$ is a projective bundle. \end{proof} This is used in the following key proposition: \begin{proposition}[Voisin \cite{V1}]\label{voisin1} Assumptions as in lemma \ref{projbundle}. Assume moreover $M$ has trivial Chow groups. Let $R\in A^n(V)_{}$. Suppose that for all $b\in B$ one has \[ H^n(Y_b)_{prim}\not=0\ \ \ \ \hbox{and}\ \ \ \ R\vert_{\widetilde{Y_b\times Y_b}}=0\ \ \in H^{2n}(\widetilde{Y_b\times Y_b})\ .\] Then there exists $\delta\in A^n(M\times M)_{}$ such that \[ (p_b)_\ast \bigl(R\vert_{\widetilde{Y_b\times Y_b}}\bigr)= \delta\vert_{Y_b\times Y_b} \ \ \in A^{n}({Y_b\times Y_b})_{}\] for all $b\in B$. (Here $p_b$ denotes the restriction of $p$ to $\widetilde{Y_b\times Y_b}$, which is the blow--up of $Y_b\times Y_b$ along the diagonal.) \end{proposition} \begin{proof} This is \cite[Proposition 1.6]{V1}. \end{proof} The following is an equivariant version of proposition \ref{voisin1}: \begin{proposition}[Voisin \cite{V1}]\label{voisin2} Let $M$ and $L$ be as in proposition \ref{voisin1}. Let $G\subset\aut(M)$ be a finite group. Assume the following: \noindent (\rom1) The linear system $\vert L\vert^G:=\mathbb{P}\bigl( H^0(M,L)^G\bigr)$ has no base--points, and the locus of points in $\widetilde{M\times M}$ parametrizing triples $(x,y,z)$ such that the length $2$ subscheme $z$ imposes only one condition on $\vert L\vert^G$ is contained in the union of (proper transforms of) graphs of non--trivial elements of $G$, plus some loci of codimension $>n+1$. \noindent (\rom2) Let $B\subset\vert L\vert^G$ be the open parametrizing smooth hypersurfaces, and let $Y_b\subset M$ be a hypersurface for $b\in B$ general. 
There is no non--trivial relation \[ {\displaystyle\sum_{g\in G}} c_g \Gamma_g +\gamma=0\ \ \ \hbox{in}\ H^{2n}(Y_b\times Y_b)\ ,\] where $c_g\in\mathbb{Q}$ and $\gamma$ is a cycle in $\ima\bigl( A^n(M\times M)\to A^n(Y_b\times Y_b)\bigr)$. Let $R\in A^n(\mathcal Y\times_B \mathcal Y)$ be such that \[ R\vert_{{Y_b\times Y_b}}=0\ \ \in H^{2n}({Y_b\times Y_b})\ \ \ \forall b\in B\ .\] Then there exists $\delta\in A^n(M\times M)_{}$ such that \[ R\vert_{{Y_b\times Y_b}}= \delta\vert_{Y_b\times Y_b} \ \ \in A^{n}({Y_b\times Y_b})\ \ \ \forall b\in B\ .\] \end{proposition} \begin{proof} This is not stated verbatim in \cite{V1}, but it is contained in the proof of \cite[Proposition 3.1 and Theorem 3.3]{V1}. We briefly review the argument. One considers \[ V:=\Bigl\{ \bigl((x,y,z),\sigma\bigr) \ \vert\ \sigma\vert_z=0\Bigr\}\ \ \subset\ \widetilde{M\times M}\times \vert L\vert^G\ .\] The problem is that this is no longer a projective bundle over $\widetilde{M\times M}$. However, as explained in the proof of \cite[Theorem 3.3]{V1}, hypothesis (\rom1) ensures that one can obtain a projective bundle after blowing up the graphs $\Gamma_g, g\in G$ plus some loci of codimension $>n+1$. Let $M^\prime\to\widetilde{M\times M}$ denote the result of these blow--ups, and let $V^\prime\to M^\prime$ denote the projective bundle obtained by base--changing. Analyzing the situation as in \cite[Proof of Theorem 3.3]{V1}, one obtains \[ R\vert_{Y_b\times Y_b} =R_0\vert_{Y_b\times Y_b}+ {\displaystyle\sum_{g\in G}} \lambda_g \Gamma_g\ \ \ \hbox{in}\ A^n(Y_b\times Y_b) \ ,\] where $R_0\in A^n(M\times M)$ and $\lambda_g\in\mathbb{Q}$ (this is \cite[Equation (15)]{V1}). By assumption, $R\vert_{Y_b\times Y_b}$ is homologically trivial. Using hypothesis (\rom2), this implies that all $\lambda_g$ have to be $0$. \end{proof} \subsection{MCK decomposition} \begin{definition}[Murre \cite{Mur}] Let $X$ be a smooth projective variety of dimension $n$. 
We say that $X$ has a {\em CK decomposition\/} if there exists a decomposition of the diagonal \[ \Delta_X= \pi_0+ \pi_1+\cdots +\pi_{2n}\ \ \ \hbox{in}\ A^n(X\times X)\ ,\] such that the $\pi_i$ are mutually orthogonal idempotents and $(\pi_i)_\ast H^\ast(X)= H^i(X)$. (NB: ``CK decomposition'' is shorthand for ``Chow--K\"unneth decomposition''.) \end{definition} \begin{remark} The existence of a CK decomposition for any smooth projective variety is part of Murre's conjectures \cite{Mur}, \cite{J2}. \end{remark} \begin{definition}[Shen--Vial \cite{SV}] Let $X$ be a smooth projective variety of dimension $n$. Let $\Delta_X^{sm}\in A^{2n}(X\times X\times X)$ be the class of the small diagonal \[ \Delta_X^{sm}:=\bigl\{ (x,x,x)\ \vert\ x\in X\bigr\}\ \subset\ X\times X\times X\ .\] An {\em MCK decomposition\/} is a CK decomposition $\{\pi^X_i\}$ of $X$ that is {\em multiplicative\/}, i.e. it satisfies \[ \pi^X_k\circ \Delta_X^{sm}\circ (\pi^X_i\times \pi^X_j)=0\ \ \ \hbox{in}\ A^{2n}(X\times X\times X)\ \ \ \hbox{for\ all\ }i+j\not=k\ .\] (NB: ``MCK decomposition'' is shorthand for ``multiplicative Chow--K\"unneth decomposition''.) A {\em weak MCK decomposition\/} is a CK decomposition $\{\pi^X_i\}$ of $X$ that satisfies \[ \Bigl(\pi^X_k\circ \Delta_X^{sm}\circ (\pi^X_i\times \pi^X_j)\Bigr){}_\ast (a\times b)=0 \ \ \ \hbox{for\ all\ } a,b\in\ A^\ast(X)\ .\] \end{definition} \begin{remark} The small diagonal (seen as a correspondence from $X\times X$ to $X$) induces the {\em multiplication morphism\/} \[ \Delta_X^{sm}\colon\ \ h(X)\otimes h(X)\ \to\ h(X)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\] Suppose $X$ has a CK decomposition \[ h(X)=\bigoplus_{i=0}^{2n} h^i(X)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ .\] By definition, this decomposition is multiplicative if for any $i,j$ the composition \[ h^i(X)\otimes h^j(X)\ \to\ h(X)\otimes h(X)\ \xrightarrow{\Delta_X^{sm}}\ h(X)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\] factors through $h^{i+j}(X)$. 
If $X$ has a weak MCK decomposition, then setting \[ A^i_{(j)}(X):= (\pi^X_{2i-j})_\ast A^i(X) \ ,\] one obtains a bigraded ring structure on the Chow ring: that is, the intersection product sends $A^i_{(j)}(X)\otimes A^{i^\prime}_{(j^\prime)}(X) $ to $A^{i+i^\prime}_{(j+j^\prime)}(X)$. It is expected (but not proven !) that for any $X$ with a weak MCK decomposition, one has \[ A^i_{(j)}(X)\stackrel{??}{=}0\ \ \ \hbox{for}\ j<0\ ,\ \ \ A^i_{(0)}(X)\cap A^i_{hom}(X)\stackrel{??}{=}0\ ;\] this is related to Murre's conjectures B and D, that have been formulated for any CK decomposition \cite{Mur}. The property of having an MCK decomposition is severely restrictive, and is closely related to Beauville's ``(weak) splitting property'' \cite{Beau3}. For more ample discussion, and examples of varieties with an MCK decomposition, we refer to \cite[Section 8]{SV}, as well as \cite{V6}, \cite{SV2}, \cite{FTV}. \end{remark} In what follows, we will make use of the following: \begin{theorem}[Shen--Vial \cite{SV}]\label{sv} Let $Y\subset\mathbb{P}^5(\mathbb{C})$ be a smooth cubic fourfold, and let $X:=F(Y)$ be the Fano variety of lines in $Y$. There exists a self--dual CK decomposition $\{\Pi^X_i\}$ for $X$, and \[ (\Pi^X_{2i-j})_\ast A^i(X) = A^i_{(j)}(X)\ ,\] where the right--hand side denotes the splitting of the Chow groups defined in terms of the Fourier transform as in \cite[Theorem 2]{SV}. Moreover, we have \[ A^i_{(j)}(X)=0\ \ \ \hbox{if\ }j<0\ \hbox{or\ }j>i\ \hbox{or\ } j\ \hbox{is\ odd}\ .\] In case $Y$ is very general, the Fourier decomposition $A^\ast_{(\ast)}(X)$ forms a bigraded ring, and hence $\{\Pi^X_i\}$ is a weak MCK decomposition. \end{theorem} \begin{proof} This is a summary of results in \cite{SV} (cf. also \cite[Theorem 2.11]{nonsymp3} for precise attributions). 
\end{proof} \begin{remark}\label{pity} Unfortunately, it is not yet known that the Fourier decomposition of \cite{SV} induces a bigraded ring structure on the Chow ring for {\em all\/} Fano varieties of smooth cubic fourfolds. For one thing, it has not yet been proven that $A^2_{(0)}(X)\cdot A^2_{(0)}(X)\subset A^4_{(0)}(X)$ (cf. \cite[Section 22.3]{SV} for discussion). \end{remark} \subsection{Relative CK decomposition} \begin{notation}\label{not} Let \[ \mathcal Y\ \to\ B \] denote the universal family of smooth cubic fourfolds $Y_b$. Here $B$ is a Zariski open in the parameter space $\mathbb{P} H^0(\mathbb{P}^5,\mathcal O_{\mathbb{P}^5}(3))$. Let \[ \mathcal X := \{ (\ell, b)\ \vert\ \ell \subset Y_b \}\ \ \ \subset\ G(1,5)\times B \] be the corresponding family of Fano varieties of lines. (Here $G(1,5)$ denotes the Grassmannian of lines in $\mathbb{P}^5$.) A fibre $X_b$ of $\mathcal X\to B$ is the Fano variety of lines on the cubic $Y_b$. \end{notation} \begin{proposition}\label{relck} Let $\mathcal X\to B$ be as above. There exist relative correspondences \[ \Pi_i^\mathcal X\ \ \ \in A^4(\mathcal X\times_B \mathcal X)\ \ \ (i=2,6)\ ,\] with the property that for each $b\in B$, one has \[ \begin{split} \bigl((\Pi_2^\mathcal X)\vert_{X_b\times X_b}\bigr){}_\ast &= (\Pi_2^{X_b})_\ast\colon\ \ \ A^2(X_b)\ \to\ A^2(X_b) \ ,\\ \bigl((\Pi_6^\mathcal X)\vert_{X_b\times X_b}\bigr){}_\ast &= (\Pi_6^{X_b})_\ast\colon\ \ \ A^4(X_b)\ \to\ A^4(X_b) \ .\\ \end{split} \] Here $\Pi_i^{X_b}$ is the Chow--K\"unneth decomposition of theorem \ref{sv}. \end{proposition} \begin{proof} The main point is that the Shen--Vial cycle $L\in A^2(X_b\times X_b)$ furnishing the Fourier decomposition \cite{SV} exists relatively: this is because by definition \[ L:= {1\over 3}(g_1^2+{3\over 2}g_1 g_2+g_2^2-c_1-c_2)-I\ \ \ \in A^2(X_b\times X_b)\ \] \cite[Equation (107)]{SV}.
Here, $g:=-c_1(\mathcal E_2)\in A^1(X_b)$ and $c:=c_2(\mathcal E_2)\in A^2(X_b)$ (and $\mathcal E_2$ is the restriction of the rank $2$ tautological bundle on the Grassmannian), and $g_i:=(p_i)^\ast(g)$, $c_i:=(p_i)^\ast(c)$ (where $p_i\colon X_b\times X_b\to X_b$ is projection on the $i$th factor), and $I\subset X_b\times X_b$ is the incidence correspondence. Since $g_i, c_i$ and $I$ obviously exist relatively, the same goes for $L$, i.e. there exists a relative correspondence \[ \mathcal{L}\ \ \ \in A^2(\mathcal X\times_B \mathcal X) \] with the property that for any $b\in B$ the restriction \[ \mathcal{L}\vert_{X_b\times X_b}\ \ \ \in A^2(X_b\times X_b) \] is the Shen--Vial class $L$ of \cite{SV}. This implies that the class $\ell\in A^2(X_b)$ of \cite{SV} (mentioned in theorem \ref{mult}(\rom1) below) also exists relatively: it is defined as \[ \ell:= (i_\Delta)^\ast(\mathcal{L}) \ \ \ \in A^2(\mathcal X)\ ,\] where $i_\Delta\colon \mathcal X\to \mathcal X\times_B \mathcal X$ denotes the embedding along the relative diagonal. (NB: this makes sense because $i_\Delta$ is a regular embedding.) Next, the classes $\ell_i:=(p_i)^\ast(\ell)\in A^2(X_b\times X_b)$ of \cite{SV} also exist relatively. Armed with these facts, let us inspect the construction of the $\{\Pi_i^{X_b}\}$ in \cite[Theorem 3.3]{SV}. As a first approach towards the construction of $\Pi^{X_b}_2$ and $\Pi^{X_b}_6$, Shen--Vial define \[ p_b:={1\over 25} L\cdot \ell_2\ \ \ \in A^4(X_b\times X_b)\ .\] Again, by the above remarks the cycle $p_b$ exists relatively (i.e. there is $\mathfrak p\in A^4(\mathcal X\times_B \mathcal X)$ which restricts to $p_b$ on each fibre). We define $\Pi_6^\mathcal X:=\mathfrak p\in A^4(\mathcal X\times_B \mathcal X)$. This does the job, for it is shown in \cite[Proof of Theorem 3.3]{SV} that $(p_b)_\ast$ acts as the identity on $A^4_{(2)}(X_b)$ and acts as $0$ on $A^4_{(0)}(X_b)\oplus A^4_{(4)}(X_b)$.
We define $\Pi_2^\mathcal X$ as the transpose $\Pi_2^\mathcal X:={}^t \Pi_6^\mathcal X$. This does the job, for it is shown in loc. cit. that $(p_b)^\ast$ acts as the identity on $A^2_{(2)}(X_b)$ and acts as $0$ on $A^2_{(0)}(X_b)$. \end{proof} \subsection{Refined CK decomposition} \begin{theorem}[]\label{pi20} Let $Y\subset\mathbb{P}^5(\mathbb{C})$ be a smooth cubic fourfold, and let $X:=F(Y)$ be the Fano variety of lines in $Y$. Then $X$ has a CK decomposition $\{\Pi^X_i\}$. Moreover, there exists a further splitting \[ \Pi^X_2 = \pi^X_{2,0} + \pi^X_{2,1}\ \ \ \hbox{in}\ A^4(X\times X)\ ,\] where $\pi^X_{2,1}$ is supported on $C\times D\subset X\times X$, where $C$ and $D$ are a curve, resp. a divisor on $X$. The action on cohomology verifies \[ (\pi^X_{2,0})_\ast H^\ast(X) = H^2_{tr}(X)\ ,\] where $H^2_{tr}(X)\subset H^2(X)$ is defined as the orthogonal complement of $NS(X)$ with respect to the Beauville--Bogomolov form. The action on Chow groups verifies \[ (\pi^X_{2,0})_\ast A^2(X) = A^2_{(2)}(X)\ .\] \end{theorem} \begin{proof} The existence of a CK decomposition is theorem \ref{sv}. To define the refined decomposition, let $\pi_4^Y\in A^4(Y\times Y)$ be a CK projector of $Y$. Pedrini \cite[Section 4]{Ped} has constructed a decomposition \[ \pi_4^Y = \pi^Y_{4,tr} + \pi^Y_{4,alg}\ \ \ \hbox{in}\ A^4(Y\times Y)\ ,\] where $\pi^Y_{4,tr}, \pi^Y_{4,alg}$ are mutually orthogonal idempotents, and \[ (\pi^Y_{4,tr})_\ast H^\ast(Y) = H^4_{tr}(Y)\ ,\] where $H^4_{tr}(Y)$ is by definition the orthogonal complement (under the cup product) of $N^2 H^4(Y)$. To obtain a similar splitting for $X$, one uses the following: let $P\in A^3(X\times Y)$ be the universal family of lines. Then \[ ({}^t P)_\ast\colon\ \ \ H^4(Y)\ \to\ H^2(X) \] is an isomorphism, the {\em Abel--Jacobi isomorphism\/} \cite{BD}. Moreover, there is a correspondence $Q\in A^5(X\times Y)$ inducing the inverse isomorphism, i.e. 
\[ \Pi^X_2 = {}^t P \circ \pi_4^Y \circ Q \ \ \ \hbox{in}\ H^8(X\times X)\ .\] (An explicit formula for $Q$ is $Q=-{1\over 6} P\circ \Gamma_{g^2}$, where $g\in A^1(X)$ is the Pl\"ucker polarization, and $\Gamma_{g^2}\in A^6(X\times X)$ is the correspondence acting as multiplication with $g^2$. This follows from \cite[Proof of Proposition 6]{BD}.) We now define \[ \begin{split} \pi^X_{2,1} &:= {}^t P \circ \pi_{4,alg}^Y \circ Q \ ,\\ \pi^X_{2,0}&:= \Pi^X_2-\pi^X_{2,1} \ \ \ \ \ \ \in A^4(X\times X)\ .\\ \end{split}\] Lieberman's lemma \cite[Lemma 3.3]{V3} gives an equality \[ \pi^X_{2,1}= ({}^t Q\times {}^t P)_\ast \pi_{4,alg}^Y\ \ \ \hbox{in}\ A^4(X\times X) \ .\] Since (by construction) the projector $\pi^Y_{4,alg}$ is supported on $V\times V$, for some closed codimension $2$ subvariety $V\subset Y$, this implies that $\pi^X_{2,1}$ is supported on $C\times D\subset X\times X$, where $C\subset X$ is the curve \[ p_X \bigl( \hbox{Supp} \bigl( (p_Y)^\ast(V)\cdot ({}^t Q)\bigr)\bigr)\ \ \ \subset X\ ,\] and $D\subset X$ is the divisor \[ p_X \bigl( \hbox{Supp} \bigl( (p_Y)^\ast(V)\cdot ({}^t P)\bigr)\bigr)\ \ \ \subset X\ .\] For reasons of dimension, $\pi^X_{2,1}$ acts trivially on $A^2(X)$, and so \[ (\pi^X_{2,0})_\ast A^2(X) = (\Pi^X_2)_\ast A^2(X)= A^2_{(2)}(X)\ .\] Finally, it is known that the Abel--Jacobi isomorphism is an isometry \cite{BD}, and so \[ ({}^t P)_\ast H^4_{tr}(Y)= H^2_{tr}(X)\ . \] \end{proof} \begin{remark} We do {\em not\/} claim that the correspondences $\pi^X_{2,0}, \pi^X_{2,1}$ are mutually orthogonal and idempotent ! This can presumably be achieved, but we do not need this. \end{remark} \subsection{A multiplicative result} Let $X$ be the Fano variety of lines on a smooth cubic fourfold. As we have seen (theorem \ref{sv}), the Chow ring of $X$ splits into pieces $A^i_{(j)}(X)$. The magnum opus \cite{SV} contains a detailed analysis of the multiplicative behaviour of these pieces.
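For later use, let us spell out the shape of this splitting in the range we will need. Since $A^i_{(j)}(X)=0$ whenever $j<0$ or $j>i$ or $j$ is odd (theorem \ref{sv}), the Chow groups occurring below decompose as \[ A^2(X)= A^2_{(0)}(X)\oplus A^2_{(2)}(X)\ ,\ \ \ A^4(X)= A^4_{(0)}(X)\oplus A^4_{(2)}(X)\oplus A^4_{(4)}(X)\ ,\] and the homologically trivial part of $A^4(X)$ is \[ A^4_{hom}(X)= A^4_{(2)}(X)\oplus A^4_{(4)}(X) \] \cite{SV}.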
Here are the relevant results we will be needing: \begin{theorem}[Shen--Vial \cite{SV}]\label{mult} Let $Y\subset\mathbb{P}^5(\mathbb{C})$ be a smooth cubic fourfold, and let $X:=F(Y)$ be the Fano variety of lines in $Y$. \noindent (\rom1) There exists $\ell\in A^2_{(0)}(X)$ such that intersecting with $\ell$ induces an isomorphism \[ \cdot\ell\colon\ \ \ A^2_{(2)}(X)\ \xrightarrow{\cong}\ A^4_{(2)}(X)\ .\] The inverse isomorphism is given by \[ {1\over 25} L_\ast\colon\ \ \ A^4_{(2)}(X)\ \xrightarrow{\cong}\ A^2_{(2)}(X)\ ,\] where $L\in A^2(X\times X)$ is the class defined in \cite[Equation (107)]{SV}. \noindent (\rom2) Intersection product induces a surjection \[ A^2_{(2)}(X)\otimes A^2_{(2)}(X)\ \twoheadrightarrow\ A^4_{(4)}(X)\ .\] \end{theorem} \begin{proof} Statement (\rom1) is \cite[Theorem 4]{SV}. Statement (\rom2) is \cite[Proposition 20.3]{SV}. \end{proof} The following is a reformulation of theorem \ref{mult}(\rom1). \begin{proposition}\label{withI} Let $Y\subset\mathbb{P}^5(\mathbb{C})$ be a smooth cubic fourfold, and let $X$ be the Fano variety of lines in $Y$. Let $I\in A^2(X\times X)$ be the incidence correspondence, and let $g=-c_1(\mathcal E_2)\in A^1(X)$ be the Pl\"ucker polarization. Then \[ \cdot g^2\colon\ \ \ A^2_{(2)}(X)\ \xrightarrow{}\ A^4_{(2)}(X)\ \] is an isomorphism. The inverse isomorphism is given by \[ -{1\over 6} I_\ast\colon\ \ \ A^4_{(2)}(X)\ \to\ A^2_{(2)}(X)\ .\] \end{proposition} \begin{proof} This is implicit in the arguments of \cite{SV}. For any $a\in A^4_{hom}(X)$, there is equality \[ \ell\cdot L_\ast(a)=-{25\over 6} g^2\cdot I_\ast(a)\ \ \ \hbox{in}\ A^4(X)\ .\] (This follows from \cite[Equations (107) and (108)]{SV}, cf. the proof of \cite[Proposition 19.4]{SV}.) 
But for $a\in A^4_{(2)}(X)$, we know (theorem \ref{mult}(\rom1)) that $\ell\cdot L_\ast(a)=25a$, and so for any $a\in A^4_{(2)}(X)$ we get an equality \[ a= -{1\over 6} g^2\cdot I_\ast(a)\ \ \ \hbox{in}\ A^4(X)\ .\] Applying $I_\ast$ to this equality, we obtain an equality (for any $a\in A^4_{(2)}(X)$) \begin{equation}\label{Iast} I_\ast(a) =-{1\over 6} I_\ast(g^2\cdot I_\ast(a))\ \ \ \hbox{in}\ A^2(X)\ .\end{equation} But we know that \[ A^2_{(2)}(X) = I_\ast A^4_{hom}(X)= I_\ast A^4_{(2)}(X)\ .\] (Here, the first equality is \cite[Proof of Proposition 21.10]{SV}, and the second equality follows from the fact that $I_\ast A^4_{(4)}(X)=0$ \cite[Theorem 20.5]{SV}.) Equation (\ref{Iast}) thus becomes the statement that for any $b\in A^2_{(2)}(X)$ there is equality \[ b= -{1\over 6} I_\ast(g^2\cdot b)\ \ \ \hbox{in}\ A^2(X) \ . \] This proves the proposition. \end{proof} \begin{corollary}\label{decomp} Let $Y\subset\mathbb{P}^5(\mathbb{C})$ be a smooth cubic fourfold, and let $X$ be the Fano variety of lines in $Y$.
There exist correspondences $P\in A^3(X\times Y)$, $\Psi\in A^5(Y\times X)$ such that \[ \begin{split} ({}^t P\circ {}^t \Psi)_\ast&=\ide\colon\ \ \ A^2_{(2)}(X)\ \to\ A^2_{(2)}(X)\ ,\\ ( {} \Psi\circ P)_\ast&=\ide\colon\ \ \ A^4_{(2)}(X)\ \to\ A^4_{(2)}(X)\ .\\ \end{split}\] Moreover, if $\mathcal Y\to B$ and $\mathcal X\to B$ denote the universal families as in notation \ref{not}, there exist relative correspondences $\mathcal P\in A^3(\mathcal X\times_B \mathcal Y)$ and $\Psi\in A^5(\mathcal Y\times_B \mathcal X)$ with the above property on each fibre: i.e., for each $b\in B$ we have \[ \begin{split} \bigl( ({}^t \mathcal P\circ {}^t \Psi)\vert_{X_b\times X_b}\bigr){}_\ast &=(\Pi_2^{X_b})_\ast\colon\ \ \ A^2(X_b)\ \to\ A^2(X_b)\ ,\\ \bigl( (\Psi\circ \mathcal P)\vert_{X_b\times X_b}\bigr){}_\ast &=(\Pi_6^{X_b})_\ast\colon\ \ \ A^4(X_b)\ \to\ A^4(X_b)\ .\\ \end{split}\] \end{corollary} \begin{proof} The correspondence $P\in A^3(X\times Y)$ is defined as the universal family of lines on $Y$. Letting $I\subset X\times X$ denote the incidence correspondence, we have \[ I={}^t P\circ P\ \ \ \hbox{in}\ A^2(X\times X) \] \cite[Lemma 17.2]{SV}. Proposition \ref{withI} states that the composition \[ A^2_{(2)}(X)\ \xrightarrow{\cdot g^2}\ A^4_{(2)}(X) \ \xrightarrow{-{1\over 6}({}^t P\circ P)_\ast}\ A^2_{(2)}(X) \] is the identity, in other words \[ \bigl( -{1\over 6} {}^t P\circ P\circ \Gamma_{g^2}\bigr){}_\ast=\ide\colon\ \ \ A^2_{(2)}(X)\ \to\ A^2_{(2)}(X) \ .\] (Here $\Gamma_{g^2}\in A^6(X\times X)$ can be defined as ${1\over d}\, {}^t \Gamma_\tau\circ\Gamma_\tau$, where $\tau\colon R\to X$ denotes the inclusion of a smooth complete intersection of class $d g^2$). Defining \[ {}^t \Psi:= -{1\over 6} P\circ \Gamma_{g^2}\ \ \ \in\ A^5(X\times Y)\ ,\] we obtain the first equality of corollary \ref{decomp}. For the second equality, it suffices to take the transpose of the first equality, since we know that $\Pi^X_6={}^t \Pi^X_2$ (theorem \ref{sv}). 
As to the ``moreover'' part of the corollary: obviously both $P$ and $\Gamma_{g^2}$ exist relatively, and so the same goes for $\Psi$. \end{proof} \section{A family of triple covers} In this section, we consider cubic fourfolds that are triple covers over cubic threefolds. The main result here is as follows: \begin{theorem}\label{main0} Let $Y\subset\mathbb{P}^5(\mathbb{C})$ be a smooth cubic fourfold defined by an equation \[ f(X_0,\ldots,X_4)+ (X_5)^3=0\ ,\] where $f$ is a homogeneous polynomial of degree $3$. Let $X=F(Y)$ be the Fano variety of lines in $Y$. Let $\sigma\in\aut(X)$ be the (non--symplectic) order $3$ automorphism induced by \[ \begin{split} \mathbb{P}^5(\mathbb{C})\ &\to\ \mathbb{P}^5(\mathbb{C})\ ,\\ [X_0:\ldots:X_5]\ &\mapsto\ [X_0:X_1:X_2:X_3: X_4:\nu X_5]\\ \end{split}\] (where $\nu$ is a primitive $3$rd root of unity). Then \[ (\ide +\sigma+\sigma^2)_\ast \ A^4_{hom}(X)=0\ .\] \end{theorem} \begin{proof} (The family of Fano varieties of theorem \ref{main0} is described in \cite[Example 6.4]{BCS}, from which I learned that the automorphism $\sigma$ is non--symplectic.) We will use the Fourier decomposition $A^\ast_{(\ast)}(X)$ of the Chow ring of $X$ (theorem \ref{sv}). Indeed, to prove theorem \ref{main0}, it suffices to prove the following statement: \begin{equation}\label{state} (\ide +\sigma+\sigma^2)_\ast \ A^i_{(j)}(X)=0\ \ \ \hbox{for}\ (i,j)\in\{(2,2),(4,2),(4,4)\}\ . \end{equation} Indeed, we observe that \[ \begin{split} A^4_{hom}(X)&= A^4_{(2)}(X)\oplus A^4_{(4)}(X)\ ,\\ \end{split}\] \cite{SV}, and so (\ref{state}) implies theorem \ref{main0}. Let us now prove (\ref{state}). In a first step of the proof, we show that the automorphism $\sigma$ respects the Fourier decomposition of the Chow ring: \begin{proposition}\label{compat} Let $X$ and $\sigma$ be as in theorem \ref{main0}. Let $A^\ast_{(\ast)}(X)$ be the Fourier decomposition (theorem \ref{sv}).
Then \[ \sigma_\ast \, A^i_{(j)}(X)\ \subset\ A^i_{(j)}(X)\ \ \ \forall (i,j)\ .\] Equivalently, if $\{\Pi_j^X\}$ is a CK decomposition as in theorem \ref{sv}, then we have \[ \sigma_\ast (\Pi_j^X)_\ast = (\Pi_j^X)_\ast \sigma_\ast (\Pi_j^X)_\ast\colon\ \ \ A^i(X)\ \to\ A^i(X)\ \ \ \forall (i,j)\ .\] \end{proposition} \begin{proof} (NB: The proof actually works for any $(X,\sigma)$, where $X=F(Y)$ is the Fano variety of lines on a smooth cubic fourfold $Y$, and $\sigma\in\aut(X)$ is a polarized automorphism, i.e. induced by an automorphism of $Y$.) We start by recalling that the Shen--Vial cycle $L\in A^2(X\times X)$ (furnishing the Fourier decomposition of \cite{SV}) is defined as \begin{equation}\label{def} L:= {1\over 3}(g_1^2+{3\over 2}g_1 g_2+g_2^2-c_1-c_2)-I\ \ \ \in A^2(X\times X)\ \end{equation} \cite[Equation (107)]{SV}. Here $I\subset X\times X$ is the incidence correspondence, and $g_i, c_i$ are defined as follows: \[ \begin{split} g&:=-c_1(\mathcal E_2)\ \ \ \in\ A^1(X)\ ,\\ c&:=c_2(\mathcal E_2)\ \ \ \in\ A^2(X)\ ,\\ g_i&:= (p_i)^\ast(g) \ \ \ \in\ A^1(X\times X)\ \ \ (i=1,2)\ ,\\ c_i&:= (p_i)^\ast(c) \ \ \ \in\ A^2(X\times X)\ \ \ (i=1,2)\ ,\\ \end{split}\] where $\mathcal E_2$ is the rank $2$ vector bundle coming from the tautological bundle on the Grassmannian, and $p_i\colon X\times X\to X$ denote the two projections. We now observe that \[ (\sigma\times\sigma)^\ast(I) = I\ \ \ \hbox{in}\ A^2(X\times X)\ ,\] because two lines $\ell,\ell^\prime\in X$ meet if and only if the lines $\sigma(\ell),\sigma(\ell^\prime)$ meet ($\sigma$ being induced by an automorphism of $Y$). Next, we note that the automorphism $\sigma\in\aut(X)$ comes from an automorphism $\sigma_\mathbb{P}$ of $\mathbb{P}^5(\mathbb{C})$, hence in particular $\sigma$ is the restriction of an automorphism $\sigma_{Gr}$ of the Grassmannian $Gr=G(1,5)$ of lines in $\mathbb{P}^5$.
As $\sigma_\mathbb{P}$ is linear, $\sigma_{Gr}$ preserves the tautological vector bundle $\mathcal E_2$ on $Gr$ (i.e. $(\sigma_{Gr})^\ast\mathcal E_2\cong\mathcal E_2$), and so \[ (\sigma\times\sigma)^\ast(c_i)=c_i\ ,\ \ \ (\sigma\times\sigma)^\ast(g_i)=g_i\ \ \ \hbox{in}\ A^\ast(X\times X)\ .\] Applying $(\sigma\times\sigma)^\ast$ to equation (\ref{def}), it follows that \[ (\sigma\times\sigma)^\ast (L) = L\ \ \ \hbox{in}\ A^2(X\times X)\ .\] Using Lieberman's lemma \cite[Lemma 3.3]{V3}, this translates into the equality \[ \Gamma_\sigma\circ L \circ \Gamma_{\sigma^{-1}} = L\ \ \ \hbox{in}\ A^2(X\times X)\ ,\] or equivalently \[ \Gamma_\sigma\circ L = L\circ \Gamma_\sigma\ \ \ \hbox{in}\ A^2(X\times X)\ . \] This implies that also \[ \Gamma_\sigma\circ L^r = L^r\circ \Gamma_\sigma\ \ \ \hbox{in}\ A^{2r}(X\times X)\ \ \ \forall r\in\mathbb{N}\ . \] As the Fourier transform $\mathcal F\colon A^\ast(X)\to A^\ast(X)$ of \cite{SV} is defined by means of a polynomial in $L$, it follows that \[ \mathcal F(\sigma^\ast(a))=\sigma^\ast \mathcal F(a)\ \ \ \forall a\in A^\ast(X)\ ,\] proving the proposition. \end{proof} The second step of the proof is to ascertain that $X$ has finite--dimensional motive: \begin{proposition}\label{findim0} Let $Y\subset\mathbb{P}^5(\mathbb{C})$ and $X=F(Y)$ be as in theorem \ref{main0}. Then $Y$ and $X$ have finite--dimensional motive. \end{proposition} \begin{proof} The finite--dimensionality of $Y$ is proven in \cite{fam}. The main result of \cite{fano} now implies that the Fano variety $X=F(Y)$ also has finite--dimensional motive. \end{proof} The third step of the proof is to show the desired statement for $A^2_{(2)}(X)$, i.e. we now prove that \begin{equation}\label{22} (\ide +\sigma+\sigma^2)_\ast \ A^2_{(2)}(X)=0\ .
\end{equation} In order to do so, let us abbreviate \[ \Delta_G:= {1\over 3}\bigl(\Delta_X +\Gamma_\sigma +\Gamma_{\sigma^2}\bigr)\ \ \ \in\ A^4(X\times X)\ .\] Since the action of $\sigma$ is non--symplectic \cite[Example 6.5 and Lemma 6.2]{BCS}, we have that \[ (\Delta_G)_\ast=0\colon\ \ \ H^2(X,\mathcal O_X)\ \to\ H^2(X,\mathcal O_X)\ .\] Using the Lefschetz $(1,1)$--theorem, we see that \[ \Delta_G\circ \Pi_2^X =\gamma \ \ \ \hbox{in}\ H^8(X\times X)\ ,\] where $\gamma$ is some cycle supported on $D\times D\subset X\times X$, for some divisor $D\subset X$. In other words, the correspondence \[ \Gamma:= \Delta_G\circ \Pi_2^X - \gamma \ \ \ \in A^4(X\times X) \] is homologically trivial. But then (since $X$ has finite--dimensional motive) there exists $N\in\mathbb{N}$ such that \[ \Gamma^{\circ N}=0\ \ \ \hbox{in}\ A^4(X\times X)\ .\] Upon developing this expression, one finds an equality \[ \Gamma^{\circ N}= (\Delta_G\circ \Pi_2^X)^{\circ N} + \gamma^\prime=0\ \ \ \hbox{in}\ A^4(X\times X)\ ,\] where $\gamma^\prime$ is supported on $D\times D\subset X\times X$. In particular, $\gamma^\prime$ acts trivially on $A^2_{(2)}(X)\subset A^2_{AJ}(X)$, and so \[ \bigl((\Delta_G\circ \Pi_2^X)^{\circ N}\bigr){}_\ast=0\colon\ \ \ A^2_{(2)}(X)\ \to\ A^2(X)\ .\] Proposition \ref{compat} (combined with the fact that $\Delta_G$ and $\Pi_2^X$ are idempotents) implies that \[ \bigl((\Delta_G\circ \Pi_2^X)^{\circ N}\bigr){}_\ast= (\Delta_G\circ \Pi_2^X){}_\ast\colon\ \ \ A^i(X)\ \to\ A^i(X)\ ,\] and so we find that \[ \bigl(\Delta_G\circ \Pi_2^X\bigr){}_\ast= (\Delta_G)_\ast= 0\colon\ \ \ A^2_{(2)}(X)\ \to\ A^2(X)\ .\] This proves equality (\ref{22}). The argument for $A^4_{(2)}(X)$ is similar: the correspondence $\Gamma$ being homologically trivial, its transpose \[ {}^t \Gamma= \Pi_6^X\circ \Delta_G -\gamma^{\prime\prime}\ \ \ \in A^4(X\times X) \] is also homologically trivial (where $\gamma^{\prime\prime}$ is supported on $D\times D$).
Using the nilpotence theorem and proposition \ref{compat}, this implies (just as above) that \begin{equation}\label{42} ( \Pi_6^X\circ \Delta_G){}_\ast= \bigl(\Delta_G\circ \Pi_6^X\bigr){}_\ast= (\Delta_G)_\ast= 0\colon\ \ \ A^4_{(2)}(X)\ \to\ A^4(X)\ .\end{equation} In the final step of the proof, it remains to consider the action on $A^4_{(4)}(X)$. Ideally, one would like to use Vial's projector $\pi^X_{4,0}$ of \cite[Theorems 1 and 2]{V4} (mentioned in the proof of theorem \ref{pi20}). Unfortunately, this approach runs into problems. (The problem is that it seems impossible to prove that \[ \Delta_G\circ \pi^X_{4,0}=0\ \ \ \hbox{in}\ H^8(X\times X)\ ,\] short of knowing that (1) $H^4(X)\cap F^1 =N^1 H^4(X)$, and (2) $N^1 H^4(X)=\widetilde{N}^1 H^4(X)$, where $N^\ast$ is the usual coniveau filtration and $\widetilde{N}^\ast$ is Vial's niveau filtration. Both (1) and (2) seem difficult.) We therefore proceed somewhat differently: to establish the statement for $A^4_{(4)}(X)$, we use the following proposition: \begin{proposition}\label{44} Let $Y\subset\mathbb{P}^5(\mathbb{C})$ be a smooth cubic fourfold, and let $X=F(Y)$ be the Fano variety of lines in $Y$. Let $G\subset\aut(X)$ be a group of non--symplectic automorphisms. One has \[ \Delta_G\circ \Pi_4^X - R =0\ \ \ \hbox{in}\ H^8(X\times X)\ ,\] where $R\in A^4(X\times X)$ is a correspondence with the property that \[ R_\ast=0\colon\ \ \ A^4(X)\ \to\ A^4(X)\ .\] \end{proposition} \begin{proof} This is \cite[Proposition 3.8]{nonsymp3}. 
\end{proof} Obviously, proposition \ref{44} clinches the proof: using the nilpotence theorem, one sees that there exists $N\in\mathbb{N}$ such that \[ \bigl( \Delta_G\circ \Pi_4^X - R \bigr){}^{\circ N}=0\ \ \ \hbox{in}\ A^4(X\times X)\ .\] Developing, and applying the result to $A^4(X)$, one finds that \[ \bigl( ( \Delta_G\circ \Pi_4^X)^{\circ N}\bigr){}_\ast=0\colon\ \ \ A^4(X)\ \to\ A^4(X)\ .\] Proposition \ref{compat} (combined with the fact that $\Delta_G$ and $\Pi_4^X$ are idempotents) implies that \[ \bigl((\Delta_G\circ \Pi_4^X)^{\circ N}\bigr){}_\ast= (\Delta_G\circ \Pi_4^X){}_\ast\colon\ \ \ A^i(X)\ \to\ A^i(X)\ \ \ \forall i\not=2\ .\] Therefore, we conclude that \[ \bigl(\Delta_G\circ \Pi_4^X\bigr){}_\ast= (\Delta_G)_\ast= 0\colon\ \ \ A^4_{(4)}(X)\ \to\ A^4(X)\ ,\] and we are done. \end{proof} \begin{remark} The family of cubic fourfolds of theorem \ref{main0} also occurs in \cite{ACT} and \cite{LS} (where it is used as a tool in understanding the period map for cubic threefolds), as well as in \cite{vGI} (where it is shown that the Kuga--Satake construction for these cubic fourfolds is algebraic). \end{remark} \section{The two remaining families} In this section, we consider the two remaining families of cubic fourfolds with an order $3$ automorphism that acts non--symplectically on the Fano variety. The main result here is theorem \ref{main}, stating that part of conjecture \ref{conjhk} is true for these two families. In addition, in theorem \ref{main2} we exhibit two subfamilies (of the two families of theorem \ref{main}) for which conjecture \ref{conjhk} is completely verified.
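Before stating the main result, we record (for the reader's convenience) the elementary observation through which the non--symplectic hypothesis will enter the argument, just as in the previous section: since $\sigma$ has order $3$ and acts non--symplectically, $\sigma^\ast$ acts on the one--dimensional space $H^{2,0}(X)$ as multiplication by a primitive $3$rd root of unity $\nu$, and hence \[ (\ide+\sigma+\sigma^2)^\ast = (1+\nu+\nu^2)\,\ide = 0\colon\ \ \ H^{2,0}(X)\ \to\ H^{2,0}(X)\ .\] The same vanishing holds for $(\ide+\sigma+\sigma^2)_\ast$, since $\sigma_\ast=(\sigma^{-1})^\ast=(\sigma^2)^\ast$ on cohomology.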
\subsection{Main result} \begin{theorem}\label{main} Let $(Y,\sigma_Y)$ be one of the following: \noindent (a) $Y\subset\mathbb{P}^5(\mathbb{C})$ is a smooth cubic fourfold defined by an equation \[ f(X_0,X_1,X_2)+ g(X_3,X_4)+ (X_5)^3 + X_5\bigl( X_3\ell_1(X_0,X_1,X_2)+X_4\ell_2(X_0,X_1,X_2)\bigr)=0\ ,\] where $f,g$ are homogeneous polynomials of degree $3$ and $\ell_1,\ell_2$ are linear forms. $\sigma_Y\in\aut(Y)$ is the order $3$ automorphism induced by \[ \begin{split} \mathbb{P}^5(\mathbb{C})\ &\to\ \mathbb{P}^5(\mathbb{C})\ ,\\ [X_0:\ldots:X_5]\ &\mapsto\ [X_0:X_1:X_2:\nu X_3: \nu X_4:\nu^2 X_5]\\ \end{split}\] (where $\nu$ is a primitive $3$rd root of unity). \noindent (b) $Y\subset\mathbb{P}^5(\mathbb{C})$ is a smooth cubic fourfold defined by an equation \[ \begin{split} X_2 f(X_0,X_1)+ X_3 g(X_0,X_1)+ (X_4)^2\ell_1(X_0,X_1) + X_4X_5\ell_2(X_0,X_1)+&\\ (X_5)^2\ell_3(X_0,X_1)+ X_4 h(X_2,X_3) +X_5 k(X_2,X_3)&=0\ ,\\ \end{split}\] where $f,g,h,k$ are homogeneous polynomials of degree $2$ and $\ell_1,\ell_2,\ell_3$ are linear forms. $\sigma_Y\in\aut(Y)$ is the order $3$ automorphism induced by \[ \begin{split} \mathbb{P}^5(\mathbb{C})\ &\to\ \mathbb{P}^5(\mathbb{C})\ ,\\ [X_0:\ldots:X_5]\ &\mapsto\ [X_0:X_1:\nu X_2:\nu X_3: \nu^2 X_4:\nu^2 X_5]\\ \end{split}\] (where $\nu$ is a primitive $3$rd root of unity). Let $X=F(Y)$ be the Fano variety of lines in $Y$, and $\sigma\in\aut(X)$ the non--symplectic automorphism induced by $\sigma_Y$. Then \[ \begin{split} (\ide +\sigma+\sigma^2)_\ast \ A^2_{(2)}(X)&=0\ ,\\ (\ide +\sigma+\sigma^2)_\ast \ A^4_{(2)}(X)&=0\ .\\ \end{split} \] \end{theorem} \begin{proof} (NB: The two families of Fano varieties of theorem \ref{main} are described in \cite[Examples 6.6 and 6.7]{BCS}, where it is established that the automorphism $\sigma$ is non--symplectic in both cases.)
In a first step of the proof, we show that the automorphism $\sigma$ respects the Fourier decomposition of the Chow ring: \begin{proposition}\label{compatagain} Let $X$ and $\sigma$ be as in theorem \ref{main}. Let $A^\ast_{(\ast)}(X)$ be the Fourier decomposition (theorem \ref{sv}). Then \[ \sigma_\ast \, A^i_{(j)}(X)\ \subset\ A^i_{(j)}(X)\ \ \ \forall (i,j)\ .\] Equivalently, if $\{\Pi_j^X\}$ is a CK decomposition as in theorem \ref{sv}, then we have \[ \sigma_\ast (\Pi_j^X)_\ast = (\Pi_j^X)_\ast \sigma_\ast (\Pi_j^X)_\ast\colon\ \ \ A^i(X)\ \to\ A^i(X)\ \ \ \forall (i,j)\ .\] \end{proposition} \begin{proof} The proof is exactly the same as that of proposition \ref{compat}. \end{proof} In the next step, we look at the action of $\sigma$ on $H^2(X)$ (where $(X,\sigma)$ is the Fano variety with the order $3$ automorphism as in theorem \ref{main}(a) or (b)). Let us use the short--hand \[ \Delta_G:={1\over 3}\bigl(\Delta_X +\Gamma_\sigma+\Gamma_{\sigma^2}\bigr)\ \ \ \in A^4(X\times X)\ .\] The non--symplectic condition translates into the fact that \[ (\Delta_G)_\ast=0\colon\ \ \ H^{2,0}(X)\ \to\ H^{2,0}(X)\ .\] Since the kernel of $(\Delta_G)_\ast$ is a sub--Hodge structure, and $H^2_{tr}(X)\subset H^2(X)$ is the smallest sub--Hodge structure containing $H^{2,0}(X)$, it follows that also \[ (\Delta_G)_\ast=0\colon\ \ \ H^{2}_{tr}(X)\ \to\ H^{2}_{tr}(X)\ .\] This means that \[ \Delta_G\circ \pi_{2,0}^X=0\ \ \ \hbox{in}\ H^8(X\times X) \] (where $\pi_{2,0}^X$ is as in theorem \ref{pi20}), and so \[ \Delta_G\circ \pi_2^X = \gamma \ \ \ \hbox{in}\ H^8(X\times X) \ ,\] where $\gamma$ is a cycle supported on $C\times D\subset X\times X$, where $C$ is a curve and $D$ is a divisor in $X$. In particular, since the projector $\Pi_2^X$ of theorem \ref{sv} is equal to $\pi_2^X$ in cohomology, we get a homological equivalence \begin{equation}\label{onex} \Delta_G\circ \Pi_2^X = \gamma \ \ \ \hbox{in}\ H^8(X\times X) \ ,\end{equation} where $\gamma$ is supported on $C\times D$.
Now we can start to play the ``let's spread things out'' game of \cite{V0}, \cite{V1}. We define \[ \pi\colon\ \ \mathcal Y\ \to\ B \] to be the family of all smooth cubic fourfolds given by an equation as in theorem \ref{main}. (That is, we let $G\subset\aut(\mathbb{P}^5)$ be the order $3$ group generated by the automorphism of $\mathbb{P}^5$ as in (a) or (b), and we define \[ B\ \subset\ \Bigl(\mathbb{P} H^0\bigl(\mathbb{P}^5,\mathcal O_{\mathbb{P}^5}(3)\bigr)\Bigr)^G \] as the Zariski open subset parametrizing smooth $G$--invariant cubics. Note that $\mathcal Y\to B$ is thus either the family of (a), or the family of (b); we treat the two families simultaneously.) We will write $Y_b:=\pi^{-1}(b)$ for the fibre over $b\in B$. We also define the universal family of associated Fano varieties of lines: \[ \mathcal X := \bigl\{ (\ell,b)\ \vert\ \ell\subset Y_b\bigr\}\ \ \ \subset\ Gr\times B \ ; \] the fibre of $\mathcal X$ over $b\in B$ is the Fano variety $X_b$ of lines in $Y_b$. Since $\mathcal X$ is invariant under $\sigma_{Gr}\times\ide_B$, there is an automorphism \[ \sigma\colon\ \ \ \mathcal X\ \to\ \mathcal X\ .\] We will write $\sigma_b\colon X_b\to X_b$ for the restriction to a fibre. Let us define a relative correspondence \[ \Delta_G^\mathcal X:={1\over 3}\, \bigl( \Delta_\mathcal X +\Gamma_\sigma + \Gamma_{\sigma^2}\bigr)\ \ \ \in A^4(\mathcal X\times_B \mathcal X)\ ,\] and consider the composition \[ \Delta_G^\mathcal X\circ \Pi_2^\mathcal X\ \ \ \in A^4(\mathcal X\times_B \mathcal X)\ .\] (For composition of relative correspondences in the setting of quasi--projective families that are smooth over a base $B$, cf. \cite{CH}, \cite{GHM}, \cite{NS}, \cite{DM}, \cite[8.1.2]{MNP}.)
In view of (\ref{onex}) above, this correspondence has the following property: for every $b\in B$, there exist a curve $C_b$ and a divisor $D_b\subset X_b$, and a cycle $\gamma_b$ supported on $C_b\times D_b$, such that \[ \bigl(\Delta_G^\mathcal X\circ \Pi_2^\mathcal X\bigr)\vert_{X_b\times X_b}=\gamma_b \ \ \ \in H^8(X_b\times X_b)\ .\] Applying Voisin's Hilbert schemes argument \cite[Proposition 3.7]{V0}, we can find a ``completely decomposed'' relative correspondence $\gamma\in A^4(\mathcal X\times_B \mathcal X)$ such that \begin{equation}\label{vanish} \bigl( \Delta_G^\mathcal X\circ \Pi_2^\mathcal X -\gamma\bigr)\vert_{X_b\times X_b} =0\ \ \ \hbox{in}\ H^8( X_b\times X_b) \ \ \ \forall b\in B\ .\end{equation} (By ``completely decomposed'' we mean the following: there exist subvarieties $\mathcal C, \mathcal D\subset \mathcal X$ of codimension $3$ resp. $1$, such that the cycle $\gamma$ is supported on $\mathcal C\times_B \mathcal D\subset \mathcal X\times_B \mathcal X$.) Now, we would like to apply proposition \ref{voisin2} (which acts as a magic wand, transforming a homological equivalence into a rational equivalence). For this reason, it is more convenient to move things to the family $\mathcal Y$. That is, we define a relative correspondence \[ \Gamma:= {}^t \Psi\circ\Bigl( \Delta_G^\mathcal X\circ \Pi_2^\mathcal X -\gamma\Bigr)\circ {}^t \mathcal P\ \ \ \in A^4(\mathcal Y\times_B \mathcal Y) \ ,\] where $\Psi$ and $\mathcal P$ are as in corollary \ref{decomp}. (NB: in the present set--up, the base $B$ is different from corollary \ref{decomp}; our present $B$ is actually $B^\sigma$. Let $B_{univ}$ be the base parametrizing all smooth cubics (as in corollary \ref{decomp}), and let $B\subset B_{univ}$ be the base parametrizing $\sigma$--invariant cubics (as in the present proof).
Then the present $\Psi$ and $\mathcal P$ are obtained from $\Psi_{univ}, \mathcal P_{univ}$ (of corollary \ref{decomp}) by pullback \[ \Psi:= \tau^\ast(\Psi_{univ})\ ,\ \ \ {}^t\mathcal P:=\tau^\ast({}^t \mathcal P_{univ})\ ,\] where $\tau\colon \mathcal Y\times_B \mathcal X\to \mathcal Y_{univ}\times_{B_{univ}} \mathcal X_{univ}$ is the base change.) Property (\ref{vanish}) implies there is a fibrewise homological vanishing \[ \Gamma\vert_{Y_b\times Y_b}=0\ \ \ \hbox{in}\ H^8(Y_b\times Y_b)\ \ \ \forall b\in B\ .\] Applying proposition \ref{voisin2} (which is possible thanks to lemma \ref{OK} below), we find there exists $\delta\in A^4(\mathbb{P}^5\times \mathbb{P}^5)$ such that \begin{equation}\label{this} \Gamma\vert_{Y_b\times Y_b}=\delta\vert_{Y_b\times Y_b}\ \ \ \hbox{in}\ A^4(Y_b\times Y_b)\ \ \ \forall b\in B\ .\end{equation} Since $A^\ast_{hom}(\mathbb{P}^5)=0$, the restriction $\delta\vert_{Y_b\times Y_b}$ acts trivially on $A^\ast_{hom}(Y_b)$, and thus \[ \bigl( \Gamma\vert_{Y_b\times Y_b}\bigr){}_\ast=0\colon\ \ \ A^\ast_{hom}(Y_b)\ \to\ A^\ast_{hom}(Y_b)\ \ \ \forall b\in B\ .\] Composing on both sides, it follows that also \[ \bigl( ( {}^t \mathcal P\circ \Gamma\circ {}^t \Psi)\vert_{X_b\times X_b}\bigr){}_\ast=0\colon\ \ \ A^\ast_{hom}(X_b)\ \to\ A^\ast_{hom}(X_b)\ \ \ \forall b\in B\ \] (where $\mathcal P$ and $\Psi$ are as in corollary \ref{decomp}). 
By definition of $\Gamma$, this means that \[ \bigl( ( {}^t \mathcal P\circ {}^t \Psi\circ (\Delta_G^\mathcal X\circ \Pi_2^\mathcal X-\gamma)\circ {}^t \mathcal P \circ {}^t \Psi)\vert_{X_b\times X_b}\bigr){}_\ast=0\colon\ \ \ A^\ast_{hom}(X_b)\ \to\ A^\ast_{hom}(X_b)\ \ \ \forall b\in B\ .\] In the light of corollary \ref{decomp}, this boils down to \[ \bigl( \Pi_2^{X_b}\circ (\Delta_G^\mathcal X\circ \Pi_2^\mathcal X-\gamma)\vert_{X_b\times X_b}\circ \Pi_2^{X_b}\bigr){}_\ast =0\colon\ \ \ A^2_{hom}(X_b)\ \to\ A^2_{hom}(X_b)\ \ \ \forall b\in B\ .\] But $(\Pi_2^{X_b})_\ast A^2_{hom}(X_b)=A^2_{(2)}(X_b)$ and so \[ \bigl( \Pi_2^{X_b}\circ (\Delta_G^\mathcal X\circ \Pi_2^\mathcal X-\gamma)\vert_{X_b\times X_b}\bigr){}_\ast =0\colon\ \ \ A^2_{(2)}(X_b)\ \to\ A^2(X_b)\ \ \ \forall b\in B\ .\] We can rewrite this as \[ \bigl( \Pi_2^{X_b}\circ \Delta_G^{X_b}\circ \Pi_2^{X_b} - \Pi_2^{X_b}\circ (\gamma\vert_{X_b\times X_b})\bigr){}_\ast =0\colon\ \ \ A^2_{(2)}(X_b)\ \to\ A^2(X_b)\ \ \ \forall b\in B\ .\] But for general $b\in B$, the restriction $\gamma\vert_{X_b\times X_b}$ will be supported on (curve)$\times$(divisor), and as such will act trivially on $A^2(X_b)$. It follows that \[ \bigl( \Pi_2^{X_b}\circ \Delta_G^{X_b} \bigr){}_\ast=0\colon\ \ \ A^2_{(2)}(X_b)\ \to\ A^2(X_b)\ \ \ \hbox{for\ general\ } b\in B\ .\] In view of proposition \ref{compat}, this implies that \[ ( \Delta_G^{X_b} ){}_\ast=0\colon\ \ \ A^2_{(2)}(X_b)\ \to\ A^2(X_b)\ \ \ \hbox{for\ general\ } b\in B\ ,\] i.e. we have proven the first statement of theorem \ref{main} for general $b\in B$. To extend this to {\em all\/} $b\in B$, we observe that the above construction can be made locally around a given $b_0\in B$: the point is that in applying \cite[Proposition 3.7]{V0}, one can obtain a cycle $\gamma$ supported on $\mathcal C\times_B \mathcal D$ such that $\mathcal C$ and $\mathcal D$ are in general position with respect to $X_{b_0}$.
The above argument then proves the statement for $X_{b_0}$. The second statement of theorem \ref{main} (i.e., the action on $A^4_{(2)}(X)$) can be recovered from the above argument. Indeed, taking the transpose on both sides of equation (\ref{this}), one obtains that for all $b\in B$, there is equality \[ {}^t ( \Gamma\vert_{Y_b\times Y_b})={}^t (\delta\vert_{Y_b\times Y_b})\ \ \ \hbox{in}\ A^4(Y_b\times Y_b)\ .\] As the right--hand side acts trivially on $A^\ast_{hom}(Y_b)$, this implies that \[ \bigl( {}^t ( \Gamma\vert_{Y_b\times Y_b})\bigr){}_\ast =0\colon\ \ \ A^\ast_{hom}(Y_b)\ \to\ A^\ast_{hom}(Y_b)\ \ \ \forall b\in B\ .\] By definition of $\Gamma$ (combined with the fact that $\Pi_6^{X_b}={}^t \Pi_2^{X_b}$), this means that \[ \bigl( ( \Psi\circ \mathcal P\circ (\Pi_6^\mathcal X\circ\Delta_G^\mathcal X -{}^t \gamma)\circ \Psi \circ \mathcal P)\vert_{X_b\times X_b}\bigr){}_\ast=0\colon\ \ \ A^\ast_{hom}(X_b)\ \to\ A^\ast_{hom}(X_b)\ \ \ \forall b\in B\ .\] Using corollary \ref{decomp}, this simplifies to \[ \bigl( \Pi_6^{X_b}\circ (\Pi_6^\mathcal X\circ\Delta_G^\mathcal X -{}^t \gamma)\vert_{X_b\times X_b}\circ \Pi_6^{X_b} \bigr){}_\ast=0\colon\ \ \ A^4_{hom}(X_b)\ \to\ A^4_{hom}(X_b)\ \ \ \forall b\in B\ ,\] i.e. \[ \bigl( \Pi_6^{X_b}\circ (\Delta_G^\mathcal X -{}^t \gamma)\vert_{X_b\times X_b} \bigr){}_\ast=0\colon\ \ \ A^4_{(2)}(X_b)\ \to\ A^4(X_b)\ \ \ \forall b\in B\ .\] This can be rewritten as \[ \bigl( \Pi_6^{X_b}\circ \Delta_G^{X_b} - \Pi_6^{X_b}\circ ({}^t (\gamma\vert_{X_b\times X_b}))\bigr){}_\ast =0\colon\ \ \ A^4_{(2)}(X_b)\ \to\ A^4(X_b)\ \ \ \forall b\in B\ .\] For general $b\in B$, the transpose of the restriction ${}^t (\gamma\vert_{X_b\times X_b})$ is supported on (divisor)$\times$(curve), and hence will act trivially on $A^4(X_b)$.
It follows that \[ \bigl( \Pi_6^{X_b}\circ \Delta_G^{X_b}\bigr){}_\ast =0\colon\ \ \ A^4_{(2)}(X_b)\ \to\ A^4(X_b)\ \ \ \hbox{for\ general\ } b\in B\ .\] As $\sigma$ preserves the Fourier decomposition (proposition \ref{compat}), this implies that \[ ( \Delta_G^{X_b}){}_\ast =0\colon\ \ \ A^4_{(2)}(X_b)\ \to\ A^4(X_b)\ \ \ \hbox{for\ general\ } b\in B\ .\] As above, this can be extended to {\em all\/} $b\in B$. Since we have used proposition \ref{voisin2}, we need to check that the hypotheses of this proposition are satisfied. This is taken care of by the following lemma: \begin{lemma}\label{OK} Let $\mathcal Y\to B$ be one of the families of cubics as in (a) or (b) of theorem \ref{main}. Then $\mathcal Y\to B$ verifies the hypotheses of proposition \ref{voisin2} (with $M=\mathbb{P}^5(\mathbb{C})$, $L=\mathcal O_{\mathbb{P}^5}(3)$ and $G=<\sigma_\mathbb{P}>$, where $\sigma_\mathbb{P}\in\aut(\mathbb{P}^5)$ is as in theorem \ref{main}(a) resp. (b)). \end{lemma} \begin{proof} First, let us check that hypothesis (\rom1) of proposition \ref{voisin2} is satisfied in case (a). To this end, we consider the tower of morphisms \[ p\colon\ \ \mathbb{P}^5\ \xrightarrow{p_1}\ P^\prime:= \mathbb{P}^5/G\ \xrightarrow{p_2}\ P:=\mathbb{P}(1^3,3^3)\ ,\] where $\mathbb{P}(1^3,3^3)=\mathbb{P}^5/(\mathbb{Z}/3\mathbb{Z}\times \mathbb{Z}/3\mathbb{Z}\times \mathbb{Z}/3\mathbb{Z})$ denotes a weighted projective space. Let us write $\sigma_3, \sigma_4, \sigma_5$ for the order $3$ automorphisms of $\mathbb{P}^5$ \[ \begin{split} &\sigma_3 [x_0:\ldots:x_5] = [x_0:x_1:x_2:\nu x_3: x_4:x_5]\ ,\\ &\sigma_4 [x_0:\ldots:x_5] = [x_0:x_1:x_2:x_3:\nu x_4:x_5]\ ,\\ &\sigma_5 [x_0:\ldots:x_5] = [x_0:x_1:x_2:x_3:x_4:\nu x_5]\ ,\\ \end{split} \] where $\nu$ is again a primitive $3$rd root of unity. (We note that $\sigma_\mathbb{P}=\sigma_3\circ\sigma_4\circ\sigma_5$, and the weighted projective space $P$ is $\mathbb{P}^5/ <\sigma_3,\sigma_4,\sigma_5>$.)
The sections in $\bigl(\mathbb{P} H^0\bigl(\mathbb{P}^5,\mathcal O_{\mathbb{P}^5}(3)\bigr)\bigr)^G$ are in bijection with $ \mathbb{P} H^0\bigl(P^\prime,\mathcal O_{P^\prime}(3)\bigr) $, and there is an inclusion \[ \Bigl(\mathbb{P} H^0\bigl(\mathbb{P}^5,\mathcal O_{\mathbb{P}^5}(3)\bigr)\Bigr)^G \ \supset\ p^\ast \mathbb{P} H^0\bigl( P,\mathcal O_P(3)\bigr)\ .\] Let us now assume $x,y\in\mathbb{P}^5$ are two points such that \[ (x,y)\not\in \Delta_{\mathbb{P}^5}\cup \bigcup_{r_3,r_4,r_5\in\{0,1,2\}} \Gamma_{(\sigma_3)^{r_3}\circ (\sigma_4)^{r_4}\circ (\sigma_5)^{r_5}}\ .\] Then \[ p(x)\not=p(y)\ \ \ \hbox{in}\ P\ ,\] and so (using lemma \ref{delorme} below) there exists $\sigma\in\mathbb{P} H^0\bigl(P,\mathcal O_{P}(3)\bigr)$ containing $p(x)$ but not $p(y)$. The pullback $p^\ast(\sigma)$ contains $x$ but not $y$, and so these points $(x,y)$ impose $2$ independent conditions on the parameter space $ \bigl(\mathbb{P} H^0\bigl(\mathbb{P}^5,\mathcal O_{\mathbb{P}^5}(3)\bigr)\bigr)^G$. It only remains to check that a generic element \[ (x,y)\ \ \ \in\Gamma_{(\sigma_3)^{r_3}\circ (\sigma_4)^{r_4}\circ (\sigma_5)^{r_5}} \] also imposes $2$ independent conditions, for any $(r_3,r_4,r_5)\not\in\{(0,0,0),(1,1,1),(2,2,2)\}$. Let us assume $(x,y)$ is generic on $\Gamma_{\sigma_3}$ (the argument for the other cases is similar). Let us write $x=[a_0:a_1:\ldots:a_5]$. By genericity, we may assume all $a_i$ are $\not=0$ (intersections of $\Gamma_{\sigma_3}$ with a coordinate hyperplane have codimension $>n+1$ and so need not be considered for hypothesis (\rom1) of proposition \ref{voisin2}). We can thus write \[ x=[1:a_1:a_2:a_3:a_4:a_5]\ ,\ \ \ y= [1:a_1:a_2:\nu a_3:a_4:a_5] \ ,\ \ \ a_i\not=0 \ .\] The cubic \[ a_4 (x_0)^2x_3 - a_3(x_0)^2x_4=0 \] is $G$--invariant and contains $x$ while avoiding $y$, and so the element $(x,y)$ again imposes $2$ independent conditions. This proves hypothesis (\rom1) is satisfied for case (a).
The proof of hypothesis (\rom1) for case (b) is similar, using the weighted projective space $\mathbb{P}(1^2,3^4)$. To establish hypothesis (\rom2) of proposition \ref{voisin2}, we proceed by contradiction. Let us suppose hypothesis (\rom2) is not satisfied, i.e. there exists a smooth cubic $Y=Y_b$ as in theorem \ref{main}(a) or (b), and a non--trivial relation \[ a\,\Delta_{Y} +b\, \Gamma_{\sigma} + c\, \Gamma_{\sigma^2}+ \delta =0\ \ \ \hbox{in}\ H^8(Y\times Y)\ ,\] where $a,b,c\in \mathbb{Q}$ and $\delta\in\ima\bigl( A^4(\mathbb{P}^5\times\mathbb{P}^5)\to A^4(Y\times Y)\bigr)$. Looking at the action on a generator $\omega$ of $H^{3,1}(Y)$ (and using that $\delta_\ast(\omega)=0$ because $H^{p,q}(\mathbb{P}^5)=0$ for $p\not=q$), we find a relation \[ a+ \nu b + \nu^2 c=0\ .\] Next, looking at the action on $(H^4(Y)_{prim})^\sigma$ (which is non--zero, for there is an equivariant isomorphism $H^4(Y)\cong H^2(X)$, and proposition \ref{chiara} below ensures that $\dim H^2(X)^\sigma>1$), we find a relation \[ a+b+c=0\ .\] Combining these two relations (subtracting one from the other gives $(\nu-1)b+(\nu^2-1)c=0$, whence $b=\nu^2 c$ because $1+\nu+\nu^2=0$), we find that the only solutions are $a=\nu c$, $b=\nu^2 c$. It follows that there are no non--trivial rational solutions, and so hypothesis (\rom2) is satisfied. \begin{lemma}\label{delorme} Let $P$ be either $\mathbb{P}(1^3,3^3)$ or $\mathbb{P}(1^2,3^4)$. Let $r,s\in P$ and $r\not=s$. Then there exists $\sigma\in\mathbb{P} H^0\bigl(P,\mathcal O_P(3)\bigr)$ containing $r$ but avoiding $s$. \end{lemma} \begin{proof} It follows from Delorme's work \cite[Proposition 2.3(\rom3)]{Del} that the locally free sheaf $\mathcal O_P(3)$ is very ample. This means there exists $\sigma$ as required. \end{proof} \begin{proposition}[Boissi\`ere--Camere--Sarti \cite{BCS}]\label{chiara} Let $(X,\sigma)$ be a Fano variety $X=F(Y)$ with a polarized order $3$ automorphism $\sigma\in\aut(X)$ as in theorem \ref{main}(a) or (b). Let $T\subset NS(X)$ denote the $\sigma$--invariant part of the N\'eron--Severi group of $X$.
The dimension of $T$ equals $7$ in case (a), and equals $9$ in case (b). \end{proposition} \begin{proof} The lattice $T$ is completely determined in \cite[Examples 6.6 and 6.7]{BCS}. \end{proof} \end{proof} \end{proof} \begin{remark}\label{problem} To prove the full conjecture \ref{conjhk} for the two families of theorem \ref{main}, it remains to prove that \[ (\Delta_G)_\ast A^4_{(4)}(X)\stackrel{??}{=}0\ .\] The above argument is not strong enough to prove this, for the following reason: at one point in the argument, we moved from the family $\mathcal X$ (of Fano varieties) to the family $\mathcal Y$ (of cubics), using the correspondences $\Psi$ and $\mathcal P$ of corollary \ref{decomp}. This worked because (thanks to corollary \ref{decomp}) this moving back and forth did {\em not\/} affect $A^4_{(2)}(X)$ and $A^2_{(2)}(X)$. However, this cannot work for $A^4_{(4)}(X)$. (One might expect, in light of \cite{fano}, that $A^4_{(4)}(X)$ is related to $A^6(Y^{[2]})$. If so, one could move from $A^\ast(\mathcal X\times_B \mathcal X)$ to $A^\ast( \mathcal Y^{4/B})$. Then, one would need a version of proposition \ref{voisin2} for the $4$th fibre product $\mathcal Y^{4/B}$; this seems difficult!) \end{remark} \subsection{Two subfamilies} In this subsection, we restrict to two special subfamilies (of the families of theorem \ref{main}), in order for Kimura's notion of {\em finite--dimensional motive\/} \cite{Kim} to apply. \begin{theorem}\label{main2} Let $(Y,\sigma_Y)$ be one of the following: \noindent (a) $Y\subset\mathbb{P}^5(\mathbb{C})$ is a smooth cubic fourfold defined by an equation \[ f(X_0,X_1,X_2)+ g(X_3,X_4)+ (X_5)^3 =0\ ,\] and $\sigma_Y\in\aut(Y)$ is as in theorem \ref{main}(a). \noindent (b) $Y\subset\mathbb{P}^5(\mathbb{C})$ is a smooth cubic fourfold defined by an equation \[ f(X_0,X_1)+g(X_2,X_3)+h(X_4,X_5)=0\ ,\] and $\sigma_Y\in\aut(Y)$ is as in theorem \ref{main}(b).
Let $X=F(Y)$ be the Fano variety of lines in $Y$, and let $\sigma\in\aut(X)$ be the non--symplectic automorphism induced by $\sigma_Y$. Then \[ (\ide +\sigma+\sigma^2)_\ast \ A^4_{hom}(X)=0\ .\] \end{theorem} \begin{proof} Since we have \[ A^4_{hom}(X)=A^4_{(4)}(X)\oplus A^4_{(2)}(X)\ ,\] and we already know (theorem \ref{main}) that \[ (\ide +\sigma+\sigma^2)_\ast \ A^4_{(2)}(X)=0\ ,\] it suffices to prove that also \begin{equation}\label{sub4} (\ide +\sigma+\sigma^2)_\ast \ A^4_{(4)}(X)=0\ .\end{equation} We will exploit the following fact: \begin{lemma}\label{findim} Let $X$ be as in theorem \ref{main2}. Then $X$ has finite--dimensional motive. \end{lemma} \begin{proof} One first proves the cubics $Y$ as in theorem \ref{main2} have finite--dimensional motive. This follows from Shioda's inductive structure \cite{Sh}, \cite[Remark 1.10]{KS}, combined with the fact that cubic curves, cubic surfaces and cubic threefolds have finite--dimensional motive. The result for $X=F(Y)$ now follows from \cite{fano}. \end{proof} Now, we go on to prove (\ref{sub4}). Proposition \ref{44} applies, and so we have \[ \Delta_G\circ \Pi_4^X - R =0\ \ \ \hbox{in}\ H^8(X\times X)\ ,\] where $R\in A^4(X\times X)$ is a correspondence acting trivially on $A^4(X)$. 
We conclude just as at the end of the proof of theorem \ref{main0}: using the nilpotence theorem (theorem \ref{nilp}), one finds $N\in\mathbb{N}$ such that \[ \bigl( \Delta_G\circ \Pi_4^X - R \bigr){}^{\circ N}=0\ \ \ \hbox{in}\ A^4(X\times X)\ .\] Expanding, and applying the result to $A^4(X)$, it follows that \[ \bigl( ( \Delta_G\circ \Pi_4^X)^{\circ N}\bigr){}_\ast=0\colon\ \ \ A^4(X)\ \to\ A^4(X)\ .\] Proposition \ref{compat} (combined with the fact that $\Delta_G$ and $\Pi_4^X$ are idempotents) implies that \[ \bigl((\Delta_G\circ \Pi_4^X)^{\circ N}\bigr){}_\ast= (\Delta_G\circ \Pi_4^X){}_\ast\colon\ \ \ A^i(X)\ \to\ A^i(X)\ \ \ \forall i\ .\] Therefore, we may conclude that \[ \bigl(\Delta_G\circ \Pi_4^X\bigr){}_\ast= (\Delta_G)_\ast= 0\colon\ \ \ A^4_{(4)}(X)\ \to\ A^4(X)\ .\] \end{proof} \section{Some corollaries} \subsection{A succinct restatement} \begin{corollary}\label{done} Let $X$ be the Fano variety of lines of a smooth cubic fourfold. Let $\sigma\in\aut(X)$ be a polarized, prime order automorphism that is non--symplectic. Then \[ (\Delta_G)_\ast A^i_{(2)}(X)=0\ \ \ \hbox{for}\ i\in\{2,4\}\ .\] \end{corollary} \begin{proof} The point is that smooth cubic fourfolds with automorphisms of prime order have been classified. If the induced automorphism on the Fano variety $X$ is non--symplectic, the only possibilities are that $\sigma$ is of order $2$ or $3$ (\cite{GAL}, combined with the corrigendum in \cite[Remark 6.3]{BCS} to rule out the case of order $5$). For $\sigma$ of order $2$, there are two families of cubic fourfolds, and these have been treated in \cite{inv}, \cite{inv2}. For $\sigma$ of order $3$, there are $4$ families \cite[Examples 6.4, 6.5, 6.6 and 6.7]{BCS}; the first is treated in \cite{nonsymp3}, the others in theorems \ref{main0} and \ref{main}. \end{proof} \subsection{Bloch conjecture} \begin{corollary}\label{triv} Let $X$ and $\sigma$ be as in theorem \ref{main0} or theorem \ref{main2}. Let $Z:=X/<\sigma>$ be the quotient.
Then \[ A^4(Z)\cong \mathbb{Q}\ .\] \end{corollary} \begin{proof} This readily follows from the natural isomorphism $A^4(Z)\cong A^4(X)^\sigma$, cf. \cite[Proof of Corollary 4.1]{nonsymp3}. \end{proof} \subsection{Generalized Hodge conjecture} \begin{corollary}\label{ghc} Let $X$ and $\sigma$ be as in theorem \ref{main0} or theorem \ref{main2}. Then the invariant part of cohomology \[ H^4(X)^\sigma \ \subset \ H^4(X) \] is supported on a divisor. \end{corollary} \begin{proof} This is an application of the Bloch--Srinivas argument \cite{BS}, cf. \cite[Proof of Corollary 4.2]{nonsymp3}. \end{proof} \subsection{Intersection of cycles} \begin{corollary}\label{ring} Let $X$ and $\sigma$ be as in theorem \ref{main0} or theorem \ref{main}. Let $a\in A^3(X)$ be a $1$--cycle of the form \[ a=\displaystyle\sum_{i=1}^r b_i\cdot D_i\ \ \ \in A^3(X)\ ,\] where $b_i\in A^2(X)^\sigma$ and $D_i\in A^1(X)$. Then $a$ is rationally trivial if and only if $a$ is homologically trivial. \end{corollary} \begin{proof} Since $\sigma$ respects the bigrading $A^\ast_{(\ast)}(X)$ (proposition \ref{compat}), theorem \ref{main0} or theorem \ref{main} implies that \[ A^2(X)^\sigma\ \subset\ A^2_{(0)}(X)\ .\] Since $A^2_{(0)}(X)\cdot A^1(X)\subset A^3_{(0)}(X)$ (this is \cite[Proposition A.7]{FLV}, which improves upon \cite[Proposition 22.7]{SV}), it follows that \[ a\ \ \in A^3_{(0)}(X)\ .\] But $A^3_{(0)}(X)$ injects into cohomology under the cycle class map \cite{SV}. (A quick way to prove this injectivity is as follows: let $\mathcal F$ be the Fourier transform of \cite{SV}. We have that $a\in A^3(X)$ is in $A^3_{(0)}(X)$ if and only if $\mathcal F(a)\in A^1_{(0)}(X)=A^1(X)$ \cite[Theorem 2]{SV}. Suppose $a\in A^3_{(0)}(X)$ is homologically trivial. Then also $\mathcal F(a)\in A^1(X)$ is homologically trivial, hence $\mathcal F(a)=0$ in $A^1(X)$.
But then, using \cite[Theorem 2.4]{SV}, we find that \[ {25\over 2} a = \mathcal F\circ \mathcal F(a)=0\ \ \ \hbox{in}\ A^3(X)\ .)\] \end{proof} \begin{corollary}\label{cheat} Let $X$ and $\sigma$ be as in theorem \ref{main0} or theorem \ref{main}. Let $\phi\colon X\dashrightarrow X$ be the degree $16$ rational map first defined in \cite{V21}. Let $a\in A^2(X)$ be a $2$--cycle of the form \[ a=\phi^\ast(b)-4b\ \ \ \in A^2(X)\ ,\] where $b\in A^2(X)$ is a sum of $\sigma$--invariant cycles and intersections of divisors. Then $a$ is rationally trivial if and only if $a$ is homologically trivial. \end{corollary} \begin{proof} We know from theorem \ref{main0} or theorem \ref{main} (combined with the fact that $A^1(X)\cdot A^1(X)\subset A^2_{(0)}(X)$ \cite[Lemma 4.4]{SV}) that $b$ is in $A^2_{(0)}(X)$. Let $V^2_\lambda$ denote the eigenspace \[ V^2_\lambda:=\{ c\in A^2(X)\ \vert\ \phi^\ast(c)=\lambda\cdot c\}\ .\] Shen--Vial have proven that there is a decomposition \[ A^2_{(0)}(X)=V^2_{31}\oplus V^2_{-14}\oplus V^2_{4}\ \] \cite[Theorem 21.9]{SV}. The ``troublesome part'' $A^2_{(0)}(X)\cap A^2_{hom}(X)$ is contained in $V^2_4$ \cite[Lemma 21.12]{SV}. This implies that \[ (\phi^\ast-4(\Delta_X)^\ast)A^2_{(0)}(X)=V^2_{31}\oplus V^2_{-14} \] injects into cohomology. \end{proof} \begin{remark} Corollary \ref{cheat} is probably less than optimal: conjecturally, for any $b\in A^2(X)$ as in corollary \ref{cheat}, one should have that $b$ is rationally trivial if and only if $b$ is homologically trivial. The problem, in proving such a statement along the lines of the present note, is that we cannot prove that \[ A^2_{(0)}(X)\cap A^2_{hom}(X)\stackrel{??}{=}0\ .\] \end{remark} \vskip1cm \begin{nonumberingt} Thanks to all participants of the Strasbourg 2014/2015 ``groupe de travail'' based on the monograph \cite{Vo} for a very stimulating atmosphere. Many thanks to Mrs. Yasuyo Ishitani, Director of the Special Program ``O-Uchi De O-Shigoto'', for an excellent working environment. 
\end{nonumberingt} \vskip1cm
\section{Symmetric products}\label{symmprod} \subsection{Introduction: Symmetric powers of a variety}\label{sympower} Let $X$ be a quasi-projective variety over a perfect field $k$. For every non-negative integer $n$, there is a natural action of the symmetric group $\mathfrak{S}_n$ on the product $X^n$ by permuting the coordinates, and it is a classical result that the quotient $S^nX = X^n / \mathfrak{S}_n$ exists as a variety (if $X$ is quasi-projective, which we assumed). We will call this variety the $n$-th \textit{symmetric power} \index{symmetric power!of a variety} of $X$. By convention $S^0X$ will be $\mathrm{Spec}\, k$. The rational points of the variety $S^nX$ correspond to effective zero-cycles of degree $n$ on $X$. Any such zero-cycle $\sum_xn_xx$ determines a partition $$\sum_x\underbrace{(n_x + \ldots + n_x)}_{\deg x\ \text{terms}} = n$$ of $n$, which we will denote by $\pi$. The subset $S^{\pi} X$ of $S^nX$ consisting of the zero-cycles inducing this partition $\pi$ is locally closed in $S^nX$, and can be constructed directly in the following way: for all $i\geq 1$, denote by $n_i$ the number of times the integer $i$ occurs in this partition. Every zero-cycle determining the partition $\pi$ is of the form $$\sum_{i\geq 1}i(x_{i,1} + \ldots +x_{i,n_i})$$ where the points $x_{i,j}$ are geometric points of $X$, all distinct.
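As a combinatorial sanity check of this stratification (a toy model of our own: we replace $X$ by a finite set of $m$ geometric points over an algebraically closed field, so every closed point has degree one and an effective zero-cycle of degree $n$ is just a multiset of size $n$), one can verify that the strata indexed by the partitions $\pi$ of $n$ fill up all of $S^nX$:

```python
from math import comb, factorial, prod
from collections import Counter

def partitions(n, max_part=None):
    """Yield the partitions of n as non-increasing tuples."""
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def stratum_size(pi, m):
    """Number of zero-cycles with multiplicity partition pi on a set of m
    points: choose distinct support points x_{i,j}, unordered within each
    block of equal multiplicity i (mirroring the quotient by prod_i S_{n_i})."""
    r = len(pi)                # number of distinct points in the support
    n_i = Counter(pi)          # n_i[i] = number of parts equal to i
    ordered = prod(m - t for t in range(r))   # ordered distinct choices
    return ordered // prod(factorial(c) for c in n_i.values())

n, m = 4, 7
total = sum(stratum_size(pi, m) for pi in partitions(n))
# all multisets of size n on m points, i.e. all effective zero-cycles:
assert total == comb(m + n - 1, n)
```

In this model the stratum of the partition $[1,\ldots,1]$ (the open stratum $S^n_*X$) contributes $\binom{m}{n}$, matching the zero-cycles with $n$ distinct support points.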
Consider therefore the product $X^{\sum_{i\geq 1}n_i}$, from which we remove the diagonal $\Delta$, that is, the points having at least two equal coordinates. The product $\prod_{i\geq 1}\mathfrak{S}_{n_i}$ of symmetric groups has a left action on $\prod_{i\geq 1}X^{n_i} = X^{\sum_{i\geq 1}n_i}$ via $$(\sigma_i)_{i\geq 1}\cdot (x_{i,1},\ldots, x_{i,n_i})_{i\geq 1}\mapsto (x_{i,\sigma_i^{-1}(1)},\ldots,x_{i,\sigma_i^{-1}(n_i)})_{i\geq 1}$$ for any $\sigma = (\sigma_i)_{i\geq 1}\in\prod_{i\geq 1}\mathfrak{S}_{n_i}$. This action restricts to $X^{\sum_{i\geq 1}n_i}\backslash \Delta$, and the quotient will be naturally isomorphic to the above locally closed subset $S^{\pi}X$. This observation will be the starting point of the construction in the following paragraph. The variety $S^nX$ can thus be written as a disjoint union of locally closed sets $S^{\pi}X$ with~$\pi$ ranging over all partitions of $n$. In particular, we will denote by $S^n_*X$ the open subset of $S^nX$ corresponding to the partition $[1,\ldots,1]$ of $n$, which parametrises étale zero-cycles of degree $n$ on~$X$. This construction may be done with $k$ replaced by a $k$-variety $R$, and products replaced by fibred products over $R$. The resulting objects will be varieties over $R$, denoted $S^nX$ and $S^{\pi}X$ as well, or $S^n(X/R)$ and $S^{\pi}(X/R)$ if we want to keep track of the base variety. For any point $v\in R$, the fibre of $S^{\pi}(X/R)$ above $v$ will be isomorphic to $S^{\pi}(X_v/\kappa(v))$ where $X_v$ is the fibre of $X$ over $v$. \begin{remark} Though we may define symmetric powers also over non-perfect fields, the above description of points will fail in this case. This justifies our condition on the base variety $R$. \end{remark} \subsection{Quotients of schemes by finite group actions}\label{appendix} We gather here some facts about quotients by finite group actions. A detailed account may be found in Chapter 0 of \cite{GIT}.
Let $X$ be a scheme endowed with an algebraic action of a finite group $G$. \begin{definition} A (categorical) quotient of $X$ by $G$ is a morphism of schemes $\pi: X\rightarrow Y$ with the following two properties: \begin{itemize}\item $\pi$ is $G$-invariant; \item $\pi$ is universal with this property: for every scheme $Z$ over $k$ and every $G$-invariant morphism $f:X\rightarrow Z$, there is a unique morphism $h:Y\rightarrow Z$ such that $h\circ\pi = f$. \end{itemize} \end{definition} Because of the universality, a quotient is unique up to canonical isomorphism if it exists. In this case, we write $Y = X/G$. Note that the universal property implies in particular that if $X$ is an $S$-scheme, then so is $X/G$. If $X = \mathrm{Spec}\, A$ is affine, of finite type over $S$ (which may be assumed to be affine, equal to $\mathrm{Spec}\, C$ for some ring $C$), then $A$ has a $G$-action, and we may define the subring $A^G$ of $A$ of all $G$-invariant elements of $A$. This induces a morphism $$\pi: \mathrm{Spec}\, A\longrightarrow \mathrm{Spec}\, A^{G}.$$ One can show that this is the quotient of $X$ by $G$. \begin{definition} We say that the action of $G$ on $X$ is good if any $x\in X$ has an open affine neighbourhood that is preserved by the $G$-action. \end{definition} For example, if $X$ is quasi-projective over a field $k$, then the action of $G$ is good. If the action of $G$ on the variety $X$ is good, then taking an affine cover $(U_i)_i$ of $X$ by such affine subsets, one may construct a quotient $X/G$ by glueing together the quotients $U_i/G$. It follows from \cite{GIT}, theorem 1.10, that this quotient is quasi-projective. \begin{prop}[\cite{Mustata}, Proposition A.8]\label{subquotient} Let $G$ be a finite group acting by algebraic automorphisms on a quasi-projective variety $X$ over $k$. Let $H$ be a subgroup of $G$, and $Y$ an open subset of $X$ such that \begin{enumerate}\item $Y$ is preserved by the action of $H$ on $X$. 
\item If $Hg_1,\ldots,Hg_r$ are the right equivalence classes of $G$ modulo $H$, then $$X = \bigcup_{i=1}^rYg_i$$ is a disjoint cover. \end{enumerate} Then the natural morphism $Y/H\longrightarrow X/G$ is an isomorphism. \end{prop} \subsection{Symmetric products of a family of varieties} \label{definition} Let $X$ be a variety over $R$, and let $\mathscr{X}=(X_i)_{i\geq 1}$ be a family of $X$-varieties with structural morphisms $\phi_i:X_i\longrightarrow X$. All products in this section are fibred products over~$R$. Let $\pi = (n_i)_{i\geq 1}$ be a partition of the integer $n$. The variety $\prod_{i}X_i^{n_i}$ has a morphism $\prod_{i\geq 1}\phi_i^{n_i}$ to $\prod_{i}X^{n_i}$. Denote by $$\left(\prod_{i\geq 1}X^{n_i}\right)_*$$ the open subset of the latter obtained by removing the diagonal, that is, the points having at least two equal coordinates. By base change, we get the open subset $$\prod_{i}X_i^{n_i}\times_{\prod_{i}X^{n_i}}\left(\prod_{i}X^{n_i}\right)_*$$ of elements mapping to $\sum_{i}n_i$-tuples of $X$ which do not belong to the diagonal. This can be summarised by the following cartesian diagram: $$\xymatrix{ \left( \prod_{i\geq 1}X_i^{n_i}\right)\times_{\prod_{i\geq 1}X^{n_i}}\left(\prod_{i\geq 1}X^{n_i}\right)_*\ar@{^{(}->}[r] \ar[d] & \prod_{i\geq 1}X_i^{n_i} \ar[d]^{\prod_{i\geq 1}\phi_i^{n_i}} \\ \left(\prod_{i\geq 1}X^{n_i} \right)_* \ar@{^{(}->}[r] & \prod_{i\geq 1}X^{n_i} }$$ For simplicity, in what follows we will write $\left(\prod_{i\geq 1}X_i^{n_i}\right)_*$ for the variety at the top-left corner of this diagram (when we want to specify that points were removed with respect to coordinates in~$X$, we may write $\left(\prod_{i\geq 1}X^{n_i}\right)_{*,X}$ ). Now the product $\prod_{i\geq 1}\mathfrak{S}_{n_i}$ of symmetric groups acts naturally on the varieties occurring in the right column of this diagram: each~$\mathfrak{S}_{n_i}$ acts on the corresponding~$X_i^{n_i}$ and $X^{n_i}$ by permutation of coordinates. 
It restricts to the varieties in the left column, and is compatible with the vertical maps. Passing to the quotient, the left column gives us a variety which we will denote by $S^{\pi}(\mathscr{X}/R)$, or simply $S^{\pi}\mathscr{X}$, with a map to the variety~$S^{\pi}X$ defined in the previous section. Finally, taking the disjoint union $\cup_{\pi}S^{\pi}\mathscr{X}$ over all partitions of $n$, we get a variety~$S^{n}\mathscr{X}$. \index{symmetric product!of varieties} \begin{remark}\label{diagonal} The horizontal inclusion maps of the cartesian square $$\xymatrix{\left(\prod_{i\geq 1}X_i^{n_i}\right)_*\ar[r]\ar[d] & \prod_{i\geq 1}X_i^{n_i}\ar[d]\\ \left(\prod_{i\geq 1}X^{n_i}\right)_*\ar[r] & \prod_{i\geq 1}X^{n_i} } $$ are compatible with taking the quotient, so we get well-defined maps $$\xymatrix{S^{\pi}\mathscr{X}\ar[r]\ar[d] & \prod_{i\geq 1}S^{n_i}X_i\ar[d]\\ S^{\pi}X\ar[r] & \prod_{i\geq 1}S^{n_i}X } $$ The diagonal is a closed subset $\Delta_X$ in $\prod_{i\geq 1}X^{n_i}$. It is stable by the action of $\prod_{i\geq 1}\mathfrak{S}_{n_i}$, and maps to a closed subset $\Delta_{X,\pi}$ inside $\prod_{i\geq 1}S^{n_i}X$ such that $S^{\pi}X\simeq \left(\prod_{i\geq 1}S^{n_i}X\right) \backslash \Delta_{X,\pi}$, and therefore $S^{\pi}\mathscr{X}$ is exactly the restriction of $\prod_{i\geq 1}S^{n_i}X_i$ to points mapping outside~$\Delta_{X,\pi}$. In other words, the diagonal can be removed before or after passing to the quotient. \end{remark} \begin{notation}\label{allequal} If the family $\mathscr{X}$ is constant, that is, all $X_i$ are equal to some $X$-variety $Y$, then the resulting symmetric products will be denoted $S^{\pi}_X(Y)$ (resp. $S^n_X(Y)$). In particular, by definition, we have $S^{\pi}_X(X) = S^{\pi}X$. \end{notation} \begin{example}\label{firstex}\begin{enumerate}\item If $\pi$ is the partition $[1,\ldots,1]$, we are going to write $S^{\pi}\mathscr{X} = S^{n}_*\mathscr{X}$. 
It corresponds to the variety $S^n_{*,X}X_1$ parametrising effective zero-cycles on~$X_1$ of degree~$n$ mapping to effective zero-cycles on~$X$ of degree $n$ in which no point occurs with multiplicity strictly greater than one. \item If $\pi$ is the partition $[n]$, $S^{\pi}\mathscr{X} = X_n$. \item If we take all $X_i$ to be equal to some $X$-variety $Y$, then $S^{\pi}\mathscr{X} = S^{\pi}_XY$ (see notation \ref{allequal}) corresponds exactly to effective zero-cycles of degree $n$ on $Y$ mapping to zero-cycles on $X$ with partition $\pi$. In particular, if all $X_i$ are equal to $X$, then we get the locally closed subset $S^{\pi}X$ of $S^{n}X$ described in section \ref{sympower}. \item If $X = \mathrm{Spec}\, R$, then $\left(\prod_{i\geq 1}X^{n_i}\right)_*$ is empty whenever there is more than one factor, that is, except if $n_i = 0$ for all $i\geq 1$ but one (recall the product is over $R$). Since the $n_i$ are subject to the relation $\sum_{i}in_i = n$, this means that $n_i=0$ for $i<n$, and $n_n = 1$. Thus, $S^{\pi}\mathscr{X}$ is empty for all $\pi$ but $[n]$, and we have $S^n\mathscr{X} = X_n$. \end{enumerate} \end{example} \section{Iteration of the symmetric product construction}\label{relativesetting} If $\mathscr{X} = (X_i)_{i\geq 1}$ is a family of varieties over $X$, and $X$ itself is a variety over some scheme $R$, then the symmetric product construction over $R$ gives rise to a family of varieties $$S^{\bullet}(\mathscr{X}/R) = (S^{\pi}(\mathscr{X}/R))_{\pi\in\mathbf{N}^{(\mathbf{N}^*)}}$$ over $R$, indexed by all partitions $\pi$. As it is now, our definition of symmetric products doesn't allow us to carry on and construct a symmetric product of this family. The aim of this section is to generalise our construction in a way that will make this possible. This generalisation is important in itself, as it gives the correct general setting in which symmetric products may be defined. 
The idea is to replace families indexed by the set $\mathbf{N}^*$ of positive integers by families indexed by any set $I$. Then the family of their symmetric products will be indexed by the set $$\mathbf{N}^{(I)} = \{(n_i)_{i\in I}\in \mathbf{N}^{I},\ n_i = 0\ \text{for almost all}\ i\},$$ and the family of the symmetric products of those will be indexed by $\mathbf{N}^{\left(\mathbf{N}^{(I)}\right)}.$ \subsection{Symmetric products of varieties indexed by any set}\label{anyset} Let $I$ be a set. The construction is completely analogous to the construction of symmetric products in the case where $I$ is just the set of positive integers. Let $X$ be a quasi-projective variety over~$R$, and $\mathscr{X} = (X_i)_{i\in I}$ a family of quasi-projective $X$-varieties. Fix $\pi= (n_i)_{i\in I}$. The product $$\prod_{i\in I}X_i^{n_i}$$ has a morphism to $\prod_{i\in I}X^{n_i}$. We consider the open subset $$\left(\prod_{i\in I}X_i^{n_i}\right)_{\!\!\!*}\subset \prod_{i\in I}X_i^{n_i}$$ of points lying above the complement of the diagonal of $\prod_{i\in I}X^{n_i}$, that is mapping to points having pairwise distinct coordinates. We have a natural action of the product of symmetric groups $\prod_{i\in I} \mathfrak{S}_{n_i}$ by permutation of coordinates, and we define $$S^{\pi}\mathscr{X} := \left(\prod_{i\in I}X_i^{n_i}\right)_{\!\!\!*}/\prod_{i\in I}\mathfrak{S}_{n_i},$$ which is a variety because the varieties we started with were quasi-projective. \index{symmetric product! of varieties} Note that in particular, in the case $\pi = 0$, we get $S^{0}\mathscr{X} = R$. \begin{remark} The construction is functorial in the sense that if we have two families of $X$-varieties $\mathscr{X} = (X_i)_{i\in I}$ and $\mathscr{Y} = (Y_i)_{i\in I}$, and if we are given, for every $i$, a morphism $f_i:X_i\to Y_i$, then the family of morphisms $f =(f_i)_{i\in I}$ induces, for every $\pi\in\mathbf{N}^{(I)}$, a morphism $S^{\pi}f:S^{\pi}\mathscr{X}\to S^{\pi}\mathscr{Y}$. 
\end{remark} \begin{example}\label{generalex} If $X= R$ and $\pi\neq 0$, then as in Example \ref{firstex}, $S^{\pi}\mathscr{X}$ is empty except if there exists $i_0\in I$ such that $\pi = (n_i)_{i\in I}$ satisfies $n_i=0$ for all $i\neq i_0$, and $n_{i_0} = 1$, and in this case $S^{\pi}\mathscr{X} = X_{i_0}$. \end{example} \bigskip \begin{remark}[Case when $I$ is a semigroup]\label{Isemigroup} Assume for a moment that $I$ is of the form $I_0\setminus \{0\}$ where $I_0$ is a commutative monoid, that is, $I_0$ is endowed with some associative and commutative composition law with zero-element 0. Then there is a well-defined map $$\begin{array}{rccc}\lambda:& \mathbf{N}^{(I)}& \to & I_0\\ & \pi=(n_i)_{i\in I} & \mapsto & \sum_{i\in I}{in_i} \end{array}$$ and we may also define, for any $n\in I_0$, $S^{n}\mathscr{X}$ to be the disjoint union of all the $S^{\pi} \mathscr{X}$ for $\pi\in \lambda^{-1}(n)$. \end{remark} \begin{example}\label{example.allxiequal}In the particular case where $I=\mathbf{N}^*$ and all the $X_i$ are equal to $X$, by definition, for any integer $n$, $S^n\mathscr{X}$ is the disjoint union of the locally closed subsets $S^{\pi}X$ described in section \ref{sympower}. In particular, we have the equality of classes $[S^n\mathscr{X}] = [S^nX]$ in $\mathrm{KVar}^+_{R}$. Note, however, that the natural scheme structure of the symmetric power $S^nX$ is not the same as the scheme structure of $S^n\mathscr{X}$. This won't have any importance for us because we will be mainly working in Grothendieck (semi)rings. \end{example} \begin{example}\label{generalexn} If $X=R$ then by example \ref{generalex}, $S^{n}\mathscr{X} = X_{n}$. \end{example} The following definition comes as a natural generalisation of Kapranov's zeta function. \begin{definition}\label{def.zetafunction} \index{motivic zeta function!of a family of varieties} Let $X$ be a variety over $R$, $\mathscr{X} = (X_i)_{i\in I}$ a family of varieties over $X$.
Consider also a family $\mathbf{t} = (t_i)_{i\in I}$ of indeterminates and denote by $\mathrm{KVar}^+_{R}[[\mathbf{t}]]$ the semi-ring of power series in those indeterminates over $\mathrm{KVar}^+_{R}$. The zeta function associated to $\mathscr{X}$ is the formal power series given by $$Z_{\mathscr{X}}(\mathbf{t}) = \sum_{\pi\in\mathbf{N}^{(I)}}[S^{\pi}\mathscr{X}]\t^{\pi}\ \ \in\mathrm{KVar}^+_{R}[[\mathbf{t}]],$$ where $\t^{\pi} := \prod_{i\in I}t_i^{n_i}$. \index{ZX@$Z_{\mathscr{X}}(\t)$} In particular, if one assumes $I = \mathbf{N}^*$ and specialises the $t_i$ to $t_i = t^i$ for a single variable~$t$, one gets a power series $$Z_{\mathscr{X}}(t) = \sum_{n\geq 0}[S^{n}\mathscr{X}]t^{n}\ \ \in\mathrm{KVar}^+_{R}[[t]].$$ More generally, if we assume $I = \mathbf{N}^p\backslash\{0\}$ for some integer $p\geq 1$, we get a multi-variate variant of the above zeta-function: $$Z_{\mathscr{X}}(t_1,\ldots,t_p) = \sum_{\mathbf{n}\in \mathbf{N}^p}[S^{\mathbf{n}}\mathscr{X}]t_1^{n_1}\ldots t_p^{n_p}\ \ \in\mathrm{KVar}^+_{R}[[t_1,\ldots,t_p]],$$ where for every $\mathbf{n} = (n_1,\ldots,n_p)\in\mathbf{N}^p$, the variety $S^{\mathbf{n}}\mathscr{X}$ is the disjoint union of the $S^{\pi}\mathscr{X}$ for all $\pi = (n_i)_{i\in I}\in\mathbf{N}^{(I)}$ such that $\sum_{i\in I}in_i = \mathbf{n}$. When we want to specify $R$, we are going to write $Z_{\mathscr{X}/R}$ instead. \end{definition} \begin{example} Taking $X_i = X$ for all $i\geq 1$, and using the fact that by example \ref{example.allxiequal} in this case $[S^n\mathscr{X}] = [S^nX]$, we recover Kapranov's zeta function $$Z_X(t) = \sum_{n\geq 0}[S^nX]t^n\ \ \in\mathrm{KVar}^+_{R}[[t]].$$ \end{example}\index{Kapranov's zeta function} \subsection{Describing points of symmetric products}\label{symprodpoints}\index{symmetric product!points} Assume that $I$ is a commutative semigroup.
To describe the points of these symmetric products, it is convenient to use the term ``effective zero-cycle'' rather loosely, so that it applies to any finite formal sum of closed (or Galois orbits of geometric) points with coefficients in some semigroup. Recall each $X_i$ comes with a morphism $\phi_i:X_i\to X$. Each point of $S^{\pi}\mathscr{X}$ has an image in $S^{\pi}X$ which, by construction, can be written as an effective zero-cycle on $X$ of the form \begin{equation}\label{0cycle}\sum_{i\in I} i(v_{i,1}+\ldots +v_{i,n_i})\in S^{\pi}X ,\end{equation} the $v_{i,j}$ being distinct (geometric) points of $X$. Moreover, for every $i\in I$, the degree $i$ part $$i(v_{i,1} +\ldots + v_{i,n_i})$$ comes from $(v_{i,1},\ldots,v_{i,n_i})\in X^{n_i}$, which in turn by definition comes from a point of~$X_i^{n_i}$. Thus, by analogy with the notation used in (\ref{0cycle}), we will write elements of $S^{\pi}\mathscr{X}$ as effective zero-cycles on the disjoint union of (a finite number of) the $X_i$, of the form $$D' =\sum_{i\in I}i(x_{i,1} + \ldots + x_{i,n_i}) $$ such that for all $i\in I$ and for all $j\in\{1,\ldots,n_i\},\ x_{i,j}$ is a geometric point of $X_i$, and $$\sum_{i\in I}i(\phi_i(x_{i,1}) + \ldots + \phi_i(x_{i,n_i})) \in S^{\pi}X,$$ that is, the $\phi_i(x_{i,j})$ are distinct geometric points of $X$. One may also view an element of $S^{\pi}\mathscr{X}$ simply as a collection of effective zero-cycles $(D_i)_{i\in I}$ where for all $i\in I$, $D_i\in S^{n_i}X_i$, the support of the image of $D_i$ in $X$ is composed of $n_i$ distinct geometric points, and the supports of the images of $D_i$ and $D_j$ for $i\neq j$ are disjoint. If $D\in S^{\pi}X(\Omega)$ is a geometric point, for some algebraically closed field $\Omega$, then for all $i\in I$ and $1\leq j\leq n_i$, $v_{i,j}\in X(\Omega).$ Let $\Omega'\supset \Omega$ be an algebraically closed field. 
Then the $\Omega'$-points of the fibre of $S^{\pi}\mathscr{X}$ above $D$ are exactly those where for all $i,j$, $x_{i,j}\in X_i(\Omega')$. For $n\in I$, a geometric point of $S^{n}X$ is of the form $D = \sum_{v\in X} n_v v$, where the $n_v$ are non-negative integers, almost all zero and such that $\sum_{v}n_v = n$, and the points $v$ are distinct geometric points of $X$. The zero-cycle $D$ is an element of $S^{\pi}X$ if and only if the partition of the integer $n$ defined by the integers $(n_v)_v$ is exactly $\pi$. A geometric point of $S^{\pi}\mathscr{X}$ lying above $D$ will be written in the form $\sum_{v\in X} n_v x_v$, where for every $v$, $x_v$ is a geometric point of $X_{n_v,v}$, the fibre of $X_{n_v}$ above $v$. The fibre of $S^{\pi}\mathscr{X}$ above a geometric point $D\in S^{\pi}X$ is \begin{equation}\label{formula.fibre}(S^{\pi}\mathscr{X})_D = \prod_{v\in D}X_{n_v,v}.\end{equation} \subsection{Symmetric product of a family of symmetric products}\label{symproductquestion} Assume we are given a family of varieties $\mathscr{X} = (X_i)_{i\in I}$ over some variety $X$, which itself is defined over $R$, and assume $R$ is itself a variety over some $k$-variety $R'$. For every $\pi\in \mathbf{N}^{(I)}$, this gives rise to a variety $S^{\pi}(\mathscr{X}/R)$ over $R$. We can now consider the family $$S^{\bullet}(\mathscr{X}/R) = (S^{\pi}(\mathscr{X}/R))_{\pi\in \mathbf{N}^{(I)}},$$ of varieties over $R$ and, replacing $I$ by $\mathbf{N}^{(I)}$ in the previous paragraph, do the same construction again. We get a family of varieties indexed by the set $$\mathbf{N}^{(\mathbf{N}^{(I)})} = \{(m_{\pi})_{\pi}\in \mathbf{N}^{\mathbf{N}^{(I)}}, \ \ m_{\pi} = 0\ \text{for all but finitely many $\pi$}\}.$$ For $\varpi\in \mathbf{N}^{(\mathbf{N}^{(I)})}$, we have $$S^{\varpi}(S^{\bullet}(\mathscr{X}/R)/R') = \left(\prod_{\pi\in \mathbf{N}^{(I)}}(S^{\pi}(\mathscr{X}/R))^{m_{\pi}}\ \ _{/R'}\right)_{\!\!\!*,R}/\prod_{\pi\in \mathbf{N}^{(I)}}\mathfrak{S}_{m_{\pi}},$$ where $/R'$ means the product is over $R'$.
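To fix ideas, here is a minimal instance of this iterated construction (our own illustration, not taken from the text): let $I = \mathbf{N}^*$ and let $\varpi$ have $m_{[1]} = 2$ and all other multiplicities zero.

```latex
% Minimal instance (illustration only): I = N^*, m_{[1]} = 2, all other
% multiplicities zero. Since S^{[1]}(\mathscr{X}/R) = X_1, the displayed
% formula specialises to
\[
S^{\varpi}\bigl(S^{\bullet}(\mathscr{X}/R)/R'\bigr)
  \;=\; \bigl(X_1\times_{R'}X_1\bigr)_{*,R}\big/\mathfrak{S}_2,
\]
% i.e. unordered pairs of points of X_1 lying over a common point of R'
% whose images in R are distinct.
```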
A natural question arises now: What is the link between this family $$(S^{\varpi}(S^{\bullet}(\mathscr{X}/R)/R'))_{\varpi\in\mathbf{N}^{\left(\mathbf{N}^{(I)}\right)}},$$ and the family $(S^{\pi}(\mathscr{X}/R'))_{\pi\in\mathbf{N}^{(I)}}$ obtained by doing the symmetric product construction for the family $\mathscr{X}$ but seeing $X$ directly as an $R'$-variety? \subsection{Main result} \begin{definition} \label{mu} The map $\mu:\mathbf{N}^{(\mathbf{N}^{(I)}\backslash\{0\})}\longrightarrow \mathbf{N}^{(I)}$ is defined to be the map that sends an element $(m_{\pi})_{\pi\in \mathbf{N}^{(I)}\backslash\{0\}}$ to $$\sum_{\pi\in \mathbf{N}^{(I)}\backslash\{0\}}m_{\pi}\pi\in \mathbf{N}^{(I)}.$$ In terms of the other notation, $\mu$ sends an element $$[[a_{1,1},\ldots,a_{1,m_1}],\ldots,[a_{r,1},\ldots,a_{r,m_r}]]\in \mathbf{N}^{(\mathbf{N}^{(I)}\backslash\{0\})}$$ to $$[a_{1,1},\ldots,a_{1,m_1},a_{2,1},\ldots,a_{2,m_2},\ldots,a_{r,1},\ldots,a_{r,m_r}]\in \mathbf{N}^{(I)}.$$ \end{definition} Before stating the main proposition, let us give a motivating example. \begin{example}\label{ex.112} Let $I = \mathbf{N}^{*}$ and $\pi = [1,1,2]\in\mathbf{N}^{(I)}$, so that $$\mu^{-1}(\pi) = \{\ [[1],[1],[2]],\ [[1,1],[2]],\ [[1],[1,2]],\ [[1,1,2]]\ \}.$$ We keep the notation from section \ref{symproductquestion}. The points of the variety \begin{equation}\label{sym.112}S^{\pi}(\mathscr{X}/R') = \left(X_1\times_{R'}X_1\times_{R'}X_2\right)_{*,X}/\mathfrak{S}_2\times \mathfrak{S}_1\end{equation} are zero-cycles of the form $x + y + 2z$, with $x,y,z$ having distinct images in $X$, but all mapping to the same $r\in R'$. We therefore may classify them depending on the relative positions of their images in $R$, which may be encoded by an element of $\mu^{-1}(\pi)$, by adding square brackets to gather integers corresponding to points having the same image in $R$.
There are several cases to consider: \begin{itemize}\item The points $x,y,z$ all have distinct images in $R$: this may be encoded by $\varpi_1 = [[1],[1],[2]]$. \item The points $x$ and $y$ have the same image in $R$, but not $z$: this corresponds to $\varpi_2 = [[1,1],[2]]$. \item The point $z$ has the same image as one of the points $x$ or $y$, but not the other: this is represented by $\varpi_3 = [[1],[1,2]]$. \item They all have the same image: this gives $\varpi_4 = [[1,1,2]]$. \end{itemize} Thus, we have a decomposition of $S^{\pi}(\mathscr{X}/R')$ into four locally closed subsets $S^{\pi}_{\varpi_i}(\mathscr{X}/R')$, $i=1,2,3,4$ corresponding to these four cases. Proposition \ref{iterate} gives a direct way of constructing varieties isomorphic to these locally closed subsets, in the flavour of what has been done in section \ref{sympower}, when we gave direct constructions for the locally closed subsets $S^{\pi}X$ of $S^{n}X$. For $\varpi_2$ for example, we may remark that giving an element of $S^{\pi}_{\varpi_2}(\mathscr{X}/R')$ is equivalent to giving a zero-cycle in $S^{[1,1]}(\mathscr{X}/R)$ and a zero-cycle in $S^{[2]}(\mathscr{X}/R)$, and making sure they have distinct images in $R$ but the same image in $R'$, which results in: \begin{equation}\label{var.112}\left(S^{[1,1]}(\mathscr{X}/R)\times_{R'} S^{[2]}(\mathscr{X}/R)\right)_{*,R} = S^{\varpi_2}(S^{\bullet}(\mathscr{X}/R)/R').\end{equation} The right-hand side means that this amounts exactly to applying the symmetric product construction for the element $\varpi_2\in\mathbf{N}^{(\mathbf{N}^{(I)}\setminus\{0\})}$, and the family $S^{\bullet}(\mathscr{X}/R)$ above the $R'$-variety $R$. \end{example} \begin{prop}\label{iterate} Let $R'$ be a variety over $k$, $R$ a variety over $R'$, $X$ a variety over~$R$, and let $\mathscr{X} = (X_i)_{i\in I}$ be a family of varieties over $X$, indexed by a set $I$.
Then for every $\pi\in\mathbf{N}^{(I)}$ and for every $\varpi\in\mu^{-1}(\pi)$, there is a piecewise isomorphism of the variety $S^{\varpi}(S^{\bullet}(\mathscr{X}/R)/R')$ onto a locally closed subset $S^{\pi}_{\varpi}(\mathscr{X}/R')$ of $S^{\pi}(\mathscr{X}/R')$, so that moreover $S^{\pi}(\mathscr{X}/R')$ is equal to the disjoint union of the sets $S^{\pi}_{\varpi}(\mathscr{X}/R')$. In particular, we have the equality $$\sum_{\varpi\in \mu^{-1}(\pi)}[S^{\varpi}(S^{\bullet}(\mathscr{X}/R)/R')] = [S^{\pi}(\mathscr{X}/R')]$$ in $\mathrm{KVar}^+_{R'}$. \end{prop} \subsection{Proof of proposition \ref{iterate}} To prove proposition \ref{iterate}, by a spreading-out argument, we may assume $R'=k$ is a field. We write $J = \mathbf{N}^{(I)}\backslash\{0\}$, and for any $j\in J$, $\pi_j = (n_i^j)_{i\in I}\in \mathbf{N}^{(I)}$, so that every element $\varpi$ of $\mathbf{N}^{(J)}$ may be given as a family of multiplicities $(m_j)_{j\in J}$. Put $\pi = (n_i)_{i\in I}$, and fix $\varpi\in\mu^{-1}(\pi)$. The condition that $\pi = \mu(\varpi)$ is equivalent to $$n_i = \sum_{j\in J}m_j n_i^{j}$$ for all $i\in I$. Our proof decomposes into several steps. Both $S^{\varpi}(S^{\bullet}(\mathscr{X}/R))$ and $S^{\pi}(\mathscr{X})$ are constructed as a quotient of a product of the $X_i$. We are going to refrain from taking quotients first, and construct an immersion from the product giving the former to the product giving the latter. \paragraph{The immersion before quotients} For any two varieties $V$ and $W$ over $R$, there is an immersion $V \times_R W\hookrightarrow V\times W$.
Thus, for every $j\in J$ there is an immersion $$\prod_{i\in I}X_i^{n_i^j}\ _{/R}\hookrightarrow \prod_{i\in I}X_i^{n_i^j}.$$ Restricting to the complement of the diagonal in $\prod_{i\in I}X^{n_i^{j}}$, we get an immersion $$\left(\prod_{i\in I}X_i^{n_i^j}\ _{/R}\right)_{*,X}\hookrightarrow \left(\prod_{i\in I}X_i^{n_i^j}\right)_{\!\!\!*,X}.$$ Taking the product over all $j\in J$ of the $m_j$-th powers of those varieties gives: $$\prod_{j\in J}\left(\left(\prod_{i\in I}X_i^{n_i^j}\ _{/R}\right)_{\!\!\!*,X}\right)^{m_j} \hookrightarrow \prod_{j\in J}\left(\left(\prod_{i\in I}X_i^{n_i^j}\right)_{\!\!\!*,X}\right)^{m_j}.$$ For any two varieties $V$ and $W$ over $X$, we have the commutative diagram $$\xymatrix{(V\times W)_{*,R} \ar@{^{(}->}[r] \ar[d]& (V\times W)_{*,X} \ar@{^{(}->}[r] \ar[d]& V\times W \ar[d]\\ (X\times X)_{*,R}\ar@{^{(}->}[r]\ar[d] & (X\times X)_{*,X} \ar@{^{(}->}[r]& X\times X\ar[d] \\ (R\times R)_{*,R} \ar@{^{(}->}[rr]& & R\times R}$$ where the horizontal arrows are all open immersions: indeed, recall that we assumed~$R$ and~$X$ to be quasi-projective over $k$, and therefore separated, so that complements of diagonals are open (and therefore so are their inverse images by the structural morphisms). 
In particular, we have an open immersion $(V\times W)_{*,R}\to (V\times W)_{*,X}.$ Thus, we can restrict to the complement of the diagonal of $\prod_{j\in J}R^{m_j}$ on the left, and to the complement of the diagonal of $\prod_{j\in J}\left(\prod_{i\in I}X^{n_i^j}\right)^{m_j}$ on the right, to get \begin{equation}\label{VtoW1}\left(\prod_{j\in J}\left(\left(\prod_{i\in I}X_i^{n_i^j}\ _{/R}\right)_{\!\!\!*,X}\right)^{m_j}\right)_{\!\!\!*,R} \hookrightarrow \left(\prod_{j\in J}\left(\prod_{i\in I}X_i^{n_i^j}\right)^{m_j}\right)_{\!\!\!*,X}.\end{equation} Note that using the assumption $\mu(\varpi) = \pi$, we may write \begin{equation}\label{rewrite}\left(\prod_{j\in J}\left(\prod_{i\in I} X_{i}^{n_i^j}\right)^{m_j}\right)_{\!\!\!*,X} = \left(\prod_{i\in I} X_i^{\sum_{j\in J}n_i^jm_j}\right)_{\!\!\!*,X} = \left(\prod_{i\in I}X_i^{n_i}\right)_{\!\!\!*,X}. \end{equation} Composing (\ref{VtoW1}) with this identification, we get an immersion \begin{equation}\label{VtoW}\left(\prod_{j\in J}\left(\left(\prod_{i\in I}X_i^{n_i^j}\ _{/R}\right)_{\!\!\!*,X}\right)^{m_j}\right)_{\!\!\!*,R} \hookrightarrow \left(\prod_{i\in I}X_i^{n_i}\right)_{\!\!\!*,X}.\end{equation} \begin{example} In example \ref{ex.112}, the immersion $(\ref{VtoW})$ corresponding to $\varpi_2$ is written in the form $$\left((X_1\times_R X_1)_{*,X}\times X_2 \right)_{*,R} \hookrightarrow (X_1\times X_1\times X_2)_{*,X},$$ where the variety on the left-hand side (resp. right-hand side) is exactly the one in $(\ref{var.112})$ (resp. $(\ref{sym.112})$), just without the permutation action quotients. (Recall we took $R' = k$.) \end{example} \paragraph{Description of the permutation actions} Let $V$ denote the variety on the left-hand side of (\ref{VtoW}), which we identify with its image under this immersion, and let $W$ denote the variety on the right-hand side. There is a natural action of $G = \prod_{i\in I}\mathfrak{S}_{n_i}$ on $W$. As for $V$, we can distinguish two groups acting on it.
The first one is $$\prod_{j\in J}\left(\prod_{i\in I}\mathfrak{S}_{n_{i}^j}\right)^{m_j}$$ which comes from the natural permutation action of each $\mathfrak{S}_{n_i^j}$ on the corresponding factor $X_i^{n_i^j}$ of $\prod_{i\in I}X_i^{n_i^j}\ _{/R}$, for all $i\in I$ and all $j\in J$. Composing the morphisms $X_i\to X$ with the morphism $X\to R$, we get a map \begin{equation}\label{maptoR}\phi: \left(\prod_{j\in J}\left(\left(\prod_{i\in I}X_i^{n_i^j}\ _{/R}\right)_{\!\!\!*,X}\right)^{m_j}\right)_{\!\!\!*,R}\longrightarrow\left( \prod_{j\in J}R^{m_j}\right)_{*}\end{equation} the fibres of which are stable with respect to that action. On the other hand, there is also a permutation action of $\prod_{j\in J}\mathfrak{S}_{m_j}$ on $\left( \prod_{j\in J}R^{m_j}\right)_{*}$, which pulls back to an action on the variety on the left-hand side in the following manner: for $x\in V$, denoting for every $j\in J$ and every $\ell\in\{1,\ldots,m_j\}$ by $x_{j,\ell}$ the projection of $x$ on the $\ell$-th copy of $\left(\prod_{i\in I}X_i^{n_i^j}\ _{/R}\right)_{*,X}$ occurring in $V$, the element $\sigma = (\sigma_j)_{j\in J}\in \prod_{j\in J}\mathfrak{S}_{m_j}$ acts on $x = (x_{j,1},\ldots,x_{j,m_j})_{j\in J}$ via $$\sigma\cdot \left((x_{j,1},\ldots,x_{j,m_j})_{j\in J}\right) = \left(\left(x_{j,\sigma_j^{-1}(1)},\ldots,x_{j,\sigma_j^{-1}(m_j)}\right)_{j\in J}\right).$$ Through immersion (\ref{VtoW}), these two actions give us two subgroups~$H_1$ and $H_2$ of $G = \prod_{i\in I}\mathfrak{S}_{n_i}$. \begin{example}\label{example.subgroup}\begin{enumerate}\item In example \ref{ex.112}, we have $W = (X_1^2 \times X_2)_{*,X}$. Let us examine the subgroups of $G := \mathfrak{S}_2\times \mathfrak{S}_1$ corresponding to the different $\varpi_i$ occurring in that example. \begin{itemize}\item For $\varpi_1 = [[1],[1],[2]]$, we have $V = (X_1^2\times X_2)_{*,R}$, $H_1 = \{1\}$ and $H_2 = G$. \item For $\varpi_{2} = [[1,1],[2]]$, we have $V = ((X_1\times_R X_1)_{*,X}\times X_2)_{*,R}$, $H_1 = G$ and $H_2 = \{1\}$.
\item For $\varpi_{3} = [[1,2],[1]]$, we have $V = ((X_1\times_R X_2)_{*,X}\times X_1)_{*,R}$, and $H_1 = H_2 = \{1\}$. \item For $\varpi_{4} = [[1,1,2]]$, we have $V = (X_1\times_RX_1\times_R X_2)_{*,X}$, $H_1 = G$ and $H_2 = \{1\}$. \end{itemize} \item Let us examine another example: $\pi = [1,1,1,1,1,1]$, $\varpi = [[1,1],[1,1],[1],[1]]$. Then $W = (X_1^6)_{*,X}$, $G = \mathfrak{S}_6$, and $$V = \left((X_1\times_R X_1)_{*,X}\times (X_1\times_R X_1)_{*,X} \times X_1\times X_1\right)_{*,R},$$ so that $H_1$ is the subgroup of $G$ generated by the permutations $(12)$ and $(34)$, whereas~$H_2$ is generated by the permutations $(13)(24)$ and $(56)$. \end{enumerate} \end{example} \begin{lemma} The subgroup $H_1$ is normalised by $H_2$. The subgroup $H:=H_1H_2$ they generate is the largest subgroup of $\prod_{i\in I}\mathfrak{S}_{n_i}$ under the action of which $V$ is invariant inside~$W$. \end{lemma} \begin{proof} For all $\sigma\in H_1$ and $\tau\in H_2$, the element $\tau\sigma\tau^{-1}\in G$ stabilises the fibres of the morphism~$\phi$ in (\ref{maptoR}), which means that it stabilises each factor $\prod_{i}X_i^{n_i^{j}}\ _{/R}$ of $V$. Thus, $\tau\sigma\tau^{-1}$ is an element of~$H_1$. Now, let $\sigma \in\prod_{i\in I}\mathfrak{S}_{n_i}$ be such that for all $x\in V$, $\sigma x\in V$. Let $x\in V$, and let $\tau\in H_2$ be the element such that $\tau(\phi(x)) = \phi(\sigma(x))$. Then $\sigma\tau^{-1}$ stabilises the fibres of the map~$\phi$ in (\ref{maptoR}). This means that its action stabilises each factor $\prod_{i}X_i^{n_i^{j}}\ _{/R}$ of $V$, so $\sigma\tau^{-1}$ is an element of~$H_1$, and hence $\sigma = (\sigma\tau^{-1})\tau\in H_1H_2 = H$.
\end{proof} Our aim now is to describe a locally closed subset $W(\varpi)$ of $W$ containing $V$ and stable under the natural $\prod_{i\in I}\mathfrak{S}_{n_i}$-action on $W$, and show that we can apply Proposition \ref{subquotient} to $V$ and $W(\varpi)$ to get an isomorphism $$V/H \simeq W(\varpi)/\prod_{i\in I}\mathfrak{S}_{n_i}$$ where the variety on the right-hand side will be called $S^{\pi}_{\varpi}(\mathscr{X})$. \paragraph{Equivalence relations on coordinates of points of $W$} Recall that $W$ is the variety $$\left(\prod_{i\in I} X_i^{n_i}\right)_{*,X}.$$ A point of this variety is of the form $x = (x_{i,p})_{\substack{i\in I\\ 1\leq p\leq n_i}}$ where for all $i\in I$ and for all $1\leq p\leq n_i$, $x_{i,p}\in X_i$ and all coordinates $x_{i,p}$ have distinct images in $X$. Consider an equivalence relation $\rho$ on the set of indices $\{(i,p)\}_{\substack{i\in I\\ 1\leq p\leq n_i}}$. Each equivalence class~$E$ is a subset of the latter, which we write in the form $E = \bigcup_{i\in I}E_i$ (disjoint union), where $E_i$ denotes the set of elements of $E$ with first index $i$: $$E_i = \{(i,\alpha_{i,1}),\ldots,(i,\alpha_{i,\ell_i})\}$$ for some integers $(\ell_i)_{i\in I}$ (with $\ell_i\leq n_i$ for all $i$), and $$1\leq \alpha_{i,1}<\ldots<\alpha_{i,\ell_i}\leq n_i\ \ \text{for all}\ \ i\in I.$$ Note that the equivalence classes $E$ of $\rho$ form a partition of the set of indices of the coordinates of the point $x$, and that therefore for every $i\in I$, the sets $E_i$ form a partition of the set $\{(i,1),\ldots,(i,n_i)\}$. Thus, the sum of the $\ell_i$ over all equivalence classes~$E$ is equal to~$n_i$. To each such non-empty $E$ we can associate the non-zero element $\pi(E) = (\ell_i)_{i\in I}\in\mathbf{N}^{(I)}$.
The collection of all $\pi(E)$ for all equivalence classes $E$ of $\rho$, counted with multiplicities, then gives an element $\varpi(\rho)\in \mathbf{N}^{(\mathbf{N}^{(I)}\backslash\{0\})}$ such that $\mu(\varpi(\rho)) = \pi$, since the sum of the $\ell_i$ over all equivalence classes $E$ is $n_i$. \paragraph{Definition of $W(\varpi)$} For every $x\in W$, we define an equivalence relation $\rho_x$ on the set $$\{(i,p)\}_{\substack{i\in I\\ 1\leq p\leq n_i}}$$ by: $(i,p)\sim (i',p')$ if and only if the coordinates $x_{i,p}$ and $x_{i',p'}$ have the same image in $R$. \begin{definition} For every equivalence relation $\rho$ on $\{(i,p)\}_{\substack{i\in I\\ 1\leq p\leq n_i}}$ occurring in this way, define the locally closed subsets $$W_{\rho} = \{x\in W,\ \rho_{x} = \rho\}\subset W$$ and $$W(\varpi) = \bigcup_{\varpi(\rho) = \varpi}W_{\rho}\subset W,$$ (this is a finite and disjoint union). \end{definition} \begin{example} Let $I = \mathbf{N}^{*}$ and $\pi = [1,1,2]$, so that $W = (X_1^2\times X_2)_{*,X}$. An element of~$W$ will be written $(x_{1,1},x_{1,2},x_{2,1}).$ Let $\rho$ be the equivalence relation on the set $\{(1,1), (1,2),(2,1)\}$ with equivalence classes $\{(1,1),(2,1)\}$ and $\{(1,2)\}$, giving rise respectively to the partitions $[1,2]$ and $[1]$, so that $\varpi:=\varpi(\rho) = [[1,2],[1]]$. The only other equivalence relation giving this element of $\mu^{-1}(\pi)$ is the one with equivalence classes $\{(1,2),(2,1)\}$ and $\{(1,1)\}$, denoted by $\rho'$. Thus $W_{\rho}$ is the locally closed subset of triples $(x_{1,1},x_{1,2},x_{2,1})$ such that in~$R$, $x_{2,1}$ becomes equal to $x_{1,1}$ but not to $x_{1,2}$. In the same way, $W_{\rho'}$ corresponds to triples such that $x_{2,1}$ becomes equal to $x_{1,2}$ but not to $x_{1,1}$. Finally, $W(\varpi)$ is the union of these two sets, namely the set of triples such that in $R$, $x_{2,1}$ becomes equal either to $x_{1,1}$ or to~$x_{1,2}$, where the ``or'' is exclusive. 
\end{example} \paragraph{Taking quotients} \begin{lemma} \begin{enumerate}[(a)]\item $W(\varpi)$ is stable under the action of $\prod_{i\in I}\mathfrak{S}_{n_i}$ on $W$. \item The group $\prod_{i\in I}\mathfrak{S}_{n_i}$ acts transitively on the set of the $W_{\rho}$ with $\varpi(\rho) = \varpi$. \end{enumerate} \end{lemma} \begin{proof}\begin{enumerate}[a)]\item Let $\sigma = (\sigma_{i})_{i\in I}\in \prod_{i\in I}\mathfrak{S}_{n_i}$, $\rho$ some equivalence relation and $x\in W_{\rho}$. For every equivalence class $E$, the equivalence class $\sigma E = \bigcup_{i\in I}\sigma_i(E_i)$ gives the same numbers $\ell_i$: therefore $\varpi(\sigma\rho) = \varpi$. \item Let $\rho$ and $\rho'$ be two different equivalence relations such that $\varpi(\rho) = \varpi(\rho')$. Since they give rise to the same $\varpi$, they have the same number of equivalence classes, and moreover, to each equivalence class $E$ of $\rho$ we may associate an equivalence class~$E'$ of~$\rho'$ such that for every $i\in I$, we have $$\# E_i = \# E'_i.$$ Denoting by $\ell_i$ this common value, write $$E_i = \{(i,\alpha_{i,1}),\ldots,(i,\alpha_{i,\ell_i})\}\ \ \ \text{and}\ \ \ E_i' = \{(i,\beta_{i,1}),\ldots,(i,\beta_{i,\ell_i})\}$$ for all $i\in I$. Define the restriction of the element $\sigma = \prod_{i\in I}\sigma_i\in \prod_{i\in I}\mathfrak{S}_{n_i}$ to $$\prod_{i\in I}\{\alpha_{i,1},\ldots,\alpha_{i,\ell_i}\}$$ by $\sigma_i(\alpha_{i,p}) = \beta_{i,p}.$ Since the equivalence classes of $\rho$ form a partition of $\{(i,p)\}_{\substack{i\in I\\ 1\leq p\leq n_i}}$, doing this for all equivalence classes completely defines an element $\sigma\in \prod_{i\in I}\mathfrak{S}_{n_i}$ such that for every $x\in W_{\rho}$, $\sigma x\in W_{\rho'}$. 
\end{enumerate} \end{proof} \begin{definition} For every $\varpi\in\mu^{-1}(\pi)$ we define $S^{\pi}_{\varpi}(\mathscr{X})$ to be the locally closed subset of $S^{\pi}\mathscr{X}$ given by taking the quotient of $W(\varpi)\subset W$ by $\prod_{i\in I}\mathfrak{S}_{n_i}$. \end{definition} \begin{lemma} There is an equivalence relation $\rho$ such that the image $V$ of the immersion in (\ref{VtoW}) is equal to $W_{\rho}$. \end{lemma} \begin{proof} Fix $x=(x_{i,p})_{\substack{i\in I\\ 1\leq p\leq n_i}}\in W$ in the image of the morphism in (\ref{VtoW}), and for all $i\in I,\ j\in J$ and $1\leq q \leq m_j$, denote by $E_{i,j,q}$ the set of indices of the coordinates of the projection of $x$ to the $q$-th factor~$X_{i}^{n_i^j}$. By definition, all coordinates $x_{i,p}$ of $x$ with index $(i,p)\in E_{j,q}:= \bigcup_{i\in I} E_{i,j,q}$ have the same image $r_{j,q}(x)\in R$, and the elements~$r_{j,q}(x)$ for all $j\in J$ and $1\leq q \leq m_j$ are distinct. Therefore, the sets $E_{j,q}$ in fact don't depend on $x$, so that the elements of $V$ are exactly those subject to the equivalence relation $\rho$ with classes $(E_{j,q})_{\substack{j\in J\\ 1\leq q\leq m_j}}.$ \end{proof} Putting everything together and applying Proposition \ref{subquotient} to $V\subset W$ with actions of the groups $H\subset \prod_{i\in I}\mathfrak{S}_{n_i}$, we get the result, since $S^{\varpi}(S^{\bullet}(\mathscr{X}/R))$ is by definition equal to $V/H$. \section{Cutting into pieces}\label{cuttingintopieces} \subsection{Introduction} The aim of this section is to state and prove an analogue in our setting of the following classical result about symmetric powers (see for example \cite{CNS}, chapter 6, proposition 1.1.7): \begin{prop}\label{cut} Let $X$ be a quasi-projective variety over a field $k$, and $Y$ a closed subvariety of $X$ with open complement $U$. 
For any integers $n\geq 1$ and $r\in\{0,\ldots,n\}$, the variety $$S^rU\times S^{n-r} Y$$ can be identified with the locally closed subset of $S^nX$ corresponding to effective zero-cycles of degree~$n$ the restriction of which to $U$ has degree $r$. Moreover, $S^nX$ is the disjoint union of these locally closed subsets. In particular, in terms of classes in $\mathrm{KVar}^+_k$, we have $$[S^nX] = \sum_{r=0}^n[S^rU][S^{n-r}Y]\in\mathrm{KVar}^+_k.$$\end{prop} Thus, starting with a zero-cycle $D$ of degree $n$ in $X$, we get through restriction a pair $(D_U,D_Y)$ of zero-cycles in $S^rU\times S^{n-r}Y$ for some $r$. Conversely, starting with a point of the latter variety, the sum of the two components yields an element in $S^nX$. More precisely, by the same argument we even have the following: \begin{prop}\label{cut2} (Refinement of Proposition \ref{cut}) Let $k, X, U, Y$ be as in Proposition \ref{cut}, and let $\pi\in\mathbf{N}^{(\mathbf{N}^*)}$ be a partition. Then $S^{\pi}X$ is the disjoint union of locally closed subsets isomorphic to $S^{\pi'}U\times S^{\pi-\pi'} Y$ where $\pi'$ runs through all partitions such that $\pi'\leq \pi$. In particular, in terms of classes in $\mathrm{KVar}^+_k$, we have $$[S^{\pi}X] = \sum_{\pi'\leq \pi}[S^{\pi'}U][S^{\pi-\pi'} Y]\in\mathrm{KVar}^+_k.$$\end{prop} To motivate the construction in the following paragraph, let us examine what exactly we need to get a result of this flavour for symmetric products. Fix a set $I$, and let $X$ be a variety over~$k$, and $\mathscr{X} = (X_i)_{i\in I}$ a family of varieties over~$X$. For all~$i\in I$, let~$Y_i$ be a closed subvariety of $X_i$, and~$U_i$ its complement, so that we get families of~$X$-varieties $\mathscr{Y} = (Y_i)_{i\in I}$ and $\mathscr{U} = (U_i)_{i\in I}$. Let $\pi = (n_i)_{i\in I}\in\mathbf{N}^{(I)}$. 
Any point $ D\in S^{\pi}\mathscr{X}$ is a zero-cycle contained in the disjoint union of (a finite number of) the $X_i$, and we can consider its restriction $D_{\mathscr{U}}$ to $\cup_{i\in I}U_i$. As in the discussion above, this clearly gives us a point in $S^{\pi'}\mathscr{U}$ for some $\pi'\leq \pi$, and the restriction $D_{\mathscr{Y}}$ to the elements of the family $\mathscr{Y}$ will then be a point in $S^{\pi-\pi'}\mathscr{Y}$. In other words, there is a well-defined immersion $$\alpha: S^{\pi}\mathscr{X}\longrightarrow \bigcup_{\pi'\leq \pi}S^{\pi'}\mathscr{U}\times S^{\pi-\pi'}\mathscr{Y}.$$ On the other hand, this morphism $\alpha$ will in general not be an isomorphism. Indeed, the inverse mapping $(D_1,D_2)\mapsto D_1 + D_2$ that worked in the above cases is well-defined only when $D_1$ and $D_2$ have disjoint supports, since, by definition, points of $S^{\pi}\mathscr{X}$ are zero-cycles with supports mapping injectively to $X$. Hence the image of $\alpha$ is the subset of $\bigcup_{\pi'\leq \pi}S^{\pi'}\mathscr{U}\times S^{\pi-\pi'}\mathscr{Y}$ mapping to pairs in $\bigcup_{\pi'\leq \pi}S^{\pi'}X\times S^{\pi-\pi'}X$ with disjoint supports. Thus, to generalise Proposition \ref{cut2}, we are going to define more general symmetric products which are \textit{mixed}, in the sense that we will combine different families of varieties before restricting to the complement of the diagonal in the base variety. In the case studied above, this construction will give us varieties $S^{\pi',\pi-\pi'}(\mathscr{U},\mathscr{Y})$, the union of which for all $\pi'\leq \pi$ corresponds exactly to the image of $\alpha$, so that the analogue of Proposition \ref{cut2} for symmetric products will take the form $$[S^{\pi}\mathscr{X}] = \sum_{\pi'\leq \pi}[S^{\pi',\pi-\pi'}(\mathscr{U},\mathscr{Y})]$$ in $\mathrm{KVar}^+_k$.
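As a sanity check in the simplest classical case, take $X = \mathbf{P}^1_k$ in Proposition \ref{cut}, with $Y$ a rational point and $U\simeq \mathbf{A}^1_k$ its open complement. Using Totaro's result that $[S^r\mathbf{A}^1_k] = \mathbf{L}^r$ (see \cite{CNS}, or the relative version proved in Section \ref{affinespaces} below), the cutting formula gives $$[S^n\mathbf{P}^1_k] = \sum_{r=0}^{n}[S^r\mathbf{A}^1_k][S^{n-r}Y] = \sum_{r=0}^{n}\mathbf{L}^r = [\mathbf{P}^n_k],$$ in agreement with the classical isomorphism $S^n\mathbf{P}^1_k\simeq \mathbf{P}^n_k$ identifying an effective divisor of degree $n$ with the degree~$n$ binary form cutting it out.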
\subsection{Mixed symmetric products}\label{mixedsymproducts} Let $X$ be a variety over $R$, $p\geq 1$ an integer, and $\mathscr{X}_{1},\ldots,\mathscr{X}_{p}$ families of varieties over~$X$, all indexed by the same set $I$. For all $j\in\{1,\ldots,p\}$ we write $\mathscr{X}_j = (X_{i,j})_{i\in I}$. We also fix for every $j\in\{1,\ldots,p\}$ an almost zero family of non-negative integers $\pi_j = (r_{i,j})_{i\in I}$ (that is, all $r_{i,j}$ but a finite number are zero). The product (over $R$) $$\prod_{i\in I}X_{i,1}^{r_{i,1}}\times\ldots \times X_{i,p}^{r_{i,p}}$$ is a variety over $$\prod_{i\in I} X^{r_{i,1}}\times \ldots \times X^{r_{i,p}}.$$ As before, we restrict it to the open subset $\left(\prod_{i\in I}X_{i,1}^{r_{i,1}}\times\ldots \times X_{i,p}^{r_{i,p}}\right)_*$ lying above the complement of the diagonal of $\prod_{i\in I} X^{r_{i,1}}\times \ldots \times X^{r_{i,p}} = X^{\sum_{i,j}r_{i,j}}$, that is, we remove points mapping to points having at least two equal coordinates. Then we take the quotient by the natural permutation action of $\prod_{i\in I}\mathfrak{S}_{r_{i,1}}\times \ldots \times \mathfrak{S}_{r_{i,p}}$. The resulting variety will be denoted by $$S^{\pi_1,\ldots,\pi_p}(\mathscr{X}_1,\ldots,\mathscr{X}_p).$$ \index{symmetric product!mixed} If for every $j\in\{1,\ldots,p\}$ all varieties $X_{i,j}$ are equal to some $X_j$, we may write this simply $S^{\pi_1,\ldots,\pi_p}(X_1,\ldots,X_p)$. As was the case in the construction of simple symmetric products, we could also first take the quotient, and then remove the closed set lying above the image of the diagonal by the quotient map. The following properties are obvious consequences of the definition: \begin{fact}\begin{enumerate}\item In the case $p=1$, we recover exactly the symmetric product $S^{\pi}\mathscr{X}$ from Section \ref{anyset}.
\item For all $\sigma\in \mathfrak{S}_p$, there is an isomorphism $$S^{\pi_1,\ldots,\pi_p}(\mathscr{X}_1,\ldots,\mathscr{X}_p) \simeq S^{\pi_{\sigma(1)},\ldots,\pi_{\sigma(p)}}(\mathscr{X}_{\sigma(1)},\ldots,\mathscr{X}_{\sigma(p)}).$$ \item If $\pi_p= 0$, then $$S^{\pi_1,\ldots,\pi_p}(\mathscr{X}_1,\ldots,\mathscr{X}_p) = S^{\pi_1,\ldots,\pi_{p-1}}(\mathscr{X}_1,\ldots,\mathscr{X}_{p-1}).$$ \item If there exists $i\in I$ such that $r_{i,p}>0$ and $X_{i,p} = \emptyset$, then $$S^{\pi_1,\ldots,\pi_p}(\mathscr{X}_1,\ldots,\mathscr{X}_p) = \emptyset.$$ \end{enumerate} \end{fact} \begin{remark}\label{separate} The open set $\left(\prod_{j=1}^pX^{\sum_{i\in I}r_{i,j}}\right)_*$, consisting of points all of whose coordinates are distinct, is a subset of $\prod_{j=1}^p \left(X^{\sum_{i\in I}r_{i,j}}\right)_*$, where only the projection to each factor $X^{\sum_{i\in I}r_{i,j}}$ is required to have distinct coordinates. Thus, there is an open immersion \begin{equation}\label{prodimmersion}S^{\pi_1,\ldots,\pi_p}(\mathscr{X}_1,\ldots,\mathscr{X}_p)\longrightarrow S^{\pi_1}\mathscr{X}_1\times \ldots \times S^{\pi_p}\mathscr{X}_p.\end{equation} It is an isomorphism if there is a partition of $X$ into locally closed subsets $V_1,\ldots,V_p$ such that for all $i\in I$ and all $j\in\{1,\ldots,p\}$, the image of $X_{i,j}$ in $X$ is contained in $V_j$. Indeed, then all elements in any $p$-tuple $(D_1,\ldots,D_p)\in S^{\pi_1}\mathscr{X}_1\times \ldots \times S^{\pi_p}\mathscr{X}_p$ have disjoint supports, and the inverse mapping is given by $(D_1,\ldots,D_p)\mapsto D_1 + \ldots + D_p$. \end{remark} We can now state our generalisation of Propositions \ref{cut} and \ref{cut2}. \begin{prop}\label{pieces} Let $X, \mathscr{X}_1,\ldots,\mathscr{X}_p$ be as above, and consider moreover another family $\mathscr{X} = (X_i)_{i\in I}$ of quasi-projective varieties over $X$, together with families $\mathscr{Y} = (Y_i)_{i\in I}$ and $\mathscr{U} = (U_i)_{i\in I}$ such that for every $i\in I$, $Y_i$ is a closed subvariety of $X_i$ and $U_i$ its complement.
Let $\pi_1,\ldots,\pi_p,\pi$ be elements of $\mathbf{N}^{(I)}$. Then for every $\pi'\leq \pi$ the variety $$S^{\pi_1,\ldots,\pi_{p},\pi',\pi -\pi'}(\mathscr{X}_1,\ldots,\mathscr{X}_{p},\mathscr{U},\mathscr{Y})$$ is isomorphic to the locally closed subset of $S^{\pi_1,\ldots,\pi_p,\pi}(\mathscr{X}_1,\ldots,\mathscr{X}_p,\mathscr{X})$ corresponding to zero-cycles with $\mathscr{X}$-component inducing partition $\pi'$ on $\mathscr{U}$. Moreover, the variety $$S^{\pi_1,\ldots,\pi_p,\pi}(\mathscr{X}_1,\ldots,\mathscr{X}_p,\mathscr{X})$$ is the disjoint union of these locally closed subsets, so that in terms of classes in $\mathrm{KVar}^+_{R}$, we have $$\left[S^{\pi_1,\ldots,\pi_p,\pi}(\mathscr{X}_1,\ldots,\mathscr{X}_p,\mathscr{X})\right] = \sum_{\pi'\leq \pi}\left[S^{\pi_1,\ldots,\pi_{p},\pi',\pi-\pi'}(\mathscr{X}_1,\ldots,\mathscr{X}_{p},\mathscr{U},\mathscr{Y})\right].$$ \end{prop} \begin{proof} We write $\pi = (n_i)_{i\in I}$, $\pi' = (m_i)_{i\in I}$ and for every $j\in\{1,\ldots,p\}$, $\pi_j = (r_{i,j})_{i\in I}$. According to Proposition \ref{cut}, the variety $\prod_{i\in I}S^{r_{i,1}}X_{i,1}\times\ldots\times S^{r_{i,p}}X_{i,p}\times S^{n_i}X_i$ is the disjoint union of locally closed subsets isomorphic to $$\prod_{i\in I}S^{r_{i,1}}X_{i,1}\times\ldots\times S^{r_{i,p}}X_{i,p}\times S^{m_{i}}U_i\times S^{n_i-m_i}Y_i $$ for all $\pi'\leq \pi$. Restricting to the images of points lying above the complement of the diagonal, we get the result. \end{proof} \subsection{Applications}\label{sect.cutapplications} As an immediate consequence of Proposition \ref{pieces} (for $p=0$) and Remark \ref{separate}, we get \begin{cor}\label{multzeta} Let $X$ be a variety over $R$ and $\mathscr{X} = (X_i)_{i\in I}$ a family of varieties over~$X$.
Let $Y$ be a closed subvariety of $X$, $U$ its open complement, and for all $i\in I$, define varieties $Y_i = X_i\times_X Y$, $U_i = X_i\times_X U$, and families $\mathscr{Y} = (Y_i)_{i\in I}$ and $\mathscr{U} = (U_i)_{i\in I}$ of varieties over $Y$ and~$U$, respectively. Then for all $\pi\in\mathbf{N}^{(I)}$ and for all $\pi'\leq \pi$, the variety $S^{\pi'}\mathscr{U}\times S^{\pi-\pi'}\mathscr{Y}$ is isomorphic to the locally closed subset of $S^{\pi}\mathscr{X}$ corresponding to families of zero-cycles inducing partition $\pi'$ on $\mathscr{U}$. Moreover, $S^{\pi}\mathscr{X}$ is the disjoint union of these locally closed subsets. In particular, in terms of classes in $\mathrm{KVar}^+_R$, we have $$[S^{\pi}\mathscr{X}] = \sum_{\pi'\leq \pi}[S^{\pi'}\mathscr{U}][S^{\pi-\pi'}\mathscr{Y}].$$ In terms of zeta-functions, this means that $$Z_{\mathscr{X}}(\mathbf{t}) = Z_{\mathscr{U}}(\mathbf{t})Z_{\mathscr{Y}}(\mathbf{t})$$ in $\mathrm{KVar}^+_R[[\t]].$ \end{cor} \begin{prop}\label{lociso} Let $X$ be a variety over $R$ and $\mathscr{X} = (X_i)_{i\in I}$ and $\mathscr{Y} = (Y_i)_{i\in I}$ families of varieties over $X$. Fix an element $\pi = (n_i)_{i\in I}\in\mathbf{N}^{(I)}$, and assume that for all $i\in I$ such that $n_i>0$, $X_i$ and $Y_i$ are piecewise isomorphic. Then $S^{\pi}\mathscr{X}$ and $S^{\pi}\mathscr{Y}$ are piecewise isomorphic: in other words, we have the equality $$Z_{\mathscr{X}}(\t) = Z_{\mathscr{Y}}(\t)$$ in $\mathrm{KVar}^+_R[[\t]].$ \end{prop} \begin{proof} By assumption, there is an integer $p$ with the property that, for all $i\in I$ satisfying $n_i>0$, we can partition $X_i$ (resp. $Y_i$) into locally closed sets $X_{i,1},\ldots,X_{i,p}$ (resp. $Y_{i,1},\ldots,Y_{i,p}$) such that for all $j\in\{1,\ldots,p\}$, $X_{i,j}$ is isomorphic to $Y_{i,j}$. For every $j\in\{1,\ldots,p\}$, we denote by $\mathscr{X}_j$ (resp. $\mathscr{Y}_j$) the family $(X_{i,j})_{i\in I}$ (resp. $(Y_{i,j})_{i\in I}$). 
Proposition \ref{pieces} together with an induction shows that $S^{\pi}\mathscr{X}$ is the disjoint union of locally closed subsets isomorphic to $S^{\pi_1,\ldots,\pi_p}(\mathscr{X}_1,\ldots,\mathscr{X}_p)$ for partitions $\pi_1,\ldots,\pi_p$ such that $\pi_1 + \ldots + \pi_p = \pi$. Since the same is true for $\mathscr{Y},\mathscr{Y}_1,\ldots,\mathscr{Y}_p$, and since the isomorphisms between the $X_{i,j}$ and $Y_{i,j}$ give isomorphisms $$S^{\pi_1,\ldots,\pi_p}(\mathscr{X}_1,\ldots,\mathscr{X}_p)\simeq S^{\pi_1,\ldots,\pi_p}(\mathscr{Y}_1,\ldots,\mathscr{Y}_p) $$ for all $\pi_1,\ldots,\pi_p$ such that $\pi_1+\ldots + \pi_p = \pi$, the result follows. \end{proof} \section{Symmetric products and affine spaces}\label{affinespaces} The following proposition is a direct generalisation of a result by Totaro (see \cite{CNS}, Proposition 5.1.5). In fact, it may be obtained directly by applying Totaro's statement to each factor in $\prod_{i\in I}S^{n_i}X_i$ and then restricting to the open subset of $\prod_{i\in I}S^{n_i}X_i$ lying above $S^{\pi}X\subset \prod_{i\in I}S^{n_i}X$, but we prefer to give a direct proof in the flavour of Totaro's argument: it is more straightforward here since one does not need to go through the step of cutting up the symmetric power with respect to different partitions. \begin{prop}\label{affine} Let $X$ be a variety over $R$ and $\mathscr{X} = (X_i)_{i\in I}$ a family of varieties over $X$. Moreover, let $\mathbf{m} = (m_i)_{i\in I}$ be a family of non-negative integers, $\mathscr{L}^{\bf{m}}$ the family of affine spaces $(\mathbf{A}_{R}^{m_i})_{i\in I}$ and $\mathscr{X}\times \mathscr{L}^{\bf{m}}$ the family $(X_i\times \mathbf{A}_{R}^{m_i})_{i\in I}$, each $X_i\times \mathbf{A}_{R}^{m_i}$ being seen as an $X$-variety through the first projection.
Then for all $\pi = (n_i)_{i\in I}\in \mathbf{N}^{(I)}$, the variety $S^{\pi}(\mathscr{X}\times\mathscr{L}^{\mathbf{m}})$ is endowed with the structure of a vector bundle of rank $\sum_{i\in I} m_in_i$ over $S^{\pi}(\mathscr{X})$, so that in particular $$[S^{\pi}(\mathscr{X}\times \mathscr{L}^{\bf{m}})] = [S^{\pi}(\mathscr{X})]\mathbf{L}^{\sum_{i\in I}m_in_i}$$ in $\mathrm{KVar}^+_{R}$. \end{prop} \begin{proof} The first projections induce natural maps $$\prod_{i\in I}(X_i\times \mathbf{A}_{R}^{m_i})^{n_i}\longrightarrow \prod_{i\in I}X_i^{n_i}$$ over $\prod_{i\in I}X^{n_i}$. Restricting to points mapping to the complement of the diagonal in $\prod_{i\in I}X^{n_i}$, we get a map $$\left(\prod_{i\in I}(X_i\times \mathbf{A}_{R}^{m_i})^{n_i}\right)_*\longrightarrow \left(\prod_{i\in I}X_i^{n_i}\right)_*$$ that gives us in turn a map $$p:S^{\pi}(\mathscr{X}\times \mathscr{L}^{\bf{m}})\longrightarrow S^{\pi}(\mathscr{X})$$ after taking the quotient by the natural action of the group $\prod_{i\in I}\mathfrak{S}_{n_i}$. On the other hand, the variety $\left(\prod_{i\in I}(X_i\times \mathbf{A}_{R}^{m_i})^{n_i}\right)_*$ is by definition isomorphic to $$\prod_{i\in I}(X_i\times \mathbf{A}_{R}^{m_i})^{n_i}\times_{\prod_{i\in I}X^{n_i}}\left(\prod_{i\in I}X^{n_i}\right)_*\simeq \left(\prod_{i\in I}X_i^{n_i}\right)_*\times \mathbf{A}_{R}^{\sum_{i\in I}n_im_i}.$$ Thus, we have a cartesian diagram $$\xymatrix{\left(\prod_{i\in I}X_i^{n_i}\right)_*\times \mathbf{A}_{R}^{\sum_{i\in I}n_im_i} \ar[r]\ar[d]^{q'} & \left(\prod_{i\in I}X_i^{n_i}\right)_*\ar[d]^q\\ S^{\pi}(\mathscr{X}\times \mathscr{L}^{\bf{m}})\ar[r]^{\ p}& S^{\pi}(\mathscr{X}) }$$ with $q'$ being the quotient by the permutation action of $\prod_{i\in I}\mathfrak{S}_{n_i}$. Through our identification, this permutation action becomes linear, endowing $S^{\pi}(\mathscr{X}\times \mathscr{L}^{\bf{m}})$ étale-locally with a structure of vector bundle of rank $\sum_{i\in I}n_im_i$ over $S^{\pi}(\mathscr{X})$.
By Hilbert's theorem 90, $S^{\pi}(\mathscr{X}\times \mathscr{L}^{\bf{m}})$ is a vector bundle of the same rank over $S^{\pi}(\mathscr{X})$, whence the result. \end{proof} This implies the following formula for zeta-functions: \begin{cor}\label{zetaaffine} Let $X$, $\mathscr{X}$, $\mathscr{L}^{\bf{m}}$ be as above. Then writing $\mathbf{L}^{\mathbf{m}}\t = (\mathbf{L}^{m_i}t_i)_{i\in I}$ we have $$Z_{\mathscr{X}}(\mathbf{L}^{\mathbf{m}}\t) = Z_{\mathscr{X}\times \mathscr{L}^{\mathbf{m}}}(\t)$$ in $\mathrm{KVar}^+_{R}[[\t]]$. \end{cor} \section{Symmetric products of non-effective classes}\label{symprodclasses} For the moment, we have only defined the symmetric products $S^{\pi}\mathscr{X}$ in the case where the family~$\mathscr{X}$ is a family of quasi-projective varieties over $X$. The aim of this section is to generalise this definition in the case when $\mathscr{X}$ is merely a family of classes in $\mathrm{KVar}_{X}$. For this, recall that in remark \ref{diagonal}, we noted that one could define the symmetric product $S^{\pi}\mathscr{X}$ of a family of varieties $\mathscr{X} = (X_i)_{i\in I}$ over a variety $X$ for $\pi = (n_i)_{i\in I}$ alternatively by first considering the product $\prod_{i\in I}S^{n_i}X_i$ and then restricting to the appropriate open subset, namely the one lying above the open subset $S^{\pi}X$ of $\prod_{i\in I}S^{n_i}X.$ In this section, we are first going to define, for a class $Y\in \mathrm{KVar}_X$, its symmetric power $S^nY$ as an element of $\mathrm{KVar}_{S^nX}$. The symmetric product $S^{\pi}\mathscr{X}$ of a family of classes $(X_i)_{i\in I}$ in $\mathrm{KVar}_X$ will then be defined, by analogy with remark \ref{diagonal}, as the pullback to $\mathrm{KVar}_{S^{\pi}X}$ of the element $$\prod_{i\in I}S^{n_i}X_i\in \mathrm{KVar}_{\prod_{i\in I}S^{n_i}X}.$$ \subsection{Relative symmetric powers}\label{sect.symprodgrouplaw} Let $X$ be a quasi-projective variety over $R$. 
All (fibred and exterior) products in this section are over $R$, though we will not write it to simplify notation. The product $\prod_{n\geq 1}\mathrm{KVar}_{S^nX}$ is an additive group, but it may also be endowed with a commutative multiplicative group structure, in the following manner: For $a = (a_n)_{n\geq 1}$ and $b = (b_n)_{n\geq 1}\in \prod_{n\geq 1}\mathrm{KVar}_{S^nX}$, we put $$ (ab)_n = \sum_{k=0}^na_k\boxtimes b_{n-k}$$ where by convention $a_0 = b_0 = 1$, and $a_k\boxtimes b_{n-k}$ is the image of $(a_k,b_{n-k})$ through the composition $$\mathrm{KVar}_{S^kX}\times \mathrm{KVar}_{S^{n-k}X}\xrightarrow{\boxtimes}\mathrm{KVar}_{S^kX\times S^{n-k}X}\to \mathrm{KVar}_{S^nX}$$ where the latter morphism is obtained from the natural map $X^n\to S^nX$ by passing to the quotient with respect to the natural action of the group $\mathfrak{S}_k\times \mathfrak{S}_{n-k}$ on $X^n$. Observe that the neutral element for this law is the family $(e_n)_{n\geq 1}$ with $e_i = 0$ for all $i\geq 1$. Associativity is obtained by using, for all $n\geq 1$ and all $i,j,k$ such that $i + j + k = n$ the commutativity of the diagram $$\xymatrix{(S^i X \times S^{j} X) \times S^kX \ar[r]\ar[d]^{\simeq} & S^{i+j}X\times S^k X \ar[r] & S^{n} X\\ S^iX\times (S^{j} X \times S^kX) \ar[r] & S^iX\times S^{j+k}X \ar[ur] }$$ On the other hand, starting from $a = (a_n)_{n\geq 1}\in \prod_{n\geq 1}\mathrm{KVar}_{S^nX}$, we may define its inverse $b = (b_n)_{n\geq 1}\in \prod_{n\geq 1}\mathrm{KVar}_{S^nX}$ by induction: put $b_0 =1$, $b_1 = -a_1$, and assume $b_0,\ldots,b_{n-1}$ to be constructed. Then $b_n$ is the element of $\mathrm{KVar}_{S^nX}$ defined by $$b_n = -\sum_{k=1}^n a_k\boxtimes b_{n-k}.$$ \begin{notation} We denote by $\mathrm{KVar}_{S^{\bullet}X}$ the group $\prod_{n\geq 1}\mathrm{KVar}_{S^nX}$ with this law. 
\end{notation} \begin{lemma}\label{varmorphism} There is a unique group morphism $$S:\mathrm{KVar}_X \to \mathrm{KVar}_{S^{\bullet}X},$$ such that the image of the class of a quasi-projective variety $Y$ over $X$ is the family $(S^nY)_{n\geq 1}$. \index{S@$S$} \end{lemma} \begin{proof} The condition in the statement defines a morphism $S'$ on the free abelian group with generators the isomorphism classes of quasi-projective varieties over $X$. Consider a quasi-projective variety $Y$ over $X$, and a closed subscheme $Z$ of $Y$, with open complement $U$. We know that $S^nY$ is the disjoint union of locally closed subsets isomorphic to $S^kZ\times S^{n-k} U$ for $k=0,\ldots,n$, and these isomorphisms are over $S^nX$, so $S'(Y) = S'(Z)S'(U)$, and $S'$ descends to a well-defined group morphism as in the statement of the lemma, using the fact that $\mathrm{KVar}_X$ is generated by classes of quasi-projective varieties. \end{proof} \begin{cor} For every $n\geq 1$ the class in $\mathrm{KVar}_{S^nX}$ of the symmetric power $S^{n}Y$ of a variety $Y$ over $X$ depends only on the class of $Y$ in $\mathrm{KVar}_{X}$. \end{cor} \begin{remark} In the same manner, one may put a (commutative) monoid structure on $\mathrm{KVar}^+_{S^{\bullet}X} := \prod_{i\geq 1} \mathrm{KVar}^+_{S^iX}$, and there is a unique morphism of monoids $$S^{+}:\mathrm{KVar}^+_X \to \mathrm{KVar}^+_{S^{\bullet}X},$$ such that the image of the class of a quasi-projective variety $Y$ over $X$ is the family $(S^nY)_{n\geq 1}$. Moreover, $S^{+}$ induces $S$ on $\mathrm{KVar}_{X}$. \end{remark} \begin{definition}\label{sympowerdef} Let $\a \in\mathrm{KVar}_X$ and $n\geq 1$. The $n$-th symmetric power of $\a$, denoted by $S^n\a$, \index{symmetric power! of a class} is the image of $S(\a)$ through the projection $$\prod_{i\geq 1}\mathrm{KVar}_{S^iX}\to \mathrm{KVar}_{S^nX}$$ onto the $n$-th component. 
\end{definition} \begin{remark}\label{generalcut} Lemma \ref{varmorphism} implies that for any $\a,\b\in\mathrm{KVar}_X$ and for any $n\geq 1$, $$S^{n}(\a + \b) = \sum_{k=0}^nS^{k}\a\boxtimes S^{n-k}\b$$ in $\mathrm{KVar}_{S^nX}$, where every term on the right-hand side is seen as an element of $\mathrm{KVar}_{S^{n}X}$ via the natural map $S^{k}X\times S^{n-k}X\to S^nX$. In particular, we have the equality of classes $$[S^{n}(\a + \b)]= \sum_{k=0}^n[S^{k}\a][S^{n-k}\b]$$ in $\mathrm{KVar}_R$. \end{remark} \subsection{Definition of symmetric products of classes in $\mathrm{KVar}_X$} Let $X$ be a quasi-projective variety over $R$, $\mathscr{A} = (\a_i)_{i\in I}$ a family of elements of $\mathrm{KVar}_X$, $n\geq 1$ an integer and $\pi = (n_i)_{i\in I}$ a partition of $n$. Consider the natural projection morphism $$p: \prod_{i \in I}X^{n_i}\to \prod_{i\in I}S^{n_i}X,$$ which is finite and surjective. Let $U := \left(\prod_{i\in I}X^{n_i}\right)_*\subset \prod_{i\in I}X^{n_i}$ be the complement of the diagonal, that is, the open subset of points with all coordinates being distinct. Then by construction, $S^{\pi}X$ is the open subset $p(U)$ of the variety $\prod_{i\in I}S^{n_i}X.$ Using Definition \ref{sympowerdef}, we may consider the element $\prod_{i\in I}S^{n_i}\a_i \in \mathrm{KVar}_{\prod_{i\in I}S^{n_i}X}$. \begin{definition}\label{symproddef}\begin{enumerate}\item The element $S^{\pi}\mathscr{A}\in \mathrm{KVar}_{S^{\pi}X}$ is defined to be the image of $\prod_{i\in I}S^{n_i}\a_i $ through the restriction morphism $$\mathrm{KVar}_{\prod_{i\in I}S^{n_i}X}\to \mathrm{KVar}_{S^{\pi}X}.$$ \item More generally, let $p\geq 1$ be an integer, $X_1,\ldots,X_p$ quasi-projective varieties over~$R$, and for every $j\in\{1,\ldots,p\}$, let $\mathscr{A}_j= (\a_{i,j})_{i\in I}$ be a family of elements of $\mathrm{KVar}_{X_j}$.
For all partitions $\pi_1,\ldots,\pi_p$ with $\pi_j = (r_{i,j})_{i\in I}$ we define the \textit{mixed symmetric product} $S^{\pi_1,\ldots,\pi_p}(\mathscr{A}_1,\ldots,\mathscr{A}_p)$ as the image of $\prod_{i\in I}S^{r_{i,1}}\a_{i,1}\times\ldots \times S^{r_{i,p}}\a_{i,p}$ through the restriction morphism $$\mathrm{KVar}_{\prod_{i\in I}S^{r_{i,1}}X_1\times\ldots\times S^{r_{i,p}}X_p}\to \mathrm{KVar}_{S^{\pi_1,\ldots,\pi_p}(X_1,\ldots,X_p)}.$$ \end{enumerate} \end{definition} \begin{remark}\label{separate2} The above restriction morphism factors through $\mathrm{KVar}_{S^{\pi_1}X_1\times \ldots\times S^{\pi_p}X_p}$ by Remark~\ref{separate}. In the case where immersion (\ref{prodimmersion}) is an isomorphism, we have the equality $$S^{\pi_1}\mathscr{A}_1\boxtimes \ldots\boxtimes S^{\pi_p}\mathscr{A}_p = S^{\pi_1,\ldots,\pi_p}(\mathscr{A}_1,\ldots,\mathscr{A}_p)$$ in $\mathrm{KVar}_{S^{\pi_1}X_1\times \ldots\times S^{\pi_p}X_p} = \mathrm{KVar}_{S^{\pi_1,\ldots,\pi_p}(X_1,\ldots,X_p)}.$ \end{remark} \begin{remark}\label{generalzeta}Using the first part of the definition, we may extend Definition \ref{def.zetafunction} and get a notion of zeta-function $Z_{\mathscr{A}}(\t)\in\mathrm{KVar}_{R}[[\t]]$ of a family of elements of $\mathrm{KVar}_X$. \end{remark} \begin{prop}\label{generalcut2mixed} Let $X$ be a quasi-projective variety over $R$. Let $\mathscr{A} = (\a_i)_{i\in I}$, $\mathscr{B} = (\b_i)_{i\in I}$, $\mathscr{C} = (\mathfrak{c}_i)_{i\in I}$ be families of elements of $\mathrm{KVar}_X$ such that for every $i\in I$, $\a_i = \b_i + \mathfrak{c}_i$. For every partition $\pi= (n_i)_{i\in I }$ we have the equality $$S^{\pi}\mathscr{A} = \sum_{\pi'\leq \pi}S^{\pi',\pi-\pi'}(\mathscr{B},\mathscr{C})$$ in $\mathrm{KVar}_{S^{\pi}X}$, where each term on the right-hand side is considered as an element of $\mathrm{KVar}_{S^{\pi}X}$ via the natural morphism $S^{\pi'}X\times S^{\pi-\pi'}X \to S^{\pi}X$.
\end{prop} \begin{proof} According to Remark \ref{generalcut}, we may write $$\prod_{i\in I}S^{n_i}\a_i = \prod_{i\in I}\left(\sum_{0\leq m_i\leq n_i}S^{m_i}\b_i\boxtimes S^{n_i-m_i}\mathfrak{c}_i\right) = \sum_{\pi' = (m_i)_{i\in I}\leq\pi}\prod_{i\in I}S^{m_i}\b_i\boxtimes S^{n_i-m_i}\mathfrak{c}_i$$ in $\mathrm{KVar}_{\prod_{i\in I}S^{n_i}X}$, where each $S^{m_i}\b_i\boxtimes S^{n_i-m_i}\mathfrak{c}_i$ is seen as an element of $\mathrm{KVar}_{S^{n_i}X}$ through the natural morphism $S^{m_i}X\times S^{n_i-m_i}X\to S^{n_i}X$. This gives the result by restriction to~$S^{\pi}X$. \end{proof} \begin{cor}\label{generalcut2} Let $X$ be a quasi-projective variety over $R$, $Y$ a closed subscheme of $X$ and $U$ its open complement. Let $\mathscr{A} = (\a_i)_{i\in I}$ be a family of elements of $\mathrm{KVar}_X$, and let $\mathscr{B} = (\b_i)_{i\in I}$ and $\mathscr{C} = (\mathfrak{c}_i)_{i\in I}$ be the families of elements of $\mathrm{KVar}_U$ (resp. $\mathrm{KVar}_Y$) obtained by restriction from~$\mathscr{A}$. For every partition $\pi= (n_i)_{i\in I }$ we have the equality $$S^{\pi}\mathscr{A} = \sum_{\pi'\leq \pi}S^{\pi'}\mathscr{B}\boxtimes S^{\pi-\pi'}\mathscr{C}$$ in $\mathrm{KVar}_{S^{\pi}X}$, where each term on the right-hand side is considered as an element of $\mathrm{KVar}_{S^{\pi}X}$ via the natural immersion $S^{\pi'}U\times S^{\pi-\pi'}Y \to S^{\pi}X$. In particular, we have the equality $Z_{\mathscr{A}}(\t) = Z_{\mathscr{B}}(\t)Z_{\mathscr{C}}(\t)$ in $\mathrm{KVar}_{R}[[\t]]$. \end{cor} \section{Symmetric products of varieties with exponentials}\label{sect.symprodexp} \subsection{The symmetric product of a family of varieties with exponentials}\label{sect.symprodexpvarietiesdef} Fix a variety $X$ over $R$, as well as $(\mathscr{X},f) = (X_i,f_i)_{i\in I}$ a family of varieties over $X$ with exponentials.
For any $\pi\in\mathbf{N}^{(I)}$, recall that the symmetric product $S^{\pi}\mathscr{X}$ is constructed as a quotient of an open subset of the product $$\prod_{i\in I}X_i^{n_i}.$$ The latter is endowed with the map $$\begin{array}{lccc}f^{\pi}:&\prod_{i\in I}X_i^{n_i}&\longrightarrow& \mathbf{A}^1\\ &(x_{i,1},\ldots,x_{i,n_i})_{i\in I} & \mapsto & \sum_{i\in I}(f_{i}(x_{i,1}) + \ldots + f_{i}(x_{i,n_i})). \end{array}$$ This map restricts to points lying above the complement of the diagonal, and is invariant under the action of $\prod_{i\in I}\mathfrak{S}_{n_i}$. Therefore it descends to a map $$f^{(\pi)}:S^{\pi}\mathscr{X}\longrightarrow \mathbf{A}^1,$$\index{fpi@$f^{(\pi)}$} and we may define the symmetric product $S^{\pi}(\mathscr{X},f)$ to be the variety $S^{\pi}\mathscr{X}$ endowed with the map $f^{(\pi)}$. \index{symmetric product!of varieties with exponentials} \begin{remark} Let $X$ be a variety over $R$, $f:X\to \mathbf{A}^1$ a morphism and $n\geq 1$ an integer. Then there is a morphism $$\begin{array}{lccc}f^{(n)}:&S^nX&\to &\mathbf{A}^1\\ &x_1 + \ldots + x_n & \mapsto & f(x_1) + \ldots + f(x_n) \end{array}$$ induced by $f^{\oplus n}:X^n\to \mathbf{A}^1$. Its restriction to the locally closed subset $S^{\pi}X$ is given by $$\sum_{i\geq 1}i(x_{i,1} + \ldots + x_{i,n_i}) \mapsto \sum_{i\geq 1} i(f(x_{i,1}) + \ldots + f(x_{i,n_i})).$$ Thus, the restriction $f^{(n)}_{|S^{\pi}X}$ to $S^{\pi}X$ is exactly the map $g^{(\pi)}$ where $g$ is the family $(if)_{i\geq 1}$ of morphisms $X\to \mathbf{A}^1$. \end{remark}\index{fn@$f^{(n)}$} \subsection{Iterated symmetric products} Here we generalise Proposition \ref{iterate} to varieties with exponentials. We refer to the corresponding section for the definition of the map $\mu$ and other pieces of notation. Let $X$ be a variety over a variety $R$, which itself lies above some $k$-variety~$R'$, and let $(\mathscr{X},f) = (X_i,f_i)_{i\in I}$ be a family of varieties with exponentials over $X$.
Let $\pi\in\mathbf{N}^{(I)}$ and let $\varpi = (m_j)_{j\in J}\in\mu^{-1}(\pi)$. Recall from (\ref{VtoW}) and the discussion that follows that we have an immersion \begin{equation}\label{VtoWbis}\left(\prod_{j\in J}\left(\left(\prod_{i\in I}X_i^{n_i^j}\ _{/R}\right)_{\!\!\!*,X}\right)^{m_j}_{/R'}\right)_{\!\!\!*,R} \hookrightarrow \left(\prod_{i\in I}X_i^{n_i}\ _{/R'}\right)_{\!\!\!*,X}.\end{equation} which, after passing to the quotient by the appropriate group actions, induces the isomorphism $$u_{\varpi}: S^{\varpi}(S^{\bullet}(\mathscr{X}/R)/R')\to S^{\pi}_{\varpi}(\mathscr{X}/R').$$ By the construction in Section~\ref{sect.symprodexpvarietiesdef} there is a morphism $f^{(\varpi)}:S^{\varpi}(S^{\bullet}(\mathscr{X}/R)/R')\to \mathbf{A}^1$ induced by the morphism $$\prod_{j\in J}\left(f^{(\pi_j)}\right)^{\oplus m_j}:\ \ \prod_{j\in J}\left(S^{\pi_j}(\mathscr{X}/R)\right)^{m_j}\ _{/R'}\to \mathbf{A}^1$$ where each $f^{(\pi_j)}$ is itself induced by $$f^{\pi_j} = \prod_{i\in I} f_i^{\oplus n_i^j}:\ \prod_{i\in I}X_i^{n_i^{j}}\ _{/R}\to \mathbf{A}^1.$$ Thus, $f^{(\varpi)}$ is induced, via $(\ref{VtoWbis})$ and passing to the quotient, by the morphism $f^{\pi}= \prod_{i\in I}f_i^{\oplus n_i} = \prod_{j\in J}\left(\prod_{i\in I}f_i^{\oplus n_i^j}\right)^{\oplus m_j}$ defined on $\prod_{i\in I}X_i^{n_i}\ _{/R}$. We may conclude that for all $\varpi$, we have $f^{(\pi)}\circ u_{\varpi} = f^{(\varpi)}$, which gives the following proposition: \begin{prop}\label{expiterate} Let $X$ be a variety over a variety $R$, which itself lies above some $k$-variety~$R'$, and let $(\mathscr{X},f) = (X_i,f_i)_{i\in I}$ be a family of varieties with exponentials over~$X$.
Then for every $\pi\in\mathbf{N}^{(I)}$ and for every $\varpi\in\mu^{-1}(\pi)$, there is an isomorphism $u_{\varpi}$ of the constructible set $S^{\varpi}(S^{\bullet}(\mathscr{X}/R)/R')$ onto a locally closed subset $S^{\pi}_{\varpi}(\mathscr{X}/R')$ of $S^{\pi}(\mathscr{X}/R')$, satisfying $f^{(\pi)}\circ u_{\varpi} = f^{(\varpi)}$; moreover, $S^{\pi}(\mathscr{X}/R')$ is equal to the disjoint union of the sets $S^{\pi}_{\varpi}(\mathscr{X}/R')$ for $\varpi\in\mu^{-1}(\pi)$. In particular, we have the equality $$\sum_{\varpi\in \mu^{-1}(\pi)}[S^{\varpi}(S^{\bullet}(\mathscr{X}/R)/R'),f^{(\varpi)}] = [S^{\pi}(\mathscr{X}/R'),f^{(\pi)}]$$ in $\mathrm{KExpVar}^+_{R'}$. \end{prop} \subsection{Cutting into pieces} More generally, starting from families of varieties with exponentials $(\mathscr{X}_1,f_1),\ldots,(\mathscr{X}_p,f_p)$ over $X$, for any $\pi_1,\ldots,\pi_p\in \mathbf{N}^{(I)}$ the mixed symmetric product $$S^{\pi_1,\ldots,\pi_p}(\mathscr{X}_1,\ldots,\mathscr{X}_p)$$ from Section \ref{mixedsymproducts} may be endowed with a map $(f_1,\ldots,f_p)^{(\pi_1,\ldots,\pi_p)}$ to $\mathbf{A}^1$. Moreover, Proposition \ref{pieces} may be extended in the following way: \begin{prop}\label{exppieces} Let $X$ be a variety over $R$, and $(\mathscr{X}_1,f_1),\ldots,(\mathscr{X}_p,f_p)$ families of varieties with exponentials over~$X$, and consider moreover another family $(\mathscr{X},g) = (X_i,g_i)_{i\in I}$ of varieties over~$X$, together with families $$(\mathscr{U},g_{|\mathscr{U}}) = (U_i, g_{i|U_i})_{i\in I}\ \ \text{and}\ \ (\mathscr{Y},g_{|\mathscr{Y}}) = (Y_i, g_{i|Y_i})_{i\in I}$$ such that for every $i\in I$, $Y_i$ is a closed subvariety of $X_i$ and $U_i$ its complement. Let $\pi_1,\ldots,\pi_p,\pi$ be elements of~$\mathbf{N}^{(I)}$.
Then for every $\pi'\in\mathbf{N}^{(I)}$ such that $\pi'\leq \pi$ there is an isomorphism $u_{\pi'}$ from the variety $$S^{\pi_1,\ldots,\pi_{p},\pi',\pi -\pi'}(\mathscr{X}_1,\ldots,\mathscr{X}_{p},\mathscr{U},\mathscr{Y})$$ to the locally closed subset of $S^{\pi_1,\ldots,\pi_p,\pi}(\mathscr{X}_1,\ldots,\mathscr{X}_p,\mathscr{X})$ corresponding to points with $\mathscr{X}$-component inducing partition $\pi'$ on $\mathscr{U}$, such that $$(f_1,\ldots,f_p,g)^{(\pi_1,\ldots,\pi_p,\pi)}\circ u_{\pi'} = (f_1,\ldots,f_p,g_{|\mathscr{U}},g_{|\mathscr{Y}})^{(\pi_1,\ldots,\pi_p,\pi',\pi-\pi')}.$$ Moreover, $S^{\pi_1,\ldots,\pi_p,\pi}(\mathscr{X}_1,\ldots,\mathscr{X}_p,\mathscr{X})$ is the disjoint union of these locally closed subsets, so that in terms of classes in $\mathrm{KExpVar}^+_{R}$, we have $$\left[S^{\pi_1,\ldots,\pi_p,\pi}(\mathscr{X}_1,\ldots,\mathscr{X}_p,\mathscr{X}),(f_1,\ldots,f_p,g)^{(\pi_1,\ldots,\pi_p,\pi)}\right] $$ $$= \sum_{\pi'\leq \pi}\left[S^{\pi_1,\ldots,\pi_{p},\pi',\pi-\pi'}(\mathscr{X}_1,\ldots,\mathscr{X}_{p},\mathscr{U},\mathscr{Y}),(f_1,\ldots,f_p,g_{|\mathscr{U}},g_{|\mathscr{Y}})^{(\pi_1,\ldots,\pi_p,\pi',\pi-\pi')}\right].$$ \end{prop} \subsection{Compatibility with affine spaces} \begin{lemma}\label{expcompaffspaces} Let $Y$ be a quasi-projective variety over a field $k$ and $f:Y\to \mathbf{A}^1$ a morphism. Then for every $\lambda\in k$, and every integer $n\geq 0$ we have the equality: $$[S^{n}(Y\times\mathbf{A}^1, f\oplus \lambda\mathrm{id})] = \left\{\begin{array}{cc}[S^nY]\mathbf{L}^{n}& \text{if}\ \lambda = 0\\ 0 & \text{otherwise}\end{array}\right. $$ in $\mathrm{KExpVar}_{S^{n}Y}$. \end{lemma} \begin{proof} Consider the commutative diagram $$\xymatrix{\left(Y\times \mathbf{A}^1\right)^n\ar[d]^{q'} \ar[r]^-{p'} & Y^n\ar[d]^q\\ S^n(Y\times \mathbf{A}^1)\ar[r]^-{p} & S^nY}$$ where the vertical arrows are the quotient maps.
By the proof of Totaro's lemma (see the proof of Proposition \ref{affine}, or \cite{CNS}, Chapter 6, Proposition 3.1.5), for every partition $\pi$ of $n$, the lower arrow endows $p^{-1}(S^{\pi}Y)$ with the structure of a vector bundle of rank $n$ over $S^{\pi}Y$ which is locally trivial for the Zariski topology. On the other hand, by definition, the symmetric product $S^{n}(Y\times \mathbf{A}^1)$ is endowed with the morphism $(f\oplus \lambda\mathrm{id})^{(n)}$ induced by the morphism $(f\oplus \lambda\mathrm{id})^{\oplus n}$ given by $$\begin{array}{ccc} (Y\times \mathbf{A}^1)^{n} = Y^n\times \mathbf{A}^n& \to & \mathbf{A}^1\\ (y_1,\ldots,y_n,t_1,\ldots,t_n)& \mapsto & f(y_1) + \ldots + f(y_n) +\lambda(t_1+\ldots + t_n) \end{array}$$ Consider a point $y\in S^{\pi}Y$. We know that the fibre of $p^{-1}(S^{\pi}Y)\to S^{\pi}Y$ above this point is an affine space $\mathbf{A}^n_{\kappa(y)}$. The linear form $(t_1,\ldots , t_n) \mapsto \lambda(t_1 + \ldots + t_n)$ on the general fibre of the trivial vector bundle in the top row of the diagram descends (via the permutation action, which is linear) to some linear form $\ell$ on $\mathbf{A}^n_{\kappa(y)}$, which will be zero if and only if $\lambda$ is zero. Thus, since by construction the morphism $(f \oplus\lambda\mathrm{id})^{(n)}$ coincides with $f^{(n)}$ on the zero-section of the vector bundle $p^{-1}(S^{\pi}Y)\to S^{\pi}Y$, we have, for any $(t_1,\ldots,t_n)\in\mathbf{A}^n_{\kappa(y)}$ $$(f\oplus \lambda\mathrm{id})^{(n)}(y,t_1,\ldots,t_n) = f^{(n)}(y) + \ell(t_1,\ldots,t_n),$$ with $\ell$ a linear form, which is zero if and only if $\lambda = 0$.
Note that because of the last axiom defining $\mathrm{KExpVar}_k$, we have, using a linear change of basis, that $$ [\mathbf{A}^n,\ell] = \left\{\begin{array}{rc}\mathbf{L}^n & \text{if}\ \ell= 0\\ 0 & \text{otherwise.}\end{array}\right.$$ Taking $y$ to be a generic point of $S^{\pi}Y$ and spreading out to some trivialising open set $U$ for the vector bundle $p^{-1}(S^{\pi}Y)\to S^{\pi}Y$, we have $$[p^{-1}(S^{\pi}Y)_{|U}, (f\oplus\lambda\mathrm{id})^{(n)}_{|p^{-1}(S^{\pi}(Y))_{|U}}] = [U,f^{(n)}_{|U}][\mathbf{A}^n,\ell] = \left\{\begin{array}{rc}[U,f^{(n)}_{|U}]\mathbf{L}^n & \text{if}\ \lambda= 0\\ 0 & \text{otherwise.}\end{array}\right.$$ Repeat the process for a generic point of $S^{\pi}Y\setminus U$. By Noetherian induction, the result follows by taking the sum over all partitions of $n$. \end{proof} \subsection{Symmetric products of classes in Grothendieck rings with exponentials} Let $X$ be a quasi-projective variety. The same procedure as in section \ref{sect.symprodgrouplaw} endows the product $\prod_{i\geq 1}\mathrm{KExpVar}_{S^iX}$ with a group law, and this group will be denoted $\mathrm{KExpVar}_{S^{\bullet}X}$. \begin{prop} Let $X$ be a quasi-projective variety. There is a unique group morphism $$S^{\mathrm{exp}}:\mathrm{KExpVar}_X \to \mathrm{KExpVar}_{S^{\bullet}X}$$\index{Se@$S^{\exp}$} sending the class $[Y,f]$ of a quasi-projective variety $Y$ with a morphism $f:Y\to \mathbf{A}^1$, to the family of classes $([S^{i}Y,f^{(i)}])_{i\geq 1}.$ Moreover, there is a commutative diagram $$\xymatrix{\mathrm{KVar}_X \ar[r]^S \ar[d]&\mathrm{KVar}_{S^{\bullet}X}\ar[d]\\ \mathrm{KExpVar}_X \ar[r]^{S^{\exp}}& \mathrm{KExpVar}_{S^{\bullet}X}}$$ where the vertical arrows are given by the injections $\mathrm{KVar}\to\mathrm{KExpVar}$. 
\end{prop} \begin{proof} Define a morphism $S'$ from the free abelian group on pairs $(Y,f)$, where $Y$ is a quasi-projective variety over $X$ and $f:Y\to \mathbf{A}^1$ a morphism, to $\mathrm{KExpVar}_{S^{\bullet}X}$ by $S'((Y,f)) = ([S^{i}Y,f^{(i)}])_{i\geq 1}$. It suffices now to check that $S'$ passes to the quotient through the three relations defining $\mathrm{KExpVar}_X$. It is clear that $S'$ is constant on isomorphic pairs. If $Z$ is a closed subscheme of $Y$ with open complement $U$, then for any $n\geq 1$, $[S^nY,f^{(n)}] = \sum_{i=0}^n[S^iU,f_{|U}^{(i)}][S^{n-i}Z,f_{|Z}^{(n-i)}]$, so that $S'(Y) = S'(U)S'(Z)$. Finally, it follows from lemma \ref{expcompaffspaces} that for any quasi-projective variety $Y$ over $X$, the class $[Y\times_{\mathbf{Z}} \mathbf{A}^1, \mathrm{pr}_2]$ goes to zero. The commutativity of the diagram may be checked for classes of varieties $[Y\to X]$, where it is immediate. \end{proof} The notion of symmetric product of a class in $\mathrm{KVar}_X$ may therefore be extended to $\mathrm{KExpVar}_X$ in the following manner: \begin{definition} Let $\a\in\mathrm{KExpVar}_X$ and $n\geq 1$. The $n$-th symmetric product of $\a$, denoted by $S^n\a$, is the image of $S^{\exp}(\a)$ through the projection $$\prod_{i\geq 1}\mathrm{KExpVar}_{S^iX} \to \mathrm{KExpVar}_{S^nX}$$ onto the $n$-th component. \end{definition} \begin{remark}\label{symprodexpdefandprop} If $X$ is a quasi-projective variety over $R$ and $\mathscr{A} = (A_i)_{i\geq 1}$ a family of varieties with exponentials, definition \ref{symproddef} may be extended in a compatible way to define, for any partition $\pi$, the symmetric product $S^{\pi}\mathscr{A}$ as an element of $\mathrm{KExpVar}_{S^{\pi}X}$. We can also define mixed symmetric products, and proposition \ref{generalcut2mixed} and corollary \ref{generalcut2} are true more generally with $\mathrm{KVar}$ replaced by $\mathrm{KExpVar}$.
\end{remark} \section{Symmetric products in localised Grothendieck rings}\label{sect.locsymproducts} In this section, we are going to generalise the notion of symmetric product further, to include classes in the localised Grothendieck rings $\mathscr{M}_X$ and $\mathscr{E}xp\mathscr{M}_X$. Since the Grothendieck ring $\mathrm{KVar}$ injects into $\mathrm{KExpVar}$, it does not restrict generality to work with $\mathrm{KExpVar}$ directly, which we will do in this section. \begin{lemma}\label{gencompaffinespaces} For every $\a\in \mathrm{KExpVar}_X$, for any $n\geq 1$ and for any $m\geq 1$, one has $$S^{n}(\a\mathbf{L}^{m}) = S^{n}(\a)\mathbf{L}^{nm}$$ in $\mathrm{KExpVar}_{S^nX}$. \end{lemma} \begin{proof} Lemma \ref{expcompaffspaces} shows that this holds for effective elements of $\mathrm{KExpVar}_X$. It suffices to prove that it holds for a difference $Y-Z$ of two effective elements. Using the fact that $S$ (resp. $S^{\exp}$) is a group morphism, one may write, by induction on $n$, \begin{eqnarray*}S^{n}((Y-Z)\mathbf{L}^{m}) &=& S^{n}(Y\mathbf{L}^{m}) - \sum_{k=0}^{n-1}S^{n-1-k}((Y-Z)\mathbf{L}^{m})S^{k+1}(Z\mathbf{L}^m)\\ & = & S^n(Y)\mathbf{L}^{mn} - \mathbf{L}^{mn}\sum_{k=0}^{n-1}S^{n-1-k}(Y-Z)S^{k+1}(Z) \\ & = & S^n(Y-Z)\mathbf{L}^{mn}. \end{eqnarray*} \end{proof} Let $X$ be a quasi-projective variety. The same procedure as in section \ref{sect.symprodgrouplaw} endows the product $\prod_{i\geq 1}\mathscr{E}xp\mathscr{M}_{S^iX}$ with a group law, and this group will be denoted $\mathscr{E}xp\mathscr{M}_{S^{\bullet}X}$.
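Although not part of the text, lemma \ref{gencompaffinespaces} admits a quick numerical sanity check through the point-counting specialisation, which sends the class of a variety $V$ over $\mathbf{F}_q$ to $\#V(\mathbf{F}_q)$; under it, $\mathbf{L}\mapsto q$ and $[S^nY]$ becomes the coefficient of $t^n$ in $\exp\left(\sum_{d\geq 1}\#Y(\mathbf{F}_{q^d})t^d/d\right)$. The sketch below (all helper names are ours) tests $S^n([Y]\mathbf{L}) = S^n([Y])\mathbf{L}^n$ for $Y = \mathbf{P}^1$, with $Y\times\mathbf{A}^1$ standing in for $[Y]\mathbf{L}$:

```python
from fractions import Fraction

def sym_point_counts(N_d, N):
    # Coefficients c_n of Z(t) = exp(sum_{d>=1} N_d[d] t^d / d) up to degree N.
    # Under point counting, c_n = #S^n Y(F_q) when N_d[d] = #Y(F_{q^d}).
    # Recursion from t Z'(t)/Z(t) = sum_d N_d t^d:  n*c_n = sum_{d=1}^n N_d c_{n-d}.
    c = [Fraction(1)] + [Fraction(0)] * N
    for n in range(1, N + 1):
        c[n] = sum(Fraction(N_d[d]) * c[n - d] for d in range(1, n + 1)) / n
    return [int(x) for x in c]

q, N = 3, 6
# Y = P^1:  #Y(F_{q^d}) = q^d + 1;  Y x A^1:  #(Y x A^1)(F_{q^d}) = (q^d + 1) q^d
a = sym_point_counts([0] + [q**d + 1 for d in range(1, N + 1)], N)
b = sym_point_counts([0] + [(q**d + 1) * q**d for d in range(1, N + 1)], N)

# [S^n(Y x A^1)] = [S^n Y] L^n  becomes  b_n = a_n * q^n after point counting
assert all(b[n] == a[n] * q**n for n in range(N + 1))
# consistency check: S^n P^1 = P^n, so a_n = (q^{n+1} - 1)/(q - 1)
assert all(a[n] * (q - 1) == q**(n + 1) - 1 for n in range(N + 1))
```

For instance $a_2 = 13 = \#\mathbf{P}^2(\mathbf{F}_3)$ and $b_2 = 13\cdot 3^2$, as the lemma predicts.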
\begin{lemma}\label{sloc} There is a unique group morphism $$S^{\mathrm{loc}}:\mathscr{E}xp\mathscr{M}_{X}\to \mathscr{E}xp\mathscr{M}_{S^{\bullet}X}$$ given by $\a\mathbf{L}^{-m}\mapsto ((S^i\a)\mathbf{L}^{-mi})_{i\geq 1}.$ \end{lemma}\index{Sloc@$S^{\mathrm{loc}}$} \begin{proof} We already have a group morphism \begin{equation}\label{slocequation}\mathrm{KExpVar}_X\to \mathscr{E}xp\mathscr{M}_{S^{\bullet}X}\end{equation} obtained from $S$ by composing with the localisation morphism $\mathrm{KExpVar}_{S^{\bullet} X}\to \mathscr{E}xp\mathscr{M}_{S^{\bullet}X},$ and given by $\a \mapsto (S^{i}\a)_{i\geq 1}.$ Lemma \ref{gencompaffinespaces} shows that an element in the kernel of the localisation morphism $\alpha_X:\mathrm{KExpVar}_X\to \mathscr{E}xp\mathscr{M}_X$ will go to 0, so (\ref{slocequation}) factors through~$\alpha_X$, inducing a morphism $$\mathrm{im}(\alpha_X)\to \mathscr{E}xp\mathscr{M}_{S^{\bullet}X}.$$ Let $\a \in \mathscr{E}xp\mathscr{M}_{X}$. Then there exists an integer $m\geq 1$ such that $\a\mathbf{L}^{m}$ belongs to $\mathrm{im}(\alpha_X)$. Put $$S^{\mathrm{loc}}(\a) = (S^{i}(\a\mathbf{L}^m)\mathbf{L}^{-mi})_{i\geq 1}.$$ By lemma \ref{gencompaffinespaces} this does not depend on the choice of $m$, so $S^{\mathrm{loc}}$ is well-defined. \end{proof} \begin{remark}\label{finalsymproddef} We may now define, for any $\a\in \mathscr{E}xp\mathscr{M}_X$, its symmetric powers $S^{i}(\a)$ to be the components of $S^{\mathrm{loc}}(\a)$. More generally, for any partition $\pi = (n_i)_{i\in I}$ we may define the symmetric products $S^{\pi}\mathscr{A}$ of any family $\mathscr{A}$ of elements of $\mathscr{E}xp\mathscr{M}_X$. Furthermore, we may define mixed symmetric products of a finite number of families of elements of $\mathscr{E}xp\mathscr{M}_X$, and as in remark \ref{symprodexpdefandprop}, proposition \ref{generalcut2mixed} remains true with $\mathrm{KVar}$ replaced with $\mathscr{M}, \mathrm{KExpVar}$ or $\mathscr{E}xp\mathscr{M}$.
\end{remark} We also have, for $A\in \{\mathrm{KVar},\mathscr{M},\mathrm{KExpVar},\mathscr{E}xp\mathscr{M}\}$: \begin{prop}\label{genaffinespaces} Let $X$ be a variety over $R$, and $\mathscr{X} = (X_i)_{i\in I}$ a family of elements of $A_X$. Let $\mathbf{m} = (m_i)_{i\in I}$ be a family of non-negative integers, and denote by $\mathscr{X}\mathbf{L}^{\mathbf{m}}$ the family $(X_i\mathbf{L}^{m_i})_{i\in I}$. Then for all $\pi = (n_i)_{i\in I}\in\mathbf{N}^{(I)}$, we have $$S^{\pi}(\mathscr{X}\mathbf{L}^{\mathbf{m}}) = (S^{\pi}\mathscr{X})\mathbf{L}^{\sum_{i\in I}m_in_i}$$ in $A_{S^{\pi}X}.$ \end{prop} \begin{proof} By lemma \ref{gencompaffinespaces}, we have, for all $i\in I$, $$S^{n_i}(X_i\mathbf{L}^{m_i}) = S^{n_i}(X_i)\mathbf{L}^{m_in_i}$$ in $A_{S^{n_i}X}$. Taking exterior products over $i\in I$ and restricting to $S^{\pi}X$, we get the result. \end{proof} \section{Euler products}\label{eulerprod} \subsection{Definition and first properties} Let $A\in\{\mathrm{KVar},\mathscr{M}, \mathrm{KExpVar},\mathscr{E}xp\mathscr{M}\}$. Let $X$ be a variety over $R$ and $\mathscr{X} = (X_i)_{i\in I}$ a family of elements of $A_X$. We define the Euler product notation to be $$\prod_{u\in X/R}\left( 1 + \sum_{i\in I} X_{i,u}t_i\right) := \sum_{ \pi\in\mathbf{N}^{(I)}} [S^{\pi}(\mathscr{X}/R)]\t^{\pi} = Z_{\mathscr{X}/R}(\t)\ \ \ \in A_{R}[[\t]],$$\index{Euler product} where the $t_i,\ i\in I$ are variables, and $\t^{\pi}$ is defined to be the finite product $\prod_{i\in I}t_i^{n_i}$, where $\pi = (n_i)_{i\in I}$. When $R = \mathrm{Spec}\, k$ for a field $k$, we will leave out the mention of $R$ in the product, writing simply $\prod_{u\in X}$. We are going to start by checking that our ``product'' actually behaves like a product, thereby justifying our notation. \paragraph{Properties:} Let $X$ be a variety over $R$ and $\mathscr{X} = (X_i)_{i\in I}$ a family of classes in $A_X$.
\begin{enumerate} \item (Product with one factor) When $X = R$, the last part of Example~\ref{firstex} gives \begin{equation}\label{onepoint}\prod_{u\in R/R}\left(1 + \sum_{i\in I}X_{i,u}t_i\right) = 1 + \sum_{i\in I}X_i t_i.\end{equation} \item (Associativity) Let $X = U\cup Y$ be a partition of $X$ into a closed subscheme $Y$ and its complement $U$, and $\mathscr{U} = (U_i)_{i\in I}$ (resp. $\mathscr{Y}= (Y_i)_{i\in I}$) the restriction of $\mathscr{X}$ to $U$ (resp. to~$Y$). Then corollary \ref{generalcut2} can be reformulated as \begin{equation}\label{associativity} \prod_{u\in X/R}\left(1 + \sum_{i\in I}X_{i,u}t_i\right) = \prod_{u\in U/R}\left(1 + \sum_{i\in I} U_{i,u}t_i\right) \prod_{u\in Y/R}\left(1 + \sum_{i\in I}Y_{i,u}t_i\right).\end{equation} Here we use remarks \ref{symprodexpdefandprop} and \ref{finalsymproddef}, which state that corollary \ref{generalcut2} is true more generally with $\mathrm{KVar}$ replaced with $\mathrm{KExpVar}$, $\mathscr{M}$ or $\mathscr{E}xp\mathscr{M}$. \item (Finite products) Combining the previous two properties, we see that in the case where $X$ is a disjoint union of $m$ varieties $Y_1,\ldots,Y_m$ isomorphic to $R$, \begin{equation}\label{finiteprod}\prod_{v\in X/R}\left(1 + \sum_{i\in I}X_{i,v}t_i\right)=\prod_{j=1}^m\left(1 + \sum_{i\in I}X_{i,j} t_i\right)\in A_{R}[[\t]]\end{equation} where for all $i\in I$, $X_{i,j}$ is the restriction of $X_i$ to $Y_j\simeq R$, and the product on the right-hand side is a finite product of power series (in the classical sense).
\item (Change of variables of the form $\t\mapsto \mathbf{L}^{\mathbf{m}}\t$) In terms of Euler products, Proposition \ref{genaffinespaces} means that for any $(m_i)_{i\in I}\in\mathbf{N}^{I}$, $$\prod_{u\in X/R}\left(1 + \sum_{i\in I}X_{i,u}(\mathbf{L}^{m_i}t_i)\right) = \prod_{u\in X/R}\left(1 + \sum_{i\in I}(X_{i,u}\mathbf{L}^{m_i})t_i\right),$$ where the right-hand side is the Euler product associated to the family $(X_i\mathbf{L}^{m_i})_{i\in I}$, that is, the Euler product notation is compatible with respect to changes of variables of the form $(t_i\mapsto \mathbf{L}^{m_i}t_i)_{i\in I},$ and factors of the form $\mathbf{L}^{m_i}$ may pass from the variables to the coefficients. One must bear in mind that this is specific to affine spaces, because of their good behaviour with respect to Euler products. \end{enumerate} \begin{remark} Let us now try to get a grip on what each factor of such an infinite product actually represents. For simplicity, we will consider $R = \mathrm{Spec}\, k$. If $X = \mathrm{Spec}\, k$, then according to (\ref{onepoint}), $$\prod_{u\in \mathrm{Spec}\, k}\left(1 + \sum_{i \in I}X_{i,u}t_i\right) = 1 + \sum_{i\in I}X_it_i \in\mathrm{KVar}_{k}[[\t]].$$ In this case the left-hand side has ``only one factor'', and identifying coefficients on both sides suggests that we can think of those objects $X_{i,u}$ as being the classes in $\mathrm{KVar}_{k}$ of the fibres of the~$X_i$ above the single closed point $u\in X$. Now let~$X$ be the disjoint union of a copy $Y$ of $\mathrm{Spec}\, k$ (with a single closed point $y$) and of its open complement $U$, and let $\mathscr{Y}$ and $\mathscr{U}$ be the respective restrictions of $\mathscr{X}$ to $Y$ and $U$.
Then using what we just remarked together with (\ref{associativity}), \begin{eqnarray*}\prod_{x\in X}\left( 1 + \sum_{i \in I}X_{i,x}t_i\right) &= &\prod_{y\in Y}\left(1 + \sum_{i \in I}Y_{i,y}t_i\right)\prod_{u\in U}\left(1 + \sum_{i \in I}U_{i,u}t_i\right)\\ &=& \left(1 + \sum_{i\in I}Y_{i} t_i\right)\prod_{u\in U}\left(1 + \sum_{i \in I}U_{i,u}t_i\right) \end{eqnarray*} This means that adding a closed point $\mathrm{Spec}\, k$ to the variety amounts exactly to adding a factor in which the coefficient of $t_i$ represents the class in $\mathrm{KVar}_{k(y)}$ of the fibre $Y_i = X_{i,y}$ above the added point~$y$. \end{remark} \begin{example}\label{eulerprodexample} \begin{enumerate}\item \label{Kapranovexample} Kapranov's zeta function is obtained when taking $I = \mathbf{N}^{*}$, $X_i = X$ and specialising $t_i$ to $t^i$ for some single indeterminate $t$, for all $i$. Thus, for all closed points $v\in X$, the class of $X_{i,v}$ in $\mathrm{KVar}^+_{k(v)}$ is 1 and the Euler product decomposition of Kapranov's zeta function can be written as $$\prod_{v\in X}\left(\sum_{i\geq 0}t^i\right) = 1 + \sum_{n\geq 1}[S^nX]t^n,$$\index{Kapranov's zeta function!Euler product} or even $$\prod_{v\in X}(1-t)^{-1} = 1 + \sum_{n\geq 1}[S^nX]t^n.$$ \item For $I = \mathbf{N}^{*}$, put $X_1 = X$ and $X_i = \varnothing$ for all $i \geq 2$. Then the class of $X_{i,v}$ in $\mathrm{KVar}^+_{k(v)}$ is $1$ if $i=1$ and 0 otherwise. Thus, writing $t_1=t$, we get the following Euler product decomposition for the generating function of zero-cycles without repetitions: $$\prod_{v\in X}(1 + t) = 1 + \sum_{n\geq 1}[S^n_*X]t^n.$$ \end{enumerate} \end{example} \section{Double products} \subsection{Double products with effective coefficients} In this section we translate proposition \ref{iterate} into the language of Euler products, which will show the good behaviour of double products with \textit{effective} coefficients.
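Before moving on, the Euler product decomposition of Kapranov's zeta function in example \ref{eulerprodexample} admits a numerical check (ours, not part of the text): under point counting over $\mathbf{F}_q$ it specialises to the classical product $\prod_{v}(1-t^{\deg v})^{-1}$ over closed points, and for $X = \mathbf{P}^1$ the coefficient of $t^n$ must be $\#S^n\mathbf{P}^1(\mathbf{F}_q) = \#\mathbf{P}^n(\mathbf{F}_q)$. A minimal sketch (all helper names are ours):

```python
def closed_points(N_d, N):
    # b_d = number of closed points of degree d, from sum_{e|d} e*b_e = N_d[d]
    b = [0] * (N + 1)
    for d in range(1, N + 1):
        b[d] = (N_d[d] - sum(e * b[e] for e in range(1, d) if d % e == 0)) // d
    return b

def mul(f, g, N):
    # product of two power series truncated at degree N
    h = [0] * (N + 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            if i + j <= N:
                h[i + j] += fi * gj
    return h

q, N = 2, 6
b = closed_points([0] + [q**d + 1 for d in range(1, N + 1)], N)  # X = P^1

# truncated Euler product  prod_v (1 - t^{deg v})^{-1}
Z = [1] + [0] * N
for d in range(1, N + 1):
    geom = [1 if k % d == 0 else 0 for k in range(N + 1)]  # (1 - t^d)^{-1}
    for _ in range(b[d]):
        Z = mul(Z, geom, N)

# coefficient of t^n is #S^n P^1(F_q) = #P^n(F_q) = (q^{n+1} - 1)/(q - 1)
assert Z == [(q**(n + 1) - 1) // (q - 1) for n in range(N + 1)]
```

Concretely, over $\mathbf{F}_2$ the coefficients $1, 3, 7, 15, \ldots$ are those of $1/((1-t)(1-2t))$, the Hasse-Weil zeta function of $\mathbf{P}^1$.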
We will then use this property in the next sections to deduce that this good behaviour extends in some sense to double products with not necessarily effective coefficients, when some natural assumptions on the set $I$ and the family of indeterminates $(t_i)_{i\in I}$ are made. We place ourselves in the situation of proposition~\ref{iterate}: we consider $R'$ a variety over~$k$, $R$ a variety over $R'$, $X$ a variety over~$R$, and $\mathscr{X} = (X_i)_{i\in I}$ a family of varieties over $X$, indexed by a set $I$. Using our notation for the family of varieties $S^{\bullet}(\mathscr{X}/R)$ over $R$, we have $$\prod_{v\in R/R'}\left(1 + \sum_{\pi\in\mathbf{N}^{(I)}\backslash\{0\}}(S^{\pi}(\mathscr{X}/R))_v\t^{\pi}\right) = \sum_{\varpi\in\mathbf{N}^{\left(\mathbf{N}^{(I)}\backslash\{0\}\right)}}[S^{\varpi}(S^{\bullet}(\mathscr{X}/R)/R')]\t^{\varpi},$$ where, writing $\varpi = (m_{\pi})_{\pi\in \mathbf{N}^{(I)}\backslash\{0\}}$, and denoting by $\pi_i$ the number of times $i$ occurs in~$\pi$, $\t^{\varpi} $ is by definition $$\prod_{\pi\in\mathbf{N}^{(I)}\backslash\{0\}}(\t^{\pi})^{m_{\pi}} = \prod_{\pi\in\mathbf{N}^{(I)}\backslash\{0\}}\left(\prod_{i\in I}t_i^{\pi_i}\right)^{m_{\pi}} = \prod_{i\in I}t_i^{\sum_{\pi\in\mathbf{N}^{(I)}\backslash\{0\}}m_{\pi}\pi_i} = \t^{\mu(\varpi)}.$$ In particular, $\t^{\varpi}$ and $\t^{\varpi'}$ are equal if and only if, in the notations of definition \ref{mu}, $\mu(\varpi) = \mu(\varpi')$.
Thus, the above product can be rewritten in the form $$\prod_{v\in R/R'}\left(1 + \sum_{\pi\in\mathbf{N}^{(I)}\backslash\{0\}}(S^{\pi}(\mathscr{X}/R))_v\t^{\pi}\right) $$ $$= 1 + \sum_{\pi\in\mathbf{N}^{(I)}\backslash\{0\}}\left(\sum_{\varpi\in\mu^{-1}(\pi)}[S^{\varpi}(S^{\bullet}(\mathscr{X}/R)/R')]\right)\t^{\pi}.$$ We may therefore conclude, using proposition \ref{iterate}, that the double product $$\prod_{v\in R/R'}\left(\prod_{u\in X/R}\left(1 + \sum_{i\in I}X_{i,u}t_i\right)\right)_v$$ makes sense and that the following result is true: \begin{prop}\label{prop.effectivedoubleprod} Let $R'$ be a variety over $k$, $R$ a variety over $R'$, $X$ a variety over~$R$, and $\mathscr{X} = (X_i)_{i\in I}$ a family of varieties over $X$, indexed by a set $I$. Then we have the equality \begin{equation}\label{doubleprod}\prod_{v\in R/R'}\left(\prod_{u\in X/R}\left(1 + \sum_{i\in I}X_{i,u}t_i\right)\right)_v = \prod_{u\in X/R'}\left(1 + \sum_{i\in I}X_{i,u}t_i\right)\end{equation} in $\mathrm{KVar}_{R'}[[(t_i)_{i\in I}]]$. \end{prop} By proposition \ref{expiterate} this remains true with $\mathrm{KVar}$ replaced by $\mathrm{KExpVar}$ if $\mathscr{X}$ is a family of varieties with exponentials. We may also pass to the respective localisations $\mathscr{M}$ and $\mathscr{E}xp\mathscr{M}$. Assume now that $X$ is the disjoint union of two copies of $R$, written respectively $Y$ and~$Z$. For all $i \in I$, we will write $Y_i$ (resp. $Z_i$) for the restriction of $X_i$ to $Y$ (resp. to $Z$).
Using $(\ref{finiteprod})$, we get that $$\prod_{u\in X/R}\left(1 + \sum_{i \in I}X_{i,u}t_i\right) = \left(1 + \sum_{i \in I}Y_it_i\right)\left(1 + \sum_{i \in I}Z_it_i\right).$$ Taking the product over $R$ relative to $R'$, we get $$\prod_{v\in R/R'}\left(1 + \sum_{i \in I}Y_{i,v}t_i\right)\left(1 + \sum_{i \in I}Z_{i,v}t_i\right).$$ On the other hand, by (\ref{associativity}), we have $$\prod_{v\in X/R'}\left(1 + \sum_{i \in I}X_{i,v}t_i\right) = \prod_{v\in Y/R'}\left(1 + \sum_{i \in I}Y_{i,v}t_i \right)\prod_{v\in Z/R'}\left(1 + \sum_{i \in I}Z_{i,v}t_i \right).$$ Using $(\ref{doubleprod})$, we finally get: \begin{prop} Let $R'$ be a variety over $k$, $R$ a variety over~$R'$, and $(Y_i)_{i\in I}$, $(Z_i)_{i\in I}$ two families of varieties over~$R$. Then we have the equality $$\prod_{v\in R/R'}\left(1 + \sum_{i \in I}Y_{i,v}t_i\right)\left(1 + \sum_{i \in I}Z_{i,v}t_i\right) = \prod_{v\in R/R'}\left(1 + \sum_{i \in I}Y_{i,v}t_i \right)\prod_{v\in R/R'}\left(1 + \sum_{i \in I}Z_{i,v} t_i\right) $$ in $A_{R'}[[(t_i)_{i\in I}]]$, where $A\in\{\mathrm{KVar}, \mathrm{KExpVar}, \mathscr{M}, \mathscr{E}xp\mathscr{M}\}.$ \end{prop} \begin{example} Let $X$ be a variety over $k$ and $n\geq 1$ an integer. As an application of double products, let us compute Kapranov's zeta function of a $\mathbf{P}^n$-bundle $Y$ over $X$ in terms of $Z_X$. By definition, $$Z_{Y} = \prod_{y\in Y}\frac{1}{1-t}.$$ Using (\ref{doubleprod}), we have \begin{eqnarray*}Z_{Y}& =& \prod_{x\in X}\left(\prod_{y\in Y/X}\frac{1}{1-t}\right)_x\\ & = &\prod_{x\in X}\prod_{y\in \mathbf{P}^n}\frac{1}{1-t}\\ &= &\prod_{x\in X}\frac{1}{1-t}\frac{1}{1-\mathbf{L} t}\ldots\frac{1}{1-\mathbf{L}^{n}t} \end{eqnarray*} where the last line comes from the exact expression of $Z_{\mathbf{P}^{n}}(t)$.
Using compatibility with finite products, we finally get $$Z_{Y}= \prod_{i=0}^n\prod_{x \in X}\frac{1}{1-\mathbf{L}^i t}= Z_X(t)Z_X(\mathbf{L} t)\ldots Z_X(\mathbf{L}^{n} t).$$ Note that this result can be obtained without the Euler product, by cutting the projective bundle into affine bundles and applying propositions \ref{affine} and \ref{multzeta} (see \cite{LL}, Corollary~3.6). \end{example} \subsection{Double products with arbitrary coefficients} To get a multiplicativity result of this type for non-effective coefficients, we will make assumptions on the set $I$ and the indeterminates $(t_i)_{i\in I}$. \paragraph{Hypothesis on $I$.} We assume that $I$ is of the form $I_0\setminus \{0\}$ where $I_0$ is a commutative monoid such that \begin{enumerate} \item $I_0$ is endowed with a total order $<$ compatible with the monoid law, in the sense that if we have $p+q = i$ for some $q\neq 0$, then $p<i$. \item for all $i\in I$, the set $\{p\in I,\ p<i\}$ is finite. \end{enumerate} A consequence of this is that $I$ has a minimal element. \paragraph{Hypothesis on $(t_i)_{i\in I}$.} We are going to consider collections of indeterminates $(t_i)_{i\in I}$ satisfying the condition $t_pt_q = t_{p+q}$ for all $p,q\in I$. We will sometimes write $t_0 =1$. \begin{notation} Whenever we write sums or products indexed by the condition $p+q = i$, we mean that they go over all pairs $(p,q)\in I_0^2$ such that $p+q = i$. \end{notation} \begin{prop} \label{prop.findeffective} Assuming the above conditions on $I$ and $(t_i)_{i\in I}$, for every $k$-variety $X$ and for every family $(X_i)_{i\in I}$ of elements of $\mathrm{KVar}_X$, there exists a family of effective elements $(Y_i)_{i\in I}$ of $\mathrm{KVar}_X$ such that the coefficients of the expansion of the series $$\left(1 + \sum_{i\in I}X_it_i\right)\left(1 + \sum_{i\in I} Y_it_i\right)\in \mathrm{KVar}_X[[\t]]$$ are effective. \end{prop} \begin{proof} We construct $(Y_i)_{i\in I}$ by induction, using the order $<$. 
We put $Y_0 = X$. Let $i\in I$, and assume the $Y_p$ are constructed for all $p$ satisfying $p<i$. We are looking for an effective $Y_i$ such that $$\sum_{\substack{p+q = i\\ p\neq 0}}X_pY_q + Y_i$$ is effective. By the first condition on $I$ and by the induction hypothesis, all the $Y_q$ appearing in the sum are already defined, so the sum is a well-defined class in $\mathrm{KVar}_X$; writing it as a difference $A - B$ of two effective classes, we may take $Y_i = B$. \end{proof} \begin{example} The main example of this situation that will be of interest to us is when $I_0 = \mathbf{Z}_{\geq 0}^{r}$ for some integer $r\geq 1$ (with the lexicographical order), and for $i = (i_1,\ldots,i_r)\in I$, the indeterminate $t_i$ is given by $s_1^{i_1}\ldots s_r^{i_r}$ for $r$ independent indeterminates $s_1,\ldots,s_r$. \end{example} The aim of the following sections is to prove: \begin{prop}\label{compfinprodnoneff} Let $R$ be a variety, $X$ a variety over~$R$, $\mathscr{A} = (A_i)_{i\in I}$ a family of effective elements of $\mathrm{KVar}_X$, and $\mathscr{B} = (B_i)_{i\in I}$ a family of elements of $\mathrm{KVar}_X$. Then, under the above hypotheses on~$I$ and $(t_i)_{i\in I}$, $$\prod_{x\in X}\left(1 + \sum_{i\in I} A_{i,x}t_i\right)\left(1 + \sum_{i\in I} B_{i,x}t_i\right) = \prod_{x\in X}\left(1 + \sum_{i\in I} A_{i,x}t_i\right)\prod_{x\in X} \left(1 + \sum_{i\in I} B_{i,x}t_i\right)$$ in $\mathrm{KVar}_R[[(t_i)_{i\in I}]]$. \end{prop} Before proving the proposition, let us state the following corollary: \begin{cor} Let $R'$ be a variety, $R$ a variety over $R'$, $X$ a variety over~$R$, and $\mathscr{X} = (X_i)_{i\in I}$ a family of classes in $\mathrm{KVar}_X$, indexed by a set $I$. Then, under the above hypotheses on~$I$ and $(t_i)_{i\in I}$, we have the equality \begin{equation}\label{doubleprodnoneff}\prod_{v\in R/R'}\left(\prod_{u\in X/R}\left(1 + \sum_{i\in I}X_{i,u}t_i\right)\right)_v = \prod_{u\in X/R'}\left(1 + \sum_{i\in I}X_{i,u}t_i\right)\end{equation} in $\mathrm{KVar}_{R'}[[(t_i)_{i\in I}]]$.
\end{cor} \begin{proof} By proposition \ref{prop.effectivedoubleprod}, this is already known when the classes $X_i$ are effective. Using proposition \ref{prop.findeffective}, choose a family of effective classes $\mathscr{Y} = (Y_i)_{i\in I}$ such that $$\left(1 + \sum_{i\in I}X_it_i\right)\left(1 + \sum_{i\in I}Y_it_i\right) = 1 + \sum_{i\in I}Z_it_i\in \mathrm{KVar}_{X}[[(t_i)_{i\in I}]]$$ has effective coefficients. Then we have $$\prod_{v\in R/R'}\left(\prod_{u\in X/R}\left(1 + \sum_{i\in I}Z_{i,u}t_i\right)\right)_v = \prod_{u\in X/R'}\left(1 + \sum_{i\in I}Z_{i,u}t_i\right)$$ for the effective family $(Z_i)_{i\in I}$. Thus, we have $$\prod_{v\in R/R'}\left(\prod_{u\in X/R}\left(1 + \sum_{i\in I}X_{i,u}t_i\right)\left(1 + \sum_{i\in I}Y_{i,u}t_i\right)\right)_v = \prod_{u\in X/R'}\left(1 + \sum_{i\in I}X_{i,u}t_i\right)\left(1 + \sum_{i\in I}Y_{i,u}t_i\right).$$ Using proposition \ref{compfinprodnoneff}, we get $$\prod_{v\in R/R'}\left(\prod_{u\in X/R}\left(1 + \sum_{i\in I}X_{i,u}t_i\right)\right)_v \prod_{v\in R/R'}\left(\prod_{u\in X/R}\left(1 + \sum_{i\in I}Y_{i,u}t_i\right)\right)_v=$$ $$ \prod_{u\in X/R'}\left(1 + \sum_{i\in I}X_{i,u}t_i\right)\prod_{u\in X/R'}\left(1 + \sum_{i\in I}Y_{i,u}t_i\right).$$ By proposition \ref{prop.effectivedoubleprod}, the factors corresponding to $\mathscr{Y}$ are equal on both sides, and since they are power series with constant coefficient $1$, hence invertible, they cancel out and we have the result. \end{proof} \begin{remark} Though in this section we have stated everything for $\mathrm{KVar}_X$, the results remain true for Grothendieck rings of exponentials and localised Grothendieck rings. \end{remark} \subsection{Some combinatorics of partitions}\label{subsect.combinatorics} We use some terminology and notation on partitions due to Vakil-Wood (see \cite{VW}, section 2) and Howe (see \cite{Howe}, section~6.1).
As before, a (generalised) partition $\kappa$ in an abelian semigroup $I$ may equivalently be seen as a family of non-negative integers $(k_i)_{i\in I}$, almost all of them being zero, or as a finite multiset $[i^{k_i}]_{i\in I}$ of elements of $I$, the number of occurrences of $i$ being given by $k_i$ (so that exponents in the notation $[i^{k_i}]_{i\in I}$ denote multiplicity). We use the notations \begin{itemize} \item $\sum\kappa:= \sum_{i\in I} k_ii$ for the sum of the partition $\kappa$, \item $|\kappa| = \sum_{i\in I}k_i$ for the number of elements (or \textit{parts}) of $\kappa$, \item $||\kappa|| = |\{i\in I,\ k_i\neq 0\}|$ for the number of \textit{distinct} parts of $\kappa$. \end{itemize} A \textit{formalisation} of the partition $\kappa$ is a partition $f(\kappa)$ in the abelian semigroup $\mathbf{Z}^+[a_i]_{i\in I}$ (where the $a_i$ are formal indeterminates) given by $f(\kappa) = [a_i^{k_i}]_{i\in I}$. In other words, every element of $\kappa$ is replaced by a formal indeterminate, keeping the same multiplicities. Note that $|\kappa|$ and $||\kappa||$ are preserved when passing to a formalisation. We denote by $\mathcal{P}$ the set of partitions of integers (that is, partitions in the case where $I = \mathbf{Z}_{>0}$). Among those, we denote by $\mathcal{Q}$ the set of partitions $\mu = (m_i)_{i\geq 1}$ such that for some $k\geq 1$, we have $m_i=0$ for $i>k$ and $m_i\neq 0$ for $1\leq i\leq k$. Such a partition $\mu$ can also be seen as an \textit{ordered} partition $(m_1,\ldots,m_k)$ of the integer $|\mu|$. From this point of view, the elements of $\mathcal{P}$ may be reinterpreted as ordered partitions allowing zeroes. For any integer~$m$, we denote by $\mathcal{P}_m\subset \mathcal{P}$ (resp. $\mathcal{Q}_m\subset \mathcal{Q}$) the subsets corresponding to partitions~$\mu$ with $|\mu| = m$.
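To fix ideas, here is a small illustration of this notation (our example, not from the text):

```latex
% Illustration (ours): the partition kappa = (2,0,1,0,0,...) in P
\begin{example}
Take $I = \mathbf{Z}_{>0}$ and $\kappa = (2,0,1,0,0,\ldots)$, that is, the
multiset $[1^{2}3^{1}]$. Then
$$\sum\kappa = 2\cdot 1 + 1\cdot 3 = 5,\qquad |\kappa| = 2 + 1 = 3,\qquad
||\kappa|| = 2,$$
and its formalisation is $f(\kappa) = [a_1^{2}a_3^{1}]$. Moreover
$\kappa\in\mathcal{P}_3$, but $\kappa\notin\mathcal{Q}$ because of the
intermediate zero $k_2 = 0$.
\end{example}
```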
For non-negative integers $m_1,\ldots,m_r$, we will use the notations $(m_1,\ldots,m_r)$ or $1^{m_1}\ldots r^{m_r}$ to denote the partition $(m_1,\ldots,m_r,0,0,\ldots)$. There is a natural retraction map $c:\mathcal{P}\to \mathcal{Q}$ given by removing all intermediate zeroes. For example, $c(1,0,3) = (1,3)$. \begin{remark}\label{remark:piles} A helpful visualisation is to consider an element $\mu =(m_i)_{i\geq 1}\in \mathcal{P}$ as a pile of blocks arranged into columns numbered from left to right, with the $i$-th column containing $m_i$ blocks. The elements of $\mathcal{Q}$ are piles with no empty columns, $|\mu|$ is the total number of blocks, and $||\mu||$ is the total number of non-empty columns. The contraction $c(\mu)$ is given by sliding all of the non-empty columns as far to the left as possible. \end{remark} \paragraph{A combinatorial lemma} For every integer $n\geq 1$ we will consider the \textit{decomposition sets} $$\mathcal{Q}^{n,*} = \{(\mu_1,\ldots,\mu_n)\in \mathcal{P}^{n}|\ \mu_1 + \ldots + \mu_n \in \mathcal{Q}\}.$$ There is a contraction map $c_n:\mathcal{Q}^{n,*}\to \mathcal{Q}^n$ given by $(\mu_1,\ldots,\mu_n)\mapsto (c(\mu_1),\ldots,c(\mu_n)).$ \begin{example}\label{contractionn1} When $n=1$, we have $\mathcal{Q}^{1,*} = \mathcal{Q}$ and $c_1 = \mathrm{id}$. \end{example} We have the following combinatorial lemma, which, together with its proof, has been communicated to us by Sean Howe: \begin{lemma}\label{howelemma} For all $(\nu_1,\ldots,\nu_n)\in \mathcal{Q}^n$, $$\sum_{(\mu_1,\ldots,\mu_n)\in c_n^{-1}(\nu_1,\ldots,\nu_n)}(-1)^{||\mu_1 + \ldots + \mu_n||} = (-1)^{||\nu_1|| + \ldots + ||\nu_n||}.$$ \end{lemma} \begin{proof}The case $n=1$ is clear by Example \ref{contractionn1}. We will treat the case $n=2$ directly, and then proceed by induction for larger $n$.
To give an element $(\mu_1, \mu_2)$ in $c_2^{-1}(\nu_1, \nu_2)$ is the same as giving: \begin{enumerate} \item A non-negative integer $j$ such that $j \leq \min(||\nu_1||, ||\nu_2||)$. \item A subdivision of $\{1, \ldots, ||\nu_1|| + ||\nu_2|| - j\}$ into a subset of size $j$, a subset of size $||\nu_1|| - j$, and a subset of size $||\nu_2|| - j$. \end{enumerate} Here $j$ corresponds to the number of integers $k$ such that $\mu_{1}$ and $\mu_2$ are both non-zero in the $k$-th spot; given this information, to determine $\mu_1$ and $\mu_2$ from $\nu_1$ and $\nu_2$, it suffices to pick $j$ spots for both to be non-zero, $||\nu_1|| - j$ for only $\mu_1$ to be non-zero, and $||\nu_2|| - j$ for only $\mu_2$ to be non-zero. Furthermore, for the resulting $(\mu_1, \mu_2)$, we have $$ (-1)^{||\mu_1 + \mu_2||} = (-1)^{||\nu_1|| + ||\nu_2||-j} = (-1)^{||\nu_1|| + ||\nu_2||} (-1)^j .$$ Thus, the sum depends only on $||\nu_1|| =: a$ and $||\nu_2|| =: b$. Therefore, we may forget about multiplicities and assume that $\nu_1=1^1 2^1 \ldots a^1$ and $\nu_2=1^1 2^1 \ldots b^1$. Then, using the visualisation in terms of piles of blocks as in remark~\ref{remark:piles}, we find that we are summing $(-1)^{\ell}$, where $\ell$ is the length (that is, the number of columns) of the pile, over all piles of blocks created by inserting $a$ red blocks into a row of $b$ blue blocks, with each red block going either between columns or on top of a blue block (with at most one red block on top of any blue block). We must show this sum is equal to $(-1)^{a+b}$. To prove this is the case, it suffices to consider the variant where the red blocks are numbered $1$ through $a$, and to show that the corresponding sum is equal to $(-1)^{a+b} a!$. In this variant, we may first choose where to insert the first block, then the second block, etc.; we will keep track of the sign for the corresponding term in the sum by considering whether inserting the block keeps the length of the pile the same or changes it by one.
For every $i\in\{0,\ldots,a\}$, denote by $S_i$ the corresponding sum where only $i$ red blocks have been inserted, so that $S_0 = (-1)^b$ and $S_a$ is the sum we are looking for. Denoting by $\Pi_i$ the set of possible piles of $i$ (numbered) red blocks and $b$ blue blocks, and for every $p\in \Pi_i$ by $||p||$ the length (that is, the number of columns) of the pile $p$, we have, for all $i\in\{1,\ldots,a\}:$ $$S_i = \sum_{p\in \Pi_i}(-1)^{||p||} = \sum_{p\in \Pi_{i-1}}(-1)^{||p||}(r_i(p) - r'_i(p))$$ where $r_i(p)$ (resp. $r'_i(p)$) is the number of ways of adding an $i$-th block to $p\in \Pi_{i-1}$ which don't change (resp. change) the length of $p$. We claim that the difference $r_i(p) - r'_i(p)$ only depends on $i$, and not on the pile $p$. Indeed, note that each red block may either be inserted on top of a blue block, which does not change the length of the pile, or between two columns, which changes the length of the pile by one. If a red block is placed on top of a blue block, then the number of positions for the next red block to be placed without changing the length goes down by one, while the number of positions it can be placed that change the length by one remains the same. If a red block is placed between two columns, then the number of positions for the next red block to be placed which don't change the length remains the same, but the number of positions for the next red block to be placed that change the length goes up by one (since the spot where this block was placed now is replaced with two possible spots, one to the left and one to the right). In either case (and independently of the pile we are considering), the \emph{difference} between the number of ways to place a block without changing the length and the number of ways to place it that change the length goes down by one. For the first block this difference is $b - (b+1) = -1$, so when inserting the $i$-th block it is $-i$. 
In other words, for all $i\in \{1,\ldots,a\}$ and all $p\in \Pi_i$, we have $$r_i(p) - r'_i(p) = -i.$$ In particular, for all $i\in \{1,\ldots,a\}$, we have $S_i = (-i)S_{i-1}$, and therefore $$S_a = (-a)S_{a-1} = (-a)(-a+1)\ldots (-1)S_{0} = (-1)^{a+b}a!,$$ as desired. This completes the argument when $n=2$. We now assume $n \geq 3$, and that the identity is known for all positive integers $m < n$. Giving $(\mu_1,\ldots,\mu_n)\in c_n^{-1}(\nu_1,\ldots,\nu_n)$ is equivalent to giving the following data: \begin{enumerate}\item $(\mu'_1,\ldots,\mu'_{n-1})\in c_{n-1}^{-1}(\nu_1,\ldots,\nu_{n-1})$ \item $(\lambda,\mu_n)\in c_2^{-1}(\mu'_1+\ldots+\mu'_{n-1}, \nu_n)$. \end{enumerate} Indeed, to construct these data from $(\mu_1,\ldots,\mu_n)$, first of all put $\lambda = \mu_1 + \ldots + \mu_{n-1}$. It is an element of $\mathcal{P}$, but not in general an element of $\mathcal{Q}$, that is, in terms of the above description with piles of blocks, it may have some empty columns, which are exactly the empty columns common to all the $\mu_i$, $i\in\{1,\ldots,n-1\}$. To construct $\mu'_i$ from $\mu_i$, simply do the same operation of sliding columns to the left that we did when defining the retraction $c$, but making only these common empty columns disappear. Note that we have $||\mu_1 + \ldots + \mu_{n-1}|| = ||\mu'_1 + \ldots + \mu'_{n-1}||$, and $c(\mu'_i) = \nu_i$ for all $i\in\{1,\ldots,n-1\}$. Conversely, to reconstruct $(\mu_1,\ldots,\mu_n)$ from $(\lambda,\mu_n)$ and $(\mu'_1,\ldots,\mu'_{n-1})$, we simply look at what columns are empty in~$\lambda$, and create empty columns at exactly these spots in $\mu'_1,\ldots,\mu'_{n-1}$ by sliding appropriate columns to the right, starting from the leftmost empty spot and working our way up to the right. 
Therefore, the sum we are interested in may be rewritten as $$ \sum_{(\mu_1, \ldots, \mu_{n-1})\ \in\ c_{n-1}^{-1} (\nu_1, \ldots, \nu_{n-1})} \ \ \ \ \sum_{ (\lambda, \mu_n)\ \in\ c_2^{-1} (\mu_1 + \ldots + \mu_{n-1}, \nu_n) } (-1)^{||\lambda + \mu_n||}. $$ Applying the case $n=2$ to the interior sum we find that this is equal to \begin{multline*} \sum_{(\mu_1, \ldots, \mu_{n-1})\ \in\ c_{n-1}^{-1} (\nu_1, \ldots, \nu_{n-1})} (-1)^{||\mu_1 + \ldots + \mu_{n-1}|| + ||\nu_n||} = \\ (-1)^{||\nu_n||} \sum_{(\mu_1, \ldots, \mu_{n-1})\ \in\ c_{n-1}^{-1} (\nu_1, \ldots, \nu_{n-1})} (-1)^{||\mu_1 + \ldots + \mu_{n-1}||}, \end{multline*} and we conclude by applying the induction hypothesis to the remaining sum. \end{proof} \subsection{Symmetric products and the inverse of Kapranov's zeta function} For any variety~$Y$ over a field $k$ and for any $\mu = (m_i)_{i\geq 1}\in\mathcal{P}$ we define $$\mathrm{Sym}^{\mu}Y = \prod_{i\geq 1}S^{m_i}Y.$$ There is a natural morphism $\mathrm{Sym}^{\mu} Y\to S^{|\mu|}Y$ induced by starting with the identity $\prod_{i\geq 1}Y^{m_i}\to Y^{|\mu|}$ and passing to the appropriate quotients on both sides. Note that the symmetric product $S^{\mu}Y$ is by definition the restriction of $\mathrm{Sym}^{\mu}Y$ to the complement of the diagonal in $S^{|\mu|}Y$. \begin{remark} In what follows, we will often drop the square brackets when writing classes of varieties in a Grothendieck ring. \end{remark} The following result will be very important to us: \begin{lemma}\label{Smminuslemma} For any variety $Y$, and any integer $m\geq 1$, we have the equality \begin{equation}\label{Smminusformula} S^{m}(-Y) = \sum_{\mu\in \mathcal{Q}_m}(-1)^{||\mu||}[\mathrm{Sym}^{\mu}Y]\end{equation} in $\mathrm{KVar}_{S^mY}$. \end{lemma} \begin{proof} Note that by definition (see section \ref{sect.symprodgrouplaw}), the elements of the family $(S^r(-Y))_{r\geq 1}$ satisfy $$\sum_{r=0}^m S^{r}(Y)S^{m-r}(-Y) = 0$$ in $\mathrm{KVar}_{S^m(Y)}$ for every $m\geq 1$.
In particular, $S^{1}(-Y) = -Y$, so the property is true for $m=1$. We now proceed by induction: isolating the term $r=0$ in the above relation, we may write $$S^m(-Y) = -\sum_{r=1}^{m} S^{r}(Y)S^{m-r}(-Y) = -\sum_{r=1}^{m} S^{r}(Y)\sum_{\mu\in \mathcal{Q}_{m-r}}(-1)^{||\mu||}\mathrm{Sym}^{\mu}Y$$ by using the formula in the statement for all $r\in\{1,\ldots,m-1\}$, together with the equality $S^{0}(-Y) = 1$, corresponding to the empty partition, for $r=m$. For any such $r$ and for any partition $\mu = (m_i)_{i\geq 1}\in \mathcal{Q}_{m-r}$, we have $$S^{r}(Y)\cdot\mathrm{Sym}^{\mu}(Y) = \mathrm{Sym}^{\mu'}(Y),$$ where $\mu'$ is the partition in $\mathcal{Q}_m$ obtained from $\mu$ by adding a new part with multiplicity~$r$. Conversely, any $\mu' = (m_i)_{i\geq 1}\in \mathcal{Q}_m$ is obtained uniquely in this way by choosing $r$ to be the value of $m_i$ for the largest $i$ such that $m_i>0$ and by defining $\mu\in \mathcal{Q}_{m-r}$ by setting this $m_i$ to zero. Observing that in this case $||\mu'|| = ||\mu|| +1$, we get the result. \end{proof} \begin{remark} Lemma \ref{Smminuslemma} gives, using a formula by Vakil and Wood, a proof of a special case of proposition \ref{compfinprodnoneff}, namely of the formula $$\prod_{x\in X}(1 + t + t^2 + \ldots )\prod_{x\in X}(1-t) = 1,$$ which says that the series $\prod_{x\in X}(1-t)$ is the inverse of Kapranov's zeta function $Z_X(t)$ (see examples \ref{eulerprodexample} and \ref{Kapranovexample}). First of all, let us point out that the right-hand side of (\ref{Smminusformula}) for $Y=X$ appears naturally as the coefficient of degree $m$ of the series $Z_{X}^{-1}$, as the computation \begin{eqnarray*} Z_X(t)^{-1} & = & \left(1 + \sum_{i\geq 1}[S^iX]t^i\right)^{-1}\\ & = & \sum_{m\geq 0} \left(-\sum_{i\geq 1}[S^iX]t^i\right)^{m}\\ & = & \sum_{\mu\in \mathcal{Q}}(-1)^{||\mu||}[\mathrm{Sym}^{\mu}X]t^{|\mu|}\end{eqnarray*} from \cite{VW} (just before paragraph 3.6) shows.
On the other hand, the coefficients in the expansion of the series $\prod_{x\in X}(1-t) $ are the symmetric products of the family $(X_i)_{i\geq 1}$ given by $X_1 = -X$ and $X_i = 0$ for all $i\geq 2$, which are computed using definition \ref{symproddef}. In particular, the only partition occurring in the coefficient of degree $m$ is $1^m$, and the corresponding symmetric product is the class of the restriction $S^m(-X)_{|S^m_*X}$ of~$S^m(-X)$ to~$S^m_*X$: $$ \prod_{x\in X}(1-t) = 1 + \sum_{m\geq 1}S^m(-X)_{|S^m_*X}t^m. $$ Now, proposition 3.7 in \cite{VW} gives the equality $$\sum_{\mu\in \mathcal{Q}}(-1)^{||\mu||}[\mathrm{Sym}^{\mu}X]t^{|\mu|} =\sum_{\mu\in \mathcal{Q}}(-1)^{||\mu||}[S^{\mu}X]t^{|\mu|},$$ which implies precisely that $S^m(-X)$ is in fact supported above the complement of the diagonal in $S^mX$, that is, above $S^m_*X$, so $$ \prod_{x\in X}(1-t) = 1 + \sum_{m\geq 1}S^m(-X)t^m = Z_X(t)^{-1}. $$ \end{remark} \subsection{Overlaps between partitions}\label{subsect.overlaps} Let $k,\ell\in I$. For every partition $\kappa = (k_i)_{i\in I}$ of $k$ and $\lambda = (\ell_i)_{i\in I}$ of $\ell$, we denote by $$f(\kappa) = [a_i^{k_i}]_{i\in I},\ \ f(\lambda) = [b_i^{\ell_i}]_{i\in I}$$ their formalisations. We may define their concatenation $f(\kappa) \cdot f(\lambda)$ as a finite multiset with elements in the union $\{a_i\}_{i\in I}\cup\{b_i\}_{i\in I}$, $a_i$ having multiplicity $k_i$ and $b_j$ having multiplicity~$\ell_j$. The sum of this partition is the element $$\sum f(\kappa) \cdot f(\lambda) = \sum_{i\in I}k_ia_i+ \sum_{j\in I}\ell_jb_j$$ in the free abelian monoid $M_{a,b}$ on $a_i, b_j, $ for all $i,j\in I$. On the other hand, we may consider partitions with values in the abelian semigroup $M_{a,b}\setminus\{0\}$. 
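The following small example illustrates these notations. \begin{example} Take $\kappa = 1^2$, a partition of $k = 2$, and $\lambda = 1^1 2^1$, a partition of $\ell = 3$. Their formalisations are $f(\kappa) = [a_1^{2}]$ and $f(\lambda) = [b_1^{1}b_2^{1}]$, with concatenation $f(\kappa)\cdot f(\lambda) = [a_1^{2}b_1^{1}b_2^{1}]$ and sum $$\sum f(\kappa)\cdot f(\lambda) = 2a_1 + b_1 + b_2\in M_{a,b}.$$ A partition with values in $M_{a,b}\setminus\{0\}$ having the same sum is, for instance, $[(a_1+b_1)^{1}a_1^{1}b_2^{1}]$. \end{example}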
Let $S_{\kappa,\lambda}$ be the set of such partitions $\gamma$ containing only elements of the form $a_i$, $b_j$ and $a_i+b_j$, and such that $$\sum \gamma = \sum f(\kappa)\cdot f(\lambda).$$ Explicitly, if $\gamma = [(a_i+b_j)^{n_{i,j}}]_{i,j\in I_0},$ where by convention $a_0 = b_0 = 0$, then $$\sum \gamma = \sum_{i,j\in I_0}n_{i,j}(a_i+b_j)= \sum_{i\in I}\left({\sum_{j\in I_0}n_{i,j}}\right)a_{i}+ \sum_{j\in I}\left({\sum_{i\in I_0}n_{i,j}}\right)b_j,$$ so that we must have, for all $i,j\in I:$ $$k_i = \sum_{j\in I_0}n_{i,j}\ \ \ \text{and}\ \ \ \ell_j = \sum_{i\in I_0}n_{i,j}.$$ In particular, giving a collection of integers $(n_{i,j})_{(i,j)\in I_0^2\setminus\{(0,0)\}}$ determines completely two partitions $\kappa$ and $\lambda$, and an ``overlap partition'' $\gamma$. Moreover, there is a natural way of ``deformalising''~$\gamma$, by putting $d(\gamma) = [m^{\gamma_m}]$ to be the partition with values in $I$ given by $$\gamma_m = \sum_{i + j = m}n_{i,j}.$$ Concretely, $d(\gamma)$ is obtained from $\gamma$ by replacing $a_i+ b_j$ by $i+j$ for all $(i,j)\in I_0^2\setminus\{(0,0)\}$. \subsection{Overlaps between symmetric products}\label{subsect.overlapssymproducts} Here we introduce a notation that will be useful in the proof of proposition \ref{compfinprodnoneff}. Assume we are given two partitions $\kappa = (k_i)_{i\in I}$ and $\lambda= (\ell_j)_{j\in I}$, as well as a family $\gamma = (n_{i,j})_{i,j\in I_0}$ such that $$\sum_{i\in I_0} n_{i,j} \leq \ell_j\ \ \text{for all}\ j\in I,\ \ \ \text{and}\ \ \ \sum_{j\in I_0} n_{i,j} \leq k_i\ \ \text{for all}\ i\in I.$$ Then for any variety $X$, we define $\left(\prod_{i\in I} S^{k_i}X\right)_*\times_{\gamma} \left(\prod_{j\in I}S^{\ell_j}X\right)_*$ to be the locally closed subset of $\left(\prod_{i\in I} S^{k_i}X\right)_*\times \left(\prod_{j\in I}S^{\ell_j}X\right)_*$ given by points with $S^{k_i}X$-component and $S^{\ell_j}X$-component overlapping exactly along an effective zero-cycle of degree $n_{i,j}$.
More generally, for a variety $A$ over $\left(\prod_{i\in I} S^{k_i}X\right)_*$ and a variety $B$ over $\left(\prod_{j\in I}S^{\ell_j}X\right)_*$, we may define the variety $A\times_{\gamma} B$ to be the restriction of $A\times B$ to $\left(\prod_{i\in I} S^{k_i}X\right)_*\times_{\gamma} \left(\prod_{j\in I}S^{\ell_j}X\right)_*$. In terms of classes in Grothendieck rings, for any elements $A\in \mathrm{KVar}_{\left(\prod_{i\in I}S^{k_i}X\right)_*}$ and $B\in \mathrm{KVar}_{\left(\prod_{j\in I}S^{\ell_j}X\right)_*}$, we denote by $A\times_{\gamma}B$ the element of $\mathrm{KVar}_{\left(\prod_{i\in I} S^{k_i}X\right)_*\times_{\gamma} \left(\prod_{j\in I}S^{\ell_j}X\right)_*}$ obtained from $A\boxtimes B$ by the restriction morphism $$\mathrm{KVar}_{\left(\prod_{i\in I} S^{k_i}X\right)_*\times \left(\prod_{j\in I}S^{\ell_j}X\right)_*}\to \mathrm{KVar}_{\left(\prod_{i\in I} S^{k_i}X\right)_*\times_{\gamma} \left(\prod_{j\in I}S^{\ell_j}X\right)_*}.$$ In particular, this defines $S^{\kappa}\mathscr{A}\times_{\gamma}S^{\lambda}\mathscr{B}$, whenever $(A_i)_{i\in I}$ and $(B_i)_{i\in I}$ are families of elements of $\mathrm{KVar}_X$. \begin{remark} Recall that we defined, in chapter \ref{grothrings}, (\ref{exteriorproduct}), a (relative) exterior product operation on Grothendieck rings which is compatible with fibre products for effective classes. In the next sections, we will sometimes use the notation $\times$ instead of $\boxtimes$ when we want to stress the fact that we are working with effective classes. \end{remark} \subsection{Proof of proposition \ref{compfinprodnoneff}} \subsubsection{Expansion of the left-hand side} The left-hand side is given by $$\prod_{x\in X} \left(1 + \sum_{i\in I}\left(\sum_{p + q = i} A_{p,x} B_{q,x}\right) t_i\right)$$ and therefore it is the Euler product associated to the family $$\left(\left(\sum_{p + q = i} A_{p}\boxtimes_X B_{q}\right)\right)_{i\in I},$$ which we will denote by $\mathscr{A}\ast_X\mathscr{B}$. 
Expanding this, we get $$1 + \sum_{n\in I}\left(\sum_{\pi\ \text{partition of}\ n}S^{\pi}(\mathscr{A}\ast_X\mathscr{B})\right) t_n.$$ Let $n\in I$ and let $\pi = (n_i)_{i\in I}$ be a partition of $n$. The contribution $S^{\pi}(\mathscr{A}\ast_X\mathscr{B})$ to the coefficient of $t_n$ in the expansion of the series is the restriction of $$\prod_{i\in I} S^{n_i}\left(\sum_{p + q = i} A_{p}\boxtimes_X B_{q}\right)\in \mathrm{KVar}_{\prod_{i\in I}S^{n_i}X}$$ to $S^{\pi}X.$ This can be written $$\prod_{i\in I}\ \ \ \sum_{\substack{(n_{p,q})_{p+q=i}\\ n_{i,0} + \ldots + n_{0,i} = n_i}}\ \prod_{p+q = i}S^{n_{p,q}}(A_p\boxtimes_X B_q)$$ \begin{equation}\label{LHSexpansion}= \sum_{\substack{(n_{p,q})_{(p,q)\in I_0^2\setminus\{0\}}\\ \forall i\in I,\ \sum_{p+q = i}n_{p,q} = n_i}}\prod_{i\in I}\prod_{p+q = i}S^{n_{p,q}}(A_p\boxtimes_X B_q).\end{equation} \subsubsection{Expansion of the right-hand side} The right-hand side is given by $$\left(1 + \sum_{n\in I} S^n(\mathscr{A})t_n\right)\left(1 + \sum_{n\in I} S^n(\mathscr{B})t_n\right) = 1 + \sum_{n\in I}\left(\sum_{k+\ell = n}S^k(\mathscr{A})\boxtimes S^{\ell}(\mathscr{B})\right)t_n $$ The proposition is implied by the following claim: \textbf{Claim:} For every partition $\pi$ of $n$, the restriction to $S^{\pi}X$ of the coefficient $$\sum_{k+\ell = n}S^k(\mathscr{A})\boxtimes S^{\ell}(\mathscr{B})\in \mathrm{KVar}_{S^nX}$$ of $t_n$ on the right-hand side is equal to $S^{\pi}(\mathscr{A}\ast_X\mathscr{B})$. We are in fact going to prove a more precise statement, which implies the above claim. The sum $\sum_{k+\ell = n}S^k(\mathscr{A})\boxtimes S^{\ell}(\mathscr{B})$ is a sum of terms of the form $S^{\kappa}(\mathscr{A})\boxtimes S^{\lambda}(\mathscr{B})$ where $\kappa$ and~$\lambda$ are partitions such that $\sum \kappa + \sum \lambda = n$. 
On the other hand, note that in the expansion (\ref{LHSexpansion}) of the left-hand side, there was a sum over collections of integers $(n_{p,q})_{(p,q)\in I_0^2\setminus\{0\}}$ such that $$\sum_{p+q = i}n_{p,q} = n_i.$$ In particular, each term of this sum corresponds to some $\gamma$ as in section \ref{subsect.overlaps} (together with some partitions~$\kappa$ and~$\lambda$) such that $d(\gamma) = \pi$. Our aim is to prove, for every given $\gamma = [(a_p+b_q)^{n_{p,q}}]$ such that $d(\gamma) = \pi$, and for the associated $\kappa,\lambda$, the equality \begin{equation}\label{preciseclaimequation}\left(\prod_{i\in I}\prod_{p+q = i}S^{n_{p,q}}(A_p\boxtimes_XB_q)\right)_{*} = S^{\kappa}(\mathscr{A})\times_{\gamma}S^{\lambda}(\mathscr{B})\end{equation} in $\mathrm{KVar}_{S^{\pi}X}$ where the notation $(\cdot)_*$ on the left-hand side, as always, stands for restriction to the complement of the diagonal. Then summing over all $\gamma$ such that $d(\gamma) = \pi$ proves our claim. \subsubsection{Computation of $S^{\kappa}(\mathscr{A})\times_{\gamma}S^{\lambda}(\mathscr{B})$} We will now compute $S^{\kappa}(\mathscr{A})\times_{\gamma}S^{\lambda}(\mathscr{B})$ in more detail, using the notations from section \ref{subsect.overlaps}. We have $$S^{\kappa}(\mathscr{A})\times_{\gamma}S^{\lambda}(\mathscr{B}) = \left(\prod_{i\in I} S^{k_i}(A_i)\right)_*\times_{\gamma}\left(\prod_{j\in I} S^{\ell_j}(B_j)\right)_*.$$ Recall that the notation $\times_{\gamma}$ from section \ref{subsect.overlapssymproducts} indicates that we restrict to points such that for all $i,j\in I$, the $S^{k_i}X$-component and the $S^{\ell_j}X$-component overlap exactly in $n_{i,j}$ geometric points. 
According to (\ref{preciseclaimequation}), we want to prove that this is equal to $$\left(\prod_{(i,j)\in I_0^2\setminus\{(0,0)\}} S^{n_{i,j}}(A_i\boxtimes_XB_j)\right)_*.$$ Since $k_i = \sum_{j\in I}n_{i,j}$ and $\ell_{j} = \sum_{i\in I} n_{i,j}$, by induction it suffices to prove that for any effective $K\in \mathrm{KVar}_{\prod_{p\neq i}S^{k_p}X}$ and for any $L\in \mathrm{KVar}_{\prod_{q\neq j}S^{\ell_q}X}$, denoting by $\gamma^{i,j}$ the family of integers obtained from $\gamma$ by setting $n_{i,j}$ to zero: $$\left(S^{k_i}(A_i)\times K\right)_*\times_{\gamma} \left(S^{\ell_j}(B_j)\boxtimes L\right)_* =$$ \begin{equation}\label{klgammaaim} \left(S^{n_{i,j}}(A_i\boxtimes_XB_j)\boxtimes \left(S^{k_i-n_{i,j}}(A_i)\times K\right)_*\times_{\gamma^{i,j}} \left(S^{\ell_j-n_{i,j}}(B_j)\boxtimes L\right)_*\right)_*.\end{equation} \begin{remark} The left-hand side of (\ref{klgammaaim}) is an element of the Grothendieck ring above $\left(\prod_{i\in I} S^{k_i}X\right)_*\times_{\gamma}\left(\prod_{j\in I} S^{\ell_j}X\right)_*,$ whereas the right-hand side is an element of the Grothendieck ring above $$\left(S^{n_{i,j}}(X)\times \left(S^{k_i-n_{i,j}}X\times \prod_{p\neq i}S^{k_p}X\right)_*\times_{\gamma^{i,j}}\left(S^{\ell_j-n_{i,j}}X\times \prod_{q\neq j}S^{\ell_q}X\right)_*\right)_*.$$ These two varieties are easily seen to be isomorphic. \end{remark} Writing $L = L'-L''$ for some effective $L',L''$ and treating each term separately by linearity, we may moreover assume that $L$ is effective. For every $j\in I$, fix effective elements $C_j,C'_j\in \mathrm{KVar}_X$ such that $B_j = C_j- C'_j$.
We get \begin{eqnarray*}S^{\ell_j}(B_j) &=& \sum_{0\leq m\leq \ell_j}S^{\ell_j - m}(C_j) \boxtimes S^{m}(-C'_j)\\ & = & \sum_{0\leq m\leq \ell_j}\sum_{\mu\in \mathcal{Q}_m}(-1)^{||\mu||} S^{\ell_j - m}(C_j)\times \mathrm{Sym}^{\mu}(C'_j),\end{eqnarray*} using lemma \ref{Smminuslemma}, so that we need to compute \begin{equation}\label{klgammamuequation}\sum_{0\leq m\leq \ell_j}\sum_{\mu\in \mathcal{Q}_m} (-1)^{||\mu||}\left(S^{k_i}(A_i)\times K\right)_*\times_{\gamma} \left(S^{\ell_j - m}(C_j)\times \mathrm{Sym}^{\mu}(C'_j)\times L\right)_*.\end{equation} Now, for any fixed $m$ and $\mu = (\mu_p)_{p\geq 1}\in \mathcal{Q}_m$, we are looking for points of $$\left(S^{k_i}(A_i)\times K\right)_*\times_{\gamma} \left(S^{\ell_j - m}(C_j)\times \mathrm{Sym}^{\mu}(C'_j)\times L\right)_*$$ that is, points such that the image in $S^{k_i}X$ of the component corresponding to $S^{k_i}A_i$ overlaps in $n_{i,j}$ geometric points with the image in $S^{\ell_j}X$ of the component corresponding to the product $$S^{\ell_j-m}(C_j)\times\prod_{p\geq 1}S^{\mu_p}(C_j').$$ For any such point, there is a non-negative integer $\delta\leq \min\{m,n_{i,j}\}$ such that $n_{i,j}-\delta\leq \ell_j-m$ and such that the overlap with $S^{\ell_j-m}C_j$ is along $n_{i,j}-\delta$ points and the overlap with $\prod_{p\geq 1 }S^{\mu_p}(C_j')$ is along $\delta$ points. The overlap with $\prod_{p\geq 1}S^{\mu_p}(C'_j)$ must moreover be distributed among the different factors $S^{\mu_p}(C'_j)$, which leads, for every possible value of~$\delta$, to the introduction, for every $p\geq 1$, of an integer $0\leq \epsilon_p\leq \mu_p$ measuring the overlap with the component $S^{\mu_p}(C'_j)$; these integers define a partition $\epsilon = (\epsilon_p)_{p\geq 1}$ such that $\sum_{p}\epsilon_p = \delta$, that is, $\epsilon \in \mathcal{P}_{\delta}$.
In other words, for any integer $\delta$ satisfying the condition $$\max(0,n_{i,j} -\ell_j + m)\leq\delta\leq \min(m,n_{i,j})$$ and for any $\epsilon\in \mathcal{P}_{\delta}$ such that $\epsilon \leq \mu$, the variety $$\left(S^{n_{i,j}-\delta}(A_i\times_XC_j)\times \mathrm{Sym}^{\epsilon}(A_i\times_XC'_j)\times P(\delta,\mu-\epsilon)\right)_*,$$ where for any partition $\nu$, $P(\delta,\nu)$ stands for $$P(\delta,\nu) = \left(S^{k_i-n_{i,j}}(A_i)\times K\right)_*\times_{\gamma^{i,j}}\left(S^{\ell_j-m-n_{i,j}+\delta}(C_j)\times\mathrm{Sym}^{\nu}(C'_j)\times L\right)_*,$$ is isomorphic to the locally closed subset of $$\left(S^{k_i}(A_i)\times K\right)_*\times_{\gamma} \left(S^{\ell_j - m}(C_j)\times \mathrm{Sym}^{\mu}(C'_j)\times L\right)_* $$ of points with image in $S^{k_i}(X)$ of the $S^{k_i}(A_i)$-component overlapping along an effective zero-cycle of degree $n_{i,j}-\delta$ with the image in $S^{\ell_j-m}X$ of the $S^{\ell_j-m}(C_j)$-component, and for every $p\geq 1$, along an effective zero-cycle of degree $\epsilon_p$ with the image in $S^{\mu_p}X$ of the $S^{\mu_p}(C'_j)$-component.
Moreover, $$\left(S^{k_i}(A_i)\times K\right)_*\times_{\gamma} \left(S^{\ell_j - m}(C_j)\times\mathrm{Sym}^{\mu}(C'_j)\times L\right)_* $$ is the disjoint union of these locally closed subsets, so that in terms of classes in the Grothendieck ring above $\left(\prod_{i\in I} S^{k_i}X\right)_{*}\times_{\gamma}\left(\prod_{i\in I} S^{\ell_i}X\right)_{*}$, we have $$\left(S^{k_i}(A_i)\times K\right)_*\times_{\gamma} \left(S^{\ell_j - m}(C_j)\times\mathrm{Sym}^{\mu}(C'_j)\times L\right)_* = $$ $$\sum_{\substack{\max(0,n_{i,j} -\ell_j + m)\leq\delta\leq \min(m,n_{i,j})\\ \epsilon = (\epsilon_p)_{p\geq 1}\in \mathcal{P}_{\delta},\ \epsilon \leq \mu}}\left(S^{n_{i,j}-\delta}(A_i\times_XC_j)\times\mathrm{Sym}^{\epsilon}(A_i\times_XC'_j)\times P(\delta,\mu-\epsilon)\right)_*.$$ In what follows, in order to keep the indices of sums light, we do not write this condition explicitly: all sums over $\delta$ are taken over those $\delta$ satisfying $\max(0,n_{i,j} -\ell_j + m)\leq\delta\leq \min(m,n_{i,j}).$ Note that the factor $\mathrm{Sym}^{\epsilon}(A_i\times_XC'_j)$ occurring in the term corresponding to $\epsilon$ depends only on $c(\epsilon)$ (recall the definition of $c$ in section \ref{subsect.combinatorics}), and not on $\epsilon$ itself.
Therefore, we may write $$\left(S^{k_i}(A_i)\times K\right)_*\times_{\gamma} \left(S^{\ell_j - m}(C_j)\times \mathrm{Sym}^{\mu}(C'_j)\times L\right)_* = $$ $$\sum_{\substack{\delta\\ \eta\in \mathcal{Q}_{\delta}}}\left(S^{n_{i,j}-\delta}(A_i\times_XC_j)\times\mathrm{Sym}^{\eta}(A_i\times_XC'_j)\times \sum_{\substack{\epsilon\leq \mu\\ c(\epsilon) = \eta}}P(\delta,\mu-\epsilon)\right)_* $$ Taking the sum over $\mu\in \mathcal{Q}_m$ for fixed $m$ as in (\ref{klgammamuequation}), we get $$\sum_{\mu\in \mathcal{Q}_m}(-1)^{||\mu||}\left(S^{k_i}(A_i)\times K\right)_*\times_{\gamma} \left(S^{\ell_j - m}(C_j)\times\mathrm{Sym}^{\mu}(C'_j)\times L\right)_* =$$ \begin{equation}\label{equation_eta}\sum_{\substack{\delta\\ \eta\in \mathcal{Q}_{\delta}}}\left(S^{n_{i,j}-\delta}(A_i\times_XC_j)\times\mathrm{Sym}^{\eta}(A_i\times_XC'_j) \boxtimes \sum_{\mu\in \mathcal{Q}_m}(-1)^{||\mu||}\sum_{\substack{\epsilon\leq \mu\\ c(\epsilon) = \eta}}P(\delta,\mu-\epsilon)\right)_*\end{equation} For fixed $\eta$, note that $P(\delta,\mu-\epsilon)$ depends only on $\delta$ and $c(\mu-\epsilon)$, so that \begin{eqnarray*} \sum_{\mu\in \mathcal{Q}_m} (-1)^{||\mu||} \sum_{\substack{\epsilon\leq \mu\\ c(\epsilon) = \eta}}P(\delta,\mu-\epsilon)&=& \sum_{\nu\in \mathcal{Q}_{m-\delta}}P(\delta,\nu)\sum_{(\epsilon,\xi)\in c_2^{-1}(\eta,\nu)}(-1)^{||\epsilon +\xi||}\\ & = &(-1)^{||\eta||}\sum_{\nu\in \mathcal{Q}_{m-\delta}}(-1)^{||\nu||}P(\delta,\nu)\\ \end{eqnarray*} $$=(-1)^{||\eta||}\left(S^{k_i-n_{i,j}}(A_i)\times K\right)_*\times_{\gamma^{i,j}}\left(S^{\ell_j-m-n_{i,j}+\delta}(C_j)\boxtimes S^{m-\delta}(-C'_j)\boxtimes L\right)_*.$$ where the second equality comes from lemma \ref{howelemma} and the third one from lemma \ref{Smminuslemma} and the definition of $P(\delta,\nu)$, by linearity. 
In particular, denoting, for every integer $r$ such that $0\leq r \leq \ell_j - n_{i,j}$, $$P'(r):= \left(S^{k_i-n_{i,j}}(A_i)\times K\right)_*\times_{\gamma^{i,j}}\left(S^{\ell_j -n_{i,j}-r}(C_j)\boxtimes S^{r}(-C'_j)\boxtimes L\right)_*$$ and plugging this into equation (\ref{equation_eta}), we get $$\sum_{\mu\in \mathcal{Q}_m}(-1)^{||\mu||}\left(S^{k_i}(A_i)\times K\right)_*\times_{\gamma} \left(S^{\ell_j - m}(C_j)\times\mathrm{Sym}^{\mu}(C'_j)\times L\right)_* $$ \begin{eqnarray*}&=&\sum_{\substack{\delta\\ \eta \in \mathcal{Q}_{\delta}}}\left(S^{n_{i,j}-\delta}(A_i\times_XC_j)\times\mathrm{Sym}^{\eta}(A_i\times_XC'_j)\boxtimes (-1)^{||\eta||}P'(m-\delta)\right)_*\\ &=&\sum_{\delta}\left(S^{n_{i,j}-\delta}(A_i\times_XC_j)\boxtimes P'(m-\delta)\boxtimes\sum_{\eta\in \mathcal{Q}_{\delta}}(-1)^{||\eta||}\mathrm{Sym}^{\eta}(A_i\times_X C'_j)\right)_*\\ & =&\sum_{\delta}\left(S^{n_{i,j}-\delta}(A_i\times_XC_j)\boxtimes S^{\delta}(-A_i\times_XC'_j)\boxtimes P'(m-\delta)\right)_*,\end{eqnarray*} where the last equality comes again from lemma \ref{Smminuslemma}. 
Summing this over $m$, we get that~(\ref{klgammamuequation}) is equal to $$\sum_{0\leq m\leq \ell_j}\ \ \ \sum_{\max(0,n_{i,j} -\ell_j + m)\leq\delta\leq \min(m,n_{i,j})}\left(S^{n_{i,j}-\delta}(A_i\times_XC_j)\boxtimes S^{\delta}(-A_i\times_XC'_j)\boxtimes P'(m-\delta)\right)_*$$ $$ = \sum_{\delta = 0}^{n_{i,j}}\left(S^{n_{i,j}-\delta}(A_i\times_XC_j)\boxtimes S^{\delta}(-A_i\times_XC'_j)\boxtimes \sum_{\delta\leq m \leq \ell_j-n_{i,j} + \delta} P'(m-\delta)\right)_*.$$ On the other hand, observe that $$\sum_{\delta\leq m \leq \ell_j-n_{i,j} + \delta} P'(m-\delta) = $$ \begin{eqnarray*}&=& \left(S^{k_i-n_{i,j}}(A_i)\times K\right)_*\times_{\gamma^{i,j}}\left(\sum_{0\leq r\leq \ell_j-n_{i,j}}S^{\ell_j-n_{i,j}-r}(C_j)\boxtimes S^{r}(-C'_j)\boxtimes L\right)_*\\ &=& \left(S^{k_i-n_{i,j}}(A_i)\times K\right)_*\times_{\gamma^{i,j}}\left(S^{\ell_j-n_{i,j}}(B_j)\boxtimes L\right)_*,\end{eqnarray*} because $B_j = C_j - C'_j$. We end up with $$\sum_{0\leq m\leq \ell_j}\sum_{\mu\in \mathcal{Q}_m} (-1)^{||\mu||}\left(S^{k_i}(A_i)\times K\right)_*\times_{\gamma} \left(S^{\ell_j - m}(C_j)\times\mathrm{Sym}^{\mu}(C'_j)\times L\right)_* $$ \begin{eqnarray*}& = & \left(\left(S^{k_i-n_{i,j}}(A_i)\times K\right)_*\times_{\gamma^{i,j}}\left(S^{\ell_j-n_{i,j}}(B_j)\boxtimes L\right)_*\boxtimes \sum_{\delta=0}^{n_{i,j}} S^{n_{i,j}-\delta}(A_i\times_XC_j)\boxtimes S^{\delta}(-A_i\times_X C_j')\right)_*\\ & = & \left(\left(S^{k_i-n_{i,j}}(A_i)\times K\right)_*\times_{\gamma^{i,j}}\left(S^{\ell_j-n_{i,j}}(B_j)\boxtimes L\right)_* \boxtimes S^{n_{i,j}}(A_i\boxtimes_X B_j)\right)_* \end{eqnarray*} so that (\ref{klgammaaim}) is proved. \section{Allowing other constant terms}\label{coefficients}\index{Euler product!other constant terms} Until now, for simplicity we have only worked with series having constant terms equal to 1. 
Though for obvious reasons of convergence we cannot abandon this hypothesis completely, it is still possible to make sense of Euler products where a finite number of factors have arbitrary constant terms, by generalising our symmetric products a bit further. Let us motivate the construction first. In the simplest setting, we want to make sense of Euler products $$\prod_{x\in X}(X_{0,x} + X_{1,x} t + X_{2,x}t^2 + \ldots )$$ where $X$ is a variety over an algebraically closed field $k$ and $(X_i)_{i\geq 0}$ is a family of varieties over $X$, such that $X_{0,x}=1$ for almost all closed points $x\in X$. In other words, our product looks like $$\prod_{x\in U}(1 + X_{1,x} t + X_{2,x}t^2 + \ldots )\prod_{x\in F}(X_{0,x} + X_{1,x} t + X_{2,x}t^2 + \ldots )$$ for a finite set of closed points $F\subset X$ with open complement $U$. When one expands this product naively, one sees that the contribution for a zero-cycle $D\in S^nX(k)$ depends on the intersection of the support $|D|$ of $D$ with the set $F$. Let us assume for simplicity that $F = \{x_0\}$ is a singleton. Then the expansion of this product may formally be written $$\sum_{n\geq 0}\left(\sum_{\substack{D = \sum n_x x\in S^nX(k)\\ x_0\in |D|}} \prod_{x\in X}X_{n_x,x} + \sum_{D = \sum n_x x\in S^nU(k) }X_{0,x_0}\prod_{x\in U} X_{n_x,x}\right)t^n.$$ In other words, the fibre $X_{0,x_0}$ appears whenever the zero-cycle with respect to which we expand does not contain $x_0$ in its support, forcing us to choose the term of degree zero in the factor corresponding to $x_0$. Thus, denoting by $\mathscr{X}$ the family $(X_i)_{i\geq 1}$, the coefficient of degree $n$ in this power series should be the class of the constructible set given by the product $X_{0,x_0}\times S^{n}\mathscr{X}_{|S^nU}$ above the open subset $S^{n}U$, and by $S^{n}\mathscr{X}_{|S^nX\setminus S^nU}$ above its complement.
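For instance, for $n = 1$ the coefficient of $t$ in the above expansion is $$X_{1,x_0} + X_{0,x_0}\sum_{x\in U(k)}X_{1,x}:$$ above the point $x_0$ of $S^1X = X$ it is given by the fibre $X_{1,x_0}$, while above a point $x$ of $U$ it is given by $X_{0,x_0}\times X_{1,x}$, the factor $X_{0,x_0}$ accounting for the degree zero term chosen in the factor corresponding to $x_0$.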
This construction generalises to general $F$: we cut $S^{n}X$ up into locally closed subsets $(S^nX)^{E}$ for all subsets $E\subset F$ such that $(S^nX)^{E}$ corresponds to zero-cycles $D$ such that $|D|\cap F = E$, and modify the symmetric product $S^{n}\mathscr{X}$ above each $(S^nX)^E$ accordingly, multiplying it by the product of the fibres $X_{0,x}$ for all $x$ in $F\setminus E$. \subsection{Another generalisation of symmetric products}\label{sect.addX0} Throughout this section, let $A\in\{\mathrm{KVar},\mathscr{M},\mathrm{KExpVar},\mathscr{E}xp\mathscr{M}\}.$ Let $I_0$ be an additive monoid, and $I = I_0\setminus\{0\}$. Let $X$ be a variety over a perfect field $k$ and $V$ an open set of $X$ such that its complement~$F$ is a finite set of closed points. Let $\mathscr{X} = (X_i)_{i\in I_0}$ be a family of varieties over $X$ (resp. of classes in the ring $A_X$), such that $X_0\times_XV\simeq V$ (resp. the image of $X_0$ in $A_V$ is the class $1 = [V\xrightarrow{\mathrm{id}} V]$). Thus, as a motivic function, $X_0$ is constant equal to 1 on the set $V$. In this section we are going to define a slight modification of the notion of symmetric product which takes into account $X_0$, or more precisely, the finite number of fibres $X_{0,v}$ for $v\in F$. This will allow us to consider a finite number of constant terms other than 1 in our Euler products. Recall that for any $\pi = (n_i)_{i\in I}\in\mathbf{N}^{(I)}$, $S^{\pi}X$ parametrises collections $D= (D_i)_{i\in I}$ of effective zero-cycles on $X$, with disjoint supports, such that $\deg D_i = n_i$. We denote by $|D| = \cup_{i\in I}|D_i|$ the union of the supports of the $D_i$. For any subset $E\subseteq F$ denote by $(S^{\pi}X)^E$ the constructible subset of $S^{\pi}X$ parametrising families $D$ of effective zero-cycles such that $|D|\cap F = E$.
Also, denote by $\widetilde{\mathscr{X}}$ the family $(X_i)_{i\in I}$ (that is, $\mathscr{X}$ with $X_0$ removed), and by $\left(S^{\pi}\widetilde{\mathscr{X}}\right)^E$ the restriction of $S^{\pi}\widetilde{\mathscr{X}}$ to $(S^{\pi}X)^E$ (resp. the image of $S^{\pi}\widetilde{\mathscr{X}}$ in $A_{(S^{\pi}X)^E})$. \begin{definition}\label{addX0} \begin{enumerate}\item The symmetric product $S^{\pi}\mathscr{X}$ of the family of varieties $\mathscr{X} = (X_i)_{i\in I_0}$ is defined as the constructible set over $S^{\pi}X$ with restriction to $(S^{\pi}X)^E$ given by $$S^{\pi}\mathscr{X}_{|(S^{\pi}X)^E} = \left(S^{\pi}\widetilde{\mathscr{X}}\right)^E\times \prod_{v\in F\backslash E}X_{0,v}.$$ \item Denote by $i_E$ the immersion $(S^{\pi}X)^E\to S^{\pi}X.$ The symmetric product $S^{\pi}\mathscr{X}$ of the family of classes $\mathscr{X} = (X_i)_{i\in I_0}$ in $A_X$ is defined as the element of $A_{S^{\pi}X}$ given by $$S^{\pi}\mathscr{X} = \sum_{E\subseteq F} (i_E)_!\left(\left(S^{\pi}\widetilde{\mathscr{X}}\right)^E\right) \prod_{v\in F\backslash E}X_{0,v}.$$ \end{enumerate} \end{definition} Note that since $X_{0,v}$ is trivial (that is, a point) for every $v\in V$, it makes sense to write $$\prod_{v\in F\backslash E}X_{0,v} = \prod_{v\not\in E}X_{0,v}.$$ \begin{notation} We denote by $(S^{\pi}\mathscr{X})^E$ the pullback of $S^{\pi}\mathscr{X}$ along $i_E$, given by $$\left(S^{\pi}\widetilde{\mathscr{X}}\right)^E \prod_{v\not\in E}X_{0,v}.$$ \end{notation} By definition, the partition $\left\{(S^{\pi}X)^E,\ E\subseteq F\right\}$ of $S^{\pi}X$ into constructible subsets is such that for each $E\subseteq F$, the restriction $S^{\pi}\mathscr{X}\times_{S^{\pi}X}(S^{\pi}X)^E$ is a trivial fibration above $S^{\pi}\widetilde{\mathscr{X}}\times_{S^{\pi}X}(S^{\pi}X)^E$ with fibre $\prod_{v\not\in E}X_{0,v}.$ \begin{remark} It is clear from the definition of symmetric products of families of classes in $A_X$ that both parts of the definition are compatible, that is, the class 
in $A_{S^{\pi}X}$ of the constructible set $S^{\pi}\mathscr{X}$ from part 1 will be the element constructed in part 2 with the classes of the varieties $X_i$ in~$A_X$. \end{remark} \begin{remark} Definition \ref{addX0} does not depend on the choice of the set $F$. Choosing a bigger set~$F'$ only amounts to refining the partition $\left\{(S^{\pi}X)^E,\ E\subseteq F'\right\}$. Indeed, assume $F' = F\cup\{v_0\}$. Then for all $E\subset F$, $$\left\{D,\ |D|\cap F = E\right\} = \left\{D,\ |D|\cap F' = E\right\}\cup \left\{D,\ |D|\cap F' = E\cup \{v_0\}\right\},$$ and $X_{0,v_0} = 1$, so that $\prod_{v\in F'\backslash E}X_{0,v} = \prod_{v\in F\backslash E}X_{0,v}.$ \end{remark} \begin{remark} Assume that $F = \varnothing$, that is, $X_0=X$ (resp. $X_0 = 1\in A_X$). Then we get $S^{\pi}\mathscr{X} = S^{\pi}\widetilde{\mathscr{X}}$, so our definition is an extension of the previous definition of symmetric products. \end{remark} \begin{remark} For $E = \varnothing$, $(S^{\pi}X)^E$ is an open subset of $S^{\pi}X$. Thus, the constructible set $S^{\pi}\mathscr{X}$ is birationally equivalent to $S^{\pi}\widetilde{\mathscr{X}}\times \prod_{v\in F}X_{0,v}$. \end{remark} \begin{example} Take $\pi=0\in\mathbf{N}^{(I)}$. Then the only non-empty piece of $S^{\pi}\mathscr{X}$ is the one corresponding to $E = \varnothing$, and therefore $$S^{0}\mathscr{X} = \prod_{v\in X}X_{0,v}.$$ \end{example} The fibre of $S^{\pi}\widetilde{\mathscr{X}}$ above a family of effective zero-cycles $D = (D_i)_{i\in I}\in S^{\pi}X$ is given by $$\prod_{i\in I}\ \prod_{v\in |D_i|}X_{i,v}.$$ By definition, replacing $\widetilde{\mathscr{X}}$ by $\mathscr{X}$ consists in replacing this fibre by its product with $\prod_{v\in F\backslash |D|}X_{0,v} = \prod_{v\not\in |D|} X_{0,v}$. 
Thus, the fibre of $S^{\pi}\mathscr{X}$ above $D$ can be written $$\left(\prod_{v\not\in |D|}X_{0,v}\right)\left(\prod_{i\in I}\ \prod_{v\in |D_i|}X_{i,v}\right),$$ which is indeed a finite product since $\cup_{i}|D_i|$ is a finite set, and only a finite number of fibres $X_{0,v}$ are non-trivial. \subsection{Application to Euler products} \begin{definition}\label{constterm} Let $X$ be a variety over a perfect field $k$ and $V$ an open subset of $X$ such that its complement~$F$ is a finite set of closed points. Let $\mathscr{X} = (X_i)_{i\in I_0}$ be a family of classes in $A_X$ such that $X_0$ maps to 1 in $A_V$. We define the zeta-function $$Z_{\mathscr{X}}(\t):= \sum_{\pi\in\mathbf{N}^{(I)}}[S^{\pi}\mathscr{X}]\t^{\pi}\in A_{k}[[\t]],$$ where $S^{\pi}\mathscr{X}$ is understood to be the generalised symmetric product from the previous paragraph, as well as the Euler product notation for $Z_{\mathscr{X}}$: $$\prod_{v\in X}\left(\sum_{i\in I_0}X_{i,v}t_i\right) :=Z_{\mathscr{X}}(\t).$$ \end{definition} \index{Euler product!other constant terms} \begin{lemma} Let $X,V,\mathscr{X}$ be as in Definition~\ref{constterm}. Let $Y$ be a closed subscheme of $X$ and $U$ its open complement. Define $\mathscr{U} = (U_i)_{i\in I_0}$ and $\mathscr{Y} = (Y_i)_{i\in I_0}$ to be the families of elements of $A_U$ (resp. $A_Y$) obtained by restriction from~$\mathscr{X}$. For every $\pi\in\mathbf{N}^{(I)}$ we have the equality $$S^{\pi}\mathscr{X} = \sum_{\pi'\leq \pi}S^{\pi'}\mathscr{U}\boxtimes S^{\pi-\pi'}\mathscr{Y}$$ in $A_{S^{\pi}X}$, where each term on the right-hand side is considered as an element of $A_{S^{\pi}X}$ via the immersion $S^{\pi'}U\times S^{\pi-\pi'}Y \to S^{\pi}X$. In particular, we have the equality $$Z_{\mathscr{X}}(\t) = Z_{\mathscr{U}}(\t)Z_{\mathscr{Y}}(\t)$$ in $A_k[[\t]]$. \end{lemma} \begin{proof} Let $E$ be a subset of $F$. Write $E_U$ (resp. $E_Y, F_U, F_Y$) for the intersection $E\cap U$ (resp. $E\cap Y, F\cap U, F\cap Y$).
Define the families $\mathscr{U} = (U_i)_{i\in I_0}$, $\widetilde{\mathscr{U}} = (U_i)_{i\in I}$, $\mathscr{Y} = (Y_i)_{i\in I_0}$, $\widetilde{\mathscr{Y}} = (Y_i)_{i\in I}$. Applying corollary \ref{generalcut2} and pulling back along $i_E:(S^{\pi}X)^E\to S^{\pi}X$, we get $$\left(S^{\pi}\widetilde{\mathscr{X}}\right)^{E} = \sum_{\pi'\leq \pi} \left(S^{\pi'}\widetilde{\mathscr{U}}\right)^{E_U}\boxtimes \left(S^{\pi-\pi'}\widetilde{\mathscr{Y}}\right)^{E_Y}$$ in $A_{(S^{\pi}X)^E}$. Therefore, writing $$\prod_{v\in F\backslash E} X_{0,v}= \prod_{v\in F_{U}\backslash E_{U}}U_{0,v}\prod_{v\in F_Y\backslash E_Y}Y_{0,v},$$ we get that \begin{eqnarray*}(S^{\pi}\mathscr{X})^{E}&=&\sum_{\pi'\leq \pi}\left((S^{\pi'}\widetilde{\mathscr{U}})^{E_U}\prod_{v\in F_{U}\backslash E_{U}}U_{0,v}\right)\boxtimes \left((S^{\pi-\pi'}\widetilde{\mathscr{Y}})^{E_Y}\prod_{v\in F_Y\backslash E_Y}Y_{0,v}\right) \\ &=& \sum_{\pi'\leq \pi}\left(S^{\pi'}\mathscr{U}\right)^{E_U}\times \left(S^{\pi-\pi'}\mathscr{Y}\right)^{E_Y}\end{eqnarray*} in $A_{(S^{\pi}X)^E}$. On the other hand, denoting by $i_{E_U}$ (resp. $i_{E_Y}$) the immersion $(S^{\pi'}U)^{E_{U}}\to S^{\pi'}U$ (resp. $(S^{\pi-\pi'}Y)^{E_{Y}}\to S^{\pi-\pi'}Y$): \begin{eqnarray*}S^{\pi'}\mathscr{U}\boxtimes S^{\pi-\pi'}\mathscr{Y} &=& \left(\sum_{E_U \subset F_U}(i_{E_U})_!\left((S^{\pi'}\mathscr{U})^{E_U}\right)\right)\boxtimes \left(\sum_{E_Y \subset F_Y}(i_{E_Y})_!(S^{\pi-\pi'}\mathscr{Y})^{E_Y}\right)\\ &=& \sum_{E\subset F}\left(i_{E_U}\boxtimes i_{E_Y}\right)_!\left(\left(S^{\pi'}\mathscr{U}\right)^{E_U}\boxtimes \left(S^{\pi-\pi'}\mathscr{Y}\right)^{E_Y}\right)\end{eqnarray*} in $A_{S^{\pi'}U\times S^{\pi-\pi'}Y}$.
We may conclude by commutativity of the diagram $$\xymatrix{(S^{\pi'}U)^{E_U}\times (S^{\pi-\pi'}Y)^{E_Y}\ \ \ \ar[d] \ar[r]^-{i_{E_U}\boxtimes i_{E_Y}} &\ \ \ S^{\pi'}U\times S^{\pi-\pi'}Y\ar[d]\\ (S^{\pi}X)^E \ar[r]^{i_E}&S^{\pi}X }$$ where the vertical arrow on the right is the immersion from the statement of the lemma, and the vertical arrow on the left is the immersion it induces above $(S^{\pi}X)^{E}$. \end{proof} This lemma immediately implies that the associativity property from Section \ref{eulerprod} extends to this Euler product with constant terms. \section{Grothendieck rings of varieties} \label{sect.grothrings} References for this section are e.g. \cite{CNS} for most classical definitions and properties of Grothendieck rings, and Hrushovski and Kazhdan's \cite{HK}, or Cluckers and Loeser's \cite{CluckLosexp} for Grothendieck rings with exponentials. We mostly follow Chambert-Loir and Loeser's \cite{CL}, which contains a short introduction to Grothendieck rings with exponentials and their main properties. \subsection{Grothendieck semirings} Let $R$ be a noetherian scheme. By a variety over $R$ we mean an $R$-scheme of finite type. For a point $r\in R$, we denote by $\kappa(r)$ its residue field. The \emph{Grothendieck monoid of varieties} \index{Grothendieck monoid!of varieties} $\mathrm{KVar}^+_R$ \index{Kvarp@$\mathrm{KVar}^+$} over $R$ is a commutative monoid defined by generators and relations. Generators are all varieties over $R$, and relations are $$X\sim Y$$ whenever $X$ and $Y$ are isomorphic as $R$-varieties, $$\varnothing \sim 0$$ and $$X \sim Y + U$$ whenever $X$ is an $R$-variety, $Y$ a closed subscheme of $X$ and $U$ its open complement. We will write $[X]$ to denote the class of a variety $X$ in $\mathrm{KVar}^+_R$. The product $[X][Y] = [X\times_{R} Y]$ endows the monoid~$\mathrm{KVar}^+_{R}$ with a semiring structure.
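As a classical illustration of the scissor relation (a standard example, not specific to the discussion above), take $R = \mathrm{Spec}\,k$ for a field $k$ and cut the projective line along a rational point:

```latex
% P^1 cut along a rational point, whose open complement is A^1:
[\mathbf{P}^1_k] = [\mathbf{A}^1_k] + [\mathrm{Spec}\,k],
% A^1 cut along the origin, whose open complement is G_m:
[\mathbf{A}^1_k] = [\mathbf{G}_{m,k}] + [\mathrm{Spec}\,k].
```

Both identities hold already in the semiring $\mathrm{KVar}^+_k$, since they involve no subtraction.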
Two $R$-varieties $X$ and $Y$ have the same class in $\mathrm{KVar}^+_R$ if and only if they are piecewise isomorphic \index{piecewise isomorphic} over $R$, that is, one can partition them into locally closed subsets $X_1,\dots,X_m$ and $Y_1,\ldots,Y_m$ such that for all $i$, $X_i$ and $Y_i$ are isomorphic as $R$-schemes with their induced reduced structures (see \cite{CNS}, Chapter 1, Corollary 1.4.9). The \emph{Grothendieck monoid of varieties with exponentials} \index{Grothendieck monoid!of varieties with exponentials}$\mathrm{KExpVar}^+_R$ \index{Kexpvarp@$\mathrm{KExpVar}^+$}is defined by generators and relations as well. Generators are pairs $(X,f)$, where $X$ is a variety over $R$ and $f\colon X\to\mathbf{A}^1=\mathrm{Spec}\,(\mathbf{Z}[T])$ is a morphism. Relations are the following: $$(X,f)\sim (Y,f\circ u) $$ whenever $X$, $Y$ are $R$-varieties, $f\colon X\to\mathbf{A}^1$ a morphism, and $u\colon Y\to X$ an $R$-isomorphism; $$(\varnothing,f)\sim 0$$ where $f:\varnothing \to \mathbf{A}^1$ is the empty morphism; $$ (X,f)\sim (Y,f|_Y)+(U,f|_U) $$ whenever $X$ is an $R$-variety, $f\colon X\to\mathbf{A}^1$ a morphism, $Y$ a closed subscheme of~$X$ and $U=X\setminus Y$ its open complement; $$ (X\times_\mathbf{Z} \mathbf{A}^1,\mathrm{pr}_2)\sim 0$$ where $X$ is an $R$-variety and $\mathrm{pr}_2$ is the second projection. We will write~$[X,f]$ to denote the class in $\mathrm{KExpVar}^+_R$ of a pair~$(X,f)$. The product $[X,f][Y,g] = [X\times_R Y, f\circ \mathrm{pr}_1+g\circ \mathrm{pr}_2]$ endows $\mathrm{KExpVar}^+_R$ with a semiring structure, the class $[R,0]$ being the unit element.
Generators are all varieties over $R$, and relations are $$X-Y$$ whenever $X$ and $Y$ are isomorphic as $R$-varieties, and $$X - Y - U$$ whenever $X$ is an $R$-variety, $Y$ a closed subscheme of $X$ and $U$ its open complement. It is the group associated to the monoid $\mathrm{KVar}^+_R$. Every constructible set~$X$ over~$R$ (that is, every constructible subset of a variety over $R$) has a class~$[X]$ in the group~$\mathrm{KVar}_R$ (see \cite{CNS}, Chapter 1, Corollaries 1.3.5 and 1.3.6). It will sometimes be denoted by $[X]_R$ when different base schemes are in play. The product $[X][Y] = [X\times_{R} Y]$ endows the group~$\mathrm{KVar}_{R}$ with a ring structure with unit element the class $1 = [R\xrightarrow{\mathrm{id}}R]$. Let $\mathbf{L}$, or $\mathbf{L}_R$, \index{L@$\mathbf{L}$} be the class of the affine line $\mathbf{A}_{R}^1$ in $\mathrm{KVar}_{R}$. We define the Grothendieck ring of varieties \index{Grothendieck ring!of varieties} localised at $\mathbf{L}$ to be $\mathscr{M}_{R} = \mathrm{KVar}_{R}[\mathbf{L}^{-1}]$. The \emph{Grothendieck group of varieties with exponentials} \index{Grothendieck group!of varieties!with exponentials} $\mathrm{KExpVar}_R$ \index{Kexpvar@$\mathrm{KExpVar}$} is defined by generators and relations as well. Generators are pairs $(X,f)$, where $X$ is a variety over $R$ and $f\colon X\to\mathbf{A}^1=\mathrm{Spec}\,(\mathbf{Z}[T])$ is a morphism. Relations are the following: $$(X,f)-(Y,f\circ u) $$ whenever $X$, $Y$ are $R$-varieties, $f\colon X\to\mathbf{A}^1$ a morphism, and $u\colon Y\to X$ an $R$-isomorphism; $$ (X,f)-(Y,f|_Y)-(U,f|_U) $$ whenever $X$ is an $R$-variety, $f\colon X\to\mathbf{A}^1$ a morphism, $Y$ a closed subscheme of~$X$ and $U=X\setminus Y$ its open complement, $$ (X\times_\mathbf{Z} \mathbf{A}^1,\mathrm{pr}_2)$$ where $X$ is an $R$-variety and $\mathrm{pr}_2$ is the second projection.
We will write~$[X,f]$ (or $[X,f]_R$ if we want to keep track of the base scheme $R$) to denote the class in $\mathrm{KExpVar}_R$ of a pair~$(X,f)$. The product $[X,f][Y,g] = [X\times_R Y, f\circ \mathrm{pr}_1+g\circ \mathrm{pr}_2]$ endows $\mathrm{KExpVar}_R$ with a ring structure.\index{Grothendieck ring!of varieties!with exponentials} We will use the notation $f\oplus g$ for the morphism $f\circ \mathrm{pr}_1+g\circ \mathrm{pr}_2$. We denote by $\mathbf{L}$, or $\mathbf{L}_R$,\ \index{L@$\mathbf{L}$} the class of $[\mathbf{A}^1_R,0]$ in $\mathrm{KExpVar}_R$. As for $\mathrm{KVar}_R$, we may invert $\mathbf{L}$, which gives us a ring denoted by $\mathscr{E}xp\mathscr{M}_R$. There are obvious morphisms of semirings $$\mathrm{KVar}^+_R\to \mathrm{KVar}_R$$ and $$\mathrm{KExpVar}^+_R\to \mathrm{KExpVar}_R$$ which identify $\mathrm{KVar}_R$ (resp. $\mathrm{KExpVar}_R$) with the ring obtained from the semiring $\mathrm{KVar}^+_R$ (resp. $\mathrm{KExpVar}^+_R$) by adding negatives. An element in the image of one of these morphisms is said to be \textit{effective}. There are ring morphisms $\mathrm{KVar}_R \to \mathrm{KExpVar}_R$ and $\mathscr{M}_R\to \mathscr{E}xp\mathscr{M}_R$ sending the class of~$X$ to the class~$[X,0]$. According to lemma 1.1.3 in \cite{CL} together with lemma \ref{function.equality} below, they are injective. Let $X$ be an $R$-variety. A piecewise morphism \index{piecewise morphism} $f:X\to \mathbf{A}^1$ is the datum of a partition $X_1,\ldots,X_m$ of $X$ into locally closed subsets and of morphisms $f_i:X_i\to \mathbf{A}^1$ for every~$i$. Any pair $(X,f)$ consisting of an $R$-variety~$X$ and of a piecewise morphism $f\colon X\to\mathbf{A}^1$ has a class $[X,f]$ in $\mathrm{KExpVar}_R$. 
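As a sketch of how the last defining relation is used (a classical computation, not taken from the text above): taking $X = R = \mathrm{Spec}\,k$ in that relation shows that $[\mathbf{A}^1_k,\mathrm{id}] = 0$, so additivity along the origin of $\mathbf{A}^1$ gives

```latex
0 = [\mathbf{A}^1_k,\,\mathrm{id}]
  = [\mathbf{G}_{m,k},\,\mathrm{id}] + [\mathrm{Spec}\,k,\,0],
\qquad\text{hence}\qquad
[\mathbf{G}_{m,k},\,\mathrm{id}] = -[\mathrm{Spec}\,k,\,0] = -1.
```

This is the motivic counterpart of the fact that a non-trivial additive character of a finite field sums to $-1$ over the non-zero elements.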
\begin{remark} The localisation morphism $\mathrm{KVar}_R\to \mathscr{M}_R$ \index{localisation morphism} is not injective in general: it was proved by Borisov in \cite{Borisov} that $\mathbf{L}$ is a zero-divisor in the Grothendieck ring $\mathrm{KVar}_k$ over a field~$k$ of characteristic zero. Borisov's argument also implies that two varieties $X$ and $Y$ having the same class in the Grothendieck ring of varieties are not necessarily piecewise isomorphic, so that the above morphism $\mathrm{KVar}^+_R\to \mathrm{KVar}_R$ is not injective either. \end{remark} Let $A\in\{\mathrm{KVar},\mathscr{M},\mathrm{KExpVar},\mathscr{E}xp\mathscr{M}\}$. For noetherian schemes $R$ and $S$ over some noetherian scheme $T$, there is an \textit{exterior product} \index{exterior product} morphism \begin{equation}\label{exteriorproduct}A_R\otimes_{A_T} A_S \xrightarrow{\boxtimes_T} A_{R\times_T S}\end{equation} of $A_T$-algebras. For $A = \mathrm{KExpVar}$, it is given by sending a pair $([X,f],[Y,g])$ to the class $[X\times_T Y, f\circ\mathrm{pr}_1 + g\circ\mathrm{pr}_2]$. \subsection{Functoriality and interpretation as functions}\label{sect.functoriality} Let $R$ and $S$ be noetherian schemes. A morphism $u:R\to S$ induces morphisms $u_{!}$ and~$u^*$ between the corresponding Grothendieck groups. For example, for $\mathrm{KExpVar}$, these morphisms are defined in the following manner: the \textit{proper pushforward} \index{proper pushforward} $$u_{!}:\mathrm{KExpVar}_R\to \mathrm{KExpVar}_S$$ sends the class $[X,f]_R$ of an $R$-variety $X$ in $\mathrm{KExpVar}_R$ to the class $[X,f]_S$ of the pair $(X,f)$ with $X$ viewed as an $S$-variety through the morphism $u$. This is a morphism of rings in the case when $u$ is an immersion, but not in general.
In the other direction, there is a morphism of rings $$u^*:\mathrm{KExpVar}_S\to \mathrm{KExpVar}_R$$ called the \textit{pullback}, \index{pullback} given by sending the class of a pair $(X,f)$ with $X$ an $S$-variety to the class of the pair $(X\times_S R, f\circ\mathrm{pr}_1)$, where $R$ is viewed as an $S$-variety through the morphism~$u$. In particular, $\mathrm{KExpVar}_R$ may be seen as a $\mathrm{KExpVar}_S$-algebra via this morphism. Elements of the Grothendieck rings over $R$ may be interpreted as~motivic functions on~$R$. \index{motivic function} When $r\in R$ is a point, and $\a\in\mathrm{KExpVar}_R$, we will denote by $\a(r) = r^{*}(\a)$ (or $\a_r$) the image of $\a$ in $\mathrm{KExpVar}_{\kappa(r)}$ by the morphism $r^*$ induced by $r:\mathrm{Spec}\, (\kappa(r))\to R$. This will be interpreted as evaluation of the motivic function $\a$ at $r$. More generally, the above morphism $u^*$ is just composition with $u$. As for $u_!$, it may be interpreted as ``summation over rational points'' in the fibres of $u$, as explained in the following paragraph. We recall Lemma~1.1.8 from \cite{CL}, which, with this functional interpretation, means that a motivic function on~$R$ is determined by its values at points of $R$. \begin{lemma}\label{function.equality} Let $\a\in\mathrm{KVar}_R$ (resp. $\mathscr{M}_R$, $\mathrm{KExpVar}_R$, $\mathscr{E}xp\mathscr{M}_R$). If $\a(r) = 0$ for every $r\in R$, then $\a = 0$. \end{lemma} \subsection{Exponential sum \index{exponential sum} notation}\label{exponentialsumnotation} We start with the following lemma: \begin{lemma} Let $k$ be a finite field and let $\psi:k\to \mathbf{C}^*$ be a non-trivial additive character.
Then there is a motivic measure $$\begin{array}{ccc}\mathrm{KExpVar}_k&\to& \mathbf{C} \\ \left[X,f\right] & \mapsto & \sum_{x\in X(k)}\psi(f(x))\end{array}$$ extending the counting measure $$\begin{array}{ccc}\mathrm{KVar}_{k}&\to& \mathbf{Z} \\ {[X]}&\mapsto& |X(k)| \end{array}$$ \end{lemma} \begin{proof} We may define a group morphism on the free abelian group on the generators $(X,f)$ of $\mathrm{KExpVar}_k$, by sending $(X,f)$ to $\sum_{x\in X(k)}\psi(f(x))$. This passes to the quotient modulo the first relation defining $\mathrm{KExpVar}_k$ because it corresponds to being able to perform changes of variables in such a sum: $$\sum_{x\in X(k)}\psi(f(x)) = \sum_{y\in Y(k)}\psi(f\circ u(y))$$ whenever $u:Y\to X$ is an isomorphism over $k$. It also passes to the quotient by the second relation because the latter corresponds to cutting up the sum into smaller sums: $$\sum_{x\in X(k)}\psi(f(x)) = \sum_{x\in Y(k)} \psi(f(x)) + \sum_{x\in U(k)} \psi(f(x))$$ whenever $Y$ is a closed subscheme of $X$ and $U = X\setminus Y$ its open complement. The third relation is also satisfied, and comes from the fact that if $\psi$ is a non-trivial character, then $$\sum_{x\in \mathbf{A}^1_k(k)}\psi(x) = 0.$$ Finally, we observe that the product of two classes $[X,f]$ and $[Y,g]$ is sent to $$\sum_{x\in X(k)}\sum_{y\in Y(k)}\psi(f(x))\psi(g(y)) = \sum_{(x,y)\in (X\times Y)(k)}\psi(f(x) + g(y))$$ which is exactly the image of the product $[X\times Y,f\oplus g]$ of these classes, so this map is a ring morphism. \end{proof} This lemma is meant as motivation for the fact that, as explained in \cite{CL} 1.1.9, the class of a pair $(X,f)$ in $\mathrm{KExpVar}_k$ must be thought of as an analogue of the exponential sum $$\sum_{x\in X(k)}\psi(f(x))$$ where $k$ is a finite field and $\psi:k\to\mathbf{C}^*$ is a non-trivial additive character.
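As a purely numerical sanity check of this analogy (an illustrative sketch, not part of the text; it uses the standard character $\psi(x) = e^{2\pi i x/p}$ on the prime field $\mathbf{F}_p$): the class $[\mathbf{A}^1,\mathrm{id}]$ corresponds to the full character sum, which vanishes, while $[\mathbf{G}_m,\mathrm{id}]$ corresponds to the sum over $\mathbf{F}_p^*$, which equals $-1$.

```python
import cmath

def additive_character(p):
    # A non-trivial additive character of the prime field F_p:
    # psi(x) = exp(2*pi*i*x/p).
    return lambda x: cmath.exp(2j * cmath.pi * x / p)

def exp_sum(points, psi, f=lambda x: x):
    # Analogue of the motivic measure: sum of psi(f(x)) over a
    # finite set of k-points.
    return sum(psi(f(x)) for x in points)

p = 7  # any prime works here
psi = additive_character(p)

# [A^1, id] corresponds to the sum of psi over all of F_p, which vanishes:
full_sum = exp_sum(range(p), psi)

# Removing the origin, [G_m, id] = [A^1, id] - [pt, 0] corresponds to -1:
gm_sum = exp_sum(range(1, p), psi)
```

Here `full_sum` is numerically zero (the sum of all $p$-th roots of unity) and `gm_sum` is numerically $-1$, matching the relations in the proof above.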
This is why, when doing calculations with it, we will denote the class $[X,f]\in \mathrm{KExpVar}_k$, even for a not necessarily finite field $k$, by $$\sum_{x\in X}\psi(f(x)).$$ As noted in the proof of the lemma, when using this notation, the three relations occurring in the definition of $\mathrm{KExpVar}_k$ translate respectively as the possibility of performing changes of variables, additivity and the property that $$\sum_{x\in k}\psi(x) = 0,$$ the latter being essential to make Fourier analysis work. More generally, let $R$ be a $k$-variety. For any morphism $h:R\to\mathbf{A}^1$ and any element $\theta\in \mathrm{KExpVar}_R$, we may define $$\sum_{r\in R}\theta(r)\psi(h(r)) = \theta \cdot [R,h]$$ where the product is taken in $\mathrm{KExpVar}_R$, and its result is viewed in $\mathrm{KExpVar}_k$. In the case when $h = 0$, the map $\theta\mapsto \sum_{r\in R}\theta(r)$ is exactly $u_{!}$ for $u:R\to k$ the structural morphism. In the same manner, if $u:R\to S$ is a morphism, then for any $\theta\in\mathrm{KExpVar}_R$, the motivic function $u_!\theta$ sends $s\in S$ to $\sum_{r\in u^{-1}(s)}\theta(r)$. \subsection{Grothendieck rings of varieties with action}\label{sect.grothaction} Grothendieck rings with actions were introduced by Denef and Loeser in \cite{DL02} to be able to take into account monodromy actions on the motivic nearby fibre. Other references are \cite{DL01} and \cite{GLM}. Let $k$ be a field of characteristic zero. We start by giving some general definitions about group actions on varieties. Let $G$ be a finite group acting on a variety $X$ over $k$. We say that the action of $G$ is \textit{good} \index{good action} if every $G$-orbit is contained in an affine open subset of $X$. If $X$ and $Y$ are two varieties with good $G$-action, we denote by $X\times^GY$ \index{Xtimes@$X\times^GY$} the quotient of the product $X\times Y$ by the equivalence relation $(gx,y)\sim (x,gy)$.
It exists as a variety, and there is a good $G$-action on $X\times^GY$ induced by the action of $G$ on the first factor of $X\times Y$. Let $S$ be a variety over $k$ and $X$ a variety over $S$. When we speak of a good action of~$G$ on the $S$-variety $X$ we require it to leave the fibres of the structural morphism $X\to S$ invariant, i.e., the structural morphism must be equivariant if $S$ is equipped with the trivial $G$-action. For $n\geq 1$, let $\mu_n = \mathrm{Spec}\,(k[x]/(x^n-1))$ \index{mun@$\mu_n$} be the group scheme of $n$-th roots of unity, and let $\hat{\mu}$ \index{muh@$\hat{\mu}$} be the projective limit $\varprojlim\mu_n$ of the projective system with transition morphisms $\mu_{nd}\to \mu_n$, $x\mapsto x^{d}$. For any integer $r\geq 1$, a good $\hat{\mu}^r$-action \index{good $\hat{\mu}^r$-action} on a variety $X$ is an action of $\hat{\mu}^r$ that factors through a good action of $\mu_n^r$ for some integer $n\geq 1$. Let $S$ be a variety over $k$ and $r\geq 1$ an integer. 
The Grothendieck group of varieties with $\hat{\mu}^r$-action \index{Grothendieck group!of varieties!with $\hat{\mu}^r$-action} $\mathrm{KVar}^{\hat{\mu}^r}_S$ \index{Kvarmu@$\mathrm{KVar}^{\hat{\mu}^r}$} is defined in a similar way to $\mathrm{KVar}_S$: generators are pairs $(X,\sigma)$ where $X$ is a variety over $S$ and $\sigma$ a good $\hat{\mu}^r$-action on $X$, and relations are $$(X,\sigma)-(Y,\tau)$$ whenever $X$ and $Y$ are $S$-varieties with good $\hat{\mu}^r$-actions $\sigma,\tau$ such that there exists an equivariant $S$-isomorphism $u:(X,\sigma)\to (Y,\tau)$, $$(X,\sigma) - (Y,\sigma_{|Y}) - (U,\sigma_{|U})$$ where $X$ is a variety over $S$ with good $\hat{\mu}^r$-action $\sigma$, $Y$ a closed $\hat{\mu}^r$-invariant subscheme of~$X$, and $U$ its open complement, as well as an additional relation saying that \begin{equation}\label{actionrelation}(X\times \mathbf{A}^n_k,\sigma) - (X\times \mathbf{A}^n_k,\sigma')\end{equation} whenever $\sigma$ and $\sigma'$ are two liftings of the same $\hat{\mu}^r$-action on $X$ to an affine action on $X\times \mathbf{A}^n_k$ (that is, a good action the restriction of which to all fibres of the affine bundle $X\times\mathbf{A}^n_k\to X$ is affine). The fibred product with the diagonal $\hat{\mu}^r$-action endows $\mathrm{KVar}_S^{\hat{\mu}^r}$ with a ring structure. \begin{remark} We won't use this product much, because in the context of vanishing cycles, products defined via convolution \index{convolution product} are more relevant, see sections \ref{sect.convolution}, \ref{section.grothaffineline}. \end{remark} The class of the pair $(X,\sigma)$ will be denoted $[X,\sigma]$, or even $[X,\hat{\mu}^r]$ or $[X]$ when it is clear from the context what $\sigma$ is. We denote by $\mathbf{L}_S$, or $\mathbf{L}$, \index{L@$\mathbf{L}$} the class of $\mathbf{A}^1_S$ with the trivial $\hat{\mu}^r$-action.
The last relation (\ref{actionrelation}) says in particular that $\mathbf{L}$ is equal to the class of the affine space endowed with any affine $\hat{\mu}^r$-action. As for $\mathrm{KVar}_S$, we may define the ring $\mathscr{M}_S^{\hat{\mu}^r} = \mathrm{KVar}_S^{\hat{\mu}^r}[\mathbf{L}^{-1}]$. \index{Mhat@$\mathscr{M}^{\hat{\mu}^r}$} \begin{remark}[Trivial action and forgetful morphism] There is a ring morphism \begin{equation}\label{actioninjection}\mathrm{KVar}_S\to \mathrm{KVar}_{S}^{\hat{\mu}^r}\end{equation} sending the class of a variety $X$ to the class of $X$ endowed with the trivial action. There is also a forgetful ring morphism $$\mathrm{KVar}_{S}^{\hat{\mu}^r}\to \mathrm{KVar}_{S}$$ which sends a class $[X,\sigma]$ of a variety~$X$ with action~$\sigma$ to the class~$[X]$, well defined because all relations defining $\mathrm{KVar}_S^{\hat{\mu}^r}$ go to zero in $\mathrm{KVar}_S$. The composition of the two gives the identity of $\mathrm{KVar}_S$, which shows that (\ref{actioninjection}) is injective. This remains valid with $\mathrm{KVar}$ replaced by $\mathscr{M}$. \end{remark} \begin{remark} Since $\mathrm{KVar}_S$ is a $\mathrm{KVar}_k$-module, the morphism (\ref{actioninjection}) endows $\mathrm{KVar}_{S}^{\hat{\mu}^r}$ with a $\mathrm{KVar}_k$-module structure. It is given, for any $k$-variety $X$ and any $S$-variety $Y$ with $\hat{\mu}^r$-action $\sigma$, by $[X][Y,\sigma] = [X\times_k Y, \sigma']$, where $\sigma'$ acts trivially on $X$ and by $\sigma$ on $Y$. \end{remark} If $u:R\to S$ is a morphism of $k$-varieties, there are, as in \ref{sect.functoriality}, a group morphism \index{proper pushforward} $$u_!:\mathrm{KVar}_R^{\hat{\mu}^r}\to\mathrm{KVar}_S^{\hat{\mu}^r}$$ and a ring morphism \index{pullback} $$u^*:\mathrm{KVar}_S^{\hat{\mu}^r}\to \mathrm{KVar}_{R}^{\hat{\mu}^r},$$ defined similarly. These morphisms exist also when $\mathrm{KVar}$ is replaced with $\mathscr{M}$.
Exterior products \index{exterior product} in the flavour of (\ref{exteriorproduct}) also exist for Grothendieck rings of varieties with action. For a $k$-variety $T$ and $T$-varieties $R$ and $S$, the one we are going to use is a morphism $$\mathrm{KVar}_R^{\hat{\mu}^r}\otimes_{\mathrm{KVar}_T}\mathrm{KVar}_S^{\hat{\mu}^r} \xrightarrow{\boxtimes_T}\mathrm{KVar}_{R\times_TS}^{\hat{\mu}^r\times \hat{\mu}^r}$$ of $\mathrm{KVar}_T$-algebras sending a pair $([X,\sigma],[Y,\tau])$ to the class of $X\times_TY$ endowed with the product action $\sigma\times_T\tau$ given by $(s,t).(x,y) = (\sigma(s)(x),\tau(t)(y))$. \subsection{The dimensional filtration}\index{dimensional filtration}\index{filtration!dimensional} A reference for this section is \cite{CNS}, chapter 1, sections 4.1 and 4.2. \begin{definition} Let $S$ be a scheme. The relative dimension \index{relative dimension}\index{dimension!relative} of a variety $X$ over $S$, denoted by $\dim_S(X)$, \index{dim@$\dim_S$} is defined to be $$\dim_SX := \sup_{s\in S} \dim_{\kappa(s)}X_s.$$ \end{definition} Let $S$ be a noetherian scheme. The above notion of relative dimension gives rise to a natural filtration on the ring $\mathrm{KVar}_{S}$: we define $F_d\mathrm{KVar}_S$ \index{fdk@$F_d\mathrm{KVar}$} to be the subgroup of $\mathrm{KVar}_S$ generated by classes of varieties of relative dimension $\leq d$. We may now define a function $$\dim_S:\mathrm{KVar}_S\to \mathbf{Z}\cup\{-\infty\}$$ \index{dim@$\dim_S$} by sending a class $\a$ to $\inf\{d\in\mathbf{Z},\ \a\in F_d\mathrm{KVar}_S\}$. This function has the following elementary properties: Let $\a,\a'\in \mathrm{KVar}_S$. Then \begin{enumerate}[$(i)$] \item $\dim_S(0) = -\infty$. 
\item $\dim_S(\a + \a') \leq \max\{\dim_S(\a), \dim_S(\a')\},$ with equality whenever $\dim_S(\a)\neq\dim_S(\a')$. \item $\dim_S(\a\a')\leq \dim_S(\a) + \dim_S(\a').$ \end{enumerate} Moreover, using the Euler-Poincaré polynomial, one may prove that for every variety $X$ over $S$, $\dim_S([X]) = \dim_SX$ (see lemma 4.1.3 in \cite{CNS}, chapter 1). For every $d\in\mathbf{Z}$, we define $F_d\mathscr{M}_S$ \index{fdm@$F_d\mathscr{M}$} to be the subgroup of $\mathscr{M}_S$ generated by elements of the form $[X]\mathbf{L}^{-n}$ where $X$ is an $S$-variety, $n\in\mathbf{Z}$ is an integer and $\dim_SX - n\leq d$. This gives us an increasing and exhaustive filtration on the ring $\mathscr{M}_S$. In the same manner as above, we may define a function $$\dim_S:\mathscr{M}_S\to \mathbf{Z}\cup\{-\infty\}$$\index{dim@$\dim_S$} satisfying the same properties. The same definition gives rise to dimensional filtrations and functions on Grothendieck rings with actions or with exponentials. Thus, for example, on $\mathrm{KExpVar}_S$, we define $F_d\mathrm{KExpVar}_S$ \index{fde@$F_d\mathrm{KExpVar}$} as the subgroup of $\mathrm{KExpVar}_S$ generated by classes of the form $[X,f]$ where~$X$ is of relative dimension $\leq d$. \subsection{Some presentations of Grothendieck groups in characteristic zero}\index{presentation}\index{Grothendieck group!generators} We start by recalling a classical result (it follows for example from \cite{Bittner}, Theorem 3.1). \begin{lemma}\label{smoothpropergensfield} Let $X$ be a variety over a field $k$ of characteristic zero. The Gro\-then\-dieck group $\mathrm{KVar}_X$ is generated by classes of the form~$[Y\xrightarrow{p} X]$ where $p$ is proper and~$Y$ is smooth over $k$. \end{lemma} This may be generalised in the following form: \begin{lemma}\label{smoothpropergens} Let $S$ be a variety over a field $k$ of characteristic zero, and $X$ a variety over $S$ with structural morphism $u:X\to S$.
Then the group $\mathrm{KVar}_{X}$ is generated by classes $[Y\xrightarrow{p} X]$ such that $U =u\circ p(Y)$ is a locally closed subset of $S$, $Y$ is smooth over $U$ and such that the morphism $p\times_S\mathrm{id}_U: Y\times_S U \to X\times_S U$ is proper. \end{lemma} \begin{proof} First of all, $\mathrm{KVar}_{X}$ is generated by classes of quasi-projective morphisms. Let therefore $p:Y\to X$ be a quasi-projective morphism, and define $T$ to be the closure of $u\circ p(Y)$ in $S$. We will argue by induction on $\dim T$. Let $T_1,\ldots,T_n$ be the irreducible components of $T$. For each $i\in\{1,\ldots,n\}$ it suffices to write $[(u\circ p)^{-1}(T_i)\to X]$ as a sum of classes as in the statement of the lemma, and use induction on $\dim T$ to conclude. We may therefore assume that $T$ is irreducible. Compactify $p$ into a proper morphism $\bar{p}:\bar{Y}\to X$, where $\bar{Y}$ is a variety containing $Y$ as a dense open subset. The closure of $u\circ \bar{p}(\bar{Y})$ in $S$ is again $T$. An additional induction on $\dim Y$ (with $\dim T$ fixed), initialised at $\dim Y = \dim T$ (in which case we use the induction hypothesis for $\dim T-1$) then allows us to work with $\bar{p}$ instead of $p$. Let $\eta$ be the generic point of $T$. Consider a resolution of singularities $\widetilde{\bar{Y}_{\eta}}\to \bar{Y}_{\eta}$ above $\kappa(\eta)$. Above some dense open subset $T'$ of $T$, this spreads out to a proper birational morphism $f:\widetilde{\bar{Y}}\to \bar{Y}\times_T T'$ with $\widetilde{\bar{Y}}$ smooth over $T'$. If $\dim T=0$, then $T'=T$, otherwise by induction on $\dim T$, we may replace $\bar{p}:\bar{Y}\to X$ with its restriction $p':\bar{Y}\times_T T'\to X$. Finally, by induction on $\dim Y$ (and on $\dim T$ if $\dim Y = \dim T$), the morphism $p':\bar{Y}\times_T T'\to X$ may be replaced with the composition $g = p'\circ f:\widetilde{\bar{Y}}\to X$. The morphism $g$ then satisfies the condition in the statement of the lemma.
Indeed, the morphism $u\circ g:\widetilde{\bar{Y}}\to T'$ is smooth, and therefore open, so that $U:=u\circ g\left(\widetilde{\bar{Y}}\right)$ is open in its closure and therefore is locally closed in $S$, and~$\widetilde{\bar{Y}}$ is smooth over~$U$. Finally, $g\times_S\mathrm{id}_U$ is the composition of the morphisms $\widetilde{\bar{Y}}\times_{S}U\to \bar{Y}\times_S U$ and $\bar{Y}\times_S U\to X\times_S U$ which are both proper by base change. \end{proof} \section{Motivic vanishing cycles}\label{sect.motvancycles} Throughout this section, $k$ will be a field of characteristic zero. \subsection{Convolution}\label{sect.convolution} In this section we recall the definition of Looijenga's convolution from \cite{Loo}, in its generalised form due to Guibert, Loeser and Merle (\cite{GLM}), following section 2.3 in \cite{LS}. \begin{notation} Let $n\geq 1$ be an integer. We denote by $F_0^n$ (resp. $F_1^n$) the Fermat curve with equation $x^n +y^n = 0$ (resp. $x^n + y^n = 1$) inside $\mathbf{G}_m\times \mathbf{G}_m$ (with coordinates $x,y$), with the obvious $\mu_n\times \mu_n$-action. \index{Fermat curve}\index{F0@$F_0^n$}\index{F1@$F_1^n$} \end{notation} Let $S$ be a variety over a field $k$ of characteristic zero. There is a $\mathrm{KVar}_k$-linear morphism $$\Psi:\mathrm{KVar}_{S}^{\hat{\mu}\times \hat{\mu}}\to \mathrm{KVar}_S^{\hat{\mu}}$$\index{Psi@$\Psi$} given in the following manner: let $Z\xrightarrow{p}S$ be an $S$-variety with a $\hat{\mu}\times\hat{\mu}$-action. Then there is an integer $n$ such that this action factors through $\mu_n\times \mu_n$.
One defines $$\Psi(Z\xrightarrow{p}S) = [Z\times^{\mu_n\times\mu_n}F_0^n\xrightarrow{p\circ\mathrm{pr}_1} S] - [Z\times^{\mu_n\times\mu_n}F_1^n\xrightarrow{p\circ\mathrm{pr}_1} S].$$ This gives an element of $\mathrm{KVar}_S^{\hat{\mu}}$: indeed, as we said in section \ref{sect.grothaction}, for $i=0,1$, the variety $$Z\times^{\mu_n\times\mu_n}F_i^n$$ is endowed with an action of $\mu_n$, given by $$t.(z,x,y) = ((t,t).z,x,y) = (z, tx, ty).$$ As explained in the discussion after remark 2.13 in \cite{LS}, this construction does not depend on the integer $n$, and therefore $\Psi$ is well-defined. By remark 2.8 in \cite{LS}, for every morphism $f:R\to S$ of $k$-varieties, $\Psi$ commutes with $f_!$ and $f^*$. We may define the convolution product \index{convolution product} on $\mathrm{KVar}_{S}^{\hat{\mu}}$ by the $\mathrm{KVar}_k$-linear composition $$\ast:\mathrm{KVar}_{S}^{\hat{\mu}} \otimes_{\mathrm{KVar}_k}\mathrm{KVar}_S^{\hat{\mu}}\xrightarrow{\boxtimes_S}\mathrm{KVar}_{S}^{\hat{\mu}\times\hat{\mu}}\xrightarrow{\Psi}\mathrm{KVar}_{S}^{\hat{\mu}},$$ \index{ast@$\ast$} (see definition of the exterior product $\boxtimes_S$ at the end of section \ref{sect.grothaction}) so that for two $S$-varieties $X,Y$ with actions $\sigma$ and $\tau$, we have $$[X,\sigma] \ast [Y,\tau] = \Psi([X\times_SY,\sigma\times_S \tau]).$$ By proposition 2.12 in \cite{LS} (or proposition 5.2 in \cite{GLM}), for any $k$-variety $S$, the convolution product~$\ast$ endows $\mathrm{KVar}_S^{\hat{\mu}}$ with an associative, commutative $\mathrm{KVar}_k$-algebra structure, with unit element the class of the identity $[S\xrightarrow{\mathrm{id}}S]$ (with trivial $\hat{\mu}$-action). From now on, the ring structure on $\mathrm{KVar}_S^{\hat{\mu}}$ we are going to consider will be the convolution product $\ast$. Thus, in what follows, both $\mathrm{KVar}_S^{\hat{\mu}}$ and $(\mathrm{KVar}_S^{\hat{\mu}},\ast)$ denote the same thing. 
The $n$-fold convolution product of $\mathbf{L}_S$ with itself is $\mathbf{L}^n_S$, and therefore localising the $\mathrm{KVar}_k$-algebra $(\mathrm{KVar}_S^{\hat{\mu}},\ast)$ at the multiplicative set $\{1,\mathbf{L}_S,\mathbf{L}_S\ast\mathbf{L}_S,\ldots \}$ yields an $\mathscr{M}_k$-algebra $(\mathscr{M}_S^{\hat{\mu}},\ast)$ with same underlying $\mathscr{M}_k$-module structure as the usual localisation $\mathscr{M}_S^{\hat{\mu}}$ of $\mathrm{KVar}_S^{\hat{\mu}}$. By $\mathscr{M}_k$-linearity, $\Psi$ and~$*$ extend to localised Grothendieck rings. Since $\Psi$ commutes with pullbacks, for any morphism $f:R\to S$ of $k$-varieties, there are pullback morphisms $$f^*:\mathrm{KVar}_S^{\hat{\mu}}\to \mathrm{KVar}_{R}^{\hat{\mu}}$$ (resp. $f^*:\mathscr{M}_S^{\hat{\mu}}\to \mathscr{M}_{R}^{\hat{\mu}}$) of $\mathrm{KVar}_k$-algebras (resp. of $\mathscr{M}_k$-algebras). \begin{remark}\label{asttrivialaction} By remark 2.11 in \cite{LS}, whenever the action $\tau$ on $Y$ is trivial, we have $$[X,\sigma] \ast [Y,\tau] = [X,\sigma][Y,\tau],$$ where on the right we consider the usual product on $\mathrm{KVar}_S^{\hat{\mu}}$ from section \ref{sect.grothaction}. In particular, there is a ring morphism $$\mathrm{KVar}_S\to(\mathrm{KVar}^{\hat{\mu}}_S,\ast)$$ sending the class of an $S$-variety $X$ to the class $[X,1]$ where $1$ denotes the trivial action. Such a morphism exists also at the level of localised Grothendieck rings. \end{remark} \subsection{Grothendieck rings over the affine line}\label{section.grothaffineline} Let $S$ be a $k$-variety. By section \ref{sect.convolution}, the Grothendieck group $\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}}$ has a natural $(\mathrm{KVar}_S^{\hat{\mu}},\ast)$-algebra structure given by the ring morphism $\epsilon_S^*:(\mathrm{KVar}_{S}^{\hat{\mu}},\ast)\to (\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}},\ast)$ where $\epsilon_S:\mathbf{A}^1_S\to S$ is the structural morphism. 
On the other hand, there is a well-defined addition morphism \index{addition morphism} $$\mathrm{add}:\mathbf{A}^1_S\times_S\mathbf{A}^1_S\to \mathbf{A}^1_S$$\index{add@$\mathrm{add}$} on the group scheme $\mathbf{A}^1_S$, and this endows $\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}}$ with a product $\star$ given by the composition \index{star@$\star$} \index{convolution product!over affine line} $$\star:\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}}\otimes_{\mathrm{KVar}_S}\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}}\xrightarrow{\boxtimes_S}\mathrm{KVar}_{\mathbf{A}^1_S\times_S\mathbf{A}^1_S}^{\hat{\mu}\times\hat{\mu}}\xrightarrow{\mathrm{add}_!}\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}\times\hat{\mu}}\xrightarrow{\Psi}\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}},$$ so that, for $\mathbf{A}^1_S$-varieties $X$ and $Y$ with $\hat{\mu}$-actions $\sigma$ and $\tau$, we have $$[X\xrightarrow{f}\mathbf{A}^1_S,\sigma]\star [Y\xrightarrow{g} \mathbf{A}^1_S,\tau] = \Psi([X\times_S Y \xrightarrow{f\oplus g} \mathbf{A}^1_S,\sigma\times_S \tau])$$ where $f\oplus g = f\circ \mathrm{pr}_1 + g\circ \mathrm{pr}_2$. \begin{lemma}\label{iotaSmorphism} Let $\iota_S:S\to \mathbf{A}^1_S$ be the morphism given by the zero-section of the trivial affine bundle $\mathbf{A}^1_S\to S$. Then $$\begin{array}{rcc}(\iota_S)_!:\ (\mathrm{KVar}_S^{\hat{\mu}},\ast) & \to & (\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}},\star)\\ \left[ X\xrightarrow{u} S\right] & \mapsto & [X \xrightarrow{(u,0)} \mathbf{A}^1_S] \end{array} $$ is a ring morphism. \end{lemma} \begin{proof} Let $X\xrightarrow{u}S$ and $Y\xrightarrow{v}S$ be two $S$-varieties and let $\sigma$ (resp. $\tau$) be a $\hat{\mu}$-action on $X$ (resp. $Y$). 
Using the fact that $\Psi$ commutes with proper pushforwards, we get \begin{eqnarray*}(\iota_S)_!([X\xrightarrow{u}S,\sigma]\ast[Y\xrightarrow{v}S,\tau]) & =& (\iota_S)_!\circ\Psi([X\times_S Y\xrightarrow{u\circ \mathrm{pr}_1}S,\sigma\times_S\tau]) \\ & = & \Psi\circ(\iota_S)_!([X\times_S Y\xrightarrow{u\circ \mathrm{pr}_1}S,\sigma\times_S\tau])\\ & = & \Psi([X\times_S Y\xrightarrow{(u\circ \mathrm{pr}_1,0)}\mathbf{A}^1_S,\sigma\times_S\tau])\\ & = & \Psi\circ\mathrm{add}_!([X\times_SY\xrightarrow{(u\circ\mathrm{pr}_1,0)}\mathbf{A}^1_S\times_S\mathbf{A}^1_S,\sigma\times_S\tau])\\ & = & \Psi\circ \mathrm{add}_!([X\xrightarrow{(u,0)}\mathbf{A}^1_S,\sigma]\boxtimes_S[Y\xrightarrow{(v,0)}\mathbf{A}^1_S,\tau])\\ & = & [X\xrightarrow{(u,0)}\mathbf{A}^1_S,\sigma]\star [Y\xrightarrow{(v,0)}\mathbf{A}^1_S,\tau]\\ & = & (\iota_S)_!([X\xrightarrow{u}S,\sigma])\star(\iota_S)_!([Y\xrightarrow{v}S,\tau]). \end{eqnarray*} \end{proof} Thus, there is another $\mathrm{KVar}_S^{\hat{\mu}}$-algebra structure on $\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}}$, given by $(\iota_S)_!$, which becomes a ring morphism once the product $\ast$ is replaced by $\star$. Denote by $\widetilde{\mathrm{KVar}}_{\mathbf{A}^1_S}^{\hat{\mu}}$ the $\mathrm{KVar}_S^{\hat{\mu}}$-module structure on $\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}}$ given by $(\iota_S)_!$. \begin{lemma}\label{identitymorphism} The identity $\mathrm{id}: \mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}}\to \widetilde{\mathrm{KVar}}_{\mathbf{A}^1_S}^{\hat{\mu}}$ is an isomorphism of $\mathrm{KVar}_S^{\hat{\mu}}$-modules.
\end{lemma} \begin{proof} For all varieties $X,Y$ over $S$ with $\hat{\mu}$-actions $\sigma,\tau$ and any morphism $f:Y\to \mathbf{A}^1_S$, we have (denoting by $u$ the structural morphism $u:X\to S$ and by 1 the trivial $\hat{\mu}$-action on $\mathbf{A}^1_S$) \begin{eqnarray*}\epsilon_S^{*}([X\xrightarrow{u}S,\sigma])\ast[Y\xrightarrow{f} \mathbf{A}^1_S,\tau] &=& \Psi([X\times_S \mathbf{A}^1_S\xrightarrow{\mathrm{pr}_2}\mathbf{A}^1_S,\sigma\times_S1]\times_{\mathbf{A}^1_S} [Y\xrightarrow{f} \mathbf{A}^1_S,\tau]) \\ & = & \Psi([(X\times_S \mathbf{A}^1_S)\times_{\mathbf{A}^1_S}Y\xrightarrow{f\circ \mathrm{pr}_2}\mathbf{A}^1_S,(\sigma\times_S1)\boxtimes_{\mathbf{A}^1_S}\tau])\\ &=& \Psi([X\times_S Y\xrightarrow{f\circ\mathrm{pr}_2}\mathbf{A}^1_S,\sigma\times_S\tau]) \\ & = & \Psi\circ\mathrm{add}_!([X\xrightarrow{(u,0)}\mathbf{A}^1_S,\sigma]\boxtimes_S[Y\xrightarrow{f}\mathbf{A}^1_S,\tau])\\ & = & [X\xrightarrow{(u,0)}\mathbf{A}^1_S,\sigma]\star [Y\xrightarrow{f}\mathbf{A}^1_S,\tau]\\ &=& (\iota_S)_!([X\xrightarrow{u}S,\sigma])\star[Y\xrightarrow{f}\mathbf{A}^1_S,\tau].\end{eqnarray*}\end{proof} Thus in fact these two $\mathrm{KVar}_S^{\hat{\mu}}$-module structures are the same, and we will denote this $\mathrm{KVar}_{S}^{\hat{\mu}}$-module by~$\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}}$. To distinguish between the two $\mathrm{KVar}_S^{\hat{\mu}}$-algebra structures, we will write them $(\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}},\ast)$ and $(\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}},\star)$. \begin{lemma}\label{epsilonmorphism} The pushforward map $$(\epsilon_S)_!: (\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}},\star)\to (\mathrm{KVar}_{S}^{\hat{\mu}},\ast)$$ is a morphism of $\mathrm{KVar}_{S}^{\hat{\mu}}$-algebras. \end{lemma} \begin{proof} First of all, we have $(\epsilon_S)_!\circ (\iota_S)_! = \mathrm{id}_{\mathrm{KVar}_S^{\hat{\mu}}}$. 
Moreover, for all varieties $X,Y$ over $S$ with morphisms $f:X\to \mathbf{A}^1_S$ and $g:Y\to\mathbf{A}^1_S$ and $\hat{\mu}$-actions $\sigma,\tau$, we have, using that $\Psi$ commutes with proper pushforwards: \begin{eqnarray*}(\epsilon_S)_!([X\xrightarrow{f}\mathbf{A}^1_S,\sigma]\star[Y\xrightarrow{g}\mathbf{A}^1_S,\tau]) &= &(\epsilon_S)_!\Psi([X\times_S Y \xrightarrow{f\oplus g} \mathbf{A}^1_S,\sigma\times_S\tau])\\ & = &\Psi([X\times_S Y\to S,\sigma\times_S\tau])\\& =& [X\to S,\sigma]\ast[Y\to S,\tau]\\ &=& (\epsilon_S)_!([X\xrightarrow{f}\mathbf{A}^1_S,\sigma])\ast(\epsilon_S)_!([Y\xrightarrow{g}\mathbf{A}^1_S,\tau]).\end{eqnarray*} \end{proof} \begin{remark}[Trivial actions]\label{convvtrivialaction} From remark 2.15 in \cite{LS}, we see that if $f:X\to\mathbf{A}^1_S$ and $g:Y\to \mathbf{A}^1_S$ are $\mathbf{A}^1_S$-varieties with $\hat{\mu}$-actions $\sigma,\tau$ such that $\tau$ is the trivial action, then \begin{equation}\label{convvtrivialactionequation}[X\xrightarrow{f}\mathbf{A}^1_S]\star[Y\xrightarrow{g}\mathbf{A}^1_S] = [X\times_S Y\xrightarrow{f\oplus_S g}\mathbf{A}^1_S],\end{equation} where $f\oplus_S g$ stands for the morphism $\mathrm{add}\circ (f\times_Sg)$. We denote again by $\star$ the restriction of $\star$ to the image of the inclusion $\mathrm{KVar}_{\mathbf{A}^1_S}\to\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}}$, given by formula (\ref{convvtrivialactionequation}). Thus, for Grothendieck rings without actions, lemma \ref{iotaSmorphism} boils down to the statement that $$(\iota_S)_!:\mathrm{KVar}_{S}\to (\mathrm{KVar}_{\mathbf{A}^1_S},\star)$$ is a ring morphism. \end{remark} \begin{remark}[Convolution induces product on Grothendieck ring with exponentials] \label{expquotientmorphism} By remark \ref{convvtrivialaction} and by the definition of the Grothendieck ring with exponentials $\mathrm{KExpVar}_S$, the quotient map $q:(\mathrm{KVar}_{\mathbf{A}^1_S},\star)\to \mathrm{KExpVar}_S$ is a morphism of $\mathrm{KVar}_S$-algebras.
\end{remark} \subsection{Localised Grothendieck rings over the affine line} We have $$\epsilon_S^*(\mathbf{L}_S) = \mathbf{L}_{\mathbf{A}^1_S} = [\mathbf{A}^1_S\times_S \mathbf{A}^1_S \xrightarrow{\mathrm{pr}_2} \mathbf{A}^1_S],$$ whereas $$(\iota_S)_!(\mathbf{L}_S) = [\mathbf{A}^1_S\xrightarrow{(\epsilon_S,0)}\mathbf{A}^1_S],$$ which we denote by $\mathbf{L}_0\in\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}}$. Define $(\widetilde{\mathscr{M}}_{\mathbf{A}^1_S}^{\hat{\mu}},\star)$ as the $\mathscr{M}_S$-algebra obtained by localising $(\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}},\star)$ with respect to the multiplicative set $\{1,\mathbf{L}_0,\mathbf{L}_0\star\mathbf{L}_0,\ldots\}$. Then we have a canonical isomorphism of $\mathscr{M}_S$-algebras $$\mathscr{M}_S\otimes_{\mathrm{KVar}_S}(\mathrm{KVar}^{\hat{\mu}}_{\mathbf{A}^1_S},\star)\xrightarrow{\sim} (\widetilde{\mathscr{M}}^{\hat{\mu}}_{\mathbf{A}^1_S},\star).$$ Thus, base change along $\mathrm{KVar}_S\to \mathscr{M}_S$ of lemma \ref{identitymorphism} gives us: \begin{lemma}\label{mkiso} There is a canonical isomorphism $\mathscr{M}_{\mathbf{A}^1_S}^{\hat{\mu}}\xrightarrow{\sim}\widetilde{\mathscr{M}}_{\mathbf{A}^1_S}^{\hat{\mu}}$ of $\mathscr{M}_S$-modules, given by $\frac{a}{\mathbf{L}_{\mathbf{A}^1_S}^n}\mapsto \frac{a}{\mathbf{L}_0^n}$ for all $a\in\mathrm{KVar}_{\mathbf{A}^1_S}^{\hat{\mu}}$ and $n\geq 1$. \end{lemma} Moreover, lemma \ref{epsilonmorphism} and remark \ref{expquotientmorphism} remain true with $\mathrm{KVar}$ replaced by $\widetilde{\mathscr{M}}$ and $\mathrm{KExpVar}$ replaced with $\mathscr{E}xp\mathscr{M}$. \subsection{Rational series} Following 2.8 in \cite{GLM}, for any $k$-variety $X$, define $\mathscr{M}_X^{\hat{\mu}}[[T]]_{\mathrm{rat}}$ to be the $\mathscr{M}_X^{\hat{\mu}}$-subalgebra of $\mathscr{M}_X^{\hat{\mu}}[[T]]$ generated by rational series of the form $p_{e,i}(T) = \frac{\mathbf{L}^{e}T^{i}}{1-\mathbf{L}^{e}T^{i}}$ where $e\in \mathbf{Z}$ and $i>0$. 
\index{MX@$\mathscr{M}_X^{\hat{\mu}}[[T]]_{\mathrm{rat}}$} There is a unique morphism of $\mathscr{M}_X^{\hat{\mu}}$-algebras $$\lim_{T\to \infty}:\mathscr{M}_X^{\hat{\mu}}[[T]]_{\mathrm{rat}}\longrightarrow \mathscr{M}_X^{\hat{\mu}}$$ such that $$\lim_{T\to \infty}\ p_{e,i}(T) = -1,$$ for any $e\in\mathbf{Z}$ and $i>0$. More generally, for a $k$-variety $S$ and for a variety $X$ over $S$, we define $\mathscr{M}_X^{\hat{\mu}}[[T]]_{\mathrm{rat},S}$ to be the $\mathscr{M}_S^{\hat{\mu}}$-subalgebra of $\mathscr{M}_X^{\hat{\mu}}[[T]]$ generated by rational series of the form $p_{e,i}(T) = \frac{\mathbf{L}^{e}T^{i}}{1-\mathbf{L}^{e}T^{i}}$ where $e\in \mathbf{Z}$ and $i>0$. \index{MX@$\mathscr{M}_X^{\hat{\mu}}[[T]]_{\mathrm{rat},S}$} There is a unique morphism of $\mathscr{M}_S^{\hat{\mu}}$-algebras $$\lim_{T\to \infty}:\mathscr{M}_X^{\hat{\mu}}[[T]]_{\mathrm{rat},S}\longrightarrow \mathscr{M}_X^{\hat{\mu}}$$ such that $$\lim_{T\to \infty}\ p_{e,i}(T) = -1,$$ for any $e\in\mathbf{Z}$ and $i>0$. \subsection{Motivic vanishing cycles}\label{sect.vanrecall} In \cite{DL98,DL99,DL01,DL02} Denef and Loeser defined and studied the notions of \emph{motivic nearby fibre} and \emph{motivic vanishing cycles}. For a smooth connected variety $X$ over~$k$ of dimension $d$ and a morphism $f:X\to \mathbf{A}^1_k$, the motivic nearby fibre $\psi_{f}$ of $f$ at $0\in \mathbf{A}^1_k$ is an element of $\mathscr{M}_{X_0(f)}^{\hat{\mu}}$ (where $X_0(f)$ is the fibre of $X$ above~$0$) defined in terms of a motivic zeta function $Z_{f}$.
More precisely, denoting by $\mathscr{L}_n(X)$ the space of $n$-jets of $X$, we define for every $n\geq 1$ $$\mathscr{X}_n(f) :=\{\gamma\in\mathscr{L}_n(X)\ |\ f(\gamma) \equiv t^n\ (\mathrm{mod}\ t^{n+1})\},$$ the $X_0(f)$-variety structure of $\mathscr{X}_n(f)$ being induced by the truncation morphism $\mathscr{L}_n(X)\to X$, and the $\hat{\mu}$-action being the one induced by the $\mu_n$-action given by $a.\gamma(t) = \gamma(at)$. We then put $$Z_{f}(T) = \sum_{n\geq 1} [\mathscr{X}_{n}(f)\to X_0(f)]\mathbf{L}^{-nd}T^{n}\in\mathscr{M}_{X_0(f)}^{\hat{\mu}}[[T]].$$ One may write $\mathscr{X}_n(f/k)$, resp. $Z_{f/k}$ if one wants to keep track of the base field $k$. Let $X$ be a smooth variety over $k$, not necessarily connected, and $f:X\to \mathbf{A}^1_k$ a morphism. Let~$C$ be the set of connected components of $X$, which are smooth varieties of pure dimension. Then the above definition extends immediately to the pair $(X,f)$ by putting: $$Z_f(T) = \sum_{Y\in C}\1_{Y}Z_{f_{|Y}}(T) \in \mathscr{M}_{X_0(f)}^{\hat{\mu}}[[T]],$$ where $\1_{Y}$ denotes the element $[Y\cap X_0(f)\to X_0(f)]\in \mathscr{M}_{X_0(f)}^{\hat{\mu}}$ corresponding to the inclusion of the $Y$-component of $X_0(f)$ into $X_0(f)$, with trivial $\hat{\mu}$-action. If $f$ is constant, we have $Z_{f}(T) = 0.$ More generally, Denef and Loeser showed in \cite{DL02} that $Z_f$ is a rational function by giving a formula for it in terms of a log-resolution of $(X,X_0(f))$. For this, let~$X$ be a smooth variety over~$k$ of pure dimension~$d$, and $f:X\to \mathbf{A}^1_k$ a morphism such that $X_0(f)$ is nowhere dense in~$X$. Let $h:X'\to X$ be a log-resolution of the pair $(X,X_0(f))$. Let $(E_i)_{i\in I}$ be the family of irreducible components of $ h^{-1}(X_0(f))$, and for every $i\in I$, let $a_i$ be the multiplicity of $f\circ h$ along $E_i$.
For every $J\subset I$ we put $E_J = \cap_{j\in J}E_j$, $E_J^{\circ} = E_J\setminus \cup_{i\not\in J} E_i$ and $a_J = \gcd_{j\in J}(a_j)$. For every $J\subset I$, one defines an unramified Galois cover $\widetilde{E}_J^{\circ}\to E_J^{\circ}$ by glueing locally constructed covers obtained as follows: around every point of $E^{\circ}_J$, one can find an open subscheme $U$ of $X'$ on which $f\circ h = u \prod_{j\in J}x_j^{a_j}$, where $x_j$ is an equation for $E_j\cap U$ and $u$ is an invertible function on~$U$. One takes the étale cover of $E_J^{\circ}\cap U$ induced by the étale cover $U'\to U$ obtained by taking the $a_J$-th root of $u^{-1}$. There is a natural $\mu_{a_J}$-action on $\widetilde{E_J}^{\circ}$ which induces a $\hat{\mu}$-action in the obvious way. \index{Denef and Loeser's formula} \begin{theorem}[Denef-Loeser] \label{DLrat}Let $X$ be a smooth $k$-variety of pure dimension $d$, and $f:X\to \mathbf{A}^1_k$ a morphism such that $X_0(f)$ is nowhere dense in $X$. Let $h:X'\to X$ be a log-resolution of the pair $(X,X_0(f))$. Let $(E_i)_{i\in I}$ be the family of irreducible components of $h^{-1}(X_0(f))$. For every $i\in I$, let $a_i$ be the multiplicity of $f\circ h$ along $E_i$ and let $\nu_i-1$ be the multiplicity of the Jacobian ideal of $h$ along $E_i$. Then one has $$Z_f(T) = \sum_{\varnothing \neq J\subset I} (\mathbf{L}- 1)^{|J|-1} \left[\widetilde{E_J}^{\circ}\to X_0(f),\hat{\mu}\right]\prod_{j\in J} \frac{\mathbf{L}^{-\nu_j}T^{a_j}}{1-\mathbf{L}^{-\nu_j}T^{a_j}}\in\mathscr{M}_{X_0(f)}^{\hat{\mu}}[[T]],$$ where $\widetilde{E_J}^{\circ}$ is the Galois cover defined above. In particular $Z_f(T)\in \mathscr{M}_{X_0(f)}^{\hat{\mu}}[[T]]_{\mathrm{rat}}$.\end{theorem} \begin{cor}\label{Zfrational} Let $X$ be a smooth variety over $k$ and $f:X\to \mathbf{A}^1_k$ a morphism. Then $Z_f(T)$ is an element of $\mathscr{M}_{X_0(f)}^{\hat{\mu}}[[T]]_{\mathrm{rat}}.$ \end{cor} \begin{proof} Let $C$ be the set of connected components of $X$. 
By definition, we have $Z_f = \sum_{Y\in C}\1_{Y}Z_{f_{|Y}}$, and each $Z_{f_{|Y}}$ is 0 if $f_{|Y}$ is constant, or is an element of $\mathscr{M}_{Y_0(f_{|Y})}^{\hat{\mu}}[[T]]_{\mathrm{rat}}$ (which is naturally a $\mathscr{M}_X^{\hat{\mu}}$-subalgebra of $\mathscr{M}^{\hat{\mu}}_{X_0(f)}[[T]]_{\mathrm{rat}}$) by theorem \ref{DLrat}, whence the result. \end{proof} By corollary \ref{Zfrational}, it makes sense to define the \emph{motivic nearby fibre} \index{motivic nearby fibre} $\psi_{f}$ \index{psif@$\psi_f$} of $f$ at $0$ as $$\psi_{f} = -\lim_{T\to \infty}Z_{f}(T) \in\mathscr{M}_{X_0(f)}^{\hat{\mu}}$$ and the \emph{motivic vanishing cycles} \index{motivic vanishing cycles} $\phi_{f}$ of $f$ at $0$ as $$\phi_{f} := [X_0(f)\xrightarrow{\mathrm{id}} X_0(f)] - \psi_{f}\in\mathscr{M}_{X_0(f)}^{\hat{\mu}}.$$\index{phif@$\phi_f$} For the vanishing cycles, we use the definition in \cite{LS}, which differs from the one by Denef and Loeser by a sign (which will be important for the construction of our motivic measure, see remark 5.5 in~\cite{LS}). \index{motivic vanishing cycles!sign} Under the conditions and with the notations of theorem \ref{DLrat}, we have $$\psi_f = \sum_{\varnothing\neq J \subset I}(1-\mathbf{L})^{|J|-1}\left[\widetilde{E_J}^{\circ}\to X_0(f),\hat{\mu}\right]\in\mathscr{M}_{X_0(f)}^{\hat{\mu}}.$$\index{psif@$\psi_f$!formula} \begin{example}\label{vancyclespecialcases} Here are some important special cases: \begin{enumerate}[(I)] \item\label{constantproperty} Assume $f=a$ is constant. If $a\neq 0$ then we in fact have $X_0(f) = \varnothing$, so that $\mathscr{M}^{\hat{\mu}}_{X_0(f)}=0$ and $\psi_f = \phi_f = 0$. If $a=0$, then $Z_f = 0$, so $\psi_f = 0$, whereas $\phi_{f} = [X\xrightarrow{\mathrm{id}} X]\in\mathscr{M}_X^{\hat{\mu}}$ (the action being necessarily trivial). 
\item In the case when $f$ is non-constant and $X_0(f)$ is smooth and nowhere dense in $X$, the identity $X\xrightarrow{\mathrm{id}} X$ is a log-resolution, and we get $\psi_{f} = [X_0(f)\xrightarrow{\mathrm{id}}X_0(f)]$, and $\phi_{f} = 0$. \item \label{singproperty} As a consequence, when $f$ is not constantly equal to $0$, $\phi_{f}$ lives above $\mathrm{Sing}(f)$, that is, the closed subscheme of $X$ defined by the vanishing of the differential $\mathrm{d} f\in\Gamma(X,\Omega_{X/k}^1)$. Thus, $\phi_{f}$ may be seen in a canonical way as an element of $\mathscr{M}_{X_0(f)\ \cap\ \mathrm{Sing}(f)}^{\hat{\mu}}$. \end{enumerate} \end{example} \begin{remark}\label{vandimension} From the formula in theorem \ref{DLrat}, it is clear that, if $X_0(f)$ is nowhere dense in $X$ and if $a_X:X\to k$ is the structural morphism, then $\dim ((a_X)_!\phi_f) \leq \dim X -1$. Without any assumption on $X_0(f)$, we have the weaker inequality $\dim ((a_X)_!\phi_f) \leq \dim X$. \end{remark} \subsection{Relative motivic vanishing cycles} The previous definitions also make sense in the relative setting. Let $k$ be a field of characteristic zero, $S$ a $k$-variety and $X$ a variety over $S$ of relative dimension $d$, smooth over $S$, together with a morphism $f:X\to \mathbf{A}^1$. Then we may define $$\mathscr{X}_n(f/S) := \{\gamma \in \mathscr{L}_n(X/S)\ |\ f(\gamma) \equiv t^n\ (\mathrm{mod}\ t^{n+1})\},$$ with action of $\hat{\mu}$ given in the same manner, and $Z_{f/S}(T) = \sum_{n\geq 1} [\mathscr{X}_n(f/S)]\mathbf{L}^{-nd}T^{n}\in \mathscr{M}_{X_0(f)}^{\hat{\mu}}[[T]].$ Here, $\mathscr{L}_n(X/S)$ is the $n$-th jet scheme of $X$ relative to $S$.
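Before using the relative theory, it may help to record two classical one-variable computations over $k$, sketched here as a sanity check on the absolute definitions and sign conventions above (the results are standard, the details of the sketch are ours):

```latex
% (1) X = \mathbf{A}^1_k, f(x) = x. A jet \gamma(t) = a_0 + a_1 t + \dots + a_n t^n
% satisfies f(\gamma) \equiv t^n\ (\mathrm{mod}\ t^{n+1}) iff a_0 = \dots = a_{n-1} = 0
% and a_n = 1, so \mathscr{X}_n(f) is a single point, with trivial \mu_n-action
% since \gamma(at) = (at)^n = t^n. With d = 1 this gives
Z_f(T)=\sum_{n\geq 1}\mathbf{L}^{-n}T^{n}
      =\frac{\mathbf{L}^{-1}T}{1-\mathbf{L}^{-1}T},
\qquad \psi_f=[X_0(f)],\qquad \phi_f=0,
% as predicted by the smooth-fibre case above.
%
% (2) X = \mathbf{A}^1_k, f(x) = x^2. The identity of X is a log-resolution with
% E_1 = X_0(f), a_1 = 2, \nu_1 = 1 and \widetilde{E_1}^{\circ} = \mu_2 with its
% translation action, so Denef and Loeser's formula yields
Z_f(T)=[\mu_2]\,\frac{\mathbf{L}^{-1}T^{2}}{1-\mathbf{L}^{-1}T^{2}},
\qquad \psi_f=[\mu_2],\qquad \phi_f=1-[\mu_2],
% the motivic avatar of the two-point Milnor fibre of x^2 together with its
% order-two monodromy.
```

In the second computation, $[\mu_2]$ carries the nontrivial $\hat{\mu}$-action, so $\phi_f = 1-[\mu_2]$ is a nonzero class supported at the unique singular point of $f$, as expected from example \ref{vancyclespecialcases}.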
By the base-change properties for jet schemes (see \cite{CNS}, chapter 2, (2.1.4)), for every $s\in S$ we have $$\mathscr{L}_n(X/S)\times_S\kappa(s) = \mathscr{L}_n(X\times_S\kappa(s)/\kappa(s))$$ where $\kappa(s)$ is the residue field of $s$, so that the fibre above $s$ of $Z_{f/S}$ is exactly $Z_{f_s/\kappa(s)}(T)\in\mathscr{M}_{X_0(f)_s}^{\hat{\mu}}[[T]]$, where $f_s:X_s\to \mathbf{A}^1$ is the morphism induced by $f$. \begin{lemma} The series $Z_{f/S}(T)$ is an element of $\mathscr{M}_{X_0(f)}^{\hat{\mu}}[[T]]_{\mathrm{rat},S}$. \end{lemma} \begin{proof} The proof goes by the classical ``spreading-out'' method. Take a generic point $\eta$ of~$S$: then by corollary \ref{Zfrational}, the series $Z_{f_{\eta}}$ is an element of $\mathscr{M}_{X_0(f)_{\eta}}^{\hat{\mu}}[[T]]_{\mathrm{rat}}$, and its coefficients can be spread out over some open subset $U$ of $S$. One concludes by Noetherian induction. \end{proof} In particular, it makes sense to define the relative versions of the motivic nearby fibre \index{motivic nearby fibre!relative} and motivic vanishing cycles \index{motivic vanishing cycles!relative}, by $$\psi_{f/S} = -\lim_{T\to \infty} Z_{f/S}(T)\ \ \text{and}\ \ \phi_{f/S} = [X_0(f)\xrightarrow{\mathrm{id}} X_0(f)] - \psi_{f/S} \in\mathscr{M}_{X_0(f)}^{\hat{\mu}}.$$ \subsection{The motivic nearby fibre as a group morphism} In \cite{GLM}, Guibert, Loeser and Merle construct, for every smooth variety $Y$ together with a function $h:Y\to \mathbf{A}^1_k$ and every dense open subset $U$ of $Y$, an object $\mathscr{S}_{h,U}\in \mathscr{M}_{Y_0(h)}^{\hat{\mu}}$, such that $\mathscr{S}_{h,Y} = \psi_h$ is the motivic nearby fibre as defined above and such that these objects fit together into a morphism $\mathscr{S}_h$ as stated in the following theorem: \begin{theorem}[\cite{GLM}, Theorem 3.9] \label{glmtheorem} Let $Y$ be a $k$-variety and $h:Y\to \mathbf{A}^1_k$ a morphism.
There exists a unique $\mathscr{M}_k$-linear map $\mathscr{S}_h:\mathscr{M}_Y\to \mathscr{M}_{Y_0(h)}^{\hat{\mu}}$ such that for every proper morphism $p:Z\to Y$ with $Z$ smooth and for every dense open subset $U$ of $Z$, $\mathscr{S}_h([U\to Y]) = p_!(\mathscr{S}_{h\circ p,U}).$ \index{S@$\mathscr{S}_h$, $\mathscr{S}_{h,U}$} \end{theorem} When we want to keep track of the base field $k$, we are going to denote the map in the theorem by $\mathscr{S}_{h/k}$. We are going to need the following corollary, which adapts this to a relative setting: \begin{cor}\label{glmfamily} Let $S$ be a $k$-variety and $Y$ a variety over $S$, with $h:Y\to \mathbf{A}^1_k$ a morphism. There exists a unique $\mathscr{M}_S$-linear map $\mathscr{S}_{h/S}:\mathscr{M}_Y\to \mathscr{M}_{Y_0(h)}^{\hat{\mu}}$ such that for all $s\in S$ the diagram $$\xymatrix{ \mathscr{M}_Y \ar[r]^{\mathscr{S}_{h/S}}\ar[d] &\mathscr{M}_{Y_0(h)}^{\hat{\mu}}\ar[d]\\ \mathscr{M}_{Y_s}\ar[r]^{\mathscr{S}_{h/\kappa(s)}}& \mathscr{M}_{Y_0(h)_s}^{\hat{\mu}}}$$ commutes, where the vertical arrows are induced by the pullback of the inclusion $\{s \}\hookrightarrow S$. \index{S@$\mathscr{S}_{h/S}$} \end{cor} \begin{proof} Uniqueness is immediate by lemma \ref{function.equality}. Denote by $u$ the structural morphism $u:Y\to S$. According to lemma \ref{smoothpropergens}, it suffices to construct $\mathscr{S}_{h/S}([Z\xrightarrow{p} Y])$ for morphisms of $S$-schemes $p:Z\to Y$ such that $T = u\circ p(Z)$ is locally closed, $u\circ p:Z\to T$ is smooth, and $p\times_S\mathrm{id}_T:Z\times_S T\to Y\times_S T$ is proper.
For such a class, put $$\mathscr{S}_{h/S}([Z\xrightarrow{p} Y]) = p_{!}\left(\psi_{h\circ p/S}\right)\in \mathscr{M}_{Y_0(h)}^{\hat{\mu}}.$$ The group $\mathrm{KVar}_Y$ is obtained from the free abelian group $A$ on such generators by taking the quotient with respect to some relations, namely elements of the free abelian group on those generators which belong to the kernel of the canonical surjection $A\to \mathrm{KVar}_Y$. Let $R\in A$ be such a relation. For any element $\a\in A$, denote by $\a_s$ its image in $\mathscr{M}_{Y_s}$ through the composition $A \to \mathrm{KVar}_{Y}\to \mathrm{KVar}_{Y_s}\to \mathscr{M}_{Y_s}$. The definition of $\mathscr{S}_{h/S}$ on elements of $A$ and theorem \ref{glmtheorem} show that, for any element $\a\in A$, $(\mathscr{S}_{h/S}(\a))_s = \mathscr{S}_{h/\kappa(s)}(\a_s)$. In particular, since for every $s\in S$, $R_s = 0$, we have $(\mathscr{S}_{h/S}(R))_s = 0$, and therefore $\mathscr{S}_{h/S}(R) = 0$ by lemma \ref{function.equality}. Thus, $\mathscr{S}_{h/S}$ defines a group morphism $\mathrm{KVar}_Y\to \mathscr{M}_{Y_0(h)}^{\hat{\mu}}$. From the last few lines in the proof of theorem \ref{glmtheorem} in \cite{GLM} (theorem 3.9), it appears that $\mathscr{S}_{h/\kappa(s)}$ is first constructed on $\mathrm{KVar}_{Y_s}$, and then extended to $\mathscr{M}_{Y_s}$ by $\mathscr{M}_{\kappa(s)}$-linearity. By definition, $\mathscr{S}_{h/S}$ is compatible with $\mathscr{S}_{h/\kappa(s)}$ (seen as a morphism with source $\mathrm{KVar}_{Y_s}$) for all $s\in S$. For any $a\in \mathrm{KVar}_S$, and any $x\in \mathrm{KVar}_Y$, the relation $\mathscr{S}_{h/S}(ax) = a\mathscr{S}_{h/S}(x)$ is seen to be true by lemma \ref{function.equality}, because for every $s\in S$, $\mathscr{S}_{h/\kappa(s)}$ is $\mathscr{M}_{\kappa(s)}$-linear. 
Thus, $\mathscr{S}_{h/S}$ is $\mathrm{KVar}_S$-linear, and we may extend it by $\mathscr{M}_S$-linearity to a $\mathscr{M}_S$-linear morphism $\mathscr{M}_Y\to \mathscr{M}_{Y_0(h)}^{\hat{\mu}}$, which ensures the commutativity of the diagram in the statement of the corollary. \end{proof} \section{The motivic vanishing cycles measure} \label{sect.motvanmeasure} In \cite{LS}, Lunts and Schnürer defined, for an algebraically closed field $k$ of characteristic zero, a motivic measure $\Phi^{\mathrm{tot}}:(\widetilde{\mathscr{M}}_{\mathbf{A}^1_k},\star)\to (\widetilde{\mathscr{M}}_{\mathbf{A}_k^1}^{\hat{\mu}},\star)$, by the formula \begin{equation}\label{phiformula}\Phi^{\mathrm{tot}} = \sum_{a\in k}(i_a)_!(i_a^* - \mathscr{S}_{\mathrm{id}- a})\end{equation} where $i_a:\{a\}\to \mathbf{A}^1_k$ is the inclusion, and $\mathscr{S}_{\mathrm{id} - a}:\mathscr{M}_{\mathbf{A}^1}\to \mathscr{M}_{\{a\}}^{\hat{\mu}}$ is the morphism from theorem \ref{glmtheorem} applied with $Y =\mathbf{A}^1$ and $h= \mathrm{id} -a$ (this is the measure denoted by $\Phi$ in their paper). Formula (\ref{phiformula}) makes sense because for any class $[X\xrightarrow{f}\mathbf{A}^1]$ with $X$ smooth and $f$ proper, we have, denoting by $X_a$ the fibre of $f$ above $a$ and by $f_a:X_a\to\{a\}$ the constant map induced by $f$ on it, by theorem~\ref{glmtheorem} \begin{eqnarray*}(i_a)_!(i_a^* - \mathscr{S}_{\mathrm{id} - a})([X\xrightarrow{f}\mathbf{A}^1]) & = &(i_a)_!\left([X_a \to \{a\}] - (f_a)_!\psi_{f-a}\right) \\ &=& (i_a)_!(f_a)_!([X_a\to X_a] - \psi_{f-a}) \\ &=& f_{!}\phi_{f-a}\end{eqnarray*} which is zero whenever $a$ is not a critical value of $f$. Thus, the sum is always finite, because the Grothendieck ring of varieties is generated by such classes, and because, in characteristic zero, the set of critical values of a morphism $f:X\to \mathbf{A}^1_k$ with $X$ smooth is finite.
The image of a class $[X\xrightarrow{f}\mathbf{A}^1]$ with~$X$ smooth and $f$ proper is \begin{equation}\label{atotvanformula}\Phi^{\mathrm{tot}}([X\xrightarrow{f}\mathbf{A}^1]) = \sum_{a\in k} f_!\phi_{f-a} =:\phi^{\mathrm{tot}}_f,\end{equation} the sum of all vanishing cycles of $f$ at all $a\in k$. In other words, $\phi^{\mathrm{tot}}_f$ is the element of $\mathscr{M}^{\hat{\mu}}_{\mathbf{A}^1_k}$ corresponding to the motivic function sending $a\in\mathbf{A}^1_k$ to $f_!\phi_{f-a}$. In what follows, we are going to need to construct such a measure in families. Therefore, we will give a definition of $\phi^{\mathrm{tot}}_f$ in terms of vanishing cycles relative to the affine line above a base, which behaves well in such a context. For an algebraically closed field $k$ of characteristic zero, this will give an element of $\mathscr{M}_{\mathbf{A}^1_k}^{\hat{\mu}}$ supported above the critical values of~$f$, with fibre at every point $a\in k$ given by the vanishing cycles $f_!\phi_{f-a}$, so that we recover formula (\ref{atotvanformula}). \subsection{Total vanishing cycles}\label{sect.totvandef} Let $k$ be a field of characteristic zero, $R$ a variety over $k$, and $X$ a variety over~$R$, smooth over~$R$, and let $f:X\to \mathbf{A}_k^1$ be a morphism. We apply the construction of the previous paragraph to the variety $X\times \mathbf{A}_k^1$ over $S = \mathbf{A}_R^1$, together with the morphism $g:X\times \mathbf{A}^1_k\to \mathbf{A}^1_k$ given by $g = f\circ \mathrm{pr}_1 - \mathrm{pr}_2$. We have $(X\times \mathbf{A}^1)_0(g) = \Gamma_f$, where $\Gamma_f\subset X\times \mathbf{A}_k^1$ is the graph of the morphism~$f$, which we may identify with $X$ itself through the first projection.
\begin{notation} We denote by~$\widetilde{\phi}^{\ \mathrm{tot}}_{f/R}:= \phi_{g/\mathbf{A}^1_R}$ and~$\widetilde{\psi}^{\ \mathrm{tot}}_{f/R}:=\psi_{g/\mathbf{A}^1_R}$ the corresponding vanishing cycles and nearby fibre, which are naturally defined as elements of $\mathscr{M}_{X}^{\hat{\mu}}$, and related by the identity $$\widetilde{\phi}^{\ \mathrm{tot}}_{f/R} = [X\xrightarrow{\mathrm{id}}X] - \widetilde{\psi}^{\ \mathrm{tot}}_{f/R}.$$ \index{phiti@$\widetilde{\phi}^{\ \mathrm{tot}}_{f/R}$, $\widetilde{\phi}^{\ \mathrm{tot}}_f$}\index{psiti@$\widetilde{\psi}^{\ \mathrm{tot}}_{f/R}$, $\widetilde{\psi}^{\ \mathrm{tot}}_f$} Denote by $f_R$ the morphism $X\to \mathbf{A}^1_R = R\times \mathbf{A}^1_k$ given by $(u,f)$ where $u:X\to R$ is the structural map. We define~$\phi^{\mathrm{tot}}_{f/R}:=(f_R)_!\widetilde{\phi}^{\ \mathrm{tot}}_{f/R}$ and~$\psi^{\mathrm{tot}}_{f/R}:=(f_R)_!\widetilde{\psi}^{\ \mathrm{tot}}_{f/R}$ their images in $\mathscr{M}_{\mathbf{A}^1_R}^{\hat{\mu}}$, satisfying $$\phi^{\mathrm{tot}}_{f/R} = [X\xrightarrow{f_R} \mathbf{A}^1_R] - \psi^{\mathrm{tot}}_{f/R}.$$ In the case where $R = k$, we will simply write $\widetilde{\psi}^{\ \mathrm{tot}}_f, \widetilde{\phi}^{\ \mathrm{tot}}_f$ etc. \index{phito@$\phi^{\mathrm{tot}}_{f/R}$, $\phi^{\mathrm{tot}}_f$}\index{psito@$\psi^{\mathrm{tot}}_{f/R}$, $\psi^{\mathrm{tot}}_f$} \end{notation} These objects will be called \textit{total nearby fibre} \index{total nearby fibre} and \textit{total vanishing cycles}, \index{total vanishing cycles} because they take into account the nearby fibre and vanishing cycles of $f$ at all points of $\mathbf{A}^1$: indeed, we see that for any $t\in \mathbf{A}^1$, $$\left(\widetilde{\psi}^{\ \mathrm{tot}}_{f/R}\right)_t = \psi_{g_t/R_{\kappa(t)}} = \psi_{(f-t)/R_{\kappa(t)}}\in\mathscr{M}_{X_t}^{\hat{\mu}},$$ where $f-t$ is the function $X\times_k\kappa(t) \to \mathbf{A}^1_{\kappa(t)}$ given by $x\mapsto f(x) -t$ and $R_{\kappa(t)} = R\times_k\kappa(t)$. 
A similar remark holds for $\widetilde{\phi}^{\ \mathrm{tot}}_{f/R}, \psi^{\mathrm{tot}}_{f/R}$ and $\phi^{\mathrm{tot}}_{f/R}$. The properties of vanishing cycles we recalled above lead to similar properties for total vanishing cycles. \begin{remark}\label{constantvan} Let $X$ and $R$ be as above, and assume $f:X\to \mathbf{A}^1_k$ is constant. Then property (\ref{constantproperty}) of example \ref{vancyclespecialcases} together with lemma \ref{function.equality} implies that $\widetilde{\psi}^{\ \mathrm{tot}}_{f/R} = 0$. In particular, we have $\widetilde{\phi}^{\ \mathrm{tot}}_{f/R} = [X\xrightarrow{\mathrm{id}}X]$ and $\phi^{\mathrm{tot}}_{f/R} = [X\xrightarrow{f_R} \mathbf{A}^1_R]$ (with trivial $\hat{\mu}$-action). \end{remark} \begin{remark}\label{totvandimension} With the above notations, it follows from remark \ref{vandimension} applied above every point of~$\mathbf{A}^1_R$ that $$\dim_{\mathbf{A}^1_{R}}(\phi^{\mathrm{tot}}_{f/R}) \leq \dim_{\mathbf{A}^1_{R}} X.$$ \end{remark} We recall that we denote by $\mathrm{Sing}(f)$ the vanishing locus of the differential form $\mathrm{d} f$, and we define $\mathrm{Crit}(f)$ to be the scheme-theoretic image $f_R(\mathrm{Sing}(f))\subset \mathbf{A}_R^1$. \begin{prop}\label{smoothvanzero} Let $X$ be a smooth $R$-variety and $f:X\to \mathbf{A}^1_k$ a morphism. Then $\widetilde{\phi}^{\ \mathrm{tot}}_{f/R}$ (resp. $\phi^{\mathrm{tot}}_{f/R}$) is canonically an element of $\mathscr{M}_{\mathrm{Sing}(f)}^{\hat{\mu}}$ (resp. $\mathscr{M}_{\mathrm{Crit}(f)}^{\hat{\mu}}$). In particular, if $f$ is smooth, then $\widetilde{\phi}^{\ \mathrm{tot}}_{f/R} = 0$ and $\phi^{\mathrm{tot}}_{f/R} = 0$. \end{prop} \begin{proof} This is a consequence of property (\ref{singproperty}) above applied point by point, together with lemma~\ref{function.equality}.
\end{proof} \subsection{The Thom-Sebastiani theorem}\label{sect.TS} The classical theorem proved by Thom and Sebastiani in \cite{TS} is a multiplicativity result for the cohomology of Milnor fibres: for two germs $f:(\mathbf{C}^n,0)\to (\mathbf{C},0)$ and $g:(\mathbf{C}^m,0)\to (\mathbf{C},0)$ of holomorphic functions with an isolated critical point at 0, it expresses the reduced cohomology of the Milnor fibre of the germ $f\oplus g:(x,y)\mapsto f(x) + g(y)$ as a tensor product of the reduced cohomologies of the Milnor fibres of $f$ and $g$, together with compatibilities of monodromy actions. An analogue of this for motivic vanishing cycles was first proved by Denef and Loeser in the completed Grothendieck ring of Chow motives in \cite{DL99}. Then Looijenga, who introduced an appropriate convolution operation in \cite{Loo}, and Denef and Loeser in \cite{DL01} showed that essentially the same proof gives an equality in the Grothendieck ring of varieties with $\hat{\mu}$-action. Finally, in \cite{GLM}, Guibert, Loeser and Merle showed how one may recover the motivic Thom-Sebastiani theorem from a formula involving iterated vanishing cycles. In that paper, the theorem is stated using the generalised convolution operator~$\Psi$ which we defined in section \ref{sect.convolution}. We recall the motivic Thom-Sebastiani theorem, in the form in which it appears in \cite{GLM}: \index{Thom-Sebastiani!motivic}\index{motivic Thom-Sebastiani} \begin{theorem}\label{TS} Let $Y_1,Y_2$ be smooth varieties over $k$, with morphisms $g_1:Y_1\to \mathbf{A}^1_k$, $g_2:Y_2\to\mathbf{A}^1_k$. Denote by $i$ the natural inclusion $Y_0:= g_1^{-1}(0)\times g_2^{-1}(0)\to (g_1\oplus g_2)^{-1}(0)$. Then $$i^{*}\left(\phi_{g_1\oplus g_2}\right) = \Psi(\phi_{g_1}\boxtimes \phi_{g_2})$$ in $\mathscr{M}_{Y_0}^{\hat{\mu}}$.
\end{theorem} This may be globalised in the following manner: \begin{cor}[Thom-Sebastiani for total vanishing cycles] \label{TStotal} Let $X_1,X_2$ be smooth varieties, with morphisms $f_1:X_1\to \mathbf{A}^1_k$ and $f_2:X_2\to \mathbf{A}^1_k$. Then we have the equalities \begin{equation}\label{TSequation}\widetilde{\phi}^{\ \mathrm{tot}}_{f_1\oplus f_2}= \Psi(\widetilde{\phi}^{\ \mathrm{tot}}_{f_1}\boxtimes \widetilde{\phi}^{\ \mathrm{tot}}_{f_2})\end{equation} in $\mathscr{M}_{X_1\times X_2}^{\hat{\mu}}$, and \begin{equation}\label{TSequationA1}\phi^{\mathrm{tot}}_{f_1\oplus f_2}= \phi^{\mathrm{tot}}_{f_1}\star \phi^{\mathrm{tot}}_{f_2}\end{equation} in $\mathscr{M}_{\mathbf{A}^1_k}^{\hat{\mu}}$. \end{cor} \index{motivic Thom-Sebastiani!for $\widetilde{\phi}^{\ \mathrm{tot}}$ and $\phi^{\mathrm{tot}}$} \begin{proof} Let $g_i:X_i\times \mathbf{A}^1 \to \mathbf{A}^1$ be the functions defined by $g_i = f_i\circ \mathrm{pr}_1 - \mathrm{pr}_2$ for $i=1,2$, and $g:X_1\times X_2\times \mathbf{A}^1\to\mathbf{A}^1$ be the function defined by $g = f_1\circ \mathrm{pr}_1 + f_2\circ \mathrm{pr}_2 - \mathrm{pr}_3$. By definition, the left-hand side of the first equality is exactly $\phi_{g/\mathbf{A}^1}$, whereas the right-hand side is given by $\Psi(\phi_{g_1/\mathbf{A}^1}\boxtimes \phi_{g_2/\mathbf{A}^1}).$ We have the commutative diagram $$\xymatrix{X_1\times X_2 \ar[d]_{(f_1,f_2)}\ar[rd]^{f_1\oplus f_2} & \\ \mathbf{A}^1\times \mathbf{A}^1 \ar[r]^-{+} & \mathbf{A}^1}$$ Let $t\in \mathbf{A}^1\times \mathbf{A}^1$ with residue field denoted by $K$, and put $t_1 = \mathrm{pr}_1(t)$, $t_2 = \mathrm{pr}_2(t)$ and $s = t_1 + t_2$.
Let $i$ be the natural inclusion $$i:f_1^{-1}(t_1)\times_kf_2^{-1}(t_2)\to (f_1\oplus f_2)^{-1}(s)\times_{\kappa(s)}K.$$ Pulling back via the inclusion of $t$ inside $\mathbf{A}^1\times \mathbf{A}^1$, we see that the left-hand side $\phi_{g/\mathbf{A}^1}$ goes to $i^{*}(\phi_{(f_1 \oplus f_2 - s)/K})$, whereas the right-hand side $\Psi(\phi_{g_1/\mathbf{A}^1}\boxtimes \phi_{g_2/\mathbf{A}^1})$ goes to $$\Psi(\phi_{(f_1-t_1)/\kappa(t_1)}\boxtimes \phi_{(f_2-t_2)/\kappa(t_2)})$$ because $\Psi$ commutes with pullbacks. These elements are equal in $\mathscr{M}_{f_1^{-1}(t_1)\times f_2^{-1}(t_2)}^{\hat{\mu}}$ by theorem \ref{TS}. Since $t$ was arbitrary, we get the first equality in the statement of the corollary. The second equality is then obtained easily by applying $(f_1\oplus f_2)_!$ on both sides and making use of the commutative diagram above. \end{proof} \subsection{Total vanishing cycles as a motivic measure}\label{subsect.totvanmeas} We are going to prove the following theorem: \begin{theorem}\label{motmeasureA1} Let $k$ be a field of characteristic zero. There is a unique morphism $$\Phi^{\mathrm{tot}}: (\mathrm{KVar}_{\mathbf{A}^1_k},\star) \to (\widetilde{\mathscr{M}}_{\mathbf{A}^1_k}^{\hat{\mu}},\star)$$ of $\mathrm{KVar}_{k}$-algebras such that $\Phi^{\mathrm{tot}}([X\xrightarrow{f} \mathbf{A}^1_k]) = \phi^{\mathrm{tot}}_f$ for any smooth variety $X$ over $k$ and any proper morphism $f:X\to \mathbf{A}^1_k$. \end{theorem}\index{phitot@$\Phi^{\mathrm{tot}}$}\index{total vanishing cycles measure} The case where $k$ is algebraically closed was treated by Lunts and Schnürer in \cite{LS}, and our proof goes along the same lines as theirs, the main difference being that we replace their total vanishing cycles $(\phi_f)_{\mathbf{A}^1_k}$ by our total vanishing cycles $\phi^{\mathrm{tot}}_f$, which behave better in relative settings.
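Before entering the construction of the measure, it may help to keep in mind what the classical Thom-Sebastiani theorem says in the simplest non-trivial case; this is the topological counterpart of the motivic computation carried out in section \ref{sect.TSexample}. \begin{remark} For $f = g = (x\mapsto x^2)$ on $(\mathbf{C},0)$, the Milnor fibre $F_f$ consists of two points, so $\widetilde{H}^{0}(F_f)$ has rank one, with monodromy acting by $-1$. The Milnor fibre of $f\oplus g:(x,y)\mapsto x^2 + y^2$ is homotopy equivalent to a circle, and indeed $$\widetilde{H}^{1}(F_{f\oplus g}) \simeq \widetilde{H}^{0}(F_f)\otimes \widetilde{H}^{0}(F_g)$$ has rank one, with monodromy acting by $(-1)\cdot(-1) = 1$. \end{remark}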
We start by proving the following result, which is the analogue in our setting of Theorem 5.3 in \cite{LS}: \begin{prop} \label{motmeasureaux} There exists a unique morphism $\Phi':\mathscr{M}_{\mathbf{A}^1_k} \to \mathscr{M}_{\mathbf{A}^1_k}^{\hat{\mu}}$ of $\mathscr{M}_k$-modules such that $\Phi'([X\xrightarrow{f} \mathbf{A}^1_k]) = \phi^{\mathrm{tot}}_f$ for any smooth variety $X$ over $k$ and any proper morphism $f:X\to \mathbf{A}^1_k$. \end{prop} \begin{proof} Uniqueness follows from lemma \ref{smoothpropergensfield} so it remains to prove existence. Apply corollary~\ref{glmfamily} to $Y = \mathbf{A}^1_k\times \mathbf{A}^1_k$, seen as a variety over $S = \mathbf{A}^1_k$ via the second projection, together with $h = \mathrm{pr}_1 - \mathrm{pr}_2$: we get an $\mathscr{M}_{\mathbf{A}^1}$-linear map $$\mathscr{S}_{h/\mathbf{A}^1}:\mathscr{M}_{\mathbf{A}^1_k\times\mathbf{A}^1_k}\to \mathscr{M}_{ \Delta}^{\hat{\mu}},$$ where $\Delta\subset \mathbf{A}^1_k\times \mathbf{A}^1_k$ is the diagonal $h^{-1}(0)$, which is isomorphic to $\mathbf{A}^1_k$ via $\mathrm{pr}_2$. Composing this with the pull-back $\mathrm{pr}_2^*:\mathscr{M}_{\mathbf{A}^1_k}\to \mathscr{M}_{\mathbf{A}^1_k\times \mathbf{A}^1_k}$, sending a class $[X\xrightarrow{f}\mathbf{A}_k^1]$ to $[X\times \mathbf{A}_k^1\xrightarrow{(f\circ \mathrm{pr}_1,\mathrm{pr}_2)}\mathbf{A}^1_k\times\mathbf{A}^1_k]$ we get a $\mathscr{M}_k$-linear map $\mathscr{S}_{h/\mathbf{A}^1}\circ\mathrm{pr}_2^*:\mathscr{M}_{\mathbf{A}^1_k}\to \mathscr{M}_{\mathbf{A}^1_k}^{\hat{\mu}}$. We put, for any $\a\in\mathscr{M}_{\mathbf{A}^1_k}$, $$\Phi'(\a) = \a - \mathscr{S}_{h/\mathbf{A}^1}\circ \mathrm{pr}_2^*(\a).$$ Let $f:X\to \mathbf{A}^1_k$ be a proper morphism with $X$ smooth. We denote by $p$ the morphism $$\mathrm{pr}_2^*(f):X\times \mathbf{A}^1_k\xrightarrow{(f\circ \mathrm{pr}_1,\mathrm{pr}_2)}\mathbf{A}^1_k\times \mathbf{A}^1_k$$ which is again proper by base change. 
We claim that $\mathscr{S}_{h/\mathbf{A}^1_k}\circ\,\mathrm{pr}_2^*([X\xrightarrow{f}\mathbf{A}^1])=\psi^{\mathrm{tot}}_f$. Indeed, by theorem \ref{glmtheorem} and corollary \ref{glmfamily}, for every $t\in \mathbf{A}^1$, the fibre above $t$ of this element is given by \begin{eqnarray*}\left(\mathscr{S}_{h/\mathbf{A}^1_k}([X\times \mathbf{A}^1_k\xrightarrow{p}\mathbf{A}^1_k\times \mathbf{A}^1_k])\right)_t &=& \mathscr{S}_{h_t/\kappa(t)}([X\times_k\kappa(t)\xrightarrow{p_t} \mathbf{A}^1_{\kappa(t)}])\\ &=& (p_t)_!(\psi_{(h\circ p)_t/\kappa(t)})\in\mathscr{M}_{\kappa(t)}^{\hat{\mu}},\end{eqnarray*} because $p_t$ is proper and $X_t$ is smooth over $\kappa(t)$. On the other hand, we have $h\circ p = f\circ \mathrm{pr}_1 - \mathrm{pr}_2$, so that for every $t\in \mathbf{A}^1$, $\psi_{(h\circ p)_t/\kappa(t)} = (\psi_{h\circ p/\mathbf{A}^1_k})_t = (\widetilde{\psi}^{\ \mathrm{tot}}_f)_t$. Moreover, for every $t$, we have $p_t = f\times_k\kappa(t)$, so that $(p_t)_!(\psi_{(h\circ p)_t/\kappa(t)}) = \left(f_!(\widetilde{\psi}^{\ \mathrm{tot}}_f)\right)_t = \left(\psi^{\mathrm{tot}}_f\right)_t$, which proves the claim. Finally, we may conclude that $\Phi'([X\xrightarrow{f} \mathbf{A}^1_k]) = [X\xrightarrow{f} \mathbf{A}^1_k] - \psi^{\mathrm{tot}}_f = \phi^{\mathrm{tot}}_f$. \end{proof} As in remarks 5.4, 5.6 and 5.7 of \cite{LS}, the map $\Phi'$ has the following properties:\begin{lemma}\label{Phiprop} For any $k$-variety $Z$, we have \begin{enumerate}[(a)]\item \label{affineid} $\Phi'([Z\xrightarrow{0}\mathbf{A}^1_k]) = [Z\xrightarrow{0}\mathbf{A}^1_k]$ (with trivial action), so that in particular $\Phi'(\mathbf{L}_0) = \mathbf{L}_0$. \item\label{affinezero} $\Phi'([Z\times \mathbf{A}^1_k\xrightarrow{\mathrm{pr}_2} \mathbf{A}^1_k]) = 0$, so that in particular $\Phi'(\mathbf{L}_{\mathbf{A}^1_k}) = 0$. \item \label{smoothzero} $\Phi'([Z\xrightarrow{f}\mathbf{A}^1_k]) = 0$ whenever $f$ is a smooth and proper morphism.
\end{enumerate} \end{lemma} \begin{proof} The subgroup of $\mathrm{KVar}_{\mathbf{A}^1_k}$ generated by classes of the form $[Z\xrightarrow{0}\mathbf{A}^1_k]$ is the image of the morphism $(\iota_k)_!:\mathrm{KVar}_k\to \mathrm{KVar}_{\mathbf{A}^1_k}$ from section \ref{section.grothaffineline}. It is therefore generated by such classes where $Z$ is additionally assumed to be smooth and proper, and these are preserved by $\Phi'$ by proposition \ref{motmeasureaux} and remark \ref{constantvan}. This proves (\ref{affineid}). In the same way, the subgroup of $\mathrm{KVar}_{\mathbf{A}^1_k}$ generated by classes of the form $[Z\times \mathbf{A}^1_k\xrightarrow{\mathrm{pr}_2} \mathbf{A}^1_k]$ is the image of the morphism $\epsilon_k^*:\mathrm{KVar}_k\to \mathrm{KVar}_{\mathbf{A}^1_k}$ from section \ref{section.grothaffineline}, and we may again assume $Z$ to be smooth and proper to prove (\ref{affinezero}). The statement then follows from propositions \ref{motmeasureaux} and \ref{smoothvanzero}. As for property (\ref{smoothzero}), if $f:Z\to \mathbf{A}^1_k$ is smooth and proper, then $Z$ is smooth, so this follows directly from propositions \ref{motmeasureaux} and \ref{smoothvanzero}. \end{proof} We then consider the morphism of $\mathrm{KVar}_k$-modules $\Phi^{\mathrm{tot}}$, defined as the composition $$\mathrm{KVar}_{\mathbf{A}^1_k}\to \mathscr{M}_{\mathbf{A}^1_k} \xrightarrow{\Phi'} \mathscr{M}_{\mathbf{A}^1_k}^{\hat{\mu}} \to \widetilde{\mathscr{M}}_{\mathbf{A}^1_k}^{\hat{\mu}},$$ where the first arrow is the localisation morphism, and the last arrow is the isomorphism of $\mathscr{M}_k$-modules from lemma \ref{mkiso}. The proof of theorem \ref{motmeasureA1} is complete once we have the following: \begin{prop}\label{motmeasureringmorph} The morphism $\Phi^{\mathrm{tot}}$ is a morphism $$\Phi^{\mathrm{tot}}:(\mathrm{KVar}_{\mathbf{A}^1_k},\star)\to (\widetilde{\mathscr{M}}_{\mathbf{A}^1_k}^{\hat{\mu}},\star)$$ of $\mathrm{KVar}_k$-algebras. 
\end{prop}\index{phitot@$\Phi^{\mathrm{tot}}$}\index{total vanishing cycles measure} \begin{proof} Part (\ref{affineid}) of lemma \ref{Phiprop} shows that $\Phi^{\mathrm{tot}}$ maps the unit element to the unit element, and that it is compatible with the algebra structure maps. By lemma \ref{smoothpropergensfield}, we may restrict to checking multiplicativity for classes of projective morphisms $f:X\to \mathbf{A}^1_k$ with $X$ a connected quasi-projective $k$-variety which is smooth over $k$. Let $X,Y$ therefore be connected quasi-projective smooth $k$-varieties, together with projective morphisms $f:X\to \mathbf{A}^1_k$ and $g:Y\to \mathbf{A}^1_k$. Then we know by proposition \ref{motmeasureaux} and corollary \ref{TStotal} that $$\Phi^{\mathrm{tot}}([X\xrightarrow{f}\mathbf{A}^1_k])\star \Phi^{\mathrm{tot}}([Y\xrightarrow{g}\mathbf{A}^1_k]) = (\phi^{\mathrm{tot}}_f)\star(\phi^{\mathrm{tot}}_g) = \phi^{\mathrm{tot}}_{f\oplus g}.$$ It remains to show that $\phi^{\mathrm{tot}}_{f\oplus g} = \Phi^{\mathrm{tot}}([X\times Y \xrightarrow{f\oplus g}\mathbf{A}^1_k])$, which does not follow directly from proposition \ref{motmeasureaux} because $f\oplus g$ is not proper in general. As in the proof of theorem 5.9 in \cite{LS}, we make use of proposition \ref{compactification} below (and the notation therein), which gives us a compactification $h:Z\to \mathbf{A}^1_k$ of $f\oplus g$ for which we may write $$[X\times Y\xrightarrow{f\oplus g} \mathbf{A}^1_k] = [Z\xrightarrow{h}\mathbf{A}^1_k]+\sum_{I}(-1)^{|I|} [D_I\xrightarrow{h_I}\mathbf{A}^1_k].$$ Since all $h_{I}:D_{I}\to \mathbf{A}^1_k$ are projective and smooth, their image under $\Phi^{\mathrm{tot}}$, given by their total vanishing cycles, is zero by proposition \ref{smoothvanzero}.
On the other hand, $\mathrm{Sing}(h) = \mathrm{Sing}(f\oplus g)$, and therefore $$\Phi^{\mathrm{tot}}([X\times Y\xrightarrow{f\oplus g} \mathbf{A}^1_k]) = \Phi^{\mathrm{tot}}([Z\xrightarrow{h}\mathbf{A}^1_k]) = \phi^{\mathrm{tot}}_h = \phi^{\mathrm{tot}}_{f\oplus g}.$$ \end{proof} \subsection{The motivic vanishing cycles measure over a base} We are now going to prove that the above motivic measure may be defined in families. For this purpose, we are going to denote for any field $k$ of characteristic zero by $\Phi^{\mathrm{tot}}_k:\mathrm{KVar}_{\mathbf{A}^1_k}\to \mathscr{M}_{\mathbf{A}^1_k}^{\hat{\mu}}$ the motivic measure from theorem \ref{motmeasureA1} relative to the field $k$. \begin{theorem} \label{motmeasurebase} Let $k$ be a field of characteristic zero and $S$ a variety over $k$. There is a unique morphism of $\mathrm{KVar}_S$-algebras $\Phi^{\mathrm{tot}}_S: (\mathrm{KVar}_{\mathbf{A}^1_S},\star)\to (\widetilde{\mathscr{M}}^{\hat{\mu}}_{\mathbf{A}^1_S},\star)$ such that for every $s\in S$ the diagram $$\xymatrix{\mathrm{KVar}_{\mathbf{A}^1_S}\ar[d] \ar[r]^-{\Phi^{\mathrm{tot}}_S} & \widetilde{\mathscr{M}}_{\mathbf{A}^1_S}^{\hat{\mu}}\ar[d] \\ \mathrm{KVar}_{\mathbf{A}^1_{\kappa(s)}}\ar[r]^-{\Phi^{\mathrm{tot}}_{\kappa(s)}} & \widetilde{\mathscr{M}}_{\mathbf{A}^1_{\kappa(s)}}^{\hat{\mu}} }$$ commutes. \end{theorem}\index{phitotS@$\Phi^{\mathrm{tot}}_S$}\index{total vanishing cycles measure!relative} \begin{proof} Uniqueness is immediate by lemma \ref{function.equality}. By lemma \ref{smoothpropergens}, denoting by $u:\mathbf{A}^1_S\to S$ the structural morphism, it suffices to construct~$\Phi^{\mathrm{tot}}_S$ on classes $[X\xrightarrow{p} \mathbf{A}^1_S]$ such that $T = u\circ p(X)$ is locally closed in $S$, $X$ is smooth over $T$ and $p\times_S\mathrm{id}_T:X\times_ST\to \mathbf{A}^1_T$ is proper. 
For such a class, denoting by $f$ the composition $X\xrightarrow{p}\mathbf{A}^1_S\to \mathbf{A}^1_k$, we put $$\Phi_S^{\mathrm{tot}}([X\xrightarrow{p} \mathbf{A}^1_S]) = \phi^{\mathrm{tot}}_{f/S}\in\mathscr{M}_{\mathbf{A}^1_S}^{\hat{\mu}}.$$ Then for every $s\in S$, we have $$\Phi_S^{\mathrm{tot}}([X\xrightarrow{p} \mathbf{A}^1_S])_s = \phi^{\mathrm{tot}}_{f_s/\kappa(s)}.$$ For any $s\in T$, $f_s:X_s\to \mathbf{A}^1_{\kappa(s)}$ is proper and the fibre $X_s$ is smooth over $\kappa(s)$ by the assumption on $p$. For $s\in S\setminus T$, the fibre $X_s$ is empty and therefore $\Phi_S^{\mathrm{tot}}([X\xrightarrow{p} \mathbf{A}^1_S])_s = 0$ in this case. Thus, by the characterisation of $\Phi^{\mathrm{tot}}_{\kappa(s)}$ in theorem \ref{motmeasureA1}, we may conclude, as in the proof of corollary~\ref{glmfamily}, that~$\Phi_S^{\mathrm{tot}}$ is well-defined as a group morphism $\mathrm{KVar}_{\mathbf{A}^1_S}\to \mathscr{M}_{\mathbf{A}^1_S}^{\hat{\mu}}$ and that the diagram in the statement is commutative. As in the proof of corollary~\ref{glmfamily}, the fact that $\Phi_S^{\mathrm{tot}}$ is a morphism of $\mathrm{KVar}_{S}$-algebras follows from the fact that $\Phi^{\mathrm{tot}}_{\kappa(s)}$ is a morphism of $\mathrm{KVar}_{\kappa(s)}$-algebras for every $s\in S$, using lemma~\ref{function.equality}. \end{proof} \begin{lemma}\label{Phipropbase} Let $S$ be a $k$-variety, and let $X$ be a variety over $S$, with structural morphism $u:X\to S$. Denote by $u_0$ the morphism $X\xrightarrow{0\times_k u}\mathbf{A}^1_k\times_kS\simeq \mathbf{A}^1_S.$ Then \begin{enumerate}[(a)]\item\label{affineidbase} $\Phi^{\mathrm{tot}}_S([X\xrightarrow{u_0} \mathbf{A}^1_S]) = [X\xrightarrow{u_0} \mathbf{A}^1_S]$ (with trivial action).
\item\label{affinezerobase} $\Phi^{\mathrm{tot}}_S([X\times_S \mathbf{A}^1_S\xrightarrow{\mathrm{pr}_2}\mathbf{A}^1_S]) = 0$. \end{enumerate} \end{lemma} \begin{proof} This follows from lemma \ref{Phiprop} and theorem \ref{motmeasurebase} by lemma \ref{function.equality}. \end{proof} \subsection{A motivic measure on Grothendieck rings of varieties with exponentials} We come to the final form of the motivic vanishing cycles measure, which will be the one we are going to use. For the moment, we have constructed, for every $k$-variety $S$, a motivic measure $\Phi_S^{\mathrm{tot}}$ defined on the ring $(\mathrm{KVar}_{\mathbf{A}^1_S},\star)$ and with values in the ring $(\widetilde{\mathscr{M}}_{\mathbf{A}^1_S}^{\hat{\mu}},\star)$. For our purposes, it will be convenient to view $\Phi_S^{\mathrm{tot}}$ as a motivic measure on the Grothendieck ring with exponentials over $S$, and to compose it with the pushforward map $(\epsilon_S)_!$ where $\epsilon_S:\mathbf{A}^1_S\to S$ is the structural morphism. \begin{theorem}[Motivic vanishing cycles measure]\label{motmeasuremain} Let $k$ be a field of characteristic zero. \begin{enumerate}\item There is a unique morphism $$\Phi_k:\mathscr{E}xp\mathscr{M}_k \to (\mathscr{M}_k^{\hat{\mu}},\ast)$$ of $\mathscr{M}_k$-algebras, called the motivic vanishing cycles measure, such that, for any proper morphism $f:X\to \mathbf{A}^1_k$, with source a smooth $k$-variety $X$, one has $$\Phi_k([X\xrightarrow{f}\mathbf{A}^1_k]) = \epsilon_!(\phi^{\mathrm{tot}}_f)$$ where $\epsilon:\mathbf{A}^1_k\to k$ is the structural morphism. \item Let $S$ be a $k$-variety. There is a unique morphism $\Phi_S:\mathscr{E}xp\mathscr{M}_S\to (\mathscr{M}_S^{\hat{\mu}},\ast)$ of $\mathscr{M}_S$-algebras such that, for any $s\in S$, the diagram $$\xymatrix{ \mathscr{E}xp\mathscr{M}_S \ar[r]^-{\Phi_S}\ar[d] & \mathscr{M}_S^{\hat{\mu}}\ar[d] \\ \mathscr{E}xp\mathscr{M}_{\kappa(s)}\ar[r]^-{\Phi_{\kappa(s)}}& \mathscr{M}_{\kappa(s)}^{\hat{\mu}}}$$ commutes.
\end{enumerate} Moreover, for any $k$-variety $S$, the restriction of $\Phi_S$ to $\mathscr{M}_S$ coincides with the natural inclusion $\mathscr{M}_S\to \mathscr{M}_S^{\hat{\mu}}$. \end{theorem}\index{phi@$\Phi_k, \Phi_S$}\index{motivic vanishing cycles measure} \begin{proof} Property (\ref{affinezerobase}) in lemma \ref{Phipropbase} says that $\Phi_S^{\mathrm{tot}}$ sends the additional relation $$[X\times_S \mathbf{A}^1_S\xrightarrow{\mathrm{pr}_2}\mathbf{A}^1_S]$$ defining $\mathrm{KExpVar}_S$ to zero. Using remark \ref{expquotientmorphism}, we see that the morphism $\Phi^{\mathrm{tot}}_S$ induces a morphism of $\mathrm{KVar}_S$-algebras $\mathrm{KExpVar}_S\to (\widetilde{\mathscr{M}}_{\mathbf{A}^1_S}^{\hat{\mu}},\star)$. By property (\ref{affineidbase}) of lemma \ref{Phipropbase}, the class $\mathbf{L}_S$ goes to the invertible element $\mathbf{L}_0\in\widetilde{\mathscr{M}}_{\mathbf{A}^1_S}^{\hat{\mu}}$, so that $\Phi^{\mathrm{tot}}_S$ extends by $\mathscr{M}_S$-linearity to a morphism of $\mathscr{M}_S$-algebras $\mathscr{E}xp\mathscr{M}_S\to (\widetilde{\mathscr{M}}_{\mathbf{A}^1_S}^{\hat{\mu}},\star)$. By the localised version of lemma \ref{epsilonmorphism}, the pushforward $(\epsilon_S)_!$ is a morphism of $\mathscr{M}_S$-algebras $(\widetilde{\mathscr{M}}_{\mathbf{A}^1_S}^{\hat{\mu}},\star)\to (\mathscr{M}_S^{\hat{\mu}},\ast)$, so putting $\Phi_S:=(\epsilon_S)_!\circ \Phi_S^{\mathrm{tot}}$ we get a morphism of $\mathscr{M}_S$-algebras $\Phi_S:\mathscr{E}xp\mathscr{M}_S\to (\mathscr{M}_S^{\hat{\mu}},\ast)$. Part 1 of the statement then follows immediately from theorem \ref{motmeasureA1}, whereas part 2 comes from theorem \ref{motmeasurebase}. Finally, the statement about the restriction of $\Phi_S$ to~$\mathscr{M}_S$ is seen to be true by property (\ref{affineidbase}) of lemma~\ref{Phipropbase}.
\end{proof} The following proposition is the motivic analogue, with respect to the dimensional topology, of the triangular inequality $$\left\vert\sum_{x\in X(\mathbf{F}_q)}\psi(f(x))\right\vert \leq |X(\mathbf{F}_q)|$$ for $X$ a variety over $\mathbf{F}_q$, $f:X\to \mathbf{A}^1_{\mathbf{F}_q}$ a morphism and $\psi:\mathbf{F}_q\to\mathbf{C}^*$ a non-trivial character. \begin{prop}[Triangular inequality]\label{triangulardim} Let $S$ be a $k$-variety, $X$ a variety over $S$ and $f:X\to \mathbf{A}^1_k$ a morphism. Then $$\dim_S(\Phi_S([X,f]))\leq \dim_S X.$$ \end{prop}\index{phi@$\Phi_S$!triangular inequality}\index{motivic vanishing cycles measure!triangular inequality}\index{triangular inequality!for $\dim_S$} \begin{proof} It suffices to prove this above every point $s\in S$, so we may assume $S=\mathrm{Spec}\, k$. Then, up to adding classes of strictly smaller dimension, which can be dealt with by induction, we may assume that $X$ is smooth and $f$ is proper. In this case $\Phi_k([X,f]) = \epsilon_!\phi^{\mathrm{tot}}_f$ and the result follows from remark~\ref{totvandimension}. \end{proof} \subsection{A compactification lemma} The following proposition, which we used in the proof of proposition \ref{motmeasureringmorph}, was stated and proved in \cite{LSmat} in the case where the field $k$ is assumed to be algebraically closed of characteristic zero. We show here that it remains true even if the field is no longer assumed to be algebraically closed. We are going to deduce this from the proof of proposition 6.1 in \cite{LSmat}, by proving that the construction of $Z$ commutes with base change to any extension of the field $k$, and that the statements of the proposition remain true over $k$ if they are true over an extension of $k$. For this, most of the arguments will go by faithfully flat descent (see EGA IV, 2.7.1), using that for any extension $k'$ of $k$, the structural morphism $\mathrm{Spec}\, k'\to \mathrm{Spec}\, k$ is faithfully flat.
For this, we recall that, if $k'$ is an extension of $k$ (which is still assumed to be of characteristic zero), then (see e.g.\ tag 0C3H in the Stacks Project): \begin{enumerate}[(a)] \item\label{absolutesing} The formation of the singular locus $X^{\mathrm{sing}}$ of a variety $X$ over $k$ commutes with base change to~$k'$. \item For a morphism $f:X\to \mathbf{A}^1_k$, the formation of the singular locus of $f$ commutes with base change to~$k'$, i.e.\ $\mathrm{Sing}(f_{k'}) = (\mathrm{Sing} f)\times_k k'$. \end{enumerate} \begin{prop}\label{compactification} Let $k$ be a field of characteristic zero. Let $X$ and $Y$ be smooth varieties and let $f:X\to \mathbf{A}^1_k$ and $g:Y\to \mathbf{A}^1_k$ be projective morphisms. Then there exists a smooth quasi-projective $k$-variety $Z$ with an open embedding $X\times Y \hookrightarrow Z$ and a projective morphism $h:Z\to \mathbf{A}^1_k$ such that the following conditions are satisfied. \begin{enumerate}[(i)] \item The restriction of $h$ to $X\times Y$ is $f\oplus g$. \item All critical points of $h$ are contained in $X\times Y$, i.e. $\mathrm{Sing}(f\oplus g) = \mathrm{Sing}(h)$. \item The boundary $Z\setminus (X\times Y)$ is the support of a simple normal crossing divisor with pairwise distinct smooth irreducible components $D_1,\ldots,D_s$. \item For every $p$-tuple $I = (i_1,\ldots,i_p)$ of indices (with $p\geq 1$) the morphism $$h_{I}:D_{I}:=D_{i_1}\cap \ldots \cap D_{i_p} \to \mathbf{A}^1_k$$ induced by $h$ is projective and smooth, so that in particular all $D_{I}$ are smooth quasi-projective $k$-varieties. \end{enumerate} \end{prop} \begin{proof} All references in what follows are to \cite{LSmat} unless otherwise stated. \textit{Condition (K).} Lunts and Schnürer define a condition on a pair $(U,I)$ where $U$ is a scheme and $I\subset \mathcal{O}_U$ an ideal sheaf, called condition (K), under which one may associate to $(U,I)$ a monomialisation $c_I(U)\to U$.
Since we will change fields, we view (K) as a condition on the triple $(U,I,k)$ where $k$ is the given base field: \begin{enumerate}[(K)] \item $U$ is a reduced scheme of finite type over $k$, $I$ is not zero on any irreducible component of $U$, and the closed subscheme $V(I)$ defined by $I$ contains the singular locus $U^{\mathrm{sing}}$ of $U$. \end{enumerate} \textbf{Claim:} Let $k'$ be an extension of $k$ and let $U$ be a scheme over $k$. If $(U_{k'},I\otimes_kk',k')$ satisfies condition (K), then so does $(U,I,k)$. Indeed, the fact that $U$ is of finite type and reduced follows from the faithful flatness of $\mathrm{Spec}\, k'\to \mathrm{Spec}\, k$. If $I$ were zero on some irreducible component of $U$, then $I\otimes_kk'$ would be zero on any irreducible component of $U_{k'}$ lying over this component. Finally, by statement (\ref{absolutesing}) above, the condition that $V(I)$ contains $U^{\mathrm{sing}}$ descends as well. \textit{Monomialisation.} The monomialisation procedure, recalled in remark 6.3 and used throughout the proof, may be performed over any field of characteristic zero and commutes with extension of scalars, as stated in \cite{Kollar}, 3.34.2, 3.35 and 3.36. \textit{Compactification of one morphism.} Proposition 6.4 is a first compactification result, which produces, from a smooth quasi-projective variety $X$ and a morphism $f:X\to \mathbf{A}^1$, a smooth projective variety $\bar{X}$ with an open embedding $X\hookrightarrow \bar{X}$ and a morphism $\bar{f}:\bar{X}\to \mathbf{P}^1_k$ extending $f$ and such that the fibre at infinity is a strict normal crossing divisor. This may be done over a field that is not necessarily algebraically closed, and the construction commutes with change of fields, since it consists in applying monomialisation and a root extraction morphism.
\textit{Shifting from addition to projection.} The construction of $Z$ and $h$ uses a morphism $\sigma:\mathbf{P}^1\times \mathbf{A}^1\to \mathbf{P}^1\times\mathbf{P}^1$ compactifying the automorphism $\mathbf{A}^1\times \mathbf{A}^1\xrightarrow{\sim} \mathbf{A}^1\times \mathbf{A}^1$ sending $(x,y)$ to $(x,y-x)$, and therefore transforming the addition map $\mathbf{A}^1\times \mathbf{A}^1\to \mathbf{A}^1$ into the second projection. The image of $\sigma$ is $(\mathbf{A}^1\times \mathbf{A}^1) \cup \{(\infty,\infty)\}$, and $\sigma^{-1}(\infty,\infty) = \{\infty\}\times \mathbf{A}^1$ is denoted by $E$. Starting from compactifications $\overline{X}\xrightarrow{\overline{f}} \mathbf{P}^1$ and $\overline{Y}\xrightarrow{\overline{g}}\mathbf{P}^1$ of the given morphisms, obtained using proposition 6.4, one considers the pullback diagram $$\xymatrix{T \ar[r]^{\widehat{\sigma}}\ar[d]_{\theta} &\overline{X}\times \overline{Y} \ar[d]^{\overline{f}\times \overline{g}}\\ \mathbf{P}^{1}\times \mathbf{A}^1 \ar[r]^{\sigma} &\mathbf{P}^{1}\times \mathbf{P}^1}$$ Lunts and Schnürer's proof then goes on to prove that $(T,\theta^{-1}(I_E)\mathcal{O}_T)$ satisfies condition (K), so that one may form the monomialisation $\gamma:Z\to T$, which provides a morphism $$h:Z\xrightarrow{\gamma} T \xrightarrow{\theta} \mathbf{P}^{1}\times \mathbf{A}^1\xrightarrow{\mathrm{pr}_2} \mathbf{A}^1,$$ which they check satisfies all the required properties. By what we said on monomialisation above, the construction of $h$ and $Z$ commutes with change of fields. Thus, by Lunts and Schnürer's result, $h_{\bar{k}}$ and $Z_{\bar{k}}$ do the job over $\bar{k}$, and it suffices to show that the properties they satisfy remain true over $k$. First of all, since $(T,\theta^{-1}(I_E)\mathcal{O}_T)$ satisfies condition (K) over $\bar{k}$, it also satisfies it over $k$, so monomialisation may be performed. The fact that $h$ is projective is clear since it is a composition of projective morphisms.
Then points (i) and (iii) in the statement of the proposition are satisfied automatically. As for point (ii), Lunts and Schnürer's lemma says that the base change to $\bar{k}$ of the open immersion $\mathrm{Sing}(f\oplus g)\to \mathrm{Sing}(h)$ is in fact an isomorphism. By faithfully flat descent, it is already an isomorphism over $k$. Finally, it remains to check point (iv), or more precisely, the smoothness part of it, the projectivity being immediate. Again, this comes from faithfully flat descent. \end{proof} \begin{remark} Reading Lunts and Schnürer's proof carefully, one notices that in fact it can be made to work without the assumption that $k$ is algebraically closed. Indeed, they construct a diagram $$\xymatrix{ & &\widehat{T}\ar[ld]\ar[rd] \ar@/^2pc/[rrrdd]^{\beta} \ar@/_2pc/[lldd]_{\alpha} & & & \\ & \widehat{S}\ar[ld]\ar[rd] & & S'\ar[ld]\ar[rd] & \\ T & & S & & S''\ar[r] & S'''}$$ where all arrows are étale and where $S'''$ is of the form $\mathbf{A}^1_k\times L$ where $$L = \mathrm{Spec}\, k[x_1,\ldots,x_s,y_1,\ldots,y_t]/(x^{\mu}-y^{\nu})$$ with $x^{\mu} := x_1^{\mu_1}\ldots x_s^{\mu_s}$ and $y^{\nu} := y_1^{\nu_1}\ldots y_t^{\nu_t}$, where the $\mu_i$ and $\nu_i$ are positive integers. This way one essentially reduces to verifying most properties directly for $L$ or for $S'''$. The only point in the proof where one seems to use algebraic closedness is when checking that $(L, (x^{\mu}))$ satisfies condition~(K), but as we remarked above, since it is true geometrically it is already true over $k$. \end{remark} \begin{remark}[Compatibility of nearby cycles morphism with sums of proper morphisms]\label{vancyclesnonproper} Let $a\in k$ and let $\mathscr{S}_{\mathrm{id} -a}:\mathscr{M}_{\mathbf{A}^1_k}\to \mathscr{M}_{k}^{\hat{\mu}}$ be the morphism from theorem \ref{glmtheorem} for $Y=\mathbf{A}^1$ and $g:\mathbf{A}^1\to \mathbf{A}^1$ given by $x\mapsto x-a$.
Let $X$ and $Y$ be smooth $k$-varieties and $f:X\to \mathbf{A}^1$, $g:Y\to \mathbf{A}^1$ projective morphisms. Proposition \ref{compactification} shows that, though $f\oplus g:X\times Y\to \mathbf{A}^1$ is not necessarily proper, we nevertheless have $$\mathscr{S}_{\mathrm{id} -a}([X\times Y \xrightarrow{f\oplus g} \mathbf{A}^1]) = (f\oplus g)_!\ \psi_{f\oplus g -a}.$$ Indeed, using notation from proposition \ref{compactification}, we may write $$\mathscr{S}_{\mathrm{id}-a}([X\times Y \xrightarrow{f\oplus g} \mathbf{A}^1]) = \mathscr{S}_{\mathrm{id}-a}([Z\xrightarrow{h}\mathbf{A}^1_k]) + \sum_{I\neq \varnothing}(-1)^{|I|}\mathscr{S}_{\mathrm{id} -a}([D_I\xrightarrow{h_I}\mathbf{A}^1_k]).$$ Since all $h_I$ are projective and smooth, we have $$\mathscr{S}_{\mathrm{id}-a}[D_I\xrightarrow{h_I}\mathbf{A}^1_k] = (h_I)_!\psi_{h_I - a} =0,$$ so that in fact, using that $h$ is projective, we have $$\mathscr{S}_{\mathrm{id}-a}([X\times Y \xrightarrow{f\oplus g} \mathbf{A}^1]) = \mathscr{S}_{\mathrm{id}-a}([Z\xrightarrow{h}\mathbf{A}^1_k]) = h_!\psi_{h-a}.$$ On the other hand, since $\mathrm{Sing}(h)=\mathrm{Sing}(f\oplus g)$, and $\psi_{h-a}$ is supported on $\mathrm{Sing}(h)$, we have $\psi_{h-a} = \psi_{f\oplus g -a}$ (see corollary 3.6 in \cite{LS}), whence the result. \end{remark} \section{The Thom-Sebastiani theorem: an explicit example}\label{sect.TSexample}\index{Thom-Sebastiani!example} To illustrate the contents of section \ref{sect.TS}, we compute in this section both sides of equality (\ref{TSequation}) for $X_1 = X_2 = \mathbf{A}^1_{\mathbf{C}}$ over $k=\mathbf{C}$, with $f_1 = f_2 = (x\mapsto x^2)$. \subsection{Computation of left-hand side}\label{LHScomputation} Here we are dealing with the variety $X = \mathbf{A}^2$ together with the morphism $f:(x,y)\mapsto x^2 + y^2$. The only critical value is zero, so $\widetilde{\phi}^{\ \mathrm{tot}}_f$ is just $\phi_f$, seen as an element of $\mathscr{M}_X^{\hat{\mu}}$. 
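To see that $0$ is indeed the only critical value, note that $\mathrm{d} f = 2x\,\mathrm{d} x + 2y\,\mathrm{d} y$ vanishes only at the origin, so that $\mathrm{Sing}(f) = \{(0,0)\}$ and $\mathrm{Crit}(f) = \{0\}$. As a heuristic for the computation below, observe that, forgetting supports and $\hat{\mu}$-actions, the fibre of $f$ above any $t\neq 0$ is the conic $x^2 + y^2 = t$, which the substitution $(u,v) = (x+iy,x-iy)$ identifies with $\{uv = t\}\simeq \mathbf{G}_m$, of class $\mathbf{L}-1$, whereas the zero fibre is two affine lines glued at a point, of class $2\mathbf{L}-1$; one therefore expects $\phi_f$ to have class $$(2\mathbf{L}-1)-(\mathbf{L}-1) = \mathbf{L}.$$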
Since in $\mathbf{C}$ we have the decomposition $f(x,y) = (x+iy)(x-iy)$, we may apply theorem \ref{DLrat} with $h = \mathrm{id}$, $E_1 = \{x + iy=0\}$, $E_2 = \{x-iy = 0\},$ so that $a_1 = a_2 = 1$, $a_{12} = 1$ (in particular, $\hat{\mu}$-actions are trivial) and \begin{eqnarray*}\phi_f &= &[X_0(f)\xrightarrow{\mathrm{id}} X_0(f)] - [E_1^{\circ}\to X_0(f)] -[E_2^{\circ}\to X_0(f)] + (\mathbf{L}-1)[E_1\cap E_2\to X_0(f)]\\ & = & \mathbf{L}[\{(0,0)\}\to X_0(f)]\in \mathscr{M}_{X_0(f)}^{\hat{\mu}} \end{eqnarray*} since $[X_0(f)\xrightarrow{\mathrm{id}} X_0(f)] = [E_1^{\circ}\to X_0(f)] +[E_2^{\circ}\to X_0(f)] + [E_1\cap E_2\to X_0(f)]$. Thus, $$\widetilde{\phi}^{\ \mathrm{tot}}_f = \mathbf{L}[\{(0,0)\}\to \mathbf{A}^2]\in \mathscr{M}_{\mathbf{A}^2_{\mathbf{C}}}^{\hat{\mu}}.$$ \subsection{Computation of the right-hand side}\label{RHScomputation} Here we are dealing with $Y = \mathbf{A}^1$ with $g:\mathbf{A}^1\to \mathbf{A}^1$, $x\mapsto x^2$. Again, the only critical value is zero. We may again apply theorem \ref{DLrat} with $h = \mathrm{id}$, the only irreducible component of $Y_0(g)$ being $E = \{0\}$, with multiplicity $a =2$, so that we consider a double cover $\tilde{E}\to E$ with $\mu_2$-action. In other words, $\tilde{E}$ is just a pair of two points, exchanged by the action of the generator of $\mu_2$. 
By the formula, we have $$\phi_g = [Y_0(g)\xrightarrow{\mathrm{id}}Y_0(g)] - [\tilde{E}\to Y_0(g)]\in \mathscr{M}_{Y_0(g)}^{\hat{\mu}},$$ so that $\widetilde{\phi}^{\ \mathrm{tot}}_g = [\{0\}\to\mathbf{A}^1] - [\tilde{E}\to \mathbf{A}^1] \in \mathscr{M}_{\mathbf{A}^1}^{\hat{\mu}}.$ \begin{remark} By example 2.7 in \cite{LS}, whenever we have two $k$-varieties $S_1,S_2$ and $p_i:Z_i\to S_i$ is a variety over $S_i$ with $\hat{\mu}$-action, and the action of $\hat{\mu}$ on $Z_2$ is trivial, then $$\Psi(Z_1\times Z_2\xrightarrow{p_1\times p_2}S_1\times S_2) = [Z_1\times Z_2\to S_1\times S_2].$$ \end{remark} This shows that $$\Psi(\widetilde{\phi}^{\ \mathrm{tot}}_g\boxtimes \widetilde{\phi}^{\ \mathrm{tot}}_g) = [\{(0,0)\}\to \mathbf{A}^2] - 2[\{0\}\times \tilde{E}\to \mathbf{A}^2] + \Psi([\tilde{E}\times \tilde{E}\to \mathbf{A}^2]).$$ Note that all the $\mathbf{A}^2$-varieties here are supported above the point $(0,0)$ of $\mathbf{A}^2$. We therefore omit the morphisms to $\mathbf{A}^2$ from now on; they are all implicitly the constant map to $(0,0)$. We now compute $\Psi([\tilde{E}\times \tilde{E}])$. By definition, this is $$\Psi([\tilde{E}\times \tilde{E}]) = [(\tilde{E}\times \tilde{E})\times^{\mu_2\times \mu_2}F_0^2] - [(\tilde{E}\times \tilde{E})\times^{\mu_2\times \mu_2}F_1^2].$$ Denote the two points of $\tilde{E}$ by $e_{-1}$ and $e_1$. Then the product $\tilde{E}\times \tilde{E}\times F_i^2$ is simply given by four copies of $F_i^2$, corresponding to each pair $(e_{\epsilon},e_{\eta})$ for $\epsilon,\eta\in\{-1,1\}$.
Moreover, these copies are all identified via the $\mu_2\times \mu_2$-action: indeed, any element $(\epsilon,\eta)\in\mu_2\times \mu_2$ induces an isomorphism $$\begin{array}{ccc}\{(e_1,e_1)\}\times F_i^2&\to& \{(e_{\epsilon},e_{\eta})\}\times F_i^2\\ (e_1,e_1,x,y)&\mapsto& (e_{\epsilon},e_{\eta},\epsilon x,\eta y) \end{array}$$ so that $(\tilde{E}\times \tilde{E})\times^{\mu_2\times \mu_2}F_i^2$ is in fact isomorphic to $F_i^2$, endowed with the diagonal $\mu_2$-action. \begin{lemma} \begin{enumerate} \item The morphism $$\begin{array}{ccc} F_0^2&\to& \mathbf{G}_m\times \{-1,1\}\\ (x,y)& \mapsto & (x,\frac{y}{ix})\end{array} $$ is an isomorphism (over $\mathbf{C}$), identifying $F_0^2$ with the disjoint union of two copies of $\mathbf{G}_m$. It is equivariant if one endows each copy of $\mathbf{G}_m$ with the obvious $\mu_2$-action by translation. \item The morphism $$\begin{array}{ccc} F_1^2&\to& \mathbf{G}_m\setminus\{-1,1,i,-i\}\\ (x,y)& \mapsto & x + iy\end{array} $$ is an isomorphism (over $\mathbf{C}$), identifying $F_1^2$ with $\mathbf{G}_m\setminus \{-1,1,i,-i\}$. It is equivariant if one endows $\mathbf{G}_m\setminus\{-1,1,i,-i\}$ with the action induced by the obvious $\mu_2$-action by translation on~$\mathbf{G}_m$. \end{enumerate}\end{lemma} \begin{proof}\begin{enumerate}\item A point $(x,y)$ of $\mathbf{G}_m^2$ is an element of $F_0^2$ if and only if either $x =iy$ or $x = -iy$: an inverse to the map in the statement is therefore given by $(x,\epsilon)\mapsto (x,i\epsilon x)$. The statement about actions follows immediately. \item Rewriting the equation of $F_1^2$ in the form $(x + iy)(x-iy) = 1$, we see that $x + iy$ is always non-zero, and that if $x + iy$ is equal to some $a\in \mathbf{G}_m$, then $x-iy$ is equal to $a^{-1}$.
This remark allows us to construct an inverse $$a\mapsto \left(\frac{a + a^{-1}}{2},\frac{a-a^{-1}}{2i}\right), $$ which is well-defined and with image contained in $F_1^2$ whenever $a$ is a complex number outside the set $\{0,1,-1,i,-i\}$. Again, the statement on actions is immediate. \end{enumerate} \end{proof} Combining the results in the lemma, we have $$\Psi([\tilde{E}\times \tilde{E}]) = 2[\mathbf{G}_m,\mu_2] - [\mathbf{G}_m\setminus\{-1,1,i,-i\},\mu_2]$$ Denoting by $[\tilde{E},\mu_2]$ the class of a union of two points exchanged by the generator of $\mu_2$, as above, we have $$[\mathbf{G}_m\setminus\{-1,1,i,-i\},\mu_2] = [\mathbf{G}_m,\mu_2] - 2[\tilde{E},\mu_2],$$ whence $$\Psi([\tilde{E}\times \tilde{E}]) = [\mathbf{G}_m,\mu_2] + 2[\tilde{E},\mu_2].$$ Thus, finally, we have, observing that $[\{0\}\times\tilde{E},\mu_2] = [\tilde{E},\mu_2]$, $$\Psi(\widetilde{\phi}^{\ \mathrm{tot}}_g\boxtimes\widetilde{\phi}^{\ \mathrm{tot}}_g) = 1 + [\mathbf{G}_m,\mu_2] = [\mathbf{A}^1,\mu_2],$$ that is, $\mathbf{A}^1$ with the generator of $\mu_2$ acting through $x\mapsto -x$. By relation (\ref{actionrelation}) in the Grothendieck ring $\mathscr{M}_{\mathbf{A}^2_{\mathbf{C}}}^{\hat{\mu}}$, this is equal to the left-hand side $\mathbf{L}$ (i.e. $\mathbf{A}^1$ with the trivial action) computed in section \ref{LHScomputation}, whence the result. \begin{remark} Our computation shows that in $\mathscr{M}^{\hat{\mu}}_{\mathbf{A}^2_{\mathbf{C}}}$ we have the relation $$\Psi(\widetilde{\phi}^{\ \mathrm{tot}}_{x^2}\boxtimes\widetilde{\phi}^{\ \mathrm{tot}}_{x^2}) = \widetilde{\phi}^{\ \mathrm{tot}}_{x^2 + y^2}$$ which in our calculation boils down to the equality $$(1-[\tilde{E},\mu_2])\ast (1-[\tilde{E},\mu_2]) = \mathbf{L}$$ in $\mathscr{M}_{\mathbf{C}}^{\hat{\mu}}$. Thus, the class $1-[\tilde{E},\mu_2]$ may be seen as a ``square root'' of $\mathbf{L}$ for the product $\ast$. 
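Concretely, writing $\ast$ for the twisted product as above, the verification reads $$(1-[\tilde{E},\mu_2])\ast (1-[\tilde{E},\mu_2]) = 1 - 2[\tilde{E},\mu_2] + \Psi([\tilde{E}\times \tilde{E}]) = 1 + [\mathbf{G}_m,\mu_2] = [\mathbf{A}^1,\mu_2] = \mathbf{L},$$ using the computation $\Psi([\tilde{E}\times \tilde{E}]) = [\mathbf{G}_m,\mu_2] + 2[\tilde{E},\mu_2]$ above together with relation (\ref{actionrelation}).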
For obvious dimensional reasons, such a square root does not exist in the ring $\mathscr{M}_{\mathbf{C}}$. \end{remark} \section{Mixed Hodge modules\index{mixed Hodge module}}\label{sect.mhmbasics} We will freely use the language of mixed Hodge modules introduced by Saito; in this section we fix notation and recall the facts we need. References are the original works \cite{Saito88} and \cite{Saito90}, the summary \cite{Saito89} by Saito himself, the axiomatic introduction by Peters and Steenbrink in \cite{PS}, section 14.1.1, and Beilinson, Bernstein and Deligne's paper \cite{BBD} for definitions and properties of perverse sheaves. If $S$ is a variety over~$\mathbf{C}$, we denote by $\mathrm{MHM}_S$ the abelian category of mixed Hodge modules on~$S$, by $D(\mathrm{MHM}_S)$ its derived category and by $D^{b}(\mathrm{MHM}_S)$ its bounded derived category. For any integer $a$, we also denote by $D^{\leq a}(\mathrm{MHM}_S)$ the full subcategory of $D(\mathrm{MHM}_S)$ of complexes whose cohomology is concentrated in degrees $\leq a$. We use square brackets to denote the shifting of complexes: for any complex $M$ of mixed Hodge modules and all $n,i\in \mathbf{Z}$, $(M[n])^i =M^{i+n}$. \subsection{The $\mathrm{rat}$ functor} For any variety $S$ over $\mathbf{C}$, there is an exact and faithful functor $$\mathrm{rat}_S:\mathrm{MHM}_S\to \mathrm{Perv}_S$$ to the abelian category of perverse sheaves on $S$, extending to a functor $$\mathrm{rat}_S:D^{b}(\mathrm{MHM}_S)\to D^{b}_{cs}(S)$$ where the category on the right is the full subcategory of cohomologically constructible complexes inside the bounded derived category of sheaves of $\mathbf{Q}$-vector spaces on $S$.
Saito showed (Theorem 0.1 in \cite{Saito90} or Theorem 1.3 in \cite{Saito89}) that the usual operations $\boxtimes$, $\otimes$, as well as $f_*, f^*, f_!, f^!$ for any morphism $f$ of varieties over $\mathbf{C}$ lift to the corresponding derived categories of mixed Hodge modules in a way compatible with the functor $\mathrm{rat}$. In particular, there are adjunctions $(f^*,f_*)$ and $(f_!,f^!)$. There is a morphism $f_! \to f_*$ which is an isomorphism when $f$ is proper. \subsection{Twists} In the case where $S$ is a point, the category $\mathrm{MHM}_{\mathrm{pt}}$ is exactly the category of polarisable mixed Hodge structures (see \cite{Saito89}, Theorem 1.4), and the functor $\mathrm{rat}$ becomes the forgetful functor associating to a mixed Hodge structure its underlying $\mathbf{Q}$-vector space. For any integer $d\in \mathbf{Z}$, we denote by~$\mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}}(d)\in \mathrm{MHM}_{\mathrm{pt}}$ the Hodge structure of type $(-d,-d)$ with underlying vector space~$\mathbf{Q}$. For $d=0$, it will be denoted simply by $\mathbf{Q}^{\mathrm{Hdg}}_{\mathrm{pt}}$. For any complex variety $S$, tensoring with~$\mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}}(1)$ defines a Tate twist on $D^{b}(\mathrm{MHM}_S)$. When $f$ is smooth of relative dimension $d$, then \begin{equation}\label{uppershriek} f^{!} \simeq f^*[2d](d).\end{equation} \subsection{Weight filtration} Each $M\in\mathrm{MHM}_S$ has a finite increasing weight filtration $W_{\bullet}M$, the graded parts of which will be denoted $\mathrm{Gr}^{W}_{\bullet}$. For a bounded complex of mixed Hodge modules $M^{\bullet}$, we say~$M^{\bullet}$ has weight $\leq n$ if $\mathrm{Gr}^W_i\mathcal{H}^j(M^{\bullet}) = 0$ for all integers $i$ and $j$ such that $i>j+n$. 
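Unwinding the definition on a simple case: if $M$ is a pure Hodge module of weight $w$, viewed as a complex concentrated in degree $j_0$, then $\mathrm{Gr}^W_i\mathcal{H}^j(M) = 0$ unless $(i,j) = (w,j_0)$, so that $M$ has weight $\leq n$ if and only if $w\leq j_0 + n$.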
For varieties $X$ and $Y$ over $\mathbf{C}$ we say that a functor $F:D^{b}(\mathrm{MHM}_X)\to D^{b}(\mathrm{MHM}_Y)$ does not increase weights if for every $n\in\mathbf{Z}$ and every $M^{\bullet}\in D^{b}(\mathrm{MHM}_X)$ with weight $\leq n$, the complex $F(M^{\bullet})$ is also of weight $\leq n$. In particular, for any morphism of complex varieties~$f$, the functors $f_!$ and $f^{*}$ do not increase weights (see \cite{Saito90}, (4.5.2)). \subsection{Cohomological functors and cohomological amplitude} The usual truncation functor $\tau_{\leq}$ on $D^b(\mathrm{MHM}_S)$ corresponds to the perverse truncation $^p\tau_{\leq}$ on $D^{b}_{cs}(S)$, so that $\mathrm{rat}_S\circ \mathcal{H}^{\bullet} =\,^p\mathcal{H}^{\bullet}\circ \mathrm{rat}_S$, where $\mathcal{H}^{\bullet}$ is the usual cohomology on $D^{b}(\mathrm{MHM}_S)$ and $^p\mathcal{H}^{\bullet}$ is the perverse cohomology on $D^{b}_{cs}(S)$. \begin{definition} Let $T:D_1\to D_2$ be an exact functor between triangulated categories endowed with $t$-structures, and let $a$ be an integer. The functor $T$ is said to be of \textit{cohomological amplitude} $\leq a$ if $T(D_1^{\leq 0})\subset D_2^{\leq a}$. Denoting by $^t\mathcal{H}^{\bullet}$ the corresponding cohomological functors, this means that for all $i>a$ and all $X\in D_1^{\leq 0}$, $^t\mathcal{H}^i(T(X)) = 0$. \end{definition} \begin{lemma}\label{cohamplitude} For a morphism $f:Y\to X$ of varieties over $\mathbf{C}$ with fibres of dimension $\leq d$, the functors $$f_{!}:D^{b}(\mathrm{MHM}_Y)\to D^b(\mathrm{MHM}_X)\ \ \ \text{and}\ \ \ f^{*}:D^{b}(\mathrm{MHM}_X)\to D^{b}(\mathrm{MHM}_Y)$$ are of cohomological amplitude $\leq d$. \end{lemma} \begin{proof} According to \cite{BBD} 4.2.4, this is true for the corresponding functors $$f_!:D^{b}_{cs}(Y)\to D^{b}_{cs}(X)\ \ \ \text{and}\ \ \ f^{*}:D^b_{cs}(X)\to D^{b}_{cs}(Y)$$ with the perverse $t$-structure. Let $M^{\bullet}\in D^{\leq 0}(\mathrm{MHM}_Y)$. 
Then by compatibility of $\mathrm{rat}$ with pushforwards and $t$-structures as explained above, we have, for every $i\in \mathbf{Z}$ $$\mathrm{rat}_X(\mathcal{H}^{i}f_{!}(M^{\bullet})) = \ ^p\mathcal{H}^i(f_!(\mathrm{rat}_Y(M^{\bullet})))$$ The right-hand side is zero for $i>d$ by \cite{BBD} 4.2.4. The functor $\mathrm{rat}$ being faithful, we therefore have $\mathcal{H}^{i}f_{!}(M^{\bullet}) = 0$ for $i>d$ by lemma \ref{faithfulfunctorzero} below. The same argument holds for~$f^{*}$. \end{proof} \begin{lemma}\label{faithfulfunctorzero} Let $F:A\to B$ be a faithful functor between additive categories. If $F(X) = 0$ for some object $X$ of $A$, then $X=0$. \end{lemma} \begin{proof} The assumption $F(X) = 0$ implies that $F(\mathrm{id}_X) = 0 = F(0_X)$ where $0_X$ is the constant zero map on $X$. By faithfulness we have $\mathrm{id}_X = 0_X$, which means that $X=0$. \end{proof} \subsection{The trace morphism \index{trace morphism}for mixed Hodge modules} \label{section.trace} Though the theory of trace morphisms for Hodge modules is probably classical, we include this paragraph for lack of an appropriate reference. We only treat the case of smooth morphisms because it is sufficient for our purposes. \begin{notation}\label{qshdg}For any complex variety $S$, we denote by $a_S:S\to \mathrm{Spec}\, \mathbf{C}$ its structural morphism and by $\mathbf{Q}_S^{\mathrm{Hdg}}$ the complex of mixed Hodge modules $a_S^{*}\mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}}$. \end{notation} \begin{remark} In the case where $S$ is smooth and connected, the complex of mixed Hodge modules $\mathbf{Q}_S^{\mathrm{Hdg}}$ is concentrated in degree $\dim S$, and $\mathcal{H}^{\dim S}\mathbf{Q}_S^{\mathrm{Hdg}}$ is pure of weight $\dim S$, given by the pure Hodge module associated to the constant (rank one) variation of Hodge structures of weight 0 on~$S$. 
When $S$ is not smooth, by lemma \ref{cohamplitude} the complex $\mathbf{Q}_S^{\mathrm{Hdg}}$ is still an object of $D^{\leq \dim S}(\mathrm{MHM}_S)$, of weight $\leq 0$ because the functor $a_S^{*}$ does not increase weights, so that $\mathrm{Gr}^W_i\mathcal{H}^{\dim S}(\mathbf{Q}_S^{\mathrm{Hdg}}) = 0$ for $i>\dim S$. On the other hand, by \cite{Saito90}, (4.5.9), $\mathrm{Gr}^{W}_{\dim S}\mathcal{H}^{\dim S}(\mathbf{Q}_S^{\mathrm{Hdg}})$ is non-zero and simple, given by the intermediate extension of the constant weight 0 variation of Hodge structures on an open subset of $S$. \end{remark} \begin{remark}[Duality and the trace morphism for Hodge modules] \label{tracemap} Let $X$ be a smooth variety of dimension $d$ over $\mathbf{C}$. Then according to (\ref{uppershriek}), we have $a_X^{!}\mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}} \simeq \mathbf{Q}_X^{\mathrm{Hdg}}(d)[2d]$, and by \cite{Saito90} (4.4.2), there is a morphism of complexes of mixed Hodge structures $$(a_X)_!\mathbf{Q}_X^{\mathrm{Hdg}}\to \mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}}(-d)[-2d]$$ lifting the corresponding morphism in the derived category of sheaves of $\mathbf{Q}$-vector spaces on $X$. The cohomology of the complex $(a_X)_!\mathbf{Q}_X^{\mathrm{Hdg}}$ is exactly the cohomology with compact supports of $X$, and the morphism $(a_X)_!\mathbf{Q}_X^{\mathrm{Hdg}}\to \mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}}(-d)[-2d]$ induces the trace morphism $H^{2d}_c(X,\mathbf{Q})\to \mathbf{Q}(-d)$ on the top cohomology. The lift described above is compatible with Deligne's Hodge theory (see e.g. lemma 14.8, corollary 14.9 and remark 14.10 in \cite{PS}), and turns the trace morphism into a morphism of Hodge structures $H^{2d}_c(X,\mathbf{Q})\to \mathbf{Q}^{\mathrm{Hdg}}_{\mathrm{pt}}(-d)$ (which is an isomorphism when $X$ is irreducible). 
\end{remark} The following proposition generalises this over a base: \begin{prop}\label{traceprop} Let $S$ be a variety over $\mathbf{C}$ of dimension $n$ and $p:X\to S$ a smooth morphism with fibres of constant dimension $d\geq 0$. Then there exists a morphism of complexes of Hodge modules $$f:p_!\mathbf{Q}_X^{\mathrm{Hdg}} \to \mathbf{Q}_S^{\mathrm{Hdg}}(-d)[-2d]$$ inducing a morphism of mixed Hodge modules $$\mathcal{H}^{2d+n}(p_!\mathbf{Q}_X^{\mathrm{Hdg}})\to \mathcal{H}^{2d+n}(\mathbf{Q}_S^{\mathrm{Hdg}}(-d)[-2d])$$ which above every closed point $s\in S$ corresponds to the classical trace morphism $$H_c^{2d}(X_s,\mathbf{Q})\to \mathbf{Q}^{\mathrm{Hdg}}_{\mathrm{pt}}(-d)$$ of mixed Hodge structures. \end{prop} \begin{proof} The counit $p_!p^{!}\to \mathrm{id}$ associated to the adjunction $(p_!,p^{!})$ induces a morphism of complexes of Hodge modules $$p_!p^{!}\mathbf{Q}_S^{\mathrm{Hdg}} \to \mathbf{Q}_S^{\mathrm{Hdg}}.$$ Since $p$ is smooth, we have $p^{!} = p^*(d)[2d]$, and this morphism induces a morphism $$f:p_!p^{*}\mathbf{Q}_{S}^{\mathrm{Hdg}}\to \mathbf{Q}_S^{\mathrm{Hdg}}(-d)[-2d].$$ Note that by lemma \ref{cohamplitude}, both these complexes are objects of $D^{\leq 2d + n}(\mathrm{MHM}_S)$. Moreover, by proper base change (\cite{Saito90} 4.4.3), above every closed point $s\in S$, this induces a morphism of complexes of Hodge structures $f_s: (p_s)_!(p_s)^{*}\mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}}\to \mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}}(-d)[-2d]$, where $p_s:X_s\to \mathrm{Spec}\,\mathbf{C}$ is the pullback of $p$ by the inclusion $i_s:s\to S$, that is, $p_s = a_{X_s}$ using notation \ref{qshdg}. This morphism induces the trace map on the top cohomology as explained in remark \ref{tracemap}. \end{proof} \begin{remark}\label{tracerem} Let $p:X\to S$ be as in the proposition, and assume moreover that $X$ is irreducible.
Then the morphism of mixed Hodge modules $$\mathcal{H}^{2d+n}(p_!\mathbf{Q}_X^{\mathrm{Hdg}})\to \mathcal{H}^{2d+n}(\mathbf{Q}_S^{\mathrm{Hdg}}(-d)[-2d])$$ given by the proposition induces an isomorphism $$\mathrm{Gr}^{W}_{2d+n}\mathcal{H}^{2d+n}(p_!\mathbf{Q}_X^{\mathrm{Hdg}})\simeq \mathrm{Gr}^{W}_{2d+n}\mathcal{H}^{2d+n}(\mathbf{Q}_S^{\mathrm{Hdg}}(-d)[-2d]).$$ Indeed, denoting by $K_1$ (resp. $K_2$) the kernel (resp. cokernel) of the above morphism, we have an exact sequence of mixed Hodge modules $$0\to K_1\to \mathcal{H}^{2d+n}(p_!\mathbf{Q}_X^{\mathrm{Hdg}})\to \mathcal{H}^{2d+n}(\mathbf{Q}_S^{\mathrm{Hdg}}(-d)[-2d])\to K_2 \to 0,$$ which above $s\in S$ becomes \begin{equation}\label{ksexact}0\to K_{1,s}\to H^{2d}_c(X_s,\mathbf{Q})\to \mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}}(-d)\to K_{2,s}\to 0.\end{equation} The trace morphism being always surjective, we may conclude that $K_2=0$. Moreover, since $X$ is irreducible, there is an open dense subset $U$ of $S$ such that for all $s\in U$, $X_s$ is irreducible, so that the trace morphism for $X_s$ is an isomorphism. In particular, $K_1$ is supported inside some closed subset of $S$ contained in the complement of $U$. Denote by $S_1$ the support of the pure Hodge module $\mathrm{Gr}^{W}_{2d+n}K_1$. If $S_1$ is non-empty, there is an open dense subset of $S_1$ over which $\mathrm{Gr}^{W}_{2d+n}K_1$ corresponds to a variation of pure Hodge structures, which must be of weight $2d$ because of the exact sequence (\ref{ksexact}). This would mean that the corresponding pure Hodge module is of weight $\dim S_1 + 2d < \dim S + 2d$, a contradiction. \end{remark} \subsection{Mixed Hodge modules with monodromy\index{mixed Hodge module!with monodromy}}\label{sect.mhmmonodromy} A reference for this is Saito's unpublished paper about the Thom-Sebastiani theorem for mixed Hodge modules \cite{SaitoTS}. One can also consult the summary in section 2.9 of \cite{BBDJS}.
We denote by $\mathrm{MHM}_X^{\mathrm{mon}}$ the category of mixed Hodge modules $M$ on a complex variety~$X$ endowed with commuting actions of a finite-order operator $T_s:M\to M$ and a locally nilpotent operator $N:M\to M(-1)$. The category $\mathrm{MHM}_X$ can be identified with a full subcategory of $\mathrm{MHM}_X^{\mathrm{mon}}$ via the functor $$\mathrm{MHM}_X\to \mathrm{MHM}_{X}^{\mathrm{mon}}$$\index{Ts@$T_s$, monodromy operator}\index{mhmmon@$\mathrm{MHM}_X^{\mathrm{mon}}$} sending a mixed Hodge module $M$ to itself with $T_s = \mathrm{id}$ and $N = 0$. The Tate twist and the cohomological pullback and pushforward operations still exist in this setting (see \cite{BBDJS}, second paragraph of section 2.9). However, we need a more appropriate, twisted version of the external tensor product. Saito gives two equivalent ways of defining it, one of them being the following. Let $M_i =(D_i,F,L_i,W; T_s,N)$, $i=1,2$ be two mixed Hodge modules with monodromy, with underlying $\mathcal{D}_X$-modules $D_i$, underlying perverse complexes $L_i$ with isomorphisms $\mathrm{DR}(D_i)\simeq L_i\otimes \mathbf{C}$ given by the Riemann-Hilbert correspondence, Hodge filtrations~$F$, weight filtrations $W$ and monodromy actions $(T_s,N)$. For each rational number $\alpha\in(-1,0]$, let $D^{\alpha}_i = \ker(T_s - \exp(-2i\pi\alpha))\subset D_i$.
We define $$M_1{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_2 = (D,F,L,W;T_s,N)$$ by $D = D_1\boxtimes D_2$, $L = L_1\boxtimes L_2$, $T_s = T_s\boxtimes T_s$, $N = N\boxtimes\mathrm{id} + \mathrm{id} \boxtimes N$, the Hodge filtration being given by \index{twisted exterior product} \index{boxT@${\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em}$} $$F_p(D_1^{\alpha}\boxtimes D_2^{\beta}) = \left\{\begin{array}{cl}\displaystyle{ \sum_{i+ j = p+1} F_i D_1^{\alpha}\boxtimes F_jD_2^{\beta} }& \text{if}\ \alpha + \beta \leq -1,\\ \displaystyle{\sum_{i+ j = p} F_iD_1^{\alpha}\boxtimes F_jD_2^{\beta}} & \text{if}\ \alpha + \beta > -1.\end{array}\right.$$ and the weight filtration by \begin{equation}\label{twistedweight} W_k(D_1^{\alpha}\boxtimes D_2^{\beta}) = \left\{\begin{array}{cl}\displaystyle{ \sum_{i+ j = k} W_i D_1^{\alpha}\boxtimes W_jD_2^{\beta} }& \text{if}\ \alpha\beta =0,\\ \displaystyle{\sum_{i+ j = k-1} W_iD_1^{\alpha}\boxtimes W_jD_2^{\beta}} & \text{if}\ \alpha\beta \neq 0,\alpha + \beta \neq -1,\\ \displaystyle{\sum_{i+ j = k-2} W_iD_1^{\alpha}\boxtimes W_jD_2^{\beta}} & \text{if}\ \alpha + \beta = -1\end{array}\right.\end{equation} for the underlying $\mathcal{D}$-modules. The weight filtration on the complex $(L_1\otimes\mathbf{C})\boxtimes(L_2\otimes\mathbf{C})$ is defined in the same manner, and gives a weight filtration on $L_1\boxtimes L_2$ via the action of the Galois group of $\mathbf{Q}$. We refer to \cite{SaitoTS} for the definition of the isomorphism $\mathrm{DR}(D_1\boxtimes D_2)\simeq (L_1\otimes\mathbf{C})\boxtimes(L_2\otimes\mathbf{C})$. \begin{example}\label{twtimesexample} Consider the Hodge structure with monodromy $$H = (\mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}},-\mathrm{id},0)\in \mathrm{MHM}_{\mathbf{C}}^{\mathrm{mon}}.$$ Let us compute $H{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} H$. The underlying $\mathbf{Q}$-vector space is of dimension one. 
Moreover, observe that $H = H^{-\frac{1}{2}}$, so that by definition, we have $W_{1}(H{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} H)=0$ and $W_{2}(H{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} H) = W_0H\boxtimes W_0 H$. Thus, $H{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} H$ is pure of weight 2, with trivial monodromy, that is, it is equal to the pure Hodge structure $\mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}}(-1)$. \end{example} There is also a twisted tensor product on the derived category $D^b(\mathrm{MHM}_X^{\mathrm{mon}})$, defined for $M_1,M_2\in D^b(\mathrm{MHM}_X^{\mathrm{mon}})$ by $$M_1{\kern .1em\mathop{\otimes}\limits^{T}\kern .1em} M_2 := \delta^*(M_1{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_2)$$ where $\delta: X\to X\times X$ is the diagonal map. It is clear from the definition that these twisted products ${\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em}, {\kern .1em\mathop{\otimes}\limits^{T}\kern .1em}$ coincide with the usual products $\boxtimes, \otimes$ for Hodge modules with trivial monodromy. 
\index{otime@${\kern .1em\mathop{\otimes}\limits^{T}\kern .1em}$}\index{twisted tensor product} More generally, for any complex variety $X$ and for varieties $Y,Z$ above $X$, we have a relative twisted exterior product \index{twisted exterior product!relative}\index{boxTX@${\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em}_X$} $${\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em}_X: D^b(\mathrm{MHM}^{\mathrm{mon}}_Y)\times D^b(\mathrm{MHM}^{\mathrm{mon}}_Z)\to D^b(\mathrm{MHM}^{\mathrm{mon}}_{Y\times_XZ})$$ given for $M_1\in D^b(\mathrm{MHM}^{\mathrm{mon}}_Y)$, $M_2\in D^b(\mathrm{MHM}^{\mathrm{mon}}_Z)$ by $$M_1{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em}_XM_2 = i^*(M_1{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_2)$$ where $i$ is the closed immersion $Y\times_XZ\to Y\times_{\mathbf{C}} Z$. We denote simply by $\boxtimes_X$ the relative exterior product it induces on mixed Hodge modules with trivial monodromy. \index{box@$\boxtimes_X$} \subsection{Grothendieck ring of mixed Hodge modules\index{Grothendieck ring!of mixed Hodge modules}} A reference for this is \cite{CNS}, chapter 1 section 3.1, especially 3.1.4, 3.1.9 and~3.1.10. The Grothendieck group $K_0(\mathrm{MHM}_S)$ associated to the abelian category $\mathrm{MHM}_S$ is defined as the quotient of the free abelian group on isomorphism classes of mixed Hodge modules by relations of the form $[X] - [Y] + [Z]$ for all objects $X,Y,Z\in \mathrm{MHM}_S$ forming a short exact sequence $$0\to X \to Y \to Z \to 0.$$ The full subcategory of $\mathrm{MHM}_S$ of objects of pure weight, denoted by $\mathrm{HM}_S$, is semi-simple (see \cite{Saito88}, Lemme 5). Moreover, the natural group morphism $$K_0(\mathrm{HM}_S)\to K_0(\mathrm{MHM}_S)$$ is an isomorphism, with inverse given by sending the class of a mixed Hodge module $M$ to the sum of the classes of its graded parts.
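To see why the stated inverse is well defined, note that for a mixed Hodge module $M$ each step of the weight filtration fits into a short exact sequence $$0\to W_{i-1}M\to W_{i}M\to \mathrm{Gr}^W_{i}M\to 0,$$ so that, by induction on the length of the filtration, $[M] = \sum_{i\in\mathbf{Z}}[\mathrm{Gr}^W_{i}M]$ in $K_0(\mathrm{MHM}_S)$.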
Thus, $K_0(\mathrm{MHM}_S)$ may also be seen as the quotient of the free abelian group on isomorphism classes of Hodge modules of pure weight by relations of the form $[X] - [Y] + [Z]$ for all objects $X,Y,Z\in \mathrm{HM}_S$ forming a \textit{split} short exact sequence $$0\to X \to Y \to Z \to 0.$$ On the other hand, the triangulated category $D^{b}(\mathrm{MHM}_S)$ has a Grothendieck group $$K_0^{\mathrm{tri}}(D^{b}(\mathrm{MHM}_S)),$$ which is defined as the quotient of the free abelian group on isomorphism classes of objects of $D^{b}(\mathrm{MHM}_S)$, by relations of the form $[X] - [Y] + [Z]$, for any objects $X,Y,Z$ of the category $D^{b}(\mathrm{MHM}_S)$ fitting into a distinguished triangle $$X \to Y \to Z \to X[1].$$ The $\otimes$ operation endows $K_0^{\mathrm{tri}}(D^{b}(\mathrm{MHM}_S))$ with a ring structure. There is a natural group morphism $$K_0(\mathrm{MHM}_S)\to K_0^{\mathrm{tri}}(D^{b}(\mathrm{MHM}_S))$$ sending the class of a mixed Hodge module $M$ to the class of the complex defined by this Hodge module placed in degree zero. This morphism is an isomorphism, with inverse given by sending the class of any complex $M^{\bullet}$ of mixed Hodge modules to the alternating sum of the classes of its cohomology groups $\sum_{i\in \mathbf{Z}} (-1)^{i}[\mathcal{H}^{i}(M^{\bullet})].$ In what follows, we will always denote this group by $K_0(\mathrm{MHM}_S)$, and consider it as a ring via the ring structure on $ K_0^{\mathrm{tri}}(D^{b}(\mathrm{MHM}_S))$. As for Grothendieck rings of varieties, a morphism $f:T\to S$ of complex varieties induces a group morphism $$f_!:K_0(\mathrm{MHM}_T)\to K_0(\mathrm{MHM}_S)$$ and a ring morphism $$f^*:K_0(\mathrm{MHM}_S)\to K_0(\mathrm{MHM}_T).$$ In particular, for any complex variety $S$, $K_0(\mathrm{MHM}_S)$ is endowed with a $K_0(\mathrm{MHM}_{\mathrm{pt}})$-algebra structure.
One may also consider Grothendieck rings of mixed Hodge modules with monodromy $K_0(\mathrm{MHM}_S^{\mathrm{mon}})$,\index{Grothendieck ring!of mixed Hodge modules!with monodromy} defined in the same manner, the product being induced by ${\kern .1em\mathop{\otimes}\limits^{T}\kern .1em}$. We denote this product by $\ast$. \section{Vanishing cycles and mixed Hodge modules}\label{sect.totvanhdg} \subsection{The classical theory of vanishing cycles}\label{sect.classicaltheoryvancycles} Here we recall briefly how nearby and vanishing cycles are defined in the classical transcendental setting. The main reference for this is Deligne's article \cite{DeligneSGA} in SGA 7. For a summary with a view towards mixed Hodge modules, see section 8 of Schnell's notes \cite{Schnell}. Let $X$ be a complex manifold, $D\subset \mathbf{C}$ the open unit disc, and $f:X\to D$ a holomorphic function. We denote by $D^* = D\setminus\{0\}$ the punctured unit disc and by $\widetilde{D}^*$ its universal covering space, which may be viewed as the complex upper half plane via the covering map $$\begin{array}{ccc}p: \widetilde{D}^* = \{z\in\mathbf{C},\ \mathrm{Im}(z) >0\}&\to & D^*\\ z &\mapsto &\exp(2i\pi z)\end{array}. $$ We denote by $X^*$ (resp $X_0$) the inverse image $f^{-1}(D^*)$ (resp. $f^{-1}(0)$), and by $\widetilde{X}^*$ the complex manifold making the rightmost square in the following diagram cartesian: $$\xymatrix{X_0\ar[d]^f \ar@{^{(}->}[r]^{i} & X\ar[d]^{f} & X^* \ar@{_{(}->}[l]_{j} \ar[d]^f & \widetilde{X}^* \ar[l]_p\ar[d] \\ \{0\} \ar@{^{(}->}[r] & D & D^* \ar@{_{(}->}[l] & \widetilde{D}^* \ar[l]_p} $$ The nearby cycle functor $$\psi_f:D^b_c(X) \to D^b_c(X_0)$$ is defined in the following way: for $\mathcal{F}^{\bullet}\in D^b_c(X)$ a bounded constructible complex of sheaves on $X$, we put $$\psi_f\mathcal{F}^{\bullet} = i^{-1} R(j\circ p)_*\, (j\circ p)^{-1} \mathcal{F}^{\bullet}. 
$$ The deck transformation $z\mapsto z+1$ on $\widetilde{D}^*$ induces an automorphism of $\psi_{f}\mathcal{F}^{\bullet}$ called the \textit{monodromy}. The adjunction morphism $$\mathcal{F}^{\bullet}\to R(j\circ p)_*\, (j\circ p)^{-1} \mathcal{F}^{\bullet}$$ gives, after applying the functor $i^{-1}$, a morphism $$i^{-1}\mathcal{F}^{\bullet} \to \psi_f\mathcal{F}^{\bullet}.$$ The vanishing cycles complex $\phi_f\mathcal{F}^{\bullet}$ of $\mathcal{F}^{\bullet}$ at 0 is defined as the cone of this morphism, so that there is a distinguished triangle $$ i^{-1}\mathcal{F}^{\bullet} \to \psi_f\mathcal{F}^{\bullet} \to \phi_{f}\mathcal{F}^{\bullet} \to i^{-1}\mathcal{F}^{\bullet}[1].$$ The functors $\psi_f$ and $\phi_f$ take distinguished triangles to distinguished triangles, and commute with shifting of complexes. A theorem of Gabber (see \cite{Brylinski}, Théorème 1.2) says that if $\mathcal{F}^{\bullet}$ is a perverse complex, then both $^p\psi_f\mathcal{F}^{\bullet}:=\psi_{f}\mathcal{F}^{\bullet}[-1]$ and $^p\phi_f\mathcal{F}^{\bullet}:=\phi_{f}\mathcal{F}^{\bullet}[-1]$ are perverse. \begin{lemma}\label{vancyclesfinite} Let $X$ be a complex algebraic variety, $f:X\to \mathbf{A}^1_{\mathbf{C}}$ a morphism and $\mathcal{F}^{\bullet}\in D^{b}_c(X)$. Then $\phi_{f-a}\mathcal{F}^{\bullet} = 0$ for all but a finite number of $a\in\mathbf{C}$. \end{lemma} \begin{proof} The case when $\mathcal{F}^{\bullet}$ is a constructible sheaf in degree zero follows from theorem 2.13 in \cite{DeligneSGA45}. Indeed, though the latter is formulated in an $\ell$-adic setting, the proof is formal and goes over to the complex setting. Another argument may be given using local triviality results (see e.g. 
Corollaire 5.1 in \cite{Verdier}). Since vanishing cycles commute with shifting of complexes, we may then proceed by induction on the amplitude of the complex $\mathcal{F}^{\bullet}$: there are complexes $\mathcal{G}^{\bullet}$ and $\mathcal{G}'^{\bullet}$ of strictly smaller amplitude fitting into a distinguished triangle $$\mathcal{G}^{\bullet}\to \mathcal{F}^{\bullet}\to \mathcal{G}'^{\bullet} \to \mathcal{G}^{\bullet}[1].$$ Applying the functor $\phi_{f-a}$ and using the induction hypothesis, we get $\phi_{f-a}\mathcal{F}^{\bullet}=0$ for all but a finite number of $a$. \end{proof} \begin{remark} There are several conventions and notations concerning nearby and vanishing cycles, which may seem quite confusing. In this section we use the most common one, coming from SGA, which is also the one used by Saito. Another convention is the one by Kashiwara and Schapira from \cite{KS}. It has the same definition $\psi_f^{KS} = \psi_f$ for nearby cycles, but vanishing cycles are shifted by one: $$\phi_f^{KS} := \phi_{f}[-1],$$ so that in Kashiwara and Schapira's theory, the above distinguished triangle is shifted and takes the form \begin{equation}\label{KSdistinguished}\phi_f^{KS}\mathcal{F}^{\bullet}\to i^{-1}\mathcal{F}^{\bullet} \to \psi_{f}^{KS}\mathcal{F}^{\bullet} \to\phi_f^{KS}\mathcal{F}^{\bullet}[1].\end{equation} This latter convention is used, e.g., in David Massey's paper \cite{Massey} on the Thom-Sebastiani theorem in the derived category of constructible sheaves, because the shift chosen by Kashiwara and Schapira is the one that makes the theorem of Thom-Sebastiani work.
For this reason, it is in fact also the convention used in the motivic setting of chapter \ref{grothrings}: the distinguished triangle (\ref{KSdistinguished}) induces, in the triangulated Grothendieck ring of the category $D^{b}_c(X_0)$, the identity $$[\phi_f^{KS}\mathcal{F}^{\bullet}] = [i^{-1}\mathcal{F}^{\bullet}] - [\psi_f^{KS}\mathcal{F}^{\bullet}],$$ which corresponds to the way we defined motivic vanishing cycles in section \ref{sect.vanrecall} of chapter~\ref{grothrings}. This definition, which we took from Lunts and Schnürer's work \cite{LS}, is the one which simultaneously enables us to define a group morphism using total motivic vanishing cycles and makes the motivic Thom-Sebastiani theorem work, so that this group morphism is in fact a ring morphism: our motivic vanishing cycles measure. Denef and Loeser's motivic vanishing cycles $\mathscr{S}^{\phi}_f$, which are equal to $(-1)^{\dim X}$ times our motivic vanishing cycles, satisfy Thom-Sebastiani (each side of the Thom-Sebastiani equality being multiplied by the same power of $-1$), but cannot be combined into a group morphism because of obvious sign issues.\index{motivic vanishing cycles!sign} \end{remark} \subsection{Nearby and vanishing cycles for Hodge modules}\label{sect.vancycleshodgemodules} For a morphism $f:X\to \mathbf{A}^1$ on a complex variety $X$, denoting $X_0(f) = f^{-1}(0)$, there are nearby and vanishing cycles functors $$\psi_f^{\mathrm{Hdg}}, \phi_f^{\mathrm{Hdg}}:\mathrm{MHM}_X\to \mathrm{MHM}_{X_0(f)}^{\mathrm{mon}}$$ \index{phifHdg@$\phi_f^{\mathrm{Hdg}}$}\index{psifHdg@$\psi_f^{\mathrm{Hdg}}$} lifting the corresponding functors $^p\psi_f$ and $^p\phi_f$ on perverse sheaves. The monodromy operator is quasi-unipotent, so that its semisimple part is of finite order. The action of $T_s$ is given by this semisimple part, whereas the action of $N$ is given by the logarithm of the unipotent part of the monodromy.
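A standard example, recalled here for orientation: for $X = \mathbf{A}^1_{\mathbf{C}}$ and $f(x) = x^n$, the Milnor fibre of $f$ at $0$ consists of $n$ points, so the stalk at $0$ of the underlying nearby cycles complex is $\mathbf{Q}^n$, with monodromy permuting the $n$ points cyclically. The monodromy is therefore semisimple of order $n$, with eigenvalues the $n$-th roots of unity, so that $T_s$ has order $n$ and $N = 0$.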
For any mixed Hodge module $M\in\mathrm{MHM}_{X}$, there are decompositions $$\psi_{f}^{\mathrm{Hdg}}(M) = \psi_{f,1}^{\mathrm{Hdg}}(M) \oplus \psi_{f,\neq 1}^{\mathrm{Hdg}}(M)$$ and $$\phi_{f}^{\mathrm{Hdg}}(M) = \phi_{f,1}^{\mathrm{Hdg}}(M) \oplus \phi_{f,\neq 1}^{\mathrm{Hdg}}(M)$$ in the category $\mathrm{MHM}_{X_0(f)}$, where $\psi_{f,1}^{\mathrm{Hdg}}(M) = \ker (T_s-\mathrm{id})$ and $\psi_{f,\neq 1}^{\mathrm{Hdg}}(M) = \ker (T_s^{d-1} + \ldots + T_s + \mathrm{id})$, with $d$ the order of $T_s$ (same definitions for $\phi_f$). The following compatibility result with proper morphisms is classical in the theory of nearby and vanishing cycles, and, in the case of Hodge modules, follows from \cite{Saito90}, Theorem~2.14. \begin{lemma}\label{vancyclhdgproper} Let $p:X\to Y$ and $f:Y\to \mathbf{A}^1$ be morphisms of complex varieties. Assume that $p$ is proper. $$\xymatrix{X \ar[r]^p\ar[dr]_{f\circ p} & Y \ar[d]^f\\ &\mathbf{A}^1}$$ Denoting by $\tilde{p}$ the restriction of $p$ to $(f\circ p)^{-1}(0)$, for any $M\in\mathrm{MHM}_{X}$ there are isomorphisms $$\tilde{p}_*(\psi^{\mathrm{Hdg}}_{f\circ p}(M))\simeq \psi^{\mathrm{Hdg}}_{f}(p_{*}(M))$$ and $$\tilde{p}_*(\phi^{\mathrm{Hdg}}_{f\circ p}(M))\simeq \phi^{\mathrm{Hdg}}_{f}(p_{*}(M)).$$ \end{lemma} \subsection{Total vanishing cycles}\label{sect.totalvancycleshodgemodules} Let $X$ be a complex variety, and denote by $\mathrm{pr}$ the projection $\mathbf{A}^1_X\to \mathbf{A}^1_{\mathbf{C}}$.
By lemma \ref{vancyclesfinite} and faithfulness of the functor $\mathrm{rat}$, there is a well-defined \textit{total vanishing cycles functor}: \index{total vanishing cycles functor} $$\begin{array}{rccc}\phi^{\tot}_X:&\mathrm{MHM}_{\mathbf{A}^1_{X}} &\to& \mathrm{MHM}_{X}^{\mathrm{mon}}\\ & M & \mapsto & \bigoplus_{a\in\mathbf{C}}\phi_{\mathrm{pr}-a}^{\mathrm{Hdg}} M \end{array}$$ It satisfies a Thom-Sebastiani property: \index{Thom-Sebastiani!for $\phi_X^{\mathrm{tot}}$} \begin{prop}\label{HodgeTS} For all $M_1,M_2\in D^b(\mathrm{MHM}_{\mathbf{A}^1_{X}})$, denoting by $\mathrm{add}$ the addition morphism $\mathbf{A}^1_{X}\times \mathbf{A}^1_{X}\to \mathbf{A}^1_{X^2}$, one has the isomorphism $$\phi^{\tot}_{X^2}\left((\mathrm{add}_{!})(M_1\boxtimes M_2)\right) \simeq \phi^{\tot}_X(M_1){\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} \phi^{\tot}_X(M_2)$$ in $D^b(\mathrm{MHM}_{X^2}^{\mathrm{mon}})$. \end{prop} \begin{proof} Saito's Thom-Sebastiani theorem for Hodge modules (see \cite{SaitoTS}) gives, for any $a_1,a_2\in \mathbf{C}$, an isomorphism $$i_{a_1,a_2}^*\phi^{\mathrm{Hdg}}_{\mathrm{pr}\oplus \mathrm{pr} - a_1 -a_2}(M_1\boxtimes M_2) \simeq \phi^{\mathrm{Hdg}}_{\mathrm{pr}-a_1}M_1 {\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} \phi_{\mathrm{pr}-a_2}^{\mathrm{Hdg}}M_2,$$ in $D^{b}(\mathrm{MHM}^{\mathrm{mon}}_{\mathrm{pr}^{-1}(a_1)\times \mathrm{pr}^{-1}(a_2)})$, where $$i_{a_1,a_2}:\mathrm{pr}^{-1}(a_1)\times \mathrm{pr}^{-1}(a_2) \to (\mathrm{pr}\oplus \mathrm{pr})^{-1}(a_1 + a_2)$$ is the natural inclusion. For any $a\in\mathbf{C}$, the products $\mathrm{pr}^{-1}(a_1)\times \mathrm{pr}^{-1}(a_2)$ for all $a_1,a_2$ such that $a_1 + a_2 = a$ form a partition of $(\mathrm{pr}\oplus \mathrm{pr})^{-1}(a)$. 
Therefore, taking the direct sum over all $a_1,a_2$, we get an isomorphism $$\bigoplus_{a\in\mathbf{C}}\phi^{\mathrm{Hdg}}_{\mathrm{pr}\oplus \mathrm{pr} -a}(M_1\boxtimes M_2) \simeq \phi^{\tot}_{X}(M_1){\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} \phi^{\tot}_{X}(M_2)$$ in $D^{b}(\mathrm{MHM}^{\mathrm{mon}}_{X^2}).$ As in the motivic setting, a compactification argument allows us to write the left-hand side as $$ \bigoplus_{a\in\mathbf{C}}\phi^{\mathrm{Hdg}}_{\mathrm{pr}\oplus \mathrm{pr} -a}(M_1\boxtimes M_2) \simeq \bigoplus_{a\in\mathbf{C}} \phi^{\mathrm{Hdg}}_{\mathrm{pr} -a} ((\mathrm{add})_!(M_1\boxtimes M_2)),$$ whence the result. \end{proof} \section{Symmetric products and vanishing cycles}\label{sect.symprodvancycles} \subsection{Symmetric products of mixed Hodge modules}\label{mhmsymproducts} From now on, we are going to take symmetric products, so we assume the varieties to be quasi-projective over $\mathbf{C}$. A notion of \textit{symmetric power} of a complex of mixed Hodge modules was defined by Maxim, Saito and Schürmann in \cite{MSS}. Here we are going to explain how it extends to the setting of mixed Hodge modules with monodromy. Theorem 1.9 in \cite{MSS} gives, for an integer $n\geq 1$, bounded complexes of Hodge modules $M_1,\ldots,M_n$ on a quasi-projective complex variety $X$ and for all $\sigma \in \mathfrak{S}_n$, an isomorphism $$\sigma^{\sharp}: M_1\boxtimes \ldots \boxtimes M_n \xrightarrow{\sim} \sigma_{*}\left(M_{\sigma(1)}\boxtimes\ldots\boxtimes M_{\sigma(n)} \right)$$ in $D^b(\mathrm{MHM}_{X^n})$ compatible with the analogous isomorphism on the underlying complexes of constructible sheaves. These isomorphisms in fact induce a right action of $\mathfrak{S}_n$ on the exterior product $M_1\boxtimes \ldots \boxtimes M_n$. 
When $X$ is smooth, these isomorphisms are constructed from analogous isomorphisms on the underlying complexes of $\mathcal{D}$-modules and constructible sheaves, which are checked to be compatible via the De Rham functor (proposition 1.5 in \cite{MSS}). The general non-smooth case is deduced from this by an embedding argument. Note that the above generalises to complexes of mixed Hodge modules with monodromy, if we replace the ordinary exterior product $\boxtimes$ by its twisted counterpart ${\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em}$. Indeed, the underlying complexes of $\mathcal{D}$-modules and constructible sheaves and the De Rham functor relating them remain the same, and the monodromy is compatible with the above action. Now let $M$ be a bounded complex of mixed Hodge modules with monodromy over a quasi-projective complex variety $X$. Let $n\geq 1$ be an integer, and denote by $\pi: X^n\to S^nX$ the canonical projection. The above construction leads to the definition of a right $\mathfrak{S}_n$-action on the complex $\pi_*(\bigtwtimes{n} M)$. Finally, as in~\cite{MSS}, the idempotent $e = \frac{1}{n!}\sum_{\sigma\in \mathfrak{S}_n}\sigma\in\mathbf{Q}[\mathfrak{S}_n]$ defines an idempotent endomorphism of $\pi_{*}(\bigtwtimes{n}M)$ in the category $D^b(\mathrm{MHM}_{S^nX}^{\mathrm{mon}})$, which splits by corollary~2.10 in~\cite{BS}, meaning that we may write $$\pi_*(\bigtwtimes{n} M) = \ker(e) \oplus \mathrm{Im}(e).$$ For any bounded complex of mixed Hodge modules~$M$ with monodromy on a complex variety $X$, the symmetric power \index{symmetric power!of mixed Hodge module} $S^n(M)$ is then defined to be $$S^n(M) = \left(\pi_*\left(\bigtwtimes{n}M\right)\right)^{\mathfrak{S}_n} := \mathrm{Im}(e)\in D^b(\mathrm{MHM}^{\mathrm{mon}}_{S^nX}),$$ where $\pi: X^n\to S^nX$ is the canonical projection. \begin{remark} In the case where $M$ has trivial monodromy, we recover the definition from \cite{MSS}.
In particular, in this case $S^nM$ is an element of $D^{b}(\mathrm{MHM}_{S^nX})$. \end{remark} As in chapter \ref{eulerproducts}, section \ref{sect.symprodgrouplaw}, we may define a multiplicative group structure on the product $\prod_{i\geq 1}K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^iX})$, by using the twisted exterior product ${\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em}$: for all $a =(a_i)_{i\geq 1}$ and $b= (b_i)_{i\geq 1}$ in the product $\prod_{i\geq 1}K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^iX})$, put, for every $n\geq 1$, $$ (ab)_n = \sum_{i=0}^na_i{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} b_{n-i}$$ where by convention $a_0 = b_0 = 1$, and $a_i{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} b_{n-i}$ is the image of $(a_i,b_{n-i})$ through the composition $$K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^iX})\times K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^{n-i}X})\xrightarrow{{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em}}K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^iX\times S^{n-i}X})\to K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^nX})$$ where the latter morphism is obtained from the quotient map $X^n\to S^nX$ by passing to the quotient with respect to the natural permutation action of the group $\mathfrak{S}_i\times \mathfrak{S}_{n-i}$ on~$X^n$. We denote this group by $K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^{\bullet}X}).$ \begin{lemma} \label{mhmmorphism} There is a unique group morphism $$S_X^{\mathrm{Hdg}}: K_0(\mathrm{MHM}^{\mathrm{mon}}_X) \to K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^{\bullet}X}),$$ such that the image of the class of a complex of mixed Hodge modules $M$ over $X$ is the family of classes~$([S^nM])_{n\geq 1}$. \index{SXHdg@$S_X^{\mathrm{Hdg}}$} \end{lemma} \begin{proof} We define a map $S_X^{\mathrm{Hdg}}$ from the free abelian group on Hodge modules to the group $K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^{\bullet}X})$ by putting $S_X^{\mathrm{Hdg}}(M) = ([S^nM])_{n\geq 1}$.
By the discussion about the definition of $K_0(\mathrm{MHM}^{\mathrm{mon}}_X)$, to show that~$S_X^{\mathrm{Hdg}}$ passes to the quotient, it suffices to show that whenever $M,M_0, M_1$ are Hodge modules such that $M = M_0\oplus M_1$, we have $S_X^{\mathrm{Hdg}}(M) = S_X^{\mathrm{Hdg}}(M_0)S_X^{\mathrm{Hdg}}(M_1)$. In fact, denoting for every $k\in \{0,\ldots,n\}$ by $i_k$ the natural morphism $S^{n-k}X\times S^{k}X\to S^nX$, we will show that for every $n\geq 1$, we have, in $K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^nX})$, the equality $$S^nM = \sum_{k=0}^n i_{k,*}\left(S^{n-k}M_0{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} S^{k}M_1\right).$$ For this, note that in $D^b(\mathrm{MHM}^{\mathrm{mon}}_{X^n})$ we have the direct sum decomposition $$\bigtwtimes{n}M = \bigoplus_{(\epsilon_1,\ldots,\epsilon_n)\in\{0,1\}^n}M_{\epsilon_1}{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} \ldots {\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_{\epsilon_n}= \bigoplus_{k=0}^n \bigoplus_{\substack{(\epsilon_1,\ldots,\epsilon_n)\in\{0,1\}^n\\ \epsilon_1 + \ldots + \epsilon_n = k}}M_{\epsilon_1}{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em}\ldots{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_{\epsilon_n}.$$ Let $M_{(k)} :=\oplus_{\substack{(\epsilon_1,\ldots,\epsilon_n)\in\{0,1\}^n\\ \epsilon_1 + \ldots + \epsilon_n = k}}M_{\epsilon_1}{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em}\ldots{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_{\epsilon_n}$ and let $\pi: X^n\to S^nX$ be the quotient morphism. After applying the exact functor $\pi_*$ to the above decomposition, we get $$\pi_*\left(\bigtwtimes{n}M\right) = \bigoplus_{k=0}^n \pi_*(M_{(k)})$$ in $D^b(\mathrm{MHM}^{\mathrm{mon}}_{S^nX})$. 
Observe that each of the factors $\pi_{*}(M_{(k)})$ is stable under the action of the symmetric group $\mathfrak{S}_n$, so that we have $$\pi_*\left(\bigtwtimes{n} M\right)^{\mathfrak{S}_n} = \bigoplus_{k=0}^n(\pi_*M_{(k)})^{\mathfrak{S}_n}.$$ It suffices to prove that for every $k$, $i_{k,*}\left(S^{n-k}M_0{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} S^{k}M_1\right)$ is isomorphic to $(\pi_*M_{(k)})^{\mathfrak{S}_n}$. We denote again by $e$ the restriction of the idempotent $e$ to $\pi_*M_{(k)}$. Fix $k\in\{0,\ldots,n\}$, and denote by $\pi_k:X^k\to S^kX$ and $\pi_{n-k}: X^{n-k}\to S^{n-k}X$ the quotient maps. The corresponding idempotent on $(\pi_{n-k})_*\bigtwtimes{n-k}M_0$ (resp. $(\pi_{k})_*\bigtwtimes{k}M_1$) will be denoted by~$e^0$ (resp.~$e^1$). By the commutativity of the diagram $$\begin{array}{ccc}X^{n-k}\times X^{k}\ \ \ \ &\xrightarrow{\mathrm{id}}& X^n \\ \downarrow\ \scriptstyle{\pi_{n-k}\times \pi_k} & & \downarrow\ \scriptstyle{\pi}\\ S^{n-k}X\times S^kX\ \ \ \ &\xrightarrow{i_k} & S^nX\end{array}$$ and exactness of $\pi_*$, the inclusion of the direct factor $(\bigtwtimes{n-k} M_0){\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em}(\bigtwtimes{k}M_1)\to M_{(k)}$ induces a monomorphism $$i_{k,*}\left((\pi_{n-k})_*\left(\bigtwtimes{n-k}M_0\right) {\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} (\pi_k)_*\left(\bigtwtimes{k}M_1\right)\right)\xrightarrow{f} \pi_*M_{(k)}.$$ The $\mathfrak{S}_{n-k}\times \mathfrak{S}_k$-action on the left-hand side is compatible with the $\mathfrak{S}_n$-action on the right-hand side, when $\mathfrak{S}_{n-k}\times \mathfrak{S}_k$ is seen in a natural way as a subgroup of $\mathfrak{S}_n$. 
Therefore, taking invariants, we have a monomorphism $$i_{k,*}\left(S^{n-k}M_0{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} S^{k}M_1\right)\xrightarrow{\tilde{f}} (\pi_*M_{(k)})^{\mathfrak{S}_n},$$ fitting into the commutative diagram $$\xymatrix{i_{k,*}\left((\pi_{n-k})_*\left(\bigtwtimes{n-k}M_0\right) {\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} (\pi_k)_*\left(\bigtwtimes{k}M_1\right)\right)\ar[r]^-{f}\ar[d]^{i_{k,*}(e^0\,{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em}\, e^1)} &\pi_*M_{(k)}\ar[d]^{e}\\ i_{k,*}\left(S^{n-k}M_0{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} S^{k}M_1\right)\ar[r]^-{\tilde{f}}& (\pi_*M_{(k)})^{\mathfrak{S}_n} }$$ Since $e$ is surjective, $\tilde{f}$ is an isomorphism. \end{proof} \begin{definition} For any $\a\in K_0(\mathrm{MHM}_{X}^{\mathrm{mon}})$ and any $n\geq 1$, we define $S^n\a$ to be the element of $K_0(\mathrm{MHM}_{S^nX}^{\mathrm{mon}})$ given by the $n$-th component of $S_X^{\mathrm{Hdg}}(\a)$. \end{definition} \begin{remark} If $\a$ is an element of $K_0(\mathrm{MHM}_{X})$ (i.e. has trivial monodromy), then $S^n\a$ is an element of $K_0(\mathrm{MHM}_{S^nX})$. \end{remark} \subsection{Compatibility between symmetric products and total vanishing cycles}\label{sect.compsymprodvanhdg} Denote by $\mathrm{add}_n$ the addition map $\mathbf{A}^n\to \mathbf{A}^1$ on the group scheme $\mathbf{A}^n$, and for any quasi-projective variety $Y$, by $\pi_Y$ the quotient map $Y^n\to S^nY$.
Since $\mathrm{add}_n$ is invariant under permutation of the coordinates of $\mathbf{A}^n$, for any quasi-projective complex variety $X$, it induces a morphism $\overline{\mathrm{add}}_n$ fitting into a commutative diagram \index{add@$\mathrm{add}_n$, $\overline{\mathrm{add}}_n$} $$\xymatrix{(\mathbf{A}^1_{X})^n\ar[r]^-{\mathrm{add}_n}\ar[d]_{\pi_{\mathbf{A}^1_X}} & \mathbf{A}^1_{X^n}\ar[d]^{\pi'_X} \\ S^{n}(\mathbf{A}^1_{X}) \ar[r]^-{\overline{\mathrm{add}}_n} & \mathbf{A}^1_{S^nX}}$$ where we denote by $\pi'_X$ the morphism $\mathbf{A}^1_{X^n}\to \mathbf{A}^1_{S^nX}$ induced by $\pi_X:X^n\to S^nX$. Let $M \in D^b(\mathrm{MHM}_{\mathbf{A}^1_{X}})$. From section \ref{mhmsymproducts} we know that there is a $\mathfrak{S}_n$-action on the complex $\bboxtimes^nM$ on $(\mathbf{A}^1_X)^n$, compatible with the permutation action of $\mathfrak{S}_{n}$ on~$(\mathbf{A}^1_X)^n$. Since $\pi'_X\circ \mathrm{add}_n$ is equivariant if one equips $\mathbf{A}^1_{S^nX}$ with the trivial $\mathfrak{S}_n$-action, this $\mathfrak{S}_n$-action induces a $\mathfrak{S}_n$-action on $(\pi'_X\circ \mathrm{add}_n)_! \left(\bboxtimes^nM\right)$. The relation $\pi'_X\circ \mathrm{add}_n = \overline{\mathrm{add}}_n\circ \pi_{\mathbf{A}^1_X}$ shows that the functor $(\overline{\mathrm{add}}_n)_!$, sending $(\pi_{\mathbf{A}^1_X})_*(\bboxtimes^nM)$ to $(\pi'_X\circ\mathrm{add}_n)_{!}(\bboxtimes^nM)$, is compatible with the $\mathfrak{S}_n$-actions, which, taking invariants, gives us the following relation: \begin{lemma} Let $M\in D^b(\mathrm{MHM}_{\mathbf{A}^1_X})$. One has \label{addsymprod} \begin{equation} \label{addsymprodeq} (\overline{\mathrm{add}}_n)_!S^nM = \left((\pi'_X\circ \mathrm{add}_n)_!\left(\bboxtimes^nM\right)\right)^{\mathfrak{S}_n} \end{equation} in $D^b(\mathrm{MHM}_{\mathbf{A}^1_{S^nX}})$.
\end{lemma} \begin{prop}\label{vancyclefunctorsymproducts} For any $M\in D^b(\mathrm{MHM}_{\mathbf{A}^1_{X}})$, we have $$\phi^{\tot}_{S^nX}\left(\left(\overline{\mathrm{add}}_n\right)_!S^nM\right) = S^n\left(\phi^{\tot}_X(M)\right)$$ in $D^{b}(\mathrm{MHM}_{S^nX}^{\mathrm{mon}}).$ \end{prop} \begin{proof} For all $M_1,\ldots M_n\in D^b(\mathrm{MHM}_{\mathbf{A}^1_{X}})$, the Thom-Sebastiani theorem for $\phi^{\tot}$ says $$\phi^{\tot}_{X^n}\left((\mathrm{add}_n)_!\left(M_1\boxtimes\ldots\boxtimes M_n\right)\right) \simeq \phi^{\tot}_X(M_1){\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} \ldots {\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} \phi^{\tot}_X(M_n)$$ in $D^{b}(\mathrm{MHM}_{X^n}^{\mathrm{mon}}).$ Composing with the functor $(\pi_X)_*$ where $\pi_X:X^n\to S^nX$ is the projection, and using lemma \ref{vancyclhdgproper}, we have $$\phi^{\tot}_{S^nX}((\pi'_X\circ \mathrm{add}_n)_!(M_1\boxtimes \ldots \boxtimes M_n)) \simeq (\pi_X)_*\left(\phi^{\tot}_X(M_1){\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} \ldots {\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} \phi^{\tot}_X(M_n)\right),$$ where $\pi'_X$ is the morphism $\mathbf{A}^1_{X^n}\to \mathbf{A}^1_{S^nX}$ induced by $\pi_X$. 
In the same manner, for every $\sigma\in\mathfrak{S}_n$, we also have $$\phi^{\tot}_{S^nX}((\pi'_X\circ \mathrm{add}_n)_!(M_{\sigma^{-1}(1)}\boxtimes \ldots \boxtimes M_{\sigma^{-1}(n)})) $$ $$\simeq (\pi_X)_*\left(\phi^{\tot}_X(M_{\sigma^{-1}(1)}){\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} \ldots {\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} \phi^{\tot}_X(M_{\sigma^{-1}(n)})\right).$$ Thus, for any $M\in D^b(\mathrm{MHM}_{\mathbf{A}^1_{X}})$, the idempotent $\frac{1}{n!}\sum_{\sigma\in\mathfrak{S}_n}\sigma\in \mathbf{Q}[\mathfrak{S}_n]$ induces idempotents~$e$ and~$e'$ of $$(\pi'_X\circ\mathrm{add}_n)_!\left(\bboxtimes^n M\right)\in D^b(\mathrm{MHM}_{\mathbf{A}^1_{S^nX}})$$ and $$(\pi_X)_*\bigtwtimes{n}(\phi^{\tot}_X(M))\in D^{b}(\mathrm{MHM}_{S^nX}^{\mathrm{mon}})$$ such that $\phi^{\tot}_{S^nX}(e) =e'$. Therefore, splittings being preserved by any additive functor, we have $$\phi^{\tot}_{S^nX}\left(\left((\pi'_X\circ\mathrm{add}_n)_!\bboxtimes^n M\right)^{\mathfrak{S}_n}\right) \simeq \left((\pi_X)_*\bigtwtimes{n}(\phi^{\tot}_X(M))\right)^{\mathfrak{S}_n},$$ which, using the isomorphism (\ref{addsymprodeq}) above, gives the result. \end{proof} \section{Compatibility with motivic vanishing cycles}\label{sect.motvancomp} \subsection{The Hodge realisation}\label{sect.hodgereal} Let $S$ be a complex variety, and let $X\xrightarrow{p}S$ be an $S$-variety, endowed with a $\mu_n$-action~$\sigma$ for some $n\geq 1$. Then $\sigma(e^{\frac{2i\pi}{n}})$ induces an automorphism $T_s(\sigma)$ of finite order on each cohomology group of the complex of mixed Hodge modules $p_!\mathbf{Q}_X^{\mathrm{Hdg}}$ (see notation \ref{qshdg}).
Thus, as explained in \cite{GLM} section~3.16, this defines a group morphism $$\begin{array}{rccc}\chi^{\mathrm{Hdg}}_S:&\mathscr{M}_S^{\hat{\mu}}&\to &K_0(\mathrm{MHM}_S^{\mathrm{mon}})\\ & [X\xrightarrow{p} S,\sigma] & \mapsto& \sum_{i\in\mathbf{Z}}(-1)^i[\mathcal{H}^i(p_!\mathbf{Q}_X^{\mathrm{Hdg}}),T_s(\sigma), 0] \end{array}$$ called the \textit{Hodge realisation morphism}\index{Hodge realisation}. Here are some properties of this Hodge realisation (for proofs, see \cite{GLM}, section 6): \begin{prop}\label{chihdgprop} Let $S,T$ be complex varieties. \begin{enumerate} \item The morphism $\chi^{\mathrm{Hdg}}_S$ commutes with twisted exterior products, that is, for any $\a\in\mathscr{M}_{S}^{\hat{\mu}}, \b\in\mathscr{M}_{T}^{\hat{\mu}}$, we have $$\chi_{S\times T}^{\mathrm{Hdg}}(\Psi(\a\boxtimes\b)) = \chi_{S}^{\mathrm{Hdg}}(\a){\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} \chi_{T}^{\mathrm{Hdg}}(\b)$$ where $\Psi:\mathscr{M}_{S\times T}^{\hat{\mu}\times \hat{\mu}} \to \mathscr{M}_{S\times T}^{\hat{\mu}}$ is the convolution morphism from chapter \ref{grothrings}, section \ref{sect.convolution}. \item For any morphism $f:T\to S$ between complex varieties, we have $$\chi^{\mathrm{Hdg}}_S\circ f_!
= f_!\circ \chi_T^{\mathrm{Hdg}}\ \ \ \ \text{and}\ \ \ \chi^{\mathrm{Hdg}}_T\circ f^* = f^*\circ \chi_{S}^{\mathrm{Hdg}}.$$ \item \label{chihgdpropring} The group morphism $\chi_S^{\mathrm{Hdg}}$ is a ring morphism $$(\mathscr{M}_S^{\hat{\mu}},\ast)\to (K_0(\mathrm{MHM}_S^{\mathrm{mon}}),\ast).$$ \end{enumerate} \end{prop} \begin{example} For $S=\mathrm{Spec}\, \mathbf{C}$, we have, for any separated complex variety $X$ with $\mu_n$-action $\sigma$, $$\chi_{\mathrm{pt}}^{\mathrm{Hdg}}([X,\sigma]) = \sum_{i=0}^{\dim X}(-1)^i[H^{i}_c(X(\mathbf{C}),\mathbf{Q}),T_s(\sigma)],$$ where $[H^{i}_c(X(\mathbf{C}),\mathbf{Q}),T_s(\sigma)]$ is the class of the mixed Hodge structure defined by Deligne \cite{Deligne} on the singular cohomology group $H^{i}_c(X(\mathbf{C}),\mathbf{Q})$, together with the automorphism of finite order induced by~$\sigma(e^{\frac{2i\pi}{n}})$. \end{example} \begin{example} In section \ref{sect.TSexample} of chapter \ref{grothrings}, we showed that the motivic vanishing cycles $\phi_{x^2}$ of the function $\mathbf{A}^1\to \mathbf{A}^1$, $x\mapsto x^2$, are equal to the class $1-[\tilde{E},\mu_2]\in\mathscr{M}_{\mathbf{C}}^{\hat{\mu}}$, where $[\tilde{E},\mu_2]$ is the class of the union of two points with permutation action by $\mu_2$. The Hodge realisation of $[\tilde{E},\mu_2]$ is the class of the Hodge structure with monodromy $$\left((\mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}})^2,\left(\begin{array}{cc} 0 & 1 \\ 1 & 0 \end{array}\right)\right),$$ which may be decomposed as a direct sum $$(\mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}},\mathrm{id}) \oplus (\mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}}, -\mathrm{id}).$$ Thus, the class $1- [\tilde{E},\mu_2]$ maps to $-[H]$, the opposite of the class of the Hodge structure with monodromy $H = (\mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}},-\mathrm{id},0)$.
We may conclude that the equality $$(1-[\tilde{E},\mu_2])\ast (1-[\tilde{E},\mu_2]) = \mathbf{L}$$ from section \ref{sect.TSexample} of chapter \ref{grothrings} becomes the equality $$H{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} H = \mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}}(-1)$$ in $K_0(\mathrm{MHM}^{\mathrm{mon}}_{\mathrm{pt}})$. This is consistent with example \ref{twtimesexample}, where we actually proved that the two sides are isomorphic. \end{example} \begin{remark} The above Hodge realisation induces the classical Hodge realisation $\chi_S^{\mathrm{Hdg}}:\mathscr{M}_S\to K_0(\mathrm{MHM}_S)$ (see e.g. \cite{CNS}, Chapter 1, paragraph 3.3) on the corresponding rings without monodromy. In what follows, we are also going to use this realisation. \end{remark} \subsection{Compatibility with symmetric products} \begin{lemma}\label{chihdgsymprod} Let $p:Y\to X$ be a quasi-projective variety over $X$. Then $$\chi^{\mathrm{Hdg}}_{S^nX}(S^nY) = S^n(\chi^{\mathrm{Hdg}}_XY)$$ in $K_0(\mathrm{MHM}_{S^nX})$. \end{lemma} \begin{proof} It suffices to establish the relation $$(S^np)_!\mathbf{Q}_{S^nY}^{\mathrm{Hdg}} = S^{n}(p_!\mathbf{Q}_Y^{\mathrm{Hdg}}),$$ which is exactly equation (1.11) in \cite{CMSSY}. \end{proof} \begin{lemma} The diagram $$\xymatrix{(\mathscr{M}_X,+) \ar[r]^-S \ar[d] &\left(\prod_{n\geq 1}\mathscr{M}_{S^nX}, \cdot \right)\ar[d]\\ (K_0(\mathrm{MHM}_X),+) \ar[r]^-{S_X^{\mathrm{Hdg}}} & \left(\prod_{n\geq 1}K_0(\mathrm{MHM}_{S^nX}),\cdot\right) }$$ where the vertical arrows are given by the Hodge realisation morphisms, commutes. \end{lemma} \begin{proof} We checked that it holds for classes of quasi-projective varieties $Y$ over~$X$ in lemma \ref{chihdgsymprod}, and such classes generate $\mathrm{KVar}_X$. Moreover, note that for any $M\in D^b(\mathrm{MHM}_X)$, any $n\geq 1$ and any $k\in\mathbf{Z}$, we have $S^n(M(k)) = (S^nM)(nk)$ (recall that $M(k) = M\otimes \mathbf{Q}^{\mathrm{Hdg}}_{\mathrm{pt}}(k)$).
Thus, for any $\a\in\mathrm{KVar}_X$, any $k\in\mathbf{Z}$ and any $n\geq 1$, we have \begin{eqnarray*} \chi_{S^nX}^{\mathrm{Hdg}}(S^n(\mathbf{L}^{-k}\a)) &= &\chi_{S^nX}^{\mathrm{Hdg}}(\mathbf{L}^{-kn}S^n\a)\\ & = &(\chi_{S^nX}^{\mathrm{Hdg}}(S^n\a))(kn)\\ & = &S^n(\chi^{\mathrm{Hdg}}_{X}(\a)(k))\\ & = & S^n(\chi_{X}^{\mathrm{Hdg}}(\mathbf{L}^{-k}\a)) \end{eqnarray*} which shows that the diagram also commutes on the localisation. \end{proof} \subsection{Grothendieck rings of Hodge modules over the affine line} There are two natural $K_0(\mathrm{MHM}_{X})$-algebra structures on the group $K_0(\mathrm{MHM}_{\mathbf{A}^1_{X}})$. The first one is given by the pullback morphism $$(\epsilon_X)^*:K_0(\mathrm{MHM}_{X}) \to K_0(\mathrm{MHM}_{\mathbf{A}^1_{X}})$$ where $\epsilon_X:\mathbf{A}^1_{X}\to X$ is the structural morphism. Denote by $\star$ the product induced by the addition morphism $\mathrm{add}:\mathbf{A}^2_X\to \mathbf{A}^1_{X}$: $$\star: K_0(\mathrm{MHM}_{\mathbf{A}^1_{X}})\times K_0(\mathrm{MHM}_{\mathbf{A}^1_{X}})\xrightarrow{\boxtimes_X} K_0(\mathrm{MHM}_{\mathbf{A}^2_X})\xrightarrow{\mathrm{add}_!} K_0(\mathrm{MHM}_{\mathbf{A}^1_{X}})$$ (see the end of section \ref{sect.mhmmonodromy} for the definition of $\boxtimes_X$ in the context of mixed Hodge modules). Moreover, we define $i_X:X\to \mathbf{A}^1_X$ and $i^2_X: X\to \mathbf{A}^2_X$ to be the morphisms induced by the inclusions $\{0\}\to \mathbf{A}^1_{\mathbf{C}}$ and $\{(0,0)\}\to \mathbf{A}^2_{\mathbf{C}}$, respectively. \begin{lemma} The functor $(i_X)_!$ induces a ring morphism $$(i_X)_!: K_0(\mathrm{MHM}_{X}) \to (K_0(\mathrm{MHM}_{\mathbf{A}^1_{X}}),\star),$$ endowing the ring $(K_0(\mathrm{MHM}_{\mathbf{A}^1_X}),\star)$ with a $K_0(\mathrm{MHM}_X)$-algebra structure.
\end{lemma} \begin{proof} There is a commutative diagram $$\xymatrix{X \ar[r]^{i^2_X} \ar[rd]^{i_X}& \mathbf{A}^2_X \ar[d]^{\mathrm{add}}\\ &\mathbf{A}^1_X}$$ which gives us for $M,M'\in D^b(\mathrm{MHM}_X),$ \begin{eqnarray*}(i_X)_!(M)\star (i_X)_! (M') & = & (\mathrm{add})_!((i_X)_!(M)\boxtimes_X (i_X)_!(M')) \\ & = & (\mathrm{add})_!(i^2_X)_! (M\otimes_XM') \\ &=& (i_X)_!(M\otimes_X M'). \end{eqnarray*} \end{proof} \begin{lemma}\label{chihdgconvv} The Hodge realisation $\chi^{\mathrm{Hdg}}_{\mathbf{A}^1_{X}}$ is a morphism $$\chi^{\mathrm{Hdg}}_{\mathbf{A}^1_{X}} : (\mathscr{M}_{\mathbf{A}^1_{X}},\star)\to (K_0(\mathrm{MHM}_{\mathbf{A}^1_{X}}),\star)$$ of $\mathscr{M}_{X}$-algebras. \end{lemma} \begin{proof} Let $\a,\b\in\mathscr{M}_{\mathbf{A}^1_{X}}$. Using the fact that the Hodge realisation commutes with pushforwards, pullbacks and exterior products (proposition \ref{chihdgprop}), we have \begin{eqnarray*} \chi_{\mathbf{A}^1_X}^{\mathrm{Hdg}}(\a\star \b) & = & \chi^{\mathrm{Hdg}}_{\mathbf{A}^1_X}(\mathrm{add}_!(\a\boxtimes_X\b))\\ & = & \mathrm{add}_{!} \left(\chi_{\mathbf{A}^2_X}^{\mathrm{Hdg}}(\a\boxtimes_X\b)\right)\\ & = & \mathrm{add}_!\left(\chi_{\mathbf{A}^1_X}^{\mathrm{Hdg}}(\a)\boxtimes_X\chi_{\mathbf{A}^1_X}^{\mathrm{Hdg}}(\b)\right)\\ & = & \chi_{\mathbf{A}^1_X}^{\mathrm{Hdg}}(\a)\star\chi_{\mathbf{A}^1_X}^{\mathrm{Hdg}}(\b). \end{eqnarray*} \end{proof} \subsection{Compatibility with motivic vanishing cycles} In section \ref{sect.totalvancycleshodgemodules} we defined the total vanishing cycles functor $$\phi^{\tot}_X:\mathrm{MHM}_{\mathbf{A}^1_{X}}\to \mathrm{MHM}_{X}^{\mathrm{mon}}.$$ It induces a group morphism $$\Phi_X^{\mathrm{Hdg}}: K_0(\mathrm{MHM}_{\mathbf{A}^1_{X}})\to K_0(\mathrm{MHM}_{X}^{\mathrm{mon}})$$ between the corresponding Grothendieck rings.
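Concretely, unwinding the definition of $\phi^{\tot}_X$ from section \ref{sect.totalvancycleshodgemodules}, the morphism $\Phi_X^{\mathrm{Hdg}}$ sends the class of a complex $M\in D^b(\mathrm{MHM}_{\mathbf{A}^1_{X}})$ to $$\Phi_X^{\mathrm{Hdg}}([M]) = [\phi^{\tot}_X(M)] = \sum_{a\in\mathbf{C}}\left[\phi^{\mathrm{Hdg}}_{\mathrm{pr}-a}(M)\right],$$ the sum being finite by lemma \ref{vancyclesfinite}.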
\begin{prop}\label{HodgeTSgrothring} The morphism $\Phi^{\mathrm{Hdg}}_X$ is a morphism of $K_0(\mathrm{MHM}_{X})$-algebras $$\Phi_X^{\mathrm{Hdg}}:(K_0(\mathrm{MHM}_{\mathbf{A}^1_{X}}),\star)\to (K_0(\mathrm{MHM}_{X}^{\mathrm{mon}}),\ast).$$\index{Phihdg@$\Phi_X^\mathrm{Hdg}$} \end{prop} \begin{proof} This is a direct consequence of the Thom-Sebastiani property for total vanishing cycles, proposition \ref{HodgeTS}. \end{proof} On the other hand, in chapter \ref{grothrings} we defined, for every variety $X$ over a field $k$ of characteristic zero, the motivic vanishing cycles measure $$\Phi_X:\mathscr{E}xp\mathscr{M}_X\to (\mathscr{M}_X^{\hat{\mu}},\ast).$$ Here, to be able to compare it with $\Phi^{\mathrm{Hdg}}_X$, we are rather going to consider its composition $$\Phi'_{X}: (\mathscr{M}_{\mathbf{A}^1_X},\star) \to (\mathscr{M}_X^{\hat{\mu}},\ast)$$\index{phip@$\Phi'_X$} with the quotient morphism $$(\mathscr{M}_{\mathbf{A}^1_X},\star)\to \mathscr{E}xp\mathscr{M}_{X}$$ (which is given, by definition, by sending to zero the elements $[\mathbf{A}^1_Y\to \mathbf{A}^1_X]$ for all morphisms $Y\to X$, the morphism $\mathbf{A}^1_Y\to \mathbf{A}^1_X$ being the identity on the $\mathbf{A}^1$-components, see the definition of Grothendieck rings with exponentials in chapter \ref{grothrings}, section \ref{subsect.grothrings}). Recall from property \ref{chihgdpropring} of proposition \ref{chihdgprop} and lemma \ref{chihdgconvv} the multiplicative properties of the Hodge realisation morphisms. \begin{prop}\label{commdiagramvancycles} The diagram $$\xymatrix{(\mathscr{M}_{\mathbf{A}^1_{X}},\star) \ar[r]^{\Phi'_{X}}\ar[d]^{\chi^{\mathrm{Hdg}}} & (\mathscr{M}_{X}^{\hat{\mu}},\ast)\ar[d]^{\chi^{\mathrm{Hdg}}}\\ (K_0(\mathrm{MHM}_{\mathbf{A}^1_{X}}),\star)\ar[r]^{\Phi_X^{\mathrm{Hdg}}} &(K_0(\mathrm{MHM}_{X}^{\mathrm{mon}}),\ast) }$$ commutes.
\end{prop} \begin{proof} Proposition 3.17 in \cite{GLM} shows compatibility between the motivic nearby fibre morphism and the nearby fibre functor on mixed Hodge modules. As for the motivic vanishing cycle morphism, as noted just after notation 3.9 in \cite{DL01}, the motivic vanishing cycles $\mathscr{S}^{\phi}_f$ for $f:X\to \mathbf{A}^1$ as they were defined by Denef and Loeser should be seen as the motivic incarnation of $\phi_f[d-1]$ where $d$ is the dimension of $X$. With our notation, $\phi_f = (-1)^{d}\mathscr{S}^{\phi}_f$, so that our vanishing cycles should be the motivic incarnation of $\phi_f[-1]$, which is exactly the perverse sheaf underlying $\phi_f^{\mathrm{Hdg}}$. \end{proof} Recall that in section \ref{sect.compsymprodvanhdg} we defined a morphism $$\overline{\mathrm{add}}_n:S^{n}(\mathbf{A}^1_{X})\to \mathbf{A}^1_{S^nX}$$ for every integer $n\geq 1$. \begin{cor}\begin{enumerate}[(a)]\item For any $\a\in\mathscr{M}_{\mathbf{A}^1_X}$ and any integer $n\geq 1$, we have $$\chi^{\mathrm{Hdg}}_{S^nX}\circ \Phi'_{S^nX}((\overline{\mathrm{add}}_n)_!(S^n\a)) = S^n(\chi^{\mathrm{Hdg}}_X\circ \Phi'_X(\a)).$$ \item For any $\a\in\mathscr{E}xp\mathscr{M}_X$ and any integer $n\geq 1$, we have $$\chi^{\mathrm{Hdg}}_{S^nX}\circ \Phi_{S^nX}(S^n\a) = S^n(\chi^{\mathrm{Hdg}}_X\circ \Phi_X(\a)).$$ \end{enumerate} \end{cor} \begin{proof} By proposition \ref{commdiagramvancycles}, to prove $(a)$, it suffices to prove $$\Phi^{\mathrm{Hdg}}_{S^nX}\circ \chi^{\mathrm{Hdg}}_{\mathbf{A}^1_{S^nX}}((\overline{\mathrm{add}}_n)_!(S^n\a)) = S^n(\Phi_X^{\mathrm{Hdg}}\circ \chi^{\mathrm{Hdg}}_{\mathbf{A}^1_X}(\a)).$$ We have \begin{eqnarray*}\Phi^{\mathrm{Hdg}}_{S^nX}\circ \chi^{\mathrm{Hdg}}_{\mathbf{A}^1_{S^nX}}((\overline{\mathrm{add}}_n)_!(S^n\a)) & = & \Phi^{\mathrm{Hdg}}_{S^nX}\circ (\overline{\mathrm{add}}_n)_!\circ \chi^{\mathrm{Hdg}}_{S^n(\mathbf{A}^1_X)}(S^n\a)\ \ \text{by proposition \ref{chihdgprop} }\\ & = & \Phi^{\mathrm{Hdg}}_{S^nX}\circ 
(\overline{\mathrm{add}}_n)_!(S^n\chi^{\mathrm{Hdg}}_{\mathbf{A}^1_X}(\a))\ \ \ \ \ \ \text{by lemma \ref{chihdgsymprod}}\\ & = & S^n(\Phi_X^{\mathrm{Hdg}}\circ \chi^{\mathrm{Hdg}}_{\mathbf{A}^1_X}(\a))\ \ \ \ \ \ \ \text{by proposition \ref{vancyclefunctorsymproducts}.} \end{eqnarray*} To prove $(b)$, take $\a\in\mathscr{E}xp\mathscr{M}_X$, and, denoting by $q:\mathscr{M}_{\mathbf{A}^1_X}\to \mathscr{E}xp\mathscr{M}_X$ the quotient map, pick $\a'\in\mathscr{M}_{\mathbf{A}^1_X}$ such that $\a = q(\a')$. Applying $(a)$ to $\a'$, we have (recall $\Phi' = \Phi\circ q$) $$\chi^{\mathrm{Hdg}}_{S^nX}\circ \Phi_{S^nX}\circ q\circ (\overline{\mathrm{add}}_n)_!(S^n\a') = S^n(\chi^{\mathrm{Hdg}}_X\circ \Phi_X(\a)).$$ It therefore remains to prove that $q\circ (\overline{\mathrm{add}}_n)_!(S^n\a') = S^n\a$. In other words, we want to show the commutativity of the diagram \begin{equation}\label{qcircadddiag}\xymatrix{\mathscr{M}_{\mathbf{A}^1_X} \ar[r]^{S} \ar[d]^{q} & \mathscr{M}_{S^{\bullet}(\mathbf{A}^1_X)}\ar[d]^{q\circ (\overline{\mathrm{add}})_!} \\ \mathscr{E}xp\mathscr{M}_{X}\ar[r]^S & \mathscr{E}xp\mathscr{M}_{S^{\bullet}X} }\end{equation} (recall the group morphisms $S$ have been defined in chapter \ref{eulerproducts}, sections \ref{symprodclasses} and \ref{sect.locsymproducts}), where $\overline{\mathrm{add}}_! = \prod_{n\geq 1}(\overline{\mathrm{add}}_n)_!$. Let us start by checking that $q\circ (\overline{\mathrm{add}})_!$ is a group morphism. Let $\a = (\a_i)_{i\geq 1}$ and $\b = (\b_i)_{i\geq 1}$ be elements of $\mathscr{M}_{S^{\bullet}(\mathbf{A}^1_X)}$. We have $$\a\b = \left(\sum_{i=0}^n\gamma_!(\a_i\boxtimes \b_{n-i})\right)_{n\geq 1}$$ where $\gamma$ is the morphism $S^i(\mathbf{A}^1_X)\times S^{n-i}(\mathbf{A}^1_X)\to S^n(\mathbf{A}^1_X)$ induced by the identity $(\mathbf{A}^1_X)^i \times (\mathbf{A}^1_X)^{n-i} \to (\mathbf{A}^1_X)^n$. 
To prove that $$q\circ (\overline{\mathrm{add}})_!(\a\b) = q\circ (\overline{\mathrm{add}})_!(\a)q\circ (\overline{\mathrm{add}})_!(\b)$$ in $\mathscr{E}xp\mathscr{M}_{S^{\bullet}X}$, it suffices to prove that for all $n\geq 1$ and all $i\in\{0,\ldots,n\}$, we have \begin{equation}\label{qcircadd}q \circ (\overline{\mathrm{add}}_n)_!\gamma_!(\a_i\boxtimes \b_{n-i}) = \beta_!(q\circ (\overline{\mathrm{add}}_i)_!(\a_i)\boxtimes q\circ (\overline{\mathrm{add}}_{n-i})_!(\b_{n-i}))\end{equation} in $\mathscr{E}xp\mathscr{M}_{S^{n}X},$ where $\beta:S^{i}X\times S^{n-i}X\to S^nX$ is the morphism induced by the identity $X^i\times X^{n-i}\to X^n$. For this, consider the following diagram: \begin{equation}\label{qcircadddiagram}\xymatrix{\mathscr{M}_{S^{i}(\mathbf{A}^1_X)}\times \mathscr{M}_{S^{n-i}(\mathbf{A}^1_X)}\ar[r]^-{\boxtimes}\ar[d]_{(\overline{\mathrm{add}}_i)_!\times (\overline{\mathrm{add}}_{n-i})_!} & \mathscr{M}_{S^{i}(\mathbf{A}^1_X)\times S^{n-i}(\mathbf{A}^1_X)}\ar[r]^-{\gamma_!}\ar[d]_-{(\overline{\mathrm{add}}_i\times\overline{\mathrm{add}}_{n-i})_!}&\mathscr{M}_{S^n(\mathbf{A}^1_X)}\ar[d]^{(\overline{\mathrm{add}}_n)_!}\\ \mathscr{M}_{\mathbf{A}^1_{S^iX}}\times \mathscr{M}_{\mathbf{A}^1_{S^{n-i}X}}\ar[r]^-{\boxtimes}\ar[d]_{q\times q}&\mathscr{M}_{\mathbf{A}^1_{S^iX}\times \mathbf{A}^1_{S^{n-i}X}}\ar[r]^-{(\beta\circ \mathrm{add})_!}\ar[d]_{q\circ \mathrm{add}_!} & \mathscr{M}_{\mathbf{A}^1_{S^{n}X}}\ar[d]^q\\ \mathscr{E}xp\mathscr{M}_{S^iX}\times \mathscr{E}xp\mathscr{M}_{S^{n-i}X}\ar[r]^-{\boxtimes}& \mathscr{E}xp\mathscr{M}_{S^iX\times S^{n-i}X} \ar[r]^-{\beta_!}&\mathscr{E}xp\mathscr{M}_{S^nX} }\end{equation} Here $\beta$ denotes the morphism $S^{i}X\times S^{n-i}X\to S^{n}X$, as well as the morphism $\mathbf{A}^1_{S^{i}X\times S^{n-i}X}\to \mathbf{A}^1_{S^nX}$ it induces, and $\mathrm{add}$ refers to the morphism $$\mathbf{A}^1_{S^iX}\times \mathbf{A}^1_{S^{n-i}X}\to \mathbf{A}^1_{S^{i}X\times S^{n-i}X}$$ induced by the addition morphism on the
$\mathbf{A}^1$-components. To prove (\ref{qcircadd}), it suffices to prove that this diagram is commutative. We do this square by square. The commutativity of the top left square comes from the fact that pushdowns commute with exterior products. The commutativity of the top right square comes from the commutativity of the square $$\xymatrix{ (\mathbf{A}^1_X)^{i}\times (\mathbf{A}^1_X)^{n-i}\ar[r]^-{\mathrm{id}}\ar[d]^{\mathrm{add}_i\times \mathrm{add}_{n-i}} & (\mathbf{A}^1_X)^n\ar[d]^{\mathrm{add}_{n}}\\ \mathbf{A}^1_{X^i}\times \mathbf{A}^1_{X^{n-i}}\ar[r]^-{\mathrm{add}}&\mathbf{A}^1_{X^n}}$$ after taking quotients by the appropriate permutation actions. For the bottom left square, by bilinearity, it suffices to check commutativity for effective elements. For any morphisms $Y\xrightarrow{f}\mathbf{A}^1_{S^iX}$, and $Z\xrightarrow{g}\mathbf{A}^1_{S^{n-i}X}$, we have, by definition, \begin{eqnarray*}q\circ \mathrm{add}_!([Y\xrightarrow{f}\mathbf{A}^1_{S^iX}]\boxtimes[Z\xrightarrow{g}\mathbf{A}^1_{S^{n-i}X}])& = &q\circ \mathrm{add}_!([Y\times Z\xrightarrow{ f\times g}\mathbf{A}^1_{S^{i}X}\times \mathbf{A}^1_{S^{n-i}X}])\\ & = & [Y\times Z, \mathrm{add}\circ (f\times g)] \\ &=& [Y,f]\boxtimes [Z,g]\\ & = & q([Y\xrightarrow{f}\mathbf{A}^1_{S^iX}])\boxtimes q([Z\xrightarrow{g}\mathbf{A}^1_{S^{n-i}X}])\end{eqnarray*} in $\mathscr{E}xp\mathscr{M}_{S^{i}X\times S^{n-i}X}$. The commutativity of the last square comes from the fact that $q$ commutes with~$\beta_!$. We come back to the proof of the main statement. Since all maps involved are group morphisms, it suffices to prove commutativity of diagram (\ref{qcircadddiag}) for effective elements. Let therefore $f:Y\to \mathbf{A}^1_X$ be a morphism, inducing $S^nf:S^nY\to S^n(\mathbf{A}^1_X)$, as well as $f^{(n)} = \overline{\mathrm{add}}_n\circ S^nf:S^nY\to \mathbf{A}^1_{S^nX}$. 
According to the definition of symmetric products of varieties with exponentials, we have, for all $n\geq 1$, \begin{eqnarray*} q\circ (\overline{\mathrm{add}}_n)_!(S^n[Y\xrightarrow{f}\mathbf{A}^1_X])& =& q\circ (\overline{\mathrm{add}}_n)_!([S^nY \xrightarrow{S^nf}S^{n}(\mathbf{A}^1_X)])\\ & = & q([S^nY \xrightarrow{\overline{\mathrm{add}}_n\circ S^nf} \mathbf{A}^1_{S^nX}])\\ & = & [S^nY, f^{(n)}]\\ & = & S^n([Y,f])\\ & = & S^n(q([Y\xrightarrow{f}\mathbf{A}^1_X])) \end{eqnarray*} in $\mathscr{E}xp\mathscr{M}_{S^nX}$, whence the result.\end{proof} \section{Weight filtration on Grothendieck rings of mixed Hodge modules}\label{sect.weightmhm} \subsection{The weight filtration}\label{sect.weightfiltrationmhm}\index{weight filtration!on $K_0(\mathrm{MHM}_S^{\mathrm{mon}})$} For any integer $n$, denote by $W_{\leq n}K_0(\mathrm{MHM}_S^{\mathrm{mon}})$ \index{WKM@$W_{\leq n}K_0(\mathrm{MHM}_S^{\mathrm{mon}})$} the subgroup of $K_0(\mathrm{MHM}_S^{\mathrm{mon}})$ generated by classes of pure Hodge modules $(M,\mathrm{id},N)$ (i.e. with trivial semi-simple monodromy) of weight at most $n$ and by classes of pure Hodge modules with monodromy $(M,T_s,N)$ of weight at most $n-1$. \begin{remark}\label{eigenspacedecomp} The monodromy $T_s$ is an automorphism of finite order, so that a pure Hodge module $(M,T_s,N)$ over $S$ of weight $m$ with monodromy decomposes into $M = M^{0}\oplus M^{\neq 0}$ where $M^0 = \ker(T_s - \mathrm{id})$ and $M^{\neq 0} = \ker(T_s^{k-1} + \ldots + T_s + \mathrm{id})$, where $k$ is minimal such that $T_s^k = 1$. This Hodge module is an element of $W_{\leq m}K_0(\mathrm{MHM}_S^{\mathrm{mon}})$ if $M^{\neq 0} = 0$, and of $W_{\leq m+1}K_0(\mathrm{MHM}_S^{\mathrm{mon}})$ otherwise. 
\end{remark} We have the following compatibility with respect to pushdowns, pullbacks and exterior products: \begin{lemma}\label{weightpushpull} \begin{enumerate}\item\label{pushpull} Let $f:Y\to X$ be a morphism of complex varieties with fibres of dimension $\leq d$. Then for all integers $n$, we have $$f_{!}\left(W_{\leq n} K_0(\mathrm{MHM}_Y^{\mathrm{mon}})\right)\subset W_{\leq n + d}K_0(\mathrm{MHM}_X^{\mathrm{mon}})$$ and $$f^*\left(W_{\leq n} K_0(\mathrm{MHM}_X^{\mathrm{mon}})\right)\subset W_{\leq n + d}K_0(\mathrm{MHM}_Y^{\mathrm{mon}}).$$ \item \label{exterior}Let $X$ and $Y$ be complex varieties. Then for all integers $n,m$ we have $$W_{\leq n}K_0(\mathrm{MHM}^{\mathrm{mon}}_X){\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} W_{\leq m}K_0(\mathrm{MHM}^{\mathrm{mon}}_Y)\subset W_{\leq n+m} K_0(\mathrm{MHM}^{\mathrm{mon}}_{X\times Y}).$$ \end{enumerate} \end{lemma} \begin{proof} Let $M$ be a pure Hodge module of weight at most $n$ (resp. $n-1$). Then, since the functor~$f_!$ does not increase weights, $f_{!}M$ is a complex of weight $\leq n$ (resp. $\leq n-1$) which belongs to $D^{\leq d}(\mathrm{MHM}_X)$ by lemma~\ref{cohamplitude}, so $$[f_!M] = \sum_{i\leq d}(-1)^i[\mathcal{H}^if_!M]$$ is a sum of Hodge modules of weight $\leq n + d$ (resp. $\leq n-1 + d$). If the monodromy on~$M$ is trivial, then it is also trivial on all mixed Hodge modules $\mathcal{H}^if_!M$, so in any case, we have $[f_!M]\in W_{\leq n+d}K_0(\mathrm{MHM}_X^{\mathrm{mon}})$. The proof is the same for $f^{*}$. Let $(M_1,T_{s,1},N_1)\in W_{\leq n}K_0(\mathrm{MHM}_X^{\mathrm{mon}})$ and $(M_2,T_{s,2},N_2)\in W_{\leq m}K_0(\mathrm{MHM}_Y^{\mathrm{mon}})$ be two pure Hodge modules with monodromy.
By remark \ref{eigenspacedecomp}, it suffices to treat the following cases: \begin{itemize} \item $M_1 = M_1^0$ is of weight $n$, $M_2 = M_2^0$ is of weight $m$; \item $M_1 = M_1^0$ is of weight $n$, $M_2 = M_2^{\neq 0}$ is of weight $m-1$; \item $M_1 = M_1^{\neq 0}$ is of weight $n-1$ and $M_2 = M_2^{\neq 0}$ is of weight $m-1$. \end{itemize} By the definition of the weight filtration on $M_1{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_2$, in the first case $M_1{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_2$ is pure of weight $m+n$, with trivial monodromy, and in the second case, pure of weight $m+n-1$. Thus, in both cases, $M_1{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_2$ is an element of $W_{\leq m+n}K_0(\mathrm{MHM}_{X\times Y}^{\mathrm{mon}})$. This leaves us with the third case. For any $\alpha,\beta\in(-1,0]$ such that $\exp(-2i\pi\alpha)$ (resp. $\exp(-2i\pi\beta)$) is an eigenvalue of $T_{s,1}$ (resp. $T_{s,2}$), the complex number $\exp(-2i\pi(\alpha + \beta))$ is an eigenvalue of monodromy on $M_1{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_2$, and $(M_1{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_2)^1$ is non-zero if and only if there exist such $\alpha,\beta$ with $\alpha + \beta = -1$. The weight filtration on $M_1{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_2$ is such that $(M_1{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_2)^{\neq 1}$ is of weight $m+n-2$, and $(M_1{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_2)^{1}$ is of weight $m+n$, so that the Hodge module $M_1{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} M_2$ is an element of $W_{\leq m + n}K_0(\mathrm{MHM}_{X\times Y}^{\mathrm{mon}})$.
\end{proof} For any element $\a\in K_0(\mathrm{MHM}_S^{\mathrm{mon}})$, we put $$w_S(\a):= \inf \{n,\ \a\in W_{\leq n}K_0(\mathrm{MHM}_S^{\mathrm{mon}})\},$$ which defines a function $w_S:K_0(\mathrm{MHM}_S^{\mathrm{mon}})\to \mathbf{Z}\cup \{-\infty\}$. \index{weight function!on $K_0(\mathrm{MHM}_S^{\mathrm{mon}})$}\index{wS@$w_S$ (weight function)} Lemma \ref{weightpushpull} gives us the following: \begin{lemma}\label{weight.properties} For any complex varieties $S$ and $T$, any $\a,\a'\in K_0(\mathrm{MHM}_S^{\mathrm{mon}})$ and $\mathfrak{b}\in K_0(\mathrm{MHM}_T^{\mathrm{mon}})$ the weight function satisfies the following properties: \begin{enumerate}[(a)]\item\label{weightofzero} $w_S(0) = -\infty$ \item \label{weight.sum}$w_S(\a + \a')\leq \max\{w_S(\a),w_S(\a')\}$, with equality if $w_S(\a)\neq w_S(\a')$. \item \label{weight.extproducts} $w_{S\times T}(\a\,{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em}\,\mathfrak{b})\leq w_S(\a) + w_T(\mathfrak{b}).$ \item \label{weight.push} If $f:S\to T$ is a morphism with fibres of dimension $\leq d$, then $$w_T(f_!(\a))\leq w_S(\a) + d.$$ \item \label{weight.pull} If $f:S\to T$ is a morphism with fibres of dimension $\leq d$, then $$w_S(f^*(\b))\leq w_T(\b) + d.$$ \item\label{elemcomparison} If $M^{\bullet}\in D^{\leq a}(\mathrm{MHM}_S)$ is a complex of mixed Hodge modules of weight $\leq n$, then $w_S([M^{\bullet}])\leq a + n$. Equality is achieved if and only if $\mathrm{Gr}^{W}_{a+n}\mathcal{H}^a(M^{\bullet}) \neq 0$. 
\end{enumerate} \end{lemma} \begin{remark}\label{absweightremark} As a special case of (\ref{weight.push}), denoting by $w$ the weight function $w_{\mathrm{pt}}$ on $K_0(\mathrm{MHM}^{\mathrm{mon}}_{\mathrm{pt}})$ and by $a_S:S\to \mathrm{pt}$ the structural morphism, we have $$w((a_S)_!\a)\leq w_S(\a) + \dim S.$$ \end{remark} \subsection{Weights of symmetric powers of mixed Hodge modules} Recall that in section \ref{mhmsymproducts} we defined a group $K_0(\mathrm{MHM}_{S^{\bullet}X}^{\mathrm{mon}})$ and a morphism $$S_X^{\mathrm{Hdg}}: K_0(\mathrm{MHM}^{\mathrm{mon}}_X) \to K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^{\bullet}X})$$ sending the class of a mixed Hodge module $M$ to $(S^nM)_{n\geq 1}$. Our goal here is to show that $S_X^{\mathrm{Hdg}}$ behaves well with respect to the weight filtration from section \ref{sect.weightfiltrationmhm}. We define the following natural filtration on $K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^{\bullet}X})$: $$W_{\leq d}K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^{\bullet}X}) := \prod_{n\geq 1} W_{\leq nd}K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^{n}X})\subset \prod_{n\geq 1}K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^nX}).$$ We have the following properties: \begin{prop}\label{propweightsymprod} \begin{enumerate}\item For every $d$, $W_{\leq d}K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^{\bullet}X}) $ is a subgroup of the group $K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^{\bullet}X})$. \item For any integer $d$, $S_X^{\mathrm{Hdg}}(W_{\leq d}K_0(\mathrm{MHM}^{\mathrm{mon}}_X))\subset W_{\leq d}K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^{\bullet}X}).$ \item \label{mainweightsymprodineq} For every $\a\in K_0(\mathrm{MHM}^{\mathrm{mon}}_X)$ and every integer $n\geq 0$, we have $w_{S^nX}(S^n\a)\leq nw_X(\a)$. \index{weight!of symmetric power} \end{enumerate} \end{prop} \begin{proof} 1. Let $a = (a_n)_{n\geq 1}$ and $b = (b_n)_{n\geq 1}$ be two elements of $W_{\leq d}K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^{\bullet}X})$. 
Then for all $n\geq 1$ and all $i\in\{0,\ldots,n\}$, by property \ref{exterior} of lemma \ref{weightpushpull} we have $a_i{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} b_{n-i}\in K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^iX\times S^{n-i}X})$ of weight $\leq id + (n-i)d = nd$. The map $S^iX\times S^{n-i}X\to S^nX$ has fibre dimension zero, so by property \ref{pushpull} of lemma \ref{weightpushpull} the same estimate is valid for $a_i{\kern .1em\mathop{\boxtimes}\limits^{\scriptscriptstyle{T}}\kern .1em} b_{n-i}$ seen as an element of $K_0(\mathrm{MHM}^{\mathrm{mon}}_{S^nX})$. 2. Let $M\in \mathrm{MHM}^{\mathrm{mon}}_X$ be a pure Hodge module of weight $d$. Then $\bigtwtimes{n} M$ is a pure Hodge module of weight $nd$, and denoting by $p$ the quotient morphism $X^n\to S^nX$, the complex $p_!(\bigtwtimes{n} M)$ is of weight $\leq nd$ by property \ref{pushpull} of lemma \ref{weightpushpull}, since $p$ has fibres of dimension $0$. Finally, $S^nM$ is obtained as a subobject of $p_!(\bigtwtimes{n} M)$, so its weight is $\leq nd$ again. Statement 3 is a direct consequence of 2. \end{proof} \section{Weight filtration on Grothendieck rings of varieties}\label{sect.weightfiltrationkvar} In this section, we are going to use the previously defined weight filtration to define a notion of weight on Grothendieck rings of varieties with exponentials. For this, we are going to use the motivic vanishing cycles measure from chapter \ref{grothrings}, and the Hodge realisation from section \ref{sect.hodgereal}. \subsection{The weight filtration and completion} Let $S$ be a complex variety. Recall that $\Phi_S:\mathscr{E}xp\mathscr{M}_S\to (\mathscr{M}_S^{\hat{\mu}},\ast)$ is the motivic vanishing cycles measure from theorem \ref{motmeasuremain} of chapter \ref{grothrings}. 
\begin{definition}\label{weightfiltration}\begin{enumerate}\item The weight filtration on the ring $\mathscr{M}_S^{\hat{\mu}}$ is given by $$W_{\leq n}\mathscr{M}_S^{\hat{\mu}} := (\chi^{\mathrm{Hdg}}_S)^{-1}(W_{\leq n}K_0(\mathrm{MHM}_{S}^{\mathrm{mon}}))$$ for every $n\in\mathbf{Z}$. \index{WnM@$W_{\leq n}\mathscr{M}_S^{\hat{\mu}}$} The weight function on $\mathscr{M}_S^{\hat{\mu}}$, again denoted by $w_S$, is the composition $$\mathscr{M}_S^{\hat{\mu}} \xrightarrow{\chi_S^{\mathrm{Hdg}}} K_0(\mathrm{MHM}_{S}^{\mathrm{mon}}) \xrightarrow{w_S} \mathbf{Z}\cup\{-\infty\}.$$ \item The weight filtration on the ring $\mathscr{E}xp\mathscr{M}_S$ is given by $$W_{\leq n}\mathscr{E}xp\mathscr{M}_S := (\chi^{\mathrm{Hdg}}_S\circ\Phi_S)^{-1}(W_{\leq n}K_0(\mathrm{MHM}_{S}^{\mathrm{mon}}))$$\index{WnE@$W_{\leq n}\mathscr{E}xp\mathscr{M}_S$} for every $n\in\mathbf{Z}$. The weight function on $\mathscr{E}xp\mathscr{M}_S$, again denoted by $w_S$, is the composition $$\mathscr{E}xp\mathscr{M}_S \xrightarrow{\Phi_S} \mathscr{M}_S^{\hat{\mu}} \xrightarrow{\chi_S^{\mathrm{Hdg}}} K_0(\mathrm{MHM}_{S}^{\mathrm{mon}}) \xrightarrow{w_S} \mathbf{Z}\cup\{-\infty\}.$$ \end{enumerate} \end{definition} \index{weight function!on $\mathscr{M}_S^{\hat{\mu}}$}\index{weight function!on $\mathscr{E}xp\mathscr{M}_S$}\index{wS@$w_S$ (weight function)} \begin{remark}\label{weight.grothringprops} Properties $(\ref{weightofzero})-(\ref{weight.pull})$ of lemma \ref{weight.properties} remain true for $w_S$ on $\mathscr{M}_S^{\hat{\mu}}$ or $\mathscr{E}xp\mathscr{M}_S$. Indeed, this is obvious for $(\ref{weightofzero})$, for $(\ref{weight.push})$ and $(\ref{weight.pull})$ it follows from the fact that the Hodge realisation commutes with pushforwards and pullbacks, and for $(\ref{weight.sum})$ it comes from the fact that $\chi^{\mathrm{Hdg}}_S$ and~$\Phi_S$ are group morphisms.
Property $(\ref{weight.extproducts})$ comes from the Thom-Sebastiani property for $\Phi_S$, and from the fact that $\chi^{\mathrm{Hdg}}$ is compatible with twisted exterior products. \end{remark} \begin{remark} Both these definitions induce the same weight filtration $(W_{\leq n}\mathscr{M}_S)_{n\in\mathbf{Z}}$ and the same weight map $w_S:\mathscr{M}_S\to \mathbf{Z}$ on the localised Grothendieck ring $\mathscr{M}_S$, because the restriction of $\Phi_S$ to $\mathscr{M}_S$ is the inclusion $\mathscr{M}_S\to \mathscr{M}_S^{\hat{\mu}}$. \end{remark} \begin{notation} For a variety $X$ over $S$, we use $w_S(X)$ as a shorthand for $w_S([X])$. We denote by $w$ the weight function for $S = \mathrm{Spec}\, \mathbf{C}$. \end{notation} \begin{definition} We define the completion of the ring $\mathscr{E}xp\mathscr{M}_S$ with respect to the weight topology as $$\widehat{\mathscr{E}xp\mathscr{M}_{S}} = \varprojlim_n \mathscr{E}xp\mathscr{M}_{S}/ W_{\leq n} \mathscr{E}xp\mathscr{M}_S. $$ \end{definition}\index{expMh@$\widehat{\mathscr{E}xp\mathscr{M}_{S}}$}\index{completion!weight topology}\index{weight filtration!completion} \subsection{Weights of symmetric products} \begin{lemma}\label{weightssymproducts} Let $I$ be a set and let $\pi = (n_i)_{i\in I}\in\mathbf{N}^{(I)}$. Let $X$ be a complex variety, and let $\mathscr{A}=(\a_i)_{i\in I}$ be a family of elements of $\mathscr{E}xp\mathscr{M}_X$. Then $$w_{S^{\pi}X}(S^{\pi}\mathscr{A})\leq \sum_{i\in I}n_iw_{X}(\a_i).$$ \end{lemma}\index{weight!of symmetric product} \begin{proof} Recall that by definition, $S^{\pi}\mathscr{A}$ is the element of $\mathscr{E}xp\mathscr{M}_{S^{\pi}X}$ obtained by pulling back the product $\prod_{i\in I} S^{n_i}\a_i \in \mathscr{E}xp\mathscr{M}_{\prod_{i\in I}S^{n_i}X}$ along the open immersion $j:S^{\pi}X\to \prod_{i\in I}S^{n_i}X$. 
By property (\ref{weight.extproducts}) of lemma \ref{weight.properties} and property \ref{mainweightsymprodineq} of proposition \ref{propweightsymprod} we have $$w_{\prod_{i\in I}S^{n_i}X}\left(\bboxtimes_{i\in I}S^{n_i}\a_i\right)\leq \sum_{i\in I}w_{S^{n_i}X}(S^{n_i}\a_i)\leq \sum_{i\in I}n_iw_X(\a_i).$$ Applying property (\ref{weight.push}) of lemma \ref{weight.properties} to $j$, which has fibre dimension 0, we get the result. \end{proof} \subsection{Weight and dimension} For effective classes in the Grothendieck ring of varieties, weight and dimension are closely linked, as shown in the following lemma: \begin{lemma}\label{weightdimension} Let $S$ be a complex variety and $X$ a variety over $S$. One has the equality $$w_S(X) = 2\dim_SX + \dim S.$$\index{weight!vs. dimension} \end{lemma} \begin{proof} We are going to denote by $f:X\to S$ the structural morphism of $X$, and by $d$ the relative dimension $\dim_SX$, that is, the supremum of the dimensions of the fibres of $f$. Since the functors $a_X^*$ and $f_!$ do not increase weights, the complex $f_!\mathbf{Q}_X^{\mathrm{Hdg}} =f_!a_X^*\mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}}$ is of weight $\leq 0$. Moreover, we see by lemma \ref{cohamplitude} that $f_!\mathbf{Q}_X^{\mathrm{Hdg}}$ is an object of $D^{\leq 2d + \dim S}(\mathrm{MHM}_S)$. By property (\ref{elemcomparison}), it suffices to prove that the top cohomology $\mathcal{H}^{2d + \dim S}(f_!\mathbf{Q}_X^{\mathrm{Hdg}})$ has non-zero graded part of weight $2d + \dim S$. By \cite{Saito89}~1.20, we may write the Leray spectral sequence for $a_X = a_S\circ f$: for any $M^{\bullet}\in D^b(\mathrm{MHM}_X)$, $$ \mathcal{H}^p (a_S)_! \left(\mathcal{H}^qf_{!}M^{\bullet}\right)\Longrightarrow \mathcal{H}^{p+q}((a_X)_! 
M^{\bullet}).$$ Applying this with $M^{\bullet} = \mathbf{Q}_X^{\mathrm{Hdg}}$, $p= \dim S$ and $q=2d + \dim S$, and recalling that the cohomology of the complex of mixed Hodge structures $(a_X)_!\mathbf{Q}_X^{\mathrm{Hdg}}$ is exactly the cohomology with compact supports of $X$ with coefficients in $\mathbf{Q}$ with its standard Hodge structure, we have $$\mathcal{H}^{\dim S}(a_S)_!\left( \mathcal{H}^{2d + \dim S} f_!\mathbf{Q}_X^{\mathrm{Hdg}}\right) \Longrightarrow H^{2 \dim X}_c(X,\mathbf{Q}).$$ If $\mathrm{Gr}^W_{2d + \dim S}\mathcal{H}^{2d + \dim S} f_!\mathbf{Q}_X^{\mathrm{Hdg}}=0$, then the graded part of weight $2 \dim X$ of the left-hand side is zero. But the right-hand side is a sub-object of a quotient of the left-hand side, and therefore its graded part of weight $2\dim X$ should be zero as well, which it is not: it is classical that $H^{2\dim X}_c(X,\mathbf{Q})$ is pure of weight $2\dim X$, isomorphic to $\mathbf{Q}_{\mathrm{pt}}^{\mathrm{Hdg}}(-\dim X)^{r}$ where~$r$ is the number of irreducible components of~$X$. \end{proof} As a consequence, we have \begin{lemma} \label{weightdimensionnoneffective}Let $S$ be a complex variety and $\a$ an element of $\mathscr{M}_S^{\hat{\mu}}$. Then $$w_S(\a)\leq 2\dim_S\a + \dim S.$$\index{weight!vs. dimension} \end{lemma} \begin{proof} We may assume $\a$ is of the form $\mathbf{L}^{-m}([X]-[Y])$ for some $S$-varieties $X$ and $Y$ with $\hat{\mu}$-actions. Assume moreover that $\max\{\dim_SX,\dim_SY\}$ is minimal, so that $$\dim_S\a = \max\{\dim_SX,\dim_SY\}-m.$$ Then, using lemma \ref{weight.properties} and the fact that $\chi^{\mathrm{Hdg}}_{\mathrm{pt}}(\mathbf{L}^{-m}) = \mathbf{Q}^{\mathrm{Hdg}}_{\mathrm{pt}}(m)$ is of weight $-2m$, we have \begin{eqnarray*}w_S(\a)&\leq & w_{\mathrm{pt}}(\mathbf{L}^{-m}) + w_S([X]-[Y]) \\ &\leq &-2m + \max\{w_S(X),w_S(Y) \} \end{eqnarray*} By lemma \ref{weightdimension}, we therefore get $$w_S(\a)\leq -2m + 2\max\{\dim_S(X),\dim_S(Y)\} + \dim S$$ whence the result. 
\end{proof} We may therefore deduce the triangular inequality for the weight topology: \begin{lemma}[Triangular inequality for weights]\label{triangularwt} Let $S$ be a variety over $\mathbf{C}$, $X$ a variety over $S$ and $f:X\to \mathbf{A}^1_{\mathbf{C}}$ a morphism. Then $$w_S([X,f])\leq w_S(X).$$ \index{triangular inequality!for weights}\index{weight!triangular inequality} \end{lemma} \begin{proof} By lemmas \ref{triangulardim}, \ref{weightdimensionnoneffective} and \ref{weightdimension} we have $$w_S([X,f])\leq 2\dim_S(\Phi_S([X,f])) + \dim S \leq 2\dim_SX + \dim S = w_S(X).$$ \end{proof} The following property, which follows from our discussion of the trace morphism (lemma \ref{traceprop} and remark \ref{tracerem}) and states that there is a drop in weights for certain simple non-effective classes, will be very important to us: \begin{lemma}[Cancellation of maximal weights]\label{weightcancellation} Let $S$ be a complex variety and $p:X\to S$, $q:Y\to S$ morphisms with fibres of constant dimension $d\geq 0$, with $X$ and $Y$ irreducible. Then $$w_S([X\xrightarrow{p} S] - [Y\xrightarrow{q} S]) \leq 2 d + \dim S -1.$$\index{cancellation of maximal weights} \end{lemma} \begin{proof} The classes $[X]$ and $[Y]$ are of weights $\leq 2d + \dim S$, and according to remark \ref{tracerem} the graded parts of weight exactly $2d + \dim S$ of the corresponding complexes of mixed Hodge modules cancel out. \end{proof} \section{Convergence of power series}\label{sect.powerseriesconv} \subsection{Radius of convergence} Recall that for a classical series $\sum_{i\geq 0}a_iz^i$, the radius of convergence is given by $$\left(\limsup \left(|a_i|\right)^{\frac{1}{i}}\right)^{-1}.$$ Analogously, in our setting, we have: \begin{definition} Let $F(T) = \sum_{i\geq 0} X_i T^i\in\mathscr{E}xp\mathscr{M}_X[[T]]$. The radius of convergence of $F$ is defined by $$\sigma_F = \limsup_{i\geq 1} \frac{w_X(X_i)}{2i}.$$ We say that $F$ converges for $|T|<\mathbf{L}^{-r}$ if $r\geq \sigma_F$.
\end{definition}\index{radius of convergence}\index{sigmaF@$\sigma_F$, radius of convergence} When $F$ converges for $|T|<\mathbf{L}^{-r}$, it converges also for $|T|<\mathbf{L}^{-r'}$ for any $r'>r$. The subset of power series converging for $|T| < \mathbf{L}^{-r}$ is a subring of $\mathscr{E}xp\mathscr{M}_{X}[[T]]$. \begin{remark} If $r>\sigma_F$, there is some $i_0$ such that for all $i\geq i_0$, $\frac{w_X(X_i)}{2i} < r,$ which means that the set $\{w_X(X_i)-2ri,\ i\geq 0\}$ is bounded from above. Conversely, if this set is bounded from above for some $r$, then we may conclude that $r \geq \sigma_F$, that is, $F$ converges for $|T|<\mathbf{L}^{-r}$. Thus, in general, we are going to prove that a series converges by finding a linear bound for $w_X(X_i)$. However, one does not in general have $\{w_X(X_i)-2\sigma_F i,\ i\geq 0\}$ bounded from above: see for example the series $\sum_{i\geq 0}\mathbf{L}^{i + \lceil\sqrt{i}\rceil}T^i$. \end{remark} If $F(T)$ converges for $|T|<\mathbf{L}^{-r}$, then for any element $\a\in\mathscr{E}xp\mathscr{M}_{\mathbf{C}}$ such that $w(\a) < -2r$, $F(\a)$ exists as an element of $\widehat{\mathscr{E}xp\mathscr{M}_X}.$ In particular, $F(\mathbf{L}^{-m})$ exists as an element of $\widehat{\mathscr{E}xp\mathscr{M}_{X}}$ if $m>r$. \begin{example} Let $X$ be a complex quasi-projective variety, and consider $Z_X(T) = \sum_{i\geq 0} [S^i X]T^i\in\mathscr{E}xp\mathscr{M}_{\mathbf{C}}[[T]]$ its Kapranov zeta function. By lemma \ref{weightdimension}, since $S^iX$ is of dimension $i\dim X$, we have $$w(S^iX) = 2i\dim X$$ for all $i\geq 0$, so that the radius of convergence of $Z_X(T)$ is $\dim X$.
\index{radius of convergence!Kapranov's zeta function} \end{example} \subsection{A convergence criterion} \index{Euler product!convergence}\index{convergence criterion} \begin{prop}\label{convergence} Assume $F(T) = 1 + \sum_{i\geq 1} X_iT^i\in \mathscr{E}xp\mathscr{M}_X[[T]]$ is such that there exist an integer $M\geq 0$ and real numbers $\epsilon >0$, $\alpha < 1$ and $\beta$ such that \begin{itemize}\item for all $i\in\{1,\ldots,M\}$,\ $w_X(X_i) \leq (i-\frac12 -\epsilon)w(X)$ \item for all $i\geq M+1$, $w_X(X_i) \leq (\alpha i+ \beta - \frac12) w(X).$ \end{itemize} Then there exists $\delta >0$ such that the Euler product $\prod_{v\in X}F_v(T)\in\mathscr{E}xp\mathscr{M}_{\mathbf{C}}[[T]]$ \begin{itemize} \item converges for $|T|<\mathbf{L}^{-\frac{w(X)}{2}\left(1-\delta + \frac{\beta}{M+1}\right)}$ \item for any $0\leq \eta< \delta$, takes non-zero values for $|T|\leq \mathbf{L}^{-\frac{w(X)}{2}\left(1-\eta + \frac{\beta}{M+1}\right)}$ (that is, for every $\a\in \mathscr{E}xp\mathscr{M}_{\mathbf{C}}$ such that $w(\a)< -w(X)\left(1 -\eta + \frac{\beta}{M+1}\right)$). \end{itemize} \end{prop} \begin{proof} Let $n\geq 1$ be an integer, and $\pi = (n_i)_{i\geq 1}$ a partition of $n$.
Then we have \begin{eqnarray*}w (S^{\pi}\mathscr{X}) &\leq & w_{S^{\pi}X}(S^{\pi}\mathscr{X}) + \dim(S^{\pi}X) \ \ \ \ \ \ \ \ \text{by remark \ref{absweightremark}}\\ &\leq &\sum_{i\geq 1}n_iw_X(X_i) + \frac{1}{2}\sum_{i\geq 1}n_i w(X)\ \ \ \ \ \ \text{by lemmas \ref{weightssymproducts} and \ref{weightdimension}} \\ & \leq &\sum_{i=1}^Mn_iiw(X) - \epsilon\sum_{i=1}^Mn_iw(X)+ \sum_{i\geq M+1}\alpha in_i w(X) +\beta w(X)\sum_{i\geq M+1}n_i \\ &\leq & \sum_{i=1}^Mn_iiw(X) - \frac{\epsilon}{M}\sum_{i=1}^M n_iiw(X) + \alpha \sum_{i\geq M+1}in_i w(X) + \frac{\beta w(X)}{M+1}\sum_{i\geq M+1}in_i \\ & = & \left(1-\frac{\epsilon}{M}\right) \sum_{i=1}^Mn_iiw(X)+ \left(\alpha + \frac{\beta}{M+1}\right) \sum_{i\geq M+1}n_ii w(X)\\ & \leq & \left(1-\delta + \frac{\beta}{M+1}\right) nw(X) \end{eqnarray*} where $1-\delta = \max\{1-\frac{\epsilon}{M},\alpha\}<1$ (in the case when $M = 0$, we put $1-\delta = \alpha$). The desired convergence follows. Moreover, one sees that for $n\geq 1$, any $0\leq \eta < \delta$ and any $\a\in \mathscr{E}xp\mathscr{M}_{\mathbf{C}}$ such that $w(\a)\leq -w(X)\left(1 - \eta + \frac{\beta}{M+1}\right)$, we have $$w(S^{n}\mathscr{X}\a^n) \leq -(\delta-\eta) n w(X) < 0,$$ so the value of the product at $\a$ is equal to 1 plus some terms of negative weight: it is therefore non-zero. \end{proof} \begin{example} Let $Z_X(T) = \sum_{i\geq 0}[S^iX]T^i$ be Kapranov's zeta function for some quasi-projective variety $X$. Then $Z_X(T) = \prod_{v\in X} F_v(T)$ where $$F(T) = 1 + \sum_{i\geq 1}T^i \in \mathscr{M}_{X}[[T]],$$ that is, every coefficient is equal to $1 = [X]\in\mathscr{M}_X$. Take $M=0$, $\alpha = 0$, $\beta = 1$ and $\eta = 1 -\frac{1}{2\dim X}< \delta = 1$. Then, since $w_X(X) = \dim X = \frac12 w(X)$, the condition in the proposition is satisfied, and we get that $Z_X(T)$ converges for $|T| < \mathbf{L}^{-\dim X}$ and takes non-zero values for $|T|\leq \mathbf{L}^{-\dim X-\frac{1}{2}}$.
Note that each factor $F(T) = \sum_{i\geq 0} T^i$ has radius of convergence $0$, so taking the Euler product has the effect of shifting the radius of convergence by exactly the dimension of the base variety. \end{example} \begin{example} Let $X$ be a quasi-projective variety over $\mathbf{C}$, and let $\a\in \mathscr{M}_X$ be an element such that $w_X(\a) \leq \dim X + 1$. As an example of such an element, by lemma \ref{weightcancellation} we may take $\a = Y-Z$ for two irreducible varieties $Y,Z$ over $X$ of relative dimension 1. Consider the polynomial $F(T) = 1 + \a T^2$, so that $$\prod_{v\in X} F_v(T) = \prod_{v\in X} ( 1 + \a_v T^2) = \sum_{n\geq 0} S^n_{*,X}(\a) T^{2n}.$$ Taking $M = 2, \epsilon = 1 - \frac{1}{w(X)}, \alpha = 0, \beta = 0$, we get convergence for $|T| < \mathbf{L}^{-\frac12\dim X - \frac14}$. Let us check that we get the same convergence by estimating the radius of convergence directly: for this, note that $$w(S^n_{*,X}(\a))\leq w_{S^n_*X}(S^n_{*,X}\a) + \dim S^n_*X \leq n(\dim X + 1) + n\dim X = 2n\dim X + n.$$ Thus, taking the $\limsup$ over all even $n$, the radius of convergence is at most $$\limsup \frac{n\dim X + \frac12 n}{2n} = \frac12\dim X + \frac14.$$ \end{example} \subsection{Growth of coefficients} We finish this section with a result that allows one to get information about the growth of the coefficients of a power series from the fact that it possesses a pole of some order at $T=\mathbf{L}^{-1}$. It shows that for large $n$ we can predict the behaviour of a positive proportion of the coefficients of the Hodge-Deligne polynomial of the coefficient of degree $n$. For any constructible set $M$, denote by $\kappa(M)$ the number of irreducible components of maximal dimension of~$M$.
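Before stating the result, here is a minimal illustrative case (a toy example of ours, not taken from the statement): for $Z(T) = \frac{1}{1-\mathbf{L}T} = \sum_{n\geq 0}[\mathbf{A}^n]T^n$, the hypotheses hold with $a = r = 1$ and $F = 1$, so that $M_n = \mathbf{A}^n$. Then $$\dim(M_n) - n = 0, \qquad \kappa(M_n) = 1, \qquad HD(M_n) = (uv)^n,$$ so that the second case below occurs with $d_0 = 0$: the only non-zero coefficient of $HD(M_n)$ in the top range of degrees is the constant polynomial $1$, of degree $0 = r-1$ in $n$.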
\begin{prop}\label{coefgrowth} Let $Z(T) = \sum_{n\geq 0}[M_n]T^n\in\mathrm{KVar}_{\mathbf{C}}^{+}[[T]]$ be a power series with effective coefficients such that there exist integers $a,r\geq 1$, a real number $\delta>0$ and a power series $F(T) = \sum_{i\geq 0}f_iT^i\in\mathscr{M}_{\mathbf{C}}[[T]]$ converging for $|T|<\mathbf{L}^{-1 + \delta}$ and taking a non-zero effective value at $T = \mathbf{L}^{-1}$, such that $$Z(T) = \frac{F(T)}{(1-\mathbf{L}^aT^a)^r}.$$ Then for every $p\in\{0,\ldots,a-1\}$, one of the following cases occurs when $n$ tends to infinity in the congruence class of $p$ modulo $a$: \begin{enumerate}[(i)] \item Either $\limsup \frac{\dim(M_n)}{n}< 1$. \item Or $\dim(M_n) -n$ has finite limit $d_0\in \mathbf{Z}$ and $\frac{\log(\kappa(M_n))}{\log n}$ converges to some integer in the set $\{0,\ldots,r-1\}$. More generally, for every real number $\eta$ such that $0 < \eta < \delta$ and for sufficiently large $n$ in the congruence class of $p$ modulo $a$, the coefficients of the Hodge-Deligne polynomial $HD(M_n)$ of degrees contained in the interval $$[2(1- \eta)n + 2d_0, 2n + 2d_0]$$ are polynomials in $\frac{n-p}{a}$ of degree at most $r-1$. \end{enumerate} Moreover, the second case happens for at least one value of $p$. \end{prop} \begin{proof} Note first that since $[M_n]$ is effective, we have $2\dim [M_n] = w(M_n)$, so it suffices to prove the statement with dimensions replaced by weights divided by two. First of all, replace $Z(T)$ by $Z(\mathbf{L}^{-1} T)$ and $F(T)$ with $F(\mathbf{L}^{-1} T)$, so that $F$ is now a power series converging for $|T| < \mathbf{L}^{\delta}$ and taking a non-zero effective value at $T = 1$, and $Z(T) = \frac{F(T)}{(1-T^a)^{r}}$ with $Z(T) = \sum_{n\geq 0}[M_n]\mathbf{L}^{-n} T^n$. We are going to do calculations in the case $a=1$, and explain later how one can reduce to this case. Note first that if~$F$ converges for $|T|<\mathbf{L}^{\delta}$, then the same is true for all its derivatives.
We may write its Taylor expansion at $T =1$: $$F(T) = \sum_{i\geq 0} \frac{F^{(i)}(1)}{i!}(T-1)^{i} = \sum_{i\geq 0}\frac{F^{(i)}(1)(-1)^i}{i!}(1-T)^i.$$ Put $G(T) = \sum_{i\geq r} \frac{F^{(i)}(1)(-1)^i}{i!}(1-T)^{i-r}.$ Then \begin{eqnarray*} G(T) & = & \sum_{i\geq r} \frac{F^{(i)}(1)(-1)^i}{i!} \sum_{j=0}^{i-r}{i-r \choose j}(-1)^jT^j \\ & = & \sum_{j\geq 0} \left(\sum_{i\geq r + j}\frac{F^{(i)}(1)(-1)^i}{i!}{i-r \choose j}\right)(-T)^j,\end{eqnarray*} so that the coefficient of degree $j$ of $G(T)$ is exactly $g_j = (-1)^j\left(\sum_{i\geq r + j}\frac{F^{(i)}(1)(-1)^i}{i!}{i-r \choose j}\right).$ Writing $F(T) = \sum_{j\geq 0} f_j T^j$, we have $w(f_j)\to -\infty$ as $j\to +\infty$. More precisely, for any $\eta$ such that $0 < \eta < \delta$ and for sufficiently large $j$, we have \begin{equation}\label{coefestimate} w(f_j) < -2\eta j.\end{equation} Thus, since $$F^{(i)}(1) = \sum_{j\geq i} j(j-1)\ldots (j-i + 1) f_j,$$ we see that as $i$ grows, $w(F^{(i)}(1))\to -\infty$ linearly in $i$. In particular, $w(g_j)\to -\infty$ linearly in $j$, and, more precisely, the estimate \begin{equation}\label{coefestimategj} w(g_j) < -2\eta j\end{equation} coming from (\ref{coefestimate}) holds for all sufficiently large $j$. Write now \begin{eqnarray*} Z(T) & = & \frac{F(T)}{(1-T)^{r}}\\ & = & G(T) + \sum_{i=0}^{r-1} \frac{F^{(i)}(1)(-1)^i}{i!(1-T)^{r-i}}\\ & = & G(T) + \sum_{i=0}^{r-1} \frac{F^{(i)}(1)(-1)^i}{i!}\sum_{n\geq 0} {n + r-i -1\choose r-i-1}T^n\\ & = & G(T) + \sum_{n\geq 0} \left(\sum_{i=0}^{r-1} \frac{F^{(i)}(1)(-1)^i}{i!}{n + r-i -1\choose r-i-1}\right) T^n.
\end{eqnarray*} Thus, identifying coefficients, we have \begin{equation}\label{Mnexpansion}[M_n] \mathbf{L}^{-n} = g_n + \sum_{i=0}^{r-1} \frac{F^{(i)}(1)(-1)^i}{i!}{n + r-i -1\choose r-i-1}.\end{equation} Since by assumption $[M_{n}]$ is an element of $\mathrm{KVar}_k^{+}$, its Hodge-Deligne polynomial is of the form \begin{equation}\label{Mnform}\kappa(M_n) (uv)^{\dim(M_n)}+ \ \text{terms of lower degree}.\end{equation} To get asymptotics for $\dim(M_n)$ and for the coefficients of high degree of $HD(M_n)$ when $n$ goes to infinity, we therefore need to keep track of the dominant terms of the Hodge-Deligne series of $(\ref{Mnexpansion})$. We denote by $\{\a \}_d$ the coefficient of $(uv)^d$ in the Hodge-Deligne series of $\a\in \widehat {\mathscr{M}}_{\mathbf{C}}$. Let~$d_0$ be the largest integer $d$ such that there exists $i\in\{0,\ldots,r-1\}$ with $\{F^{(i)}(1)\}_d\neq 0$. Such a $d_0$ does exist since, by assumption, $F(1)$ is effective and non-zero, and therefore there exists some integer $b$ such that $ \{F(1)\}_b\neq 0$. Then for all sufficiently large $n$ (namely, for $n$ such that $w(g_n) < 2d_0$), and for all $d\geq d_0$, we have \begin{equation}\label{Mncoef}\{[M_n]\mathbf{L}^{-n}\}_{d} = \sum_{i=0}^{r-1} \frac{(-1)^i}{i!}{n + r-i -1\choose r-i-1}\{F^{(i)}(1)\}_d.\end{equation} Then, for $d >d_0$, the right-hand side of (\ref{Mncoef}) is zero, forcing the left-hand side to be zero as well, so that $w([M_n]\mathbf{L}^{-n})\leq 2d_0.$ Put now $d=d_0$, and let $i_0$ be the smallest $i$ such that $\{F^{(i)}(1)\}_d\neq 0$.
Then we have $$\{[M_n]\mathbf{L}^{-n}\}_{d_0} \sim_{n\to\infty}\{F^{(i_0)}(1)\}_{d_0}\frac{(-1)^{i_0}}{i_0!(r-i_0-1)!}n^{r-i_0-1},$$ so that for sufficiently large $n$, $w([M_n]\mathbf{L}^{-n}) = 2d_0$, and moreover $$ \frac{\log\kappa(M_n)}{\log n} \longrightarrow r-i_0-1 \in\{0,\ldots,r-1\}.$$ More generally, going back to equation (\ref{Mnexpansion}), we see that for sufficiently large $n$, the effective element $M_n$ is the sum of the element $\mathbf{L}^ng_n$ of $\mathscr{M}_k$, which is of weight strictly less than $2(1-\eta)n$ by estimate (\ref{coefestimategj}), and of the sum $$\sum_{i=0}^{r-1} \frac{F^{(i)}(1)(-1)^i}{i!}{n + r-i -1\choose r-i-1},$$ which is a polynomial of degree at most $r-1$ in $n$ with coefficients in $\mathscr{M}_k$ and of weight~$2n$. The statement on the coefficients of the Hodge-Deligne polynomial follows. It remains to show how to reduce to this case when $a >1$. We may decompose $F$ in the following manner: $$F(T) = \sum_{p=0}^{a-1}\sum_{j\geq 0}f_{aj+p}T^{aj+p} = \sum_{p=0}^{a-1}T^{p}F_p(T^{a}),$$ where $F_p(T) = \sum_{j\geq 0}f_{aj+p}T^{j},$ so that $$Z(T) = \sum_{p=0}^{a-1}T^{p}\frac{F_p(T^a)}{(1-T^a)^r}.$$ Using the expansion above and putting $G_p(T) = \sum_{i\geq r} \frac{F_p^{(i)}(1)(-1)^i}{i!}(1-T)^{i-r} = \sum_{m\geq 0}g_{p,m}T^m$, we then have $$Z(T) = \sum_{p=0}^{a-1} T^p\left(G_p(T^a) + \sum_{m\geq 0}T^{am}\sum_{i=0}^{r-1} \frac{F_p^{(i)}(1)(-1)^i}{i!}{m + r-i -1\choose r-i-1}\right).$$ Thus, for every $p\in\{0,\ldots,a-1\}$ and every $m\geq 0$, we have $$[M_{am+p}]\mathbf{L}^{-(am+p)} = g_{p,m} + \sum_{i=0}^{r-1} \frac{F_p^{(i)}(1)(-1)^i}{i!}{m + r-i -1\choose r-i-1}.$$ Fix $p\in\{0,\ldots,a-1\}$, and assume first that there is some $d\in\mathbf{Z}$ and some $i\in\{0,\ldots,r-1\}$ such that $\{F_p^{(i)}(1)\}_d\neq 0.$ Then we may conclude as above.
If on the contrary such a $d$ does not exist, this means that $$w([M_{am+p}]\mathbf{L}^{-(am+p)})\to -\infty$$ linearly in $m$ (because $w(g_{p,m})\to -\infty$ linearly in $m$), so that $\limsup \frac{\dim M_n}{n}<1$ when $n$ goes to infinity in the congruence class of $p$ modulo~$a$. It remains to show that this last case does not occur for all $p$. For this, recall that $F(1) = \sum_{p=0}^{a-1}F_p(1)$, and, $F(1)$ being effective and non-zero, there exists $d$ such that $\{F(1)\}_d\neq 0$. This means that $\{F_p(1)\}_d\neq 0$ for at least one $p$. \end{proof} \section{Manin's problem in the arithmetic setting} One of the greatest concerns of number theory and algebraic geometry in recent decades has been to understand the subtle link between the distribution of rational or integral points on a variety defined over a number field and some of its geometric invariants. Let $F$ be a number field and $X$ a projective variety over~$F$, endowed with an ample line bundle~$L$. Such a line bundle defines (up to adding a bounded function) a height function $H:X(F)\to \mathbf{R}_{+}$ such that for any real number~$B$, the set $$\{x\in X(F), H(x)\leq B\}$$ is finite (this is called \textit{Northcott's property}). We then denote, for any Zariski open subset~$U$ of~$X$, $$N_{U,H}(B) := \#\{x\in U(F), H(x)\leq B\}.$$ When the set $U(F)$ is itself infinite, one can ask about the asymptotic behaviour of $N_{U,H}(B)$ as~$B$ goes to $+\infty$. In all known cases, it is of the form $$N_{U,H}(B)\underset{B\to\infty}{\sim} C B^{a}(\log B)^{b-1}$$ with $C>0$ and $a\geq 0$ being real numbers, and $b\in \frac{1}{2}\mathbf{Z}$, $b\geq 1$.
A series of conjectures, or rather questions, stated by Manin and his coauthors in \cite{FManT} and \cite{BM} at the end of the 1980s initiated a vast programme aimed at giving a geometric interpretation of the exponents $a$ and $b$, in terms of the classes of the line bundle $L$ and the canonical bundle $K_X$ in the N\'{e}ron-Severi group of $X$, and of the cone of effective classes of divisors on $X$. To get a plausible statement, one of course has to make some restrictions on the variety $X$. In particular, one can only hope to obtain an asymptotic describing the geometry of $X$ properly if the set of rational points~$X(F)$ is Zariski dense in~$X$. Since Lang's conjecture predicts that this should never happen for varieties of general type, Manin-type problems often restrict to Fano varieties, that is, varieties with ample anti-canonical bundle. One sometimes considers a larger class of varieties, called \textit{almost Fano} (we refer to \cite{PeyreBourbaki}, Definition 3.11, for a precise definition), having a (not necessarily ample but) big anti-canonical bundle, which is a sufficient condition for the height function to satisfy Northcott's property on a nonempty open subset of the variety $X$. For such varieties, the conjectures take on a particularly simple form if one counts points with respect to the anti-canonical height: \begin{pbm} \label{mainquest}Let~$X$ be an almost Fano variety defined over a number field~$F$, such that~$X(F)$ is Zariski dense in~$X$. Let $H$ be a height function relative to the anti-canonical bundle on~$X$. Does there exist a dense open subset~$U$ of~$X$ satisfying \begin{equation}\label{asymptotic}N_{U,H}(B)\underset{B\to \infty}{\sim} C_HB(\log B)^{r-1}\end{equation} where $r$ is the Picard rank of~$X$, and $C_H$ a positive constant?
\end{pbm} It is necessary to allow restriction to an open subset, in order to take into account the possible accumulation of rational points inside proper closed subsets, which would overwhelm the distribution on the complementary open subset. Let us also mention a more precise form of this conjecture, due to Peyre, who in \cite{Peyre} proposed an interpretation for the constant~$C_H$ in terms of volumes of certain adelic spaces. Furthermore, in~\cite{BT98} Batyrev and Tschinkel adjusted Peyre's formula for the constant by adding a cohomological factor. The first result of this type was obtained by Schanuel \cite{Schanuel}, long before the conjectures were formulated, in the case where $X = U = \mathbf{P}^n_F$, giving moreover an explicit formula for the constant~$C_H$. In~\cite{FManT}, it is shown that formula~(\ref{asymptotic}) holds for flag varieties (with~$U=X$). Since then, problem \ref{mainquest}, sometimes even in Peyre's more precise form, could be answered affirmatively in many special cases, using many different methods, among which one may mention harmonic analysis, universal torsors and the circle method. However, there also exist counterexamples, like the one of Batyrev and Tschinkel~\cite{BT98}, which is why we preferred to state the above conjecture in the form of a question. The main tool, common to most of the existing approaches, is the \textit{height zeta function} $$\zeta_{U,H}(s) = \sum_{x\in U(F)}H(x)^{-s},$$ a function defined on a portion of the complex plane, whose convergence properties (abscissa of convergence, order of the first pole, coefficient at this pole) are linked, via tauberian theorems, to the exponents and the constant appearing in the asymptotic we are looking for. More precisely, problem~\ref{mainquest} may then essentially be reformulated in the following manner: \begin{pbm}\label{zetaquest} Let~$X$ be an almost Fano variety over a number field~$F$, such that~$X(F)$ is Zariski dense in $X$.
Let $H$ be a height function relative to the anti-canonical bundle on~$X$. Does there exist a dense open subset~$U$ of $X$ such that $\zeta_{U,H}(s)$ converges absolutely for $\mathrm{Re}(s)>1$, and admits a meromorphic continuation to the open set $\{\mathrm{Re}(s) > 1-\delta\}$ for some real number $\delta>0$, with a unique pole of order $r = \mathrm{rg}\ \mathrm{Pic}(X)$ at $s=1$? \end{pbm} In the same way as for problem~\ref{mainquest}, there is a more precise version of this question, requiring that the coefficient of the main term of~$\zeta_{U,H}(s)$ at 1 should be the constant predicted by Peyre. \section{Manin's problem via harmonic analysis} \label{sect.maninharmonique} The aforementioned case of flag varieties was the first one solved using techniques coming from harmonic analysis: the proof relied on the fact that for these varieties, the height zeta function is an Eisenstein series, whose analytic properties could be determined using results of Langlands. Then, in the middle of the 1990s, Batyrev and Tschinkel treated the case of toric varieties, the open subset~$U$ being the open orbit of the torus action on the variety. Their argument relies on the Poisson summation formula, and the torus action plays a key role there. It could be generalised to many other varieties endowed with an action of an algebraic group with an open orbit, e.g. certain equivariant compactifications of algebraic groups. An \textit{equivariant compactification} of an algebraic group~$G$ is a (projective and smooth) variety~$X$ which has an open subset isomorphic to~$G$ and which is endowed with an action $G\times X\to X$ of $G$ extending the group law $G\times G \to G$.
Apart from toric varieties, which are exactly the equivariant compactifications of algebraic tori, Manin's problem has been solved for equivariant compactifications of vector groups (\cite{CLT}, \cite{CLTi}), as well as for compactifications of certain non-commutative algebraic groups (\cite{ShTTB03}, \cite{ShTTB07}, \cite{STH}, \cite{STU}, \cite{TTB}, \cite{TT}). Let us give an outline of the argument in the case of equivariant compactifications of vector groups, due to Chambert-Loir and Tschinkel, which will play a central role in this text. Let~$X$ be an equivariant compactification of the group $G = \mathbf{G}^n_a$ for some $n\geq 1$, defined over a number field~$F$. For each place~$v$ of the field~$F$, there is a local height~$H_v:G(F_v)\to \mathbf{R}_+$, the global height~$H$ being given by the formula $$H(x) = \prod_{v}H_v(x),$$ which gives a way of extending~$H$ to the locally compact group~$G(\mathbb{A}_F)$, where~$\mathbb{A}_F$ is the group of adeles of~$F$. The group~$G(\mathbb{A}_F)$ is self-dual, and~$G(F)$ may be seen as a discrete subgroup of~$G(\mathbb{A}_F)$ whose orthogonal is identified with itself, so that, applying the Poisson summation formula to the function $H^{-s}$ (after checking all necessary integrability conditions), one obtains: \begin{equation}\label{intro.formulepoisson}\zeta_{H,G}(s) = \sum_{x\in G(F)}H(x)^{-s} = \sum_{\xi\in G(F)}\mathscr{F}(H^{-s})(\xi),\end{equation} where $\mathscr{F}$ denotes the Fourier transform. This equality is valid whenever the real part of~$s$ is large enough. Moreover, the function~$H^{-s}$ is invariant under a compact subgroup of~$G(\mathbb{A}_F)$, so that its Fourier transform is supported inside such a subgroup, which reduces the summation over~$G(F)$ in the right-hand side to a summation over a lattice.
The upshot of this procedure is that it rearranges the terms of the height zeta function in a convenient way: in all cases where this method has worked, and when $H$ is the anti-canonical height, the term corresponding to the trivial character (that is, $\xi=0$) happens to be the only one carrying the first pole, with the correct order. To prove this, one uses again the decomposition into local factors $$\mathscr{F}(H^{-s})(\xi) = \prod_{v}\mathscr{F}(H^{-s}_v)(\xi_v)$$ and studies the different factors separately. The latter take the form of Igusa-type integrals, for which one gets, for almost all finite places~$v$, an explicit expression by reduction to the residue field. Their convergence properties are then determined using bounds coming from Lang-Weil estimates. At archimedean places, successive integrations by parts yield bounds with polynomial growth in $s$. For the finite number of remaining places, cruder bounds suffice. In the case where the considered height is the anti-canonical height, one concludes from these local estimates that the term corresponding to $\xi =0$ has a pole at~$s=1$, with the coefficient predicted by Peyre, of order the rank of the Picard group, that the other terms have poles of strictly smaller orders, and that the function~$s\mapsto (s-1)^{\mathrm{rg}\,\mathrm{Pic}(X)}\zeta_{H,G}(s)$ extends holomorphically to $\{s\in\mathbf{C},\ \Re(s) >1-\delta\}$ for some real number~$\delta >0$. Though the problem of counting integral solutions of diophantine equations is natural, no one had addressed it from the geometric angle suggested by Manin before Chambert-Loir and Tschinkel, in their paper~\cite{CLTi}, proposed a solution to Manin's problem for integral points on \textit{partial} equivariant compactifications of vector groups. Such a partial compactification~$U$ is seen as the complement, in an equivariant compactification~$X$, of a divisor~$D$ which geometrically has strict normal crossings.
As before, it contains a dense open subset~$G$ isomorphic to the additive group~$\mathbf{G}^{n}_{a}$. We fix models of $X,U,D$ over the ring of integers $\mathcal{O}_F$ and a finite set of places~$S$ containing the archimedean places, and we aim at counting the points of~$G(F)$ which are $S$-integral with respect to the chosen model of~$U$. Note that this problem includes the previous one: in the case where~$D=\varnothing$ and the chosen model of~$D$ is empty as well, all points of~$G(F)$ are $S$-integral, by projectivity of $X$. For this counting problem, the relevant height is not the anti-canonical one, but the \textit{log-anticanonical} height, that is, the one associated to the (big) line bundle~$-(K_X + D)$. Chambert-Loir and Tschinkel then prove that the number~$N(B)$ of $S$-integral points of log-anticanonical height smaller than~$B$ satisfies the asymptotic $$N(B)\sim C B(\log B)^{b-1}$$ where $C$ is a positive real constant, and where the exponent~$b$ is given by the formula \begin{equation}\label{formuleexposant}b = \mathrm{rg}\ \mathrm{Pic}(U) + \sum_{v\in S}(1 + \dim \mathscr{C}^{\mathrm{an}}_{F_v}(D)).\end{equation} Here, for every~$v$, $\mathscr{C}^{\mathrm{an}}_{F_v}(D)$ is a simplicial complex encoding the incidence properties of the components of~$D$ at the place~$v$. The term $1 + \dim \mathscr{C}^{\mathrm{an}}_{F_v}(D)$ corresponds exactly to the maximal number of components of~$D$ over~$F_v$ whose intersection has~$F_v$-points. Of course, in the special case~$U = X$, all these terms are zero, and we recover the previous result for rational points. \section{Manin's problem over function fields} For the moment, we have always restricted ourselves to the case where~$F$ is a number field. The case where~$F$ is the function field~$k(C)$ of a smooth projective curve~$C$ over a finite field~$k$ of cardinality~$q$ was first mentioned by Batyrev and Manin (\cite{BM}, 3.13).
In this setting, the heights used (when suitably normalised) take their values in the set $q^{\mathbf{Z}}$ of integral powers of $q$, which forbids the existence of an asymptotic like the one predicted by problem~\ref{mainquest}. Nevertheless, problem~\ref{zetaquest} remains valid, up to a slight modification to take into account the fact that the corresponding zeta function will in this case be $\frac{2i\pi}{\log(q)}$-periodic, namely that we need to allow, in addition to the pole at~1, poles at $1 + \frac{2i\pi m}{\log(q)}$ for every integer $m$. In other words, it will be more suitable here to consider the height zeta function as a function of the variable~$t = q^{-s}$, which yields the following reformulation of problem~\ref{zetaquest}: \begin{pbm}\label{zetafunctionquest} Let $C$ be a smooth projective connected curve over~$\mathbf{F}_q$, with function field denoted by~$F$. Let~$X$ be an almost Fano variety over the field~$F$, such that~$X(F)$ is Zariski dense in~$X$. Let~$H$ be a height relative to the anti-canonical bundle of~$X$. Does there exist a dense open subset~$U$ of~$X$ such that the series~$\zeta_{U,H}(t)$ converges absolutely on the disc defined by $|t| < q^{-1}$, and extends to a meromorphic function on the disc defined by $|t| < q^{-1 + \delta}$ for some real number $\delta>0$, with a unique pole of order $r = \mathrm{rg}\ \mathrm{Pic}(X)$ at $t = q^{-1}$? \end{pbm} Let us point out that in general, one can however have other poles of orders~$<r$ on the circle $|t| = q^{-1}$, in particular for certain split toric varieties. In the function field case, the problem acquires an additional geometric interpretation: if one chooses a model~$\mathcal{X}$ of~$X$ above the curve~$C$, the rational points of an open subset~$U$ of~$X$ correspond to sections~$\sigma: C\to \mathcal{X}$ of the structural morphism $\pi:\mathcal{X}\to C$ such that, denoting by $\eta_C$ the generic point of~$C$, one has $\sigma(\eta_C)\in U(F)$.
Given a (generically ample, or at least big) line bundle~$\mathcal{L}$ on~$\mathcal{X}$, the height of such a section with respect to~$\mathcal{L}$ is given by~$q^{d}$, where $d$ is the degree of the line bundle~$\sigma^{*}\mathcal{L}$ over~$C$. The height zeta function then takes the form $$\zeta_{U,\mathcal{L}}(s) = \sum_{x\in U(F)} H(x)^{-s} = \sum_{d\geq 0}m_dq^{-ds},$$ with $$m_d = |\{\sigma:C\to \mathcal{X},\ \sigma(\eta_C) \in U(F),\ \deg\sigma^{*}\mathcal{L} = d\}|.$$ Thus, information about the convergence of the height zeta function will furnish the asymptotic behaviour of the number~$m_d$ of points of fixed height~$q^d$, which in this setting is the analogue of formula~(\ref{asymptotic}). Manin's problem over function fields has been studied rather little until now: one must nevertheless mention the works of Bourqui \cite{Bou02,Bou03,Bou11}, which completely solve the case of toric varieties (employing harmonic analysis but also the universal torsor method), as well as the papers~\cite{LY} and~\cite{Peyre12}, which treat independently the case of generalised flag varieties, Peyre's work containing moreover an interpretation of the constant. Peyre's method is analogous to Franke, Manin and Tschinkel's for flag varieties over number fields in~\cite{FManT}, the role of the results of Langlands being played by those of Morris about Eisenstein series over function fields. The recent advances of \textit{motivic integration} finally suggest the following generalisation of Manin's problem over function fields: the sections $\sigma:C\to \mathcal{X}$ such that $\sigma(\eta_C) \in G(F)$ and $ \deg\sigma^{*}\mathcal{L} = d$ have a \textit{moduli space}~$M_d$ which is a quasi-projective $k$-scheme. The interest of the geometric study of such moduli spaces in relation with Manin's conjectures was first pointed out by Batyrev.
More precisely, following an idea of Peyre, one can ask not only about the asymptotic cardinality~$m_d$ of $M_d(k)$, but more generally about the properties, when~$d$ is large, of the class of~$M_d$ in the Grothendieck ring of varieties $\mathrm{KVar}_k$ over~$k$. As a group, the latter is defined as the quotient of the free abelian group on the isomorphism classes of varieties over~$k$ by relations of the form $$X - U - Z$$ for any variety~$X$ and any closed subscheme~$Z$ of~$X$ with open complement~$U$. The ring structure comes from the product of varieties: using brackets to denote classes in~$\mathrm{KVar}_k$, one has $[X][Y] = [X\times_kY]$ for all $k$-varieties $X$ and~$Y$. We denote by $\mathbf{L} = [\mathbf{A}^1_k]$ the class of the affine line, and we also often consider the localised Grothendieck ring~$\mathscr{M}_{k} = \mathrm{KVar}_k[\mathbf{L}^{-1}]$. The class of a variety in the Grothendieck ring contains a large amount of geometric information about this variety: indeed, there exist numerous \textit{motivic measures}, that is, ring morphisms from~$\mathrm{KVar}_k$ to other rings, associating to a class~$[X]$ various geometric invariants of~$X$. Among these, in the case where the field is finite, one can mention the \textit{counting measure} $$\begin{array}{ccc} \mathrm{KVar}_{k}&\to& \mathbf{Z} \\ \left[X\right] &\mapsto & \# X(k)\end{array}$$ which recovers the number of rational points of the variety.
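For instance, the decomposition of $\mathbf{P}^n_k$ into the hyperplane $Z = \mathbf{P}^{n-1}_k$ and its open complement $U \simeq \mathbf{A}^n_k$ gives, by induction on $n$, the relation $$[\mathbf{P}^n_k] = [\mathbf{A}^n_k] + [\mathbf{P}^{n-1}_k] = 1 + \mathbf{L} + \cdots + \mathbf{L}^n$$ in $\mathrm{KVar}_k$; when $k$ is finite of cardinality $q$, applying the counting measure recovers the familiar count $\#\mathbf{P}^n(\mathbf{F}_q) = 1 + q + \cdots + q^n$.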
For any field~$k$, fixing a separable closure~$k^s$ of~$k$ and a prime number~$\ell$ coprime to the characteristic of~$k$, the Euler-Poincaré polynomial (associated to cohomology with coefficients in~$\mathbf{Q}_{\ell}$) also defines a motivic measure $$\begin{array}{ccc} \mathrm{KVar}_{k}&\to& \mathbf{Z}[t] \\ \left[X\right] &\mapsto & EP(X)(t) \end{array}$$ which for a smooth and projective variety~$X$ is given by $$EP(X)(t) = \sum_{i=0}^{2\dim X}(-1)^i\dim_{\mathbf{Q}_{\ell}}H^{i}_{\text{ét}}(X\otimes_kk^s,\mathbf{Q}_{\ell})t^i.$$ Another example of the same flavour, and which will be important for us, is the Hodge-Deligne polynomial: $$\begin{array}{ccc} \mathrm{KVar}_{\mathbf{C}}&\to& \mathbf{Z}[u,v] \\ \left[X\right] &\mapsto & HD(X)(u,v)\end{array}$$ sending the class of a projective and smooth complex variety~$X$ to the polynomial $$HD(X)(u,v) = \sum_{0\leq p,q\leq \dim X} (-1)^{p+q}h^{p,q}(X)u^pv^q$$ defined from the Hodge numbers $h^{p,q}(X)$ of~$X$. Noting that $HD(\mathbf{L}) = uv$, one can moreover extend this measure to a ring morphism $$HD:\mathscr{M}_{\mathbf{C}}\to \mathbf{Z}[u,v,(uv)^{-1}]$$ defined on the localised Grothendieck ring $\mathscr{M}_{\mathbf{C}}$. Now that these definitions have been given, we come to the fundamental observation that we can in fact make sense of a version of Manin's problem over the function field~$k(C)$ of a curve even when the base field~$k$ is not necessarily finite: the question is now to investigate certain geometric invariants of the space~$M_d$ when~$d$ goes to infinity, for example its dimension, or its number of irreducible components of maximal dimension. In this setting, which we will call \textit{motivic}, the notion of height zeta function takes the form of the series \begin{equation}\label{zetafunctionformula}Z(T) = \sum_{d\geq 0}[M_d]T^d\in \mathrm{KVar}_k[[T]]\end{equation} with coefficients in the Grothendieck ring of varieties, called the \textit{motivic height zeta function}.
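To illustrate these definitions in an elementary case (where the dimension counts can be carried out by hand), take $k$ algebraically closed, $C = \mathbf{P}^1_k$, $\mathcal{X} = \mathbf{P}^n_k\times C$ with $\pi$ the second projection, $U = X = \mathbf{P}^n_F$, and let $\mathcal{L}$ be the pullback of $\mathcal{O}_{\mathbf{P}^n}(n+1)$, so that the generic fibre of $\mathcal{L}$ is the anti-canonical bundle of $X$. A section of~$\pi$ is the graph of a morphism $f:\mathbf{P}^1\to\mathbf{P}^n$, and if $f$ has degree~$e$, then $\deg \sigma^{*}\mathcal{L} = (n+1)e$. Thus $M_d = \varnothing$ unless $(n+1)\mid d$, while $M_{(n+1)e}$ is the open subvariety of $\mathbf{P}^{(n+1)(e+1)-1}$ parametrising $(n+1)$-tuples of binary forms of degree~$e$ without common zero, up to scalar, so that $$\dim M_{(n+1)e} = (n+1)(e+1) - 1 = (n+1)e + n.$$ In particular, $\dim M_d - d$ stays bounded along the congruence class $d\equiv 0 \pmod{n+1}$; this is consistent with a pole of $Z(T)$ at $T = \mathbf{L}^{-1}$, and already displays the congruence phenomena that will appear below.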
Such functions have been studied by Bourqui in~\cite{Bou09} for certain toric varieties. One of the first difficulties appearing when dealing with them is the question of convergence of such a series. The localised Grothendieck ring $\mathscr{M}_k = \mathrm{KVar}_{k}[\mathbf{L}^{-1}]$ has a natural topology induced by the \textit{dimensional filtration}: for every $n\in \mathbf{Z}$ we define $F_n\mathscr{M}_k$ to be the subgroup of~$\mathscr{M}_k$ generated by classes of the form $[X]\mathbf{L}^{-m}$ where~$X$ is a $k$-variety such that $\dim(X) - m\leq n$. It is then natural to say that the above series converges at $\mathbf{L}^{-s}$ if $\dim[M_d] - ds\to -\infty$ when $d\to +\infty$. Unfortunately, because of the coarseness of the notion of dimension, this notion of convergence is not very convenient to work with. In this text, we are going to use a slightly finer topology, coming from the theory of weights in cohomology. Note that in~\cite{Bou09}, Bourqui nevertheless manages to prove convergence with respect to the dimensional filtration (in the Grothendieck ring of Chow motives), because in the case of the split toric varieties addressed in that paper, the dimensional filtration is sufficient to take into account the cancellation of maximal weights which is necessary for convergence. Later, in~\cite{Bou10}, Bourqui introduced a topology which is similar to ours, and the idea of which appears also in~\cite{Ekedahl}. \section{Main result} The principal objective of this text is to obtain a motivic analogue of the theorem of Chambert-Loir and Tschinkel on counting integral points on partial equivariant compactifications of vector groups, described in the second half of section~\ref{sect.maninharmonique}. As in the arithmetic case, we are going to use harmonic analysis, which requires the construction of suitable objects in the motivic setting. We start by stating the main theorem.
\begin{hypothese}\label{hypothese.geom} Let~$C_0$ be a smooth quasi-projective connected curve over an algebraically closed field~$k$ of characteristic zero, let~$C$ be its smooth projective compactification, and let $S = C\setminus C_0$. We denote by $F =k(C)$ the function field of~$C$. We are given a projective irreducible $k$-scheme~$\mathcal{X}$ together with a non-constant morphism $\pi:\mathcal{X}\to C$, a Zariski open subset~$\mathcal{U}$ of~$\mathcal{X}$, and a line bundle~$\mathcal{L}$ over~$\mathcal{X}$. We make the following assumptions on the generic fibres $X = \mathcal{X}_F$, $U = \mathcal{U}_{F}$ and the line bundle $L = \mathcal{L}_{F}$: \begin{itemize}\item $X$ is smooth, the open subset $U$ of $X$ contains a dense open subset~$G$ isomorphic to $\mathbf{G}^{n}_{a,F}$, and~$U$ and~$X$ are endowed with an action of~$G$ extending the group law of~$G$. In other words, $X$ (resp. $U$) is an equivariant (resp. partial equivariant) compactification of the additive group~$\mathbf{G}^n_{a}$. \item the boundary $\partial X = X\setminus U$ is a divisor~$D$ with strict normal crossings. \item the line bundle~$L$ on~$X$ is the log-anticanonical line bundle $-(K_X+D)$. \end{itemize} \end{hypothese} As above, we are interested in the moduli spaces of sections $\sigma: C\to \mathcal{X}$ such that $\sigma(\eta_C)\in \mathbf{G}^{n}_{a}(F)$ (where $\eta_C$ is the generic point of~$C$), but we restrict to those which correspond to $S$-integral points, which amounts to moreover requiring $\sigma(C_0)\subset \mathcal{U}$. From the geometric point of view, the first condition means that such a section $\sigma:C\to \mathcal{X}$ exits~$G$ only at a finite number of points of~$C$, called \textit{poles}, and the second one that for all $v\in C_0$, $\sigma(v)$ remains in~$\mathcal{U}$.
If we denote by $(D_{\alpha})_{\alpha\in \mathscr{A}}$ the irreducible components of~$X\setminus G$, the log-anticanonical divisor can be written in the form $\sum_{\alpha\in \mathscr{A}} \rho'_{\alpha}D_{\alpha}$ for positive integers $\rho'_{\alpha}$. The line bundle~$\mathcal{L}$ being generically log-anticanonical, it is of the form $\mathcal{L} = \sum_{\alpha\in \mathscr{A}}\rho'_{\alpha}\mathcal{L}_{\alpha}$ where for all $\alpha\in \mathscr{A}$, the restriction of the line bundle~$\mathcal{L}_{\alpha}$ to the generic fibre is~$D_{\alpha}$. For $d\in\mathbf{Z}$ we denote by~$M_d$ the moduli space of sections~$\sigma$ satisfying the above conditions, and such that moreover $\deg \sigma^{*}\mathcal{L} = d$. In view of the description of~$\mathcal{L}$ just given, up to a finite number of places (because~$\mathcal{L}$ may have a finite number of vertical components), only the poles of~$\sigma$ contribute to this degree, the contribution of each pole being the sum of the intersection degrees of~$\sigma$ at this pole with the divisors $\mathcal{L}_{\alpha}$ (the \textit{order} of the pole with respect to $\mathcal{L}_{\alpha}$), weighted by the integers $\rho'_{\alpha}$. One checks that the spaces~$M_d$ are empty for $d\ll 0$, so that one can define the motivic height zeta function by \begin{equation}\label{def.fonctionzeta}Z(T) = \sum_{d\in \mathbf{Z}}[M_d]T^d \in\mathrm{KVar}_k[[T]][T^{-1}].\end{equation} In addition to the ``generic'' hypotheses above, we need some assumption on the model~$\mathcal{U}$ for the spaces~$M_d$ to be non-empty. In fact, a ``Hasse principle''-type hypothesis is sufficient: \begin{hypothese}\label{hypothese.section} We assume that there is no \textit{local obstruction} for the existence of such sections, that is for any closed point~$v\in C_0$ we have $G(F_v)\cap \mathcal{U}(\mathcal{O}_v) \neq \varnothing$, where~$F_v$ is the completion of~$F$ at the place $v$, and~$\mathcal{O}_v$ its ring of integers. 
\end{hypothese} To understand this requirement, it is useful to reformulate the condition on the sections in an adelic language. Each section corresponds to a unique point $\sigma(\eta_C)$ of $G(F)$, which by the diagonal embedding defines an element of the set $G(\mathbb{A}_F)$ of adelic points of~$G$. Up to a finite number of places, for a closed point $v\in C_0$ which is not a pole of~$\sigma$, the component of~$\sigma$ in~$G(F_v)$ is an element of~$G(\mathcal{O}_v)$. The $S$-integrality condition $\sigma(C_0)\subset \mathcal{U}$ means that for all $v\in C_0$, the component of~$\sigma$ in~$G(F_v)$ is an element of~$\mathcal{U}(\mathcal{O}_v)$: the non-emptiness of the intersection $G(F_v)\cap \mathcal{U}(\mathcal{O}_v)$ is therefore a necessary condition for the existence of such a section. Under these assumptions, we obtain the expected convergence for a topology on the Grothendieck ring~$\mathscr{M}_{\mathbf{C}}$ which will be made more explicit below: \begin{theoreme}\label{main} We assume $k=\mathbf{C}$, and keep the notation and hypotheses of Settings \ref{hypothese.geom} and~\ref{hypothese.section}. We denote by~$b$ the integer given by formula (\ref{formuleexposant}). There exists an integer $a\geq 1$ and a real number $\delta >0$ such that the Laurent series $(1-(\mathbf{L} T)^a)^bZ(T)$ converges for $|T| < \mathbf{L}^{-1 + \delta}$ and takes a non-zero effective value at $ T = \mathbf{L}^{-1}$. \end{theoreme} Thus, the order of the pole of the height zeta function is given by the same formula as in Chambert-Loir and Tschinkel's result mentioned in section~\ref{sect.maninharmonique}. The non-zero effective value at $\mathbf{L}^{-1}$ announced in the statement is an element of the completion $\widehat{\mathscr{M}}_{\mathbf{C}}$ of $\mathscr{M}_{\mathbf{C}}$ for the topology we consider: it appears as an infinite product of local volumes, and is a motivic analogue of Peyre's constant.
For any $k$-constructible set~$M$, we denote by~$\kappa(M)$ the number of irreducible components of maximal dimension of~$M$. The Hodge-Deligne polynomial~$HD$ extends to a motivic measure $$\widehat{\mathscr{M}}_{\mathbf{C}}\to \mathbf{Z}[[(uv)^{-1}]][u,v]$$ on the aforementioned completion, and theorem~\ref{main}, via this motivic measure, furnishes a description of the asymptotic behaviour of~$\dim(M_d)$ and $\kappa(M_d)$, with a distinction according to the congruence class of $d$ modulo $a$ imposed by the presence of the exponent~$a$. Moreover, a Lefschetz principle type argument enables us to remove the hypothesis $k=\mathbf{C}$ assumed in the theorem. \begin{corollaire}\label{maincor} For all $p\in\{0,\ldots,a-1\}$, one of the following cases occurs when~$d$ goes to infinity in the congruence class of~$p$ modulo~$a$: \begin{enumerate}[(i)] \item Either $\limsup \frac{\dim(M_d)}{d}<1$. \item Or $\dim(M_d) - d$ has a finite limit $d_0$ and $$\frac{\log(\kappa(M_d))}{\log d}$$ converges to an element of the set $\{0,\ldots,b-1\}$. More generally, for every sufficiently small $\eta>0$ and for sufficiently large $d$ in the congruence class of $p$ modulo $a$, the coefficients of the Hodge-Deligne polynomial $HD(M_d)$ of degrees contained in the interval $$[2(1- \eta)d + 2d_0, 2d + 2d_0]$$ are polynomials in $\frac{d-p}{a}$ of degree at most $b-1$. \end{enumerate} Moreover, the second case happens for at least one value of $p\in\{0,\ldots,a-1\}$. \end{corollaire} In other words, for large $d$ in at least one congruence class modulo $a$, the space $M_d$ is of dimension $d + d_0$ for some integral constant $d_0$, and the number of components having precisely this dimension grows polynomially, with degree at most $b-1$. More generally, this asymptotic holds for a positive proportion of the coefficients of highest degree of the Hodge-Deligne polynomial of $M_d$.
A condition on congruence classes cannot be avoided in general: for example, if the log-anticanonical class is a multiple $aL'$ of a class $L'$ in $\mathrm{Pic}(\mathcal{X})$, then $M_d = \varnothing$ whenever $a\nmid d$. Two important special cases of these results deserve to be highlighted: when~$\mathcal{X} = \mathcal{U}$ we obtain a motivic analogue of Chambert-Loir and Tschinkel's paper~\cite{CLT} for rational points on equivariant compactifications of vector groups. In this case, there is no condition on poles of sections. At the opposite extreme, the case~$U=G_F$ where we only allow sections with poles in the finite set $C\setminus C_0$ has been treated in Chambert-Loir and Loeser's work~\cite{CL}, following the same idea as the proof sketched in section \ref{sect.maninharmonique} for the arithmetic version of Manin's problem. Because of the restriction on the poles of sections, only a finite number of places contribute to the height. Moreover, at fixed degree~$d$, the orders of the poles of the sections counted in the space~$M_d$ are bounded in terms of~$d$. Going back to the adelic description above, one sees that the characteristic function of the sections parametrised by~$M_d$ is a function on the adeles~$G(\mathbb{A}_F)$ whose restriction to~$G(F_v)$ is, for almost all~$v$, the characteristic function of $G(\mathcal{O}_v)$. More precisely, because of the bound on orders of poles and of the equivariance of the compactification, this characteristic function is a \textit{motivic Schwartz-Bruhat function}. The main tool employed by~Chambert-Loir and Loeser is the \textit{motivic Poisson formula} of Hrushovski and Kazhdan, which holds for this kind of function and which, when applied to the characteristic function of~$M_d$ in $G(\mathbb{A}_F)$ for each~$d$, enables one to rewrite the height zeta function~$Z(T)$ in the form $$Z(T) = \sum_{\xi\in G(F)} Z(T,\xi),$$ where the $Z(T,\xi)$ are series with coefficients in a Grothendieck ring described below.
This equality is the analogue, in this setting, of identity~(\ref{intro.formulepoisson}). Chambert-Loir and Loeser then study the functions~$Z(T,\xi)$ separately: since, as explained above, only a finite number of places contribute, these are \textit{finite} products of local factors (whereas the decompositions~$H(x) = \prod_{v}H_v(x)$ in Chambert-Loir and Tschinkel's work had infinitely many factors different from 1), which can be rewritten as motivic integrals on the arc space of the variety~$\mathcal{X}$. These integrals take the form of motivic Igusa zeta functions, the study of which goes back to Denef and Loeser~(\cite{DL98}). By a method analogous to Chambert-Loir and Tschinkel's analysis, the reduction to the residue field being replaced by the reduction of the arc space to the special fibre, Chambert-Loir and Loeser prove that each factor is a rational function. Thus, in this situation the motivic height zeta function happens to be rational, and their version of theorem~\ref{main} is stated by describing the denominators. The coefficient at $\mathbf{L}^{-1}$ is a finite product of motivic volumes in this case. Our approach in this text is greatly inspired by Chambert-Loir and Loeser's, but nevertheless requires several major adaptations. First of all, since we impose fewer constraints on the poles of the sections we are counting, these no longer lie in a fixed finite set: the products of local factors, which were finite in Chambert-Loir and Loeser's work, therefore have no reason to be finite in our setting, and we need to make sense of motivic analogues of the infinite products $H(x) = \prod_{v}H_v(x)$ used by Chambert-Loir and Tschinkel. Moreover, the characteristic function of the sections parametrised by the space~$M_d$ is in our case a complicated adelic function, and a direct application of Hrushovski and Kazhdan's motivic Poisson summation is a priori not possible.
Last but not least, the presence of infinite products opens the question of their convergence, which will require the introduction of a suitable topology on the Grothendieck rings involved. \section{Sketch of proof} \subsection{Motivic Euler products} It is well known that the Riemann zeta function $$\zeta(s) = \sum_{n\geq 1} \frac{1}{n^s}$$ has an \textit{Euler product} decomposition $$\zeta(s) = \prod_{p}(1 - p^{-s})^{-1} = \prod_{p} \left( 1 + p^{-s} + p^{-2s} + \ldots \right),$$ where the product is over the set of all prime numbers. Each factor, separately, converges for $\Re(s) >0$, but the product converges only for $\Re(s)>1$. This property of Euler product decomposition is true more generally for Dirichlet series $$\sum_{n\geq 1} a(n)n^{-s} = \prod_{p} (1 + a(p)p^{-s} + a(p^2)p^{-2s} +\ldots)$$ where $a:\mathbf{N}\to \mathbf{C}$ is a multiplicative function. Another, more geometric example is given by the zeta function of a variety~$X$ over a finite field~$\mathbf{F}_q$: $$\zeta_X(s) := \exp \left(\sum_{m\geq 1} \frac{|X(\mathbf{F}_{q^m})|}{m}q^{-ms}\right).$$ Denoting by $X_{cl}$ the set of closed points of the variety~$X$, the function $\zeta_X(s)$ can indeed be written in the form of an infinite product \begin{equation}\label{produiteulerien}\zeta_X(s) = \prod_{x\in X_{cl}}(1-q^{-s\deg x})^{-1}.\end{equation} Here, as for the Riemann zeta function, each local factor converges for~$\Re(s)>0$, but the series~$\zeta_X(s)$ converges only for $\Re(s)>\dim X$: taking the product shifts the abscissa of convergence by the dimension of the scheme over the closed points of which the product is taken. The example of Riemann's zeta function can also be understood in this way, if we see the set of prime numbers as the set of closed points of the arithmetic scheme~$\mathrm{Spec}\,(\mathbf{Z})$ of dimension $1$. 
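To make the abscissa-shift phenomenon tangible, here is a quick numerical illustration (not part of the argument): over $\mathbf{F}_q$, the closed points of degree $d$ of $\mathbf{A}^1$ are the monic irreducible polynomials of degree $d$, counted by Gauss's Möbius formula, and the truncated Euler product over them converges to $\zeta_{\mathbf{A}^1}(s) = (1-q^{1-s})^{-1}$ for $\Re(s) > \dim \mathbf{A}^1 = 1$. The choices $q=2$, $s=2$ below are arbitrary.

```python
from math import prod

def mobius(n):
    # Möbius function computed by trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def closed_points_A1(q, d):
    # closed points of degree d on A^1 over F_q = monic irreducible
    # polynomials of degree d, counted by Gauss's formula
    return sum(mobius(e) * q ** (d // e) for e in range(1, d + 1) if d % e == 0) // d

q, s, D = 2, 2.0, 30
# truncated Euler product over the closed points of degree <= D
truncated = prod(
    (1 - q ** (-s * d)) ** (-closed_points_A1(q, d)) for d in range(1, D + 1)
)
closed_form = 1 / (1 - q ** (1 - s))  # zeta_{A^1}(s), valid for Re(s) > 1
print(truncated, closed_form)
```

For $\Re(s)\leq 1$ the individual factors still converge while the product diverges, in accordance with the shift of the abscissa of convergence by $\dim \mathbf{A}^1$.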
The method of proof described in section \ref{sect.maninharmonique} for Manin's problem over number fields shows that the two main tools to tackle it are the possibility to decompose a function into an infinite product of local factors, and Fourier analysis. Chapter~\ref{eulerproducts} of this text introduces a notion of \textit{motivic Euler product} which gives a meaning, for any quasi-projective variety~$X$ and any family~$\mathscr{X} = (X_i)_{i\geq 1}$ of quasi-projective varieties over~$X$ (or, more generally, of classes in a relative Grothendieck ring over~$X$), to a product of the form \begin{equation}\label{intro.eulerproduct}\prod_{x\in X}\left(1 + X_{1,x}t + X_{2,x}t^2 + \ldots \right),\end{equation} where each $X_{i,x}$ may be seen as the fibre of~$X_i$ above a point $x\in X$. By analogy with the above examples coming from number theory, one can think of the variable~$t$ as corresponding to~$q^{-s\deg x}$, at least when the field~$k$ is algebraically closed. To define~(\ref{intro.eulerproduct}), we start by constructing the coefficients of the series that should be the expansion of such a product. When we try to expand~(\ref{intro.eulerproduct}) naïvely, we observe that any contribution to the coefficient of degree~$n$ comes from choosing a certain term in each factor in a way that the sum of the degrees of the chosen terms is~$n$, which induces a certain partition of the integer~$n$. We construct the part of the coefficient of degree~$n$ corresponding to a fixed partition~$\pi$ of~$n$ separately. 
Writing $\pi=(n_i)_{i\geq 1}$ where~$n_i$ is the number of occurrences of the integer~$i$ in the partition~$\pi$, so that $\sum_{i\geq 1}in_i = n$, we define the \textit{symmetric product} $S^{\pi}\mathscr{X}$ of the family~$\mathscr{X}$ in the following way: first of all, since we want to construct the part of the coefficient of degree~$n$ corresponding to partition~$\pi$, we need to choose each term $X_{i,x}t^i$ in exactly~$n_i$ factors, which leads us to consider the product \begin{equation}\label{intro.product}\prod_{i\geq 1}X_i^{n_i}.\end{equation} On the other hand, these terms have been chosen in distinct factors, that is, factors corresponding to distinct points $x\in X$. Thus, considering the morphism $$\prod_{i\geq 1}X_i^{n_i}\to \prod_{i\geq 1}X^{n_i}$$ induced by the structural morphisms~$X_i\to X$, we have to restrict to points of the product~(\ref{intro.product}) with image in~$\prod_{i\geq 1}X^{n_i}$ having all its coordinates distinct, that is, lying in the complement of the big diagonal. Denoting by $$\left( \prod_{i\geq 1}X_i^{n_i}\right)_{*,X}$$ the open subset defined in this way, it remains to observe that the factors of the above Euler product do not come in any particular order, which prompts us to take the quotient by the natural permutation action of the product of symmetric groups $\prod_{i\geq 1}\mathfrak{S}_{n_i}$. We therefore put $$S^{\pi}\mathscr{X} = \left( \prod_{i\geq 1}X_i^{n_i}\right)_{*,X}/\prod_{i\geq 1}\mathfrak{S}_{n_i},$$ which exists as a variety under the above quasi-projectivity assumptions. 
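The naive expansion just described can be checked symbolically in the simplest situation where all factors are equal: for a product of $N$ identical factors $1 + c_1 t + c_2 t^2 + \cdots$, the coefficient of $t^n$ decomposes as a sum over partitions $\pi = (n_i)$ of $n$, the term of $\pi$ counting the $\frac{N!}{(N-\sum_i n_i)!\prod_i n_i!}$ ways of choosing the term of degree $i$ in exactly $n_i$ of the $N$ distinct factors, weighted by $\prod_i c_i^{n_i}$. A small sketch of this check (the values of $N$ and the $c_i$ are arbitrary):

```python
from math import factorial, prod

def partitions_mult(n, max_part=None):
    # partitions of n as multiplicity dicts {i: n_i} with sum of i*n_i equal to n
    if max_part is None:
        max_part = n
    if n == 0:
        yield {}
        return
    for i in range(min(max_part, n), 0, -1):
        for m in range(1, n // i + 1):
            for rest in partitions_mult(n - i * m, i - 1):
                yield {**rest, i: m}

N, n = 5, 5          # N factors; we look at the coefficient of t^n
c = [3, 1, 4, 1, 5]  # hypothetical coefficients c_1, ..., c_n

# left-hand side: coefficient of t^n in (1 + c_1 t + ... + c_n t^n)^N
poly, factor = [1], [1] + c
for _ in range(N):
    new = [0] * (n + 1)
    for i, a in enumerate(poly):
        for j, b in enumerate(factor):
            if i + j <= n:
                new[i + j] += a * b
    poly = new
lhs = poly[n]

# right-hand side: sum over partitions pi of n of the partition-indexed counts
rhs = 0
for pi in partitions_mult(n):
    k = sum(pi.values())  # total number of distinct factors used
    if k <= N:
        configs = factorial(N) // (factorial(N - k) * prod(factorial(m) for m in pi.values()))
        rhs += configs * prod(c[i - 1] ** m for i, m in pi.items())
print(lhs, rhs)
```

The equality of the two sides is an instance of the multinomial theorem; the geometric construction of $S^{\pi}\mathscr{X}$ refines this count into a variety.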
The Euler product~(\ref{intro.eulerproduct}) will thus be introduced in section~\ref{eulerprod} of chapter~\ref{eulerproducts} first as a~\textit{notation} for the series $$1 + \sum_{n\geq 1} \left(\sum_{\substack{\pi\ \text{partition}\\ \text{of}\ n}}[S^{\pi}\mathscr{X}]\right)t^n\in \mathrm{KVar}_{k}[[t]].$$ By showing various properties of the geometric construction we just described, we will indicate how one can do computations with this notion of Euler product. For example, we will prove the multiplicativity property $$\prod_{x\in X}\left(1 + X_{1,x}t + X_{2,x}t^2 + \ldots \right)$$ $$= \prod_{x\in U}\left(1 + X_{1,x}t + X_{2,x}t^2 + \ldots \right)\prod_{x\in Y}\left(1 + X_{1,x}t + X_{2,x}t^2 + \ldots \right)$$ for any closed subscheme~$Y$ of~$X$ with open complement~$U$, which shows that we have indeed defined something which behaves like a product. An important example of a series with coefficients in $\mathrm{KVar}_k$ is Kapranov's zeta function, introduced in~\cite{Kapr}. For a quasi-projective variety~$X$ over $k$, we denote for every $n\geq 0$ by~$S^nX$ its $n$-th symmetric power $X^n/\mathfrak{S}_n$, which is also a quasi-projective variety, and we define $$Z_X(t) = \sum_{n\geq 0} [S^nX] t^n \in \mathrm{KVar}_k[[t]].$$ It is the motivic analogue of the function~$\zeta_X(s)$ considered above, in the sense that for finite~$k$ it specialises to the latter via the counting measure. Its decomposition as a motivic Euler product is given by $$Z_X(t) = \prod_{x\in X}(1 + t + t^2 + \ldots ) = \prod_{x\in X} \frac{1}{1-t},$$ which is the motivic analogue of the Euler product decomposition~(\ref{produiteulerien}) of $\zeta_X(s)$. Note that writing~$Z_X(t)$ in this way was already within the scope of the notion of \textit{motivic power} due to Gusein-Zade, Luengo and Melle (\cite{gusein}), of which our Euler products are a generalisation.
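As a concrete check of the specialisation, for $X=\mathbf{P}^1$ one has $S^n\mathbf{P}^1\simeq\mathbf{P}^n$, so over a finite field the counting measure sends $Z_{\mathbf{P}^1}(t)$ to $\sum_{n\geq 0}\#\mathbf{P}^n(\mathbf{F}_q)\,t^n$, while the Euler product becomes the product over the closed points of $\mathbf{P}^1$ (the generating series of effective divisors). The following sketch, with the arbitrary choices $q=2$ and truncation at degree $12$, verifies that both computations agree coefficient by coefficient:

```python
from math import comb

def mobius(n):
    # Möbius function computed by trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

q, N = 2, 12

def closed_points_P1(d):
    # closed points of degree d on P^1: monic irreducibles of degree d on A^1,
    # plus the point at infinity in degree 1
    irred = sum(mobius(e) * q ** (d // e) for e in range(1, d + 1) if d % e == 0) // d
    return irred + (1 if d == 1 else 0)

# expand the Euler product of (1 - t^d)^{-b_d} over closed points, up to degree N
series = [1] + [0] * N
for d in range(1, N + 1):
    b = closed_points_P1(d)
    factor = [0] * (N + 1)
    for m in range(N // d + 1):
        factor[d * m] = comb(b + m - 1, m)  # coefficients of (1 - t^d)^{-b}
    series = [sum(series[i] * factor[k - i] for i in range(k + 1)) for k in range(N + 1)]

# counting measure applied to [S^n P^1] = [P^n]: 1 + q + ... + q^n points
expected = [(q ** (n + 1) - 1) // (q - 1) for n in range(N + 1)]
print(series == expected)
```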
\subsection{Hrushovski and Kazhdan's Poisson summation} Hrushovski and Kazhdan's Poisson summation formula, proved in~\cite{HK}, is a motivic analogue of (a weakened version of) the Poisson formula over the adeles of a number field which intervenes in the proof of Chambert-Loir and Tschinkel's theorem explained in section~\ref{sect.maninharmonique}. First of all, to be able to do Fourier analysis in a motivic setting, we need to work in a larger Grothendieck ring, the Grothendieck ring of varieties with exponentials~$\mathrm{KExpVar}_k$ of the field~$k$. It is given as the quotient of the free abelian group on isomorphism classes of pairs~$(X,f)$ with~$X$ a variety over~$k$ and $f:X\to \mathbf{A}^1$ a morphism, by cut-and-paste relations similar to those of the classical Grothendieck ring~$\mathrm{KVar}_k$, as well as by the additional relation \begin{equation}\label{relationsuppl}[X\times \mathbf{A}^1,\mathrm{pr}_2] = 0\end{equation} for every variety~$X$ over~$k$, with $\mathrm{pr}_2:X\times \mathbf{A}^1\to \mathbf{A}^1$ the second projection. One can also define a product on~$\mathrm{KExpVar}_k$, and in the case where the field~$k$ is finite, the counting measure on~$\mathrm{KVar}_k$ extends, for any non-trivial character $\psi:k\to\mathbf{C}^*$ of the field $k$, to a motivic measure \begin{equation}\label{expmotmeasure}\begin{array}{ccc}\mathrm{KExpVar}_k& \to & \mathbf{C}\\ \left[X,f\right] & \mapsto & \sum_{x\in X(k)}\psi(f(x)). \end{array}\end{equation} Relation~(\ref{relationsuppl}) translates the fact that, the character~$\psi$ being non-trivial, we have $$\sum_{x\in k}\psi(x) = 0,$$ a property which is essential for Fourier analysis to work. The ``motivic'' functions that we will consider will have values in the ring~$\mathrm{KExpVar}_k$, or rather in its localisation~$\mathscr{E}xp\mathscr{M}_k$ obtained by inverting the class $[\mathbf{A}^1,0]$.
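Over a finite field, the content of relation~(\ref{relationsuppl}) and the behaviour of the measure~(\ref{expmotmeasure}) can be illustrated numerically (the prime $p=7$ and the squaring map below are arbitrary choices): summing a non-trivial character over the affine line gives $0$, whereas the class $[\mathbf{A}^1, x\mapsto x^2]$ realises a quadratic Gauss sum, of modulus $\sqrt{p}$.

```python
from cmath import exp, pi

p = 7  # an arbitrary odd prime; we work with additive characters of F_p
psi = lambda a: exp(2j * pi * (a % p) / p)  # a fixed non-trivial character

# relation (relationsuppl): the class [X x A^1, pr_2] dies because a
# non-trivial character summed over the affine line gives zero
line_sum = sum(psi(x) for x in range(p))

# a genuinely oscillating class: [A^1, x -> x^2] realises a quadratic
# Gauss sum, whose modulus is sqrt(p)
gauss = sum(psi(x * x) for x in range(p))
print(abs(line_sum), abs(gauss))
```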
The natural map $\mathrm{KVar}_k\to \mathrm{KExpVar}_k$ given by $[X]\mapsto [X,0]$ is an injective ring morphism. Though it holds under much more general hypotheses, the classical Poisson formula on the adeles of a global field~$F$ is often stated for~\textit{Schwartz-Bruhat functions}. These are linear combinations of functions $f:\mathbb{A}_F\to \mathbf{C}$ which can be written as a product $$f = \prod_{v}f_v$$ such that for every place~$v$, $f_v$ is a local Schwartz-Bruhat function $F_v\to \mathbf{C}$ on the completion~$F_v$ of the global field~$F$ at~$v$ (i.e. smooth and rapidly decreasing if~$v$ is archimedean, locally constant and compactly supported if~$v$ is non-archimedean), equal to the characteristic function~$\1_{\mathcal{O}_v}$ of the ring of integers of~$F_v$ for almost all non-archimedean places. Hrushovski and Kazhdan define a geometric analogue of a Schwartz-Bruhat function on the adeles of a function field (where there are only non-archimedean places). For a local Schwartz-Bruhat (that is, locally constant and compactly supported) function $f:F\to \mathbf{C}$ defined on a non-archimedean local field~$F$ (with ring of integers~$\mathcal{O}$, uniformiser~$t$ and residue field~$k$), by local compactness we can find integers $M,N\geq 0$ such that~$f$ is zero outside of $t^{-M}\mathcal{O}$, and invariant modulo the subgroup $t^N\mathcal{O}$. As a consequence, such a function may be seen as a function on the quotient $t^{-M}\mathcal{O}/t^{N}\mathcal{O}$, which happens to be a $k$-vector space of dimension $M+N$. Thus, in the motivic setting, a local Schwartz-Bruhat function (of level $(-M,N)$) will be a function defined on an affine space $\mathbf{A}_k^{M+N}$ (denoted by $\mathbf{A}_k^{(-M,N)}$ to keep track of the values of~$M$ and~$N$) and with values in the ring~$\mathscr{E}xp\mathscr{M}_k$.
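A minimal computational model of this identification (purely illustrative, over a hypothetical residue field $\mathbf{F}_p$ with small values of $M$ and $N$): elements of $t^{-M}\mathcal{O}/t^{N}\mathcal{O}$ are represented by their coefficient vectors of length $M+N$, and a level-$(-M,N)$ function such as the characteristic function of $\mathcal{O}$ depends only on the $M$ polar coefficients.

```python
from itertools import product

p, M, N = 3, 2, 2  # hypothetical residue field size and level (-M, N)

# model t^{-M}O / t^N O over F_p by coefficient vectors (a_{-M}, ..., a_{N-1}):
# an F_p-vector space of dimension M + N, i.e. the k-points of A^{M+N}
vectors = list(product(range(p), repeat=M + N))

def indicator_O(v):
    # characteristic function of O: the polar coefficients a_{-M}, ..., a_{-1} vanish
    return 1 if all(a == 0 for a in v[:M]) else 0

# invariance modulo t^N O is automatic in this model: coefficients in degree
# >= N are simply not represented, so no function can depend on them
support_size = sum(indicator_O(v) for v in vectors)
print(len(vectors), support_size)  # p^(M+N) vectors, p^N of them lying in O
```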
More precisely, such functions are introduced as elements of the \textit{relative} Grothendieck ring $\mathscr{E}xp\mathscr{M}_{\mathbf{A}_k^{(-M,N)}}$. This construction can also be performed to produce a motivic analogue of locally constant and compactly supported functions defined on a finite product of local fields (rather than only one local field): these are Hrushovski and Kazhdan's motivic Schwartz-Bruhat functions, and their Poisson summation formula holds for these functions. Thus, this formula is the analogue of the Poisson summation formula over the adeles of a number field~$F$ for classical Schwartz-Bruhat functions. As we have mentioned above, this formula was sufficient for the purposes of Chambert-Loir and Loeser's work~\cite{CL}, because the height function for the counting problem they considered satisfied this very restrictive hypothesis. In the general case considered here, this is no longer true: the poles of the sections we are counting are not contained in a fixed finite set, and we therefore need to show how Hrushovski and Kazhdan's Poisson formula may be applied in families, letting the locus of the poles of the sections vary; this is done using the above notion of symmetric product. To explain this in a simple special case, let us consider, for every integer~$i\geq 1$ and for any integers $M_{i}, N_i\geq 0$, the variety $$\mathbf{A}_C^{(-M_i,N_i)} := C\times \mathbf{A}_k^{(-M_i,N_i)}$$ above the curve~$C$. We can define, for any integer~$m\geq 0$, the symmetric product $$S^{m}((\mathbf{A}_C^{(-M_i,N_i)})_{i\geq 1})$$ of the family $\left(\mathbf{A}_C^{(-M_i,N_i)}\right)_{i\geq 1}$. This symmetric product is naturally endowed with a morphism to the symmetric power~$S^{m}C$.
We observe that for every effective zero-cycle $D = \sum_{v}m_vv\in S^{m}C(k)$, the fibre of $S^{m}((\mathbf{A}_C^{(-M_i,N_i)})_{i\geq 1})$ above $D$ is of the form $$\prod_{v\in C}\mathbf{A}_k^{(-M_{m_v},N_{m_v})},$$ and that it therefore may be seen as the domain of definition of a motivic Schwartz-Bruhat function, with support and invariance controlled by the zero-cycles $$-\sum_{v}M_{m_v}v\ \ \ \text{and}\ \ \ \sum_{v}N_{m_v} v.$$ Thus, making the zero-cycle~$D$ vary, we can in this way parametrise functions with varying supports (if the $M_i$ are sufficiently large) and varying invariance domains (if the $N_i$ are sufficiently large). The general construction in chapter \ref{poissonformula} is slightly more elaborate because it allows additional parameters, but the main idea is exactly the same. We then show that all the operations of Hrushovski and Kazhdan's theory can be performed in families over the base $S^mC$ for every $m\geq 1$, and check in particular the validity of the Poisson formula in this setting. \subsection{Weight filtration and convergence} Let us now go back to the motivic height zeta function~(\ref{def.fonctionzeta}). Thanks to a decomposition of the moduli spaces~$M_d$ according to the poles and the zeroes of the sections they parametrise, we can apply to $Z(T)$ our generalised Poisson formula from the previous paragraph. This enables us, analogously to what has been explained in section~\ref{sect.maninharmonique}, to rewrite the motivic height zeta function in the form \begin{equation}\label{zetafunctionpoisson}Z(T) = \sum_{\xi\in k(C)^n} Z(T,\xi)\end{equation} for some series~$Z(T,\xi)$ with coefficients in~$\mathscr{E}xp\mathscr{M}_{k}$, each having an Euler product decomposition.
We now come to the question of the convergence of these Euler products: as in Chambert-Loir and Tschinkel's work, we wish to prove that the series~$Z(T,0)$ is the only term in the right-hand side of~(\ref{zetafunctionpoisson}) responsible for the first pole of the function~$Z(T)$ at~$\mathbf{L}^{-1}$, and to extend it meromorphically beyond this pole. We mentioned earlier that the dimensional filtration on the Grothendieck ring of varieties will not give us the expected convergence for the function~$Z(T)$. To understand this, let us go back to the bounds established by Chambert-Loir and Tschinkel: for almost all local factors, their calculations feature differences of the form $\#D_{\alpha}(\mathbf{F}_q) - q^{n-1}$ for each irreducible component~$D_{\alpha}$ of the divisor at infinity~$D$, $q$ a prime power, and $n$ the dimension of~$X$. To obtain the desired convergence, it is crucial to bound these differences using the Lang-Weil estimates~\cite{LW}: \begin{equation}\label{LangWeil.ineq}\left\vert \#D_{\alpha}(\mathbf{F}_q) - q^{n-1}\right\vert\leq cq^{n -\frac{3}{2}},\end{equation} for some constant $c>0$. In the motivic setting, the calculations are completely analogous, and we therefore find ourselves naturally with the same kind of difference, namely $$[D_{\alpha}] - \mathbf{L}^{n-1}\in \mathscr{M}_{k}.$$ In general, the dimension of this element of~$\mathscr{M}_k$ is $n-1$: for example, if $D_{\alpha}$ is a smooth projective curve of genus $g\geq 1$ (so that $n=2$), then the Hodge-Deligne polynomial $$HD([D_{\alpha}] - \mathbf{L}^{n-1}) = (1 + gu + gv + uv) - uv = 1 + gu + gv$$ is of degree 1, which shows that $[D_{\alpha}] - \mathbf{L}^{n-1}$ cannot be a linear combination of elements of dimension $\leq 0$. The dimensional filtration therefore gives a weaker bound than the one obtained via the Lang-Weil estimates in the arithmetic case, which leads us to use a finer topology on the Grothendieck ring of varieties, based on Hodge theory.
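The cancellation behind this choice of topology is a two-line computation with Hodge-Deligne polynomials, whose total degree reflects the weight; a sketch with a hypothetical genus $g$, representing polynomials in $u,v$ as coefficient dictionaries:

```python
def hd_degree(poly):
    # total degree of a Hodge-Deligne polynomial given as {(i, j): coeff}
    return max(i + j for (i, j), coeff in poly.items() if coeff != 0)

g, n = 3, 2  # D_alpha a smooth projective curve of genus g inside a surface (n = 2)

HD_curve = {(0, 0): 1, (1, 0): g, (0, 1): g, (1, 1): 1}  # 1 + g*u + g*v + u*v
HD_L_pow = {(n - 1, n - 1): 1}                           # HD(L^{n-1}) = (u*v)^{n-1}

# HD([D_alpha] - L^{n-1}): the top-degree terms u*v cancel out
difference = dict(HD_curve)
for key, coeff in HD_L_pow.items():
    difference[key] = difference.get(key, 0) - coeff

print(hd_degree(HD_curve), hd_degree(difference), 2 * n - 3)
```

The drop in degree from $2$ to $1 = 2(n-\tfrac32)$ is the motivic counterpart of the square-root saving in the Lang-Weil bound.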
Let us temporarily assume that the base field $k$ is the field $\mathbf{C}$ of complex numbers. There exists a motivic measure $$\chi^{\mathrm{Hdg}}: \mathscr{M}_{\mathbf{C}}\to K_0(HS),$$ called the \textit{Hodge realisation}, with values in the Grothendieck ring of Hodge structures, which to a complex variety~$Y$ associates \begin{equation}\label{defchi}\chi^{\mathrm{Hdg}}(Y) = \sum_{i=0}^{2\dim Y} (-1)^i[H^{i}_c(Y(\mathbf{C}),\mathbf{Q})],\end{equation} where $[H^{i}_c(Y(\mathbf{C}),\mathbf{Q})]$ is the class in $K_0(HS)$ of the mixed Hodge structure on the $i$-th singular cohomology group with compact supports of~$Y(\mathbf{C})$. There is a natural increasing weight filtration $(W_{\leq n}K_0(HS))_{n\in \mathbf{Z}}$ on $K_0(HS)$, given by defining $W_{\leq n}K_0(HS)$ as the subgroup of~$K_0(HS)$ generated by classes of pure Hodge structures of weights~$\leq n$. For an element $\a\in\mathscr{M}_{\mathbf{C}}$, we define its weight by $$w(\a) = \inf\{n\in \mathbf{Z} \mid \chi^{\mathrm{Hdg}}(\a) \in W_{\leq n}K_0(HS)\}.$$ The formula (\ref{defchi}) giving $\chi^{\mathrm{Hdg}}$ then implies directly that for the class of the variety~$Y$, we have $w([Y]) = 2\dim Y$. More precisely, we have $$H^{2\dim Y}_c (Y(\mathbf{C}),\mathbf{Q}) \simeq \mathbf{Q}(-\dim Y)^{\kappa(Y)},$$ where $\kappa(Y)$ is the number of irreducible components of maximal dimension of~$Y$ and $\mathbf{Q}(-\dim Y)$ is the unique pure Hodge structure of weight~$2\dim Y$ and dimension~1. Thus, we observe that in the expression of $$\chi^{\mathrm{Hdg}}([D_{\alpha}] - \mathbf{L}^{n-1})$$ the terms corresponding to cohomology groups of maximal degree cancel out, and that as a consequence, $$w([D_{\alpha}] - \mathbf{L}^{n-1}) \leq 2n-3 = 2\left(n-\frac32\right).$$ This bound is the analogue, in the motivic setting, of inequality (\ref{LangWeil.ineq}).
\subsection{Motivic vanishing cycles} Recall that in the decomposition~(\ref{zetafunctionpoisson}) given by the motivic Poisson formula, the series~$Z(T,\xi)$ have coefficients in the Grothendieck ring of varieties with exponentials~$\mathscr{E}xp\mathscr{M}_{\mathbf{C}}$ (we still assume~$k=\mathbf{C}$). Thus, we need to extend the weight topology, defined on~$\mathscr{M}_{\mathbf{C}}$ in the previous paragraph, to the ring~$\mathscr{E}xp\mathscr{M}_{\mathbf{C}}$. Moreover, to preserve the analogy with the arithmetic case, such an extended weight function should satisfy the \textit{triangle inequality}: for any complex variety~$X$ with a morphism $f:X\to \mathbf{A}^1$, we would like to have \begin{equation}\label{eq.triangular}w([X,f])\leq w([X]).\end{equation} Via the motivic measure~(\ref{expmotmeasure}) on the Grothendieck ring of varieties with exponentials, we see that inequality~(\ref{eq.triangular}) is the motivic analogue of the inequality $$\left\vert\sum_{x\in X(\mathbf{F}_q)}\psi(f(x)) \right\vert\leq \# X(\mathbf{F}_q)$$ coming from the classical triangle inequality, for a variety~$X$ over~$\mathbf{F}_q$, a morphism $f:X\to \mathbf{A}^1$ and a non-trivial character $\psi:\mathbf{F}_q\to \mathbf{C}^*$. The solution to this problem uses the \textit{motivic vanishing cycles} of Denef and Loeser. For a smooth variety~$X$ over a field~$k$ of characteristic zero, and a morphism $f:X\to \mathbf{A}^1_{k}$, the motivic vanishing cycles class~$\phi_{f,a}$ of $f$ at~$a\in k$ is an element of the localised Grothendieck ring $\mathscr{M}_{f^{-1}(a)}^{\hat{\mu}}$ of varieties above the fibre $f^{-1}(a)\subset X$, with action of the group~$\hat{\mu}$, the projective limit of the groups of $n$-th roots of unity for~$n\geq 1$.
When the fibre $f^{-1}(a)$ is nowhere dense in~$X$, Denef and Loeser give a formula for computing~$\phi_{f,a}$ in terms of the components of the exceptional divisor of a log-resolution of the pair $(X,f^{-1}(a))$. Using the works of Denef and Loeser on this subject, as well as Guibert, Loeser and Merle's paper~\cite{GLM}, Lunts and Schnürer proved in~\cite{LS} that in the case where~$k$ is algebraically closed of characteristic zero, combining the motivic vanishing cycles at all the points of $k$, one can define a ring morphism, called the motivic vanishing cycles measure, $$\Phi:\mathscr{E}xp\mathscr{M}_{k}\to (\mathscr{M}_{k}^{\hat{\mu}},\ast)$$ where $\ast$ is a \textit{convolution product}, the definition of which is due to Looijenga, Denef and Loeser. For a class $[X,f]$ with~$X$ smooth and $f:X\to \mathbf{A}^1_{k}$ proper, we have $$\Phi([X,f]) = \sum_{a\in k}f_!\phi_{f,a}.$$ The main ingredient in this proof is a Thom-Sebastiani theorem for motivic vanishing cycles, due to Denef and Loeser, which is the motivic analogue of the corresponding theorem in the classical theory of vanishing cycles. In chapter~\ref{grothrings} of this text, we extend the definition of the measure~$\Phi$ to fields~$k$ of characteristic zero which are not necessarily algebraically closed, and then to the relative setting above an arbitrary $k$-variety~$S$. Motivic vanishing cycles thus give us a way to go from $\mathscr{E}xp\mathscr{M}_{\mathbf{C}}$ to $\mathscr{M}_{\mathbf{C}}^{\hat{\mu}}$. To extend the weight filtration to the ring $\mathscr{E}xp\mathscr{M}_{\mathbf{C}}$, we then use the fact that the Hodge realisation mentioned above generalises to a morphism \begin{equation}\label{realhodge}\mathscr{M}_{\mathbf{C}}^{\hat{\mu}}\to K_0(HS^{\mathrm{mon}})\end{equation} from the Grothendieck group of complex varieties with $\hat{\mu}$-action to the Grothendieck group of Hodge structures with action of a linear finite order operator (the \textit{monodromy} operator).
This morphism becomes a ring morphism when both groups are endowed with appropriate products. Composing this generalised Hodge realisation with the morphism~$\Phi$ then amounts to looking at the Hodge structure on the vanishing cycles (in the classical sense), with the natural action of the monodromy around the given point. We then extend the weight filtration to the group $K_0(HS^{\mathrm{mon}})$, defining $W_{\leq n}K_0(HS^{\mathrm{mon}})$ for every $n\in\mathbf{Z}$ as the subgroup generated by the pure Hodge structures of weight $\leq n$ and trivial monodromy, as well as by the pure Hodge structures of weight $\leq n-1$ with non-trivial monodromy. Using Denef and Loeser's formula, one can see that the triangle inequality is then satisfied. The last difficulty is the study of the convergence of Euler products using the weight filtration, which requires the above results in the relative setting: in chapter~\ref{hodgemodules} of this text we will therefore use our motivic vanishing cycles measure above a base variety~$S$, as well as Saito's theory of mixed Hodge modules. \section{Geometric setting}\label{sectGeometry} \subsection{Equivariant compactifications}\label{sect.equivariant} Let $k$ be an algebraically closed field of characteristic zero. Let $C_0$\index{C0@$C_0$} be a smooth quasi-projective curve over~$k$, $C$ be its smooth projective compactification, and $S = C\setminus C_0$. We denote by $F = k(C)$ the function field of $C$, by $g$ its genus, and by $G$ the additive group scheme~$\mathbf{G}^{n}_{a}$. A smooth projective \emph{equivariant compactification} \index{equivariant compactification} of $G_F$ is a smooth projective $F$-scheme $X$ containing $G_F$ \index{GF@$G_F$} as a dense open subset, and such that the group law $G_F\times G_F\longrightarrow G_F$ extends to a group action $G_F\times X\longrightarrow X$. The geometry of such varieties has been investigated in \cite{HT}.
We summarise here the main facts that will be used in this chapter. \begin{prop} Let $X$ be a smooth projective equivariant compactification of~$G_{F}$. \begin{enumerate} \item The boundary $X\setminus G_F$ is a divisor. \item The Picard group of~$X$ is freely generated by the irreducible components $(D_{\alpha})_{\alpha\in\mathscr{A}}$ of $X\setminus G_F$. \index{A@$\mathscr{A}$}\index{Da@$D_{\alpha}$} \item The closed cone of effective divisors $\Lambda_{\mathrm{eff}}(X) \subset \mathrm{Pic}(X)_{\mathbf{R}}$ is given by $$\Lambda_{\mathrm{eff}}(X) = \bigoplus_{\alpha\in \mathscr{A}}\mathbf{R}_{\geq 0}D_{\alpha}.$$ \item Up to multiplication by a scalar, there is a unique $G_F$-invariant meromorphic differential form~$\omega_X$ on $X$. Its restriction to $G_F$ is proportional to the form $\mathrm{d} x_1\wedge \ldots \wedge \mathrm{d} x_n$. \item There exist integers $\rho_{\alpha}\geq 2$ such that the divisor \index{omega@$\omega_X$} $-\div(\omega_X)$ is given by $$-\div(\omega_X) = \sum_{\alpha\in\mathscr{A}}\rho_{\alpha}D_{\alpha}.$$ \index{rho@$\rho_{\alpha}$} \end{enumerate} \end{prop} We will moreover assume that the divisor $X\setminus G_F$ has strict normal crossings. This means that the components $D_{\alpha}$ are geometrically irreducible and smooth, and that they meet transversally. \subsection{Partial compactifications} A \textit{partial compactification} \index{partial compactification} \index{equivariant compactification!partial} of $G_F$ is a smooth quasi-projective scheme~$U$, containing $G_F$ as an open subset, and endowed with an action of $G_F$ which extends the group law $G_F\times G_F\longrightarrow G_F$.
If $U$ is an open subscheme of a projective smooth equivariant compactification~$X$ of $G_F$, globally invariant under the action of $G_F$, with complement a reduced divisor $D=X\setminus U$, we denote by $\mathscr{A}_D$ the subset of $\mathscr{A}$ such that $$D = \sum_{\alpha\in \mathscr{A}_D}D_{\alpha},$$\index{AD@$\mathscr{A}_D$} where $(D_{\alpha})_{\alpha\in\mathscr{A}}$ are the irreducible components of $X\setminus G_F$. The log-anticanonical class \index{log-anticanonical class} of the partial compactification $U$ is the class of the divisor $$-K_X(D) = -(K_X + D) = \sum_{\alpha\in\mathscr{A}_D}(\rho_{\alpha}-1)D_{\alpha} + \sum_{\alpha\not\in\mathscr{A}_D}\rho_{\alpha}D_{\alpha} = \sum_{\alpha\in\mathscr{A}}\rho'_{\alpha}D_{\alpha},$$ \index{KXD@$K_X(D)$} \index{KX@$K_X$} \index{rhop@$\rho'_{\alpha}$} where $\rho'_{\alpha} = \rho_{\alpha}-1$ for $\alpha\in\mathscr{A}_D$, and $\rho_{\alpha}$ otherwise. Since $\rho_{\alpha} \geq 2$ for every $\alpha$, the coefficients $\rho'_{\alpha}$ are all positive, so the log-anticanonical class belongs to the interior of the effective cone of $X$; in particular, it is big. \subsection{Choice of a good model}\label{sect.goodmodelchoice} Let $k, C_0, C, F$ be as in section \ref{sect.equivariant}. We now assume we are in the situation described in the introduction, namely that we are given a projective irreducible $k$-scheme $\mathscr{X}$ \index{X@$\mathscr{X}$} together with a non-constant morphism $\pi:\mathscr{X}\to C$, a Zariski open subset $\mathscr{U}$ of $\mathscr{X}$, and a line bundle $\mathscr{L}$ on $\mathscr{X}$. We make the following assumptions on the generic fibres $X = \mathscr{X}_F$ and $U = \mathscr{U}_{F}$: \begin{itemize}\item $X$ is a smooth equivariant compactification of $G_F$, and $U$ is a partial compactification of $G_F.$ \item the boundary $D= X\setminus U$ is a strict normal crossings divisor. \item the restriction $L$ of the line bundle $\mathscr{L}$ to $X$ is the log-anticanonical line bundle $-K_X(D)$.
\end{itemize} We also assume that for all $v\in C_0$ we have $\mathbf{G}(F_v)\cap \mathscr{U}(\mathcal{O}_v) \neq \varnothing$, where $F_v$ is the completion of~$F$ at $v$, and $\mathcal{O}_v$ its ring of integers. According to lemma 3.4.1 in \cite{CL}, up to modifying the models without changing this hypothesis or the counting problem we are going to deal with, we may assume additionally that in fact~$\mathscr{X}$ is a good model, \index{good model} that is, it is smooth over $k$ and the sum of the non-smooth fibres of $\mathscr{X}$ and of the closures~$\mathscr{D}_{\alpha}$ \index{dalpha@$\mathscr{D}_{\alpha}$} of the irreducible components $D_{\alpha}$ of $X\setminus G_F$ is a divisor with strict normal crossings on~$\mathscr{X}$. Moreover, we may assume that $\mathscr{U}$ is the complement in $\mathscr{X}$ of a divisor with strict normal crossings. We will make use of the notations concerning equivariant compactifications introduced in section \ref{sect.equivariant}. \subsection{Vertical components} \index{vertical components} For every $v\in C(k)$, we write $\mathscr{B}_v$ for the set of irreducible components of $\pi^{-1}(v)$, and $\mathscr{B}$ for the disjoint union of all $\mathscr{B}_v$, $v\in C(k)$. \index{B@$\mathscr{B}$, $\mathscr{B}_v$} For $\beta\in \mathscr{B}_v$, we denote by $E_{\beta}$ \index{Eb@$E_{\beta}$} the corresponding component, and by $\mu_{\beta}$ \index{mub@$\mu_{\beta}$} its multiplicity in the special fibre of $\mathscr{X}$ at $v$. The line bundle $\mathscr{L}$ is of the form $\sum_{\alpha\in\mathscr{A}}\rho'_{\alpha}\mathscr{L}_{\alpha}$ where for every $\alpha$, the line bundle $\mathscr{L}_{\alpha}$ on~$\mathscr{X}$ extends $D_{\alpha}$. We may write: $$\mathscr{L}_{\alpha} = \mathscr{D}_{\alpha} + \sum_{\beta\in \mathscr{B}}e_{\alpha,\beta}E_{\beta}$$\index{Lalpha@$\mathscr{L}_{\alpha}$} where the integers $e_{\alpha,\beta}$ are zero for almost all $\beta$.
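For instance (an illustrative special case), if $\mathscr{X} = X\times_k C$ is a constant model and $\pi$ is the second projection, then all the fibres of $\pi$ are irreducible and smooth of multiplicity one, each $\mathscr{B}_v$ is a singleton, and with the choice $\mathscr{L}_{\alpha} = \mathscr{D}_{\alpha} = D_{\alpha}\times_k C$ all the integers $e_{\alpha,\beta}$ vanish. In general, the multiplicities $\mu_{\beta}$ and the integers $e_{\alpha,\beta}$ measure the failure of the chosen model to be of this product form.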
We also define integers $\rho_{\beta}$ such that $$-\div(\omega_X) = \sum_{\alpha}\rho_{\alpha}\mathscr{D}_{\alpha} + \sum_{\beta\in\mathscr{B}}\rho_{\beta}E_{\beta}$$ where $\omega_X$ is seen as a meromorphic section of $K_{\mathscr{X}/C}$. \subsection{Weak Néron models}\label{Neron} \index{weak Néron model} Let $\mathscr{B}_1$ \index{B1@$\mathscr{B}_1$, $\mathscr{B}_{1,v}$} be the subset of $\mathscr{B}$ consisting of those $\beta$ for which the multiplicity $\mu_{\beta}$ is equal to~1. Let $\mathscr{B}_{1,v} = \mathscr{B}_1\cap \mathscr{B}_v$. Let $\mathscr{X}_1$ be the complement in $\mathscr{X}$ of the union of the components $E_{\beta}$ for $\beta\in \mathscr{B}\setminus\mathscr{B}_1$ as well as of the intersections of distinct vertical components. It is a smooth scheme over~$C$. By lemma 3.2.1 in \cite{CL}, the $C$-scheme $\mathscr{X}_1$ is a weak Néron model of $X$. This means that for every smooth $C$-scheme~$\mathscr{Z}$, the canonical map $$\mathrm{Hom}_C(\mathscr{Z},\mathscr{X}_1)\to \mathrm{Hom}_F(\mathscr{Z}_F,X)$$ is a bijection. Applying this to $\mathscr{Z} = C$, we see that in particular, any rational point $g:\mathrm{Spec}\, F\to X$ extends to a section $\sigma_g:C\to \mathscr{X}_1$ with image inside $\mathscr{X}_1$. \section{Height zeta functions} \label{sect.heightzeta} \subsection{Definition} \label{sect.heightzetadef} Let $\lambda =(\lambda_{\alpha})_{\alpha\in\mathscr{A}}$ be a family of positive integers, and let $$\mathscr{L}_{\lambda} = \sum_{\alpha\in\mathscr{A}}\lambda_{\alpha}\mathscr{L}_{\alpha}$$ be the corresponding line bundle on the good model $\mathscr{X}$. For every integer $n\in\mathbf{Z}$ and any $\mathbf{n} = (n_{\alpha})_{\alpha\in \mathscr{A}}\in\mathbf{Z}^{\mathscr{A}}$ let $M_{U,n}$ be the moduli space of sections $\sigma:C\to\mathscr{X}$ such that \begin{itemize} \item the section $\sigma$ maps the generic point $\eta_C$ of $C$ to a point of $G_F$. 
\item it represents an $S$-integral point of $U$, that is, $\sigma(C_0)\subset \mathscr{U}$. \item $\deg_{C}(\sigma^*\mathscr{L}_{\lambda}) = n$ \end{itemize} and $M_{U,\mathbf{n}}$ the moduli space of sections $\sigma:C\to\mathscr{X}$ such that \index{moduli space of sections} \index{MUn@$M_{U,n}$, $M_{U,\mathbf{n}}$} \begin{itemize} \item the section $\sigma$ maps the generic point $\eta_C$ of $C$ to a point of $G_F$. \item it represents an $S$-integral point of $U$, that is, $\sigma(C_0)\subset \mathscr{U}$. \item for all $\alpha\in \mathscr{A}$, $\deg_{C}(\sigma^*\mathscr{L}_{\alpha}) = n_{\alpha}.$ \end{itemize} According to Proposition 2.2.2 in \cite{CL}, these moduli spaces exist as constructible sets over $k$, and there exists an integer $m$ such that $M_{U,n} $ (resp. $M_{U,\mathbf{n}}$) is empty when $n<m$ (resp. when $n_{\alpha}<m$ for all $\alpha\in\mathscr{A}$). Moreover, $M_{U,n}$ can be seen as the disjoint union of all $M_{U,\mathbf{n}}$ such that $\sum_{\alpha\in\mathscr{A}}\lambda_{\alpha}n_{\alpha} = n$. 
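To fix ideas, suppose for instance that $\mathscr{A} = \{1,2\}$ and $\lambda = (1,2)$; then $\deg_{C}(\sigma^*\mathscr{L}_{\lambda}) = n_1 + 2n_2$ for any section $\sigma$ with $\deg_{C}(\sigma^*\mathscr{L}_{\alpha}) = n_{\alpha}$, so that $M_{U,3}$ is the disjoint union of the spaces $M_{U,(n_1,n_2)}$ with $n_1 + 2n_2 = 3$, such as $(3,0)$, $(1,1)$ or $(-1,2)$, almost all of which are empty.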
The multivariate motivic height zeta function is given by $$Z(\mathbf{T}) = \sum_{\mathbf{n}\in\mathbf{Z}^{\mathscr{A}}}[M_{U,\mathbf{n}}]\mathbf{T}^{\mathbf{n}}\in\mathscr{M}_k[[(T_{\alpha})_{\alpha\in\mathscr{A}}]]\left[\prod_{\alpha\in\mathscr{A}}T_{\alpha}^{-1}\right],$$ \index{ZT@$Z(\mathbf{T})$, $Z(T)$, motivic height zeta function}\index{motivic height zeta function!multivariate} and the motivic height zeta function associated to the line bundle $\mathscr{L}$, defined as $$Z_{\lambda}(T) = \sum_{n\in \mathbf{Z}}[M_{U,n}]T^n\in\mathscr{M}_k[[T]][T^{-1}],$$ can be written in the form $$Z_{\lambda}(T) = Z((T^{\lambda_{\alpha}})) = \sum_{m\in \mathbf{Z}}\left(\sum_{\substack{\mathbf{n}\in\mathbf{Z}^{\mathscr{A}}\\\sum_{\alpha}\lambda_{\alpha}n_{\alpha} = m}}[M_{U,\mathbf{n}}]\right)T^m.$$\index{motivic height zeta function!associated to line bundle}\index{Zl@$Z_{\lambda}(T)$} As stated in the introduction, we will investigate the case where $\mathscr{L} = \mathscr{L}_{\rho'}= \sum_{\alpha\in\mathscr{A}}\rho'_{\alpha}\mathscr{L}_{\alpha}$ is generically the log-anticanonical line bundle. Denote by $Z(T)$ the corresponding height zeta function. \subsection{Local intersection degrees} Let $v\in C(k)$. To every point $g\in G(F_v)$, one can attach local intersection degrees $(g,\mathscr{D}_{\alpha})_v\in\mathbf{N}$ for all $\alpha\in\mathscr{A}$ and $(g,E_{\beta})_v$ for all $\beta\in\mathscr{B}_v$ such that \index{local intersection degree} \begin{enumerate}\item For every $g\in G(F)$ and every $\alpha\in\mathscr{A}$, one has $$\deg_{C}(\sigma_g^*(\mathscr{D}_{\alpha})) = \sum_{v\in C(k)}(g,\mathscr{D}_{\alpha})_v$$\index{gd@$(g,\mathscr{D}_{\alpha})_v$} where $\sigma_g:C\to\mathscr{X}$ is the canonical section extending $g$. \item There is exactly one $\beta\in\mathscr{B}_v$ such that $(g,E_{\beta})_v = 1$, and this $\beta$ has multiplicity one. For any $\beta'\in \mathscr{B}_v$ different from this $\beta$, one has $(g,E_{\beta'})_v = 0$ (see section \ref{Neron}). 
\end{enumerate} We refer to \cite{CL}, 3.3 for details. \subsection{Decomposition of $G(F_v)$}\label{decomposition} We can decompose $G(F_v)$ into definable (in the Denef-Pas language, see section~2.1 of \cite{CluckLos}) bounded domains on which all the above intersection degrees are constant: for all $\mathbf{m}\in\mathbf{N}^{\mathscr{A}}$ and all $\beta\in\mathscr{B}_v$, we define $$G(\mathbf{m},\beta)_v =\{g\in G(F_v),\ (g,E_{\beta})_v = 1\ \text{and}\ (g,\mathscr{D}_{\alpha})_v = m_{\alpha}\ \text{for all}\ \alpha\in\mathscr{A}\}$$ and $G(\mathbf{m})_v = \cup_{\beta\in\mathscr{B}_v}G(\mathbf{m},\beta)_v$. \index{gmbeta@$G(\mathbf{m},\beta)_v$, $G(\mathbf{m})_v$} Lemma 3.3.2 in \cite{CL} says that for any $\mathbf{m}\in\mathbf{N}^{\mathscr{A}}$ and any $\beta\in\mathscr{B}_v$, the set $G(\mathbf{m},\beta)_v$ is a bounded definable subset of $G(F_v)$, and that $G(F_v)$ is the disjoint union of all the $G(\mathbf{m},\beta)_v$ for $\mathbf{m}\in\mathbf{N}^{\mathscr{A}}$ and $\beta\in\mathscr{B}_v$. Moreover, Lemmas 3.3.3 and 3.3.4 from \cite{CL} can be summarised in the following proposition: \begin{prop}\label{gm} There exists a dense open subset $C_1$ of $C_0$ such that for every $v\in C_1(k)$: \begin{enumerate}[(i)]\item The set $\mathscr{B}_v$ contains only one element $\beta_v$. \item The set $G(\mathbf{0},\beta_v)_v$ is equal to the subgroup $G(\mathcal{O}_v)$ of $G(F_v)$. \end{enumerate} Moreover, there is an almost zero function $s:C\to\mathbf{Z}$ such that for every $v\in C(k)$, $\mathbf{m}\in\mathbf{N}^{\mathscr{A}}$ and $\beta\in\mathscr{B}_v$, the set $G(\mathbf{m},\beta)_v$ is invariant under the subgroup $G(\mathfrak{m}_v^{s_v})$ of $G(\mathcal{O}_v)$, where $\mathfrak{m}_v$ is the maximal ideal of $\mathcal{O}_v$, and one can take $s_v=0$ for all $v\in C_1(k)$.
\end{prop} As a consequence of this, as in Corollary 3.3.5 from \cite{CL}, the characteristic function of each set $G(\mathbf{m},\beta)_v$ may be seen as a motivic Schwartz-Bruhat function on $G(F_v)$ in the sense of \ref{sect.localSB}, with $N=s_v$ and with $M$ such that $G(\mathbf{m},\beta)_v\subset (t^M\mathcal{O}_v)^n$ (which exists by boundedness). \begin{notation}\label{C1} In what follows, it will be convenient for us to consider an even smaller set~$C_1$. Namely, from now on $C_1$ denotes the open subset of places $v\in C_0$ satisfying \begin{itemize} \item the conditions in proposition \ref{gm}; \item $e_{\alpha,\beta_v} = 0$ for all $\alpha\in\mathscr{A}$; \item $\rho_{\beta_v} = 0$; \item for all $\alpha\in \mathscr{A}\setminus \mathscr{A}_D$, $\mathscr{D}_{\alpha}\times_C C_1\to C_1$ is smooth.\end{itemize}\end{notation}\index{C1@$C_1$} \begin{prop}\label{gmdeffamily} There exist almost zero functions $s$ and $s'$, an unbounded family $N = (N_{\mathbf{m}})_{\mathbf{m} \in \mathbf{N}^{\mathscr{A}}}$ with $N_{\mathbf{0}} = 0$, and for every $\mathbf{m}\in \mathbf{N}^{\mathscr{A}}$ and every $\beta\in \prod_{v}\mathscr{B}_v$, a constructible subset $G_{\mathbf{m},\beta}$ of $\mathbf{A}_C^{n(s'-N_{\mathbf{m}},s)}$, the fibre of which at every $v\in C(k)$ is exactly $G(\mathbf{m},\beta_v)_v$. \index{Gmbeta@$G_{\mathbf{m},\beta}$} \end{prop} \begin{proof} The definable boundedness of $G(\mathbf{m},\beta)_v$ means that there exists an almost zero function $s':C\to \mathbf{Z}$, and, for every $\mathbf{m}\in\mathbf{N}^{\mathscr{A}}$, a non-negative integer $N_{\mathbf{m}}$ such that $$G(\mathbf{m},\beta)_v\subset (t_v^{s'_v-N_{\mathbf{m}}}\mathcal{O}_v)^{n},$$ for all $v\in C$. Statement $(ii)$ in proposition \ref{gm} implies that we may take $N_{\mathbf{0}} = 0$. Since $G(F_v)$ is the union of all the $G(\mathbf{m},\beta)_v$, the family $(N_{\mathbf{m}})_{\mathbf{m}}$ is necessarily unbounded.
Taking the almost zero function $s$ from proposition \ref{gm}, we see that $G(\mathbf{m},\beta)_v$ defines naturally a constructible subset of $\mathbf{A}_k^{n(s'_v-N_{\mathbf{m}},s_v)}$: the conditions on the intersection degrees will indeed translate into Zariski open and Zariski closed polynomial conditions on the coordinates of $\mathbf{A}_k^{n(s'_v-N_{\mathbf{m}},s_v)}$. Let $\mathscr{Y}$ be an affine subset of $\mathscr{X}$ such that $\pi_{|\mathscr{Y}}:\mathscr{Y}\to C$ is non-constant. Let~$C'$ be an open dense subset of $C$ contained in the intersection of the image of $\pi_{|\mathscr{Y}}$ and of the open subset $C_1$ from proposition \ref{gm}. For every $\alpha\in\mathscr{A}$, let $f_{\alpha}$ be a local equation for $\mathscr{D}_{\alpha}$ in $\mathscr{Y}$. Then the condition that $(g,\mathscr{D}_{\alpha})_v = m_{\alpha}$ may be written $\mathrm{ord} (f_{\alpha}(g)) = m_{\alpha}$ for all $v$ in the image of $\pi_{|\mathscr{Y}}$. We thus see that we may take the same integer $N_{\mathbf{m}}$ for all $v$, and that for all $v\in C'$, the equations defining $G(\mathbf{m})_v$ inside $\mathbf{A}_k^{n(s'_v-N_{\mathbf{m}},s_v)}$ are uniform in $v$ (in the sense that they are defined by the same formula in the Denef-Pas language for every such $v$). This, together with the fact that the $G(\mathbf{m},\beta)_v$, for $v\in C\setminus C'$ are constructible, guarantees the existence of a constructible subset of $\mathbf{A}_C^{n(s'-N_{\mathbf{m}},s)}$ as in the statement of the proposition. \end{proof} \subsection{Integral points}\label{sect.integral} The complement of the model $\mathscr{U}$ inside $\mathscr{X}$ is the union of the divisors $\mathscr{D}_{\alpha}$ for $\alpha\in\mathscr{A}_D$, and of the vertical components $E_{\beta}$ for a finite subset $\mathscr{B}^0$ of~$\mathscr{B}$.
We then set $\mathscr{B}_v^{0} = \mathscr{B}^0\cap \mathscr{B}_v$ for every $v\in C(k)$, and define $$\mathscr{B}_0 = \mathscr{B}_1\setminus \left(\cup_{v\in C_0}\mathscr{B}_v^0\right),$$ and $\mathscr{B}_{0,v} = \mathscr{B}_0\cap \mathscr{B}_v$. In other words, $\mathscr{B}_{0}$ corresponds to vertical components of multiplicity one which either lie above $S$ or are contained in $\mathscr{U}$. Thus in particular $\mathscr{B}_{0,v} = \mathscr{B}_{1,v}$ for $v\in S$. \index{B0@$\mathscr{B}_0$, $\mathscr{B}_{0,v}$} Let $\mathbf{m}_v\in\mathbf{N}^{\mathscr{A}}$ and $\beta_v\in\mathscr{B}_v$. We say that the pair $(\mathbf{m}_v,\beta_v)$ is $v$-\textit{integral} if \index{vintegral@$v$-integral} \begin{itemize}\item either $v\in S$ \item or $v\in C_0$, $\beta_v\in\mathscr{B}_0$ and $m_{\alpha,v} = 0$ for every $\alpha\in \mathscr{A}_D$. \end{itemize} In other words, the union of the sets $G(\mathbf{m}_v,\beta_v)_v$ for all $v$-integral pairs $(\mathbf{m}_v,\beta_v)$ is equal to $\mathscr{U}(\mathcal{O}_v)$ if $v\in C_0$, and $G(F_v)$ otherwise. For any $(\mathbf{m}_v,\beta_v)\in \mathbf{N}^{\mathscr{A}}\times \mathscr{B}_v$, define $$H(\mathbf{m}_v,\beta_v)_v = \left\{\begin{array}{cc} G(\mathbf{m}_v,\beta_v)_v& \text{if}\ (\mathbf{m}_v,\beta_v)\ \text{is~$v$-integral}\\\varnothing& \text{else}.\end{array}\right.$$ Then the union of the sets $H(\mathbf{m}_v,\beta_v)_v$ for all $(\mathbf{m}_v,\beta_v)$ is equal to~$\mathscr{U}(\mathcal{O}_v)$ if~$v\in C_0$, and~$G(F_v)$ otherwise. We also define $H(\mathbf{m}_v)_v = \cup_{\beta\in\mathscr{B}_v} H(\mathbf{m}_v,\beta)_v$. \index{Hmbeta@$H(\mathbf{m},\beta)_v$, $H(\mathbf{m})_v$} Let $\mathbf{m} = (\mathbf{m}_v)_{v}$ and $\beta = (\beta_v)_v$ be families indexed by $v\in C(k)$, where $\mathbf{m}_v = (m_{\alpha,v})\in\mathbf{N}^{\mathscr{A}}$ and $\beta_v\in\mathscr{B}_v$ for all~$v$, such that $\mathbf{m}_v = 0$ for almost all $v$. 
The element $\mathbf{m}$ should be thought of as an effective zero-cycle on $C$ with coefficients in~$\mathbf{N}^{\mathscr{A}}$. We say $(\mathbf{m},\beta)$ is \emph{integral} if $(\mathbf{m}_v,\beta_v)$ is $v$-integral for every~$v$. \index{integral pair $(\mathbf{m},\beta)$} \begin{remark} For fixed $\mathbf{n}\in \mathbf{N}^{\mathscr{A}}$ and $\beta\in \prod_{v}\mathscr{B}_{0,v}$, families $\mathbf{m} = (\mathbf{m}_v)_v$ such that the pair $(\mathbf{m},\beta)$ is integral and such that $\sum_{v\in C}\mathbf{m}_v = \mathbf{n}$ are parametrised by the symmetric product $S^{\mathbf{n}'}(C\setminus C_0)\times S^{\mathbf{n}''}(C)$ (where $\mathbf{n}' = (n_{\alpha})_{\alpha\in \mathscr{A}_D}$ and $\mathbf{n}'' = (n_{\alpha})_{\alpha\in\mathscr{A}\setminus \mathscr{A}_D}$) which may naturally be seen as a constructible subset of $S^{\mathbf{n}}(C)$. \end{remark} For any pair $(\mathbf{m},\beta)$, the characteristic functions of the subsets $$G(\mathbf{m},\beta) = \prod_{v\in C(k)}G(\mathbf{m}_v,\beta_v)_v\subset G(\mathbb{A}_F)$$ \index{gmbeta@$G(\mathbf{m},\beta)$ (adelic set)} and $$H(\mathbf{m},\beta) = \prod_{v\in C(k)}H(\mathbf{m}_v,\beta_v)_v\subset G(\mathbb{A}_F)$$ \index{hmbeta@$H(\mathbf{m},\beta)$ (adelic set)} may be seen as global motivic Schwartz-Bruhat functions by proposition \ref{gm}. More precisely, using the notation of proposition \ref{gmdeffamily}, they may be seen as elements of $$\mathscr{E}xp\mathscr{M}_{\prod_{v}\mathbf{A}_k^{n(s'_v-N_{\mathbf{m}_v},s_v)}}.$$ We have $H(\mathbf{m},\beta) = G(\mathbf{m},\beta)$ if $(\mathbf{m},\beta)$ is integral, and $H(\mathbf{m},\beta)=\varnothing$ else.
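All of this is transparent in the basic example $G = \mathbf{G}_a$, $\mathscr{X} = \mathbf{P}^1\times_k C$, $\mathscr{U} = \mathbf{A}^1\times_k C$, which we spell out for illustration. The boundary consists of the single horizontal divisor $\mathscr{D}_{\infty} = \{\infty\}\times C$, there are no vertical components in the complement of $\mathscr{U}$, and for $g\in G(F_v) = F_v$ the local intersection degree is the order of the pole of $g$ at $v$:
$$(g,\mathscr{D}_{\infty})_v = \max(0,-\mathrm{ord}_v(g)),$$
so that $G(0)_v = \mathcal{O}_v$ and, for $m\geq 1$, $G(m)_v = \{g,\ \mathrm{ord}_v(g) = -m\}$. A pair $(m_v,\beta_v)$ is then $v$-integral if and only if $v\in S$ or $m_v = 0$, and $G(F)\cap H(\mathbf{m},\beta)$ consists of the rational functions $g\in k(C)$, regular on $C_0$, whose polar divisor is $\sum_{v\in S}m_v[v]$.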
In the same manner as the $G(\mathbf{m},\beta)_v$, the $H(\mathbf{m},\beta)_v$ may be combined into a constructible set $H_{\mathbf{m},\beta}$: \begin{prop}\label{hmdeffamily} For any $\mathbf{m}\in\mathbf{N}^{\mathscr{A}}$ and any $\beta = (\beta_v)_v\in\prod_{v}\mathscr{B}_v$, there is a constructible subset $H_{\mathbf{m},\beta}\subset G_{\mathbf{m},\beta}$ the fibre of which at any $v\in C(k)$ is exactly $H(\mathbf{m},\beta_v)_v$. \index{Hmbeta@$H_{\mathbf{m},\beta}$} \end{prop} \begin{proof} There are two cases to consider. Assume first that $\mathbf{m}$ is such that $m_{\alpha} = 0$ for all $\alpha \in \mathscr{A}_D$. Then $H_{\mathbf{m},\beta}$ is obtained from $G_{\mathbf{m},\beta}$ by removing the fibres above the finite number of points $v\in C(k)$ such that $(\mathbf{m},\beta_v)$ is not $v$-integral, that is, such that $v\in C_0$ and $\beta_v\not\in \mathscr{B}_0$. If on the contrary there exists $\alpha\in \mathscr{A}_D$ such that $m_{\alpha}\neq 0$, then $H_{\mathbf{m},\beta}$ is the restriction of $G_{\mathbf{m},\beta}$ to the finite set of points $S$. \end{proof} \subsection{Two constructible families of Schwartz-Bruhat functions}\label{twoconstructiblefamilies} In propositions \ref{gmdeffamily} and \ref{hmdeffamily} we have combined, for any $\mathbf{m}\in\mathbf{N}^{\mathscr{A}}$ and any $\beta = (\beta_v)_v\in \prod_{v}\mathscr{B}_v$, the sets $G(\mathbf{m},\beta_v)_v$ (resp. $H(\mathbf{m},\beta_v)_v$) into a family $G_{\mathbf{m},\beta}\subset \mathbf{A}_C^{n(s'-N_{\mathbf{m}},s)}$ (resp. a family $H_{\mathbf{m},\beta}\subset G_{\mathbf{m},\beta}$) above~$C$.
The symmetric product construction then allows us to consider, for any $\mathbf{n}\in\mathbf{N}^{\mathscr{A}}$ and any $\beta\in\prod_{v}\mathscr{B}_v$, constructible subsets $$S^{\mathbf{n}}((H_{\mathbf{m},\beta})_{\mathbf{m}\in\mathbf{N}^{\mathscr{A}}})\subset S^{\mathbf{n}}((G_{\mathbf{m},\beta})_{\mathbf{m}\in\mathbf{N}^{\mathscr{A}}}) \subset S^{\mathbf{n}}\left(\left(\mathbf{A}_C^{n(s'-N_{\mathbf{m}},s)}\right)_{\mathbf{m}\in\mathbf{N}^{\mathscr{A}}}\right) = \mathscr{A}_{\mathbf{n}}(s',s,N,0),$$ with the notation of section \ref{sect.domainsofdef}. Therefore, using the terminology of section \ref{sect.uniformfamilies}, this defines two uniformly smooth constructible families of Schwartz-Bruhat functions of level~$\mathbf{n}$. They parametrise the characteristic functions of the adelic sets $H(\mathbf{m},\beta)$ and $G(\mathbf{m},\beta)$ for fixed $\beta\in\prod_{v}\mathscr{B}_v$ and with $\mathbf{m}$ varying inside $S^{\mathbf{n}}C$: we will therefore denote these families $(\1_{H(\mathbf{m},\beta)})_{\mathbf{m}\in S^{\mathbf{n}}C}$ and $(\1_{G(\mathbf{m},\beta)})_{\mathbf{m}\in S^{\mathbf{n}}C}$. Their Fourier transforms will then be uniformly compactly supported constructible families of functions, defined on $\mathscr{A}_{\mathbf{n}}(\nu -s,\nu-s',0,N)$. \subsection{Applying the Poisson summation formula}\label{applicPoisson} Let $\mathbf{n}\in \mathbf{Z}^{\mathscr{A}}$. For any $\beta = (\beta_v)_{v}\in \prod_{v}\mathscr{B}_{0,v}$, and $\alpha\in\mathscr{A}$, put $n^{\beta}_{\alpha} := n_{\alpha} - \sum_{v}e_{\alpha,\beta_v}.$ We define $M_{\mathbf{n},\beta}$ to be the constructible subset of $M_{\mathbf{n}}$ of sections~$\sigma_g$ such that for all $v$, $(g,E_{\beta_v})_v = 1$ (so that $(g,E_{\beta'_v})_v = 0$ for all $\beta'_{v}\neq \beta_v$). By definition, these sections satisfy $\deg (\sigma_g^*\mathscr{D}_{\alpha}) = n_{\alpha}^{\beta}$ for all $\alpha \in \mathscr{A}$.
Thus, since the $\mathscr{D}_{\alpha}$ are effective, for any $\mathbf{n}$ such that $M_{\mathbf{n},\beta}\neq \varnothing$, the $n_{\alpha}^{\beta},\ \alpha\in\mathscr{A}$, are non-negative integers. \begin{lemma} Let $\mathbf{n}\in\mathbf{Z}^{\mathscr{A}}$ and $\beta\in \prod_{v}\mathscr{B}_{0,v}$ be such that $M_{\mathbf{n},\beta}$ is non-empty. There is a morphism of constructible sets defined by $$\begin{array}{ccc}M_{\mathbf{n},\beta}&\to& S^{\mathbf{n}^{\beta}}C\\ \sigma_g & \mapsto & \displaystyle{\sum_{v\in C(k)}}\left((g,\mathscr{D}_{\alpha})_v\right)_{\alpha\in\mathscr{A}}[v]\end{array} $$ \end{lemma} \begin{proof} We will start by making some reductions. To simplify notations, write $M_{\mathbf{n},\beta} = M$, and $\mathbf{n}^{\beta} = \mathbf{n}$. Since $S^{\mathbf{n}}C = \prod_{\alpha\in\mathscr{A}}S^{n_{\alpha}}C$, it suffices to prove constructibility for the map $$\begin{array}{rcl} M&\to &S^{n_{\alpha}}C\\ \sigma_g & \mapsto & \sum_{v}(g,\mathscr{D}_{\alpha})_{v}[v] \end{array}$$ associated to one $\mathscr{D}_{\alpha}$. To simplify notations further, write $n_{\alpha} = n$ and $\mathscr{D}_{\alpha} = \mathscr{D}$. By definition of the moduli space of sections, there is a morphism $$\begin{array}{rccc} \epsilon: &C\times M &\to& \mathscr{X}\\ &(v,\sigma) & \mapsto & \sigma(v)\end{array}$$ Denote by $s_{\mathscr{D}}$ the canonical section of the line bundle $\mathcal{O}_{\mathscr{X}}(\mathscr{D})$ and put $\Delta = \div (\epsilon^*s_{\mathscr{D}})$. This is a closed subscheme of $C\times M$, finite over $M$. By generic flatness, we may stratify~$M$ into locally closed subsets $U_i$ such that for every $i$, $\Delta\times_MU_i\subset C\times U_i$ is flat over $U_i$. By definition of Hilbert schemes, it therefore defines a morphism $U_i\to \mathrm{Hilb}(C)$ to the Hilbert scheme of points of $C$. Moreover, for every $\sigma \in M$, the fibre $\Delta_{\sigma} = \div(\sigma^{*}s_{\mathscr{D}})$ is a zero-dimensional subscheme of $C$ of length $n$. 
Thus, the image of the above morphism is in fact contained in the Hilbert scheme of $n$ points of $C$, which we may identify with the symmetric product $S^{n}C$. The constructible morphism we want is then obtained by combining these morphisms $U_i\to S^nC$ for every $i$. \end{proof} Recall that a point in $M_{\mathbf{n},\beta}$ represents a section that intersects components in $\mathscr{A}_D$ only above places in $S = C\setminus C_0$. Thus, in fact, $M_{\mathbf{n},\beta}$ lies above the constructible subset of $S^{\mathbf{n}^{\beta}}C$ consisting of zero-cycles $\mathbf{m} = (\mathbf{m}_v)_{v}$ with components with respect to $\alpha\in\mathscr{A}_D$ supported inside $S$: in other words, these are the zero-cycles $\mathbf{m}$ such that $(\mathbf{m},\beta)$ is integral. The bijective correspondence between sections $\sigma:C\to \mathscr{X}$ and elements of $X(F)$ via $\sigma \mapsto \sigma(\eta_C)$ restricts to a bijective correspondence between sections $\sigma \in M_{\mathbf{n},\beta}$ lying above $\mathbf{m}\in S^{\mathbf{n}^{\beta}}C$ and elements in $G(F)\cap H(\mathbf{m},\beta)$, where $H(\mathbf{m},\beta)$ is the adelic set defined in section \ref{sect.integral}. We denote $H_F(\mathbf{m},\beta):=G(F)\cap H(\mathbf{m},\beta)$, which is a constructible set over~$k$. Note that by definition of summation over rational points (see section \ref{twistedsummation} in chapter \ref{poissonformula}), we have, for every $\mathbf{m}\in S^{\mathbf{n}^{\beta}}C$: $$\sum_{x \in \kappa(\mathbf{m})(C)^n}\1_{H(\mathbf{m},\beta)}(x) = [H_F(\mathbf{m},\beta)] = (M_{\mathbf{n},\beta})_{\mathbf{m}}\in \mathscr{M}_{\kappa(\mathbf{m})}.$$ With the notation of section \ref{twoconstructiblefamilies}, the uniformly smooth family $(\1_{H(\mathbf{m},\beta)})_{\mathbf{m}\in S^{\mathbf{n}^\beta}C}$ is uniformly summable (see section \ref{sect.uniformlysummable} of chapter \ref{poissonformula}), and its sum is exactly the class of $M_{\mathbf{n},\beta}$ in $\mathscr{M}_{S^{\mathbf{n}^{\beta}}C}$.
Taking classes in $\mathscr{M}_k$, we may therefore write the motivic summation $$ [M_{\mathbf{n},\beta}] = \sum_{\mathbf{m}\in S^{\mathbf{n}^{\beta}}C}[H_F(\mathbf{m},\beta)] .$$ \begin{remark} Here the existence of the moduli spaces $M_{\mathbf{n},\beta}$ shows that the family of functions $(\1_{H(\mathbf{m},\beta)})_{\mathbf{m}\in S^{\mathbf{n}^{\beta}}C}$ is uniformly summable, independently of section \ref{us.poisson}. By uniqueness of the sum (remark \ref{uniqueness.sum}), this sum (the class of $M_{\mathbf{n},\beta}$ in $\mathscr{E}xp\mathscr{M}_{S^{\mathbf{n}^\beta}C}$) is equal to the one given by the Poisson formula as stated in \ref{us.poisson}. \end{remark} Note that $$\mathbf{T}^{\mathbf{n}} = \prod_{\alpha\in\mathscr{A}}T_{\alpha}^{n_{\alpha}} = \prod_{\alpha\in\mathscr{A}}T_{\alpha}^{\sum_{v}e_{\alpha,\beta_v}}\prod_{\alpha\in\mathscr{A}}T_{\alpha}^{n_{\alpha}^{\beta}}.$$ Write $\mathbf{T}^{||\beta||}$ for $ \prod_{\alpha\in\mathscr{A}}T_{\alpha}^{\sum_{v}e_{\alpha,\beta_v}}$.
Then \begin{eqnarray*}Z(\mathbf{T}) = \sum_{\mathbf{n}\in \mathbf{Z}^{\mathscr{A}}}[M_{\mathbf{n}}]\mathbf{T}^{\mathbf{n}}&= &\sum_{\mathbf{n}\in\mathbf{Z}^{\mathscr{A}}}\sum_{\beta\in\prod_{v}\mathscr{B}_{0,v}}[M_{\mathbf{n},\beta}]\mathbf{T}^{\mathbf{n}}\\ & = & \sum_{\mathbf{n}\in\mathbf{Z}^{\mathscr{A}}}\left(\sum_{\beta\in\prod_{v}\mathscr{B}_{0,v}}\sum_{\mathbf{m}\in S^{\mathbf{n}^{\beta}}C}[H_F(\mathbf{m},\beta)]\right)\mathbf{T}^{\mathbf{n}}\\ & \substack{=\\ \mathbf{n} \leftarrow \mathbf{n}^{\beta}}& \sum_{\beta\in\prod_{v}\mathscr{B}_{0,v}}\mathbf{T}^{||\beta||}\sum_{\mathbf{n}\in\mathbf{N}^{\mathscr{A}}}\sum_{\mathbf{m}\in S^{\mathbf{n}}C} [H_F(\mathbf{m},\beta)] \mathbf{T}^{\mathbf{n}}. \end{eqnarray*} For clarity, let us remark that in this summation, we have three different types of sums: the sum over $\prod_{v}\mathscr{B}_{0,v}$, which is a finite sum, the one over $\mathbf{Z}^{\mathscr{A}}$ or over $\mathbf{N}^{\mathscr{A}}$ which is the sum of the formal series, and the motivic sum over $\mathbf{m}\in S^{\mathbf{n}}C$.
The Poisson summation formula (see section \ref{Poisson.families}, and especially \ref{us.poisson} and \ref{reversesummation}; see also remark \ref{dropkappaD}), applied to the uniformly smooth constructible family of Schwartz-Bruhat functions $(\1_{H(\mathbf{m},\beta)})_{\mathbf{m}\in S^{\mathbf{n}}C}$ (see the end of section \ref{twoconstructiblefamilies}) gives, for any $\mathbf{n}\in\mathbf{N}^{\mathscr{A}}$: \begin{eqnarray*} \sum_{\mathbf{m}\in S^{\mathbf{n}}C}\ [H_F(\mathbf{m},\beta)] & = & \sum_{\mathbf{m}\in S^{\mathbf{n}}C}\ \ \sum_{x\in k(C)^n} \1_{H(\mathbf{m},\beta)}(x)\\ & = & \sum_{\mathbf{m}\in S^{\mathbf{n}}C}\ \ \left(\mathbf{L}^{(1-g)n}\sum_{\xi\in k(C)^n} \mathscr{F}(\1_{H(\mathbf{m},\beta)})(\xi)\right)\\ & = & \mathbf{L}^{(1-g)n}\sum_{\xi\in k(C)^n}\ \ \sum_{\mathbf{m}\in S^{\mathbf{n}}C}\mathscr{F}(\1_{H(\mathbf{m},\beta)})(\xi) \end{eqnarray*} Then $Z(\mathbf{T})$ may be rewritten in the form $$Z(\mathbf{T}) = \mathbf{L}^{(1-g)n}\sum_{\xi\in k(C)^n}\sum_{\beta\in\prod_{v}\mathscr{B}_{0,v}}\mathbf{T}^{||\beta||}\sum_{\mathbf{n}\in\mathbf{N}^{\mathscr{A}}} \ \sum_{\mathbf{m}\in S^{\mathbf{n}}C}\mathscr{F}(\1_{H(\mathbf{m},\beta)})(\xi)\mathbf{T}^{\mathbf{n}}.$$ By the definition of Euler products, we have \begin{equation}\label{zetaeulerproduct}\sum_{\mathbf{n}\in\mathbf{N}^{\mathscr{A}}}\sum_{\mathbf{m}\in S^{\mathbf{n}}C}\mathscr{F}(\1_{H(\mathbf{m},\beta)})(\xi)\mathbf{T}^{\mathbf{n}} = \prod_{v\in C}\left(\sum_{\mathbf{m}_v\in \mathbf{N}^{\mathscr{A}}}\mathscr{F}(\1_{H(\mathbf{m}_v,\beta_v)})(\xi_v)\mathbf{T}^{\mathbf{m}_v}\right).\end{equation} Indeed, we are dealing here with the uniformly compactly supported family $$(\mathscr{F} (\1_{H(\mathbf{m},\beta)}))_{\mathbf{m}\in S^{\mathbf{n}}C}\in \mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{n}}(\nu-s,\nu-s', 0, N)}$$ (see end of section \ref{twoconstructiblefamilies}).
The summation over $k(C)^n$ is therefore in fact a summation on the $n$-th power of the Riemann-Roch space associated to the divisor $-\sum_v(\nu_v-s_v)[v]$. For every point $\xi$ of this space, as explained in the beginning of the proof of lemma \ref{prop.uniformsummationuc}, $\xi$ induces constructible morphisms $$\phi_{\xi,\i}:C\to \mathbf{A}_C^{n(\nu-s,\nu-s'+N_{\i})}$$ for every $\i\in \mathbf{N}^{\mathscr{A}}$, which after taking symmetric products, give constructible morphisms $$S^{\mathbf{n}}\phi_{\xi}: S^{\mathbf{n}}C\to \mathscr{A}_{\mathbf{n}}(\nu-s,\nu-s',0,N)$$ such that $$(\mathscr{F} (\1_{H(\mathbf{m},\beta)})(\xi))_{\mathbf{m}\in S^{\mathbf{n}}C} = (S^{\mathbf{n}}\phi_{\xi})^*(\mathscr{F} (\1_{H(\mathbf{m},\beta)}))_{\mathbf{m}\in S^{\mathbf{n}}C}.$$ By section \ref{sect.symprodfourtransform} (especially proposition \ref{prop.symproductfourtransform}), we have the equality $$(\mathscr{F} (\1_{H(\mathbf{m},\beta)}))_{\mathbf{m}\in S^{\mathbf{n}}C}=S^{\mathbf{n}}( (\mathscr{F}\1_{H_{\i,\beta}})_{\i\in\mathbf{N}^{\mathscr{A}}})$$ in $\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{n}}(\nu-s,\nu-s', 0, N)}$. Pulling back via $S^{\mathbf{n}}\phi_{\xi} = S^{\mathbf{n}}((\phi_{\xi,\i})_{\i\in\mathbf{N}^{\mathscr{A}}})$ gives the equality $$(\mathscr{F} (\1_{H(\mathbf{m},\beta)})(\xi))_{\mathbf{m}\in S^{\mathbf{n}}C}=S^{\mathbf{n}}( ( \phi_{\xi,\i}^*\mathscr{F}(\1_{H_{\i,\beta}}))_{\i\in\mathbf{N}^{\mathscr{A}}})$$ in $\mathscr{E}xp\mathscr{M}_{S^{\mathbf{n}}C}.$ The pullback to any $v\in C$ of the $\i$-th element $\phi_{\xi,\i}^*\mathscr{F}\1_{H_{\i,\beta}}\in \mathscr{E}xp\mathscr{M}_C$ of the family on the right-hand side is exactly $\mathscr{F}(\1_{H(\i,\beta_v)})(\xi_v)$, by the definition of $\phi_{\xi,\i}$ and by remark \ref{familyrestrictiontov}, so it is indeed the $\i$-th coefficient of the local factor of the Euler product corresponding to $v$.
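To see what equation (\ref{zetaeulerproduct}) asserts in concrete terms, suppose (as an illustration only) that $\mathscr{A}$ is a singleton and that only two places $v_1,v_2$ contribute non-trivially, and write $a_{i,m} = \mathscr{F}(\1_{H(m,\beta_{v_i})})(\xi_{v_i})$. A zero-cycle $\mathbf{m} = m_1[v_1] + m_2[v_2]$ of degree $n$ contributes the product $a_{1,m_1}a_{2,m_2}$ to the coefficient of $T^n$, and
$$\sum_{n\geq 0}\Big(\sum_{m_1+m_2 = n}a_{1,m_1}a_{2,m_2}\Big)T^n = \Big(\sum_{m\geq 0}a_{1,m}T^m\Big)\Big(\sum_{m\geq 0}a_{2,m}T^m\Big).$$
The motivic Euler product refines this naive expansion by keeping track, via symmetric products, of the residue fields of the points in the support of $\mathbf{m}$.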
\begin{notation}\label{notationfromCL} We use the notation from \cite{CL}, 3.6: for every $v\in C$, $\alpha\in\mathscr{A}$, $\mathbf{m}_v\in\mathbf{N}^{\mathscr{A}}$ and $\beta_v\in \mathscr{B}_v$, we define $$||\mathbf{m}_v,\beta_v||_{\alpha} := m_{\alpha,v} + e_{\alpha,\beta_v}$$ and $\mathbf{T}^{||\mathbf{m}_v,\beta_v||}:= \prod_{\alpha\in\mathscr{A}}T_{\alpha}^{||\mathbf{m}_v,\beta_{v}||_{\alpha}}.$ Note that for $g\in G(F_v)\cap G(\mathbf{m}_v,\beta_v)_v$, the local intersection degree $(g,\mathscr{L}_{\alpha})_v$ is exactly $$(g,\mathscr{L}_{\alpha})_v = (g,\mathscr{D}_{\alpha})_v + \sum_{\beta\in\mathscr{B}_v}e_{\alpha,\beta}(g,E_{\beta})_v = m_{\alpha,v} + e_{\alpha,\beta_v} = ||\mathbf{m}_v,\beta_v||_{\alpha}.$$ \end{notation} Finally, we have \begin{eqnarray*}Z(\mathbf{T}) &= &\mathbf{L}^{(1-g)n}\sum_{\xi\in k(C)^n}\sum_{\beta\in\prod_{v}\mathscr{B}_{0,v}}\prod_{v\in C}\prod_{\alpha\in \mathscr{A}}T_{\alpha}^{e_{\alpha,\beta_v}}\prod_{v\in C}\left(\sum_{\mathbf{m}_v\in \mathbf{N}^{\mathscr{A}}}\mathscr{F}(\1_{H(\mathbf{m}_v,\beta_v)})(\xi_v)\mathbf{T}^{\mathbf{m}_v}\right)\\ & = & \mathbf{L}^{(1-g)n}\sum_{\xi\in k(C)^n} \prod_{v\in C}\left(\sum_{\beta_v\in\mathscr{B}_{0,v}}\prod_{\alpha\in\mathscr{A}}T_{\alpha}^{e_{\alpha,\beta_v}}\sum_{\mathbf{m}_v\in \mathbf{N}^{\mathscr{A}}}\mathscr{F}(\1_{H(\mathbf{m}_v,\beta_v)})(\xi_v)\mathbf{T}^{\mathbf{m}_v}\right)\\ & = & \mathbf{L}^{(1-g)n}\sum_{\xi\in k(C)^n} \prod_{v\in C}\left(\sum_{\substack{\mathbf{m}_v\in \mathbf{N}^{\mathscr{A}}\\ \beta_v\in\mathscr{B}_{0,v}}}\mathscr{F}(\1_{H(\mathbf{m}_v,\beta_v)})(\xi_v)\mathbf{T}^{||\mathbf{m}_v,\beta_v||}\right)\end{eqnarray*} Thus, we have written $Z(\mathbf{T})$ in the form \begin{equation}\label{summation.zeta}Z(\mathbf{T}) = \mathbf{L}^{(1-g)n}\sum_{\xi\in k(C)^n}Z(\mathbf{T},\xi)\end{equation} \index{ZTxi@$Z(\mathbf{T},\xi)$} where $Z(\mathbf{T},\xi)$ has an Euler product decomposition with local factors $$Z_v(\mathbf{T},\xi) := \sum_{\substack{\mathbf{m}_v\in \mathbf{N}^{\mathscr{A}}\\ \beta_v\in\mathscr{B}_{0,v}}}\mathscr{F}(\1_{H(\mathbf{m}_v,\beta_v)})(\xi_v)\mathbf{T}^{||\mathbf{m}_v,\beta_v||} = \sum_{(\mathbf{m}_v,\beta_v)\ v-\text{integral}} \mathscr{F}(\1_{G(\mathbf{m}_v,\beta_v)})(\xi_v)\mathbf{T}^{||\mathbf{m}_v,\beta_v||}.$$ More precisely, it is the product of the finite number of factors corresponding to $v\in C\setminus C_1$, and of the Euler product of the series $$Z_{C_1}(\mathbf{T},\xi):= \sum_{\mathbf{m}\in\mathbf{N}^{\mathscr{A}}}\mathscr{F}(\1_{H_{\mathbf{m},\beta}\times_CC_1})(\xi)\mathbf{T}^{\mathbf{m}}\in \mathscr{E}xp\mathscr{M}_{C_1}[[\mathbf{T}]]$$ where $\mathbf{T}^{\mathbf{m}} = \prod_{\alpha\in \mathscr{A}}T_{\alpha}^{m_{\alpha}}$. In what follows, we will study these local factors to be able to apply lemma \ref{convergence} to the series $Z_{C_1}(\mathbf{T},\xi)$. Combined with estimates for the remaining local factors, this will give us the convergence properties of the series $Z(\mathbf{T})$. \begin{remark}[Restriction of the summation domain]\label{summationdomain} As noted in the end of section \ref{twoconstructiblefamilies}, the Fourier transforms of the families $(\1_{H(\mathbf{m},\beta)})_{\mathbf{m}\in S^{\mathbf{n}}C}$ and $(\1_{G(\mathbf{m},\beta)})_{\mathbf{m}\in S^{\mathbf{n}}C}$ are uniformly compactly supported.
We may conclude, as in \cite{CL}, 4.2, that there is a finite-dimensional $k$-vector space $V$ (given by an appropriate Riemann-Roch space), and a linear $F$-morphism $\a:V_F\to G_F$ such that the summation (\ref{summation.zeta}) restricts in fact to $$Z(\mathbf{T}) = \mathbf{L}^{(1-g)n}\sum_{\xi\in \a(V)}Z(\mathbf{T},\xi).$$ \end{remark} \section{Analysis of local factors and convergence}\label{sectLocal} The aim of this section is to study the local factors \begin{eqnarray*}Z_v(\mathbf{T},\xi) &= &\sum_{\substack{\mathbf{m}_v\in \mathbf{N}^{\mathscr{A}}\\ \beta_v\in\mathscr{B}_{0,v}}}\mathscr{F}(\1_{H(\mathbf{m}_v,\beta_v)})(\xi_v)\mathbf{T}^{||\mathbf{m}_v,\beta_v||}\\ &=& \sum_{\substack{\mathbf{m}_v\in \mathbf{N}^{\mathscr{A}}\\ \beta_v\in\mathscr{B}_{0,v}}} \int_{H(\mathbf{m}_v,\beta_v)}\prod_{\alpha\in\mathscr{A}}T_{\alpha}^{(g,\mathscr{L}_{\alpha})_v}\mathrm{e}(\langle g,\xi\rangle)\mathrm{d} g.\end{eqnarray*} (We use here the notation $\mathrm{e}$ from \cite{CL}, which is a substitute for the notation $\psi\circ r$ from section \ref{sect.localfouriertransform}.) We follow \cite{CL} and rewrite this integral as an integral with respect to the motivic measure on the arc space~$\mathscr{L}(\mathscr{X}).$ We then give estimates for it, first in the case when $\xi$ is the trivial character, then in the case when it is non-trivial. This allows us to study convergence of the Euler product $Z(\mathbf{T},\xi)$. In this section, we will often omit the index $v$ once the place $v$ is fixed. \subsection{Motivic functions and integrals} In this section, we consider a field $k$ of characteristic zero, $R = k[[t]]$ and $K=k((t))$. For this and the next section only, let $\mathscr{X}$ \index{X@$\mathscr{X}$}be a smooth and flat $R$-scheme of finite type and of pure relative dimension $n$. 
We are going to use the notion of \textit{motivic residual function} \index{motivic residual function} on the arc space $\mathscr{L}(\mathscr{X})$, \index{arc space} \index{LX@$\mathscr{L}(\mathscr{X})$} introduced in section 2.4 of~\cite{CL}. Recall that the spaces of $m$-jets $\mathscr{L}_m(\mathscr{X})$ \index{LmX@$\mathscr{L}_m(\mathscr{X})$} for $m\geq 0$ come with natural affine morphisms $p^{m+1}_m:\mathscr{L}_{m+1}(\mathscr{X})\to \mathscr{L}_{m}(\mathscr{X})$ which turn the collection of relative Grothendieck rings $\mathscr{E}xp\mathscr{M}_{\mathscr{L}_m(\mathscr{X})}$ into an inductive system via the induced ring morphisms $$ \left(p_m^{m+1}\right)^*: \mathscr{E}xp\mathscr{M}_{ \mathscr{L}_{m}(\mathscr{X})} \to \mathscr{E}xp\mathscr{M}_{\mathscr{L}_{m+1}(\mathscr{X})} $$ sending the class of a variety $H\to \mathscr{L}_m(\mathscr{X})$ to the class of $H\times_{\mathscr{L}_m(\mathscr{X})}\mathscr{L}_{m+1}(\mathscr{X})\to \mathscr{L}_{m+1}(\mathscr{X})$ with the appropriate operation on exponentials. \begin{definition} The ring of motivic residual functions on $\mathscr{L}(\mathscr{X})$ is the inductive limit of all Grothendieck rings $\mathscr{E}xp\mathscr{M}_{\mathscr{L}_m(\mathscr{X})}$, $m\geq 0$. \end{definition} For example, take a constructible subset $W$ of $\mathscr{L}(\mathscr{X})$, that is, a subset of the form $p_m^{-1}(W_m)$ where $W_m$ is a constructible subset of $\mathscr{L}_m(\mathscr{X})$ and $p_m:\mathscr{L}(\mathscr{X})\to \mathscr{L}_m(\mathscr{X})$ is the projection morphism. Then the characteristic function of $W$ may be seen as a motivic residual function. Let $h$ be a motivic residual function. Assume it to be of the form $[H,f]$ where $H$ is a variety over $\mathscr{L}_{m}(\mathscr{X})$ for some $m$, and $f:H\to\mathbf{A}^1$ a morphism. 
Then the integral of $h$ over the arc space $\mathscr{L}(\mathscr{X})$ is defined to be $$\int_{\mathscr{L}(\mathscr{X})}h(x)\mathrm{d} x = \int_{\mathscr{L}(\mathscr{X})}H_x\psi(r(f(x)))\mathrm{d} x := \mathbf{L}^{-(m+1)n}[H,f]_k\in\mathscr{E}xp\mathscr{M}_{k}.$$\index{motivic residual function!integration} This does not depend on $m$ because for any $m'\geq m$, the projection morphism $\mathscr{L}_{m'}(\mathscr{X})\to \mathscr{L}_m(\mathscr{X})$ is a locally trivial fibration with fibre $\mathbf{A}_k^{(m'-m)n}.$ More generally, one can consider integrals $\int_{W}$ over constructible subsets of $\mathscr{L}(\mathscr{X})$ by multiplying the integrand by the characteristic function $\1_W$. \begin{example}\label{volume.constset} If $W = p_m^{-1}(W_m)$ is a constructible subset of $\mathscr{L}(\mathscr{X})$ for some $m\geq 0$, then one may define the volume of $W$ \index{volume} to be $$\mathrm{vol}(W) := \int_{\mathscr{L}(\mathscr{X})}\1_{W}(x) \mathrm{d} x = \mathbf{L}^{-(m+1)n}[W_m,0]\in\mathscr{E}xp\mathscr{M}_{k}.$$\index{vol@$\mathrm{vol}$} In particular, if $W = \mathscr{L}(\mathscr{X}) = p_0^{-1}(\mathscr{X}_k)$ where $\mathscr{X}_k$ is the special fibre of $\mathscr{X}$, then \begin{equation}\label{volumearcspace}\mathrm{vol}(\mathscr{L}(\mathscr{X})) = \mathbf{L}^{-n}[\mathscr{X}_k,0].\end{equation} Another useful special case is the volume of the subspace $\mathscr{L}(\mathbf{A}^1,0)$ of $\mathscr{L}(\mathbf{A}^1)$ of arcs with origin in $0\in \mathbf{A}^1$. 
We have: \begin{equation}\label{volumeaffineline}\mathrm{vol}(\mathscr{L}(\mathbf{A}^1,0)) = \mathbf{L}^{-1}.\end{equation} Combining (\ref{volumearcspace}) and (\ref{volumeaffineline}), we also get that the volume of the subspace of arcs of order 0 in $\mathscr{L}(\mathbf{A}^1)$ is \begin{equation}\label{volumeaffinelineordzero}\mathrm{vol}(\{x\in\mathscr{L}(\mathbf{A}^1),\ \mathrm{ord}(x) = 0\}) = \mathrm{vol}(\mathscr{L}(\mathbf{A}^1)) - \mathrm{vol}(\mathscr{L}(\mathbf{A}^1,0)) = 1-\mathbf{L}^{-1}.\end{equation} \end{example} \begin{remark}\label{volume.integrals} Let $W= p_m^{-1}(W_m)$ be a constructible subset of $\mathscr{L}(\mathscr{X})$ for some $W_m\subset \mathscr{L}_m(\mathscr{X})$ and some $m\geq 0$ and let $h = [\mathscr{L}_{m}(\mathscr{X}),f]$. Then by the triangular inequality for weights (chapter \ref{hodgemodules}, lemma \ref{triangularwt}), as well as lemma \ref{weightdimension}: \begin{eqnarray*}w\left(\int_{W}h(x)\mathrm{d} x \right)& = & w\left(\int_{\mathscr{L}(\mathscr{X})}\mathbf{1}_{W}(x)\mathrm{e}(f(x))\mathrm{d} x \right)\\ & =& w(\mathbf{L}^{-(m+1)n}[W,f_{|W}])\\ &\leq& -2(m+1)n + w([W,0])\\ & = & w(\mathrm{vol}(W))\end{eqnarray*} We are going to use this property repeatedly. \end{remark} \subsection{Some computations of motivic integrals} We keep the notations from the previous section. Let $r:K\to k$ be the linear map defined by $r(a) = \mathrm{res}_0(a\mathrm{d} t),$ so that $r(t^{-1}) =1$ and $r(t^n) = 0$ for any $n\neq -1$. \begin{lemma} Let $d$ be a positive integer and let $\xi\in K$ be such that $\mathrm{ord}(\xi) = 0$. Let $Q=a_0 + a_1 x + \ldots + a_dx^d \in k((t))[x]$ be a non-zero polynomial of degree $\leq d$ such that for all $i\in\{1,\ldots,d\}$, $\mathrm{ord} (a_i)> \mathrm{ord} (a_0)$. 
Then for any $n\in\mathbf{N}$ such that $-2n \leq \mathrm{ord}(a_0) < -n$, one has $$\int_{\xi + t^nk[[t]]}\psi(r(Q(x)x^{-d}))\mathrm{d} x =0.$$ \end{lemma} \begin{proof} The proof goes along the same lines as the proof of lemma 5.1.1 in \cite{CL}. The condition on the orders of the coefficients of $Q$ implies that: \begin{itemize} \item $\mathrm{ord} (Q(\xi)) = \mathrm{ord} (a_0)$. \item For all $i\in\{1,\ldots,d\}$, \begin{equation}\label{Qderivatives}\mathrm{ord} (Q^{(i)}(\xi))\geq \min_{i\leq j\leq d}\ \mathrm{ord} (a_j) > \mathrm{ord} (Q(\xi))\geq -2n.\end{equation} \end{itemize} Write $x = \xi(1 + t^n u)$ for some $u$ with $\mathrm{ord} (u) \geq 0$, so that we have $$\int_{\xi + t^nk[[t]]}\psi(r(Q(x)x^{-d}))\mathrm{d} x = \mathbf{L}^{-n}\int_{k[[t]]}\psi(r(Q(\xi(1 + t^nu))(\xi(1 + t^nu))^{-d}))\mathrm{d} u.$$ We may expand $$Q(\xi(1 + t^n u)) = Q(\xi) + Q'(\xi)\xi t^n u + \frac{Q''(\xi)}{2}(\xi t^{n} u)^2 + \ldots$$ and $$(\xi(1 + t^n u))^{-d} = \xi^{-d}\left(1 - dt^n u + {-d\choose 2} t^{2n} u^2 + \ldots\right)$$ Taking the product and using (\ref{Qderivatives}), we have $$r\left(Q(\xi(1 + t^nu))\xi^{-d}(1 + t^n u)^{-d}\right) = r(Q(\xi)\xi^{-d}) + r\left(Q'(\xi)\xi^{1-d} t^n u -Q(\xi)\xi^{-d} d t^nu\right),$$ since all the other terms belong to the maximal ideal $tk[[t]]$. We therefore have $$\int_{k[[t]]}\psi(r(Q(\xi(1 + t^nu))(\xi(1 + t^nu))^{-d}))\mathrm{d} u$$ $$ = [\mathrm{Spec}\, k, r(Q(\xi)\xi^{-d})]\int_{k[[t]]}\psi(r\left(Q'(\xi)\xi^{1-d} t^n u -Q(\xi)\xi^{-d} d t^nu\right))\mathrm{d} u.$$ Now, $\mathrm{ord}( Q'(\xi)\xi^{1-d} t^n-Q(\xi)d\xi^{-d} t^n ) =\mathrm{ord} (Q(\xi)t^n) < 0$, and therefore the integral in the right-hand side is zero. \end{proof} For $m\in\mathbf{Z}$, let $C_m$ be the annulus defined by $\mathrm{ord}(x) = m$. \begin{lemma}\label{motivic.integral} Let $m$ and $d$ be positive integers and $P\in k[[t]][x]$ a non-zero polynomial such that $\mathrm{ord}(P(0)) = 0$. 
Then $$\int_{C_m}\psi(r(P(x)x^{-d}))\mathrm{d} x = \left\{\begin{array}{lc} -\mathbf{L}^{-2} & \text{if}\ m=d=1\\ 0 & \text{otherwise} \end{array}\right.$$ \end{lemma} \begin{proof} Let $I(m,d,P)$ be the above integral. A change of variable allows us to write $$I(m,d,P) = \mathbf{L}^{-m}\int_{C_0}\psi(r(P(t^mu)t^{-md}u^{-d}))\mathrm{d} u.$$ Thus, we have $I(m,d,P) = \mathbf{L}^{-m}I(0,d,Q)$ where $Q(u) = a_0 + a_1 u + \ldots \in k((t))[u]$ is the polynomial given by $Q(u) = P(t^mu)t^{-md}$. Since $P$ has integral coefficients, and $\mathrm{ord}(P(0)) = 0$, we have \begin{itemize}\item $\mathrm{ord} (a_0) = \mathrm{ord} (P(0)t^{-md}) = -md$ \item for all $i\geq 1$, $\mathrm{ord} (a_i) \geq mi - md > -md = \mathrm{ord}(a_0).$ \end{itemize} If $md> 1$, then there exists a positive integer $n$ such that $-2n\leq -md < -n$, and choosing such an~$n$, the previous lemma tells us that $$I(m,d,P) = \mathbf{L}^{-m}\int_{C_0}\int_{k[[t]]}\psi(r(Q(u)(u+t^ny)^{-d}))\mathrm{d} y\, \mathrm{d} u = 0.$$ Assume now $m=d=1$. Then, writing $P(0) = a$, we have \begin{eqnarray*}I(1,1,P) &= &\mathbf{L}^{-1}\int_{C_0}\psi(r(P(tu)t^{-1}u^{-1}))\mathrm{d} u = \mathbf{L}^{-1}\int_{C_0}\psi(r(at^{-1}u^{-1}))\mathrm{d} u\\ &=&\mathbf{L}^{-1}\int_{C_0}\psi(r(at^{-1}u))\mathrm{d} u\\ &=& \mathbf{L}^{-1}\left(\int_{k[[t]]}\psi(r(at^{-1}u))\mathrm{d} u - \int_{tk[[t]]}\psi(r(at^{-1}u))\mathrm{d} u \right)\\ \end{eqnarray*} which gives the result, since the first term in the parenthesis is zero (using again that $\mathrm{ord}(a)=0$), and the second one is equal to~$\mathbf{L}^{-1}$. \end{proof} \subsection{An integral over the arc space}\label{sect.integralarcspace} We now go back to the setting and notations of section \ref{sectGeometry}. In this section we recall briefly, following~\cite{CL}, how integrals of motivic Schwartz-Bruhat functions can be rewritten as integrals on arc spaces. \begin{lemma}(\cite{CL}, lemma 6.1.1) Let $\Phi\in\mathscr{S}(F_v^n)$ be a local motivic Schwartz-Bruhat function. 
Then the integral $\int_{G(F_v)}\Phi(g)\mathrm{d} g$ can be rewritten as $$\int_{\mathscr{L}(\mathscr{X})}\Phi(x)\mathbf{L}^{-\mathrm{ord}_{\omega}(x)}\mathrm{d} x$$ where $\mathrm{d} x$ denotes the motivic measure on the arc space $\mathscr{L}(\mathscr{X}).$ \end{lemma} For every subset $A\subset \mathscr{A}$ and every $\beta\in\mathscr{B}_{1,v}$ for some $v$, we denote by $\Delta(A,\beta)$ the locally closed subset of the special fibre $\mathscr{X}_v:= \mathscr{X}\times_R \mathrm{Spec}\,(k)$ of points belonging to the divisors $\mathscr{D}_{\alpha},\ \alpha\in A$ and no other, as well as to the vertical divisor $E_{\beta}$, and no other. The special fibre $\mathscr{X}_v$ identifies with the jet scheme $\mathscr{L}_0(\mathscr{X})$ of order 0, and therefore there is a specialisation morphism $\mathscr{L}(\mathscr{X})\longrightarrow\mathscr{X}_v$. We denote by $\Omega(A,\beta)$ the preimage in $\mathscr{L}(\mathscr{X})$ of $\Delta(A,\beta).$ Lemma 5.2.6 of \cite{CL} then states the following:\index{omegaa@$\Omega(A,\beta)$}\index{Delta@$\Delta(A,\beta)$} \begin{lemma}\label{measure} Let $A$ be a subset of $\mathscr{A}$ and let $B$ be a set of cardinality $n-\mathrm{Card}(A)$. There exists a measure-preserving definable isomorphism $\theta$ from $\Delta(A,\beta)\times \mathscr{L}(\mathbf{A}^1,0)^{A}\times \mathscr{L}(\mathbf{A}^1,0)^{B}$ with coordinates $x_{\alpha},\ \alpha\in A$ and $y_{\beta},\ \beta\in B$, to $\Omega(A,\beta)$, such that $\mathrm{ord}_{\mathscr{D}_{\alpha}}(\theta(x)) = \mathrm{ord}(x_{\alpha})$ for $\alpha\in A$, and $\mathrm{ord}_{\mathscr{D}_\alpha}(\theta(x)) = 0$ for $\alpha\not \in A$. \end{lemma} \begin{remark} We changed the normalisation slightly with respect to \cite{CL}. Note that this is consistent with example \ref{volume.constset}: the volume of $\Omega(A,\beta)$ is given by $[\Delta(A,\beta)]\mathbf{L}^{-n}$. 
\end{remark} In what follows, we therefore identify a point of $\Omega(A,\beta)$ with a triple $$(w,x,y)\in \Delta(A,\beta)\times \mathscr{L}(\mathbf{A}^1,0)^{A}\times \mathscr{L}(\mathbf{A}^1,0)^{B}.$$ We also recall Lemma 6.2.6 from \cite{CL}, which uses this isomorphism to rewrite the motivic Fourier transforms $Z_v(\mathbf{T},\xi)$ as sums of motivic integrals over arc spaces: \begin{lemma}\label{arcspace} For every motivic residual function $h$ on $\mathscr{L}(\mathscr{X})$ and every $\xi\in G(F_v)$, one has $$\int_{G(F_v)}\prod_{\alpha\in\mathscr{A}}T_{\alpha}^{(g,\mathscr{L}_{\alpha})_v}h(g)\mathrm{e}(\langle g,\xi\rangle)\mathrm{d} g $$ $$= \sum_{\substack{A\subset \mathscr{A}\\ \beta\in\mathscr{B}_1}}\prod_{\alpha\in\mathscr{A}}T_{\alpha}^{e_{\alpha,\beta}}\mathbf{L}^{\rho_{\beta}}\int_{\Omega(A,\beta)}\prod_{\alpha\in A}(\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{\mathrm{ord}(x_{\alpha})}h(x)\mathrm{e}(\langle x,\xi\rangle)\mathrm{d} x. $$\end{lemma} \subsection{A few words on convergence of Euler products in our setting} If in our general convergence result, proposition \ref{convergence}, we take $X$ to be a curve (so that $w(X) = 2$), $\epsilon = \frac12$ and $\beta =0$ (replacing $\alpha$ by the letter $c$ to avoid confusion with the indices of the components $D_{\alpha}$), we obtain the following particular case which will be the one we are going to use: \begin{lemma}\label{convergencetoapply} Let $X$ be a curve over $\mathbf{C}$. Assume $$F(T) = 1 + \sum_{i\geq 1}X_iT^i\in\mathscr{E}xp\mathscr{M}_{X}[[T]]$$ is such that there exist an integer $M\geq 1$ and a real number $c<1$ such that \begin{itemize}\item for all $i\in\{1,\ldots,M\}$, $w_X(X_i)\leq 2i-2$ \item for all $i\geq M$, $w_X(X_i)\leq 2c i -1$. 
\end{itemize} Then there exists $\delta>0$ such that the Euler product $\prod_{v\in X}F_v(T)\in\mathscr{E}xp\mathscr{M}_k[[T]]$ converges for $|T|<\mathbf{L}^{-1 + \delta}$. \end{lemma}\index{Euler product!convergence} In practice, $X$ will be some dense open subset of our original curve $C$, and the convergence of the remaining factors will be checked separately. \begin{remark}\label{bounddimension} Note that by lemma \ref{weightdimensionnoneffective}, to bound $w_X(X_i)$, it suffices to bound $$2\dim_X(X_i) + \dim X.$$ We are going to use this remark for almost all terms, except for the first few, for which a bound on weights rather than dimensions is crucial. \end{remark} \begin{remark}\label{apply.polynomial}Whenever the series $F(T)$ is in fact a polynomial, we may take $M$ to be its degree and then it suffices to check the bound $w_X(X_i)\leq 2i-2$ for all $i$, taking $c$ to be zero in the statement of the lemma. In the case when we want to check it on dimensions, this boils down to the inequality $\dim_X(X_i)\leq i-2$. \end{remark} This remark motivates the following terminology which we will use to discard terms that will not obstruct convergence: \begin{definition}\label{admissible} Let $X$ be a $k$-variety. A polynomial $F = \sum a_iT^i\in\mathscr{E}xp\mathscr{M}_X[T]$ is said to be admissible if $\dim_X(a_i) \leq i-2$ for all $i\geq 0$. A polynomial $F = \sum_{\mathbf{m}\in \mathbf{N}^{\mathscr{A}}}a_{\mathbf{m}}\mathbf{T}^{\mathbf{m}}\in \mathscr{E}xp\mathscr{M}_X[\mathbf{T}]$ is said to be $\rho'$-admissible if $F((T^{\rho'_{\alpha}})_{\alpha\in\mathscr{A}})$ is admissible. \end{definition}\index{admissible}\index{rhopa@$\rho'$-admissible} Note that $F\in\mathscr{E}xp\mathscr{M}_{X}[T]$ is admissible if and only if for all $v\in X$, $F_v = \sum a_{i,v}T^{i}\in \mathscr{E}xp\mathscr{M}_{k}[T]$ is admissible, so admissibility may be checked locally. 
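As a toy illustration of this definition (with hypothetical classes, not used in the sequel), take $X = \mathrm{Spec}\,\mathbf{C}$, so that $\dim_X$ is the usual dimension. The polynomial $$F = \mathbf{L}\,T^3 + [Y]\,T^5,\qquad \dim Y = 3,$$ is admissible, since $\dim \mathbf{L} = 1 \leq 3-2$ and $\dim [Y] = 3 \leq 5-2$, whereas $\mathbf{L}^2T^3$ is not, since $2 > 3-2$. Similarly, if $\mathscr{A} = \{\alpha\}$ and $\rho'_{\alpha} = 2$, the polynomial $\mathbf{L}\,T_{\alpha}^2$ is $\rho'$-admissible: the specialisation $T_{\alpha} = T^{\rho'_{\alpha}} = T^2$ yields $\mathbf{L}\,T^4$, and $1\leq 4-2$. 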
In what follows, we are going to use the weight function from chapter \ref{hodgemodules}. Therefore, unless explicitly stated, in all that follows, the base field $k$ will be the field of complex numbers~$\mathbf{C}$. \subsection{Places in $C_0$} Let $v$ be a place in $C_0$. In this case, for any character $\xi$, $Z_v(\mathbf{T},\xi)$ is given by lemma \ref{arcspace}, taking~$h$ to be the characteristic function of the set~$\mathscr{U}(\mathcal{O}_v)$ inside $G(F_v)$. In other words, one has $h=0$ on $\Omega(A,\beta)$ whenever $A\cap \mathscr{A}_D \neq \varnothing$ or $\beta\not\in\mathscr{B}_0$, and $h=1$ otherwise. Therefore, \begin{equation}\label{zeta.formula} Z_v(\mathbf{T},\xi) = \sum_{\substack{A\subset \mathscr{A}\setminus\mathscr{A}_D\\\beta\in\mathscr{B}_{0,v}}}\prod_{\alpha\in\mathscr{A}}T_{\alpha}^{e_{\alpha,\beta}}\mathbf{L}^{\rho_{\beta}}\int_{\Omega(A,\beta)}\prod_{\alpha\in A}(\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{\mathrm{ord}(x_{\alpha})}\mathrm{e}(\langle x,\xi\rangle)\mathrm{d} x.\end{equation} We are going to study the local factors in this form. For all but a finite number of places in $C_0$, a precise analysis is required to prove meromorphic continuation of the Euler product (specialised for $\mathbf{T} = (T^{\rho'_{\alpha}})_{\alpha\in\mathscr{A}}$) for $|T|<\mathbf{L}^{-1 + \delta}$, with a pole at $\mathbf{L}^{-1}$. For the finite number of remaining places, a coarser estimate suffices, given by the following lemma: \begin{lemma}\label{finitenumberfactors} Let $v\in C_0$. The local factor $Z_v((T^{\rho'_{\alpha}})_{\alpha},\xi)$ converges for $|T|<\mathbf{L}^{-1 + \delta}$ for some $\delta >0$. 
\end{lemma} \begin{proof} Write $$Z_v(\mathbf{T},\xi) = \sum_{\beta\in\mathscr{B}_{0,v}}\prod_{\alpha\in\mathscr{A}}T_{\alpha}^{e_{\alpha,\beta}}\mathbf{L}^{\rho_{\beta}}\sum_{A\subset \mathscr{A}\setminus \mathscr{A}_D}Z_{A,\beta}(\mathbf{T},\xi),$$ with $$Z_{A,\beta}(\mathbf{T},\xi) = \int_{\Omega(A,\beta)}\prod_{\alpha\in A}(\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{\mathrm{ord}(x_{\alpha})}\mathrm{e}(\langle x,\xi\rangle)\mathrm{d} x.$$ It suffices to check convergence of each $Z_{A,\beta}(\mathbf{T},\xi)$. Using the notation $f_{\xi}$ for the linear form $\langle \cdot, \xi \rangle$ on $G_F$ and the rational function it induces on $X$, we have \begin{eqnarray*}Z_{A,\beta}(\mathbf{T},\xi)& =& \sum_{\mathbf{m}\in\mathbf{N}_{>0}^{A}}\prod_{\alpha\in A}(\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{m_{\alpha}}\int_{\substack{\Delta(A,\beta)\times \mathscr{L}(\mathbf{A}^1,0)^{n- |A|}\times\mathscr{L}(\mathbf{A}^1,0)^{|A|}\\ \mathrm{ord}(x_{\alpha}) = m_{\alpha}}}\mathrm{e}(f_{\xi}(x,y))\mathrm{d} x \mathrm{d} y\\ & = & \sum_{\mathbf{m}\in\mathbf{N}_{>0}^{A}}\prod_{\alpha\in A}(\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha})^{m_{\alpha}}\int_{\substack{\Delta(A,\beta)\times \mathscr{L}(\mathbf{A}^1,0)^{n-|A|}\times\mathscr{L}(\mathbf{A}^1)^{|A|}\\ \mathrm{ord}(x_{\alpha}) = 0}}\mathrm{e}(f_{\xi}(t^{\mathbf{m}}x,y))\mathrm{d} x \mathrm{d} y.\end{eqnarray*} By remark \ref{volume.integrals}, each integral is of weight at most the weight of the volume of the domain of integration, which, by (\ref{volumeaffineline}) and (\ref{volumeaffinelineordzero}) is \begin{eqnarray*} w( [\Delta(A,\beta)] (1-\mathbf{L}^{-1})^{|A|} \mathbf{L}^{-n+ |A|})&\leq& 2\dim [\Delta(A,\beta)] + 2(- n + |A|)\\ & \leq& 0 \end{eqnarray*} because the divisors $D_{\alpha}$ intersect transversely. Thus, the series converges if the series $$\sum_{\mathbf{m} \in \mathbf{N}_{>0}^{A}}\prod_{\alpha\in A}(\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha})^{m_{\alpha}}$$ converges. 
This series is equal to $$\prod_{\alpha\in A}\frac{\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}}{1-\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}}.$$ Specialising to $\mathbf{T} = (T^{\rho'_{\alpha}})_{\alpha}$ we get the result. \end{proof} For $\xi = 0$ we need to be more precise and check the exact order of the pole; we will therefore use this lemma only for $\xi\neq 0$. \subsubsection{Trivial character}\label{trivchar} Assume $\xi = 0$. Then the factor $\mathrm{e}(\langle x,\xi\rangle)$ equals 1, and we may compute directly (see \cite{CL}, 6.4): $$Z_v(\mathbf{T},0) = \sum_{\beta\in\mathscr{B}_{0,v}}\prod_{\alpha\in \mathscr{A}}T^{e_{\alpha,\beta}}_{\alpha}\mathbf{L}^{\rho_\beta}\sum_{A\subset\mathscr{A}\setminus\mathscr{A}_D}[\Delta(A,\beta)]\mathbf{L}^{-n + |A|}(1-\mathbf{L}^{-1})^{|A|}\prod_{\alpha\in A}\frac{\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}}{1-\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}}.$$ Therefore, $Z_v(\mathbf{T},0)$ is a rational function, and $Z_v(\mathbf{T},0) \prod_{\alpha\in \mathscr{A}\setminus\mathscr{A}_D}(1-\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}) $ is a Laurent polynomial (and a polynomial for almost all $v$). \begin{prop}\label{trivialcharconv} There is a real number $\delta>0$ such that the product $$\prod_{v\in C_0}\left(Z_v((T^{\rho'_{\alpha}})_{\alpha},0)\prod_{\alpha\in \mathscr{A}\setminus\mathscr{A}_D}(1-\mathbf{L}^{\rho_{\alpha}-1} T^{\rho'_{\alpha}}) \right)$$ converges for $|T|< \mathbf{L}^{-1+\delta}$ and takes a non-zero effective value in $\widehat{\mathscr{M}_k}$ at $T = \mathbf{L}^{-1}$. \end{prop} \begin{proof} Since all factors are Laurent polynomials, and by the properties of Euler products, it suffices to check convergence of the product over $v$ inside the dense open subset~$C_1$ of~$C_0$ (see notation \ref{C1}). Thus, we may assume that $\mathscr{B}_v = \{\beta\}$ (and denote $\Delta(A,\beta)$ simply by $\Delta(A)$) and that the integers $e_{\alpha,\beta}$ and $\rho_{\beta}$ are zero. 
Then the above formula simplifies to $$Z_v(\mathbf{T},0) = \sum_{A\subset \mathscr{A}\setminus\mathscr{A}_D}[\Delta(A)]\mathbf{L}^{-n + |A|}(1-\mathbf{L}^{-1})^{|A|}\prod_{\alpha\in A}\frac{\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}}{1-\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}}.$$ Put $F_v(\mathbf{T},0):= Z_{v}(\mathbf{T},0)\prod_{\alpha\in\mathscr{A}\setminus \mathscr{A}_D}(1-\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha})$. It is a polynomial. For $A$ such that $|A|\geq 2$ we have $\dim\Delta(A)\leq n-|A|$ so that \begin{eqnarray*}F_v(\mathbf{T},0) & = & 1 - \sum_{\alpha\in\mathscr{A}\setminus\mathscr{A}_D}\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha} + \sum_{\alpha\in \mathscr{A}\setminus\mathscr{A}_D}[\Delta(\{\alpha\})]\mathbf{L}^{1-n}\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha} + P_v(\mathbf{T})\\ & = & 1+\sum_{\alpha\in \mathscr{A}\setminus \mathscr{A}_D}([\mathscr{D}_{\alpha,v}]-\mathbf{L}^{n-1})\mathbf{L}^{-n}\mathbf{L}^{\rho_{\alpha}}T_{\alpha} + P_v(\mathbf{T})\end{eqnarray*} where $P_v(\mathbf{T})\in\mathscr{M}_k[\mathbf{T}]$ is a $\rho'$-admissible polynomial (see definition \ref{admissible}). This computation is uniform in $v\in C_1$, meaning that there is a polynomial $F(\mathbf{T},0)\in\mathscr{M}_{C_1}[\mathbf{T}]$ and a $\rho'$-admissible polynomial $P(\mathbf{T})\in\mathscr{M}_{C_1}[\mathbf{T}]$ such that, denoting by $v:\mathrm{Spec}\, k\to C_1$ the morphism defining the point $v$, we have $v^*F = F_v$, $v^*P = P_v$, and $$F(\mathbf{T},0) = 1 + \sum_{\alpha\in\mathscr{A}\setminus \mathscr{A}_D}([\mathscr{D}_{\alpha}] - \mathbf{L}^{n-1})\mathbf{L}^{-n}\mathbf{L}^{\rho_{\alpha}}T_{\alpha} + P(\mathbf{T})$$ in $\mathscr{M}_{C_1}[\mathbf{T}]$. 
By lemma \ref{weightcancellation}, we have $$w_{C_1}(([\mathscr{D}_{\alpha}] - \mathbf{L}^{n-1})\mathbf{L}^{-n}\mathbf{L}^{\rho_{\alpha}}) \leq 2(n-1) -2n + 2\rho_{\alpha} \leq 2(\rho_{\alpha} -1).$$ Specialising $T_{\alpha}$ to $T^{\rho'_{\alpha}}$, we see that we can apply lemma \ref{convergencetoapply} with $X = C_1$, as explained in remark~\ref{apply.polynomial}. Thus, there is a real number $\delta>0$ such that the infinite product $\prod_{v\in C_1}F_v((T^{\rho'_{\alpha}})_{\alpha},0)$ converges for $|T|<\mathbf{L}^{-1 + \delta}$ and has a non-zero effective value at~$\mathbf{L}^{-1}$ in $\widehat{\mathscr{M}_{k}}$, which is of the form $1 + a$ with $w(a) < 0$. Multiplying by the finite number of factors $v\in (C_0\setminus C_1)(k)$ does not change convergence. Moreover, for $v\in (C_0\setminus C_1)(k)$ the value of $F_v((T^{\rho'_\alpha})_{\alpha},0)$ at $T = \mathbf{L}^{-1}$ is exactly: \begin{equation}\label{valueatL}(1-\mathbf{L}^{-1})^{|\mathscr{A}\setminus \mathscr{A}_D|}\sum_{\substack{A\subset\mathscr{A}\setminus\mathscr{A}_D\\ \beta\in \mathscr{B}_0}} \mathbf{L}^{\rho_{\beta}-\sum_{\alpha\in\mathscr{A}}\rho'_{\alpha}e_{\alpha,\beta}}[\Delta(A,\beta)]\mathbf{L}^{-n},\end{equation} which is clearly effective. It is non-zero unless $\Delta(A,\beta) = \varnothing$ for all $A\subset \mathscr{A}\setminus\mathscr{A}_D$ and all $\beta\in\mathscr{B}_{0,v}$, which would mean that $\mathscr{U}(\mathcal{O}_v) = \varnothing$. The latter was ruled out by our assumption of existence of local sections, see \ref{sect.goodmodelchoice}. We may conclude that the product $$\prod_{v\in C_0}F_v((T^{\rho'_\alpha})_{\alpha},0)$$ converges for $|T| < \mathbf{L}^{-1 + \delta}$ and has a non-zero effective value at~$\mathbf{L}^{-1}$ in $\widehat{\mathscr{M}_{\mathbf{C}}}$. Moreover, we may give a formula for this value. 
For all $v\in C_1$, the expression in (\ref{valueatL}) simplifies to $$(1-\mathbf{L}^{-1})^{|\mathscr{A}\setminus\mathscr{A}_D|}\sum_{A\subset\mathscr{A}\setminus\mathscr{A}_D}[\Delta(A)]\mathbf{L}^{-n}$$ $$ = (1-\mathbf{L}^{-1})^{|\mathscr{A}\setminus\mathscr{A}_D|}\left ( 1 + \mathbf{L}^{-n}\sum_{\varnothing\neq A\subset \mathscr{A}\setminus\mathscr{A}_D}[\Delta(A)]\right)$$ because $[\Delta(\varnothing)]= [G(k)] = \mathbf{L}^{n}$, so that the total value at $\mathbf{L}^{-1}$ is: $$\prod_{v\in C_1}(1-\mathbf{L}^{-1})^{|\mathscr{A}\setminus \mathscr{A}_D|}\left(1 + \mathbf{L}^{-n}\sum_{\varnothing\neq A\subset \mathscr{A}\setminus\mathscr{A}_D}[\Delta(A)]\right) $$ $$\times\prod_{v\in C_0\setminus C_1}(1-\mathbf{L}^{-1})^{|\mathscr{A}\setminus \mathscr{A}_D|}\left(\sum_{\substack{A\subset\mathscr{A}\setminus\mathscr{A}_D\\ \beta\in \mathscr{B}_0}} \mathbf{L}^{\rho_{\beta}-\sum_{\alpha\in\mathscr{A}}\rho'_{\alpha}e_{\alpha,\beta}}[\Delta(A,\beta)]\mathbf{L}^{-n}\right).$$\end{proof} This result implies that the infinite product $\prod_{v\in C_0}Z_v((T^{\rho'_{\alpha}})_{\alpha},0)$ has a meromorphic continuation for $|T|<\mathbf{L}^{-1 + \delta}$, its only pole being a pole of order $\mathrm{Card}(\mathscr{A}\setminus\mathscr{A}_D) = \mathrm{rk}\,\mathrm{Pic}(U)$ at $T = \mathbf{L}^{-1}$. \subsubsection{Non-trivial characters} \label{nontrivchar} For every $\xi\in G(F_v)$, the linear form $x\mapsto \langle x,\xi\rangle$ on $G_{F_v}$ defines a meromorphic function $f_{\xi}$ on $X$. The support of its divisor of poles is contained in $\bigcup_{\alpha} D_{\alpha}$, so that we can write $$\div f_{\xi} = E(\xi) - \sum_{\alpha\in \mathscr{A}}d_{\alpha}(\xi) D_{\alpha},$$ where $E(\xi)$ is an effective divisor, and the integers $d_{\alpha}(\xi)$ \index{dalph@$d_{\alpha}(\xi)$, $d_{\alpha}$} are non-negative. Once $\xi$ is fixed, we will simply write $E$ and $d_{\alpha}$, omitting the mention of $\xi$. 
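To fix ideas, here is a toy illustration (a hypothetical one-dimensional situation, much simpler than the setting of this chapter): take $n = 1$, $G = \mathbf{G}_{\mathrm{a}}$ compactified by $\mathbf{P}^1$ with the single boundary divisor $D_{\infty}$, and let $\xi\in k^{\times}$ be a non-zero constant. Then $f_{\xi}(x) = x\xi$ and $$\div f_{\xi} = \{0\} - D_{\infty},$$ so that $E(\xi) = \{0\}$ and $d_{\infty}(\xi) = 1$. The integers $d_{\alpha}(\xi)$ thus record the order with which the character $\mathrm{e}(f_{\xi})$ oscillates on arcs approaching the boundary divisor $D_{\alpha}$. 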
In the following analysis, the place $v$ is fixed, so that $Z_v(\mathbf{T},\xi)$ will be simply denoted $Z(\mathbf{T},\xi)$. Moreover, using lemma \ref{finitenumberfactors}, we may assume that $v$ belongs to the open subset $C_1$ of $C$ (see notation \ref{C1}). We write $\Omega(A)$ \index{omegaa@$\Omega(A)$} for $\Omega(A,\beta)$. Recall that because of the restriction of the domain of summation (remark \ref{summationdomain}), $\xi$ is an element of $(t^{\nu_v}k[[t]])^n = (k[[t]])^n$. We have $$Z(\mathbf{T},\xi) = \sum_{A\subset \mathscr{A}\setminus \mathscr{A}_D} Z_A(\mathbf{T},\xi)$$ where $$Z_{A}(\mathbf{T},\xi) = \int_{\Omega(A)}\prod_{\alpha\in A}\left(\mathbf{L}^{\rho_{\alpha}}T_{\alpha}\right)^{\mathrm{ord} x_{\alpha}}\mathrm{e}(f_{\xi}(x))\mathrm{d} x.$$\index{ZA@$Z_A(\mathbf{T},\xi)$} \paragraph{The case $A = \varnothing$} The set $\Omega(\varnothing)$ corresponds to arcs with origin contained in none of the $D_{\alpha}$, that is, contained in $G$. Then we get $$Z_{\varnothing}(\mathbf{T},\xi) = \int_{\mathscr{L}(\mathbf{G}^n_{\mathrm{a}})}\mathrm{e}(\langle x,\xi\rangle)\mathrm{d} x.$$ Since $\xi$ is an element of $(t^{\nu_v}k[[t]])^n$, for all $x\in\mathscr{L}(\mathbf{G}^n_{\mathrm{a}})(k) = k[[t]]^n$, we have $$\mathrm{ord}(\langle x,\xi\rangle) = \mathrm{ord}(x_1\xi_1+\ldots + x_n\xi_n) \geq \nu_v.$$ Thus, in fact $r(\langle x,\xi\rangle) = 0$, and the integral is equal to 1. 
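Spelling out the last step: the integrand being constant equal to $1$, the integral computes the volume of the arc space, and by (\ref{volumearcspace}) applied to $\mathscr{X} = \mathbf{A}^n_R$, $$Z_{\varnothing}(\mathbf{T},\xi) = \mathrm{vol}(\mathscr{L}(\mathbf{G}^n_{\mathrm{a}})) = \mathbf{L}^{-n}[\mathbf{A}^n_k,0] = \mathbf{L}^{-n}\cdot \mathbf{L}^{n} = 1.$$ 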
\paragraph{The case $A = \{\alpha\}$} We are going to cut the integral into two pieces: arcs with origin outside or inside the divisor $E$: \begin{eqnarray*} Z_{\{\alpha\}}(\mathbf{T},\xi) &= &\int_{\mathscr{L}(\mathscr{X},D_{\alpha}^{\circ}\backslash E)} (\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{\mathrm{ord} x_{\alpha}} \mathrm{e}(f_{\xi}(x_{\alpha},\mathbf{y}))\mathrm{d} x_{\alpha} \mathrm{d} \mathbf{y} \\ & &+ \int_{\mathscr{L}(\mathscr{X},D_{\alpha}^{\circ}\cap E)} (\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{\mathrm{ord} x_{\alpha}} \mathrm{e}(f_{\xi}(x_{\alpha},\mathbf{y})) \mathrm{d} x_{\alpha} \mathrm{d} \mathbf{y}.\end{eqnarray*} By the equality $X(k((t)))= \mathscr{X}(k[[t]])$, the rational function $f_{\xi}$ on $X$ induces a rational function $f_{\xi}$ on $\mathscr{L}(\mathscr{X})$. On the subspace $\mathscr{L}(\mathscr{X},D_{\alpha}^{\circ}\setminus E)$, $f_{\xi}$ is of the form $f_{\xi}(x,\mathbf{y}) = g_{\xi}(x,\mathbf{y})x^{-d}$, where $g_{\xi}$ is a regular function on $\mathscr{L}(\mathscr{X},D_{\alpha}^{\circ}\setminus E)$. We may expand the function $g_{\xi}$ as a converging and non-vanishing power series in $x\in\mathscr{L}^{1}(\mathbf{A}^1,0)$ and $\mathbf{y}\in(\mathscr{L}^{1}(\mathbf{A}^1,0))^{B}$, where $B$ is an index set of cardinality $n-1$ for the remaining coordinates: \begin{equation}\label{g.series}g_{\xi}(x,\mathbf{y}) = \sum_{\substack{p\geq 0 \\ \mathbf{q}\in\mathbf{N}^B}}g_{p,\mathbf{q}}x^{p}\mathbf{y}^{\mathbf{q}},\end{equation} with $g_{p,\mathbf{q}}\in\mathcal{O}(D_{\alpha}^{\circ})[[t]].$ Since we consider only arcs with origin outside $E$, we have that $\mathrm{ord}(g_{0,\mathbf{0}})= 0$, and more generally, $\mathrm{ord} (g_{\xi}(x,\mathbf{y})) = 0$ for all $x,\mathbf{y}$. There are several cases to consider, depending on the order $d =d_{\alpha}$ of the pole of $f_{\xi}$ at~$D_{\alpha}$. 
Define $$\mathscr{A}_0^D(\xi) =\{\alpha\in\mathscr{A}\setminus \mathscr{A}_D,\ d_{\alpha} =0\}$$ and $$\mathscr{A}_1^D(\xi) =\{\alpha\in\mathscr{A}\setminus \mathscr{A}_D,\ d_{\alpha} =1\}.$$ \paragraph{The order of the pole at $D_{\alpha}$ is zero} Here we assume that $\alpha\in\mathscr{A}_0^{D}(\xi)$, so that $d=0$. Since $\mathrm{ord} (g_{\xi}(x,\mathbf{y})) \geq 0$, we have $r(g_{\xi}(x,\mathbf{y})) = 0$. Therefore \begin{eqnarray*} & & \int_{\mathscr{L}(\mathscr{X},D_{\alpha}^{\circ}\backslash E)} (\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{\mathrm{ord} x_{\alpha}} \mathrm{e}(f_{\xi}(x,\mathbf{y}))\mathrm{d} x \mathrm{d} \mathbf{y}\\ &= &\int_{(D_{\alpha}^{\circ}\setminus E) \times \mathscr{L}(\mathbf{A}^1,0)^{n-1}}\sum_{m\geq 1}(\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{m}\left(\int_{\substack{\mathscr{L}(\mathbf{A}^1,0) \\ \mathrm{ord} x = m}}\psi(r(g_{\xi}(x,\mathbf{y})))\mathrm{d} x \right)\mathrm{d} \mathbf{y}\\ & = & \int_{(D_{\alpha}^{\circ}\setminus E) \times \mathscr{L}(\mathbf{A}^1,0)^{n-1}}\sum_{m\geq 1}(\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{m} \mathbf{L}^{-m}(1-\mathbf{L}^{-1})\,\mathrm{d} \mathbf{y}\\ & = & (1-\mathbf{L}^{-1})\frac{\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}}{1-\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}}[D^{\circ}_{\alpha}\setminus E]\mathbf{L}^{-n+1}. \end{eqnarray*} \paragraph{The order of the pole at $D_{\alpha}$ is positive} Assume now $d\geq 1$. 
Then: $$\int_{\mathscr{L}(\mathscr{X},D_{\alpha}^{\circ}\setminus E)} (\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{\mathrm{ord} x_{\alpha}} \mathrm{e}(f_{\xi}(x,\mathbf{y}))\mathrm{d} x \mathrm{d} \mathbf{y} $$ \begin{eqnarray*} &=&\sum_{m\geq 1}(\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{m}\int_{\substack{\mathscr{L}(\mathscr{X},D_{\alpha}^{\circ}\setminus E)\\ \mathrm{ord} x = m}}\mathrm{e}(g_{\xi}(x,\mathbf{y})x^{-d})\mathrm{d} x \mathrm{d} \mathbf{y}\\ &=&\int_{(D_{\alpha}^{\circ}\setminus E)\times\mathscr{L}(\mathbf{A}^1,0)^{n-1}}\sum_{m\geq 1}(\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{m}\left(\int_{\substack{\mathscr{L}(\mathbf{A}^1,0)\\ \mathrm{ord} x = m}}\psi(r(g_{\xi}(x,\mathbf{y})x^{-d}))\mathrm{d} x \right)\mathrm{d} \mathbf{y}.\end{eqnarray*} Fixing $\mathbf{y}$ and viewing $g_{\xi}(x,\mathbf{y})$ as a power series in~$x$, we may apply lemma \ref{motivic.integral}: indeed, first of all we may drop all the terms of degree in $x$ greater than $d$, since $r$ vanishes on $k[[t]]$, so that we get a polynomial in $x$ with coefficients in $k[[t]]$. Moreover, its constant term is $g_{\xi}(0,\mathbf{y})$ which by a remark above is of order $0$. This shows that this expression is zero when $d>1$. When $d=1$, only the term for $m=1$ remains, and it is equal to $$-\mathbf{L}^{-2}\,[D_{\alpha}^{\circ}\setminus E]\,\mathbf{L}^{1-n} \mathbf{L}^{\rho_{\alpha}}T_{\alpha},$$ which is $\rho'$-admissible, as $\dim [D_{\alpha}^{\circ}\setminus E] = n-1$. 
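To summarise the computations of the last two paragraphs: the contribution to $Z_{\{\alpha\}}(\mathbf{T},\xi)$ of arcs with origin in $D_{\alpha}^{\circ}\setminus E$ is $$\left\{\begin{array}{ll} (1-\mathbf{L}^{-1})\frac{\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}}{1-\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}}\,[D^{\circ}_{\alpha}\setminus E]\,\mathbf{L}^{-n+1} & \text{if}\ d_{\alpha} = 0,\\ -\mathbf{L}^{-2}\,[D_{\alpha}^{\circ}\setminus E]\,\mathbf{L}^{1-n}\mathbf{L}^{\rho_{\alpha}}T_{\alpha} & \text{if}\ d_{\alpha} = 1,\\ 0 & \text{if}\ d_{\alpha}\geq 2, \end{array}\right.$$ and, as one checks on dimensions, only the case $d_{\alpha} = 0$ contributes a term which is not $\rho'$-admissible. 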
\paragraph{Arcs with origin in $E$} The term corresponding to arcs with origin in $E$ may be rewritten as \begin{eqnarray*}&&\sum_{m\geq 1}(\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{m}\int_{\substack{\mathscr{L}(\mathscr{X},D_{\alpha}^{\circ}\cap E)\\ \mathrm{ord} x = m}}\mathrm{e}(f_{\xi}(x,\mathbf{y})) \mathrm{d} x \mathrm{d} \mathbf{y}\\ & = & \sum_{m\geq 1}(\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{m}\int_{\substack{(D_{\alpha}^{\circ}\cap E)\times\mathscr{L}(\mathbf{A}^1)\times \mathscr{L}(\mathbf{A}^1,0)^{n-1}\\ \mathrm{ord} x = m}}\mathrm{e}(f_{\xi}(t^mx,\mathbf{y})) \mathrm{d} x \mathrm{d} \mathbf{y}\\ \end{eqnarray*} Using remark \ref{volume.integrals} again, the weight of the coefficient of degree $m$ is bounded by the weight of $\mathbf{L}^{m\rho_{\alpha}}[D_{\alpha}^{\circ}\cap E]\times \mathbf{L}^{-m}(1-\mathbf{L}^{-1})\mathbf{L}^{-n+1}$. The dimension of the latter is smaller than $$m\rho_{\alpha}+ (n-2) -m -(n-1) = m\rho_{\alpha} -m -1. $$ Since $m\geq 1$, we see that all these terms are $\rho'$-admissible. It remains to bound all terms of sufficiently large degree as in lemma \ref{convergencetoapply}. For this, put \begin{equation}\label{defofc}c = \max_{\alpha\in\mathscr{A}\setminus\mathscr{A}_D}\left(1- \frac{1}{2\rho_{\alpha}}\right) < 1,\end{equation} so that for all $\alpha\in\mathscr{A}\setminus \mathscr{A}_D$, $\rho_{\alpha}-\frac{1}{2}\leq c\rho_{\alpha}.$ Using remark \ref{bounddimension}, we see that $$2(m\rho_{\alpha} - m-1) + \dim C_1 \leq 2(mc\rho_{\alpha} -1) + 1 = 2c (m\rho_{\alpha}) -1$$ so if $T_{\alpha}$ is specialised to $T^{\rho_{\alpha}}$ for all $\alpha\in\mathscr{A}\setminus\mathscr{A}_D$, we are in the situation of lemma \ref{convergencetoapply}. Note that here we could in fact have taken a smaller $c$, namely $c = \max_{\alpha\in \mathscr{A}\setminus\mathscr{A}_D}(1 -\rho_{\alpha}^{-1})$. The definition we chose will be important in the next case.
\paragraph{The case $\#A \geq 2$} Then $Z_{v,A}$ can be rewritten as: \begin{eqnarray*}Z_{v,A}(\mathbf{T},\xi) &=& \sum_{\mathbf{m}\in\mathbf{N}_{>0}^{A}} \prod_{\alpha\in A}(\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{m_{\alpha}}\int_{\mathrm{ord} x_{\alpha} = m_{\alpha}}\mathrm{e}(f_{\xi}(x,y))\mathrm{d} x \mathrm{d} y \\ \end{eqnarray*} We proceed as in the previous case. The integral in each term is over the constructible subset of $\Omega(A)$ of points satisfying $\mathrm{ord}(x_{\alpha}) = m_{\alpha}$, which has motivic volume $$[D_A^{\circ}]\prod_{\alpha\in A}\left( \mathbf{L}^{-m_{\alpha}}(1-\mathbf{L}^{-1})\right) \mathbf{L}^{-n + \#A}.$$ The constructible set $D_A^{\circ}$ is given as an open subset of an intersection of the $\#A$ divisors $D_{\alpha}$, $\alpha\in A$, and therefore is of dimension at most $n-\#A$. We may conclude that the dimension of the element of $\mathscr{E}xp\mathscr{M}_k$ given by the integral is at most $-\sum_{\alpha\in A}m_{\alpha} \leq -2.$ Thus, all terms are $\rho'$-admissible, and it remains to give a stronger bound for all terms of sufficiently large degree. Using remark \ref{bounddimension} again, with $c$ given by (\ref{defofc}), we have \begin{eqnarray*}2\sum_{\alpha\in A}m_{\alpha}(\rho_{\alpha}-1) + \dim C_1 &\leq& 2 \sum_{\alpha\in A}m_{\alpha}\left(\rho_{\alpha}-\frac{1}{2}\right) - \sum_{\alpha\in A} m_{\alpha}+1\\ &\leq & 2c \sum_{\alpha\in A} m_{\alpha}\rho_{\alpha} - 1.\end{eqnarray*} Thus, here again we are in the situation of lemma \ref{convergencetoapply}.
\paragraph{Putting everything together} We decomposed the zeta function at $v$ in the following manner: \begin{eqnarray*}Z_{v}(\mathbf{T},\xi) &= & 1 + \sum_{\alpha\in\mathscr{A}_0^D(\xi)}Z_{v,\alpha}(\mathbf{T},\xi) + \sum_{\alpha\in\mathscr{A}_1^D(\xi)}Z_{v,\alpha}(\mathbf{T},\xi) + \sum_{\# A\geq 2} Z_{v,A}(\mathbf{T},\xi)\\ &= & 1 + \mathbf{L}^{-n}\sum_{\alpha\in \mathscr{A}_0^{D}(\xi)}[D_{\alpha}^{\circ}\setminus E] \frac{\mathbf{L}^{\rho_{\alpha}}T_{\alpha}}{1-\mathbf{L}^{\rho_{\alpha}-1} T_{\alpha}} + \substack{\text{terms satisfying the bounds}\\\text{of lemma \ref{convergencetoapply} for $c$ given by (\ref{defofc})}} \end{eqnarray*} Since $\rho_{\alpha}-1 \leq c\rho_{\alpha}$, multiplication by $\mathbf{L}^{\rho_{\alpha}-1}T^{\rho_{\alpha}}$ preserves the bounds of lemma \ref{convergencetoapply}. Thus, multiplying everything by $$\prod_{\alpha\in \mathscr{A}^D_0(\xi)}(1-\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha})$$ and keeping only potentially non-admissible terms, we get: \begin{eqnarray}\prod_{\alpha\in \mathscr{A}_0^D(\xi)}(1-\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha})Z_{v}(\mathbf{T},\xi)& = & 1 - \sum_{\alpha\in\mathscr{A}_0^D(\xi)}\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha} \label{A0line}\\ && +\ \mathbf{L}^{1-n}\sum_{\alpha\in\mathscr{A}^D_0(\xi)}[D_{\alpha}^{\circ}\setminus E](\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha})\label{A1line}\\ & & +\ \ \substack{\text{terms satisfying the bounds}\\\text{of lemma \ref{convergencetoapply} for $c$ given by (\ref{defofc})}} \label{A2line} \end{eqnarray} \begin{prop}\label{nontrivcharconv} There is a real number $\delta>0$ such that the product $$\prod_{v\in C_0}\left(Z_v((T^{\rho'_{\alpha}})_{\alpha},\xi)\prod_{\alpha\in \mathscr{A}^D_0(\xi)}(1-\mathbf{L}^{\rho_{\alpha}-1} T^{\rho_{\alpha}}) \right)$$ converges for $|T|< \mathbf{L}^{-1+\delta}$. \end{prop} \begin{proof} The above calculations give explicit formulas for the main terms of all $v\in C_1$.
Using lemma \ref{finitenumberfactors}, it suffices to check convergence for these places, by showing that they satisfy the conditions of lemma \ref{convergencetoapply} for $c$ given by (\ref{defofc}) and some sufficiently large $M$. According to the above, it suffices to bound the weights of the terms in (\ref{A0line}) and (\ref{A1line}), which is done exactly as in the proof of proposition \ref{trivialcharconv}. The conclusion follows. \end{proof} \subsection{Places in $S$} Let $v$ be a place in $S = C\setminus C_0$. In this case, for any character $\xi$, $Z_v(\mathbf{T},\xi)$ is given by lemma \ref{arcspace}, taking $h=1$: $$Z_v(\mathbf{T},\xi) = \sum_{\substack{A\subset \mathscr{A}\\\beta\in\mathscr{B}_0}}\prod_{\alpha\in\mathscr{A}}T_{\alpha}^{e_{\alpha,\beta}}\mathbf{L}^{\rho_{\beta}}\int_{\Omega(A,\beta)}\prod_{\alpha\in A}(\mathbf{L}^{\rho_{\alpha}}T_{\alpha})^{\mathrm{ord}(x_{\alpha})}\mathrm{e}(\langle x,\xi\rangle)\mathrm{d} x.$$ In this section we are going to use Clemens complexes: see \cite{CL}, section 2.3, for the definition. \subsubsection{Trivial character}\label{trivcharS} Assume $\xi = 0$, so that, as in section 6.4 of \cite{CL}, \begin{equation}\label{zetatrivialcharS}Z_{v}(\mathbf{T},0) = \sum_{\substack{A\subset \mathscr{A}\\ \beta\in\mathscr{B}_{1,v}}}\left(\prod_{\alpha\in\mathscr{A}}T_{\alpha}^{e_{\alpha,\beta}}\mathbf{L}^{\rho_{\beta}}\right) [\Delta(A,\beta)](1-\mathbf{L}^{-1})^{|A|}\mathbf{L}^{-n + |A|}\prod_{\alpha\in A}\frac{\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}}{1-\mathbf{L}^{\rho_{\alpha}-1} T_{\alpha}}.\end{equation} This case has essentially been treated in section 6.4 and proposition 4.3.2 of \cite{CL}. The only modification we have to make is to adapt to the case where $D$ is not all of $X\setminus G$, so that the $\alpha\not\in \mathscr{A}_D$ do not contribute to the pole.
For this, we proceed as in \cite{CL}, fixing for every pair $(A,\beta)$ a maximal subset $A_0$ of $\mathscr{A}_{D}$ such that $A\cap \mathscr{A}_D\subset A_0$ and $\Delta(A_0,\beta)\neq \emptyset$. We collect the terms in equation (\ref{zetatrivialcharS}) corresponding to pairs $(A,\beta)$ associated with any given $A_0$: $$Z_{v}(\mathbf{T},0) = \sum_{\substack{A_0\in \mathrm{Cl}_{v}^{\mathrm{an},\max}(X,D)}}\sum_{\substack{A\subset \mathscr{A}, \beta\in\mathscr{B}_{1,v}\\ (A,\beta) \mapsto A_0}}\left(\prod_{\alpha\in\mathscr{A}}T_{\alpha}^{e_{\alpha,\beta}}\mathbf{L}^{\rho_{\beta}}\right) [\Delta(A,\beta)]$$ $$\times(1-\mathbf{L}^{-1})^{|A|}\mathbf{L}^{-n + |A|}\prod_{\alpha\in A\setminus\mathscr{A}_D}\frac{\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}}{1-\mathbf{L}^{\rho_{\alpha}-1} T_{\alpha}} \prod_{\alpha\in A\cap \mathscr{A}_D}\frac{\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}}{1-\mathbf{L}^{\rho_{\alpha}-1} T_{\alpha}}.$$ Thus, there exists a family of Laurent polynomials $(P_{v,A})$ with coefficients in $\mathscr{M}_k$ indexed by the set of maximal faces $A$ of $\mathrm{Cl}_v^{\mathrm{an}}(X,D)$ such that $$Z_v(\mathbf{T},0) = \sum_{A\in \mathrm{Cl}_{v}^{\mathrm{an},\max}(X,D)}\frac{P_{v,A}(\mathbf{T})}{\prod_{\alpha\in \mathscr{A}\setminus \mathscr{A}_D}(1-\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha})}\prod_{\alpha\in A}\frac{1}{1-\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}},$$ with $P_{v,A}(\mathbf{T})$ congruent to some non-zero effective element of $\mathscr{M}_k$ modulo the ideal generated by the polynomials $1-\mathbf{L}^{\rho_{\alpha}-1}T_{\alpha}$ for $\alpha\in \mathscr{A}$.
Putting $T_{\alpha} = T^{\rho'_{\alpha}}$ for every $\alpha\in\mathscr{A}$, we may deduce from this that there is a family of Laurent series $(F_{v,A})$ with coefficients in $\mathscr{M}_k$, indexed by the set of maximal faces $A$ of $\mathrm{Cl}_v^{\mathrm{an}}(X,D)$, converging for $|T| < \mathbf{L}^{-1 + \min_{\alpha\in\mathscr{A}\setminus \mathscr{A}_D} \frac{1}{\rho_{\alpha}}}$, taking a non-zero effective value in $\widehat{\mathscr{M}}_k$ at $\mathbf{L}^{-1}$, and such that $$Z_v(T,0) = \sum_{A\in \mathrm{Cl}_{v}^{\mathrm{an},\max}(X,D)}F_{v,A}(T)\prod_{\alpha\in A}\frac{1}{1-(\mathbf{L} T)^{\rho_{\alpha}-1}}.$$ In particular, setting $d_v = 1 + \dim \mathrm{Cl}_v^{\mathrm{an}}(X,D)$, we may deduce the following result: \begin{prop} There is a real number $\delta >0$ such that for every non-zero common multiple $a$ of the integers $\rho_{\alpha}-1$, $\alpha\in\mathscr{A}_D$, the Laurent series $(1-\mathbf{L}^{a}T^{a})^{d_v}Z_v(T,0)$ converges for $|T| <\mathbf{L}^{-1 + \delta}$ and takes a non-zero effective value at~$\mathbf{L}^{-1}$. \end{prop} \begin{remark} According to the above calculations, one may take $\delta = \min_{\alpha\in\mathscr{A}\setminus\mathscr{A}_D}\frac{1}{\rho_{\alpha}}.$ \end{remark} \subsubsection{Non-trivial characters} For this case we refer to proposition 4.3.4 and section~6.5 in~\cite{CL}. Recall from section \ref{summationdomain} that we restricted the summation domain to a finite-dimensional $k$-vector space $V$. For any $v$, denote by $\a:V_{F_v}\to G_{F_v}$ the corresponding $F_v$-linear inclusion. For every $v$, we then have a Laurent series $Z_{v}(T,\a(\cdot))\in \mathscr{E}xp\mathscr{M}_{V}[[T]][T^{-1}]$, and we ask for its convergence properties. Section 6.5 of \cite{CL} gives an argument, based on ideas from section 3.4 of \cite{CLTi}, to describe the convergence properties of $Z_v(T,\a(\cdot))$, uniformly on the strata of a constructible partition of $V$.
First of all, Chambert-Loir and Loeser show lemma~6.5.1, which allows them to resolve indeterminacies of the function~$f_{\xi}$ uniformly on each stratum $P$ of such a partition. Then they apply change of variables to show that one can compute the integral giving $Z_v(\mathbf{T},\xi)$ on the fibre above $\xi$ of such a resolution. On the other hand, they have a general result, proposition 5.3.1, giving a formula for motivic Igusa zeta functions without indeterminacies. It states that the poles of such an Igusa zeta function are controlled by the set of maximal faces of the subcomplex $\mathrm{Cl}^{\mathrm{an}}_v(X,D)_{\xi}$ of the complex $\mathrm{Cl}^{\mathrm{an}}_v(X,D)$ where we only keep vertices $\alpha\in \mathscr{A}_{D}$ such that $d_{\alpha}(\xi)=0$. This argument works in exactly the same way in our setting, the only difference being that in Chambert-Loir and Loeser's paper, the set $\mathscr{A}_D$ is equal to $\mathscr{A}$, which is not necessarily the case here. Therefore, proposition 4.3.4 from \cite{CL} adapts to our setting in the following form: \begin{prop}\label{nontrivcharS} Let $v\in S$ and let $d_v(\xi) = 1 + \dim \mathrm{Cl}_v^{\mathrm{an}}(X,D)_{\xi}$. There exists a constructible partition $(U_{v,i})$ of $V\setminus\{0\}$ on each stratum of which $\xi\mapsto d_v(\xi)$ is constant equal to some integer $d_{v,i}$, and, for every $i$, an element $P_{v,i}\in\mathscr{E}xp\mathscr{M}_{U_{v,i}}[\mathbf{T},\mathbf{T}^{-1}]$ and finite families $(a_{v,i,j})$, $(b_{v,i,j})$ where $a_{v,i,j}\in\mathbf{N}$, $b_{v,i,j}\in\mathbf{N}^{\mathscr{A}}$, such that the restriction of $Z_{v}(\mathbf{T},\a(\cdot))$ to $U_{v,i}$ equals $$\prod_{j}(1-\mathbf{L}^{a_{v,i,j}}\mathbf{T}^{b_{v,i,j}})^{-1} P_{v,i}(\mathbf{T};\cdot).$$ Moreover, there exist integers $a_{v,i}\geq 1$ and a real number $\delta>0$ such that the restriction to~$U_{v,i}$ of $(1-(\mathbf{L} T)^{a_{v,i}})^{d_{v,i}}Z_v(T,\a(\cdot))$ converges for $|T|<\mathbf{L}^{-1 + \delta}$. 
\end{prop} \section{Proof of the main theorem and its corollary} According to (\ref{summation.zeta}), we may write the multivariate zeta function $Z(\mathbf{T})$ in the form $$Z(\mathbf{T}) = \mathbf{L}^{(1-g)n}Z(\mathbf{T},0) + \mathbf{L}^{(1-g)n}\sum_{\xi\in V\setminus\{0\}}Z(\mathbf{T},\xi)$$ where $V$ is a finite-dimensional $k$-vector space contained in $k(C)^n$ and $g$ is the genus of the smooth projective curve $C$. We are interested in the convergence properties of $$Z(T) = \mathbf{L}^{(1-g)n}Z(T,0) + \mathbf{L}^{(1-g)n}\sum_{\xi\in V\setminus\{0\}}Z(T,\xi)$$ where for all $\xi$, we have $Z(T,\xi) = Z((T^{\rho'_{\alpha}})_{\alpha},\xi).$ Recall we still assume $k=\mathbf{C}$. \subsection{The function $Z(T,0)$}\label{zt0} In section \ref{trivchar} we showed the convergence beyond~$\mathbf{L}^{-1}$ of the product of local factors $Z_v(T,0)$ over $v\in C_0$ after multiplication by the polynomial $$\prod_{\alpha\in\mathscr{A}\setminus\mathscr{A}_D} (1-\mathbf{L}^{\rho_{\alpha}-1}T^{\rho_{\alpha}})$$ at each place $v$. To derive a meromorphic continuation for $\prod_{v\in C_0}Z_v(T,0)$, it therefore suffices to describe the convergence of the product $$\prod_{\alpha\in\mathscr{A}\setminus\mathscr{A}_D}\prod_{v\in C_0} (1-\mathbf{L}^{\rho_{\alpha}-1}T^{\rho_{\alpha}})^{-1}.$$ In the latter, we recognise for each $\alpha\in \mathscr{A}\setminus \mathscr{A}_D$ the Euler product decomposition of the motivic zeta function of $C_0$ at $\mathbf{L}^{\rho_{\alpha}-1}T^{\rho_{\alpha}}$. Denote by $Z_C(T) = \sum_{m\geq 0}[S^m C] T^m\in\mathscr{M}_k[[T]]$ the motivic zeta function of the smooth projective curve $C$. Since $k=\mathbf{C}$ is algebraically closed, we have, by theorem 1.1.9 in \cite{Kapr}, that $$Z_C(T) = \frac{P_C(T)}{(1-T)(1-\mathbf{L} T)}$$ where $P_C(T)\in\mathscr{M}_k[T]$ is a polynomial of degree $2g$ such that $P_C(\mathbf{L}^{-1}) = \mathbf{L}^{-g}[J(C)]$, with $J(C)$ being the Jacobian of $C$. 
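This formula can be checked directly in the simplest case (a standard example, recalled here only as a sanity check): \begin{example} For $C = \mathbf{P}^1$, so that $g = 0$, one has $S^m\mathbf{P}^1\simeq \mathbf{P}^m$, hence $[S^m\mathbf{P}^1] = 1 + \mathbf{L} + \ldots + \mathbf{L}^m$ and $$Z_{\mathbf{P}^1}(T) = \sum_{m\geq 0}[\mathbf{P}^m]T^m = \frac{1}{(1-T)(1-\mathbf{L} T)},$$ so that $P_{\mathbf{P}^1}(T) = 1$; this is consistent with the equality $P_C(\mathbf{L}^{-1}) = \mathbf{L}^{-g}[J(C)]$, since $J(\mathbf{P}^1)$ is trivial. \end{example}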
Moreover, the zeta function $Z_{C_0}(T)$ of the open dense subset $C_0\subset C$ is given by $$Z_{C_0}(T) = \prod_{v\in C_0}(1-T)^{-1} = Z_C(T)\prod_{v\in C\setminus C_0}(1-T).$$ We have \begin{eqnarray*}\prod_{v\in C} \prod_{\alpha\in\mathscr{A}\setminus \mathscr{A}_D}(1-\mathbf{L}^{\rho_{\alpha}-1}T^{\rho_{\alpha}})^{-1} &=& \prod_{\alpha\in\mathscr{A}\setminus \mathscr{A}_D}Z_C(\mathbf{L}^{\rho_{\alpha}-1}T^{\rho_{\alpha}})\\ & =& \prod_{\alpha\in\mathscr{A}\setminus\mathscr{A}_D}\frac{P_C(\mathbf{L}^{\rho_{\alpha}-1}T^{\rho_{\alpha}})}{(1-\mathbf{L}^{\rho_{\alpha}-1}T^{\rho_{\alpha}})(1-(\mathbf{L} T)^{\rho_{\alpha}})}.\end{eqnarray*} Since $1-\mathbf{L}^{\rho_{\alpha}-1}T^{\rho_{\alpha}}$ evaluated at $\mathbf{L}^{-1}$ is $1-\mathbf{L}^{-1} = \mathbf{L}^{-1}[\mathbf{A}^1\setminus\{0\}]$ which is effective, we may conclude that the product over places in $C_0$ $$\prod_{v\in C_0}\prod_{\alpha\in \mathscr{A}\setminus\mathscr{A}_D}(1-\mathbf{L}^{\rho_{\alpha}-1}T^{\rho_{\alpha}})^{-1} $$ is of the form $$\frac{F(T)}{\prod_{\alpha\in\mathscr{A}\setminus\mathscr{A}_D}(1-(\mathbf{L} T)^{\rho_{\alpha}})}$$ where $F\in \mathscr{M}_k[[T]]$ is a rational power series converging for $|T| < \mathbf{L}^{-1+\delta}$ for some $\delta>0$ (more precisely, any $\delta\leq \min_{\alpha\in\mathscr{A}\setminus \mathscr{A}_D} \frac{1}{\rho_{\alpha}}$ works) and taking a non-zero effective value at~$\mathbf{L}^{-1}$. Thus, by proposition \ref{trivialcharconv}, for a common multiple $a$ of the $\rho_{\alpha}, \alpha\in \mathscr{A}\setminus\mathscr{A}_D$, we may write the product $\prod_{v\in C_0}Z_v(T,0)$ in the form $$\frac{F_1(T)}{(1-(\mathbf{L} T)^{a})^{|\mathscr{A}\setminus \mathscr{A}_{D}|}}$$ for $F_1(T)\in\mathscr{M}_k[[T]]$ converging for $|T|<\mathbf{L}^{-1 + \delta}$ for some $\delta>0$ and taking a non-zero effective value at $\mathbf{L}^{-1}$.
Thus, the product $\prod_{v\in C_0}Z_v(T,0)$ has a pole at $\mathbf{L}^{-1}$ of order $|\mathscr{A}\setminus \mathscr{A}_D| = \mathrm{rk}\ \mathrm{Pic}(U)$, and a meromorphic continuation beyond $\mathbf{L}^{-1}$. Combining this with the result of section \ref{trivcharS} we see that each place $v\in C\setminus C_0$ gives an additional contribution to the pole at $\mathbf{L}^{-1}$, of order exactly $d_v = 1+ \dim\mathrm{Cl}_v^{\mathrm{an}}(X,D)$. Finally, we can conclude that there is a real number $\delta >0$, a Laurent series $G_0(T)\in \mathscr{M}_k[[T]][T^{-1}]$ converging for $|T|<\mathbf{L}^{-1 + \delta}$, and taking a non-zero effective value at $\mathbf{L}^{-1}$, and an integer $a\geq 1$ (we may take $a$ to be any common multiple of the $\rho'_{\alpha},\ \alpha\in\mathscr{A}$) such that the product $Z(T,0) = \prod_{v\in C} Z_v(T,0)$ may be written in the form $$Z(T,0) = \frac{G_0(T)}{(1-(\mathbf{L} T)^a)^{r}},$$ where $$r = \mathrm{rk}\ \mathrm{Pic}(U) +\sum_{v\in C\setminus C_0}(1 + \dim\mathrm{Cl}_v^{\mathrm{an}}(X,D)).$$ \begin{example}\begin{enumerate}\item Assume $U = G$, so that $\mathrm{Pic}(U) = 0$. One then recovers the result from \cite{CL}. \item Assume $U=X$. Then one may take $C_0 = C$, and the order of the pole is exactly $\mathrm{rk}\ \mathrm{Pic}(X)$. \end{enumerate} \end{example} \subsection{The function $Z(T,\xi)$}\label{ztxi} We proceed as in section \ref{zt0}: according to proposition \ref{nontrivcharconv}, for every $\xi\in V\setminus\{0\}$ the product $$\left(\prod_{v\in C_0}Z_v(T,\xi)\right)\prod_{\alpha\in \mathscr{A}_0^{D}(\xi)}Z_C(\mathbf{L}^{\rho_{\alpha}-1}T^{\rho_{\alpha}})\in\mathscr{E}xp\mathscr{M}_k[[T]][T^{-1}]$$ converges for $|T| < \mathbf{L}^{-1 + \delta}$ for some $\delta >0$.
We may apply this to the generic point of $V\setminus\{0\}$ and use spreading-out and induction on the dimension to show that there exists a finite constructible partition of $V\setminus\{0\}$ on which the functions $\xi\mapsto d_{\alpha}(\xi)$ are constant, and such that this convergence holds uniformly in $\xi$ on each piece of the partition. Proposition \ref{nontrivcharS} on the other hand tells us that the order of the pole of $\prod_{v\in C\setminus C_0} Z_v(T,\xi)$ at $\mathbf{L}^{-1}$ is at most $\sum_{v\in C\setminus C_0} d_v(\xi)$, where $d_v(\xi) = 1 + \dim \mathrm{Cl}^{\mathrm{an}}_v(X,D)_{\xi}$, again uniformly on the pieces of a constructible partition of $V\setminus\{0\}$. Using a partition of $V\setminus\{0\}$ refining the aforementioned two partitions, we may conclude that for any stratum $P$ of this partition, the order of the pole of $$\mathbf{L}^{(1-g)n}\sum_{\xi\in \a(P)}Z(T,\xi)$$ is at most $$|\mathscr{A}_0^D(\xi)| + \sum_{v\in C\setminus C_0}d_v(\xi)$$ for any $\xi\in \a(P)$ (recall that $\xi\mapsto d_{\alpha}(\xi)$, and therefore also $\xi \mapsto |\mathscr{A}_0^D(\xi)|$ and $\xi \mapsto d_v(\xi)$, are constant on $P$). Lemma 3.5.4 in \cite{CLTi} shows that this is strictly less than the order~$r$ of the pole of $Z(T,0)$ at $\mathbf{L}^{-1}.$ \subsection{Conclusion of the proof of theorem \ref{main}} Taking $a$ to be the least common multiple of the integers $\rho'_{\alpha}$, $\alpha\in \mathscr{A}$, and of the integers $a_{v,i}$ appearing in proposition \ref{nontrivcharS}, we have shown that $(1-(\mathbf{L} T)^{a})^{r}Z(T)$ converges for $T = \mathbf{L}^{-1}$, and takes a non-zero effective value at $\mathbf{L}^{-1}$, which concludes the proof of theorem~\ref{main}. \subsection{Proof of corollary \ref{maincor}} Applying lemma \ref{coefgrowth}, we get corollary \ref{maincor} in the case where $k = \mathbf{C}$. We now explain how we may deduce from this the general case.
All the geometric data in our counting problem involves a finite number of equations over the field $k$: we may therefore assume that everything is defined over a finitely generated subfield of $\mathbf{C}$. Moreover, the assumption $\mathscr{U}(\mathcal{O}_v) \neq \varnothing$ for all $v\in C_0$ may be reformulated more geometrically by saying that the volume of the arc space $\mathscr{L}(\mathscr{U}_v)$ at the place $v$ should be non-zero. With the notations of section \ref{sect.integralarcspace}, this volume may be expressed by the formula: \begin{eqnarray*} \mathrm{vol}(\mathscr{L}(\mathscr{U}_v)) &= &\sum_{\substack{ A \subset \mathscr{A}\setminus \mathscr{A}_D\\ \beta\in \mathscr{B}_{0,v}}}\mathrm{vol}(\Omega(A,\beta))\\ & = & \mathbf{L}^{-n}\sum_{\substack{ A \subset \mathscr{A}\setminus \mathscr{A}_D\\ \beta\in \mathscr{B}_{0,v}}}[\Delta(A,\beta)].\end{eqnarray*} Thus, at least one of the sets $\Delta(A,\beta)$ for $A \subset \mathscr{A}\setminus \mathscr{A}_D$, $\beta\in \mathscr{B}_{0,v}$ has a $k$-point. Since it is non-empty and defined over a subfield of $\mathbf{C}$, it also has a $\mathbf{C}$-point. We have thus shown that without loss of generality, even if the original problem was stated over some algebraically closed field $k$ of characteristic zero, we may in fact assume everything is defined over $\mathbf{C}$, and apply corollary \ref{maincor} in this setting. By functoriality of Hilbert schemes, we may then deduce corollary \ref{maincor} over $k$. \section{Symmetric products}\label{sect.reviewsymproducts} \subsection{Multidimensional partitions} Fix an integer $p\geq 1$, and denote $\mathcal{I} = \mathbf{N}^{p}\setminus\{0\}$ and $\mathcal{I}_0 = \mathbf{N}^p$.
Consider the free abelian monoid over~$\mathcal{I}$: $$\mathbf{N}^{(\mathcal{I})} = \{(m_{\i})_{\i\in \mathcal{I}}\in\mathbf{N}^{\mathcal{I}},\ m_{\i} = 0\ \text{for almost all}\ \i\}.$$ To an element $\pi = (m_{\i})_{\i\in \mathcal{I}}\in \mathbf{N}^{(\mathcal{I})}$ we can associate canonically a $p$-tuple $$\lambda(\pi) = \sum_{\i\in \mathcal{I}}m_{\i}\i\in \mathcal{I}_0.$$ Thus, we have a well-defined map $$\lambda: \mathbf{N}^{(\mathcal{I})}\longrightarrow \mathcal{I}_0.$$ We say $\pi$ is a partition of $\mathbf{m}\in \mathcal{I}$ if $\lambda(\pi) = \mathbf{m}$. \begin{notation} Recall from notation \ref{partitionnotation} that another notation for partitions is as follows: a partition of $\mathbf{m}$ can be written in the form $[\a_1,\ldots,\a_r]$ where $\a_1,\ldots,\a_r\in \mathcal{I}$ are not necessarily distinct and such that $\a_1 + \ldots + \a_r = \mathbf{m}$. The order of the $\a_i$ in this notation is not important: we consider $[\a_1,\ldots,\a_r]$ to be the same as $[\a_{\sigma(1)},\ldots,\a_{\sigma(r)}]$ for all $\sigma\in\mathfrak{S}_r$. \end{notation} \begin{example} For $p=1$, we recover partitions of integers: indeed, in this case an element $\pi$ of $\mathbf{N}^{(\mathcal{I})}$ is a family $(m_i)_{i\geq 1}$ of non-negative integers, almost all zero, $\lambda(\pi) = \sum_{i\geq 1}m_i i$ is some integer $m$, and $\pi$ determines a partition $$\sum_{i\geq 1} \underbrace{(i + \ldots + i)}_{m_i\ \text{times}} = m$$ of the integer $m$, the non-negative integer $m_i$ being the number of occurrences of $i$ in this partition.
For $p=2$, consider for example $$\pi = \left[\left(\begin{array}{c} 2 \\ 1 \end{array}\right),\left(\begin{array}{c} 2 \\ 1 \end{array}\right),\left(\begin{array}{c} 0 \\ 3 \end{array}\right)\right].$$ It is a partition of $$\left(\begin{array}{c} 4 \\ 5 \end{array}\right) = 2 \left(\begin{array}{c} 2 \\ 1 \end{array}\right) + \left(\begin{array}{c} 0 \\ 3 \end{array}\right).$$ Note that this 2-dimensional partition gives in particular a one-dimensional partition for each coordinate: $[2,2]$ for the first coordinate, and $[1,1,3]$ for the second one. However, it carries more information than just the choice of these two partitions, since it also matches up their parts in some way. Thus, the partition $$\left(\begin{array}{c} 4 \\ 5 \end{array}\right) = \left(\begin{array}{c} 0 \\ 1 \end{array}\right) + \left(\begin{array}{c} 2 \\ 1 \end{array}\right) + \left(\begin{array}{c} 2 \\ 3 \end{array}\right)$$ is different from $\pi$, but yields the same partitions of its coordinates. \end{example} \subsection{From partitions to symmetric products}\label{symproducts} Let $k$ be a perfect field. Let $\pi = (n_{\i})_{\i\in\mathcal{I}}\in \mathbf{N}^{(\mathcal{I})}$ and let $\mathscr{X} = (X_{\i})_{\i\in \mathcal{I}_0}$ be a family of constructible subsets of projective varieties over a quasi-projective $k$-variety $X$. Assume moreover that there is an open subset $U$ of $X$ such that $X_0\times_XU \simeq U$ and such that $X\setminus U$ is a finite union of closed points. In chapter~\ref{eulerproducts}, in particular in sections \ref{definition}, \ref{anyset} and \ref{sect.addX0}, we defined a notion of \emph{symmetric product}~$S^{\pi}\mathscr{X}$. It follows from the construction that $S^{\pi}\mathscr{X}$ comes with a natural morphism to $S^{\pi}X$. Define also for any $\mathbf{m}\in\mathcal{I}$, the constructible set $S^{\mathbf{m}}(\mathscr{X})$ to be the disjoint union of the $S^{\pi}(\mathscr{X})$ for all partitions $\pi$ of $\mathbf{m}$.
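The combinatorics of multidimensional partitions used above can be checked mechanically. The following short script (an illustration only; all names are our own) enumerates $p$-dimensional partitions of a vector $\mathbf{m}$, each listed once with its parts in non-increasing lexicographic order, and verifies that the two partitions of $\binom{4}{5}$ in the example are distinct yet induce the same coordinate partitions.

```python
from itertools import product

def vector_partitions(m, max_part=None):
    """Enumerate partitions of m in N^p: multisets of nonzero vectors
    summing to m, listed with parts in non-increasing lexicographic order."""
    p = len(m)
    if max_part is None:
        max_part = m
    if all(c == 0 for c in m):
        yield []
        return
    for part in product(*(range(c + 1) for c in m)):
        # skip the zero vector and parts exceeding the previous part (lex order)
        if part == (0,) * p or part > tuple(max_part):
            continue
        rest = tuple(a - b for a, b in zip(m, part))
        for tail in vector_partitions(rest, part):
            yield [part] + tail

def coordinate_partitions(pi):
    """The p one-dimensional partitions induced by a p-dimensional
    partition (zero parts in a coordinate are dropped)."""
    p = len(pi[0])
    return [sorted(x[i] for x in pi if x[i] > 0) for i in range(p)]

# The two partitions of (4, 5) from the example above.
pi1 = [(2, 1), (2, 1), (0, 3)]
pi2 = [(0, 1), (2, 1), (2, 3)]
```

For $p=1$ this recovers ordinary integer partitions, e.g. `vector_partitions((4,))` yields the five partitions of $4$.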
\begin{remark} Recall that when $p=1$, for any quasi-projective variety $X$, the variety $S^nX$ can also be obtained directly by taking the quotient of $X^n$ by the natural permutation action of the symmetric group $\mathfrak{S}_n$. For $p\geq 2$ and $\mathbf{n} = (n_1,\ldots,n_p)\in\mathbf{N}^{p}$, note that giving an element $\sum_v\mathbf{n}_vv$ of $S^{\mathbf{n}}X$ is equivalent to giving its $p$ components $$\left(\sum_{v}n_{i,v}v\right)_{1\leq i\leq p}\in S^{n_1}X\times \ldots\times S^{n_p}X.$$ Thus, we in fact have a piecewise isomorphism $$S^{\mathbf{n}}X\simeq S^{n_1}X\times \ldots\times S^{n_p}X.$$ \end{remark} \section{Motivic Schwartz-Bruhat functions and Poisson formula}\label{SBreview} We start with a review of Hrushovski and Kazhdan's motivic Poisson formula from \cite{HK}, following the exposition in sections 1.2 and 1.3 of \cite{CL}. Let $k$ be a perfect field. \subsection{Local Schwartz-Bruhat functions}\label{sect.localSB}\index{Schwartz-Bruhat function!local} Let $F = k((t))$ be the completion of a function field of a curve at a closed point, with uniformiser $t$, ring of integers~$\mathcal{O}$ and residue field $k$. In \cite{HK}, Hrushovski and Kazhdan considered local motivic exponential Schwartz-Bruhat functions on $F$: such functions are analogues of classical Schwartz-Bruhat functions on non-archimedean local fields, that is, locally constant and compactly supported functions. For each such function $\phi$, there exist integers $M\leq N$ such that $\phi$ is zero outside $t^M\mathcal{O}$, and invariant modulo $t^N\mathcal{O}$, so that $\phi$ can be seen as a function on the quotient $t^M\mathcal{O}/t^N\mathcal{O}$. 
The latter can be endowed with the structure of a $k$-variety, and more precisely of an affine space over $k$, through the following identification: \begin{equation}\label{affineidentification}\begin{array}{ccc}t^M\mathcal{O}/t^N\mathcal{O}&\longrightarrow& \mathbf{A}_{k}^{N-M}(k)\\ x_Mt^M + \ldots + x_{N-1}t^{N-1} + t^N\mathcal{O} & \mapsto & (x_M,\ldots,x_{N-1})\end{array}\end{equation} This affine space is denoted by $\A_k^{(M,N)}$. \index{Ak@$\A_k^{(M,N)}$} More generally, for any $n\geq 1$ we denote by $\mathbf{A}_k^{n(M,N)}$ \index{Ak@$\mathbf{A}_k^{n(M,N)}$} the affine space $(\A_k^{(M,N)})^n$, which is viewed as a motivic incarnation of $(t^M\mathcal{O}/t^N\mathcal{O})^n$. Thus, a Schwartz-Bruhat function of level $(M,N)$ \index{Schwartz-Bruhat function!of level $(M,N)$} on $F^n$ will by definition be an element of $\mathscr{S}(F^n;(M,N)) := \mathscr{E}xp\mathscr{M}_{\mathbf{A}_k^{n(M,N)}}$. \index{SFM@$\mathscr{S}(F^n;(M,N))$} An element $E$ of this ring can indeed be interpreted as a function $$\phi:\mathbf{A}_k^{n(M,N)}\longrightarrow\mathscr{E}xp\mathscr{M}_{k(x)}$$ by sending a point $x\in\mathbf{A}_k^{n(M,N)}$ to the class of the fibre $E_x$, where $k(x)$ is the residue field of $x$. As $M$ and $N$ vary, the rings $\mathscr{S}(F^n;(M,N))$ fit into a directed system the direct limit of which is the total ring $\mathscr{S}(F^n)$ \index{SF@$\mathscr{S}(F^n)$} of Schwartz-Bruhat functions. 
More precisely, let us point out that the natural injection $t^{M}\mathcal{O}/t^{N}\mathcal{O} \to t^{M-1}\mathcal{O}/t^{N}\mathcal{O}$ gives rise to the closed immersion \begin{equation}\label{immersion}\begin{array}{rcl}i:\mathbf{A}_k^{(M,N)}&\longrightarrow& \mathbf{A}_k^{(M-1,N)} \\ (x_M,\ldots,x_{N-1})&\mapsto& (0,x_M,\ldots,x_{N-1})\end{array}\end{equation} whereas the natural projection $t^{M}\mathcal{O}/t^{N+1}\mathcal{O}\to t^{M}\mathcal{O}/t^{N}\mathcal{O}$ induces a morphism \begin{equation}\label{projection}\begin{array}{rcl}p:\mathbf{A}_k^{(M,N+1)}&\longrightarrow& \mathbf{A}_k^{(M,N)} \\ (x_M,\ldots,x_{N})&\mapsto& (x_M,\ldots,x_{N-1})\end{array}\end{equation} which is a trivial fibration with fibre $\mathbf{A}^1$. They induce ring morphisms $i_!:\mathscr{S}(F^n;(M,N))\to \mathscr{S}(F^n;(M-1,N))$ (extension by zero) and $p^{*}:\mathscr{S}(F^n;(M,N))\to \mathscr{S}(F^n;(M,N+1))$. \subsection{Integration}\label{HKintegration} For any Schwartz-Bruhat function $\phi\in\mathscr{S}(F^n)$, choosing a pair $(M,N)$ such that $\phi\in\mathscr{S}(F^n;(M,N))$ one may define, using the exponential sum notation, $$\int_{F^n}\phi(x) \mathrm{d} x = \mathbf{L}^{-nN}\sum_{x\in\mathbf{A}_k^{n(M,N)}}\phi(x)\in\mathscr{E}xp\mathscr{M}_k.$$ This does not depend on the choice of $(M,N)$, and defines an $\mathscr{E}xp\mathscr{M}_k$-linear map $$\int_{F^n}:\mathscr{S}(F^n)\to \mathscr{E}xp\mathscr{M}_k.$$ \subsection{Fourier kernel} Fix a $k$-linear function $r:F\longrightarrow k$ such that there is an integer $a$ with $r_{|t^a\mathcal{O}} = 0$. The least such integer $a$ is called the \textit{conductor} of $r$ and denoted by $\nu$. Note that because of its linearity, $r$ is invariant modulo $t^{\nu}\mathcal{O}$, so that for any pair of integers $(M,N)$ such that $M\leq N$ and $\nu\leq N$, it induces a well-defined morphism $r^{(M,N)}:\mathbf{A}_k^{(M,N)}\longrightarrow \mathbf{A}_k^1$.
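In coordinates, the identification (\ref{affineidentification}) makes the maps (\ref{immersion}) and (\ref{projection}) completely explicit. The following sketch (with names of our own choosing, purely for illustration) stores a class in $t^M\mathcal{O}/t^N\mathcal{O}$ as a triple $(M,N,(x_M,\ldots,x_{N-1}))$:

```python
# A class in t^M O / t^N O is stored as (M, N, coeffs) with
# coeffs = (x_M, ..., x_{N-1}), following the identification with A^{N-M}.

def include(x):
    """The closed immersion A^{(M,N)} -> A^{(M-1,N)} induced by the
    injection t^M O/t^N O -> t^{M-1} O/t^N O: prepend a zero coefficient."""
    M, N, coeffs = x
    return (M - 1, N, (0,) + coeffs)

def project(x):
    """The morphism A^{(M,N+1)} -> A^{(M,N)} induced by truncation
    t^M O/t^{N+1} O -> t^M O/t^N O: drop the top coefficient."""
    M, N, coeffs = x
    return (M, N - 1, coeffs[:-1])

# Example: x = 3 + 5t as a class in O/t^2 O, i.e. at level (0, 2).
x = (0, 2, (3, 5))
```

The two maps commute on compatible levels, which is what makes the rings $\mathscr{S}(F^n;(M,N))$ into a directed system.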
On the other hand, for any two pairs of integers $(M,N)$ and $(M',N')$ satisfying $M\leq N$ and $M'\leq N'$, the product map $F\times F\longrightarrow F$ induces a well-defined map on classes $$t^{M}\mathcal{O}/t^{N}\mathcal{O} \times t^{M'}\mathcal{O}/t^{N'}\mathcal{O}\longrightarrow t^{M+M'}\mathcal{O}/t^{N''}\mathcal{O}$$ where $N''\leq \min(M'+N,M+N')$, that is, a well-defined morphism \begin{equation}\label{localprod}\mathbf{A}_k^{(M,N)}\times \mathbf{A}_k^{(M',N')}\longrightarrow \mathbf{A}_k^{(M+M',N'')}.\end{equation} Whenever $N''\geq \nu$, this map may be composed with $r^{(M+M',N'')}$ which yields a morphism $$\mathbf{A}_k^{(M,N)}\times \mathbf{A}_k^{(M',N')}\longrightarrow \mathbf{A}_k^1.$$ More generally, taking $n$-th cartesian powers and summing the corresponding maps, we get a morphism $$\mathbf{A}_k^{n(M,N)}\times \mathbf{A}_k^{n(M',N')}\longrightarrow \mathbf{A}_k^1.$$ Note that when $M' = \nu-N$ and $N' = \nu-M$, we have $\min(M'+N,M+N') = \nu$, so that we may take $N'' = \nu$ and the condition $N''\geq \nu$ is satisfied. The morphism \begin{equation}\label{localkernel}r:\mathbf{A}_k^{n(M,N)}\times \mathbf{A}_k^{n(\nu-N,\nu-M)}\to \mathbf{A}^1_k\end{equation} defined in this setting is called the \textit{Fourier kernel}. \index{Fourier kernel!local} \subsection{Local Fourier transform} \label{sect.localfouriertransform} The Fourier transform \index{Fourier transform!local} of a motivic Schwartz-Bruhat function $\phi\in\mathscr{S}(F^n;(M,N))$ is defined to be the element $\mathscr{F} \phi\in \mathscr{S}(F^n;(\nu-N,\nu-M))$ given by $$\mathscr{F} \phi = \mathbf{L}^{-Nn}\phi\cdot[\mathbf{A}_k^{n(M,N)}\times \mathbf{A}_k^{n(\nu-N,\nu-M)},r],$$ where $r$ is the morphism (\ref{localkernel}), the product is taken in $\mathscr{E}xp\mathscr{M}_{\mathbf{A}_k^{n(M,N)}}$, and the result is viewed in $\mathscr{E}xp\mathscr{M}_{\mathbf{A}_k^{n(\nu-N,\nu-M)}}$.
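For $n=1$ one may take for $r$ the residue functional $\sum_i x_it^i\mapsto x_{-1}$, whose conductor is $\nu = 0$. With truncated classes stored as coefficient vectors as in (\ref{affineidentification}), the product map and the kernel become explicit; the sketch below (our own names, an illustration rather than part of the construction) also checks that at level $N''=\nu$ the kernel value does not depend on the chosen lifts.

```python
def multiply(x, y, Npp):
    """Product map (t^M O/t^N O) x (t^M' O/t^N' O) -> t^{M+M'} O/t^{N''} O.
    Only well defined when N'' <= min(M' + N, M + N'): coefficients of t^e
    with e >= N'' are discarded, so lifts differing above t^N (resp. t^N')
    cannot affect the result."""
    M1, N1, a = x
    M2, N2, b = y
    assert Npp <= min(M2 + N1, M1 + N2), "level N'' too fine: not well defined"
    M = M1 + M2
    c = [0] * (Npp - M)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            e = (M1 + i) + (M2 + j)  # exponent of t in this product term
            if e < Npp:
                c[e - M] += ai * bj
    return (M, Npp, tuple(c))

def r(x):
    """Residue functional: the coefficient of t^{-1} (conductor nu = 0)."""
    M, N, coeffs = x
    return coeffs[-1 - M] if M <= -1 < N else 0

# x at level (M, N) = (0, 2) and y at the dual level (nu-N, nu-M) = (-2, 0):
x = (0, 2, (3, 5))             # 3 + 5t
y = (-2, 0, (2, 7))            # 2t^{-2} + 7t^{-1}
kernel = r(multiply(x, y, 0))  # the Fourier kernel value r(xy)
```

Replacing $x$ by two different lifts at the finer level $(0,3)$ yields the same product class and the same kernel value, as forced by the constraint on $N''$.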
For every $y\in \mathbf{A}_k^{n(\nu-N,\nu-M)}$, using the notation from section~\ref{exponentialsumnotation} of chapter~\ref{grothrings}, as well as the definition of the integral in section \ref{HKintegration}, we have $$\mathscr{F} \phi(y) = \int_{F^n}\phi(x)\psi(r(xy))\mathrm{d} x.$$ \subsection{Global Schwartz-Bruhat functions}\label{CLglobal} One can extend the above definitions to finite products of fields. Consider a finite family $(F_v)_{v\in S}$ of such fields $F_v = k_v((t_v))$, with local parameters $t_v$ and residue fields $k_v$ (which are assumed to be finite extensions of $k$), and an integer $n\geq 1$. For any family of pairs of integers $(M_v,N_v)_{v\in S}$, with $M_v\leq N_v$, the space of Schwartz-Bruhat functions on $\prod_{v\in S}F_v^n$ of levels $(M_v,N_v)_{v\in S}$ is defined to be $$\mathscr{S}\left(\prod_{v\in S}F_v^n; (M_v,N_v)_{v\in S}\right):= \mathscr{E}xp\mathscr{M}_{\prod_{v\in S}\mathrm{Res}_{k_v/k}\mathbf{A}_{k_v}^{n(M_v,N_v)}},$$ where $\mathrm{Res}_{k_v/k}$ denotes the functor of Weil restriction of scalars. In the case where $k$ is algebraically closed, we have $$\mathscr{S}\left(\prod_{v\in S}F_v^n; (M_v,N_v)_{v\in S}\right) = \mathscr{E}xp\mathscr{M}_{\prod_{v\in S}\mathbf{A}_{k}^{n(M_v,N_v)}}.$$ The ring $\mathscr{S}\left(\prod_{v\in S}F_v^{n}\right)$ is defined as a direct limit of these rings, with the appropriate compatibilities. The notions of integral, Fourier kernel and Fourier transform defined above extend easily to such functions (see \cite{CL}, 1.2.10). \index{Fourier kernel!global}\index{Fourier transform!global} We are going to use this in the following setting: let $k$ be a perfect field, $C$ a smooth projective curve over $k$, and $F = k(C)$ its function field. Denote by $\mathbb{A}_F$ the ring of adeles of the field~$F$.
The rings $\mathscr{S}(\prod_{v\in S}F_v^n)$, for finite sets $S$ of closed points of $C$, form a directed system, and their direct limit is the ring $\mathscr{S}(\mathbb{A}^n_F)$ of global motivic Schwartz-Bruhat functions on $\mathbb{A}^n_F$. \index{Schwartz-Bruhat function!global} \subsection{Summation over rational points} \label{CLsummation}\index{summation over rational points} For details on the contents of this paragraph, see \cite{CL}, 1.3.5. Let $\phi$ be a global Schwartz-Bruhat function on $\mathbb{A}^n_F$, represented by a class in the ring $$\mathscr{E}xp\mathscr{M}_{\prod_{v\in S}\mathrm{Res}_{k_v/k}\mathbf{A}_{k_v}^{n(M_v,N_v)}}$$ for some finite set $S$ of closed points of $C$ and some family $(M_v,N_v)_{v\in S}$ of pairs of integers such that $M_v\leq N_v$ for all $v\in S$. Consider the divisor $D = -\sum_{v\in S}M_vv$ on $C$. For every $v\in C$, the natural embedding of the field $F = k(C)$ into its completion $F_v$ maps the Riemann-Roch space $$L(D) = \{0\}\cup \{f\in k(C)^{\times},\div(f)\geq \sum_{v}M_vv\} $$ into $t^{M_v}\mathcal{O}_v$. This gives rise to a morphism of algebraic varieties $$\theta: L(D)^n\longrightarrow\left(\prod_{v}\mathrm{Res}_{k_v/k}\mathbf{A}_{k_v}^{(M_v,N_v)}\right)^n.$$ The sum over rational points of $\phi\in \mathscr{E}xp\mathscr{M}_{\left(\prod_{v}\mathrm{Res}_{k_v/k}\mathbf{A}_{k_v}^{(M_v,N_v)}\right)^n}$, denoted by $\sum_{x\in F^n}\phi(x),$ \index{sum@$\sum_{x\in k(C)^n}$} is then defined to be the image in $\mathscr{E}xp\mathscr{M}_k$ of the pull-back $\theta^*\phi\in\mathscr{E}xp\mathscr{M}_{L(D)^n}.$ It does not depend on choices. 
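The simplest instance of this construction (a toy example, under the additional assumption that $C$ is geometrically connected) is the summation of the constant function of non-negative levels.

```latex
% Take n = 1 and phi = 1 of levels (M_v, N_v) = (0, N_v) for v in S: the
% motivic analogue of the indicator function of the integral adeles. Then
% D = 0, and the Riemann-Roch space is the space of constants:
L(0) = H^0(C,\mathcal{O}_C) = k \simeq \mathbf{A}^1_k.
% The pull-back theta^* phi is the constant function 1 on A^1_k, so
\sum_{x\in F}\phi(x) = [\mathbf{A}^1_k] = \mathbf{L},
% the motivic count of F \cap \prod_v O_v = k.
```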
\begin{remark} This definition is motivated by the fact that, when $k$ is a finite field, for a Schwartz-Bruhat function $\phi:(\mathbf{A}_F)^n\to\mathbf{C}$ which is supported inside $\left(\prod_vt^{M_v}\mathcal{O}_v\right)^n$, we have $\phi(x)=0$ for all $x\not\in F^n\cap\left(\prod_vt^{M_v}\mathcal{O}_v\right)^n = L(D)^n$, so that we have the equality $$\sum_{x\in F^n}\phi(x) = \sum_{x\in L(D)^n}\phi(x).$$ \end{remark} \subsection{Motivic Poisson formula}\label{CLpoissonformula} We fix a non-zero meromorphic differential form $\omega\in\Omega^{1}_{F/k}$. For every $v\in C$, we choose the linear map $r_v:F_v\to k$ defined by $r_v:x\mapsto \mathrm{res}_v(x\omega)$ and we compute Fourier transforms with respect to those. Theorem 1.3.10 in \cite{CL} states that for $\phi\in\mathscr{S}(\mathbb{A}^n_F)$, we have $\mathscr{F}\phi\in\mathscr{S}(\mathbb{A}^n_F)$ and $$\sum_{x\in F^n}\phi(x) = \mathbf{L}^{(1-g)n}\sum_{y\in F^n}\mathscr{F}\phi(y).$$ \index{Poisson formula!motivic} \section{Families of Schwartz-Bruhat functions} \subsection{Parametrising domains of definition} \label{sect.domainsofdef} Let $k$ be an algebraically closed field of characteristic zero, and $C$ a smooth projective connected curve over~$k$. \begin{definition} Let $X$ be a variety over $k$. A function $\alpha:X\to \mathbf{Z}$ is said to be constructible if for every $n\in \mathbf{Z}$, $\alpha^{-1}(n)$ is a constructible subset of $X$. \index{constructible function} \end{definition} \begin{remark} When $X = C$, $\alpha:C\to \mathbf{Z}$ is constructible if and only if it is constant on some dense open subset of $C$. If it is zero on some dense open subset of $C$, we say it is \textit{almost zero}. \index{almost zero function} \end{remark} The value of a constructible function $\alpha$ at a point $v\in C$ will be denoted $\alpha_v$. \begin{definition}\label{affinespacedef} Let $M,N:C\longrightarrow\mathbf{Z}$ be constructible functions such that $M\leq N$.
Let $U\subset C$ be a dense open set over which they are constant, equal respectively to $M_0\in \mathbf{Z}$ and $N_0\in \mathbf{Z}$. We will denote by $\mathbf{A}_C^{(M,N)}$ \index{Ac@$\mathbf{A}_C^{(M,N)}$} the variety over $C$ isomorphic to $U\times \mathbf{A}_k^{(M_0,N_0)}$ over $U$, and with fibre above $u\not\in U$ given by~$\mathbf{A}_{k}^{(M_u,N_u)}$. Furthermore, we will denote by $\left(\mathbf{A}_C^{(M,N)}\right)^{n}$,\ or $\mathbf{A}_C^{n(M,N)}$, \index{Acn@$\mathbf{A}_C^{n(M,N)}$} the variety over $C$ defined by $$\mathbf{A}_C^{(M,N)}\times_C\ldots \times _C\mathbf{A}_C^{(M,N)},$$ where the product contains $n$ factors. \end{definition} Recall that we denote by $\mathcal{I}_0$ the additive monoid $\mathbf{Z}_{\geq 0}^p$ and $\mathcal{I} = \mathcal{I}_0\setminus\{0\}$. Fix two almost zero functions $\alpha,\beta:C\longrightarrow\mathbf{Z}$ such that $\alpha\leq 0\leq \beta$. Denote by $U$ a dense open subset of $C$ over which $\alpha$ and $\beta$ are zero. Fix also two families of non-negative integers~$M = (M_{\i})_{\i\in \mathcal{I}_0}$ and $N = (N_{\i})_{\i\in \mathcal{I}_0}$, with $M_0 = 0$ and $N_0 = 0$. We have a family of varieties $\left(\mathbf{A}_C^{n(\alpha-M_{\i},\beta + N_{\i})}\right)_{\i\in \mathcal{I}_0}$ over $C$, giving rise to symmetric products \begin{equation}\label{domdefinitionformula}\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N) := S^{\mathbf{m}}\left(\left(\mathbf{A}_C^{n(\alpha-M_{\i},\beta + N_{\i})}\right)_{\i\in \mathcal{I}_0}\right)\end{equation} for all $\mathbf{m}\in \mathcal{I}_0$. \index{Am@$\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)$!definition} By the definition of symmetric products, all these objects are varieties endowed with natural morphisms to $S^{\mathbf{m}}C$, which we denote $\varpi_{\mathbf{m}}:\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)\longrightarrow S^{\mathbf{m}}C$ for every $\mathbf{m}\in \mathcal{I}_0$. 
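To fix ideas, here is a minimal instance of definition \ref{affinespacedef} (an illustrative example, with an arbitrarily chosen closed point $x_1$).

```latex
% Take M = 0 and let N be the constructible function equal to 1 away from
% a single closed point x_1, and to 2 at x_1. With U = C \ {x_1}, the
% C-variety A_C^{(0,N)} restricts to the trivial bundle U x A^1 over U,
% with fibre O_v/t_vO_v ~ A^1 at each v in U, while the fibre at x_1 is
\mathbf{A}_k^{(0,2)} \simeq \mathbf{A}^2_k,
% representing O_{x_1}/t_{x_1}^2 O_{x_1}: the level of the corresponding
% family of local domains of definition jumps at x_1.
```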
\begin{remark} For clarity, let us point out that, where in definition \ref{affinespacedef} the objects denoted $M,N$ were \textit{constructible functions}, from now on, except when explicitly stated (that is, except in section \ref{localkernelfamilies}), they will denote integers. The possible variation above a finite number of places will be taken care of by the almost zero functions $\alpha$ and $\beta$. \end{remark} \begin{remark}\label{remark.m0expansion} Denote by $\Sigma = \{x_1,\ldots,x_s\}$ the complement $C\setminus U$, which is a finite union of closed points. By corollary \ref{multzeta} from section \ref{sect.cutapplications} of chapter \ref{eulerproducts}, $\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)$ is the disjoint union of locally closed subsets isomorphic to products \begin{equation}\label{m0expansion}S^{\mathbf{m}_0}\left(\left(\mathbf{A}_{U}^{n(-M_{\i},N_{\i})}\right)_{\i\in\mathcal{I}}\right)\times \prod_{j=1}^sS^{\mathbf{m}_j}\left(\left(\mathbf{A}_{\{x_j\}}^{n(\alpha_{x_j} - M_{\i},\beta_{x_j} + N_{\i})}\right)_{\i\in\mathcal{I}_0}\right)\end{equation} for all $\mathbf{m}_0,\ldots,\mathbf{m}_s\in\mathcal{I}_0$ such that $\mathbf{m}_0 + \ldots + \mathbf{m}_s = \mathbf{m}$. By example \ref{generalexn} of chapter \ref{eulerproducts}, the variety~(\ref{m0expansion}) is isomorphic to $$S^{\mathbf{m}_0}\left(\left(\mathbf{A}_{U}^{n(-M_{\i},N_{\i})}\right)_{\i\in\mathcal{I}}\right)\times \prod_{j=1}^s \mathbf{A}_{\{x_j\}}^{n(\alpha_{x_j} - M_{\mathbf{m}_j},\beta_{x_j} + N_{\mathbf{m}_j})}.$$ \end{remark} \begin{remark} Though these definitions depend on the choice of $U$, the ring $$\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}$$ which we will consider later will not depend on it. \end{remark} \subsection{The fibres of the domains of definition}\label{sect.fibres} \index{Am@$\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)$!fibre} Let $D\in S^{\mathbf{m}}C$, with residue field $\kappa(D)$.
We want to describe the fibre $\varpi_{\mathbf{m}}^{-1}(D)$ above the point $D$. We know that $S^{\mathbf{m}}C$ is the disjoint union of locally closed subsets isomorphic to $$S^{\mathbf{m}_0}U\times S^{\mathbf{m}-\mathbf{m}_0}\Sigma$$ for all $\mathbf{m}_0\in\mathcal{I}_0$ such that $\mathbf{m}_0\leq \mathbf{m}$. Let $\mathbf{m}_0\leq \mathbf{m}$ be such that $D$ belongs to the subset corresponding to $\mathbf{m}_0$. Since the field $k$ is algebraically closed, $S^{\mathbf{m}-\mathbf{m}_0}\Sigma$ is a disjoint union of a finite number of closed points, and therefore the variety $S^{\mathbf{m}_0}U\times S^{\mathbf{m}-\mathbf{m}_0}\Sigma$ has a finite number of connected components each corresponding to a point of $S^{\mathbf{m}-\mathbf{m}_0}\Sigma$. Thus, the schematic point $D$ of $S^{\mathbf{m}}C$ is of the form $(D_U,D_{\Sigma})$, where $D_U\in S^{\mathbf{m}_0} U$ and $D_{\Sigma}\in S^{\mathbf{m}-\mathbf{m}_0}\Sigma$. Let $\pi = (m_{\i})_{\i\in\mathcal{I}}$ be the partition of $\mathbf{m}_0$ such that $D_U\in S^{\pi}U$. On the other hand, $D_{\Sigma}$ is an effective zero-cycle with coefficients in $\mathcal{I}_0$ and with support contained in $\Sigma$, so it may be written in the form $$D_{\Sigma} = \mathbf{m}_1 x_1 + \ldots + \mathbf{m}_s x_s\in S^{\mathbf{m}-\mathbf{m}_0} \Sigma$$ for $\mathbf{m}_1,\dots,\mathbf{m}_s\in\mathcal{I}_0$ such that $\mathbf{m}_0 + \ldots + \mathbf{m}_s = \mathbf{m}$. 
Using remark \ref{remark.m0expansion} as well as proposition~\ref{affine} from section~\ref{affinespaces} of chapter~\ref{eulerproducts}, the fibre above $D$ is of the form \begin{equation}\label{fibre}\prod_{\i\in\mathcal{I}} \mathbf{A}_{\kappa(D)}^{m_{\i}n(N_{\i} + M_{\i}) }\times_{\kappa(D)} \prod_{j=1}^s \mathbf{A}_{\kappa(D)}^{n(\alpha_{x_j} - M_{\mathbf{m}_j},\beta_{x_j} + N_{\mathbf{m}_j})}.\end{equation} More precisely, we have the diagram $$\xymatrix{\left(\prod_{\i\in\mathcal{I}} U^{m_{\i}}\right)_* \ar[d] & \ar[l]\left(\prod_{\i\in \mathcal{I}}\mathbf{A}^{m_{\i}n(-M_{\i},N_{\i})}_U\right)_{*,U}\ar[d]\\ S^{\pi}U & \ar[l] S^{\pi}((\mathbf{A}^{m_{\i}n(-M_{\i},N_{\i})}_U)_{\i\in\mathcal{I}})}$$ where the vertical maps are the quotient morphisms, the upper horizontal line is a trivial vector bundle, and the lower line is a vector bundle. Let $D'_U$ be a point of $\left(\prod_{\i\in\mathcal{I}} U^{m_{\i}}\right)_*$ lifting $D_U\in S^{\pi}U$. Taking fibres above $D_U$ and $D'_U$ and denoting by $K$ the residue field of~$D'_U$ (so that $K$ is a finite extension of $\kappa(D)$), the diagram becomes $$\xymatrix{ \mathrm{Spec}\, K \ar[d] & \ar[l] \prod_{\i\in \mathcal{I}}\mathbf{A}_K^{m_{\i}n(-M_{\i},N_{\i})}\ar[d] \\ \mathrm{Spec}\, \kappa(D) & \ar[l]\prod_{\i\in\mathcal{I}} \mathbf{A}_{\kappa(D)}^{m_{\i}n(N_{\i} + M_{\i}) }}$$ so that we have a linear $K$-isomorphism $$\prod_{\i\in \mathcal{I}}\mathbf{A}_K^{m_{\i}n(-M_{\i},N_{\i})} \simeq \prod_{\i\in \mathcal{I}}\mathbf{A}_{\kappa(D)}^{m_{\i}n(N_{\i} + M_{\i}) }\otimes_{\kappa(D)}K.$$ In other words, the fibre above $D_U$ is a \textit{twisted form} of the variety $$\prod_{\i\in \mathcal{I}}\mathbf{A}_{\kappa(D)}^{m_{\i}n(-M_{\i},N_{\i})}$$ which splits above the finite extension $K$ of $\kappa(D)$. 
We may conclude that the fibre $\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)_{D}$ above $D$ may be seen as the domain of definition of a Schwartz-Bruhat function, up to extension of scalars to some finite extension of $\kappa(D)$. \begin{remark} We write $\mathbf{A}_{\kappa(D)}^{m_{\i}n(N_{\i} + M_{\i})}$ instead of $\mathbf{A}_{\kappa(D)}^{m_{\i}n(M_{\i},N_{\i})}$ to signify that through the quotient morphism, the chosen identification of the form (\ref{affineidentification}) is twisted. Therefore, when looking at functions on such a fibre $\varpi_{\mathbf{m}}^{-1}(D)$ later, for example in section \ref{twistedsummation}, we will pull them back via the quotient morphism before performing operations on them which via (\ref{affineidentification}) can be understood as analogues of operations from classical Fourier theory. \end{remark} Let us make the particular case where $D\in S^{\mathbf{m}}C(k)$ more explicit. Recall that $k$ is algebraically closed. Thus, $D$ may be seen as an effective zero-cycle $\sum \i_v v$ for points $v\in C(k)$ and $\i_v\in \mathcal{I}_0$, and (\ref{fibre}) may be written in the form $$\prod_{v\in C} \mathbf{A}_k^{n (\alpha_v - M_{\i_v},\beta_v + N_{\i_v})}.$$ \subsection{Uniform choice of uniformisers}\label{sect.uniformisers}\index{uniformisers} In section \ref{sect.domainsofdef}, we have defined families of domains of definition of Schwartz-Bruhat functions: the product $\prod_{v}\mathbf{A}_k^{(M_v,N_v)}$ has to be understood as representing $\prod_{v}t_v^{M_v}\mathcal{O}_v/t_v^{N_v}\mathcal{O}_v.$ However, this identification depends on the choice of the uniformisers $t_v$ at each place $v$, and therefore so will some of the operations we are going to perform in what follows. This choice has to be made as uniformly as possible, so that these operations remain algebraic. We explain in this section how this can be done. \begin{lemma} Fix a non-constant element $t\in k(C)$.
Then there is a dense open set $U\subset C$ such that for all $v\in U$, the function $t_v = t-t(v)$ is a local parameter at $v$. \end{lemma} \begin{proof} Denote by $U_0$ an open dense set of $C$ on which $t$ is regular. We therefore get a holomorphic differential $\mathrm{d}t$ on $U_0$. It is non-vanishing when restricted to some open dense subset $U$ of $U_0$. At any $v\in U$, the function $t_v$ is an element of the maximal ideal $\mathfrak{m}_v$, and its differential $\mathrm{d}(t_v) = \mathrm{d}t$ is non-zero, so it is a local parameter. \end{proof} From now on, we fix such an element $t\in k(C)$. The uniformiser at any place $v$ in the open set $U$ furnished by the lemma will be given by $t_v = t-t(v)$. For $v\in C\backslash U$, we fix some arbitrary uniformiser~$t_v$. \begin{lemma}\label{coef} Let $f\in k(C)$. For any $v\in C$, write the $t_v$-adic expansion of $f$ as $$\sum_{p\in \mathbf{Z}}a_p(f,v)t_v^p\in F_v.$$ There is an open dense subset $U'$ of $U$ such that for any integer $p$, the map $$\begin{array}{rcl}a_p(f,\cdot): U'&\longrightarrow& k\\ v&\mapsto& a_p(f,v)\end{array}$$ is a regular function. \end{lemma} \begin{proof} Denote by $U'$ an open dense subset of $U$ over which $f$ is regular, so that the $a_{p}$ with $p<0$ are identically zero. For any $v\in U'$, we have by definition $f(v) = a_0(f,v)$, and therefore $a_0(f,\cdot) = f_{|U'}$ is a regular map. Now, the differential $\mathrm{d}f$ is a holomorphic differential on $U'$, so there is a regular function $f_1:U'\longrightarrow k$ such that $\mathrm{d}f = f_1\mathrm{d}t.$ On the other hand, differentiating the $t_v$-adic expansion of $f$, we get, since $\mathrm{d}t = \mathrm{d}t_v$, $$\mathrm{d}f = (a_1(f,v) + 2a_2(f,v) t_v + 3a_3(f,v)t_v^2+ \ldots)\mathrm{d}t.$$ Thus, since the differential $\mathrm{d}t$ does not vanish on $U'$, we have $a_1(f,\cdot) = f_1$, which is regular. To prove regularity of $a_2(f,\cdot)$, replace $f$ by $f_1$ and proceed in the same way.
By induction, we get regularity of all $a_p$'s. \end{proof} \subsection{Families of Schwartz-Bruhat functions} \begin{definition}\label{def.constr.families} Let $\mathbf{m}\in \mathcal{I}_0$. Let $\alpha,\beta:C\to \mathbf{Z}$ be almost zero functions such that $\alpha\leq 0 \leq \beta$, and let $M = (M_{\i})_{\i\in\mathcal{I}_0}$, $N= (N_{\i})_{\i\in\mathcal{I}_0}$ be two families of non-negative integers such that $M_0 = N_0 = 0$. The elements of $\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}$ are called constructible families of Schwartz-Bruhat functions of level $\mathbf{m}$. \index{Schwartz-Bruhat function!family} \end{definition} Let $\Phi\in\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}$ be such a family of functions, and let $D$ be a schematic point of $S^{\mathbf{m}}C$. The fibre of $\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)$ above $D$ has been computed in (\ref{fibre}). Restricting~$\Phi$ to it, we obtain, up to extension of scalars to a finite extension of the residue field of $D$, a Schwartz-Bruhat function $\Phi_D$ in the sense of Hrushovski and Kazhdan. Thus, $\Phi$ gives rise to a family of ``twisted'' Schwartz-Bruhat functions $(\Phi_D)_{D\in S^{\mathbf{m}}C}$. In the particular case when $D\in S^{\mathbf{m}}C(k)$, denoting by $|D|$ the support of the effective zero-cycle~$D$, in the notation of section \ref{CLglobal}, we have $$\Phi_D\in\mathscr{S}\left(\prod_{v\in |D|\cup \Sigma}k(C)_v^n,(\alpha_v-M_{\i_v},\beta_v+N_{\i_v})_{v\in |D|\cup \Sigma}\right),$$ where $k(C)_v$ denotes the completion of the function field $k(C)$ at the place $v$.
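The most basic example of such a family (a tautological one, but perhaps useful to keep in mind) is the class of the whole space.

```latex
% Take Phi = [A_m(alpha,beta,M,N)], the class of the identity morphism
% with trivial exponential, i.e. the constant family 1. For a rational
% point D = \sum_v i_v v in S^m C(k), the member Phi_D is the constant
% function 1 on its domain of definition, that is, the indicator function
\Phi_D = \mathbf{1}_{\prod_{v\in |D|\cup\Sigma}\left(t_v^{\alpha_v-M_{\i_v}}\mathcal{O}_v\right)^n},
% a family of indicator functions of lattices whose levels vary
% constructibly with the zero-cycle D.
```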
\subsection{Uniformly smooth or uniformly compactly supported families}\label{sect.uniformfamilies} In a similar manner to (\ref{immersion}) and (\ref{projection}), we may define constructible morphisms $$ p: \mathbf{A}_C^{(\alpha - M_{\i},\beta + N_{\i})}\to \mathbf{A}_C^{(\alpha-M_{\i},\beta)}$$ and $$i: \mathbf{A}_C^{(\alpha,\beta + N_{\i})}\to \mathbf{A}_C^{(\alpha - M_{\i},\beta + N_{\i})}$$ for every $\i\in \mathcal{I}_0$. Above $v\in C$, the first one is a projection on the first $\beta_v-\alpha_v + M_{\i}$ coordinates, and the second one is $$(x_{\alpha_v},\ldots , x_{\beta_v + N_{\i} -1})\mapsto (\!\!\!\underbrace{0,\ldots,0}_{M_{\i} \ \text{coordinates}}\!\!\!, x_{\alpha_v},\ldots , x_{\beta_v + N_{\i} -1}).$$ Taking symmetric products, we get, for every $\mathbf{m}\in\mathcal{I}_0$, morphisms $$p_{\mathbf{m}}: S^{\mathbf{m}} ((\mathbf{A}_C^{(\alpha - M_{\i},\beta + N_{\i})})_{\i\in\mathcal{I}_0}) \to S^{\mathbf{m}} ((\mathbf{A}_C^{(\alpha - M_{\i},\beta)})_{\i\in\mathcal{I}_0}),$$ $$i_{\mathbf{m}}: S^{\mathbf{m}} ((\mathbf{A}_C^{(\alpha,\beta + N_{\i})})_{\i\in\mathcal{I}_0}) \to S^{\mathbf{m}} ((\mathbf{A}_C^{(\alpha - M_{\i},\beta + N_{\i})})_{\i\in\mathcal{I}_0}),$$ that is, $$p_{\mathbf{m}}:\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N) \to \mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,0) $$ and $$i_{\mathbf{m}}:\mathscr{A}_{\mathbf{m}}(\alpha,\beta,0,N) \to \mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N) .$$ Those induce injective ring morphisms $$p_{\mathbf{m}}^*:\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,0)} \to \mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}$$ and $$i_{\mathbf{m},!}:\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,0,N)} \to \mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}.$$ \begin{definition} Let $\Phi\in\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}$ be a constructible family of Schwartz-Bruhat functions of level $\mathbf{m}$. 
It is said to be \begin{itemize}\item \textit{uniformly smooth} if it belongs to the image of the morphism $p^*_{\mathbf{m}}$. \item \textit{uniformly compactly supported} if it belongs to the image of the morphism $i_{\mathbf{m},!}$. \end{itemize}\index{Schwartz-Bruhat function!family!uniformly smooth}\index{Schwartz-Bruhat function!family!uniformly compactly supported} \end{definition} A word of explanation on this terminology, which is inherited from the classical $p$-adic setting: let $\Phi$ be a constructible family of functions. If $\Phi$ is uniformly smooth, then every~$\Phi_D$ for $D = \sum_{v}\i_vv\in S^{\mathbf{m}}C(k)$ may be seen as an element of $\mathscr{E}xp\mathscr{M}_{\prod_{v}\mathbf{A}_k^{n(\alpha_v-M_{\i_v},\beta_v)}}$, that is, as a function on $\prod_{v}t^{\alpha_v-M_{\i_v}}\mathcal{O}_v$ invariant modulo $\prod_{v}t^{\beta_v}\mathcal{O}_v$, this invariance domain being independent of $D$. In the same manner, if~$\Phi$ is uniformly compactly supported, all~$\Phi_D$ are supported inside $\prod_{v}t^{\alpha_v}\mathcal{O}_v$ independently of $D$. \section{Fourier transformation in families}\label{sect.fouriertransform} \subsection{Local construction of the Fourier kernel in families}\label{localkernelfamilies}\index{Fourier kernel!in families!local} Let $\omega\in \Omega^{1}_{F/k}$ be a non-zero meromorphic differential form. For every closed point $v\in C$ we write $\nu_v=-\mathrm{ord}_v\omega$, so that $\div\, \omega = -\sum_{v\in C}\nu_vv$. Then for every $v$ we get a $k$-linear map $r_v:F_v\longrightarrow k$ given by $$r_v(x) = \mathrm{res}_v(x\omega).$$ It is non-zero, and its conductor, that is, the least integer $a$ such that $r_{v|t^a\mathcal{O}_v}$ is zero, is equal to $\nu_v$. Let $M\leq N$ be constructible functions as in definition \ref{affinespacedef}, with the additional assumption that for every~$v$, we have $\nu_v\leq N_v$.
Using lemma \ref{coef}, we see that the map~$r$ gives rise to a piecewise morphism $r^{(M,N)}:\mathbf{A}_C^{(M,N)}\longrightarrow\mathbf{A}_k^1$, sending an element $(v,x)$ to $r_v(x)$. Fix two additional constructible functions $M',N':C\longrightarrow\mathbf{Z}$ such that $M'\leq N'$. For every $v$, the product map $F_v\times F_v\longrightarrow F_v$ defines a morphism \begin{equation}\label{product}\mathbf{A}_{\kappa(v)}^{(M_v,N_v)}\times_{\kappa(v)} \mathbf{A}_{\kappa(v)}^{(M'_v,N'_v)}\longrightarrow \mathbf{A}_{\kappa(v)}^{(M_v+M_v',N_v'')}\end{equation} where $N'' = \min\{M+N',M'+N\}$ (see (\ref{localprod})). More precisely, there is a morphism of constructible sets over~$C$ $$\mathbf{A}_C^{(M,N)}\times_C \mathbf{A}_C^{(M',N')}\longrightarrow \mathbf{A}_C^{(M+M',N'')}$$ such that for every $v\in C$ the induced morphism on the fibre above $v$ is (\ref{product}). When $N''\geq \nu$, for example when $M'=\nu-N$ and $N' =\nu-M$, this can be composed with $r^{(M+M',N'')}$ to get a constructible morphism \begin{equation}\label{kernelfamily}\mathbf{A}_C^{(M,N)}\times_C \mathbf{A}_C^{(M',N')}\longrightarrow\mathbf{A}^1.\end{equation} The restriction to the fibre above $v$ is given by the Fourier kernel $(x,y)\mapsto r_v(xy)$ from~(\ref{localkernel}). Thus, the map (\ref{kernelfamily}) may be interpreted as a parametrisation of all local Fourier kernel maps induced by the differential form $\omega$. \subsection{Global construction of the Fourier kernel}\label{sect.fourkernel} \index{Fourier kernel!in families!global} Fix two almost zero functions $\alpha\leq 0\leq \beta$, and two non-negative families of integers $M = (M_{\i})_{\i\in\mathcal{I}_0}$ and $N = (N_{\i})_{\i\in\mathcal{I}_0}$ such that $M_0 =N_0 = 0$. 
According to the discussion in the previous paragraph, for any $\i$ there exists a constructible Fourier kernel morphism $$\mathbf{A}_C^{n(\alpha-M_{\i} ,\beta + N_{\i})}\times_C\mathbf{A}_C^{n(\nu-\beta-N_{\i},\nu- \alpha + M_{\i})}\to \mathbf{A}^1.$$ Taking symmetric products, we get morphisms \begin{equation}\label{eq.fourierkernel}r_{\mathbf{m}}:\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)\times_{S^{\mathbf{m}}C}\mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha,N,M)\to \mathbf{A}^1\end{equation} for any $\mathbf{m}\in\mathcal{I}_0$, using the following straightforward lemma: \begin{lemma}\label{symmetricfibreproduct} Let $\mathscr{X} = (X_{\i})_{\i}$ and $\mathscr{Y} = (Y_{\i})_{\i}$ be families of constructible sets over $X$, and assume that for every~$\i$ we are given a constructible morphism $f_{\i}:X_{\i}\times_XY_{\i}\to \mathbf{A}^1$. Denote by $\mathscr{X}\times_X\mathscr{Y}$ the family $(X_{\i}\times_X Y_{\i})_{\i}$. Then for every $\pi\in\mathbf{N}^{(\mathcal{I})}$ there is a natural piecewise isomorphism $$S^{\pi}(\mathscr{X}\times_X\mathscr{Y})\simeq S^{\pi}(\mathscr{X})\times_{S^{\pi}(X)}S^{\pi}(\mathscr{Y}) $$ given by $$\sum_{\i}\i((x_{\i,1},y_{\i,1}),\ldots,(x_{\i,n_{\i}},y_{\i,n_{\i}}))\mapsto \left(\sum_{\i}\i(x_{\i,1} + \ldots + x_{\i,n_{\i}}),\sum_{\i}\i(y_{\i,1} + \ldots + y_{\i,n_{\i}})\right), $$ through which the morphism $f^{(\pi)}:S^{\pi}(\mathscr{X}\times_X\mathscr{Y})\to \mathbf{A}^1$ becomes $$\left(\sum_{\i}\i(x_{\i,1} + \ldots + x_{\i,n_{\i}}),\sum_{\i}\i(y_{\i,1} + \ldots + y_{\i,n_{\i}})\right)\mapsto \sum_{\i}(f_{\i}(x_{\i,1},y_{\i,1}) + \ldots + f_{\i}(x_{\i,n_{\i}},y_{\i,n_{\i}})).$$ \end{lemma} Using the description of the fibre above $D$ from \ref{sect.fibres}, we see that for every $D\in S^{\mathbf{m}}C$, the morphism $$r_{D}: \mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)_D\times_{\kappa(D)}\mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha,N,M)_D\to \mathbf{A}^1$$ induced by $r_{\mathbf{m}}$ on the fibre above $D$ is a twisted form of the Fourier
kernel of Hrushovski and Kazhdan. The twisting being linear, $r_D$ is a $\kappa(D)$-bilinear map. \subsection{Fourier transform}\index{Fourier transform!in families} To define the Fourier transform of a family of Schwartz-Bruhat functions $$\Phi\in\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)},$$ we start by defining the factor we need to normalise it by, so that it does not depend on the choice of $\beta$ and $N$. For this, we start with the family of $C$-varieties $(\mathbf{A}_C^{n(\beta + N_{\i})})_{\i\in\mathcal{I}_0}$. Taking symmetric products, we get a constructible morphism \begin{equation}\label{normalisation}S^{\mathbf{m}}((\mathbf{A}_C^{n(\beta + N_{\i})})_{\i\in\mathcal{I}_0})\to S^{\mathbf{m}}C.\end{equation} Using the notation of remark \ref{remark.m0expansion} and section \ref{sect.fibres}, with $U$ an open dense subset of $C$ above which $\beta$ is zero, $\Sigma = C\setminus U$, $\mathbf{m}_0\in\mathcal{I}_0$ such that $\mathbf{m}_0 \leq \mathbf{m}$, $D_{\Sigma} = \sum_{v\in\Sigma}\i_v [v]\in S^{\mathbf{m}-\mathbf{m}_0}\Sigma$ and $\pi = (m_{\i})_{\i\in \mathcal{I}}$ a partition of $\mathbf{m}_0$, we get, by proposition~\ref{affine} from section~\ref{affinespaces} of chapter~\ref{eulerproducts}, that the restriction of (\ref{normalisation}) above the locally closed subset $S^{\pi}U\times \{D_{\Sigma}\}$ of $S^{\mathbf{m}}C$ is a vector bundle of rank $\sum_{\i}nm_{\i}N_{\i} + \sum_{v\in\Sigma}n(\beta_{v} + N_{\i_v})$.
Thus, the class of (\ref{normalisation}) in $\mathscr{E}xp\mathscr{M}_{S^{\mathbf{m}}C}$ is $$\sum_{\mathbf{m}_0 \leq \mathbf{m}}\ \ \sum_{\substack{D_{\Sigma} \in S^{\mathbf{m}-\mathbf{m}_0}\Sigma\\ D_{\Sigma} =\sum_{v\in\Sigma}\i_v [v]}}\ \ \ \sum_{\substack{\pi = (m_{\i})_{\i\in \mathcal{I}}\\ \sum_{\i\in\mathcal{I}}m_{\i}\i = \mathbf{m}_0}}\mathbf{L}^{\sum_{\i}nm_{\i}N_{\i} + \sum_{v\in\Sigma}n(\beta_{v} + N_{\i_v})}[S^{\pi}U\times \{D_{\Sigma}\}\to S^{\mathbf{m}}C],$$ and it makes sense to consider $$[S^{\mathbf{m}}((\mathbf{A}_C^{n(\beta + N_{\i})})_{\i\in\mathcal{I}_0})]^{-1}\in\mathscr{E}xp\mathscr{M}_{S^{\mathbf{m}}C}$$ defined by the formula $$\sum_{\mathbf{m}_0 \leq \mathbf{m}}\ \ \sum_{\substack{D_{\Sigma} \in S^{\mathbf{m}-\mathbf{m}_0}\Sigma\\ D_{\Sigma} =\sum_{v\in\Sigma}\i_v [v]}}\ \ \ \sum_{\substack{\pi = (m_{\i})_{\i\in \mathcal{I}}\\ \sum_{\i\in\mathcal{I}}m_{\i}\i = \mathbf{m}_0}}\mathbf{L}^{-\sum_{\i}nm_{\i}N_{\i} - \sum_{v\in\Sigma}n(\beta_{v} + N_{\i_v})}[S^{\pi}U\times \{D_{\Sigma}\}\to S^{\mathbf{m}}C],$$ that is, the same one as above, but with the powers of $\mathbf{L}$ inverted. \begin{remark} This element is indeed the inverse of $[S^{\mathbf{m}}((\mathbf{A}_C^{n(\beta + N_{\i})})_{\i\in\mathcal{I}_0})]$ in the ring $\mathscr{E}xp\mathscr{M}_{S^{\mathbf{m}}C}$, so our notation is consistent. 
\end{remark} We denote by $R_{\mathbf{m}}$ the element of the Grothendieck ring $$\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)\times_{S^{\mathbf{m}}C}\mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha,N,M)}$$ given by $$R_{\mathbf{m}}:= [\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)\times_{S^{\mathbf{m}}C}\mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha,N,M), r_{\mathbf{m}}].$$ Moreover, we denote by $\mathrm{pr}_1,\mathrm{pr}_2$ the projections $$\mathrm{pr}_1:\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)\times_{S^{\mathbf{m}}C}\mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha,N,M)\to \mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)$$ and $$\mathrm{pr}_2:\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)\times_{S^{\mathbf{m}}C}\mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha,N,M)\to \mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha,N,M).$$ Let $\Phi\in\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}$ be a constructible family of Schwartz-Bruhat functions. The family $\mathscr{F}\Phi\in\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu - \alpha,N,M)}$ is defined by the formula $$\mathscr{F} \Phi:= [(S^{\mathbf{m}}((\mathbf{A}_C^{n(\beta + N_{\i})})_{\i\in\mathcal{I}_0}))]^{-1}(\mathrm{pr}_2)_!((\mathrm{pr}_1)^*\Phi \cdot R_{\mathbf{m}})\in\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha,N,M)}$$ where $\cdot$ is the product in the Grothendieck ring $\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)\times_{S^{\mathbf{m}}C}\mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha,N,M)}$. 
Explicitly, if $\Phi = [V,f]\in\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}$, then $\mathscr{F} \Phi$ is given by $$ [V\times_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)\times_{S^{\mathbf{m}}C}\mathscr{A}_{\mathbf{m}}(\nu- \beta,\nu-\alpha,N,M), f\circ\mathrm{pr}_1 + r_{\mathbf{m}}\circ(\mathrm{pr}_2,\mathrm{pr}_3)]$$ multiplied by the normalisation factor $[(S^{\mathbf{m}}((\mathbf{A}_C^{n(\beta + N_{\i})})_{\i\in\mathcal{I}_0}))]^{-1}$. \begin{remark}\label{duality} Taking $N=0$ (resp. $M=0$) one can see that the Fourier transform of a family of uniformly smooth (resp. uniformly compactly supported) functions is a family of uniformly compactly supported (resp. uniformly smooth) functions. \end{remark} \subsection{Compatibility between symmetric products and Fourier transformation}\label{sect.symprodfourtransform} This section deals with the special case where $\Phi\in \mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}$ is given by a symmetric product. More precisely, suppose we are given, for any $\i\in\mathcal{I}_0$, an element $\phi_{\i}\in \mathscr{E}xp\mathscr{M}_{\mathbf{A}_C^{n(\alpha-M_{\i},\beta + N_{\i})}}$, and that $\Phi$ is the symmetric product $S^{\mathbf{m}}((\phi_{\i})_{\i \in \mathcal{I}_0})$.
There is a natural notion of Fourier transformation for elements of the ring $\mathscr{E}xp\mathscr{M}_{\mathbf{A}_C^{n(\alpha-M_{\i},\beta + N_{\i})}}$, defined in the following manner: denote by $r_{\i}$ the Fourier kernel $$\mathbf{A}_C^{n(\alpha-M_{\i},\beta + N_{\i})}\times_C \mathbf{A}_C^{n(\nu-\beta-N_{\i},\nu - \alpha + M_{\i})}\to \mathbf{A}^1$$ defined in (\ref{kernelfamily}) and by $R_{\i}$ the element of the Grothendieck ring $$\mathscr{E}xp\mathscr{M}_{\mathbf{A}_C^{n(\alpha-M_{\i},\beta + N_{\i})}\times_C \mathbf{A}_C^{n(\nu-\beta-N_{\i},\nu - \alpha + M_{\i})}}$$ given by $$R_{\i} = [\mathbf{A}_C^{n(\alpha-M_{\i},\beta + N_{\i})}\times_C \mathbf{A}_C^{n(\nu-\beta-N_{\i},\nu - \alpha + M_{\i})},r_{\i}].$$ Moreover, denoting again by $U$ the open subset of $C$ on which $\alpha$ and $\beta$ are zero and by $\Sigma$ its complement, we define the element $\left(\mathbf{A}_C^{n(\beta + N_{\i})}\right)^{-1}$ of $\mathscr{E}xp\mathscr{M}_C$ by $$\mathbf{L}^{-nN_{\i}}[U\to C]+ \sum_{v\in \Sigma}\mathbf{L}^{-n(\beta_v + N_{\i})}[\{v\}\to C].$$ We denote by $\mathrm{pr}_1$, $\mathrm{pr}_2$ the projections $$\mathrm{pr}_1: \mathbf{A}_C^{n(\alpha-M_{\i},\beta + N_{\i})}\times_C \mathbf{A}_C^{n(\nu-\beta-N_{\i},\nu - \alpha + M_{\i})}\to \mathbf{A}_C^{n(\alpha-M_{\i},\beta + N_{\i})}$$ and $$\mathrm{pr}_2: \mathbf{A}_C^{n(\alpha-M_{\i},\beta + N_{\i})}\times_C \mathbf{A}_C^{n(\nu-\beta-N_{\i},\nu - \alpha + M_{\i})}\to \mathbf{A}_C^{n(\nu-\beta-N_{\i},\nu - \alpha + M_{\i})}.$$ We then define $$\mathscr{F} \phi_{\i} = \left(\mathbf{A}_C^{n(\beta + N_{\i})}\right)^{-1}(\mathrm{pr}_2)_!((\mathrm{pr}_1)^*\phi_{\i}\cdot R_{\i} )\in \mathscr{E}xp\mathscr{M}_{\mathbf{A}_C^{n(\nu-\beta-N_{\i},\nu - \alpha + M_{\i})}}$$ where $\cdot$ is the product in the ring $\mathscr{E}xp\mathscr{M}_{\mathbf{A}_C^{n(\alpha-M_{\i},\beta + N_{\i})}\times_C \mathbf{A}_C^{n(\nu-\beta-N_{\i},\nu - \alpha + M_{\i})}}.$ \begin{remark}\label{familyrestrictiontov} For every $v\in C$, denote
by $\phi_{\i,v}\in \mathscr{E}xp\mathscr{M}_{\mathbf{A}_{\kappa(v)}^{n(\alpha_v-M_{\i},\beta_v + N_{\i})}}$ the local Schwartz-Bruhat function obtained from $\phi_{\i}$ by restriction to the fibre $\mathbf{A}_{\kappa(v)}^{n(\alpha_v-M_{\i},\beta_v + N_{\i})}$ of $\mathbf{A}_C^{n(\alpha-M_{\i},\beta + N_{\i})}$ above $v$. By definition of the Fourier transform of a local Schwartz-Bruhat function (see section \ref{sect.localfouriertransform}) as well as of the Fourier kernel $r_{\i}$ (see section \ref{localkernelfamilies}), we see that $$\mathscr{F}(\phi_{\i,v}) = (\mathscr{F}\phi_{\i})_v$$ in $\mathscr{E}xp\mathscr{M}_{\mathbf{A}_{\kappa(v)}^{n(\nu_v -\beta_v- N_{\i},\nu_v - \alpha_v + M_{\i})}},$ where the right-hand side is the restriction of $\mathscr{F}\phi_{\i}$ to the fibre $\mathbf{A}_{\kappa(v)}^{n(\nu_v -\beta_v- N_{\i},\nu_v - \alpha_v + M_{\i})}$. Thus, the operation we just defined performs Fourier transformation on families of local Schwartz-Bruhat functions parametrised by $C$, with level constant except at a finite number of closed points. \end{remark} \begin{prop} \label{prop.symproductfourtransform} We have $$\mathscr{F} S^{\mathbf{m}}((\phi_{\i})_{\i\in\mathcal{I}_0}) = S^{\mathbf{m}}((\mathscr{F} \phi_{\i})_{\i\in\mathcal{I}_0}).$$ \end{prop} \begin{proof} By definition of $r_{\mathbf{m}}$ and of $\Phi$, with the notations from the previous sections we have $$(\mathrm{pr}_2)_!((\mathrm{pr}_1)^*S^{\mathbf{m}}((\phi_{\i})_{\i\in\mathcal{I}_0})\cdot R_{\mathbf{m}}) = S^{\mathbf{m}}\left( ((\mathrm{pr}_2)_!((\mathrm{pr}_1)^*\phi_{\i}\cdot R_{\i} ))_{\i\in\mathcal{I}_0}\right),$$ since the projections used here are obtained from those of the previous section by taking symmetric products. The result then follows from lemma \ref{symmetricfibreproduct}, after comparing the normalisation factors. \end{proof} \subsection{Inversion formula} This section will not be used in what follows, but we include it for the sake of completeness.
For every $\i\in \mathcal{I}_0$, define a constructible morphism $\mathbf{A}_C^{(\alpha-M_{\i},\beta + N_{\i})}\to \mathbf{A}_C^{(\alpha-M_{\i},\beta + N_{\i})} $ of $C$-varieties by $$(v,x_{\alpha_{v}-M_{\i}},\ldots,x_{\beta_v + N_{\i}-1})\to (v,-x_{\alpha_{v}-M_{\i}},\ldots,-x_{\beta_v + N_{\i}-1}).$$ Taking symmetric products, it induces a constructible morphism of $S^{\mathbf{m}}C$-varieties $$\mathrm{inv}_{\mathbf{m}}:\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)\to \mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N).$$ The Fourier transform satisfies the following inversion formula: \begin{prop} For every $\Phi$ in $\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}$, we have $$\mathscr{F}\mathscr{F} \Phi = \mathbf{L}^{n(2g-2)}\Phi\circ \mathrm{inv}_{\mathbf{m}}$$ in $\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}$. \index{inversion formula!families} \end{prop} \begin{proof} Note that looking at the fibres of the constructions in the previous paragraphs above every rational point $D\in S^{\mathbf{m}}C(k)$, we recover the theory from \cite{CL}, described in section \ref{SBreview}. For a general schematic point $D\in S^{\mathbf{m}}C$, we recover a twisted version of this theory. Theorem 1.2.9 in \cite{CL} implies that we have $$\mathscr{F}\mathscr{F} \Phi_D(x) = \mathbf{L}^{n(2g-2)}\Phi_D(-x)$$ for all rational points $D\in S^{\mathbf{m}}C(k)$. The heart of the proof of theorem 1.2.9 is the fact that the domains of definition of our functions are affine spaces, that the Fourier kernel is bilinear, and that the relation $[\mathbf{A}^1,\mathrm{id}] = 0$ holds. Therefore, this proof generalises easily to the twisted setting, and the Fourier inversion formula is valid for $\Phi_D$ for any schematic point $D\in S^{\mathbf{m}}C$. We may conclude using lemma \ref{function.equality}.
\end{proof} \section{Summation over $k(C)^n$} \label{summation.ratpoints} \subsection{Summation for twisted Schwartz-Bruhat functions}\label{twistedsummation} We defined constructible sets $\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)$ lying above the symmetric power $S^{\mathbf{m}}C$ in section \ref{sect.domainsofdef}, and gave an explicit description of the fibre above a point $D \in S^{\mathbf{m}}C$ in section \ref{sect.fibres}, which shows that it may be seen as a twisted version of the domain of definition of a Schwartz-Bruhat function in the sense of Hrushovski and Kazhdan's theory. A function $\Phi$ on $\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)$ may therefore be seen as a family $(\Phi_D)_{D\in S^{\mathbf{m}}C}$, each $\Phi_D$ being a function on $\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)_D$. In this section, we explain how Hrushovski and Kazhdan's operation of summation over rational points from section \ref{CLsummation} may be extended to the functions $\Phi_D$. In the notation of section \ref{sect.fibres}, we have $D = (D_U,D_{\Sigma})\in S^{\mathbf{m}_0}U\times S^{\mathbf{m}-\mathbf{m}_0}\Sigma$ where $U$ is a dense open subset over which $\alpha$ and $\beta$ are zero, and $\Sigma = C\setminus U$, a finite union of closed points, is its complement. Moreover, for some partition $\pi = (m_{\i})_{\i\in\mathcal{I}}$ of $\mathbf{m}_0$, we have $D_U\in S^{\pi}U$. Via the quotient morphism $$\left(\prod_{\i\in \mathcal{I}} U^{m_{\i}}\right)_*\to S^{\pi} U,$$ the schematic point $D_U\in S^{\pi}U$ pulls back to some $K$-point $D'_U$ of $\left(\prod_{\i\in \mathcal{I}}U^{m_{\i}}\right)_*$, where~$K$ is a finite extension of the residue field $\kappa(D)$ of $D$. We choose $D'_U$ so that $K$ is of minimal degree. We denote by $v_{\i,j}$ the projection of $D'_U$ on the $j$-th copy of $U$ in the factor~$U^{m_{\i}}$. 
This gives us a collection $\{v_{\i,j}\}_{\substack{\i\in \mathcal{I}\\ j\in\{1,\ldots,m_{\i}\}}}$ of~$K$-points of the open subset $U$ of the curve~$C$, all distinct because they come as projections from a point in the complement of the diagonal. \begin{remark}\label{rationalpointfibre} A different choice of $D'_U$ amounts to a permutation of the points $v_{\i,j}$ via the $G = \prod_{\i\in\mathcal{I}}\mathfrak{S}_{m_{\i}}$-action on $\left(\prod_{\i\in \mathcal{I}}U^{m_{\i}}\right)_*$. More precisely, the quotient morphism $\left(\prod_{\i\in \mathcal{I}} U^{m_{\i}}\right)_*\to S^{\pi} U$ being étale, the fibre product $\left(\prod_{i\in \mathcal{I}}U^{m_{\i}}\right)_* \times_{S^{\pi}U}\mathrm{Spec}\, \kappa(D)$ is the spectrum of an étale algebra $\mathcal{E}$ over $\kappa(D)$, endowed with a $G$-action such that $\mathcal{E}^{G} = \kappa(D)$. The étale algebra $\mathcal{E}$ is isomorphic to some power of $K$, and the point~$D'_U$ is one of the irreducible components of $\mathrm{Spec}\, \mathcal{E}$. If we denote by $H$ the subgroup of $G$ stabilising $D'_U$, then the invariant field $K^H$ is $\kappa(D)$ (see e.g. proposition \ref{subquotient} in chapter~\ref{eulerproducts}). Consequently, in the commutative diagram $$\xymatrix{\mathrm{Spec}\, K\ar[d] & \ar[d]^{q_{D_U}}\ar[l]\prod_{\i\in \mathcal{I}}\mathbf{A}_K^{m_{\i}n(-M_{\i},N_{\i})}\\ \mathrm{Spec}\, \kappa(D)& \ar[l]\prod_{\i\in \mathcal{I}}\mathbf{A}_{\kappa(D)}^{m_{\i}n(M_{\i}+N_{\i})}} $$ the vertical morphisms, induced by the quotient morphisms on the fibres above $D_U$ and~$D'_U$, are exactly the quotients by the action of the finite group $H$. 
The diagram gives an equivariant $K$-linear isomorphism $$\prod_{\i\in \mathcal{I}}\mathbf{A}_K^{m_{\i}n(-M_{\i},N_{\i})}\simeq \prod_{\i\in \mathcal{I}}\mathbf{A}_{\kappa(D)}^{m_{\i}n(M_{\i}+N_{\i})}\times_{\kappa(D)} K.$$ This induces a $\kappa(D)$-linear isomorphism between the $\kappa(D)$-points of the left-hand side (that is, the points invariant under the $H$-action) and the affine space $\prod_{\i\in \mathcal{I}}\mathbf{A}_{\kappa(D)}^{m_{\i}n(M_{\i}+N_{\i})}$. In other words, the morphism $q_{D_U}$ induces a $\kappa(D)$-linear isomorphism on $\kappa(D)$-points. \end{remark} We define an effective zero-cycle $E_D$ on the curve $C_{\kappa(D)}$, that is, the curve $C$ seen as a curve over the field $\kappa(D)$, by $$E_D:= \sum_{\i\in \mathcal{I}}M_{\i}(v_{\i,1} + \ldots + v_{\i,m_{\i}}) - \sum_{v\in\Sigma}(\alpha_v - M_{\i_v})v$$ where the $\i_v$ are given by $D_{\Sigma} = \sum_{v\in\Sigma}\i_v[v]$. The zero-cycle $E_D$ has the same field of definition as $D$, that is, $\kappa(D)$, and an associated Riemann-Roch space $$L_{\kappa(D)}(E_D) = \{0\}\cup \{f\in \kappa(D)(C), \div f \geq -E_D\},$$ which is a finite-dimensional vector space over $\kappa(D)$. Then we may define a morphism $$\theta_D: L_{\kappa(D)}(E_D)^n \to \mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)_D = \prod_{\i\in \mathcal{I}}\mathbf{A}_{\kappa(D)}^{nm_{\i}(N_{\i} + M_{\i})}\times_{\kappa(D)} \prod_{v\in \Sigma}\mathbf{A}_{\kappa(D)}^{n(\alpha_v - M_{\i_v},\beta_v + N_{\i_v})}$$ in the following manner. We start by defining an intermediate morphism $$\theta'_D: L_{\kappa(D)}(E_D)^n \to \prod_{\i\in\mathcal{I}}\mathbf{A}_K^{nm_{\i}(-M_{\i},N_{\i})}\times_{K}\prod_{v\in \Sigma}\mathbf{A}_{K}^{n(\alpha_v - M_{\i_v},\beta_v + N_{\i_v})}.$$ For simplicity, assume $n=1$.
Then, for any $f\in L_{\kappa(D)}(E_D)$: \begin{enumerate}\item \textbf{Image of $f$ in the component $\prod_{v\in \Sigma}\mathbf{A}_{K}^{(\alpha_v - M_{\i_v},\beta_v + N_{\i_v})}$:} It is given, for each $v\in \Sigma$, by the coefficients of the $v$-adic expansion of $f$ in the range $\alpha_v -M_{\i_v},\ldots ,\beta_v + N_{\i_v} -1$. \item \textbf{Image of $f$ in the component $\prod_{\i\in \mathcal{I}}\mathbf{A}_{K}^{m_{\i}(-M_{\i}, N_{\i})}$:} We may consider $f$ as an element of the function field $K(C)$ of the curve $C$ seen as a curve over $K$. We therefore send $f$ to the point of $$\prod_{\i\in \mathcal{I}}\prod_{j=1}^{m_{\i}}\mathbf{A}_K^{(-M_{\i},N_{\i})}$$ with $(\i,j)$-component given by the coefficients of orders $-M_{\i},\ldots, N_{\i}-1$ of the $v_{\i,j}$-adic expansion of $f$. Note that $f$ defines a $\kappa(D)$-point of the above affine space. \end{enumerate} Then we compose $\theta'_D$ with the quotient morphism $$q_{D}: \prod_{\i\in\mathcal{I}}\mathbf{A}_K^{m_{\i}n(-M_{\i},N_{\i})}\times_{K}\prod_{v\in \Sigma}\mathbf{A}_{K}^{n(\alpha_v - M_{\i_v},\beta_v + N_{\i_v})}\to \mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)_D$$ to get $\theta_D = q_D\circ \theta'_D$. \begin{remark} Because of the composition with the quotient morphism, a different choice of $D'_U$ gives the same $\theta_D$.
\end{remark} \begin{definition} For $\Phi_D\in \mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)_D}$, we define its \textit{summation over rational points}, denoted $\sum_{x\in\kappa(D)(C)^n} \Phi_D(x)$, to be the class of $\theta_D^*\Phi_D$ in $\mathscr{E}xp\mathscr{M}_{\kappa(D)}.$ \index{summation over rational points!twisted} \end{definition} \begin{lemma}[Poisson formula] \label{twistedpoisson} For $\Phi_D\in \mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)_D}$, we have $$\sum_{x\in \kappa(D)(C)^n}\Phi_D(x) = \mathbf{L}^{(1-g)n} \sum_{x\in \kappa(D)(C)^n}\mathscr{F} \Phi_D(x).$$ \index{Poisson formula!twisted} \end{lemma} \begin{proof} By the same kind of reduction as in the proof of theorem 1.3.10 in \cite{CL}, it suffices to prove the formula in the case where $\Phi_D$ is given by a class $[\{a\}\to \mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)_D]$ where $a$ is a rational point of $\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)_D$. We may also assume $n=1$. We are going to use the notations from section \ref{twistedsummation} throughout the proof, denoting with a tilde the objects pertaining to the Fourier side: $\tilde{q}_D$ for the quotient morphism $$\prod_{\i\in\mathcal{I}}\mathbf{A}_K^{m_{\i}(-N_{\i},M_{\i})} \times_{K}\prod_{v\in\Sigma}\mathbf{A}_{K}^{(\nu_v -\beta_v - N_{\i_v}, \nu_v -\alpha_v + M_{\i_v})}\to\mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha, N,M)_D,$$ $\tilde{E}_D$ for the divisor $$\tilde{E}_D = \sum_{\i\in\mathcal{I}}N_{\i}(v_{\i,1} + \ldots + v_{\i,m_{\i}}) - \sum_{v\in\Sigma}(\nu_v - \beta_v -N_{\i_{v}})v,$$ $\tilde{\theta}_D, \tilde{\theta'}_D$ for the summation morphisms, etc. Define the zero-cycle $$\Lambda_D = \sum_{\i,j}N_{\i}v_{\i,j} + \sum_{v\in \Sigma}(\beta_v + N_{\i_v})v = \tilde{E}_D - \div\, \omega$$ on $C_{\kappa(D)}$. It has the same field of definition as $D$, namely $\kappa(D)$.
The proof is essentially the same as the proof of theorem 1.3.10 in \cite{CL}: it will boil down to the theorem of Riemann-Roch and Serre duality for the divisor $\Lambda_D$ on the curve $C_{\kappa(D)}$ over the field $\kappa(D)$. We refer to \cite{CL}, 1.3.7 for reminders on these results. Denote by $F_D$ the function field $\kappa(D)(C)$ of the curve $C_{\kappa(D)}$. For any divisor $E$ on $C_{\kappa(D)}$ define $$\Omega(E) = \{\omega\in \Omega_{F_D/\kappa(D)}, \div\, \omega \geq E\},$$ $$\mathbb{A}_{F_D}(E) = \{a\in \mathbb{A}_{F_D}, \div\, a \geq -E\}.$$ Recall that Serre's duality theorem says that for any divisor $E$ on $C_{\kappa(D)}$, the morphism $$\Omega_{F_D/\kappa(D)} \to \mathrm{Hom}(\mathbb{A}_{F_D},\kappa(D))$$ given by $$\omega\mapsto \left( (x_s)_s\mapsto \sum_{s}\mathrm{res}_s(x_s\omega)\right)$$ induces an isomorphism $$\Omega(E) \to \mathrm{Hom}(\mathbb{A}_{F_D} / (\mathbb{A}_{F_D}(E) + F_D), \kappa(D))$$ identifying $\Omega(E)$ with the orthogonal subspace of $\mathbb{A}_{F_D}(E) + F_D$ in $\mathrm{Hom}(\mathbb{A}_{F_D},\kappa(D))$, which itself is isomorphic to the dual of the cohomology group $H^1(\mathscr{L}(E))$. By remark \ref{rationalpointfibre}, a rational point $a\in \mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)_D$ comes from a $\kappa(D)$-point $b$ of the fibre $$\prod_{\i\in\mathcal{I}}\mathbf{A}_K^{m_{\i}(-M_{\i},N_{\i})} \times_K \prod_{v\in\Sigma}\mathbf{A}_K^{(\alpha_v - M_{\i_v},\beta_v + N_{\i_v})} $$ via the quotient morphism $$q_D:\prod_{\i\in\mathcal{I}}\mathbf{A}_K^{m_{\i}(-M_{\i},N_{\i})} \times_K \prod_{v\in\Sigma}\mathbf{A}_K^{(\alpha_v - M_{\i_v},\beta_v + N_{\i_v})}\to \mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)_D.$$ Thus, it corresponds to the characteristic function of a polydisc inside the adeles $\mathbb{A}_{K(C)}$ with $\kappa(D)$-rational centre $b$ and radius described by the divisor $\Lambda_D$.
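For convenience, let us also record the form in which the Riemann-Roch theorem will be used at the end of the proof: for any divisor $E$ on $C_{\kappa(D)}$ we have $\dim L_{\kappa(D)}(E) - \dim \Omega(E) = \deg E + 1 - g$, and the map $f\mapsto f\omega$ identifies $\Omega(E)$ with $L_{\kappa(D)}(\div\,\omega - E)$. Applied to $E = \div\,\omega + \Lambda_D$, and using $\deg \div\,\omega = 2g-2$, this gives $$\dim L_{\kappa(D)}(\div\,\omega + \Lambda_D) = \dim L_{\kappa(D)}(-\Lambda_D) + \deg \Lambda_D + g - 1,$$ whence $$\mathbf{L}^{1-g}\cdot \mathbf{L}^{-\deg \Lambda_D + \dim L_{\kappa(D)}(\div\,\omega + \Lambda_D)} = \mathbf{L}^{\dim L_{\kappa(D)}(-\Lambda_D)}.$$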
The Fourier transform $\mathscr{F} \Phi_D$ is defined on the $\kappa(D)$-variety $\mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha,N,M)_D$, which is the image of the quotient map $$\tilde{q}_D:\prod_{\i\in\mathcal{I}}\mathbf{A}_K^{m_{\i}(-N_{\i},M_{\i})} \times_K \prod_{v\in\Sigma}\mathbf{A}_K^{(\nu_v -\beta_v - N_{\i_v},\nu_v -\alpha_v + M_{\i_v})}\to \mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha,N,M)_D.$$ Let us compute the right-hand side of the Poisson formula. For this, recall that the Fourier kernel $$r_D:\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)\times_{\kappa(D)}\mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha,N,M)\to \mathbf{A}^1$$ is a $\kappa(D)$-bilinear morphism satisfying $$r_D(q_D(u),\tilde{q}_D(v)) = r(u,v)$$ for any $\kappa(D)$-rational points $u,v$ of the fibres described above, where $r$ is the Fourier kernel associated to the differential form $\omega$ on the adeles $\mathbb{A}_{K(C)}$. By definition, for any $y\in \mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha, N,M)_D(\kappa(D))$, we have \begin{eqnarray*}\mathscr{F} \Phi_D(y)& = &\mathbf{L}^{-\deg \Lambda_D}[\{a\}\times_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)_D}\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)_D\times_{\kappa(D)}\{y\}, r_D(\mathrm{pr}_2,\mathrm{pr}_3)]\\ & = & \mathbf{L}^{-\deg \Lambda_D}[\mathrm{Spec}\, \kappa(D),r_D(a,y)]\\ & = & \mathbf{L}^{-\deg \Lambda_D}\psi(r_D(a,y)).\end{eqnarray*} For any $f\in L_{\kappa(D)}(\div\,\omega + \Lambda_D)$, we have $$r_D(a,\tilde{\theta}_D(f)) = r(b,\tilde{\theta'}_D(f)) = \sum_{s} \mathrm{res}_s(b_sf\omega)$$ where the sum goes over the points of the curve $C_K$. Note that the map $f\mapsto f\omega$ identifies $L_{\kappa(D)}(\div\,\omega + \Lambda_D)$ with $\Omega(-\Lambda_D)$.
By invariance of the residue, $f\mapsto r_D(a,\tilde{\theta}_D(f)) $ is identically zero on $L_{\kappa(D)}(\div\,\omega + \Lambda_D)$ if and only if $b\in \Omega(-\Lambda_D)^{\perp}.$ By lemma 1.1.11 in \cite{CL}, we have \begin{eqnarray*}\sum_{x\in F_D}\mathscr{F}\Phi_D(x) &= &\mathbf{L}^{-\deg \Lambda_D} \sum_{f\in L(\div\,\omega + \Lambda_D)} \psi(r_D(a,\tilde{\theta}_D(f)))\\ & = & \left\{\begin{array}{cc} \mathbf{L}^{-\deg\Lambda_D + \dim L_{\kappa(D)}(\div\,\omega + \Lambda_D)}& \text{if\ $b\in \Omega(-\Lambda_D)^{\perp} $}\\ 0 & \text{otherwise}.\end{array}\right. \end{eqnarray*} Thus, by the Riemann-Roch theorem applied to $\Lambda_D$ on the curve $C_{\kappa(D)}$ we have: $$\mathbf{L}^{1-g}\sum_{x\in F_D}\mathscr{F} \Phi_D(x) =\left\{\begin{array}{cc} \mathbf{L}^{\dim L(-\Lambda_D)}& \text{if\ $b\in \Omega(-\Lambda_D)^{\perp} $}\\ 0 & \text{otherwise}\end{array}\right.$$ We now compute the left-hand side of the Poisson formula. If $b = (b_s)_s \in \Omega(-\Lambda_D)^{\perp} = \mathbb{A}_{F_D}(-\Lambda_D) + F_D$, there exists $c\in F_D$ such that $\div(c-b)\geq \Lambda_D$. In other words, there exists an element $c$ of $F_D$ in the polydisc of centre $b$ with radii controlled by the divisor~$\Lambda_D$. Then the intersection of $F_D$ with this polydisc is exactly $c + L(-\Lambda_D)$, so that $$\sum_{x\in F_D} \Phi_D(x) = \sum_{x\in F_D} \Phi_D(x-c) = \sum_{x\in L(-\Lambda_D)}1 = \mathbf{L}^{\dim L(-\Lambda_D)}.$$ If on the other hand $b\not\in \Omega(-\Lambda_D)^{\perp}$, then the intersection of $F_D$ with the polydisc is empty, and the sum on the left-hand side of the Poisson formula is zero. This concludes the proof. \end{proof} \subsection{Uniform summation for uniformly compactly supported functions} In what follows, we are going to show that summation over $\kappa(D)(C)$ may be done \textit{uniformly} in~$D$ if $\Phi$ is a family of uniformly compactly supported functions, that is, if all integers in the family $M$ are zero.
Let us recall the key point of the construction in this particular case: the zero-cycle $E_D$ from section \ref{twistedsummation} is equal to $D_{\alpha} = -\sum_{v}\alpha_v v$, and we are interested in the space $L_{\kappa(D)}(D_{\alpha})$. Since $D_{\alpha}$ is defined over $k$, by flat base change (see e.g. \cite{Milne}, theorem 4.2 $(a)$) we have a $\kappa(D)$-linear canonical isomorphism $$L_{\kappa(D)}(D_{\alpha})\simeq L(D_{\alpha})\otimes_k\kappa(D)$$ where $$L(D_{\alpha}) = \{0\}\cup \{f\in k(C), \div f \geq -D_{\alpha}\}.$$ There is a morphism of algebraic varieties over $\kappa(D)$: $$\theta_D: L(D_{\alpha})^n\otimes_k \kappa(D)\longrightarrow \mathscr{A}_{\mathbf{m}}(\alpha,\beta,0,N)_D,$$ constructed in the previous section. The aim of the following proposition is to show that, as $D$ varies in $S^{\mathbf{m}}C$, the maps $\theta_D$ can be combined into a morphism performing summation over rational points uniformly in $D$. Of course, our uniform choice of uniformisers will be crucial here. \begin{prop} \label{prop.uniformsummationuc} There exists a constructible morphism $$\theta_{\mathbf{m}}: L(D_{\alpha})^n\times S^{\mathbf{m}}C \longrightarrow \mathscr{A}_{\mathbf{m}}(\alpha,\beta,0,N)$$ over $S^{\mathbf{m}}C$ such that for any $D\in S^{\mathbf{m}}C$, the induced morphism $$L(D_{\alpha})^n\otimes_k \kappa(D)\longrightarrow \mathscr{A}_{\mathbf{m}}(\alpha,\beta,0,N)_D$$ on fibres above $D$ is exactly the morphism $\theta_D$ constructed above. \end{prop} \begin{proof} Lemma \ref{coef} shows that for any $f\in L(D_{\alpha})$ and any $\i\in \mathcal{I}_0$, there is a constructible morphism $$\phi_{f,\i}: C\longrightarrow \mathbf{A}_C^{(\alpha,\beta + N_{\i})}$$ over $C$, sending $v\in C$ to $(v, a_{\alpha_{v}}(f,v),\ldots,a_{\beta_v + N_{\i}-1}(f,v))$ (where we use the notation from lemma \ref{coef}).
Taking products over $C$, for any $f = (f_1,\ldots,f_n)\in L(D_{\alpha})^n,$ and any $\i\in \mathcal{I}_0$, there is a constructible morphism $$\phi_{f,\i}: C\longrightarrow \mathbf{A}_C^{n(\alpha,\beta + N_{\i})}.$$ Taking symmetric products, for any $\mathbf{m}\in \mathcal{I}_0$ we get morphisms $$S^{\mathbf{m}}\phi_f: S^{\mathbf{m}}C\longrightarrow \mathscr{A}_{\mathbf{m}}(\alpha,\beta,0,N).$$ Finally, $L(D_{\alpha})^n$ being a finite-dimensional $k$-vector space, combining these morphisms for a basis of the latter, we get constructible morphisms $$\theta_{\mathbf{m}}: L(D_{\alpha})^n\times S^{\mathbf{m}}C \longrightarrow \mathscr{A}_{\mathbf{m}}(\alpha,\beta,0,N).$$ By construction, the restriction to the fibre over a schematic point $D\in S^{\mathbf{m}}C$ corresponds indeed to $\theta_D$. \end{proof} We see $L(D_{\alpha})^n\times S^{\mathbf{m}}C$ as a variety over $S^{\mathbf{m}}C$ via the second projection, which yields a group morphism $$\mathscr{E}xp\mathscr{M}_{L(D_{\alpha})^n\times S^{\mathbf{m}}C}\longrightarrow \mathscr{E}xp\mathscr{M}_{S^{\mathbf{m}}C},$$ interpreted as ``summation over rational points in each fibre''. Using this morphism, we can define uniform summation over rational points: \begin{definition} \label{cs.summable} Let $\Phi\in\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,0,N)}$ be a family of uniformly compactly supported functions. The uniform summation of $\Phi$ over rational points, denoted by $$\left(\sum_{x\in \kappa(D)(C)^n}\Phi_D(x)\right)_{D\in S^{\mathbf{m}}C},$$ is the image in $\mathscr{E}xp\mathscr{M}_{S^{\mathbf{m}}C}$ of the pullback $\theta_{\mathbf{m}}^*\Phi$.
\end{definition} \begin{remark}[Order of summation] \label{orderofsummation} Starting from a uniformly compactly supported family $\Phi\in \mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,0,N)}$ and pulling it back to $L(D_{\alpha})^n\times S^{\mathbf{m}}C$ via $\theta_{\mathbf{m}}$, we obtain an object of $\mathscr{E}xp\mathscr{M}_{L(D_{\alpha})^n\times S^{\mathbf{m}}C}$, from which we can get an object of $\mathscr{E}xp\mathscr{M}_k$, by either projecting first to $S^{\mathbf{m}}C$ and then to $k$, or first to $L(D_{\alpha})^n$ and then to $k$. The fact that these two operations commute can be interpreted in terms of motivic sums as the possibility of interchanging the order of summation: $$\sum_{D\in S^{\mathbf{m}}C}\ \sum_{x\in \kappa(D)(C)^n}\Phi_D(x) = \sum_{x\in k(C)^n}\sum_{D\in S^{\mathbf{m}}C} \Phi_D(x).$$ Here $\sum_{x\in k(C)^n}$ denotes summation over $L(D_{\alpha})^n$. \end{remark} \section{Poisson formula in families}\label{Poisson.families} In this section we describe the way in which the Poisson formula is going to be used in section \ref{applicPoisson} of chapter \ref{motheightzeta}. \subsection{Uniformly summable families}\label{sect.uniformlysummable} \begin{definition} We say that a constructible family of functions $$\Phi\in\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}$$ is uniformly summable \index{uniformly summable} if there is an element $\Sigma \in \mathscr{E}xp\mathscr{M}_{S^{\mathbf{m}}C}$ such that for any schematic point $D\in S^{\mathbf{m}}C$, the pullback of~$\Sigma$ along $D$ is $\sum_{x\in \kappa(D)(C)^n}\Phi_D(x)\in \mathscr{E}xp\mathscr{M}_{\kappa(D)}$. \end{definition} \begin{remark}\label{uniqueness.sum} By lemma \ref{function.equality}, such an element is unique.
\end{remark} \begin{notation}\label{summable.not} The element $\Sigma$ from the definition will be denoted $$\left(\sum_{x\in \kappa(D)(C)^n}\Phi_D(x)\right)_{D\in S^{\mathbf{m}}C}.$$ \end{notation} \begin{example}\label{cs.summable.ex} We showed in the previous section that a uniformly compactly supported family of functions is uniformly summable. The notation in definition \ref{cs.summable} is consistent with the one in notation \ref{summable.not}. \end{example} \subsection{Motivic Poisson formula in families}\label{poissonfamiliesproof} Let $\Phi\in\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\alpha,\beta,M,N)}$ be a constructible family of Schwartz-Bruhat functions, which we assume to be uniformly summable. By the motivic Poisson formula as stated in lemma \ref{twistedpoisson}, for any schematic point $D\in S^{\mathbf{m}}C$, we have $$\sum_{x\in \kappa(D)(C)^n}\Phi_D(x) = \mathbf{L}^{(1-g)n}\sum_{y\in \kappa(D)(C)^n}\mathscr{F}\Phi_D(y).$$ We may conclude from this and from lemma \ref{function.equality} that the family $$\mathscr{F} \Phi\in\mathscr{E}xp\mathscr{M}_{\mathscr{A}_{\mathbf{m}}(\nu-\beta,\nu-\alpha,N,M)}$$ is uniformly summable as well, and that \begin{equation}\label{uniform.Poisson}\left(\sum_{x\in \kappa(D)(C)^n}\Phi_D(x)\right)_{D\in S^{\mathbf{m}}C} = \mathbf{L}^{(1-g)n}\left(\sum_{y\in \kappa(D)(C)^n}\mathscr{F}\Phi_D(y)\right)_{D\in S^{\mathbf{m}}C}\end{equation} as elements of $\mathscr{E}xp\mathscr{M}_{S^{\mathbf{m}}C}.$ In other words, the Poisson formula from lemma \ref{twistedpoisson} shows that $\Phi$ is uniformly summable if and only if $\mathscr{F} \Phi$ is uniformly summable, and that in this case the families of their sums are related by the Poisson formula (\ref{uniform.Poisson}).
\index{Poisson formula!in families} \subsection{The case of a uniformly smooth family}\label{us.poisson} In chapter \ref{motheightzeta}, section \ref{applicPoisson}, we are going to use this formula in the case where $\Phi$ is a uniformly smooth family, so that $\mathscr{F}\Phi$ is uniformly compactly supported. By example \ref{cs.summable.ex}, $\mathscr{F} \Phi$ is then uniformly summable, and section \ref{poissonfamiliesproof} shows that so is $\Phi$, and that we have equality (\ref{uniform.Poisson}). Taking classes in $\mathscr{E}xp\mathscr{M}_k$ (written as motivic sums), we then have: \begin{equation}\label{sc.Poisson}\sum_{D\in S^{\mathbf{m}}C}\sum_{x\in \kappa(D)(C)^n}\Phi_D(x) = \mathbf{L}^{(1-g)n} \sum_{D\in S^{\mathbf{m}}C}\sum_{y\in \kappa(D)(C)^n}\mathscr{F}\Phi_D(y).\end{equation} \index{Poisson formula!uniformly smooth family} \subsection{Reversing the order of summation}\label{reversesummation} By remark \ref{orderofsummation}, it makes sense to reverse the order of summation in the right-hand side of (\ref{sc.Poisson}), to get: \begin{equation}\label{equation.reverseorder}\sum_{D\in S^{\mathbf{m}}C}\sum_{x\in \kappa(D)(C)^n}\Phi_D(x) = \mathbf{L}^{(1-g)n}\sum_{y\in k(C)^n}\sum_{D\in S^{\mathbf{m}}C}\mathscr{F}\Phi_D(y).\end{equation} \begin{remark}\label{dropkappaD} For simplicity of notation, in chapter \ref{motheightzeta} we will drop the mention of $\kappa(D)$ in the summations, and write simply $\sum_{x\in k(C)}$. \end{remark}
\section{Introduction}\label{sec:introduction} Autonomous systems are becoming increasingly prevalent in our daily life, with examples such as self-driving vehicles, package delivery drones and household service robots~\cite{dunbabin2012robots}. Nevertheless, these autonomous systems often perform their intended tasks under the supervision of, or in collaboration with, human operators~\cite{fong2003survey}. On the high level, the human operator can assign tasks for the robot to execute or monitor the task execution progress at run time. On the low level, the operator can directly influence or even take over the control commands of the robot from the on-board autonomous controller, which can be useful to guide the robot through difficult parts of the task~\cite{fong2003survey, goodrich2007human, loizou2007mixed}. On the other hand, the autonomous controller should take into account possibly erroneous inputs from the operator and ensure that safety constraints are never violated. Thus, properly addressing these online interactions between the autonomous system and the operator during the design process is essential for the safety and efficiency of the overall system. In this work, we consider interactions on both levels. Particularly, on the high level, the operator assigns (i) offline, a local task given as LTL formulas for hard and soft task constraints, and (ii) online, temporary tasks with deadlines. On the low level, the operator's control input is fused directly with the autonomous controller via a mixed-initiative controller. The proposed motion and task planning scheme ensures that the hard task constraints regarding safety are obeyed at all times, while the soft constraints for performance are improved gradually as the robot is guided to explore the workspace and, more importantly, to learn the human preference over the synthesized plan.
We rely on LTL as the formal language~\cite{baier2008principles} to describe complex high-level tasks beyond classic point-to-point navigation. Many recent papers combine robot motion planning with model-checking-based task planning, e.g., for a single robot under LTL motion tasks~\cite{fainekos2009temporal, bhatia2010sampling, ding2011mdp, belta2017formal}, a multi-robot system under a global task~\cite{ulusoy2013optimality}, or a multi-robot system under local independent~\cite{guo2015multi} or dependent tasks~\cite{guo2017task,tumova2016multi}. However, none of the above directly addresses the human initiative, neither in the continuous control nor in the discrete planning. On the other hand, human inputs are considered in~\cite{kress2009temporal} via GR(1) task formulas that require the robot to be reactive to simple sensory signals from the human. The high-level robot-human interaction is modeled as a two-player Markov Decision Process (MDP) game in~\cite{feng2016synthesis, fu2015pareto}, where the robot and the human take turns to influence the system evolution. The goal there is to design a shared control policy that satisfies an LTL formula and minimizes a cost function. Another recent work~\cite{jansen17synthesis} addresses the control problem of MDPs under LTL formulas, where the autonomy strategy is blended into the human strategy in a minimal way that also ensures safety and performance. However, direct interaction on the low level is not investigated in the aforementioned works. More importantly, these frameworks require an all-time involvement of the human, whereas we assume human intervention only whenever the operator prefers it. Furthermore, the notion of a mixed-initiative controller was first proposed in~\cite{loizou2007mixed}; it combines external human inputs with a traditional navigation controller~\cite{koditschek1990robot}, while ensuring safety of the overall system.
The work in~\cite{panagou2013multi, wang2016multi} proposes a systematic way to compose multiple control initiatives using barrier functions. However, high-level complex temporal tasks are not considered in these works. The main contribution of this work is a novel human-in-the-loop control framework that allows human interaction on both the high level, via complex tasks, and the low level, via continuous control inputs. We ensure safety at all times during the interaction, as well as the accommodation of short-term contingent tasks assigned at run time. Lastly, the proposed IRL algorithm enables the robot to asymptotically learn and adapt to the human operator's preference in the plan synthesis. The rest of the paper is organized as follows: Section~\ref{sec:prelims} introduces preliminaries of LTL. Section~\ref{sec:problem-formulate} formulates the problem. The main algorithmic components are presented in Section~\ref{sec:blocks} and integrated into the complete framework in Section~\ref{sec:complete}. Numerical and experimental studies are shown in Section~\ref{sec:case}. We conclude in Section~\ref{sec:future}. \section{Preliminaries}\label{sec:prelims} \subsection{Linear Temporal Logic (LTL)}\label{subsec:LTL} An LTL formula over a set of {atomic propositions} $AP$, each of which can be evaluated as true or false, is defined inductively according to the following syntax~\cite{baier2008principles}: $\varphi::=\top \;|\; a \;|\; \varphi_1 \wedge \varphi_2 \;|\; \neg \varphi \;|\; \bigcirc \varphi \;|\; \varphi_1\, \textsf{U}\, \varphi_2,$ where $\top\triangleq \texttt{True}$, $a \in AP$, and $\bigcirc$ (\emph{next}) and $\textsf{U}$ (\emph{until}) are temporal operators. Other useful operators such as $\square$ (\emph{always}), $\Diamond$ (\emph{eventually}) and $\Rightarrow$ (\emph{implication}) can be derived from them. The full semantics and syntax of LTL are omitted here due to limited space; see, e.g.,~\cite{baier2008principles}.
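As an aside, the inductive syntax above can be made concrete with a small illustrative sketch (this is not part of the proposed framework; the class and function names are our own, chosen for illustration), where the derived operators $\Diamond$ and $\square$ are expressed through $\textsf{U}$ and $\neg$:

```python
from dataclasses import dataclass

# Minimal AST mirroring the grammar:
#   phi ::= True | a | phi1 & phi2 | !phi | O phi | phi1 U phi2
class Formula: ...

@dataclass(frozen=True)
class Top(Formula): ...          # the constant True

@dataclass(frozen=True)
class Atom(Formula):             # an atomic proposition a in AP
    name: str

@dataclass(frozen=True)
class Neg(Formula):              # negation !phi
    sub: Formula

@dataclass(frozen=True)
class And(Formula):              # conjunction phi1 & phi2
    left: Formula
    right: Formula

@dataclass(frozen=True)
class Next(Formula):             # next-step operator O phi
    sub: Formula

@dataclass(frozen=True)
class Until(Formula):            # until operator phi1 U phi2
    left: Formula
    right: Formula

def eventually(phi: Formula) -> Formula:
    # <>phi is defined as True U phi
    return Until(Top(), phi)

def always(phi: Formula) -> Formula:
    # []phi is defined as !(<>(!phi))
    return Neg(eventually(Neg(phi)))
```

For instance, the safety requirement $\square\, \texttt{safe}$ unfolds to $\neg(\top\, \textsf{U}\, \neg \texttt{safe})$ in this representation.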
{Syntactically co-safe LTL (sc-LTL)} is a subclass of LTL whose formulas can be fulfilled by a finite satisfying prefix~\cite{kupferman2001model}. \subsection{B\"uchi Automaton}\label{subsec:buchi} Given an LTL formula~$\varphi$, there always exists a Nondeterministic B\"uchi Automaton (NBA) $\mathcal{A}_{\varphi}$ that accepts exactly the languages that satisfy~$\varphi$~\cite{baier2008principles}. It is defined as $\mathcal{A}_{\varphi}=(Q, \,2^{AP},\, \delta,\, Q_0,\, \mathcal{F})$, where $Q$ is a finite set of states; $Q_0 \subseteq Q$ is the set of initial states; $2^{AP}$ is the set of input alphabets; $\delta: Q\times 2^{AP} \rightarrow {2^Q}$ is a transition relation; and $\mathcal{F}\subseteq Q$ is a set of accepting states. There are fast translation algorithms~\cite{gastin2001fast} to obtain~$\mathcal{A}_{\varphi}$. Moreover, denote by $\chi(q_m,\, q_n)=\{\ell\in 2^{AP}\,|\, q_n\in \delta (q_m,\,\ell) \}$ the set of all input alphabets that enable the transition from $q_m$ to $q_n$ in $\delta$. Then the distance between~$\ell \in 2^{AP}$ and $\chi \subseteq 2^{AP}$ (with $\chi \neq \emptyset$) is defined by $\texttt{Dist}(\ell, \, \chi)=0$ if $\ell \in \chi$, and $\texttt{Dist}(\ell, \, \chi)=\min_{\ell'\in \chi}\; |\{a\in AP\,|\,a\in \ell,\, a\notin \ell'\}|$ otherwise. Namely, it returns the minimal difference between~$\ell$ and any element in~$\chi$. \section{Problem Formulation}\label{sec:problem-formulate} \subsection{Dynamic Workspace and Motion Abstraction}\label{subsec:dynamic-workspace} The bounded workspace where the robot is deployed is denoted by $\mathcal{W}\subset \mathbb{R}^2$. It consists of $N>0$ regions of interest, denoted by~$\Pi=\{\pi_1,\pi_2,\cdots,\pi_N\}$, where~$\pi_{n}\subset \mathcal{W}$. Furthermore, there is a set of $M>0$ properties (atomic propositions) associated with $\Pi$, denoted by~$AP=\{a_1,a_2,\cdots,a_M\}$, e.g., ``this is a public area'', ``this is office room one'' and ``this meeting room is in use''. 
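The labeling of regions feeds directly into the distance function $\texttt{Dist}(\cdot)$ of Section~\ref{subsec:buchi}, which later weighs soft-constraint violations. The following minimal Python sketch illustrates both; the region names and labels are illustrative assumptions, not from the case study:

```python
# Illustrative workspace labeling L: Pi -> 2^AP (region and proposition
# names are assumptions for this sketch).
L = {
    "pi_1": frozenset({"public"}),
    "pi_2": frozenset({"office1"}),
    "pi_3": frozenset({"meeting", "in_use"}),
}

def dist(ell, chi):
    # Dist(ell, chi): 0 if ell already belongs to chi; otherwise the minimal
    # number of propositions of ell missing from some element of chi,
    # i.e., min over ell' in chi of |ell \ ell'|.
    assert chi, "chi must be non-empty"
    if ell in chi:
        return 0
    return min(len(ell - ell_prime) for ell_prime in chi)
```

For instance, $\texttt{dist}(L(\pi_3),\{\{\texttt{meeting}\}\})$ evaluates to $1$, since the proposition ``in\_use'' of the label is not contained in the target alphabet.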
The robot's motion within the workspace is abstracted as a labeled transition system~$\mathcal{T}\triangleq (\Pi,\,\rightarrow,\, \Pi_0,\,AP,\, L)$, where~$\Pi,\, AP$ are defined above; $\rightarrow \subseteq \Pi \times \Pi$ is the transition relation, with~$(\pi_i,\pi_j)\in \rightarrow$ if the robot can move from region~$\pi_i$ to region~$\pi_j$ without crossing other regions in~$\Pi$; $\Pi_0\subseteq \Pi$ is the set of regions where the robot may start initially; and $L: \Pi\rightarrow 2^{AP}$ is the labeling function, where~$L(\pi_i)$ returns the set of properties satisfied by~$\pi_i$. Since the workspace is assumed to be only \emph{partially-known} and dynamic, the labeling function and the transition relation may change over time. \subsection{Mixed-initiative Controller}\label{subsec:robot-control} For simplicity of discussion, we assume that the robot satisfies the single-integrator dynamics $\dot{x}=u$, where~$x,\,u\in \mathbb{R}^2$ are the robot position and control input, respectively. For each transition~$(\pi_s,\pi_g)\in \rightarrow$, the robot is controlled by the mixed-initiative navigation controller~\cite{loizou2007mixed} below: \begin{equation}\label{eq:u-model} u \triangleq u_r(x,\pi_s,\pi_g) + \kappa(x,\Pi)\, u_h(t) \end{equation} where $u_r(x,\pi_s,\pi_g)\in \mathbb{R}^2$ is a given autonomous controller that navigates the robot from region $\pi_s$ to $\pi_g$ while staying within~$\mathcal{W}$ and without crossing other regions in~$\Pi$; the function $\kappa(x,\Pi)\in [0,\; 1]$ is a smooth function to be designed; and $u_h(t)\in \mathbb{R}^2$ is the human input function, which is \emph{uncontrollable} and \emph{unknown} to the robot. \begin{remark}\label{remark:model} The proposed motion and task coordination scheme can be readily extended to robotic platforms with other dynamics and different navigation controllers, such as potential-field-based~\cite{loizou2007mixed} and sampling-based~\cite{bhatia2010sampling,lavalle2006planning} ones. 
\hfill $\blacksquare$ \end{remark} \subsection{Robot Task Assignment}\label{sec:robot-task} The human operator assigns the robot a local task as an LTL formula over~$AP$, which has the following structure: \begin{equation}\label{eq:ltl-task} \varphi \triangleq \varphi_{\text{hard}} \wedge \varphi_{\text{soft}} \wedge \varphi_{\text{temp}} \end{equation} where $\varphi_{\text{hard}}$ and $\varphi_{\text{soft}}$ are ``hard'' and ``soft'' sub-formulas that are assigned \emph{offline}. Particularly, $\varphi_{\text{hard}}$ includes safety constraints such as collision avoidance: ``avoid all obstacles'', or power-supply guarantees: ``visit the charging station infinitely often''; $\varphi_{\text{soft}}$ contains additional requirements for performance such as surveillance: ``surveil all bases infinitely often''. The distinction between soft and hard constraints is due to the observation that the partially-known workspace might render parts of the task initially infeasible, yielding the need to relax them, while the safety-critical parts should never be relaxed. Lastly, $\varphi_{\text{temp}}$ contains short-term contingent tasks that are assigned \emph{online} as sc-LTL formulas and are {unknown} beforehand. The structure in~\eqref{eq:ltl-task} provides an effective way for the operator to handle both standard operational tasks and contingent demands. \subsection{Control Objective}\label{sec:control-objective} Given the abstraction model~$\mathcal{T}$ and the task formula~$\varphi$, the control objective is to design the function~$\kappa(\cdot)$ and the control input~$u$ in~\eqref{eq:u-model} such that: (I) the hard constraints in~$\varphi_{\text{hard}}$ are always satisfied, given all possible human inputs; (II) each time a temporary task~$\varphi_{\text{temp}}$ is assigned, it is satisfied in finite time; and (III) the satisfaction of the soft constraints in~$\varphi_{\text{soft}}$ adapts to the human inputs. 
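The superposition in~\eqref{eq:u-model} itself is straightforward; a minimal Python sketch, assuming a purely attractive navigation term toward the goal-region center (the actual $u_r$ must additionally keep the robot inside $\mathcal{W}$ and away from other regions), reads:

```python
def u_r(x, goal):
    # Hypothetical attractive navigation term toward the goal-region center;
    # the real controller also respects the workspace boundary and regions.
    return (goal[0] - x[0], goal[1] - x[1])

def mixed_initiative(x, goal, kappa, u_h):
    # eq. (1): u = u_r(x, pi_s, pi_g) + kappa(x, Pi) * u_h(t), kappa in [0, 1].
    urx, ury = u_r(x, goal)
    return (urx + kappa * u_h[0], ury + kappa * u_h[1])
```

When $\kappa=0$ the human input is fully rejected; when $\kappa=1$ it is passed through unchanged, which is the behavior the design of $\kappa(\cdot)$ below interpolates between.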
\section{Algorithmic Components}\label{sec:blocks} In this section, we present the four algorithmic components of the overall solution given in Section~\ref{sec:complete}. Particularly, we start by constructing a parameterized product automaton for the plan synthesis. Then we present a mixed-initiative controller that guarantees safety while allowing meaningful inputs from the operator. Furthermore, we discuss plan adaptation algorithms for real-time updates of the workspace model and for contingent task assignments. Lastly, we describe an IRL algorithm to learn the human preference. \subsection{Initial Discrete Plan Synthesis}\label{subsec:init-syn} Denote by $\mathcal{A}_{\text{hard}}=(Q_1, \,2^{AP},\, \delta_1,\, Q_{1,0},\, \mathcal{F}_1)$ and $\mathcal{A}_{\text{soft}}=(Q_2, \,2^{AP},\, \delta_2,\, Q_{2,0},\, \mathcal{F}_2)$ the NBAs associated with $\varphi_{\text{hard}}$ and $\varphi_{\text{soft}}$, respectively, where the notations are defined analogously to Section~\ref{subsec:buchi}. Now we propose a way to compose~$\mathcal{T}$, $\mathcal{A}_{\text{hard}}$ and $\mathcal{A}_{\text{soft}}$ into a product automaton. 
\begin{definition}\label{def:para-prod} The \emph{parameterized} product automaton $\mathcal{A}_{p}\triangleq (Q_p,\, \delta_p, \, Q_{p,0},\, \mathcal{F}_p)$ is defined as follows: $Q_p=\Pi \times Q_1 \times Q_2 \times \{1,2\}$ is the set of states, with $q_p= \langle \pi,\, q_1,\,q_2,\,c \rangle \in Q_p$, $\forall \pi\in \Pi$, $\forall q_1 \in Q_1$, $\forall q_2 \in Q_2$ and $\forall c\in \{1,2\}$; $\delta_p: Q_p \times Q_p \rightarrow (\mathbb{R}_{\geq 0}\cup \{\infty\})^3$ maps each transition to a column vector such that $\delta_p(\langle \pi,\, q_1,\,q_2,c \rangle, \langle \check{\pi},\, \check{q}_1,\,\check{q}_2,\check{c} \rangle) = [\alpha_1,\alpha_2,\alpha_3]^\intercal$, where \begin{itemize} \item $\alpha_1$ is the control cost for the robot to move from~$\pi$ to $\check{\pi}$, where $\alpha_1>0$ if~$(\pi,\,\check{\pi})\in \rightarrow$, and $\alpha_1 \triangleq \infty$ otherwise; \item $\alpha_2$ indicates whether a transition violates the hard constraints. It satisfies $\alpha_2\triangleq 0$ if the following conditions hold: (i)~$L(\pi)\in \chi_1(q_1,\,\check{q}_1)$; (ii) $\chi_2(q_2,\,\check{q}_2)\neq \emptyset$; (iii) $q_1\notin \mathcal{F}_1$ and $\check{c}=c=1$; or $q_2\notin \mathcal{F}_2$ and $\check{c}=c=2$; or $q_1\in \mathcal{F}_1$, $c=1$ and $\check{c}=2$; or $q_2\in \mathcal{F}_2$, $c=2$ and $\check{c}=1$. Otherwise, $\alpha_2\triangleq \infty$; \item $\alpha_3$ measures how much a transition violates the soft constraints, namely~$\alpha_3\triangleq \texttt{Dist}(L(\pi),\,\chi_2(q_2,\check{q}_2))$, where the function~$\texttt{Dist}(\cdot)$ is defined in Section~\ref{subsec:buchi}, and $\chi_1(\cdot)$, $\chi_2(\cdot)$ denote the~$\chi(\cdot)$ functions of $\mathcal{A}_{\text{hard}}$ and $\mathcal{A}_{\text{soft}}$, respectively; \end{itemize} and $Q_{p,0}= \Pi_{0}\times Q_{1,0}\times Q_{2,0}\times \{1\}$, $\mathcal{F}_p=\Pi \times \mathcal{F}_1 \times Q_2 \times \{1\}$ are the sets of initial and accepting states, respectively. 
\hfill $\blacksquare$ \end{definition} An accepting run of~$\mathcal{A}_p$ is an infinite run that starts from any initial state and intersects with the accepting states infinitely often. Note that the component $c$ above ensures that an accepting run intersects with the accepting states of both $\mathcal{A}_{\text{hard}}$ and $\mathcal{A}_{\text{soft}}$ infinitely often. More details can be found in Chapter 4 of~\cite{baier2008principles}. Furthermore, since the workspace model~$\mathcal{T}$ is partially-known, we denote by $\mathcal{T}^t$ the model at time $t\geq 0$, and by~$\mathcal{A}_p^t$ the associated product automaton. To simplify the notation, given a finite run~$R=q_p^0q_p^1\cdots q_p^S$ of $\mathcal{A}_p$, where $q_p^s\in Q_p$, $\forall s=0,1,\cdots,S$, we denote by $\boldsymbol{\delta}(R)=\sum_{s=0}^{S-1}\delta_{p}(q_p^{s},q_p^{s+1})$ the accumulated cost vector of~$\delta_p$ along~$R$, where~$\boldsymbol{\delta}(R)\in \mathbb{R}^3$. Similar definitions hold for~$\boldsymbol{\alpha}_{k}(R)\in \mathbb{R}$ as the accumulated $\alpha_{k}$ cost along~$R$, $\forall k=1,2,3$. We consider an accepting run of~$\mathcal{A}_p$ with the prefix-suffix structure~$R_p\triangleq q_p^1 q_p^2\cdots q_p^S\big{(}q_p^{S+1} q_p^{S+2}\cdots q_p^{S+F}\big{)}^\omega$, where~$q_p^j\in Q_p$, $\forall j=1,2,\cdots,S+F$, with~$S,F>0$. The plan prefix $R_p^{\text{pre}}\triangleq q_p^1 q_p^2\cdots q_p^S$ is executed only once, while the plan suffix $R_p^{\text{suf}} \triangleq q_p^{S+1} q_p^{S+2}\cdots q_p^{S+F}$ is repeated infinitely often. 
Then the total cost of~${R}_p$ is defined as: \begin{equation}\label{eq:plan-cost} \texttt{C}_{\beta}(R_p) \triangleq [1,\,\; \gamma] \otimes \begin{bmatrix} 1 \\ 1 \\ \;\beta \end{bmatrix}^\intercal \cdot \begin{bmatrix} \boldsymbol{\delta}(R_p^{\text{pre}})\\ \boldsymbol{\delta}(R_p^{\text{suf}}) \end{bmatrix}, \end{equation} where $\texttt{C}_{\beta}(R_p)\geq 0$;~$\otimes$ is the Kronecker product; $\gamma\geq 0$ is a weighting parameter between the cost of the plan prefix and suffix; and $\beta\geq 0$ is a weighting parameter between the total control cost of the plan and the satisfaction of the soft task~$\varphi_{\text{soft}}$. Note that~$\gamma$ is normally constant~\cite{guo2015multi} (set to $1$ in this work), while $\beta$ can change according to the robot's internal model or the operator's \emph{preference}. For instance, as the robot obtains a more accurate workspace model, $\beta$ can be increased to penalize the violation of~$\varphi_{\text{soft}}$, so that~$R_p$ satisfies~$\varphi_{\text{soft}}$ more closely. Alternatively, the operator may prefer to decrease $\beta$, so that $R_p$ relaxes~$\varphi_{\text{soft}}$ further and the robot reserves more power. Given the initial values of~$\gamma$ and~$\beta$, an initial accepting run of~$\mathcal{A}_p$, denoted by~$R_p^0$, can be found that {minimizes} the total cost in~\eqref{eq:plan-cost}. The algorithm is based on a nested Dijkstra search; the details are omitted here and can be found in~\cite{guo2015multi}. As a result, the robot's initial plan, denoted by~$\tau_r^0$, can be derived by projecting~$R_p^0$ onto~$\Pi$, as the sequence of regions that the robot should reach: $\tau_r^0=\pi^1\pi^2\cdots \pi^S\big{(}\pi^{S+1}\pi^{S+2}\cdots \pi^{S+F}\big{)}^{\omega}$, where~$\pi^j$ is the projection of $q_p^j$ onto $\Pi$, $\forall j=1,2,\cdots,S+F$. 
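As a concrete illustration, the weighted cost in~\eqref{eq:plan-cost} expands to a plain weighted sum; the following minimal Python sketch (an illustration, not the planner implementation) evaluates it for given accumulated cost vectors, with the Kronecker product written out explicitly:

```python
def plan_cost(delta_pre, delta_suf, gamma=1.0, beta=0.0):
    # eq. (3): [1, gamma] (x) [1, 1, beta]^T applied to the stacked
    # accumulated cost vectors [alpha1, alpha2, alpha3] of prefix and suffix.
    weights = [w1 * w2 for w1 in (1.0, gamma) for w2 in (1.0, 1.0, beta)]
    stacked = list(delta_pre) + list(delta_suf)
    return sum(w * d for w, d in zip(weights, stacked))
```

With $\gamma=1$ and $\beta=0$, only the control and hard-violation costs $\alpha_1,\alpha_2$ contribute; increasing $\beta$ adds the soft-violation cost $\alpha_3$ of both prefix and suffix.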
\subsection{Mixed-initiative Controller Design}\label{subsec:design-control} After the system starts, the robot executes the initial plan~$\tau_r^0$ by reaching the sequence of regions defined by it. However, as described in Section~\ref{subsec:robot-control}, the robot controller is also influenced by the human input. In the following, we show how to construct the function~$\kappa(\cdot)$ in~\eqref{eq:u-model} such that the hard task~$\varphi_{\text{hard}}$ is respected at all times, for all human inputs. First, we need to find the set of product states~$\mathcal{O}_t\subset Q_p$ in~$\mathcal{A}_p^t$ at time~$t\geq 0$ such that, once the robot reaches any state in~$\mathcal{O}_t$, the hard task~$\varphi_{\text{hard}}$ can no longer be satisfied. \begin{lemma}\label{lem:unsafe} Assume that the robot belongs to state~$q_{p}\in Q_p$ at time $t>0$. Then the hard task~$\varphi_{\text{hard}}$ cannot be satisfied in the future if $\mathcal{A}_p^t$ remains unchanged and the minimal cost of all paths from~$q_p$ to any accepting state in~$\mathcal{F}_p$ is~$\infty$. \end{lemma} \begin{proof} Omitted as it is a simple inference of~\eqref{eq:plan-cost}. \end{proof} Denote by~$Q_t\subset Q_p$ the set of states \emph{reachable} by the robot at time $t>0$. For each~$q_p\in Q_t$, we perform a Dijkstra search to compute the shortest distance from $q_p$ to all accepting states in~$\mathcal{F}_p$. Lastly, $\mathcal{O}_t$ is given as the subset of states in $Q_t$ that have an infinite cost to \emph{all} accepting states, i.e., \begin{equation}\label{eq:ot} \mathcal{O}_t = \{q_p\in Q_t\,|\,\texttt{C}_{\beta}(\overline{R}_{q_p,\,q_F})=\infty, \forall q_F \in \mathcal{F}_p\}, \end{equation} where~$\overline{R}_{q_p,\,q_F}$ is the shortest path from~$q_p$ to~$q_F$. Given~$\mathcal{O}_t$ above, we now design the function $\kappa(x,\Pi)$ in~\eqref{eq:u-model} such that $\mathcal{O}_t$ is avoided. 
Consider the function: \begin{equation}\label{eq:kappa} \kappa(x,\Pi) = \kappa(x,\mathcal{O}_t) \triangleq \frac{\rho(d_t-d_s)}{\rho(d_t-d_s)+\rho(\varepsilon+d_s-d_t)} \end{equation} where $d_t\triangleq \textbf{min}_{\langle \pi,q_1,q_2,c\rangle\in \mathcal{O}_t}\; \|x-\pi\|$ is the minimum distance between the robot and any region within~$\mathcal{O}_t$; $\rho(s)\triangleq e^{-1/s}$ for~$s>0$ and $\rho(s)\triangleq 0$ for~$s\leq 0$; and $d_s,\, \varepsilon >0$ are design parameters for the safety distance and a small buffer, respectively. Thus the mixed-initiative controller is given by \begin{equation}\label{eq:mixed-init} u \triangleq u_r(x,\pi_s,\pi_g) + \kappa(x,\mathcal{O}_t)\, u_h(t). \end{equation} As discussed in~\cite{loizou2007mixed}, the function~$\kappa(\cdot)$ above is $0$ on the boundary of the undesired regions in~$\mathcal{O}_t$, and close to $1$ when the robot is far from~$\mathcal{O}_t$, to allow for meaningful inputs from the operator. This degree of closeness is tunable via the design parameters~$d_s$ and~$\varepsilon$. However, the original definition in~\cite{loizou2007mixed} only considers static obstacles, instead of the general set~$\mathcal{O}_t$. \begin{lemma}\label{lemma:iss} Assume that the navigation control $u_r$ is perpendicular to the boundary of the regions in $\mathcal{O}_t$ and points inwards. Then the robot avoids~$\mathcal{O}_t$ under the mixed-initiative controller~\eqref{eq:mixed-init} for all human inputs~$u_h$. \end{lemma} \begin{proof} The proof follows from Proposition 3 of~\cite{loizou2007mixed}. Namely, for any~$x \in \partial \mathcal{O}_t$ on the boundary of~$\mathcal{O}_t$, $$\dot{x}^\intercal u_r = \|u_r(x)\|^2 +\kappa(x)\, u_h(t)^\intercal u_r(x)>0$$ since~$\kappa(x)=0$ for~$x\in \partial \mathcal{O}_t$. Thus the workspace excluding all regions in~$\mathcal{O}_t$ is positively invariant under the controller~\eqref{eq:mixed-init}. 
In other words, if the navigation control avoids $\mathcal{O}_t$, the same property is ensured by~\eqref{eq:mixed-init} for all human inputs. \end{proof} \subsection{Discrete Plan Adaptation}\label{subsec:plan-adapt} In this section, we describe how the discrete plan~$\tau_r^t$ at time $t\geq 0$ can be updated to (i) accommodate changes in the partially-known workspace model, and (ii) fulfill contingent tasks that are assigned by the operator at run time. \subsubsection{Updated Workspace Model}\label{subsub:update-ws} The robot can explore new features of the workspace while executing the discrete plan $\tau_r^t$ or being guided by the human operator. Thus the motion model~$\mathcal{T}^t$ can be updated as follows: (i) the transition relation~$\rightarrow$ is modified based on the status feedback from the navigation controller, i.e., whether it is feasible to navigate from region~$\pi_i$ to $\pi_j$ without human inputs; (ii) the labeling function $L(\pi)$ is changed based on the feedback from the robot's sensing module, i.e., the properties that region~$\pi$ satisfies. For example, ``the corridor connecting two rooms is blocked'' or ``the object of interest is no longer in one room''. Given the updated $\mathcal{T}^t$, the mapping $\delta_p$ of the product automaton~$\mathcal{A}^t_p$ is re-evaluated. Consequently, the current plan~$\tau_r^t$ might no longer be optimal with respect to the cost in~\eqref{eq:plan-cost}. Thus we consider the following problem. \begin{problem}\label{prob:update-ws} Update~$\tau_r^t$ such that it has the minimum cost in~\eqref{eq:plan-cost} given the updated model~$\mathcal{T}^t$. \hfill $\blacksquare$ \end{problem} Given the set of reachable states~$Q_t\subset Q_p$ at time $t>0$, for each state~$q_p \in Q_t$, we perform a nested Dijkstra search~\cite{guo2015multi} to find the accepting run that starts from~$q_p$ and has the prefix-suffix structure with the minimum cost defined in~\eqref{eq:plan-cost}. 
Denote by~$R^t_{+}(q_p)$ this optimal run for~$q_p\in Q_t$. Moreover, let $\mathbf{R}^t_{+}\triangleq \{R^t_{+}(q_p),\, \forall q_p \in Q_t\}$ collect all such runs. Then we find among $\mathbf{R}^t_{+}$ the accepting run with the minimum cost, which becomes the updated run~$R_p^{t+}$: \begin{equation}\label{eq:update-run} R_p^{t+} \triangleq \textbf{argmin}_{R_p\in \mathbf{R}^{t}_+} \texttt{C}_{\beta}(R_p). \end{equation} Thus the updated plan~$\tau_r^t$ is given by the projection of~$R_p^{t+}$ onto~$\Pi$. Note that the above procedure is performed whenever the motion model~$\mathcal{T}^t$ is updated. \subsubsection{Contingent Task Fulfillment}\label{subsubsec:cont-task} As defined in~\eqref{eq:ltl-task}, the operator can assign contingent, short-term tasks~$\varphi_{\text{temp}}$ to the robot at run time. Particularly, we consider the following ``\emph{pick-up and deliver}'' task with deadlines, i.e., \begin{equation}\label{eq:temp-task} (\varphi_{\textup{temp}}^t,\, T_{sg})\triangleq (\Diamond (\pi_s \wedge \Diamond \pi_{g}),\,T_{sg}), \end{equation} where~$\varphi_{\textup{temp}}^t\triangleq \Diamond (\pi_s \wedge \Diamond \pi_{g})$ is the temporary task assigned at time $t>0$, meaning that the robot needs to pick up some objects at region $\pi_s$ and deliver them to $\pi_g$ (note that action propositions are omitted here, see~\cite{guo2017task}), where~$\pi_s,\pi_g\in \Pi$; and $T_{sg}>0$ is the \emph{preferred} deadline by which the task~$\varphi_{\textup{temp}}^t$ should be accomplished. It can be verified that $\varphi_{\textup{temp}}^t$ is an sc-LTL formula and can be fulfilled in finite time. Assume that $\varphi_{\textup{temp}}^t$ is satisfied at time $t'>t$; then the delay is defined as $\overline{t}_{sg}\triangleq t'-T_{sg}$. We consider the following problem to incorporate $\varphi_{\textup{temp}}^t$. 
\begin{problem}\label{prob:fulfill-temp} Update~$R_p^t$ such that $\varphi_{\textup{temp}}^t$ is fulfilled \emph{without} delay (if possible) and with \emph{minimum} extra cost, while respecting the hard constraints~$\varphi_{\textup{hard}}$. \hfill $\blacksquare$ \end{problem} Assume that the remaining optimal run at time $t>0$ is given by~$R_p^t=q^{k_0}_p q_p^{k_0+1}\cdots(q_p^S\cdots q_p^{S+F})^\omega$, and let $q^{k_s}_p,q^{k_g}_p\in Q_p$ be the $k_s$-th and $k_g$-th states of~$R_p^t$, where $k_g\geq k_s\geq k_0$. Since~$R_p^t$ is optimal for the current~$\mathcal{T}^t$, we search for the index~$k_s$ where the robot can deviate from~$R_p^t$ to reach region~$\pi_s$ and back, and another index~$k_g$ where the robot can deviate from~$R_p^t$ to reach~$\pi_g$ and back. Denote by~$R_p^{t+}$ the updated run after incorporating~$\pi_s,\pi_g$. In this way,~$\varphi_{\textup{temp}}^t$ is satisfied when~$\pi_{g}$ is reached after $\pi_s$, where~$t'=\sum_{j=k_0}^{k_g}\alpha_1(q_p^j,q_p^{j+1})$ is the total time. Moreover, the total cost of~$R^t_p$ in~\eqref{eq:plan-cost} changes by~$\overline{\texttt{C}}_{\beta} (R^t_p) \triangleq \texttt{C}_{\beta}(R^{t+}_p)-\texttt{C}_{\beta}(R^t_p)$. Thus we formulate the following two-stage optimization: first, we solve \begin{subequations}\label{eq:delay} \begin{align} \overline{d}_{sg} &= \textbf{min}_{\{k_g>k_s\geq 0\}}\; \{\overline{\texttt{C}}_{\beta}(R^t_p)\},\quad \textbf{s.t.}\quad \overline{t}_{sg} \leq 0, \label{eq:delay-1}\\ \intertext{in order to find out whether it is possible to avoid delay while satisfying $\varphi_{\textup{temp}}$. 
If no solution is found, we solve the relaxed optimization that allows the deadline to be missed:} \overline{d}_{sg} &= \textbf{min}_{\{k_g>k_s\geq 0\}}\; \{\overline{t}'_{sg} + \overline{\texttt{C}}_{\beta}(R^t_p)\},\label{eq:delay-2} \end{align} \end{subequations} where $\overline{d}_{sg}\geq 0$; $\overline{t}'_{sg}=0$ if $\overline{t}_{sg}\leq 0$ and $\overline{t}'_{sg} = \overline{t}_{sg}$ otherwise. Note that $\overline{\texttt{C}}_{\beta}(R^t_p)$ is $\infty$ if~$\varphi_{\textup{hard}}$ is violated by~$R_p^{t+}$. Since the suffix of~$R_p^t$ is repeated infinitely often, the choice of indices $k_s,k_g$ in \eqref{eq:delay} is finite. Thus \eqref{eq:delay} can be solved as follows: starting from $k_s=k_0$, we iterate through $k_g\in \{k_0+1,\cdots,\,S+F\}$ and compute the corresponding $\overline{t}_{sg}$ and $\overline{d}_{sg}$ for both cases in~\eqref{eq:delay}. Then we set $k_s=k_0+1$, iterate through $k_g\in \{k_0+2,\cdots,\, S+F\}$, and compute $\overline{t}_{sg}$ and $\overline{d}_{sg}$ for both cases. This procedure repeats \emph{until} $k_s=S+F-1$. Then, we check among these candidates whether there is a pair $k^\star_s,k^\star_g$ that solves~\eqref{eq:delay-1}. If so, it is the optimal choice of $k_s,k_g$. Otherwise, we search for the optimal solution to~\eqref{eq:delay-2}, which always exists as the problem is unconstrained. At last, $R_p^{t+}$ is derived by inserting the product states associated with $\pi_s$ and $\pi_g$ at indices $k^\star_s$ and $k^\star_g$ of $R_p^t$, respectively. \subsection{Human Preference Learning}\label{subsec:pref-learn} As discussed in Section~\ref{subsec:design-control}, the mixed-initiative controller~\eqref{eq:mixed-init} allows the operator to interfere with the robot's trajectory such that it deviates from its discrete plan $\tau_r^t$, while always obeying~$\varphi_{\textup{hard}}$. 
This is beneficial as the robot can be guided to (i) explore \emph{unknown} features to update its workspace model, as described in Section~\ref{subsub:update-ws}; and (ii) follow the trajectory that is \emph{preferred} by the operator. Particularly, as discussed in Section~\ref{subsec:init-syn}, the initial run~$R_p^0$ is a balanced plan between reducing the control cost and improving the satisfaction of~$\varphi_{\textup{soft}}$, with the weighting parameter $\beta$ in~\eqref{eq:plan-cost}. Clearly, different choices of~$\beta$ may result in different~$R^0_p$. The initial plan~$R_p^0$ is synthesized under the initial value $\beta_0\geq 0$, which, however, might \emph{not} be what the operator prefers. In the following, we present how the robot can learn the {preferred}~$\beta$ from the operator's inputs during run time. Consider that at time~$t\geq 0$, the robot's past trajectory is given by~$\zeta|_0^{t}\triangleq \pi_0\pi_1\cdots \pi_{k_t}$. Assume now that during time $[t,\,t']$, where $t'>t>0$, via the mixed-initiative controller in~\eqref{eq:mixed-init}, the operator guides the robot to reach a sequence of regions that s/he prefers, defined by: \begin{equation}\label{eq:human-traj} \zeta_h|_t^{t'}\triangleq \pi'_1\pi'_2\cdots \pi'_H \end{equation} where~$\pi'_h\in \Pi$, $\forall h=1,2,\cdots, H$, and~$H\geq 1$ is the length of~$\zeta_h$, which can vary each time the operator acts. Afterwards, the robot continues executing its current plan~$\tau_r^t$. Thus, the actual robot trajectory up to time $t'$ is given by $\zeta_h|_0^{t'}\triangleq \zeta|_0^{t}\,\zeta_h|_t^{t'}$, i.e., the concatenation of $\zeta|_0^{t}$ and $\zeta_h|_t^{t'}$. \begin{problem}\label{prob:IRL} Given the actual robot trajectory~$\zeta_h|_0^{t'}$, design an algorithm to estimate the preferred value of $\beta$, denoted by~$\beta_h^\star$, such that~$\zeta_h|_0^{t'}$ corresponds to the optimal plan under $\beta_h^\star$. 
\hfill $\blacksquare$ \end{problem} The above problem is closely related to the inverse reinforcement learning (IRL) problem~\cite{ng2000algorithms, ratliff2006maximum}, where the robot learns the cost functions of the system model from demonstrations of the preferred plans. In the reinforcement learning problem~\cite{sutton1998reinforcement,Bertsekas1996}, by contrast, the robot learns the optimal plan given these cost functions. As mentioned in~\cite{ng2000algorithms}, most IRL problems are ill-posed. In our case, this means that more than one value of $\beta_h^\star$ renders $\zeta_h|_0^{t'}$ the optimal plan. In order to improve \emph{generalization}, such that the robot can infer the human preference from the human's past inputs (instead of simply repeating them), our solution is based on the maximum margin planning algorithm from~\cite{ratliff2006maximum}. The general idea is to iteratively update $\beta$ via a sub-gradient descent, where the gradient is computed based on the difference in cost between $\zeta_h|_0^{t'}$ and the optimal plan under the current $\beta$. First, we compute the set of all finite runs within~$\mathcal{A}_p^{t'}$ that are associated with $\zeta_h|_0^{t'}$, denoted by $\mathbf{R}_h^{t'}$. It can be derived iteratively via a breadth-first graph search~\cite{baier2008principles}. Among~$\mathbf{R}_h^{t'}$, we find the one with the minimal cost over $\alpha_3$, i.e., \begin{equation}\label{eq:minimal} R^\star_h \triangleq \textbf{argmin}_{R\in \mathbf{R}_h^{t'}}\; \boldsymbol{\alpha}_3 (R). \end{equation} Let $R^\star_h\triangleq q_1q_2\cdots q_H$, where $q_h\in Q_p$, $\forall h=1,2,\cdots,H$. Denote by $\beta_k$ the value of $\beta$ at the $k$-th iteration, for $k\geq 0$, where $\beta_0\triangleq \beta_t$ and $\beta_t$ is the value of $\beta$ at time $t>0$. 
For the $k$-th iteration, we find the optimal run from~$q_1$ to $q_H$ under $\beta_k$ with certain margins, i.e., \begin{equation}\label{eq:beta-optimal} \hat{R}_{\beta_k}^\star \triangleq \textbf{argmin}_{R\in \mathbf{R}_{q_1q_H}}\; \Big{(}\texttt{C}_{\beta_k}(R) - M(R,R^\star_h)\Big{)} \end{equation} where $\mathbf{R}_{q_1q_H}$ is the set of \emph{all} runs from $q_1$ to $q_H$ in $\mathcal{A}_p^{t'}$; and $M:Q^H\times Q^H\rightarrow \mathbb{N}$ is the margin function~\cite{ratliff2006maximum}: \begin{equation}\label{eq:margin} M(R,\,R^\star_h) = |\{(q_s,q_t)\in R\,|\, (q_s,q_t)\notin R_h^\star\}|, \end{equation} which returns the number of edges within $R$ that do not belong to $R_h^\star$. The margin function decreases the total cost $\texttt{C}_{\beta_k}(R)$ by the difference between $R$ and $R_h^\star$. It improves generalization and helps address the ill-posed nature of Problem~\ref{prob:IRL}. To solve~\eqref{eq:beta-optimal}, we first modify~$\mathcal{A}_p^{t'}$ by reducing the $\alpha_1$ cost of each edge $(q_s,q_t)\in R_h^\star$ by one. Then a Dijkstra shortest-path search can be performed over the modified $\mathcal{A}_p$ to find the shortest run from $q_1$ to $q_H$ that minimizes the cost with margins in~\eqref{eq:beta-optimal}. Given~$\hat{R}_{\beta_k}^\star$, we can compute the sub-gradient~\cite{shor2012minimization} that $\beta_k$ should follow: \begin{equation}\label{eq:sub-gradient} \nabla \beta_k = \lambda\cdot \beta_k + \big{(}\boldsymbol{\alpha}_3(R^\star_h) - \boldsymbol{\alpha}_3(\hat{R}_{\beta_k}^\star)\big{)}, \end{equation} where~$\nabla\beta_k \in \mathbb{R}$ and $\lambda>0$ is a design parameter. Thus, at this iteration the value of~$\beta_k$ is updated by \begin{equation}\label{eq:update-beta} \beta_{k+1}=\beta_{k} - \theta_{k} \cdot \nabla \beta_{k}, \end{equation} where~$\theta_k>0$ is the step size or learning rate~\cite{sutton1998reinforcement}. 
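For illustration, the margin~\eqref{eq:margin} and one iteration of~\eqref{eq:sub-gradient}-\eqref{eq:update-beta} can be sketched in Python as follows; runs are assumed to be given as lists of (hashable) product states, which is an implementation choice rather than part of the formal development:

```python
def margin(run, run_star):
    # M(R, R_h*): number of consecutive-state edges of run absent from run_star.
    edges = set(zip(run, run[1:]))
    edges_star = set(zip(run_star, run_star[1:]))
    return len(edges - edges_star)

def subgradient_step(beta_k, alpha3_star, alpha3_hat, lam, theta):
    # One sub-gradient step: grad = lam * beta_k + (alpha3(R_h*) - alpha3(R_hat*)),
    # then beta_{k+1} = beta_k - theta * grad.
    grad = lam * beta_k + (alpha3_star - alpha3_hat)
    return beta_k - theta * grad
```

Note that when the margin-optimal run $\hat{R}_{\beta_k}^\star$ attains the same $\alpha_3$ cost as $R_h^\star$, the sub-gradient reduces to the regularization term $\lambda\beta_k$ alone.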
Given the updated~$\beta_{k+1}$, the same process in~\eqref{eq:minimal}-\eqref{eq:update-beta} is repeated until the difference $|\beta_{k+1}-\beta_k|$ is less than a predefined threshold~$\varepsilon>0$. At last, the value of $\beta_t$ is updated to $\beta_{k+1}$. The discussion above is summarized in Alg.~\ref{alg:learn-beta}. Each time the human operator guides the robot to reach a new sequence of regions, the estimate of~$\beta_t$ is updated by running Alg.~\ref{alg:learn-beta}. In the following, we show that Alg.~\ref{alg:learn-beta} ensures the convergence of $\{\beta_k\}$. \begin{algorithm}[t] \caption{On-line IRL algorithm for $\beta$.} \label{alg:learn-beta} \LinesNumbered \KwIn{$\mathcal{A}^t_p$, $\zeta_h|_0^{t'}$, $\beta_t$, $\varepsilon$} Initialize $\beta_k=\beta_t$ for iteration $k=0$\; \While(\tcp*[f]{Iteration $k$}){$|\beta_{k+1}-\beta_k|>\varepsilon$} {Compute $R_h^\star$ in~\eqref{eq:minimal} given~$\zeta_h|_0^{t'}$\; Find $\hat{R}_{\beta_k}^{\star}$ in~\eqref{eq:beta-optimal} given $\beta_k$ and $R_h^\star$\; Compute $\nabla \beta_k$ by~\eqref{eq:sub-gradient} and update $\beta_k$ by~\eqref{eq:update-beta}\;} \Return $\beta_t^+=\beta_{k+1}$ \end{algorithm} \begin{lemma}\label{lemma:whole} The sequence $\{\beta_k\}$ in Alg.~\ref{alg:learn-beta} converges to a fixed value $\beta_l^\star\geq 0$, and the optimal plan under $\beta_l^\star$ is $\zeta_h|_0^{t'}$. \end{lemma} \begin{proof} First, the optimal run $R_h^\star$ associated with $\zeta_h|_0^{t'}$ under $\beta_h^\star$ minimizes the balanced cost~$\texttt{C}_\beta$ from~\eqref{eq:plan-cost}, i.e., \begin{equation}\label{eq:opt-1} \texttt{C}_{\beta}(R_h^\star) \leq \texttt{C}_{\beta}(R), \quad \forall R\in \mathbf{R}_{q_1q_H}, \end{equation} where $\mathbf{R}_{q_1q_H}$ is defined in~\eqref{eq:beta-optimal}. Solving~\eqref{eq:opt-1} directly can be computationally expensive due to the large set~$\mathbf{R}_{q_1q_H}$. 
We introduce a slack variable~$\xi\in \mathbb{R}$ to relax the constraints: \begin{equation}\label{eq:opt-2} \begin{split} &\textbf{min}_{\beta\geq 0} \quad \frac{\lambda}{2}\beta^2 + \xi \\ & \textbf{s.t.} \quad \texttt{C}_{\beta}(R^\star_h) - \xi \leq \min_{R\in \mathbf{R}_{q_1q_H}} \Big{(}\texttt{C}_{\beta}(R) - M(R,R^\star_h)\Big{)}, \\ \end{split} \end{equation} where~$\lambda>0$ is the same as in~\eqref{eq:sub-gradient} and the margin function $M(\cdot)$ is from~\eqref{eq:margin}. Thus, by enforcing the slack variable to be tight, $\beta$ also {minimizes} the combined cost function: \begin{equation}\label{eq:combined} \frac{\lambda}{2}\beta^2 + \texttt{C}_{\beta}(R^\star_h) - \min_{R\in \mathbf{R}_{q_1q_H}} \Big{(}\texttt{C}_{\beta}(R) - M(R,R^\star_h)\Big{)}, \end{equation} which is convex but non-differentiable. Instead, we compute the sub-gradient~\cite{shor2012minimization} of~\eqref{eq:combined}: $\nabla \beta = \lambda \beta + \big{(}\boldsymbol{\alpha}_3(R^\star_h) - \boldsymbol{\alpha}_3(\hat{R}_{\beta}^\star)\big{)}$ with $\hat{R}_{\beta}^\star=\textbf{argmin}_{R\in \mathbf{R}_{q_1q_H}} {(}\texttt{C}_{\beta}(R) - M(R,R^\star_h) {)}$, which is equivalent to~\eqref{eq:sub-gradient}. Lastly, by the strong convexity of~\eqref{eq:combined} and Theorem~1 of~\cite{ratliff2006maximum}, the estimate $\beta_k$ approaches the optimal $\beta_l^\star$ with a linear convergence rate under a constant stepsize $\theta_k=\theta$, i.e., $|\beta_k-\beta_l^\star|^2\leq (1- \theta\lambda)^{k+1}|\beta_0-\beta_l^\star|^2+\frac{\theta |\nabla \beta|_{\max}}{\lambda}$. A detailed analysis of this deviation can be found in~\cite{ratliff2006maximum,shor2012minimization}. \end{proof} \begin{remark} It is worth noting that the convergent value $\beta_l^\star$ might be \emph{different} from the preferred~$\beta^\star_h$, while they both satisfy~\eqref{eq:opt-1} with the same optimal run $R_h^\star$. 
However, the margin function in~\eqref{eq:opt-2} ensures that $\beta_l^\star$ is \emph{better} than or at least equivalent to $\beta_h^\star$ in terms of the similarity between $R_h^\star$ and the run with the second-smallest cost by~\eqref{eq:plan-cost}. \end{remark} \begin{table}[t] \begin{center} \scalebox{1.1}{ \begin{tabular}{| c| c | c | c | c|} \hline Method & $|\mathcal{A}_p|$ & $\beta_l^\star$ & No. of Dijkstra runs & Time[\si{\second}] \\ \hline \hline Alg.1 & 25 & 13.4 & 8 & 3.8\\ \hline M1 & 25 & 10.0 & 200 & 124.4 \\ \hline M2 & 25 & 11.7 & 350 & 337.2\\ \hline \hline Alg.1 & 100 & 16.5 & 12 & 150.8\\ \hline M1 & 100 & 14.2 & 200 & 2203.5 \\ \hline M2 & 100 & -- & 800+ & 3000+\\ \hline \end{tabular}} \caption{Comparison of computational complexity and performance of Alg.~\ref{alg:learn-beta} and two alternative methods in Example~\ref{example:compare}.} \label{table:beta-statistics} \end{center} \end{table} Now we show the computational efficiency of Alg.~\ref{alg:learn-beta} compared with two straightforward alternatives: (M1) choose the optimal $\beta$ among a set of guessed values of $\beta$, denoted by $S_\beta$; (M2) solve~\eqref{eq:opt-1} directly by enumerating all runs in $\mathbf{R}_{q_1q_H}$. The accuracy of the first method relies on $S_\beta$ being large, which, however, results in a high computational cost. Similarly, the second method relies on evaluating \emph{every} run in~$\mathbf{R}_{q_1q_H}$, the size of which is combinatorial in the size of $\mathcal{A}^t_p$. The following example gives a numerical comparison. \begin{example}\label{example:compare} Assume that $\beta_h^\star=15$ and initially $\beta_0=0$. We use the three methods above, Alg.~\ref{alg:learn-beta}, M1 and M2, to estimate $\beta_h^\star$. As shown in Table~\ref{table:beta-statistics}, we compare the final convergence value $\beta_l^\star$ and the computation time under varying sizes of $\mathcal{A}_p$.
It can be seen that the computation time of Alg.~\ref{alg:learn-beta} is significantly lower than that of M1 and M2; in the second case, M2 fails to converge within $50\si{\minute}$.\hfill $\blacksquare$ \end{example} \section{The Integrated System}\label{sec:complete} In this section, we describe the real-time execution of the integrated system given the components in Section~\ref{sec:blocks}. Then we discuss the computational complexity. \subsection{Human-in-the-loop Motion and Task Planning}\label{subsec:summary} The complete algorithm is shown in Alg.~\ref{alg:complete}. Before the system starts, given the initial model $\mathcal{T}^0$ and the task formulas in~\eqref{eq:ltl-task}, the initial plan~$\tau_r^0$ is synthesized by the algorithm from Section~\ref{subsec:init-syn} under the initial~$\beta_0$. From~$t=0$, the robot executes~$\tau_r^0$ by following the sequence of goal regions, see Lines 1-3. Meanwhile, the operator can directly modify the control input~$u(\cdot)$ via~\eqref{eq:mixed-init} to change the robot's trajectory. Thus, the robot can explore regions that are not in its initial plan and update the model $\mathcal{T}^t$ as described in Section~\ref{subsub:update-ws}. As a result, the plan~$\tau_r^t$ is updated by~\eqref{eq:update-run} accordingly, see Lines 4-5. Moreover, as described in Section~\ref{subsubsec:cont-task}, the operator can assign temporary tasks with deadlines as in~\eqref{eq:temp-task}, for which~$\tau_r^t$ is modified by solving~\eqref{eq:delay}, see Lines 6-7. Last but not least, each time the operator guides the robot to follow a new trajectory, the parameter $\beta$ is updated via Alg.~\ref{alg:learn-beta} to estimate the human preference. Then, the current plan~$\tau_r^t$ is updated using the updated~$\beta$, see Lines 8-12. The above procedure repeats until the system is terminated.
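The $\beta$-learning step invoked in Lines 8-12 can be viewed as a regularized, projected sub-gradient descent on the surrogate cost. The sketch below is our own illustration, not the paper's code: it assumes each candidate run $R$ is summarized by a hypothetical triple of base cost, soft-task violation measure (playing the role of $\boldsymbol{\alpha}_3$), and margin value $M(R,R_h^\star)$; the names `learn_beta`, `runs`, and `human_run` are invented for this sketch.

```python
# Hedged sketch of the beta-learning loop (Alg. 2): projected sub-gradient
# descent on the strongly convex surrogate, with hypothetical run summaries.
def learn_beta(runs, human_run, lam=0.1, step=0.05, eps=1e-4, beta0=0.0,
               max_iter=1000):
    """runs: list of (base_cost, violation, margin) per candidate run;
    human_run: (base_cost, violation) of the human-preferred run R_h^*."""
    d_h, v_h = human_run
    beta = beta0
    for _ in range(max_iter):
        # R_hat minimizes the margin-augmented balanced cost C_beta(R) - M(R, R_h^*)
        d_r, v_r, m = min(runs, key=lambda r: r[0] + beta * r[1] - r[2])
        grad = lam * beta + (v_h - v_r)          # sub-gradient of the surrogate
        new_beta = max(0.0, beta - step * grad)  # project onto beta >= 0
        if abs(new_beta - beta) < eps:           # stopping rule |beta_{k+1}-beta_k| < eps
            return new_beta
        beta = new_beta
    return beta
```

The projection `max(0.0, ...)` reflects the constraint $\beta\geq 0$ in the surrogate problem; when the human-preferred run violates the soft task less than the current optimizer, the sub-gradient is negative and $\beta$ grows, penalizing violation more.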
\begin{theorem}\label{theorem:correctness} Alg.~\ref{alg:complete} above fulfills the three control objectives of Section~\ref{sec:control-objective}, i.e., (I)~$\varphi_{\textup{hard}}$ is satisfied for all time; (II) each~$\varphi_{\textup{temp}}$ is satisfied in finite time; and (III) the satisfaction of $\varphi_{\textup{soft}}$ adapts to the human inputs. \end{theorem} \begin{proof} (I) First, both the initial synthesis algorithm in Section~\ref{subsec:init-syn} and the plan adaptation algorithm in Section~\ref{subsub:update-ws} ensure that $\varphi_{\textup{hard}}$ is satisfied by minimizing the total cost in~\eqref{eq:plan-cost}. Then Lemma~\ref{lemma:iss} ensures that~$\varphi_{\textup{hard}}$ is respected for all possible inputs from the human operator. (II) The combined cost~\eqref{eq:delay} ensures that~$\varphi_{\textup{temp}}$ is satisfied within finite time. (III) Convergence of the learning Alg.~\ref{alg:learn-beta} is shown in Lemma~\ref{lemma:whole}. Thus, the updated plan~$\tau_r^t$ under the learned value of $\beta$ adapts to the plan preferred by the operator. \end{proof} \begin{algorithm}[t] \caption{Mixed-initiative Motion and Task Planning} \label{alg:complete} \LinesNumbered \KwIn{$\mathcal{T}^t$, $\varphi_{\text{hard}}$, $\varphi_{\text{soft}}$, $\beta_0$, $u_h(t)$, $(\varphi^t_{\text{temp}},T_{sg})$} Compute~$\mathcal{A}_p^0$ and construct initial plan~$\tau_r^0$ under $\beta_0$\; \ForAll{$t\geq 0$} { Compute $u_r(\cdot)$ in~\eqref{eq:mixed-init} to reach next $\pi^j\in \tau_r^t$\; \If(\tcp*[f]{Model update}){$\mathcal{T}^t$ updated} {Update product~$\mathcal{A}_p^t$ and plan~$\tau_r^t$ by~\eqref{eq:update-run}\;} \If(\tcp*[f]{Temp.
task}){$(\varphi_{\textup{temp}}^t,T_{sg})$ received} {Update plan~$\tau_r^t$ by solving~\eqref{eq:delay}\;} \If(\tcp*[f]{Human input}){$\|u_h(t)\|>0$} {Compute control~$u(t)$ by~\eqref{eq:mixed-init}\; Compute~$\zeta_h|_0^t$ by~\eqref{eq:human-traj}\; Learn~$\beta_l^\star$ by Alg.~\ref{alg:learn-beta} and set $\beta_t^+=\beta_l^\star$\; Update~$\tau_r^t$ by~\eqref{eq:update-run} given the learned $\beta_t^+$\;} \Return $u(t)$, $\tau_r^t$, $\beta_t^+$ } \end{algorithm} \subsection{Computational Complexity}\label{subsec:complexity} The synthesis of~$\tau_r^t$ given $\mathcal{A}^t_p$ via Alg.~\ref{alg:complete} (in Line~1) and the plan revision given~$\mathcal{T}^t$ (in Line~5) both have complexity~$\mathcal{O}(|\mathcal{A}^t_p|^2)$~\cite{guo2015multi}. The adaptation algorithm for temporary tasks (in Line~7) has complexity~$\mathcal{O}(|R_p^t|^2)$. Lastly, the learning Alg.~\ref{alg:learn-beta} (in Line~11) has complexity~$\mathcal{O}(|R_h^\star|^2)$, where~$|R_h^\star|$ is the length of the optimal run from~\eqref{eq:minimal}. \section{Case Study}\label{sec:case} In this section, we present numerical studies both in simulation and experiment. The Robot Operating System (ROS) is used as the simulation and experiment platform. All algorithms are implemented in Python 2.7 and available online~\cite{mixed-package}. All computations are carried out on a laptop (3.06GHz Duo CPU and 8GB of RAM).
\subsection{Simulation}\label{subsec:simulate} \begin{figure}[t] \centering \includegraphics[width =0.4\textwidth]{figures/sim_ws_new.png} \caption{Office environment in Gazebo with TIAGo robot, where the regions of interest and allowed transitions are marked.} \label{fig:office-sim} \end{figure} \subsubsection{Workspace and Robot Description} Consider the simulated office environment in Gazebo as shown in Fig.~\ref{fig:office-sim} with dimension $100\,\mathrm{m}\times 45\,\mathrm{m}$, in which there are 9 regions of interest (denoted by~$r_0,\cdots,r_8$) and 4 corridors (denoted by $c_1$, $\cdots$, $c_4$). The transition relations are determined by whether there exists a collision-free path from the center of one region to another, without crossing other regions. We simulate the TIAGo robot from PAL Robotics, whose navigation control~$u_r(\cdot)$, including obstacle avoidance, localization and mapping, is based on the ROS navigation stack. The human operator monitors the robot motion through Rviz. Moreover, the control~$u_h(\cdot)$ from the operator can be generated from a keyboard or joystick, while temporary tasks in LTL formulas~$\varphi_{\text{temp}}$ are specified via ROS messages. More details can be found in the software implementation~\cite{mixed-package} and simulation video~\cite{icra18-video}.
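The weighted transition system underlying the workspace model, and the shortest-path queries counted in Table~\ref{table:beta-statistics}, can be sketched as plain Dijkstra search over a region graph. The sketch below is illustrative: the region/corridor names mirror the paper, but the edge weights and the `EDGES`/`dijkstra` helpers are hypothetical.

```python
import heapq

# Hedged sketch: regions and corridors as graph nodes, with hypothetical
# travel-distance weights on transitions that admit a collision-free path.
EDGES = {  # undirected adjacency: node -> [(neighbor, cost), ...]
    "r0": [("c1", 2.0), ("r1", 3.0)],
    "r1": [("r0", 3.0), ("c1", 2.5)],
    "c1": [("r0", 2.0), ("r1", 2.5), ("r2", 4.0)],
    "r2": [("c1", 4.0)],
}

def dijkstra(src, dst):
    """Cheapest travel cost between two regions (the shortest-path
    primitive invoked repeatedly by the planning algorithms)."""
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in EDGES.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")
```

In the actual system this search runs on the much larger product automaton $\mathcal{A}_p^t$ rather than directly on the region graph.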
\begin{figure}[t] \begin{minipage}[t]{0.495\linewidth} \centering \includegraphics[width =1.02\textwidth]{figures/traj_sim_1_zoom_1.pdf} \end{minipage} \begin{minipage}[t]{0.495\linewidth} \centering \includegraphics[width =1.02\textwidth]{figures/traj_sim_1_zoom_2.pdf} \end{minipage} \caption{The robot's trajectory in simulation case One, where the robot's initial plan is in blue, the human-guided part is in red, and the updated plan is in green.} \label{fig:sim-traj-1} \end{figure} \subsubsection{Case One} The hard task for \emph{delivery} is given by~$\varphi_{1,\text{hard}} = \big{(}\square \Diamond (r_0 \wedge \Diamond (r_7 \wedge \Diamond r_8))\big{)} \wedge \big{(}\square \Diamond (r_2 \wedge \Diamond (r_3 \vee r_6))\big{)} \wedge \big{(}\square \neg r_5\big{)}$, i.e., to transfer objects from $r_0$ to $r_7$ (then $r_8$) and from $r_2$ to $r_3$ (or $r_6$), while avoiding $r_5$ for all time. The soft task is $\varphi_{1,\text{soft}} = (\square \neg c_4)$, i.e., to avoid $c_4$ if possible. It took $0.2s$ to compute the parameterized product automaton, which has $312$ states and $1716$ transitions. The parameter $\beta$ is initially set to a large value $30$, thus the initial plan satisfies both the soft and hard tasks but with a large cost due to the long traveling distance, as shown in Fig.~\ref{fig:sim-traj-1}. During~$[700s,950s]$, the operator drives the robot to go through corridor~$c_4$ and reach $r_8$, which violates the soft task~$\varphi_{1,\text{soft}}$. As a result, $\beta$ is updated by Alg.~\ref{alg:learn-beta} and the final value is~$16.35$ after $20$ iterations with~$\varepsilon=0.2$, as shown in Fig.~\ref{fig:sim-beta-1}. Namely, the robot has learned that the operator allows \emph{more violation} of the soft task to reduce the total cost. The resulting updated plan is shown in Fig.~\ref{fig:sim-traj-1}. 
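The effect of the learned $\beta$ in case One can be illustrated with a toy version of the balanced cost: a large $\beta$ weights soft-task violation heavily, while the reduced value learned from the human demonstration lets a cheaper but violating plan win. The two candidate runs and all numbers below are hypothetical, chosen only to mirror the detour-vs-corridor-$c_4$ trade-off.

```python
# Hedged illustration of the balanced cost C_beta = distance + beta * violation
# trading plan length against soft-task violation. Runs and numbers are made up.
runs = {
    "detour_avoids_c4": {"dist": 95.0, "viol": 0.0},  # satisfies the soft task
    "through_c4":       {"dist": 60.0, "viol": 2.0},  # violates the soft task
}

def best_run(beta):
    # Pick the run minimizing the balanced cost under the given beta.
    return min(runs, key=lambda r: runs[r]["dist"] + beta * runs[r]["viol"])
```

With the initial $\beta=30$ the detour wins; with the learned $\beta=16.35$ the shorter run through $c_4$ becomes optimal, matching the qualitative behavior reported above.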
Moreover, to demonstrate the ensured safety in Lemma~\ref{lemma:iss}, the human operator drives the robot towards~$r_5$ during~$[1250s,1350s]$, which is not allowed by $\varphi_{\textup{hard}}$. The weighting function~$\kappa(\cdot)$ in the mixed controller~\eqref{eq:mixed-init} approaches $0$. Thus the robot still follows its updated plan and avoids~$r_5$. The mixed control inputs during these periods are shown in Fig.~\ref{fig:sim-v-1-zoom}. \begin{figure}[t] \centering \includegraphics[width =0.49\textwidth]{figures/sim_control_v_1_zoom.pdf} \caption{The mixed control for linear velocity during two time periods when the human control (in red) is active.} \label{fig:sim-v-1-zoom} \end{figure} \begin{figure}[t] \begin{minipage}[t]{0.495\linewidth} \centering \includegraphics[width =1.02\textwidth]{figures/traj_sim_2_zoom_1.pdf} \end{minipage} \begin{minipage}[t]{0.495\linewidth} \centering \includegraphics[width =1.02\textwidth]{figures/traj_sim_2_zoom_2.pdf} \end{minipage} \caption{The robot's trajectory in simulation case Two. The robot's trajectory while performing the temporary task is in magenta.} \label{fig:sim-traj-2} \end{figure} \begin{figure}[t] \begin{minipage}[t]{0.495\linewidth} \centering \includegraphics[width =1.02\textwidth]{figures/sim_beta_1.pdf} \end{minipage} \begin{minipage}[t]{0.495\linewidth} \centering \includegraphics[width =0.97\textwidth]{figures/sim_beta_2.pdf} \end{minipage} \caption{Evolution of the learned value of $\beta$ in simulation case One (left) and case Two (right).} \label{fig:sim-beta-1} \end{figure} \subsubsection{Case Two} The hard task for \emph{surveillance} is given by~$\varphi_{2,\text{hard}} = (\square \Diamond r_2) \wedge (\square \Diamond r_3)\wedge (\square \Diamond r_8)$, i.e., to surveil regions $r_2$, $r_3$ and $r_8$ infinitely often. 
The soft task for extra performance is $\varphi_{2,\text{soft}} = \square \Diamond \big{(}r_4 \rightarrow (\neg r_5 \mathsf{U} \Diamond r_6)\big{)}$, i.e., to collect goods from region $r_4$ and drop them at $r_6$ (without crossing $r_5$ before that). Moreover, the workspace model in this case is \emph{different} from the initial model in that the corridor $c_2$ has been blocked. By following Alg.~\ref{alg:complete}, it took $0.17s$ to compute the product automaton, which has $418$ states and $3360$ transitions. Initially, $\beta=0$, meaning that the initial plan~$\tau_r^0$ only satisfies~$\varphi_{2,\text{hard}}$ while $\varphi_{2,\text{soft}}$ is fully relaxed. During $[150s, 250s]$, the operator drives the robot to sense that the corridor $c_2$ has been blocked. As a result, the discrete plan~$\tau_r^0$ is updated such that the robot chooses to reach~$r_8$ from $r_2$ via $c_1$, as shown in Fig.~\ref{fig:sim-traj-2}. Afterwards, during~$[1100s,1200s]$, the operator drives the robot to~$r_4$ after reaching $r_2$, which satisfies part of~$\varphi_{2,\text{soft}}$. As a result, $\beta$ is increased by Alg.~\ref{alg:learn-beta} to~$11.3$ after $12$ iterations with~$\varepsilon=0.1$, as shown in Fig.~\ref{fig:sim-beta-1}. Namely, the robot has learned that the soft task should be satisfied \emph{more}. Lastly, at time~$2100s$, the operator assigns a temporary task~$\varphi_{\text{temp}}=\Diamond(r_1 \wedge \Diamond r_7)$ with a deadline $2700s$, i.e., to deliver an object from $r_1$ to $r_7$. This temporary task is incorporated into $\tau_r^t$ and is fulfilled at $2400s$, as shown in Fig.~\ref{fig:sim-traj-2}. \subsection{Experiment}\label{subsec:experiment} The experiment setup involves a TurtleBot within the office environment at the Automatic Control Lab, KTH. Details are omitted here due to limited space; they are given in the software implementation~\cite{mixed-package} and experiment video~\cite{icra18-video}.
\begin{figure}[t] \centering \includegraphics[width =0.49\textwidth]{figures/office_exp.png} \caption{The human-in-the-loop experiment setup, where the robot is controlled by both its autonomous controller and the human inputs.} \label{fig:office-exp} \end{figure} \subsubsection{Workspace and Task Specification} The office environment consists of three office rooms ($r_1$, $r_2$, $r_3$) and one corridor $r_0$, as shown in Fig.~\ref{fig:office-exp}. The robot's task specification is similar to case study Two above, i.e., the hard task is given by $\varphi_{\text{hard}}=\square\Diamond r_0 \wedge \square\Diamond r_1$ (to surveil regions $r_0$ and $r_1$) while the soft task is $\varphi_{\text{soft}}=\square\Diamond r_2 \wedge \square\Diamond r_3$ (to surveil regions $r_2$ and $r_3$). The TurtleBot is controlled via the ROS navigation stack and behaves similarly to the TIAGo robot in Section~\ref{subsec:simulate}. \subsubsection{Experiment Results} Since~$\beta$ is initially set to $0$, the robot only surveils $r_0$ and $r_1$ for the hard task, as shown in Fig.~\ref{fig:exp-traj}. From~$t=59s$, the operator starts driving the robot towards $r_2$ and back to $r_0$ until $t=137s$. As a result, the estimated~$\beta_t$ is updated by Alg.~\ref{alg:learn-beta} given the robot's past trajectory. The final convergence value is~$1.58$ with $\varepsilon=0.01$ after 15 iterations. The updated plan, shown in Fig.~\ref{fig:exp-traj}, intersects not only regions~$r_0$ and $r_1$ for the hard task, but also regions~$r_2$ and $r_3$ for the soft task. Notice that the operator only needs to interfere with the robot's motion for a small fraction of the operation time. \begin{figure}[t] \centering \includegraphics[width =0.49\textwidth]{figures/exp_traj.pdf} \caption{The robot's trajectory in the experiment study, where the robot's initial plan is in blue (left), the human-guided segment is in red (middle), and the updated plan is in green (right).
} \label{fig:exp-traj} \end{figure} \section{Summary and Future Work}\label{sec:future} In this paper, we present a human-in-the-loop task and motion planning strategy for mobile robots with mixed-initiative control. The proposed coordination scheme ensures the satisfaction of high-level LTL tasks given the human initiative both through continuous control inputs and discrete task assignments. Future work includes consideration of multi-robot systems.
\section{Introduction} Multiple-input multiple-output (MIMO) systems \cite{LiStoica2008} have the capacity to transmit independent probing signals or waveforms from each transmit antenna. This waveform diversity leads to many desirable properties for MIMO systems. For example, a modern MIMO radar has many appealing features, like higher spatial resolution, superior moving target detection and better parameter identifiability, compared to the classical phased-array radar \cite{BlissForsythe2003,FishlerHaimovichBlumChizhikCiminiValenzuela2004,ForsytheBlissFawcett2004}. The MIMO transmit beampattern matching problem is critically important in many fields, like defense systems, communication systems, and biomedical applications. This problem is concerned with designing the probing waveforms to approximate a desired antenna array transmit beampattern (i.e., an energy distribution in space and frequency) and also to minimize the cross-correlation of the signals reflected back from various targets of interest, while respecting some practical waveform constraints. The MIMO transmit beampattern matching problem is difficult from an optimization point of view because of the fourth-order nonconvex objective function and the possibly nonconvex waveform constraints, which represent desirable properties and/or are enforced from a hardware implementation perspective \cite{Skolnik1990}. In \cite{FuhrmannSanAntonio2004}, the MIMO transmit beampattern matching problem was formulated to minimize the difference between the designed beampattern and the desired one. The formulation in \cite{FuhrmannSanAntonio2004} was modified in \cite{FuhrmannSanAntonio2008,StoicaLiXie2007} by introducing the cross-correlation between the signals. In \cite{StoicaLiXie2007}, the authors proposed to design the waveform covariance matrix to match the desired beampattern through semidefinite programming.
A closed-form waveform covariance matrix design method was also proposed based on discrete Fourier transform (DFT) coefficients and Toeplitz matrices in \cite{LiporAhmedAlouini2014,BouchouchaAhmedAl-NaffouriAlouini2017}. However, such methods can perform poorly for a small number of antennas. After the waveform covariance matrix is obtained, additional methods must be applied to synthesize a desired waveform from its covariance matrix. For example, a cyclic algorithm was proposed in \cite{StoicaLiZhu2008} to synthesize a constant modulus waveform from its covariance matrix. These methods are usually called two-step methods. In practice, they can become inefficient and suboptimal when more waveform constraints are considered. In \cite{WangWangLiuLuo2012}, it was found that directly designing the waveform to match the desired beampattern can give better performance, which is referred to as the one-step method. However, the method in \cite{WangWangLiuLuo2012} is tailored to the constant modulus constraint and can be slow to converge. In \cite{ChengHeZhangLi2017}, the problem was solved based on the alternating direction method of multipliers (ADMM) \cite{BoydParikhChuPeleatoEckstein2011}. However, again the proposed algorithm is designed only for the unimodulus constraint. The majorization-minimization (MM) method \cite{HunterLange2004,SunBabuPalomar2016} has shown great efficiency in deriving fast and convergent algorithms for nonconvex problems in many different applications \cite{SongBabuPalomar2015,ZhaoPalomar2018}. In this paper, we propose a one-step method to directly solve the MIMO transmit beampattern matching problem based on the MM method, considering different waveform constraints. The performance of our algorithms compared to the existing algorithms is verified through numerical simulations.
\section{MIMO Transmit Beampattern Matching Problem Formulation} A colocated MIMO radar \cite{LiStoica2007} with $M$ transmit antennas in a uniform linear array (ULA), as shown in Fig. \ref{fig:MIMO-radar-transceiver}, is considered. Each transmit antenna can emit a different waveform $x_{m}\left(n\right)$ with $m=1,2,\ldots,M$, $n=1,2,\ldots,N$, where $N$ is the number of samples. Let $\mathbf{x}\left(n\right)=\Bigl[x_{1}\left(n\right),x_{2}\left(n\right),\ldots,x_{M}\left(n\right)\Bigr]^{T}$ be the $n$th sample of the $M$ transmit waveforms and $\mathbf{x}=\Bigl[\mathbf{x}^{T}\left(1\right),$ $\mathbf{x}^{T}\left(2\right),\ldots,\mathbf{x}^{T}\left(N\right)\Bigr]^{T}$ denote the stacked waveform vector. \begin{figure}[t] \centering{}\includegraphics[scale=0.7]{ULA}\caption{\label{fig:MIMO-radar-transceiver}MIMO transceiver with $M$ antennas, where $\theta$ is the spatial direction of interest.} \end{figure} The signal at a target location with angle $\theta$ ($\theta\in\Theta$, which is the angle set) is represented by \[ \sum_{m=1}^{M}e^{-j\pi\left(m-1\right)\sin\theta}x_{m}\left(n\right)=\mathbf{a}^{T}\left(\theta\right)\mathbf{x}\left(n\right),\,n=1,\ldots,N, \] where $\mathbf{a}\left(\theta\right)$ is the transmit steering vector written as $\mathbf{a}\left(\theta\right)=\left[1,e^{-j\pi\sin\theta},\ldots,e^{-j\pi\left(M-1\right)\sin\theta}\right]^{T}$.
Then, the power for the probing signal $\mathbf{x}$ at location $\theta$ which is named the \emph{transmit beampattern} can be written as follows: \[ \begin{aligned} & P\left(\theta,\mathbf{x}\right)\\ = & \sum_{n=1}^{N}\left(\mathbf{a}^{T}\left(\theta\right)\mathbf{x}\left(n\right)\right)^{\ast}\left(\mathbf{a}^{T}\left(\theta\right)\mathbf{x}\left(n\right)\right)\\ = & \left(\left(\mathbf{I}_{N}\otimes\mathbf{a}^{T}\left(\theta\right)\right)\mathbf{x}\right)^{H}\left(\left(\mathbf{I}_{N}\otimes\mathbf{a}^{T}\left(\theta\right)\right)\mathbf{x}\right)\\ = & \mathbf{x}^{H}\left(\mathbf{I}_{N}\otimes\mathbf{a}^{\ast}\left(\theta\right)\mathbf{a}^{T}\left(\theta\right)\right)\mathbf{x}=\mathbf{x}^{H}\mathbf{A}\left(\theta\right)\mathbf{x}, \end{aligned} \] where $\mathbf{A}\left(\theta\right)=\mathbf{I}_{N}\otimes\mathbf{a}^{\ast}\left(\theta\right)\mathbf{a}^{T}\left(\theta\right)$. Suppose there are $\overline{K}$ targets of interest, and then the spatial cross-correlation sidelobes (cross-correlation beampattern) between the probing signals at locations $\theta_{i}$ and $\theta_{j}$ ($i\neq j$, $i,j=1,\ldots,\overline{K}$ and $\theta_{i},\theta_{j}\in\Theta$) is given by \[ \begin{aligned} & P_{cc}\left(\theta_{i},\theta_{j},\mathbf{x}\right)\\ = & \sum_{n=1}^{N}\left(\mathbf{a}^{T}\left(\theta_{i}\right)\mathbf{x}\left(n\right)\right)^{\ast}\left(\mathbf{a}^{T}\left(\theta_{j}\right)\mathbf{x}\left(n\right)\right)\\ = & \left(\left(\mathbf{I}_{N}\otimes\mathbf{a}^{T}\left(\theta_{i}\right)\right)\mathbf{x}\right)^{H}\left(\left(\mathbf{I}_{N}\otimes\mathbf{a}^{T}\left(\theta_{j}\right)\right)\mathbf{x}\right)\\ = & \mathbf{x}^{H}\left(\mathbf{I}_{N}\otimes\mathbf{a}^{\ast}\left(\theta_{i}\right)\mathbf{a}^{T}\left(\theta_{j}\right)\right)\mathbf{x}=\mathbf{x}^{H}\mathbf{A}\left(\theta_{i},\theta_{j}\right)\mathbf{x}, \end{aligned} \] where 
$\mathbf{A}\left(\theta_{i},\theta_{j}\right)=\mathbf{I}_{N}\otimes\mathbf{a}^{\ast}\left(\theta_{i}\right)\mathbf{a}^{T}\left(\theta_{j}\right)$. The objective of the transmit beampattern matching problem is twofold: i) to match a desired transmit beampattern denoted as $p\left(\theta\right)$, which is formulated as\footnote{Variable $\alpha$ is introduced since $p\left(\theta\right)$ is typically given in a \textquotedblleft normalized form\textquotedblright{} and we want to approximate a scaled version of $p\left(\theta\right)$, not $p\left(\theta\right)$ itself.}: \begin{equation} J\left(\alpha,\mathbf{x}\right)=\sum_{\theta\in\Theta}\omega\left(\theta\right)\left|\alpha p\left(\theta\right)-P\left(\theta,\mathbf{x}\right)\right|^{2},\label{eq:beampattern matching} \end{equation} where $\omega\left(\theta\right)\geq0$ is the weight for the direction $\theta$; and ii) to minimize the cross-correlation between the probing signals at a number of given target locations, because the statistical performance of adaptive MIMO radar techniques relies on the cross-correlation beampattern, which is given as \begin{equation} E\left(\mathbf{x}\right)=\sum_{\theta_{i},\theta_{j}\in\overline{\Theta},\,i\neq j}\left|P_{cc}\left(\theta_{i},\theta_{j},\mathbf{x}\right)\right|^{2}.\label{eq:sidelobe term} \end{equation} Then, by considering $J\left(\alpha,\mathbf{x}\right)$ and $E\left(\mathbf{x}\right)$, the MIMO transmit beampattern matching problem is formulated as follows: \begin{equation} \begin{aligned} & \underset{\alpha,\mathbf{x}}{\mathsf{minimize}} & & f\left(\alpha,\mathbf{x}\right)\triangleq J\left(\alpha,\mathbf{x}\right)+\omega_{cc}E\left(\mathbf{x}\right)\\ & \mathsf{subject\:to} & & \mathbf{x}\in\mathcal{X}\triangleq\mathcal{X}_{0}\cap\left(\cap_{i}\mathcal{X}_{i}\right), \end{aligned} \label{eq:problem} \end{equation} where $\omega_{cc}$ controls the sidelobe term, $\mathcal{X}$ generally denotes the waveform constraint, and
$\mathcal{X}_{0}=\left\{ \mathbf{x}\in\mathbb{C}^{MN}\mid\left\Vert \mathbf{x}\right\Vert _{2}^{2}=c_{e}^{2}\right\} $ represents the \textbf{total transmit energy (power) constraint}. We are also interested in other practical waveform constraints: \textbf{i) Constant modulus constraint}: it prevents nonlinear distortion of the power amplifier so as to maximize the efficiency of the transmitter, and is given by $\begin{array}{c} \mathcal{X}_{1}=\left\{ \mathbf{x}\mid\left|x\left(l\right)\right|=c_{d}=\frac{c_{e}}{\sqrt{MN}}\right\} \end{array}$ for $l=1,\ldots,MN$; \textbf{ii) Peak-to-Average Ratio (PAR) constraint}: the PAR is the ratio of the peak signal power to its average power ($\mathrm{PAR}\left(\mathbf{x}\right)=\frac{\max\left|x\left(l\right)\right|^{2}}{\left\Vert \mathbf{x}\right\Vert _{2}^{2}/MN}$ with $1\leq\mathrm{PAR}\left(\mathbf{x}\right)\leq MN$). The $\mathrm{PAR}\left(\mathbf{x}\right)$ is constrained to a small threshold, so that the analog-to-digital and digital-to-analog converters can have a lower dynamic range and fewer linear power amplifiers are needed. Given the energy constraint $\mathcal{X}_{0}$, the PAR constraint is $\mathcal{X}_{2}=\left\{ \mathbf{x}\mid\left|x\left(l\right)\right|\leq c_{p},\frac{c_{e}}{\sqrt{MN}}\leq c_{p}\leq c_{e}\right\} $ for $l=1,\ldots,MN$; \textbf{iii) Similarity constraint}: it allows the designed waveform to lie in the neighborhood of a reference one that already attains good performance \cite{LiGuerciXu2006a}, and is denoted as $\begin{array}{c} \mathcal{X}_{3}=\left\{ \mathbf{x}\mid\left|\mathbf{x}-\mathbf{x}_{\mathrm{ref}}\right|\leq c_{\epsilon},0\leq c_{\epsilon}\leq\frac{2}{\sqrt{MN}}\right\} \end{array}$. Problem \eqref{eq:problem} is a constrained nonconvex problem due to the nonconvex objective and constraints. In the following, we solve it using efficient nonconvex optimization methods.
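As a numerical sanity check on the definitions above (our own sketch, not the paper's code), the sample-wise form $P(\theta,\mathbf{x})=\sum_{n}|\mathbf{a}^{T}(\theta)\mathbf{x}(n)|^{2}$ must agree with the quadratic form $\mathbf{x}^{H}\mathbf{A}(\theta)\mathbf{x}$ with $\mathbf{A}(\theta)=\mathbf{I}_{N}\otimes\mathbf{a}^{\ast}(\theta)\mathbf{a}^{T}(\theta)$:

```python
import numpy as np

# Sketch of the transmit-beampattern quantities for a ULA with M antennas
# and N samples; x(n) in C^M, stacked as x = [x(1); ...; x(N)].
M, N = 4, 8
rng = np.random.default_rng(0)
x = rng.standard_normal((N, M)) + 1j * rng.standard_normal((N, M))  # rows x(n)^T
x_vec = x.reshape(-1)  # stacked waveform vector of length MN

def steering(theta):
    # a(theta) = [1, e^{-j*pi*sin(theta)}, ..., e^{-j*pi*(M-1)*sin(theta)}]^T
    return np.exp(-1j * np.pi * np.arange(M) * np.sin(theta))

def beampattern(theta):
    # Sample-wise form: P(theta, x) = sum_n |a(theta)^T x(n)|^2
    a = steering(theta)
    return float(np.sum(np.abs(x @ a) ** 2))

def beampattern_quadratic(theta):
    # Equivalent quadratic form: x^H (I_N kron a* a^T) x
    a = steering(theta)
    A = np.kron(np.eye(N), np.outer(a.conj(), a))
    return float(np.real(x_vec.conj() @ A @ x_vec))
```

The Kronecker form is convenient for the majorization analysis in the next section, whereas the sample-wise form avoids building the $MN\times MN$ matrix and is what one would compute in practice.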
\section{Problem Solving via The MM Method} \subsection{The Majorization-Minimization (MM) Method} The MM method \cite{HunterLange2004,RazaviyaynHongLuo2013,SunBabuPalomar2016} is a generalization of the well-known EM method. For an optimization problem given by \[ \begin{aligned} & \underset{\mathbf{x}}{\mathsf{minimize}} & & f\left(\mathbf{x}\right)\\ & \mathsf{subject\:to} & & \mathbf{x}\in{\cal X}, \end{aligned} \] instead of dealing with this problem directly which could be difficult, the MM-based algorithm solves a series of simpler subproblems with surrogate functions that majorize $f\left(\mathbf{x}\right)$ over ${\cal X}$. More specifically, starting from an initial point $\mathbf{x}^{\left(0\right)}$, it produces a sequence $\left\{ \mathbf{x}^{\left(k\right)}\right\} $ by the following update rule: \[ \mathbf{x}^{\left(k\right)}\in\arg\min_{\mathbf{x}\in{\cal X}}\:\overline{f}\left(\mathbf{x},\mathbf{x}^{\left(k-1\right)}\right), \] where the surrogate majorizing function $\overline{f}\left(\mathbf{x},\mathbf{x}^{\left(k\right)}\right)$ satisfies \[ \begin{array}{cl} \overline{f}\left(\mathbf{x}^{\left(k\right)},\mathbf{x}^{\left(k\right)}\right)=f\left(\mathbf{x}^{\left(k\right)}\right), & \forall\mathbf{x}^{\left(k\right)}\in{\cal X},\\ \overline{f}\left(\mathbf{x},\mathbf{x}^{\left(k\right)}\right)\geq f\left(\mathbf{x}\right), & \forall\mathbf{x},\mathbf{x}^{\left(k\right)}\in{\cal X},\\ \overline{f}^{\prime}\left(\mathbf{x}^{\left(k\right)},\mathbf{x}^{\left(k\right)};\mathbf{d}\right)=f^{\prime}\left(\mathbf{x}^{\left(k\right)};\mathbf{d}\right), & \forall\mathbf{d},\mbox{\text{ s.t. }}\mathbf{x}^{\left(k\right)}+\mathbf{d}\in{\cal X}. \end{array} \] The objective function value is monotonically nonincreasing at each iteration. To use the MM method, the key step is to find a majorizing function to make the subproblem easy to solve, which will be discussed in the following subsections. 
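The monotone-descent property of MM can be seen in a one-dimensional toy example (our own illustration, not from the paper): for a scalar $f$ whose second derivative is bounded above by $L$, the quadratic surrogate $g(y\mid x)=f(x)+f'(x)(y-x)+\frac{L}{2}(y-x)^{2}$ majorizes $f$ and touches it at $x$, and minimizing the surrogate yields the update $y=x-f'(x)/L$.

```python
import math

# Minimal MM example on a nonconvex scalar function (our own test function).
def f(x):
    return math.cos(x) + 0.1 * x * x

def df(x):
    return -math.sin(x) + 0.2 * x

# f''(x) = -cos(x) + 0.2 <= 1.2, so L = 1.2 gives a valid quadratic majorizer.
L = 1.2

def mm_minimize(x0, iters=200):
    """Run MM updates; return the final iterate and the objective trajectory."""
    vals, x = [f(x0)], x0
    for _ in range(iters):
        x = x - df(x) / L   # closed-form minimizer of the surrogate at x
        vals.append(f(x))
    return x, vals
```

Because each surrogate upper-bounds $f$ and is tight at the current iterate, the objective sequence is nonincreasing, which is exactly the monotonicity property invoked for the beampattern algorithms below.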
\subsection{Majorization Steps For The Beampattern Matching Term $J\left(\alpha,\mathbf{x}\right)$} In this section, we discuss the majorization steps, i.e., how to construct a good majorizing function for the beampattern matching term $J\left(\alpha,\mathbf{x}\right)$ in \eqref{eq:beampattern matching}. First, we have \[ \begin{aligned}J\left(\alpha,\mathbf{x}\right)= & \sum_{\theta\in\Theta}\omega\left(\theta\right)\left|\alpha p\left(\theta\right)-P\left(\theta,\mathbf{x}\right)\right|^{2}\\ = & \alpha^{2}\sum_{\theta\in\Theta}\omega\left(\theta\right)p^{2}\left(\theta\right)-2\alpha\sum_{\theta\in\Theta}\omega\left(\theta\right)p\left(\theta\right)P\left(\theta,\mathbf{x}\right)+\sum_{\theta\in\Theta}\omega\left(\theta\right)\left(P\left(\theta,\mathbf{x}\right)\right)^{2}, \end{aligned} \] which is a quadratic function in variable $\alpha$. Then, it follows that the minimum of $J\left(\alpha,\mathbf{x}\right)$ is attained when \[ \alpha\left(\mathbf{x}\right)=\sum_{\theta\in\Theta}\omega\left(\theta\right)p\left(\theta\right)P\left(\theta,\mathbf{x}\right)/\sum_{\theta\in\Theta}\omega\left(\theta\right)p^{2}\left(\theta\right). 
\] Substituting $\alpha\left(\mathbf{x}\right)$ back into $J\left(\alpha,\mathbf{x}\right)$ and considering \[ P\left(\theta,\mathbf{x}\right)=\mathrm{Tr}\left(\mathbf{x}\mathbf{x}^{H}\mathbf{A}\left(\theta\right)\right)=\mathrm{vec}\left(\mathbf{x}\mathbf{x}^{H}\right)^{H}\mathrm{vec}\left(\mathbf{A}\left(\theta\right)\right), \] we get \[ \begin{aligned}J\left(\mathbf{x}\right)= & \sum_{\theta\in\Theta}\omega\left(\theta\right)\left(\mathrm{vec}\left(\mathbf{x}\mathbf{x}^{H}\right)^{H}\mathrm{vec}\left(\mathbf{A}\left(\theta\right)\right)\right)^{2}-\left(\sum_{\theta\in\Theta}\omega\left(\theta\right)p^{2}\left(\theta\right)\right)^{-1}\\ & \times\left(\mathrm{vec}\left(\mathbf{x}\mathbf{x}^{H}\right)^{H}\right.\mathrm{vec}\Bigl(\sum_{\theta\in\Theta}\omega\left(\theta\right)\left.p\left(\theta\right)\mathbf{A}\left(\theta\right)\Bigr)\right)^{2}\\ = & \mathrm{vec}\left(\mathbf{x}\mathbf{x}^{H}\right)^{H}\mathbf{H}_{J}\mathrm{vec}\left(\mathbf{x}\mathbf{x}^{H}\right), \end{aligned} \] where \[ \begin{aligned}\mathbf{H}_{J}= & \sum_{\theta\in\Theta}\omega\left(\theta\right)\mathrm{vec}\left(\mathbf{A}\left(\theta\right)\right)\mathrm{vec}\left(\mathbf{A}\left(\theta\right)\right)^{H}-\Bigl(\sum_{\theta\in\Theta}\omega\left(\theta\right)p^{2}\left(\theta\right)\Bigr)^{-1}\\ & \times\mathrm{vec}\Bigl(\sum_{\theta\in\Theta}\omega\left(\theta\right)p\left(\theta\right)\mathbf{A}\left(\theta\right)\Bigr)\mathrm{vec}\Bigl(\sum_{\theta\in\Theta}\omega\left(\theta\right)p\left(\theta\right)\mathbf{A}\left(\theta\right)\Bigr)^{H}, \end{aligned} \] and it is easy to see that $J\left(\mathbf{x}\right)$ is a quartic function in $\mathbf{x}$. Next, we introduce a useful lemma. \begin{lem} \label{lem:quadratic majorization} Let $\mathbf{A}\in\mathbb{H}^{K}$ and $\mathbf{B}\in\mathbb{H}^{K}$ such that $\mathbf{B}\succeq\mathbf{A}$. 
At any point $\mathbf{x}_{0}\in\mathbb{C}^{K}$, the quadratic function $\mathbf{x}^{H}\mathbf{A}\mathbf{x}$ is majorized by $\mathbf{x}^{H}\mathbf{B}\mathbf{x}+2\mathrm{Re}\left(\mathbf{x}^{H}\left(\mathbf{A}-\mathbf{B}\right)\mathbf{x}_{0}\right)+\mathbf{x}_{0}^{H}\left(\mathbf{B}-\mathbf{A}\right)\mathbf{x}_{0}$. \end{lem} \begin{IEEEproof} Notice that $\left(\mathbf{x}-\mathbf{x}_{0}\right)^{H}\left(\mathbf{B}-\mathbf{A}\right)\left(\mathbf{x}-\mathbf{x}_{0}\right)\geq0$; expanding the left-hand side and rearranging terms gives the claimed upper bound, which is tight at $\mathbf{x}=\mathbf{x}_{0}$. \end{IEEEproof} Based on Lemma \ref{lem:quadratic majorization}, we can choose $\psi_{J,1}\geq\lambda_{\mathrm{max}}\left(\mathbf{H}_{J}\right)$, so that $\psi_{J,1}\mathbf{I}\succeq\mathbf{H}_{J}$, and at iterate $\mathbf{x}^{\left(t\right)}$ we have \[ \begin{aligned}J\left(\mathbf{x}\right)\leq & \psi_{J,1}\mathrm{vec}\left(\mathbf{x}\mathbf{x}^{H}\right)^{H}\mathrm{vec}\left(\mathbf{x}\mathbf{x}^{H}\right)\\ & +2\mathrm{Re}\left(\mathrm{vec}\left(\mathbf{x}\mathbf{x}^{H}\right)^{H}\left(\mathbf{H}_{J}-\psi_{J,1}\mathbf{I}\right)\mathrm{vec}\left(\mathbf{x}^{\left(t\right)}\mathbf{x}^{\left(t\right)H}\right)\right)\\ & +\mathrm{vec}\left(\mathbf{x}^{\left(t\right)}\mathbf{x}^{\left(t\right)H}\right)^{H}\left(\psi_{J,1}\mathbf{I}-\mathbf{H}_{J}\right)\mathrm{vec}\left(\mathbf{x}^{\left(t\right)}\mathbf{x}^{\left(t\right)H}\right), \end{aligned} \] where, since $\mathrm{vec}\left(\mathbf{x}\mathbf{x}^{H}\right)^{H}\mathrm{vec}\left(\mathbf{x}\mathbf{x}^{H}\right)=\left\Vert \mathbf{x}\right\Vert _{2}^{4}=c_{e}^{4}$, the first term is just a constant. After ignoring the constant terms, we get the following majorizing function for $J\left(\mathbf{x}\right)$: \[ \overline{J}_{1}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)\simeq2\mathrm{Re}\left(\mathrm{vec}\left(\mathbf{x}\mathbf{x}^{H}\right)^{H}\left(\mathbf{H}_{J}-\psi_{J,1}\mathbf{I}\right)\mathrm{vec}\left(\mathbf{x}^{\left(t\right)}\mathbf{x}^{\left(t\right)H}\right)\right), \] where ``$\simeq$'' denotes equality up to additive constants. Substituting $\mathbf{H}_{J}$ back into $\overline{J}_{1}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)$ and dropping the constants, we have \begin{equation} \overline{J}_{1}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)\simeq2\mathbf{x}^{H}\left(\mathbf{M}_{J}-\psi_{J,1}\mathbf{x}^{\left(t\right)}\mathbf{x}^{\left(t\right)H}\right)\mathbf{x}, \end{equation} where $\mathbf{M}_{J}=\sum_{\theta\in\Theta}\omega\left(\theta\right)\left(P\left(\theta,\mathbf{x}^{\left(t\right)}\right)-p\left(\theta\right)\alpha\left(\mathbf{x}^{\left(t\right)}\right)\right)\mathbf{A}\left(\theta\right)$. After this majorization step, the majorizing function $\overline{J}_{1}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)$ is quadratic in $\mathbf{x}$, whereas $J\left(\mathbf{x}\right)$ is quartic. However, minimizing this function is still hard due to the waveform constraint ${\cal X}$.\footnote{It is an NP-hard unimodular quadratic program even when only $\mathcal{X}_{1}$ is considered.} So we propose to majorize $\overline{J}_{1}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)$ again to simplify the problem solved in each iteration. To this end, we choose $\psi_{J,2}\geq\lambda_{\mathrm{max}}\left(\mathbf{M}_{J}\right)\geq\lambda_{\mathrm{max}}\left(\mathbf{M}_{J}-\psi_{J,1}\mathbf{x}^{\left(t\right)}\mathbf{x}^{\left(t\right)H}\right)$, for which the following property is useful. \begin{lem} \label{lem:Hermitian Toeplitz}\cite{JorgeFerreira1994,HornJohnson1990} Define \[ \begin{aligned}\mathbf{B}= & \sum_{\theta\in\Theta}\omega\left(\theta\right)\Bigl(P\left(\theta,\mathbf{x}^{\left(t\right)}\right)-p\left(\theta\right)\alpha\left(\mathbf{x}^{\left(t\right)}\right)\Bigr)\mathbf{a}^{\ast}\left(\theta\right)\mathbf{a}^{T}\left(\theta\right)\\ = & \left[\begin{array}{cccc} b_{0} & b_{1}^{\ast} & \cdots & b_{M-1}^{\ast}\\ b_{1} & b_{0} & \ddots & \vdots\\ \vdots & \ddots & \ddots & b_{1}^{\ast}\\ b_{M-1} & \ldots & b_{1} & b_{0} \end{array}\right], \end{aligned} \] which is Hermitian Toeplitz, let $\mathbf{F}$ be the $2M\times2M$ FFT matrix, and let $\mathbf{b}=\left[b_{0},b_{1},\ldots,b_{M-1},0,b_{M-1}^{\ast},\ldots,b_{1}^{\ast}\right]^{T}$. Then we have $\mathbf{M}_{J}=\mathbf{I}_{N}\otimes\mathbf{B}$, $\lambda_{\mathrm{max}}\left(\mathbf{M}_{J}\right)=\lambda_{\mathrm{max}}\left(\mathbf{B}\right)$, and \[ \lambda_{\mathrm{max}}\left(\mathbf{B}\right)\leq\lambda_{\mu}=\frac{1}{2}\left(\underset{1\leq i\leq M}{\max}\mu_{2i}+\underset{1\leq i\leq M}{\max}\mu_{2i-1}\right), \] where $\boldsymbol{\mu}=\mathbf{F}\mathbf{b}$ is the discrete Fourier transform of $\mathbf{b}$.
\end{lem} Lemma \ref{lem:Hermitian Toeplitz} provides an easy way for the computation of $\psi_{J,2}.$ Based on Lemma \ref{lem:quadratic majorization} and using $\psi_{J,2}=\lambda_{\mu}$, the majorizing function $\overline{J}_{1}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)$ can be further majorized as \[ \begin{aligned}\overline{J}_{1}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)\leq & 2\psi_{J,2}\mathbf{x}^{H}\mathbf{x}+4\mathrm{Re}\Bigl(\mathbf{x}^{H}\Bigl(\mathbf{M}_{J}-\psi_{J,1}\mathbf{x}^{\left(t\right)}\mathbf{x}^{\left(t\right)H}-\psi_{J,2}\mathbf{I}\Bigr)\mathbf{x}^{\left(t\right)}\Bigr)\\ & +2\mathbf{x}^{\left(t\right)H}\left(\psi_{J,2}\mathbf{I}-\mathbf{M}_{J}+\psi_{J,1}\mathbf{x}^{\left(t\right)}\mathbf{x}^{\left(t\right)H}\right)\mathbf{x}^{\left(t\right)}, \end{aligned} \] where since $\left\Vert \mathbf{x}\right\Vert _{2}^{2}=c_{e}^{2}$, the first term is a constant. Then by ignoring the constant terms, the objective becomes a linear majorizing function at iterate $\mathbf{x}^{\left(t\right)}$ as follows: \begin{equation} \overline{J}_{2}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)\simeq-4\mathrm{Re}\left(\mathbf{x}^{H}\mathbf{y}_{J}\right), \end{equation} where $\mathbf{y}_{J}=-\left(\mathbf{M}_{J}-c_{e}^{2}\psi_{J,1}\mathbf{I}-\psi_{J,2}\mathbf{I}\right)\mathbf{x}^{\left(t\right)}$. \subsection{Majorization Steps For The Sidelobe Term $E\left(\mathbf{x}\right)$} To deal with the sidelobe term $E\left(\mathbf{x}\right)$ in \eqref{eq:sidelobe term}, the majorization steps are similar to $J\left(\mathbf{x}\right)$. 
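Before carrying out the analogous steps for $E\left(\mathbf{x}\right)$, we note that the bound of Lemma \ref{lem:Hermitian Toeplitz} is easy to check numerically. The sketch below (random illustrative coefficients, not part of the algorithm) builds a Hermitian Toeplitz $\mathbf{B}$, forms the length-$2M$ vector $\mathbf{b}$, and verifies $\lambda_{\mathrm{max}}\left(\mathbf{B}\right)\leq\lambda_{\mu}$:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 10
# Random first column of a Hermitian Toeplitz matrix B (b_0 must be real).
b = rng.standard_normal(M) + 1j * rng.standard_normal(M)
b[0] = b[0].real
# B[i, j] = b_{i-j}, with b_{-k} = conj(b_k).
B = np.empty((M, M), dtype=complex)
for i in range(M):
    for j in range(M):
        B[i, j] = b[i - j] if i >= j else np.conj(b[j - i])
# Circulant-embedding vector [b_0, ..., b_{M-1}, 0, b*_{M-1}, ..., b*_1]
# and its DFT; mu is real since the embedding is conjugate-symmetric.
b_ext = np.concatenate([b, [0.0], np.conj(b[:0:-1])])
mu = np.fft.fft(b_ext).real
# Half-sum of the even- and odd-indexed DFT maxima bounds lambda_max(B).
bound = 0.5 * (mu[0::2].max() + mu[1::2].max())
lam_max = np.linalg.eigvalsh(B)[-1]
assert lam_max <= bound + 1e-9
```

Computing $\lambda_{\mu}$ costs only one length-$2M$ FFT, instead of a full eigendecomposition of $\mathbf{B}$.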
First, we have \[ \begin{aligned}E\left(\mathbf{x}\right)= & \sum_{\theta_{i},\theta_{j}\in\overline{\Theta},\,i\neq j}\left|P_{cc}\left(\theta_{i},\theta_{j},\mathbf{x}\right)\right|^{2}\\ = & \mathrm{vec}\left(\mathbf{x}\mathbf{x}^{H}\right)^{H}\mathbf{H}_{E}\mathrm{vec}\left(\mathbf{x}\mathbf{x}^{H}\right), \end{aligned} \] where $\mathbf{H}_{E}=\sum_{\theta_{i},\theta_{j}\in\overline{\Theta},\,i\neq j}\mathrm{vec}\left(\mathbf{A}\left(\theta_{i},\theta_{j}\right)\right)\mathrm{vec}\left(\mathbf{A}\left(\theta_{i},\theta_{j}\right)\right)^{H}$. Then, based on Lemma \ref{lem:quadratic majorization}, by choosing $\psi_{E,1}\geq\lambda_{\mathrm{max}}\left(\mathbf{H}_{E}\right)$ and $\psi_{E,2}\geq\lambda_{\mathrm{max}}\left(\mathbf{M}_{E}-\psi_{E,1}\mathbf{x}^{\left(t\right)}\mathbf{x}^{\left(t\right)H}\right)$, we can get the majorizing functions at iterate $\mathbf{x}^{\left(t\right)}$ written as follows: \begin{equation} \begin{aligned}\overline{E}_{1}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)\simeq & 2\mathbf{x}^{H}\left(\mathbf{M}_{E}-\psi_{E,1}\mathbf{x}^{\left(t\right)}\mathbf{x}^{\left(t\right)H}\right)\mathbf{x}\\ \leq & \overline{E}_{2}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)\\ \simeq & -4\mathrm{Re}\left(\mathbf{x}^{H}\mathbf{y}_{E}\right), \end{aligned} \end{equation} where $\mathbf{M}_{E}=\sum_{\theta_{i},\theta_{j}\in\overline{\Theta},\,i\neq j}P_{cc}\left(\theta_{j},\theta_{i},\mathbf{x}^{\left(t\right)}\right)\mathbf{A}\left(\theta_{i},\theta_{j}\right)$ and $\mathbf{y}_{E}=-\left(\mathbf{M}_{E}-c_{e}^{2}\psi_{E,1}\mathbf{I}-\psi_{E,2}\mathbf{I}\right)\mathbf{x}^{\left(t\right)}$. 
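All of the bounds above rest on Lemma \ref{lem:quadratic majorization}. As a quick numerical sanity check of that lemma (random Hermitian matrices, purely illustrative; we take $\mathbf{B}=\lambda_{\mathrm{max}}\left(\mathbf{A}\right)\mathbf{I}$ so that $\mathbf{B}\succeq\mathbf{A}$, mirroring the choices of $\psi_{J,1}$ and $\psi_{E,1}$):

```python
import numpy as np

rng = np.random.default_rng(1)
K = 6
# Random Hermitian A and a dominating B = lambda_max(A) * I, so B - A >= 0.
G = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))
A = (G + G.conj().T) / 2
B = np.linalg.eigvalsh(A)[-1] * np.eye(K)

def quad(x):
    # x^H A x (real-valued since A is Hermitian)
    return (x.conj() @ A @ x).real

def majorizer(x, x0):
    # Lemma 1 upper bound; equals quad(x) at x = x0
    return ((x.conj() @ B @ x).real
            + 2 * (x.conj() @ (A - B) @ x0).real
            + (x0.conj() @ (B - A) @ x0).real)

x0 = rng.standard_normal(K) + 1j * rng.standard_normal(K)
for _ in range(100):
    x = rng.standard_normal(K) + 1j * rng.standard_normal(K)
    assert quad(x) <= majorizer(x, x0) + 1e-9
assert abs(quad(x0) - majorizer(x0, x0)) < 1e-9
```

The difference of the two sides is exactly $\left(\mathbf{x}-\mathbf{x}_{0}\right)^{H}\left(\mathbf{B}-\mathbf{A}\right)\left(\mathbf{x}-\mathbf{x}_{0}\right)\geq0$, which is why the bound holds and touches at $\mathbf{x}_{0}$.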
\subsection{Solving The Majorized Subproblem in MM} By combining the two majorizing functions $\overline{J}_{2}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)$ and $\overline{E}_{2}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)$, the overall majorizing function at iterate $\mathbf{x}^{\left(t\right)}$ for the objective $f\left(\mathbf{x}\right)$ is given as follows: \[ \begin{aligned}f\left(\mathbf{x}\right)\leq & \overline{f}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)\\ = & \overline{J}_{2}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)+\omega_{cc}\overline{E}_{2}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)\\ \simeq & -4\mathrm{Re}\left(\mathbf{x}^{H}\mathbf{y}_{J}\right)-4\omega_{cc}\mathrm{Re}\left(\mathbf{x}^{H}\mathbf{y}_{E}\right)\\ = & -\mathrm{Re}\left(\mathbf{x}^{H}\mathbf{y}\right), \end{aligned} \] where \[ \begin{aligned}\mathbf{y}= & -4\left(\mathbf{M}_{J}+\omega_{cc}\mathbf{M}_{E}-c_{e}^{2}\left(\psi_{J,1}+\omega_{cc}\psi_{E,1}\right)\mathbf{I}\right.\\ & \left.-\left(\psi_{J,2}+\omega_{cc}\psi_{E,2}\right)\mathbf{I}\right)\mathbf{x}^{\left(t\right)}. \end{aligned} \] Finally, by majorizing the objective function in \eqref{eq:problem} using the MM method, the subproblem we need to solve at each iteration is given as follows: \begin{equation} \begin{aligned} & \mathrm{minimize}_{\mathbf{x}} & & \overline{f}\left(\mathbf{x},\mathbf{x}^{\left(t\right)}\right)\simeq-\mathrm{Re}\left(\mathbf{x}^{H}\mathbf{y}\right)\\ & \mathrm{subject\:to} & & \mathbf{x}\in{\cal X}. \end{aligned} \label{eq:subproblem} \end{equation} For problem \eqref{eq:subproblem}, closed-form optimal solutions $\mathbf{x}^{\star}$ can be derived for the different waveform constraints of interest, which are summarized in the following lemma.
\begin{lem} \label{lem:closed-form solution}\textbf{i)} For the fixed energy constraint (i.e., ${\cal X}={\cal X}_{0}$), $\mathbf{x}^{\star}=c_{e}\mathbf{y}/\left\Vert \mathbf{y}\right\Vert _{2}$; \textbf{ii)} for the constant modulus constraint (i.e., ${\cal X}={\cal X}_{1}$), $\mathbf{x}^{\star}=c_{d}e^{j\arg\left(\mathbf{y}\right)}$;\footnote{The operation $\arg\left(\mathbf{y}\right)$ is applied element-wise to $\mathbf{y}$.} \textbf{iii)} for the fixed energy with PAR constraint (i.e., ${\cal X}={\cal X}_{0}\cap{\cal X}_{2}$), the solution $\mathbf{x}^{\star}$ can be found in \cite[Alg. 2]{TroppDhillonHeathStrohmer2005}; \textbf{iv)} for the constant modulus with similarity constraint (i.e., ${\cal X}={\cal X}_{1}\cap{\cal X}_{3}$), the solution $\mathbf{x}^{\star}$ can be found in \cite{ZhaoPalomar2017}. \end{lem} \subsection{The MM-Based Beampattern Matching Algorithm} Based on the MM method, to solve the original problem \eqref{eq:problem} we only need to iteratively solve the subproblem \eqref{eq:subproblem}, updating the solution in closed form via Lemma \ref{lem:closed-form solution} at each iteration. The overall algorithm is summarized as follows. \noindent\fbox{\begin{minipage}[t]{1\columnwidth - 2\fboxsep - 2\fboxrule}% \textbf{Input:} $\mathbf{a}\left(\theta\right)$, $p\left(\theta\right)$,\textbf{ }$\mathbf{x}^{\left(0\right)}$ and $t=0$. \textbf{Repeat} $\:\:$1. Compute $\mathbf{M}_{J}$, $\mathbf{M}_{E},$ $\psi_{J,1}$, $\psi_{E,1}$, $\psi_{J,2}$, $\psi_{E,2}$ and $\mathbf{y}$; $\:\:$2. Update $\mathbf{x}^{\left(t+1\right)}$ in closed form according to Lemma \ref{lem:closed-form solution}; $\:\:$3. $t=t+1$; \textbf{Until} $\mathbf{x}$ and $f\left(\mathbf{x}\right)$ satisfy a termination criterion. \textbf{Output:} $\alpha$, $\mathbf{x}$.% \end{minipage}} \section{Numerical Simulations\label{sec:Numerical-Simulations}} The performance of the proposed algorithm for MIMO transmit beampattern matching is evaluated by numerical simulations.
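The per-iteration updates of the algorithm reduce to simple projections for the two simplest constraint sets. A minimal sketch of cases i) and ii) of Lemma \ref{lem:closed-form solution} (the array size and the random vectors are illustrative):

```python
import numpy as np

def update_energy(y, c_e):
    """Case i): maximize Re(x^H y) subject to ||x||_2 = c_e."""
    return c_e * y / np.linalg.norm(y)

def update_modulus(y, c_d):
    """Case ii): maximize Re(x^H y) subject to |x_i| = c_d for all i."""
    return c_d * np.exp(1j * np.angle(y))

rng = np.random.default_rng(2)
y = rng.standard_normal(8) + 1j * rng.standard_normal(8)
x_en = update_energy(y, c_e=1.0)
x_cm = update_modulus(y, c_d=1.0)
assert np.isclose(np.linalg.norm(x_en), 1.0)
assert np.allclose(np.abs(x_cm), 1.0)
# Each update maximizes Re(x^H y) over its feasible set; compare against a
# random feasible competitor.
z = rng.standard_normal(8) + 1j * rng.standard_normal(8)
assert (x_en.conj() @ y).real >= ((z / np.linalg.norm(z)).conj() @ y).real - 1e-9
assert (x_cm.conj() @ y).real >= (np.exp(1j * np.angle(z)).conj() @ y).real - 1e-9
```

Case i) follows from the Cauchy-Schwarz inequality, and case ii) from maximizing $\mathrm{Re}\left(x_{i}^{\ast}y_{i}\right)$ element-wise.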
A colocated MIMO radar system is considered with a ULA comprising $M=10$ antennas with half-wavelength spacing between adjacent antennas. Without loss of generality, the total transmit power is set to $c_{e}^{2}=1$. Each transmit pulse has $N=32$ samples. The angle range is $\Theta=\left(-90^{\circ},90^{\circ}\right)$ with a spacing of $1^{\circ}$, the weight is $\omega\left(\theta\right)=1$ for $\theta\in\Theta$, and $\omega_{cc}=0$, which is the same setting as in \cite{ChengHeZhangLi2017}. We consider a desired beampattern with three targets or mainlobes ($K=3$) at $\theta_{1}=-40^{\circ}$, $\theta_{2}=0^{\circ}$, and $\theta_{3}=40^{\circ}$, each of width $\triangle\theta=20^{\circ}$. The desired beampattern is \[ p\left(\theta\right)=\begin{cases} 1, & \theta\in\left[\theta_{k}-\triangle\theta/2,\theta_{k}+\triangle\theta/2\right],\,k=1,2,3\\ 0, & \mathrm{otherwise}. \end{cases} \] We compare the convergence of the objective function over iterations for the beampattern matching problem under the unimodulus waveform constraint using the proposed MM-based algorithm (denoted as MM-based algorithm (prop.)) and the ADMM-based algorithm in \cite{ChengHeZhangLi2017} (denoted as ADMM-based algorithm), which is shown in Fig. \ref{fig:Convergence-comparison-for}. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.43]{convergence} \par\end{centering} \centering{}\caption{\label{fig:Convergence-comparison-for}Convergence comparison for the objective function value.} \end{figure} As shown in Fig. \ref{fig:Convergence-comparison-for}, the MM-based algorithm exhibits monotonic convergence and converges within $20$ iterations, faster than the benchmark algorithm.
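The desired beampattern of this setup can be assembled as follows (a sketch; the exact grid endpoints for the open interval $\left(-90^{\circ},90^{\circ}\right)$ and the inclusive mainlobe edges are our assumptions):

```python
import numpy as np

theta = np.arange(-89, 90)             # 1-degree grid over (-90, 90)
centers, width = [-40, 0, 40], 20      # K = 3 mainlobes, each 20 deg wide
p = np.zeros_like(theta, dtype=float)
for t_k in centers:
    p[(theta >= t_k - width / 2) & (theta <= t_k + width / 2)] = 1.0
omega = np.ones_like(theta, dtype=float)  # uniform weights omega(theta) = 1
assert p[theta == 0][0] == 1.0 and p[theta == 60][0] == 0.0
assert int(p.sum()) == 63              # 21 grid points per mainlobe
```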
Then, we also compare the matching performance of the designed beampatterns in terms of the mean-squared error (MSE) defined as \[ \text{MSE}\left(P\left(\theta,\mathbf{x}\right)\right)=\mathbb{E}\left[\sum_{\theta\in\Theta}\omega\left(\theta\right)\left|\alpha p\left(\theta\right)-P\left(\theta,\mathbf{x}\right)\right|^{2}\right]. \] In Fig. \ref{fig:beampattern-matching}, we show the simulation results for $\text{MSE}\left(P\left(\theta,\mathbf{x}\right)\right)$ obtained with the different design methods. \begin{figure}[H] \begin{centering} \includegraphics[scale=0.43]{bm} \par\end{centering} \caption{\label{fig:beampattern-matching}Transmit beampattern design with $3$ targets.} \end{figure} From Fig. \ref{fig:beampattern-matching}, we can see that, compared to the benchmark, the proposed algorithm achieves a tighter match and a lower MSE, which validates the proposed design. \section{Conclusions\label{sec:Conclusions}} This paper has considered the MIMO transmit beampattern matching problem. Efficient algorithms have been proposed based on the MM method. Numerical simulations show that the proposed algorithms are efficient in solving the beampattern matching problem and obtain better performance compared to the state-of-the-art method. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:intro} The rotational motion of a triaxially deformed nucleus has long been an interesting subject in the field of nuclear structure~\cite{BM75}. Although triaxial deformation is rarely realized in the ground states of nuclei~\cite{Mol06}, it is expected more frequently at high-spin states; see, e.g., Refs.~\cite{VDS83,Fra01,Pan11} and references therein. When the nuclear mean-field is triaxially deformed, collective rotation about all three principal axes is possible, and therefore the total angular-momentum vector may tilt away from all three principal axes. Then the quantized rotational spectrum of the triaxial rigid-rotor emerges, which is called nuclear wobbling~\cite{BM75}. Such rotational bands were sought for a long time and finally identified first in $^{163}$Lu~\cite{Ode01}; see, e.g., Refs.~\cite{NMM16,Fra17} for recent theoretical review articles. This wobbling motion has been investigated in the first part of the present study~\cite{SFTSI}; we refer to it as part~I hereafter. Another specific rotational motion expected in a triaxially deformed nucleus is the appearance of chiral doublet bands, predicted for the first time in Ref.~\cite{FM97}, where the tilting of the angular-momentum vector is caused by degrees of freedom other than the collective rotation; see, e.g., Ref.~\cite{SK17} for a recent review. In a typical example, an odd-odd nucleus in which an odd proton particle and an odd neutron hole occupy high-$j$ intruder orbitals, the odd-particle angular-momentum aligns along the short axis and the odd-hole angular-momentum along the long axis, because such alignments maximize the overlap of the wave function of the aligned particle or hole with the triaxial density distribution of the core. If the three moments of inertia of the core are in irrotational-like ordering, the collective angular-momentum aligns along the medium axis, which has the largest moment of inertia.
At moderately high-spin states, where all three kinds of angular momenta are sizable, these three vectors are aplanar, and the chiral symmetry between right- and left-handedness is broken in such a system. It is then expected that a pair of nearly degenerate ${\mit\Delta}I=1$ rotational bands appears as a result of breaking this symmetry. In Ref.~\cite{KSH04}, it is discussed that characteristic patterns are expected for the electromagnetic transition rates, $B(E2)$ and $B(M1)$, in this prototype situation with broken chiral symmetry. These interesting types of rotational motion, characteristic of triaxially deformed nuclei, have been investigated mainly with phenomenological models such as the triaxial rotor~\cite{BM75} or the particle-hole coupled to a triaxial rotor~\cite{FM97}. Here we study such rotational motion by employing a fully microscopic framework, in which the nuclear wave function is constructed from the triaxially deformed mean-field and the broken rotational symmetry is recovered by angular-momentum projection; see, e.g., Ref.~\cite{RS80}. With the projection method, the regular rotational spectrum is naturally obtained. Full 3D projection from the mean-field wave function should be performed for triaxially deformed nuclei, so an efficient method is necessary. We have developed such a method in Ref.~\cite{TS12}, and applied it to the study of nuclear tetrahedral deformation~\cite{TSD13,TSD15}, the $\gamma$-vibration~\cite{TS16}, and the ground-state rotational bands~\cite{STS15,STS16} in rare earth nuclei. In this second part of the present investigation we employ the same method to study the chiral doublet bands for the case where the prototype considered in Ref.~\cite{KSH04} is realized. It should be mentioned that chiral doublet bands have been studied by a similar microscopic approach, the triaxial projected shell model, for the first time in Ref.~\cite{BSP12}; see Refs.~\cite{Sun16,SBD16} for recent review articles.
Those authors successfully reproduce the experimental data. The purpose of the present work is not to reproduce experimental data, but rather to understand how the chiral doublet bands appear and how the ideal chiral geometry is reflected in observable quantities such as the electromagnetic transition rates. We believe that such an investigation is meaningful for a deeper comprehension of the rotational motion of triaxially deformed nuclei from the microscopic viewpoint. The paper is organized as follows. We briefly recapitulate our formulation in Sec.~\ref{sec:formulation}, where only the mathematical expressions necessary for the discussion of the present study are included. More detailed content is presented in part~I~\cite{SFTSI}. The possible occurrence of chiral doublet bands and the properties of their electromagnetic transition probabilities are studied for the $^{128}$Cs and $^{104}$Rh nuclei in Sec.~\ref{sec:chiral}. Finally, the results of the present study are summarized in Sec.~\ref{sec:summary}. Preliminary results were already published in Ref.~\cite{TSF14}. \section{Basic Formulation} \label{sec:formulation} In this series of works, we study the collective rotation of triaxially deformed nuclei with the microscopic angular-momentum-projection method.
The quantum eigenstates of a rotational band are obtained as \begin{equation} |\Psi_{M\alpha}^{I}\rangle = \sum_{K} g_{K,\alpha}^{I}\, \hat P_{MK}^I|\Phi \rangle \label{eq:wfProj} \end{equation} from the mean-field state $|\Phi \rangle$, where the angular-momentum projector is denoted by $\hat P_{MK}^I$ and the amplitude $g^I_{K,\alpha}$ is determined by the so-called Hill-Wheeler equation, see, e.g., Ref.~\cite{RS80}; \begin{equation} \sum_{K'}{\cal H}^I_{K,K'}\ g^I_{K',\alpha} = E^I_\alpha\, \sum_{K'}{\cal N}^I_{K,K'}\ g^I_{K',\alpha}, \label{eq:HW} \end{equation} with the definition of the Hamiltonian and norm kernels, \begin{equation} \left\{ \begin{array}{c} {\cal H}^I_{K,K'} \\ {\cal N}^I_{K,K'} \end{array} \right\} = \langle \Phi | \left\{ \begin{array}{c} \hat H \\ 1 \end{array} \right\} \hat{P}_{KK'}^I | \Phi \rangle. \label{eq:kernels} \end{equation} To investigate how the interesting types of rotational motion appear and what properties they have, it is preferable to be able to change the mean-field parameters, e.g., the deformation parameters, arbitrarily. Therefore, we employ a model Hamiltonian $\hat{H}$ composed of the phenomenological Woods-Saxon potential and a schematic separable-type interaction, which has also been utilized in Refs.~\cite{TS12,TSD13}. Its precise form is given in part~I and we do not repeat it here. Once the projected wave function in Eq.~(\ref{eq:wfProj}) is obtained, it is straightforward to calculate the electromagnetic transition probabilities~\cite{RS80}. No effective charge is used for the calculation of $B(E2)$ because the full model space is employed without any kind of core. The effective spin $g$-factor of $0.7\times g_{s,{\rm free}}$ is adopted for both neutrons and protons for the calculation of $B(M1)$. In this way there is no ambiguity in the calculation of these transition probabilities.
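Equation (\ref{eq:HW}) is a generalized Hermitian eigenvalue problem, and can be solved through the square-root matrix of the norm kernel, the same matrix used later to orthonormalize the amplitudes. A minimal numerical sketch (the small random kernels are purely illustrative; in realistic applications ${\cal N}^I$ can be nearly singular, and its small eigenvalues must be cut off):

```python
import numpy as np

rng = np.random.default_rng(3)
K = 5
# Illustrative Hermitian Hamiltonian kernel and positive-definite norm kernel.
G = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))
H = (G + G.conj().T) / 2
L = rng.standard_normal((K, K)) + 1j * rng.standard_normal((K, K))
N = L @ L.conj().T + 0.1 * np.eye(K)

# Square root of the norm kernel and its inverse via eigendecomposition.
w, V = np.linalg.eigh(N)
N_half = V @ np.diag(np.sqrt(w)) @ V.conj().T
N_ihalf = V @ np.diag(1.0 / np.sqrt(w)) @ V.conj().T

# Standard eigenproblem for the transformed Hamiltonian N^{-1/2} H N^{-1/2}.
E, U = np.linalg.eigh(N_ihalf @ H @ N_ihalf)
g = N_ihalf @ U          # columns: amplitudes g^I_{K,alpha}
f = N_half @ g           # orthonormalized amplitudes f = sqrt(N) g

# Check: H g = E N g for every state, and the f amplitudes are orthonormal.
assert np.allclose(H @ g, N @ g @ np.diag(E))
assert np.allclose(f.conj().T @ f, np.eye(K))
```

The transformation makes the orthonormality of the $f$ amplitudes automatic, since $f=\sqrt{{\cal N}}\,g$ coincides with the unitary eigenvector matrix of the transformed problem.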
The product-type mean-field wave function with pairing correlations, $|\Phi \rangle$ in Eq.~(\ref{eq:wfProj}), is generated by the mean-field Hamiltonian $\hat{h}_{\rm mf}$ composed of the deformed Woods-Saxon potential and the monopole-type pairing potential, where the pairing potential has the form factor of the derivative of the Woods-Saxon potential; see part~I for details. The deformation in the body-fixed frame is specified with respect to the equi-potential surface at the half depth of the Woods-Saxon potential with the usual radius parameterization, \begin{equation} R(\theta,\varphi)= R_0 \,c_v(\{\alpha\}) \bigg[ 1+\sum_{\lambda\mu}\alpha^*_{\lambda\mu} Y_{\lambda\mu}^{}(\theta,\varphi) \bigg], \label{eq:surf} \end{equation} with the quantity $c_v(\{\alpha\})$ that guarantees the volume-conservation condition. In the present work, we employ $\lambda=2$ and 4 deformations with the parameters $(\beta_2,\beta_4,\gamma)$, where the so-called Lund convention~\cite{BR85} is used for the sign of the triaxiality parameter $\gamma$, and therefore, for example, $\langle x^2 \rangle < \langle y^2 \rangle < \langle z^2 \rangle$ for $0^\circ < \gamma < 60^\circ$. Here $\langle x^2 \rangle$ etc. are abbreviated notations of ${\displaystyle \Bigl\langle \sum_{a=1}^A \bigl(x^2\bigr)_a\Bigr\rangle}$ etc., which will also be used in the following discussions.
It is worthwhile mentioning that the triaxiality parameter in the Woods-Saxon potential, $\gamma\equiv\gamma({\rm WS})$, and the corresponding parameter in the Nilsson potential, $\gamma({\rm Nils})$, are somewhat different from that of the density distribution of the mean-field state, $\gamma({\rm den})$, which is defined by \begin{equation} \gamma({\rm den})\equiv \tan^{-1}\biggl[-\frac{\sqrt{2}\langle Q_{22}\rangle}{\langle Q_{20}\rangle} \biggr] , \label{eq:gammaden} \end{equation} where $Q_{2\mu}$ is the quadrupole operator; see Ref.~\cite{SSM08} for the precise definitions of the various $\gamma$ parameters and a discussion related to them. Although the difference between these quantities, e.g., $\gamma({\rm WS})$ and $\gamma({\rm den})$, is not as large as in the case of the wobbling motion of triaxial superdeformed nuclei, it is still sizable, and one has to be careful when discussing the triaxial deformation. One of the interesting quantities studied in part~I and also in Ref.~\cite{TS16} is the expectation value of the angular-momentum vector in the body-fixed frame specified by the mean-field, from which the projection is performed. Following the previous work~\cite{TS16}, we define the expectation value of each component of the angular-momentum vector in the intrinsic frame for the projected eigenstate $\alpha$ in the following way, \begin{equation} (\!( J^2_i )\!)_\alpha \equiv \sum_{KK'} f^{I*}_{K,\alpha}\, \langle IK|J^2_i|IK'\rangle\,f^I_{K',\alpha}, \label{eq:exJJ} \end{equation} where the index $i=x,y,z$ denotes the axis specified by the deformed mean-field wave function $|\Phi\rangle$, and the $(f^I_{K,\alpha})$ are the properly orthonormalized amplitudes~\cite{RS80}, which are defined with the help of the square-root matrix of the norm kernel by \begin{equation} f^I_{K,\alpha}=\sum_{K'} \bigl(\sqrt{{\cal N}^I}\,\bigr)_{K,K'}\, g^I_{K',\alpha}.
\label{eq:normfNocm} \end{equation} Needless to say, the purely algebraic quantity $\langle IK|J^2_i|IK'\rangle$, e.g., $\langle IK|J^2_z|IK'\rangle=\delta_{KK'}K^2$, should be calculated in the intrinsic frame with $[J_x,J_y]=-i\hbar J_z$ etc. The microscopic geometrical information is contained in the amplitude $f^I_{K,\alpha}$. A more microscopic definition using the mean-field wave function is necessary to obtain the neutron and proton contributions separately; they are evaluated by ($\tau={\rm n,p}$) \begin{equation} \langle\!\langle {J_i^{(\tau)2}} \rangle\!\rangle_\alpha \equiv {\rm Re}\biggl[\sum_{KK'} g^{I*}_{K,\alpha}\, \langle \Phi|{J_i^{(\tau)2}} \hat{P}_{KK'}^I|\Phi\rangle\, g^I_{K',\alpha}\biggr], \label{eq:exJJm} \end{equation} which is consistent with the definition of the total expectation value in Eq.~(\ref{eq:exJJ}); i.e., $\langle\!\langle {J_i^{({\rm n})2}} \rangle\!\rangle_\alpha +\langle\!\langle {J_i^{({\rm p})2}} \rangle\!\rangle_\alpha \approx (\!({J_i^2})\!)_\alpha$ to a very good approximation; see the discussion in the Appendix of Ref.~\cite{TS16}. \section{Application to a chiral doublet band} \label{sec:chiral} The possible existence of chiral doublet bands was first pointed out by Frauendorf and Meng in Ref.~\cite{FM97}, and has been explored experimentally since then; see, e.g., Refs.~\cite{SKC01,KSC03}. This interesting rotational motion is characteristic of triaxially deformed nuclei. As already discussed in part~I on the nuclear wobbling motion, there are three distinct directions in the body-fixed frame of a triaxially deformed nucleus. In the present work, we choose the intrinsic coordinate system that satisfies $\langle x^2 \rangle < \langle y^2 \rangle < \langle z^2 \rangle$; namely, $0 < \gamma < 60^\circ$, and the short, medium, and long axes are the $x$, $y$, and $z$ axes, respectively.
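As a small illustration of Eq.~(\ref{eq:gammaden}), the following sketch evaluates $\gamma({\rm den})$ from assumed quadrupole expectation values (the numbers are illustrative, not from our calculation):

```python
import math

def gamma_den(q20, q22):
    """Triaxiality of the density distribution, Eq. (gammaden), in degrees."""
    return math.degrees(math.atan(-math.sqrt(2.0) * q22 / q20))

# With the sign convention of Eq. (gammaden), <Q22> < 0 and <Q20> > 0
# give 0 < gamma(den) < 90 deg.
assert abs(gamma_den(100.0, -100.0 / math.sqrt(2.0)) - 45.0) < 1e-12
assert 0.0 < gamma_den(100.0, -20.0) < 60.0
```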
If there are three different kinds of angular-momentum vectors, each favoring alignment along one of these three principal axes, the three vectors are aplanar in the intrinsic frame. In such a situation, the symmetry of the ``handedness'' is broken; i.e., whether these three angular-momentum vectors form a right-handed or a left-handed set in the $xyz$ intrinsic coordinate system is chosen by the selfconsistent mean-field. Just as in the case of the parity doublet, two almost degenerate ${\mit\Delta I}=1$ rotational bands are expected as different linear combinations of the right- and left-handed states, which appear as the chiral doublet bands; see Sec.~\ref{sec:selrule} for details. A prototype example, which was considered in Refs.~\cite{FM97} and \cite{KSH04}, is an odd-odd nucleus with an odd proton sitting in a high-$j$ particle-like orbit and an odd neutron in a high-$j$ hole-like orbit (or vice versa). The high-$j$ particle-like orbit tends to align its angular-momentum vector along the short ($x$) axis, while the high-$j$ hole-like orbit tends to align along the long ($z$) axis. Moreover, the collective angular-momentum prefers to align along the axis with the largest moment of inertia, which is the medium ($y$) axis for irrotational-like moments of inertia. Thus an aplanar angular-momentum geometry, i.e., chiral geometry, is expected to appear above a critical spin $I_{\rm c}$. Below $I_{\rm c}$ the collective angular-momentum lies in the $xz$ plane; see the discussion of transverse wobbling in Ref.~\cite{Fra17}. In the present work we study the nucleus $^{128}$Cs, in which the odd proton (neutron) occupies the quasiparticle state whose main component is the particle-like (hole-like) $h_{11/2}$ orbit, and discuss how the chiral geometry comes about. In particular, it is shown that the ideal situation considered in Ref.~\cite{KSH04} is indeed realized in our microscopic calculations.
To demonstrate that the appearance of such an ideal chiral doublet band is not very rare in the calculation, we also briefly discuss another example, $^{104}$Rh, in which the odd proton (neutron) occupies the quasiparticle state whose main component is the hole-like $g_{9/2}$ (particle-like $h_{11/2}$) orbit. In the following we investigate the chiral doublet bands within the fully microscopic framework of the angular-momentum-projection approach, in contrast to the original works~\cite{FM97,KSH04}, where the macroscopic model of a triaxial rotor coupled to a particle and a hole is employed. The calculational procedure is the same as in part~I. The calculations are performed within the isotropic harmonic-oscillator basis, and the basis states are truncated at the maximum oscillator shell $N_{\rm osc}^{\rm max}=12$. As explained in detail in part~I, the monopole-type pairing force strengths are determined to reproduce the even-odd mass differences of the neighboring even-even nuclei, and their average is adopted for the odd-odd nucleus. In the present calculation the average pairing gaps for both neutrons and protons are calculated selfconsistently using the strengths thus determined. Since we do not intend to reproduce the experimental data but rather perform exploratory calculations, we arbitrarily choose an appropriate value for the deformation parameter $\beta_2$, and $\beta_4=0.0$ for simplicity, to obtain an ideal chiral geometry. As for the triaxial deformation, $\gamma({\rm WS})=30^\circ$ is adopted for the Woods-Saxon mean-field. \subsection{Chiral geometry and selection rules for transition rates} \label{sec:selrule} Before showing the results of our angular-momentum-projection calculation, we briefly discuss how the chiral geometry is realized and what is expected for it according to Refs.~\cite{FM97,KSH04}; see also the review articles~\cite{Fra01,SK17,Fra17}.
In the simple classical model, where the particle and hole angular momenta, $j_{\rm p}$ and $j_{\rm h}$, align along the short ($x$) axis and the long ($z$) axis, respectively, the trajectory of the angular-momentum vector ($J_x,J_y,J_z$) is given by the intersection of a sphere and a shifted ellipsoid, described by the equations, \begin{equation} \left\{\begin{array}{l} J_x^2+J_y^2+J_z^2=I(I+1), \vspace*{2mm}\cr {\displaystyle \frac{(J_x-j_{\rm p})^2}{2{\cal J}_x} +\frac{J_y^2}{2{\cal J}_y} +\frac{(J_z-j_{\rm h})^2}{2{\cal J}_z}=E }, \end{array}\right. \label{eq:paroteq} \end{equation} representing the conservation of the angular momentum, $I$, and the rotor-model energy, $E$, respectively. The quantities ${\cal J}_x$, ${\cal J}_y$ and ${\cal J}_z$ are the moments of inertia of the core nucleus in the body-fixed frame, and it is assumed that the medium axis has the largest inertia, i.e., ${\cal J}_y> {\cal J}_x,\,{\cal J}_z$. At low spins the trajectory of the lowest-energy state is mainly confined to the $xz$ principal plane, $J_x\approx j_{\rm p}$, $J_z\approx j_{\rm h}$, and $J_y \approx 0$, and in the first excited state the angular-momentum vector vibrates with respect to this plane, i.e., the so-called chiral vibration~\cite{SKC01}. This chiral vibrational excitation has been studied microscopically by the quasiparticle random-phase approximation in Ref.~\cite{ADF11}. When the spin increases and exceeds the critical spin~\cite{ODD04}, \begin{equation} I_{\rm c}= \left[\left( \frac{j_{\rm p}{\cal J}_y}{{\cal J}_y-{\cal J}_x}\right)^2 +\left(\frac{j_{\rm h}{\cal J}_y}{{\cal J}_y-{\cal J}_z}\right)^2\right]^{1/2}, \label{eq:critI} \end{equation} the chiral symmetry is broken in the yrast states; i.e., the aplanar angular-momentum geometry is realized, giving the two lowest degenerate solutions, the right-handed (e.g., $J_y >0$) and the left-handed (e.g., $J_y <0$) ones, which we denote by $|r \rangle$ and $|l \rangle$, respectively.
They are related by $|l \rangle = {\cal T}\hat{R}_y(\pi)|r \rangle$, where ${\cal T}$ is the time-reversal transformation and $\hat{R}_y(\pi)$ is the $\pi$-rotation about the $y$ axis. A mean-field solution exhibiting the aplanar chiral geometry was obtained for the first time in Ref.~\cite{VFD00} within the microscopic framework of the shell-correction tilted-axis-cranking approach. With the same approach, the transition to the chiral geometry and its critical spin value were investigated from the microscopic viewpoint in Ref.~\cite{ZGN03} in comparison with the experimental data. There is a tunneling effect between the two solutions, $|r \rangle$ and $|l \rangle$, and the quantum-mechanical eigenstates are obtained as the linear combinations, \begin{equation} |+\rangle=\frac{1}{\sqrt{2}}\bigl(|r \rangle+|l \rangle\bigr),\qquad |-\rangle=\frac{i}{\sqrt{2}}\bigl(|r \rangle-|l \rangle\bigr), \label{eq:chrleigs} \end{equation} which are interpreted as the chiral doublet states, just like the parity doublet states. Note that the partner states in Eq.~(\ref{eq:chrleigs}) are constructed for each spin value, $\cdots$, $I-1$, $I$, $I+1$, $\cdots$. Now let us consider the electromagnetic transition rates such as $E2$ and $M1$. Since a photon of such low multipolarity cannot turn the angular-momentum vector from the right- to the left-handed position, the matrix element $\langle l|\widehat{\cal M}|r \rangle$ of the transition operator $\widehat{\cal M}$ essentially vanishes once the static chiral geometry is realized. Thus these transition rates satisfy the selection rules, \begin{equation} B({\cal M};\,+\rightarrow +)\approx B({\cal M};\,-\rightarrow -),\qquad B({\cal M};\,-\rightarrow +)\approx B({\cal M};\,+\rightarrow -). \label{eq:Btrchrl} \end{equation} Namely, the in-band transitions of the two partner bands are the same, and so are the out-of-band transitions between them.
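These selection rules follow directly from Eq.~(\ref{eq:chrleigs}) once the right-left cross matrix elements are dropped; a minimal schematic derivation (suppressing the spin labels, so that bra and ket refer to the states at spins $I-1$ and $I$, respectively) reads:

```latex
% Expanding with Eq. (eq:chrleigs) and neglecting
% <l|M|r> and <r|M|l> in the static chiral regime:
\begin{align*}
\langle +|\widehat{\cal M}|+\rangle &=
 \tfrac{1}{2}\bigl(\langle r|\widehat{\cal M}|r\rangle
 +\langle l|\widehat{\cal M}|l\rangle\bigr)
 \approx \langle -|\widehat{\cal M}|-\rangle ,\\
\langle -|\widehat{\cal M}|+\rangle &\approx
 -\tfrac{i}{2}\bigl(\langle r|\widehat{\cal M}|r\rangle
 -\langle l|\widehat{\cal M}|l\rangle\bigr)
 \approx -\,\langle +|\widehat{\cal M}|-\rangle ,
\end{align*}
```

so the two diagonal (in-band) amplitudes coincide, as do the two off-diagonal (out-of-band) amplitudes in magnitude, which is the content of Eq.~(\ref{eq:Btrchrl}).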
\begin{figure}[!htb] \begin{center} \includegraphics[width=150mm]{selchrl.eps} \vspace*{-4mm} \caption{(Color online) Schematic figure representing the selection rules of the electromagnetic transitions for the ideal chiral geometry considered in Ref.~\cite{KSH04}, where the thick (thin) arrow denotes the large (small) transition rate. } \label{fig:selchrl} \end{center} \end{figure} In Ref.~\cite{KSH04} an interesting prototype case is considered, which shows especially characteristic properties of the $E2$ and $M1$ transition rates when the chiral symmetry is broken. Namely, the system is invariant with respect to the combined operation of the $\pi/2$-rotation about the medium ($y$) axis and an exchange of the valence neutron and proton; this operation is called $\hat{A}$ hereafter, and the eigenstates are classified by the eigenvalues $\pm 1$ of $\hat{A}$. Within the simple model of Eq.~(\ref{eq:paroteq}) the system is $\hat{A}$-invariant if the moments of inertia satisfy the condition ${\cal J}_x={\cal J}_z$ and the valence neutron and proton sit in the same high-$j$ orbit, because the $\pi/2$-rotation about the $y$ axis interchanges the $x$ and $z$ axes, while the exchange of the valence neutron and proton interchanges the particle and hole alignments $j_{\rm p}$ and $j_{\rm h}$. Considering that the contribution of the valence neutron and proton is almost negligible for the $E2$ operator and that the $M1$ operator has approximate isovector character, it has been shown that these transitions are almost prohibited between states with the same eigenvalue of $\hat{A}$. Moreover, chirality requires that the partner states in Eq.~(\ref{eq:chrleigs}) at a given spin have different eigenvalues of $\hat{A}$, because the exchange of the valence neutron and proton, while keeping the direction of the rotor angular momentum, changes the right-handed into the left-handed states.
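The statement that transitions are almost prohibited between states with the same $\hat{A}$-eigenvalue can be made explicit in one line. Assuming an operator that is odd under the symmetry, $\hat{A}\widehat{\cal M}\hat{A}^{-1}=-\widehat{\cal M}$ (as holds approximately for the isovector $M1$ operator under the neutron-proton exchange), and eigenstates $\hat{A}|a\rangle=\varepsilon_a|a\rangle$ with $\varepsilon_a=\pm 1$, one has

```latex
\begin{equation*}
\langle a|\widehat{\cal M}|b\rangle
 =\langle a|\hat{A}^{-1}\bigl(\hat{A}\widehat{\cal M}\hat{A}^{-1}\bigr)\hat{A}|b\rangle
 =-\,\varepsilon_a\varepsilon_b\,\langle a|\widehat{\cal M}|b\rangle ,
\end{equation*}
```

so the matrix element vanishes unless $\varepsilon_a\varepsilon_b=-1$, i.e., unless the two states carry opposite $\hat{A}$-eigenvalues.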
Taking into account the considerations above Eq.~(\ref{eq:Btrchrl}), the selection rules for the $E2$ and $M1$ transition rates inside and between the chiral doublet bands are summarized in Fig.~\ref{fig:selchrl}; see Ref.~\cite{KSH04} for more detailed discussions. An especially interesting property is seen in the ${\mit\Delta}I=1$ $E2$ and $M1$ transitions; large in-band with small out-of-band transitions and small in-band with large out-of-band transitions alternate with spin, which can be observed most clearly in the ratio of in-band to out-of-band transitions, e.g., $B(M1)_{\rm in}/B(M1)_{\rm out}$, as depicted schematically in the right part of Fig.~\ref{fig:selchrl}. \begin{figure}[!htb] \begin{center} \includegraphics[width=120mm]{momiXe.eps} \vspace*{-4mm} \caption{(Color online) Cranking moments of inertia about the three intrinsic axes, $x$, $y$, and $z$, which are the short, medium, and long axes (denoted by dotted, solid, and dashed lines, respectively), as functions of the triaxiality parameter $\gamma$ for $^{128}$Xe, the even-even core nucleus of $^{128}$Cs. The deformation parameters are $\beta_2=0.30,\beta_4=0.0$ and the pairing gaps are $\Delta_{\rm n}=0.85$ MeV and $\Delta_{\rm p}=1.07$ MeV, which are employed for the study of chirality in $^{128}$Cs. The $\gamma$ parameter of the Woods-Saxon potential is utilized in a) and that of the density distribution, Eq.~(\ref{eq:gammaden}), in b). } \label{fig:momiXe} \end{center} \end{figure} \begin{figure}[!htb] \begin{center} \includegraphics[width=120mm]{momiPd.eps} \vspace*{-4mm} \caption{(Color online) The same as Fig.~\ref{fig:momiXe} but for $^{104}$Pd, the even-even core nucleus of $^{104}$Rh. The deformation parameters are $\beta_2=0.25,\beta_4=0.0$ and the pairing gaps are $\Delta_{\rm n}=0.95$ MeV and $\Delta_{\rm p}=0.76$ MeV, which are employed for the study of chirality in $^{104}$Rh.
} \label{fig:momiPd} \end{center} \end{figure} In the following we will show that the ideal chiral geometry considered in Ref.~\cite{KSH04} is indeed realized in our angular-momentum projection calculation. This is non-trivial because we do not introduce any kind of rotor and/or valence nucleons explicitly in our fully microscopic framework. It is nevertheless instructive to look at the moments of inertia of the even-even ``core'' nuclei; i.e., $^{128}$Xe for the odd-odd nucleus $^{128}$Cs with an odd proton particle and an odd neutron hole, and $^{104}$Pd for the odd-odd nucleus $^{104}$Rh with an odd neutron particle and an odd proton hole. Although we do not explicitly use the three moments of inertia of the principal axes in our framework, their values are of interest. They can be estimated by the cranking procedure, ${\cal J}_i ={\displaystyle \mathop{\rm lim}_{\omega_i \rightarrow 0}} \langle J_i \rangle/\omega_i$, where $\omega_i$ is the cranking frequency about the $i$-th axis ($i=x,y,z$) of the intrinsic frame. Figures~\ref{fig:momiXe} and~\ref{fig:momiPd} display the calculated cranking moments of inertia for $^{128}$Xe and $^{104}$Pd, respectively, as functions of the triaxiality parameter, $\gamma{\rm (WS)}$, of the Woods-Saxon potential. As discussed for the wobbling motion in part~I, different definitions of the triaxiality parameter give considerably different values~\cite{SSM08}. Therefore, we also show the same quantities as functions of the triaxiality parameter, $\gamma{\rm (den)}$, of the density distribution defined in Eq.~(\ref{eq:gammaden}), although the differences are not as large as in the case of the triaxial superdeformed states in part~I. It should be mentioned that the mean-field parameters used here, other than the triaxiality, are those employed for the analysis of the chiral doublet bands in $^{128}$Cs and $^{104}$Rh. Therefore the calculated moments of inertia may not be very realistic for the $^{128}$Xe and $^{104}$Pd nuclei themselves.
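As a reference point for the comparison that follows, the irrotational-flow moments of inertia behave as ${\cal J}_k\propto\sin^2(\gamma-2\pi k/3)$, $k=1,2,3$. A minimal numerical check at $\gamma=30^\circ$ (the assignment of $k$ to the $x$, $y$, $z$ axes is convention dependent; what matters here is the resulting $4:1:1$ ratio, with the factor 4 belonging to the medium axis):

```python
import math

def irrotational_inertia(gamma_deg):
    """Irrotational-flow moments of inertia (arbitrary units),
    J_k proportional to sin^2(gamma - 2*pi*k/3), k = 1, 2, 3."""
    g = math.radians(gamma_deg)
    return [math.sin(g - 2.0 * math.pi * k / 3.0) ** 2 for k in (1, 2, 3)]

J = irrotational_inertia(30.0)
# At gamma = 30 deg two of the three inertias are equal and the third
# is larger by exactly a factor of 4.
print([round(j, 2) for j in J])  # -> [1.0, 0.25, 0.25]
```

This is the factor-4 ratio quoted in the text, against which the microscopic cranking values of Figs.~\ref{fig:momiXe} and~\ref{fig:momiPd} are compared.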
The dependence of the three moments of inertia on $\gamma{\rm (den)}$ resembles that of irrotational flow; see, e.g., Fig.~1 of part~I. However, the relative values are considerably different; at $\gamma=30^\circ$, ${\cal J}_y$ is larger than ${\cal J}_x={\cal J}_z$ by a factor of 4 for irrotational flow, while the factor is 2.4--2.6 for the microscopic ones in Figs.~\ref{fig:momiXe} and~\ref{fig:momiPd}. This result is similar to that for the triaxial superdeformed state in $^{163}$Lu studied in part~I. We adopt the value $\gamma{\rm (WS)}=30^\circ$ for the analysis, and it can be seen that the necessary condition, ${\cal J}_y> {\cal J}_x\approx{\cal J}_z$, is approximately satisfied in both $^{128}$Cs and $^{104}$Rh. With these cranking moments of inertia at $\gamma{\rm (WS)}=30^\circ$, and assuming full alignments of the $h_{11/2}$ and $g_{9/2}$ orbits, the critical angular momentum in Eq.~(\ref{eq:critI}) is estimated to be $I_{\rm c}\approx 12.9$ for $^{128}$Cs and $I_{\rm c}\approx 11.8$ for $^{104}$Rh. \subsection{Chiral doublet band in $^{128}$Cs} \label{sec:chiralCs} In the course of our investigation we have found that it is difficult to obtain the doublet bands if the mean-field is cranked with finite rotational frequencies. Therefore, we either do not crank the mean-field at all or apply only infinitesimal cranking~\cite{TS16} with 10~keV frequencies about the three principal axes, as studied in part~I. If the mean-field is constructed without cranking, there is an ambiguity related to the fact that the single-particle states are doubly degenerate (i.e., the Kramers degeneracy), which was already discussed in part~I for the odd nucleus $^{163}$Lu. These doubly degenerate states are usually classified by the signature, i.e., the symmetry with respect to the $\pi$ rotation about one of the intrinsic coordinate axes.
We choose the $x$ axis and classify the single-particle orbits by $\hat{R}_x(\pi)$; namely, into a {\it favored} signature state $\alpha$ and its conjugate {\it unfavored} state $\bar{\alpha}=-\alpha$. In the case of an odd nucleus there is no ambiguity, because the mean-field state with the odd particle in the $\bar{\alpha}$ state is obtained from the one with the odd particle in the $\alpha$ state by the rotation $\hat{R}_x(\pi)$, and therefore the result of the angular-momentum projection from these two states is exactly the same. For an odd-odd nucleus, however, there are four possible configurations for the occupation of the odd neutron and odd proton; i.e., \begin{equation} \mbox{(a) }(\alpha_\nu,\alpha_\pi),\quad \mbox{(b) }(\bar{\alpha}_\nu,\alpha_\pi),\quad \mbox{(c) }(\alpha_\nu,\bar{\alpha}_\pi),\quad \mbox{(d) }(\bar{\alpha}_\nu,\bar{\alpha}_\pi) \,. \label{eq:sig4class} \end{equation} Among them the configuration (d) is obtained from (a) by $\hat{R}_x(\pi)$, and (c) from (b) by $\hat{R}_x(\pi)$, but the configurations (a) and (b) are independent for the angular-momentum projection calculation. We have numerically confirmed this fact; i.e., the result of projection from the blocked configuration (d) is exactly the same as that from (a), and the result from (c) is the same as that from (b). The results of projection from (a) and from (b), however, are different, although the differences are found to be very small. One possible way to remove this ambiguity is to mix the two independent configurations (a) and (b) of Eq.~(\ref{eq:sig4class}); we will discuss this point in the following. In practice we use an extremely small cranking frequency, $\omega_x=10^{-10}$ MeV/$\hbar$, and block the lowest quasiparticle state to generate the configuration $\alpha$ and the second lowest state for the configuration $\bar{\alpha}$.
In the following study of $^{128}$Cs the blocking of the negative-parity quasiparticle state originating from the $h_{11/2}$ orbit has been performed for both the neutron and the proton. \begin{figure}[!htb] \begin{center} \includegraphics[width=75mm]{sCsnocr.eps} \vspace*{-4mm} \caption{(Color online) Energy spectrum for $^{128}$Cs calculated by the angular-momentum-projection method from the non-cranked mean-field with the configuration (a) in Eq.~(\ref{eq:sig4class}). A rigid-rotor reference energy $0.013\,I(I+1)$ MeV is subtracted. } \label{fig:sCsnocr} \end{center} \end{figure} The ideal chiral geometry of the doublet band does not always appear in the calculation; one needs to choose a proper deformation. We found that it appears at the deformation parameter $\beta_2=0.30$ without hexadecapole deformation; thus we have chosen $\beta_2=0.30$, $\beta_4=0.0$ and $\gamma=30^\circ$ for the Woods-Saxon mean-field in the following investigation of the chiral doublet band in $^{128}$Cs. Note that $\gamma=\gamma({\rm WS})=30^\circ$ corresponds to $\gamma({\rm den})=24.0^\circ$ in this case. The average pairing gaps calculated selfconsistently are then $\Delta_{\rm n}=0.85$ MeV and $\Delta_{\rm p}=1.07$ MeV for neutrons and protons, respectively. It should be mentioned that the adopted value, $\beta_2=0.30$, is considerably larger than the commonly used values, $\beta_2 \approx 0.15-0.20$, in the nuclear region around $^{128}$Cs. The resultant rotational spectrum is displayed in Fig.~\ref{fig:sCsnocr}, where the angular-momentum projection is performed from the non-cranked mean-field with the configuration (a) in Eq.~(\ref{eq:sig4class}). We have performed the same calculation with the configuration (b), but the result is very similar and is not shown.
In this and the following figures the even-$I$ and odd-$I$ sequences of a band are connected by the solid and dashed lines, respectively, and a rigid-rotor reference energy $0.013\,I(I+1)$ MeV is subtracted in order to exhibit the degeneracy of the bands more clearly. This reference energy is selected such that the experimentally observed yrast band is almost flat; see Fig.~\ref{fig:sCsmixed} below. At first sight the excitation spectrum of the multiple-band structure is similar to the wobbling bands in $^{163}$Lu studied in part~I. However, the yrast wobbling band in $^{163}$Lu has spin values with $I-1/2$ even, the first excited band has spin values with $I-1/2$ odd, etc.; i.e., the bands are composed of ${\mit\Delta}I=2$ states and the signature of the multiple wobbling bands alternates with increasing energy. In the present case of the multiple bands in $^{128}$Cs, the even-$I$ and odd-$I$ bands are almost degenerate and form ${\mit\Delta}I=1$ rotational bands. This is because the odd proton aligns its angular-momentum vector along the short ($x$) axis, while the odd neutron hole aligns its angular-momentum vector along the long ($z$) axis, and consequently the signature symmetry is strongly broken already at low spins. This is confirmed later by looking at the expectation values of the intrinsic angular-momentum vector in Fig.~\ref{fig:jCsmixed} below. What is important for the spectrum of $^{128}$Cs in Fig.~\ref{fig:sCsnocr} is that the energies of the lowest (yrast) and the second lowest (yrare) ${\mit\Delta}I=1$ bands become very close in the spin range $15 \ltsim I \ltsim 25$, which can be well interpreted as the chiral doublet band~\cite{FM97}. The calculated minimum energy difference between the two bands is about 120 keV at $I=18$. The critical spin $I_{\rm c}\approx 13$ estimated in Sec.~\ref{sec:selrule} is slightly smaller than the spin at which the two bands become almost degenerate.
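The critical-spin estimates quoted above can be checked directly from Eq.~(\ref{eq:critI}). A minimal numerical sketch, assuming full alignments $j_{\rm p}=j_{\rm h}=11/2$ for $^{128}$Cs ($j_{\rm p}=11/2$, $j_{\rm h}=9/2$ for $^{104}$Rh) and a rounded inertia ratio ${\cal J}_y/{\cal J}_x={\cal J}_y/{\cal J}_z\approx 2.5$ read off from Figs.~\ref{fig:momiXe} and~\ref{fig:momiPd} (the actual cranking values used in the text differ slightly):

```python
import math

def critical_spin(jp, jh, ry_x, ry_z):
    """Critical spin of Eq. (critI), with ry_x = Jy/Jx and ry_z = Jy/Jz,
    so that Jy/(Jy - Jx) = ry_x/(ry_x - 1), etc."""
    return math.hypot(jp * ry_x / (ry_x - 1.0), jh * ry_z / (ry_z - 1.0))

# 128Cs: both odd nucleons in h11/2 (j = 11/2)
print(round(critical_spin(5.5, 5.5, 2.5, 2.5), 1))  # -> 13.0
# 104Rh: neutron in h11/2 (j = 11/2), proton hole in g9/2 (j = 9/2)
print(round(critical_spin(5.5, 4.5, 2.5, 2.5), 1))  # -> 11.8
```

Both values are close to the estimates $I_{\rm c}\approx 12.9$ and $I_{\rm c}\approx 11.8$ quoted in Sec.~\ref{sec:selrule}.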
\begin{figure*}[!htb] \begin{center} \includegraphics[width=155mm]{sCsmixed.eps} \vspace*{-4mm} \caption{(Color online) Left panel: Energy spectrum for $^{128}$Cs calculated by the angular-momentum-projected configuration-mixing method with the two mean-field configurations (a) and (b) in Eq.~(\ref{eq:sig4class}). Right panel: Comparison of the calculated and experimental chiral doublet bands in $^{128}$Cs. Experimental data are taken from Ref.~\cite{KSC03}. } \label{fig:sCsmixed} \end{center} \end{figure*} There are two possible configurations for the odd-odd nucleus, (a) and (b) in Eq.~(\ref{eq:sig4class}). Although the resultant spectra obtained from the two configurations are rather similar, we have performed the projected configuration-mixing including both of them in order to obtain an unambiguous result, which is shown in the left panel of Fig.~\ref{fig:sCsmixed}. Compared with the result of Fig.~\ref{fig:sCsnocr}, the higher-lying spectrum changes considerably; in particular, the excitation energies become lower and the level density of the excited bands becomes higher than without configuration-mixing. It should be stressed, however, that the yrast and yrare bands, which are almost degenerate and thus interpreted as the chiral doublet bands, remain almost the same as without configuration-mixing in Fig.~\ref{fig:sCsnocr}. The agreement with the experimental data is not very good, as seen in the right panel of Fig.~\ref{fig:sCsmixed}. The calculated bands become degenerate at about $I\approx 15$, while experimentally the two bands come together already at $I\approx 11$. Moreover, the slope of the calculated bands is too steep; namely, the moments of inertia are too small compared with the experimental data, which is similar to the calculation of the wobbling band in $^{163}$Lu studied in part~I. As pointed out in Ref.~\cite{TS16}, infinitesimal cranking quite often improves the calculated moments of inertia.
Therefore, we have also tried to apply it to the present case of $^{128}$Cs with frequencies $\omega_x=\omega_y=\omega_z=0.01$ MeV$/\hbar$, where the projection is performed from the single configuration of the infinitesimally cranked mean-field state in which the lowest-energy quasiparticle state is blocked for both the odd neutron and the odd proton. We have found that the result of this calculation is very similar to that of mixing the two configurations in Fig.~\ref{fig:sCsmixed}, and it is not shown. Namely, the moments of inertia of the calculated rotational bands are not improved in this particular case. This result indicates that the time-odd components of the wave function induced by the infinitesimal cranking do not contribute to increasing the moments of inertia here, although they contain the two different configurations (a) and (b) of Eq.~(\ref{eq:sig4class}) and their mixing effect. \begin{figure*}[!htb] \begin{center} \includegraphics[width=155mm]{jCsmixed.eps} \vspace*{-4mm} \caption{(Color online) The calculated expectation values of the angular-momentum vector in the intrinsic frame for the configuration-mixed calculation of $^{128}$Cs corresponding to the spectrum in Fig.~\ref{fig:sCsmixed}. The left panel shows the expectation values of the total vector for the yrast (filled symbols) and yrare (open symbols) ${\mit\Delta}I=1$ bands, while the right panel shows the neutron (filled symbols) and proton (open symbols) contributions in Eq.~(\ref{eq:exJJm}) separately for the yrast band. Note that the $x$, $y$, and $z$ axes are the short, medium, and long axes, respectively. } \label{fig:jCsmixed} \end{center} \end{figure*} In order to study the dynamics of the angular-momentum vector, the expectation values of its components in the intrinsic frame, calculated by Eq.~(\ref{eq:exJJ}) for the yrast and yrare bands, are shown in the left panel of Fig.~\ref{fig:jCsmixed}.
Here the results of the configuration-mixed calculation corresponding to Fig.~\ref{fig:sCsmixed} are displayed, but the results are qualitatively similar for the other cases. As shown in the left panel, all three components of the expectation values of the intrinsic angular-momentum vector are non-negligible. In the lower spin region, $I \ltsim 8$, the dominant components for the yrast band are those along the short ($x$) and the long ($z$) axes. As the spin increases, the component along the medium ($y$) axis, which is the axis with the largest moment of inertia of the core nucleus (see Fig.~\ref{fig:momiXe}), quickly increases. In the spin range $15 \ltsim I \ltsim 25$ the yrast and yrare bands have nearly the same geometries, which is characteristic of the chiral regime. The largest component of the angular-momentum vector changes from being along the $x$ axis to along the $y$ axis at $I\approx 18$, which roughly corresponds to the critical angular momentum for the appearance of the chiral doublet band in Fig.~\ref{fig:sCsmixed}. This correspondence between the critical angular momentum and the change of direction of the angular-momentum vector in the intrinsic frame has also been discussed for the wobbling motion in the $^{163}$Lu nucleus studied in part~I. \begin{figure*}[!htb] \begin{center} \includegraphics[width=120mm]{arwsCs.eps} \vspace*{-4mm} \caption{(Color online) Angular-momentum vectors in the intrinsic frame for the $I=11$, 16, and 21 yrast (upper panel) and yrare (lower panel) states in $^{128}$Cs according to the expectation values shown in Fig.~\ref{fig:jCsmixed}.
} \label{fig:arwsCs} \end{center} \end{figure*} It can also be seen in Fig.~\ref{fig:jCsmixed} that the $y$ component of the yrast band is considerably smaller than that of the yrare band for $I \ltsim 17$, which suggests that the vector of the yrast band stays near the $xz$ principal plane, while the vector of the yrare band goes back and forth with respect to this plane; i.e., the system is in the regime of chiral vibration~\cite{SKC01}. Note that the quantities $\langle\!\langle {J_i^2} \rangle\!\rangle$ in Eq.~(\ref{eq:exJJ}) include such effects of angular-momentum fluctuations. In contrast, the three components of the angular-momentum vectors of the yrast and yrare bands are similar at $I \gtsim 18$ and form an aplanar configuration, i.e., the system is in the regime of static chirality; this is exactly the situation expected for the chiral doublet band to appear. The transition from the regime of chiral vibration to that of static chirality occurs gradually. A similar transition, expected for the transverse wobbling and related to the direction of the angular-momentum vector in the intrinsic frame, has been discussed in part~I. How the total angular-momentum vector changes its direction is shown pictorially in Fig.~\ref{fig:arwsCs} according to the calculation in Fig.~\ref{fig:jCsmixed}: The directions of the two vectors for the yrast and yrare bands are rather different at $I=11$; in fact the vector of the yrare band vibrates with respect to the $xz$ principal plane. At $I=21$ the two vectors point in almost the same direction, and the yrast and yrare bands can be interpreted as a pair of doublet bands. The $x$ and $z$ components stay almost constant at $I \gtsim 18$, and the $y$ component becomes dominant at higher spins, $I \gtsim 30$.
In the chiral regime, the particle-like proton quasiparticle aligns its angular momentum along the $x$ axis and the hole-like neutron quasiparticle aligns its angular momentum along the $z$ axis, while the collective angular-momentum vector lies mainly along the $y$ axis. To see how neutrons and protons contribute to the expectation values of the angular-momentum vector, the neutron and proton contributions for the yrast band are depicted separately in the right panel of Fig.~\ref{fig:jCsmixed}. The contribution of neutrons or protons cannot be obtained from Eq.~(\ref{eq:exJJ}); one needs to look into the microscopic wave function explicitly, and Eq.~(\ref{eq:exJJm}) should be used instead. It is seen in the right panel of Fig.~\ref{fig:jCsmixed} that the dominant contribution to the $x$ component indeed comes from protons and that to the $z$ component from neutrons, while both neutrons and protons contribute to the $y$ component, as expected for collective angular momentum. Note that both neutrons and protons contribute to the increase of the $x$ component at lower spins, $I \ltsim 16$, which suggests a non-negligible amount of collective angular momentum there. This is consistent with the ``classical'' model of Eq.~(\ref{eq:paroteq}): the collective angular momentum increases in the $xz$ plane below the critical spin $I_{\rm c}$, while its $y$ component starts to increase after $I_{\rm c}$ with the $xz$ components kept constant. Thus, the ideal situation of the chiral geometry is realized in this case by our fully microscopic angular-momentum-projection calculation. A similar analysis of the expectation values of angular-momentum vectors has been performed within the particle-rotor model in Refs.~\cite{QZM09,QZW09}, which allows one to discriminate between the collective core and quasiparticle angular momenta, in contrast to our microscopic analysis.
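The distinction between the planar (chiral-vibration) and aplanar (static-chirality) geometries discussed above can be quantified by the angle of the intrinsic angular-momentum vector out of the $xz$ principal plane. A minimal sketch with hypothetical component values, not the calculated ones of Fig.~\ref{fig:jCsmixed}:

```python
import math

def out_of_plane_angle(jx, jy, jz):
    """Angle (degrees) of the intrinsic angular-momentum vector
    out of the xz principal plane; 0 corresponds to a planar geometry."""
    return math.degrees(math.atan2(jy, math.hypot(jx, jz)))

# Hypothetical illustrative components (hbar units):
print(round(out_of_plane_angle(8.0, 1.0, 6.0), 1))   # -> 5.7  (nearly planar)
print(round(out_of_plane_angle(8.0, 12.0, 6.0), 1))  # -> 50.2 (aplanar)
```

A small angle indicates the chiral-vibration regime, while a large angle with all three components sizable indicates the static-chirality regime.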
\begin{figure*}[!htb] \begin{center} \includegraphics[width=155mm]{tRCsmixed.eps} \vspace*{-4mm} \caption{(Color online) The calculated $B(E2:I\rightarrow I-2)$ values for the yrast band (solid lines) and the yrare band (dashed lines) as functions of spin in $^{128}$Cs. The left and right panels show the in-band and out-of-band values, respectively. Note that the ordinate scale in the right panel is different from that in the left panel. Shown are the results of the configuration-mixed projection calculation corresponding to Fig.~\ref{fig:sCsmixed}. } \label{fig:tRCsmixed} \end{center} \end{figure*} One of the merits of the angular-momentum-projection method is that the electromagnetic transition rates between any eigenstates can be calculated straightforwardly. In the present work, we concentrate on the in-band and out-of-band transitions for the two lowest ${\mit\Delta}I=1$ bands, the yrast and yrare bands, obtained by configuration-mixing of the two configurations (a) and (b) in Eq.~(\ref{eq:sig4class}). The $I \rightarrow I-2$ rotational $E2$ transition rates are shown in Fig.~\ref{fig:tRCsmixed}. The in-band transitions are always large; in fact, the large $B(E2:I\rightarrow I-2)$ values are used to define each rotational band. The in-band transition rates are similar for the yrast and yrare bands, with those of the yrare band slightly larger at lower spins, $I \le 17$, and those of the yrast band slightly larger at higher spins, $I \ge 22$. In contrast, the out-of-band transition probabilities are generally small, although they are non-negligible in $17 \le I \le 23$, where the energy splitting of the two bands is smallest and mixing between them is expected. Note that the out-of-band transitions from the yrare to the yrast band at low spins, $I \le 15$, are not so small, while those from the yrast to the yrare band are very small, which is characteristic of chiral vibrations.
The increasing behavior of the in-band $E2$ transitions results from the change of direction of the angular-momentum vector in the intrinsic frame for the fixed deformation of the mean-field. In the semiclassical approximation the $B(E2)$ values are proportional to $\bigl|\langle x_j^2-x_k^2 \rangle\bigr|^2$ for rotation about the $i$-th principal axis ($ijk$ cyclic), and for rotation about a tilted axis the transition amplitudes are given by linear combinations of these moments $\langle x_j^2-x_k^2 \rangle$, depending on the angles of the angular-momentum vector (cf. the formulas in Refs.~\cite{Fra93,Fra00}). In the present case, $0 < \gamma < 60^\circ$, the moment $\langle z^2-x^2 \rangle$ is the largest, and the maximum value of $B(E2)$ is expected for rotation about the $y$ axis, which is realized only at much higher spins; see the left panel of Fig.~\ref{fig:jCsmixed}. This increase of the rotational $B(E2)$ values has also been seen in the particle-rotor model calculation of Ref.~\cite{QZW09}. However, the measured rotational $B(E2)$ values~\cite{GSP06} do not show such an increase with spin; they even decrease slightly at the highest spins observed. Moreover, the calculated $B(E2)$ values are about a factor of two larger than the measured values, because the value of $\beta_2$ employed in the present study is too large, as mentioned previously. Therefore, we do not attempt a detailed comparison with the experimental data except for the $B(M1)_{\rm in}/B(M1)_{\rm out}$ ratio. \begin{figure*}[!htb] \begin{center} \includegraphics[width=155mm]{tEMCsmixed.eps} \vspace*{-4mm} \caption{(Color online) The calculated $B(E2:I\rightarrow I-1)$ and $B(M1:I\rightarrow I-1)$ values for the yrast band (solid lines) and the yrare band (dashed lines) as functions of spin in $^{128}$Cs. The upper (lower) panels show the in-band (out-of-band) transition rates. These are the results of the configuration-mixed projection calculation corresponding to Fig.~\ref{fig:sCsmixed}.
} \label{fig:tEMCsmixed} \end{center} \end{figure*} \begin{figure*}[!htb] \begin{center} \includegraphics[width=155mm]{trCsmixed.eps} \vspace*{-4mm} \caption{(Color online) Left panel: The calculated $B(M1:I\rightarrow I-1)/B(E2:I\rightarrow I-2)$ ratios inside the yrast band (solid lines) and inside the yrare band (dashed lines) as functions of spin for $^{128}$Cs. Right panel: The ratio of the $I\rightarrow I-1$ in-band and out-of-band $M1$ transitions, $B(M1)_{\rm in}/B(M1)_{\rm out}$, where the in-band transitions are those inside the yrare band and the out-of-band transitions are those from the yrare to the yrast band. For $B(M1)_{\rm in}/B(M1)_{\rm out}$ the experimental data are also included~\cite{KSC03}. These are the results of projection with the configuration-mixing corresponding to Fig.~\ref{fig:sCsmixed}. } \label{fig:trCsmixed} \end{center} \end{figure*} The characteristic geometry of static chirality is reflected in the ${\mit\Delta}I=1$ electromagnetic transition rates, as reviewed in Sec.~\ref{sec:selrule}. We show the $I\rightarrow I-1$ $E2$ and $M1$ transition rates as functions of spin in Fig.~\ref{fig:tEMCsmixed} (both in-band and out-of-band transitions). It is clearly seen that the behavior of both the $E2$ and $M1$ transitions changes around $I=16$. The $B(E2)$ and $B(M1)$ values for the yrast and yrare bands become similar after the chiral geometry is realized at $I \gtsim 16$. For $I \ltsim 15$ the in-band transitions are larger than the out-of-band transitions. For $I \gtsim 16$ the in-band and out-of-band transitions are of similar magnitude, and both of them show the characteristic zigzag pattern. In particular, the in-band (out-of-band) transition rates are strongly suppressed at even (odd) spins, and which transition is stronger, the in-band or the out-of-band one, alternates as a function of spin. This is exactly what is expected from the prototype model of Ref.~\cite{KSH04} (see Fig.~\ref{fig:selchrl}).
As is often done for the experimental data, the $B(M1)/B(E2)$ ratios for the yrast and yrare bands are displayed on a logarithmic scale in the left panel of Fig.~\ref{fig:trCsmixed}. Again, the behavior of the ratios changes after the critical spin and clearly shows a regular zigzag pattern, which originates from the $M1$ transitions. As discussed in Sec.~\ref{sec:selrule}, the clearest signature of this ideal scenario is seen in the ratio of in-band to out-of-band $M1$ transitions, which is compared with the experimental data~\cite{KSC03} in the right panel of Fig.~\ref{fig:trCsmixed}. For $I \gtsim 17$ this ratio alternates between values greater than one at odd spins and smaller than one at even spins, which corresponds well to the experimentally observed feature, although the spin range is slightly shifted, as expected from the excitation energies in the right panel of Fig.~\ref{fig:sCsmixed}. In this way, the result of the present microscopic calculation clearly shows the characteristic behavior of the chiral geometry predicted by the phenomenological model of Ref.~\cite{KSH04}, not only for the energy spectrum but also for the transition rates. \begin{figure*}[!htb] \begin{center} \includegraphics[width=155mm]{trCsmixedg25.eps} \vspace*{-4mm} \caption{(Color online) The same as Fig.~\ref{fig:trCsmixed} but for the calculation using $\gamma({\rm WS})=25^\circ$. } \label{fig:trCsmixedg25} \end{center} \end{figure*} To see the effect of the triaxial deformation we have performed the same calculation using values of $\gamma$ smaller than $30^\circ$: The resultant energy difference between the yrast and yrare bands increases and the amplitude of the zigzag behavior of the $I\rightarrow I-1$ $E2$ and $M1$ transitions decreases, while the $I\rightarrow I-2$ rotational $E2$ transitions remain essentially unchanged.
As an example, we show the result of the $B(M1)/B(E2)$ and $B(M1)_{\rm in}/B(M1)_{\rm out}$ ratios in Fig.~\ref{fig:trCsmixedg25}, which is obtained by the calculation using $\gamma({\rm WS})=25^\circ$ and keeping the other parameters unchanged. Note that $\gamma({\rm WS})=25^\circ$ corresponds to $\gamma({\rm den})=19.3^\circ$. As seen in Fig.~\ref{fig:momiXe}, the moment of inertia ${\cal J}_x$ is about a factor of two larger than ${\cal J}_z$, in contrast to the case of $\gamma({\rm WS})=30^\circ$ with ${\cal J}_x\approx {\cal J}_z$, which is the necessary condition for the model of Ref.~\cite{KSH04}. The minimum energy difference between the yrast and yrare bands in this case is about 340 keV at $I=21$, which is about a factor of three larger than that in the case of $\gamma({\rm WS})=30^\circ$. As is clearly seen by comparing Fig.~\ref{fig:trCsmixedg25} with Fig.~\ref{fig:trCsmixed}, the amplitude of the zigzag behavior is reduced by one to two orders of magnitude. Therefore, as emphasized in Ref.~\cite{KSH04}, the $B(M1)_{\rm in}/B(M1)_{\rm out}$ ratio indicates how well the situation assumed in the model is realized. These results are consistent with those of the particle-rotor model in Ref.~\cite{QZW09}, where the results of calculations with several different $\gamma$ values are presented. \subsection{Chiral doublet band in $^{104}$Rh} \label{sec:chiralRh} As another example of a chiral doublet band in an odd-odd nucleus, we present the result of the calculation for $^{104}$Rh, where the odd neutron occupies the particle-like negative-parity orbit (mainly $h_{11/2}$) and the odd proton occupies the hole-like positive-parity orbit (mainly $g_{9/2}$). In this case the high-$j$ orbits of the odd neutron and proton are different. The resultant rotational band has negative parity. The calculational procedure is the same as for $^{128}$Cs.
The adopted deformation parameters are $\beta_2=0.25$, $\beta_4=0.0$, and $\gamma=30^\circ$, for which we have found that a chiral doublet band appears in the calculations. Note that $\gamma=\gamma({\rm WS})=30^\circ$ corresponds to $\gamma({\rm den})=24.9^\circ$ in this case. The average pairing gaps, calculated self-consistently, are $\Delta_{\rm n}=0.95$ MeV and $\Delta_{\rm p}=0.76$ MeV for neutrons and protons, respectively. The adopted value, $\beta_2=0.25$, is again larger than the commonly used value, $\beta_2 \approx 0.18-0.23$, in the nuclear region around $^{104}$Rh. For this nucleus we show only the result of projection from the non-cranked mean-field state constructed by the configuration (a) in Eq.~(\ref{eq:sig4class}) for simplicity; other results are qualitatively similar. \begin{figure}[!htb] \begin{center} \includegraphics[width=155mm]{sRhnocr.eps} \vspace*{-4mm} \caption{(Color online) Left panel: Energy spectrum for $^{104}$Rh calculated by the angular-momentum-projection method from the non-cranked mean-field with the configuration (a) in Eq.~(\ref{eq:sig4class}). A rigid-rotor reference energy of $0.017\,I(I+1)$ MeV is subtracted. The lowest fourteen bands for both even-$I$ and odd-$I$ sequences are shown. Right panel: Comparison of the calculated and experimental chiral doublet bands in $^{104}$Rh. Experimental data are taken from Ref.~\cite{VFK04}. } \label{fig:sRhnocr} \end{center} \end{figure} \begin{figure*}[!htb] \begin{center} \includegraphics[width=155mm]{jRhnocr.eps} \vspace*{-4mm} \caption{(Color online) The calculated expectation values of the angular-momentum vector in the intrinsic frame for the non-cranked mean-field of $^{104}$Rh corresponding to the spectrum in Fig.~\ref{fig:sRhnocr}.
The left panel shows the expectation values of the total vector for the yrast (filled symbols) and yrare (open symbols) ${\mit\Delta}I=1$ bands, while the right panel shows the neutron (filled symbols) and proton (open symbols) contributions in Eq.~(\ref{eq:exJJm}) separately for the yrast band. Note that the $x$, $y$, and $z$ axes are the short, medium, and long axes, respectively. } \label{fig:jRhnocr} \end{center} \end{figure*} The calculated spectrum is displayed in the left panel of Fig.~\ref{fig:sRhnocr}. The rigid-rotor reference energy $0.017\,I(I+1)$ MeV is subtracted to see the details more clearly. Just like the case of $^{128}$Cs in Fig.~\ref{fig:sCsnocr}, the even-$I$ and odd-$I$ sequences are nearly degenerate, indicating that the signature symmetry is strongly broken. The lowest two ${\mit\Delta}I=1$ bands, which are separated by more than 1 MeV at low spins, quickly approach each other to within about 200$-$350 keV in the spin range $14 \ltsim I \ltsim 20$. The estimated critical spin $I_{\rm c}\approx 12$ in Eq.~(\ref{eq:critI}) is slightly smaller than the spin where the two bands become almost degenerate. This behavior corresponds rather well to the observed one~\cite{VFK04}, although the moments of inertia of these bands are underestimated compared with the experimental data, as shown in the right panel of Fig.~\ref{fig:sRhnocr}. The chiral geometry is confirmed also in this case by the expectation values of the angular-momentum vector in the intrinsic frame, which are depicted in Fig.~\ref{fig:jRhnocr}. At low spins, $I \ltsim 8$, the components of the short ($x$) and long ($z$) axes are dominant for the yrast band, while the component of the medium ($y$) axis quickly grows with spin. All three components give important contributions at the intermediate spin region; see the left panel of Fig.~\ref{fig:jRhnocr}.
As discussed in the case of $^{128}$Cs, the $y$ component of the angular-momentum vector for the yrare band is considerably larger than that for the yrast band at $I \ltsim 15$, which indicates that the yrare band can be interpreted as a one-phonon excitation of the chiral vibration in this lower spin region. As expected, for $I \gtsim 16$ the behavior of the angular-momentum vectors indicates that the system is in the regime of the static chirality. Looking into the right panel of Fig.~\ref{fig:jRhnocr}, where the neutron and proton contributions are displayed separately, the main contribution comes from the neutron for the $x$ component and from the proton for the $z$ component, while both neutrons and protons coherently contribute to the $y$ component, as expected for the collective angular momentum. The axis with the largest moment of inertia is the $y$ axis, as seen from the cranking inertias of the core nucleus in Fig.~\ref{fig:momiPd}. This collective angular-momentum component is very small below $I_{\rm c}$ and starts to increase at higher spins $I>I_{\rm c}$. Below $I_{\rm c}$, the collective part lies mainly in the $xz$ plane with the $x$ component being more favored, as in the case of $^{128}$Cs, but neutrons contribute to it more than protons, in contrast to $^{128}$Cs. This behavior is again consistent with the model in Sec.~\ref{sec:selrule}. Thus, the expected transition from the regime of the chiral vibration to that of the static chirality is also confirmed in this case. \begin{figure*}[!htb] \begin{center} \includegraphics[width=155mm]{tRRhnocr.eps} \vspace*{-4mm} \caption{(Color online) The calculated $B(E2:I\rightarrow I-2)$ values for the yrast band (solid lines) and the yrare band (dashed lines) as functions of spin in $^{104}$Rh. The left and right panels show the in-band and out-of-band transition rates, respectively. Note that the ordinate scale in the right panel is different from that in the left panel.
These are the results of projection from the non-cranked lowest configuration (a) in Eq.~(\ref{eq:sig4class}) corresponding to Fig.~\ref{fig:sRhnocr}. } \label{fig:tRRhnocr} \end{center} \end{figure*} The $B(E2)$ and $B(M1)$ values inside the yrast and yrare bands, as well as between the two bands, are also calculated for $^{104}$Rh. The $I\rightarrow I-2$ stretched $B(E2)$ values are displayed in Fig.~\ref{fig:tRRhnocr}. It is seen that the in-band values are large and are similar for the yrast and the yrare bands. These rotational transition probabilities increase as functions of spin, which corresponds to the fact that the direction of the angular-momentum vector changes gradually to the medium ($y$) axis as the spin increases, as shown in Fig.~\ref{fig:jRhnocr}. The out-of-band transitions are non-negligible for $16 \le I \le 19$, where the energy difference between the two bands is very small and band mixing is expected. These features are very similar to the case of $^{128}$Cs. The $B(E2)$ values seem to be overestimated compared with the experimental data~\cite{SRK08}, because the deformation parameter $\beta_2=0.25$ may be too large. \begin{figure*}[!htb] \begin{center} \includegraphics[width=155mm]{tEMRhnocr.eps} \vspace*{-4mm} \caption{(Color online) The calculated $B(E2:I\rightarrow I-1)$ and $B(M1:I\rightarrow I-1)$ values for the yrast band (solid lines) and the yrare band (dashed lines) as functions of spin in $^{104}$Rh. The upper (lower) panels show the in-band (out-of-band) transition rates. These are the results of projection from the non-cranked lowest configuration (a) in Eq.~(\ref{eq:sig4class}) corresponding to Fig.~\ref{fig:sRhnocr}.
} \label{fig:tEMRhnocr} \end{center} \end{figure*} \begin{figure*}[!htb] \begin{center} \includegraphics[width=155mm]{trRhnocr.eps} \vspace*{-4mm} \caption{(Color online) Left panel: The calculated $B(M1:I\rightarrow I-1)/B(E2:I\rightarrow I-2)$ ratios inside the yrast band (solid lines) and inside the yrare band (dashed lines) as functions of spin for $^{104}$Rh. Right panel: The ratio of the $I\rightarrow I-1$ in-band and out-of-band $M1$ transitions, $B(M1)_{\rm in}/B(M1)_{\rm out}$, where the in-band transitions are those inside the yrare band and the out-of-band transitions are those from the yrare to the yrast band. These are the results of projection from the non-cranked mean-field corresponding to Fig.~\ref{fig:sRhnocr}. } \label{fig:trRhnocr} \end{center} \end{figure*} The calculated $B(E2:I\rightarrow I-1)$ and $B(M1:I\rightarrow I-1)$ values are shown in Fig.~\ref{fig:tEMRhnocr} in the same way as in the case of $^{128}$Cs. The behavior of both $B(E2)$ and $B(M1)$ as functions of spin changes at around $I\approx 15$; after this spin they exhibit the typical zigzag behavior, which is expected after the static chirality is realized. As predicted in the prototype model of Ref.~\cite{KSH04}, the in-band transitions are large when the out-of-band transitions are small, and vice versa. In the case of $^{104}$Rh the in-band (out-of-band) transitions are almost negligible for odd (even) spins, which is opposite to the case of $^{128}$Cs. This may be expected because both the particle and hole orbits are mainly $h_{11/2}$ in $^{128}$Cs, while the proton hole orbit in $^{104}$Rh is mainly $g_{9/2}$: If one $h_{11/2}$ hole is replaced with $g_{9/2}$, the coupled total spin may be reduced by one unit. Finally, the $B(M1)/B(E2)$ ratio and the in-band versus out-of-band $B(M1)$ ratio are shown on a logarithmic scale in the left and right panels of Fig.~\ref{fig:trRhnocr}, respectively.
As discussed in the case of $^{128}$Cs, the $B(M1)/B(E2)$ ratio changes to the regular zigzag behavior after the chiral geometry is realized, which reflects the behavior of the $B(M1)$ values. The in-band versus out-of-band $B(M1)$ ratio also shows a characteristic pattern, namely it alternates between values greater than one and smaller than one as a function of spin. This is the expected behavior from the model in Ref.~\cite{KSH04} (see Fig.~\ref{fig:selchrl}). However, it should be noted that the neutron-proton symmetry prerequisite of the model of Ref.~\cite{KSH04} is not precisely satisfied in the present example, because the high-$j$ orbits of the odd neutron and proton are different. It is interesting that the calculation shows the characteristic selection rules of the model even in this case. In fact, the particle-rotor model calculation with proton $g_{9/2}$ and neutron $h_{11/2}$ orbits in Ref.~\cite{JJR04} shows similar zigzag behavior of $B(M1)$ for the neighboring nucleus $^{106}$Rh. Although the zigzag behavior of $B(M1)$ is observed in the experimental data~\cite{VFK04,SRK08}, its amplitude is too large in the present calculation. The observed doublet band may not come as close to the model of Ref.~\cite{KSH04} as our calculations do. \begin{figure*}[!htb] \begin{center} \includegraphics[width=155mm]{trRhnocrg25.eps} \vspace*{-4mm} \caption{(Color online) The same as Fig.~\ref{fig:trRhnocr} but for the calculation using $\gamma({\rm WS})=25^\circ$. } \label{fig:trRhnocrg25} \end{center} \end{figure*} Finally, we show the results of a calculation using $\gamma=\gamma({\rm WS})=25^\circ$ for this nucleus. Note that $\gamma=\gamma({\rm WS})=25^\circ$ corresponds to $\gamma({\rm den})=20.2^\circ$ in this case. The moment of inertia ${\cal J}_x$ is about a factor of two larger than ${\cal J}_z$ (see Fig.~\ref{fig:momiPd}). Figure~\ref{fig:trRhnocrg25} depicts the $B(M1)/B(E2)$ and $B(M1)_{\rm in}/B(M1)_{\rm out}$ ratios, as in Fig.~\ref{fig:trRhnocr}.
Apparently, the magnitude of the oscillation of the $B(M1)$ values is reduced by one to two orders of magnitude. The out-of-band $B(M1)$ values become smaller than the in-band values, and the center of the oscillations is changed to a value that is considerably larger than one. \section{Summary} \label{sec:summary} In this series of investigations, we have studied rotational motion that is characteristic of nuclei with triaxial deformation. The basic method we employed is the fully microscopic framework of angular-momentum projection from the mean-field wave function, where the microscopic Hamiltonian is composed of the Woods-Saxon mean-field and the separable schematic interaction. Among various interesting types of rotational motion, we have concentrated on the nuclear wobbling motion and the chiral vibrations and rotations. The former is the subject of part~I and the latter is the subject of the present part~II in the series. The chirality of a rotating triaxially deformed nucleus is a relatively new concept, and odd-odd nuclei provide typical examples. We have applied our microscopic framework to the typical cases of two odd-odd nuclei, $^{128}$Cs and $^{104}$Rh, where the odd proton (neutron) occupies the high-$j$ particle-like orbit and the odd neutron (proton) occupies the high-$j$ hole-like orbit in the former (latter) nucleus. The odd nucleons occupying the particle- and hole-like orbits align their angular-momentum vectors along the short and long axes, respectively. Combined with the collective rotation around the medium axis, which has the largest moment of inertia, these three angular-momentum vectors form an aplanar configuration, i.e., the chiral geometry is realized in the body-fixed frame of the triaxial mean-field. In such a situation the chiral symmetry between the right- and left-handedness is broken, which is the reason why the chiral doublet band emerges~\cite{FM97}.
Adjusting the quadrupole deformation parameter $\beta_2$ and fixing the triaxiality parameter at $\gamma({\rm WS})=30^\circ$, we are able to obtain the yrast and yrare bands as a chiral doublet in our fully microscopic angular-momentum-projection calculation. By calculating the expectation values of the angular-momentum vector with respect to the three principal axes, it is confirmed that the chiral geometry is realized for the selected examples of $^{128}$Cs and $^{104}$Rh. However, the moments of inertia of the calculated bands are too small compared with the experimental data. One of the merits of the angular-momentum-projection method is the feasibility of calculating the electromagnetic transition probabilities. We have studied the $E2$ and $M1$ transitions between the members of the doublet bands. It is demonstrated that the $I\rightarrow I-1$ transition rates completely change their behavior after the static chirality is reached. Large and small reduced probabilities alternate as functions of spin, and this behavior is out of phase for the in-band and out-of-band transitions. This characteristic feature is in accordance with the prototype model proposed in Ref.~\cite{KSH04}, and qualitatively corresponds to the experimental data for both $^{128}$Cs and $^{104}$Rh. In this way, we have confirmed that the two interesting types of rotational motion, the wobbling motion and the chiral rotation, which are characteristic of triaxially deformed nuclei, naturally emerge as results of our fully microscopic angular-momentum-projection calculation. The wobbling bands and chiral doublet bands were originally predicted based on the macroscopic rotor model or the phenomenological particle-rotor coupling model. Considering that the properties predicted by these models are confirmed by our microscopic calculations, we conclude that the macroscopic rotor-model picture is well realized for triaxially deformed nuclei.
It should, however, be noted that a quantitative description of these rotational bands is not achieved in the present series of works. Further investigation is needed for a quantitative description of the data. \vspace*{10mm}
\section{Introduction} Novelty detection implies finding elements that have not appeared before, or that are new or original with respect to relevant references. The explosive growth of documents across the web has resulted in the accumulation of redundant ones, thereby consuming space as well as the precious time of readers seeking new information. This necessitates finding means for discarding redundant document(s) and retaining ones containing novel information. The level of information duplication is not just limited to the lexical surface form of texts but has breached the barriers of semantics and pragmatics too. Paraphrasing, semantic-level plagiarism, etc., are instances of such practices. Intelligent text reuse, synonym replacement, and careful alignment may lead to a surface form which is very different from the originating source yet conveys the same meaning. Present state-of-the-art text matching techniques are unable to detect such redundancy. The quest for new information is an eternal human need and demands attention in this age of explosive data redundancy. One major objective of this work is to provide a benchmark setup for experiments to filter out superfluous information across the web. With this work we introduce a simple dataset to the research community so as to foster efficient methods for detecting \textit{document level novelty} or, on the contrary, document level redundancy. We create the resource by crawling news reports of events of different categories and name it \textit{TAP-DLND 1.0}\footnote{http://www.iitp.ac.in/~ai-nlp-ml/resources.html} (after the initial names of the principal investigators \textit{Tirthankar-Asif-Pushpak}), which also stands for \textit{Explore Document Level Novelty Detection (DLND)}.
In this work we view the problem of novelty detection as a two-class classification problem, judging whether an incoming document bears sufficiently new information to be labeled as novel with respect to a set of source documents. The source document set could be seen as the memory of the reader, which stores known information. We extract features from target documents with respect to corresponding source documents and develop a classification system. We report promising results with our features on the developed dataset. \subsection{Related Works} Although sentence level novelty detection is a well studied problem in the information retrieval literature, very little has been done to address the problem at the document level. To begin with, \cite{li2005novelty} rightly pointed out that research in novelty detection from texts has been carried out at three levels: event level, sentence level, and document level. Research in novelty mining can be traced back to the Topic Detection and Tracking (TDT)~\cite{allan2002introduction} evaluation campaigns, where the concern was to detect new events from online news streams. Although the intention was to detect the \textit{first story} or reporting of a new event from a series of news stories, the notion of \textit{novelty detection} from texts thereby came to light for the research community. Some notable approaches for New Event Detection with the TDT corpus are by \cite{allan1998line,yang2002topic,stokes2001first,franz2001first,yang1998study,allan2000first,brants2003system}. However, the Novelty track in TREC \cite{soboroff2005novelty} was the first to explicitly explore the concept of Novelty Detection from texts. Under the paradigm of information retrieval, given a query, the TREC experiments were designed to retrieve relevant and novel sentences from a given collection.
Some notable approaches for sentence level novelty detection from the TREC exercises are by \cite{allan2003retrieval,kwee2009sentence,li2005novelty,zhang2003expansion,collins2002information,gabrilovich2004newsjunkie,ru2004improved}. Textual Entailment based sentence level novelty mining was explored in the novelty subtask of RTE-TAC 6 and 7~\cite{bentivogli2011seventh}. At the document level, the problem has been attempted by only a few, such as \cite{zhang2002novelty,tsai2011d2s,karkali2013efficient,dasgupta2016automatic}. However, we find that there is a dearth of a proper evaluation setup (e.g., corpus, baseline, and evaluation methods) for novelty detection at the document level. This inspired us to create one and establish a benchmark for the same. \subsection{Motivation and Contribution} Our understanding and survey revealed that, in spite of having several applications in various natural language processing (NLP) tasks, novelty detection at the document level has not attracted the attention it deserves. Hence, we deem that novelty at the document level needs to be understood first, investigated in-depth, and a benchmark setup (gold standard resources etc.) be created to validate the investigations as well as provide baselines for further research. We hope that the knowledge gained from this dataset and experiments would be a step towards our more ambitious vision of semantic level plagiarism detection in scholarly articles. Our contributions here can be outlined as follows: \begin{itemize} \item Proposing a benchmark dataset for document level novelty detection. We are unaware of the availability of any such corpus; and \item A supervised machine learning model for document level novelty detection. This can be treated as a baseline model for further research. \end{itemize} \section{Document Level Novelty} Novelty detection from texts implies search for new information with respect to whatever is already known or seen.
Hence, the problem of novelty detection from texts is very subjective and depends upon the view of the intended reader. The knowledge of the reader regarding a particular event serves as the reference against which s/he decides the novelty of an incoming piece of information. Careful observation of the data characteristics led us to believe that \textit{Relevance}, \textit{Relativity}, and \textit{Temporality} are three important properties of novelty detection. For example, searching for novelty between two documents, one talking about \textit{jaguar}, the animal, and the other about \textit{jaguar}, the car, is futile, as one is not relevant to the other. It is quite obvious that each one would contain different information from the other. Also, when we talk about a document being novel, it is always with respect to a reference set of documents already seen (information already gained from those seen documents), or what we call the knowledge base of the reader. Moreover, novel information is usually a temporal update over existing knowledge. With this view of novelty we went on to create a resource that effectively taps these properties, \textit{viz.,} \textbf{Relevance}, \textbf{Relativity} and \textbf{Temporality}. Our resource not only encompasses the lexical form of redundancy (a straightforward form of \textit{non-novelty}) but also delves deep into semantic textual redundancy (a more complex form of \textit{non-novelty}) with the expertise of human annotators. \section{Benchmark Setup} To address the issues pointed out in the previous section, we develop a benchmark setup as discussed below. \subsection{Data Collection} We design a web crawler\footnote{using the www.webhose.io API} to perform systematic, unbiased, \textit{event-specific} crawling of news articles, mostly from the online versions of Indian English newspapers.
The news domains we looked into are: \textit{Accident (ACC), Politics (PLT), Business (BUS), Arts and Entertainment (ART), Crime (CRM), Nature (NAT), Terrorism (TER), Government (GOV), Sports (SPT), and Society (SOC)}. To ensure that the \textit{Temporality} criterion is preserved, our web crawler is designed to fetch web documents for a certain event in a timely manner, i.e., the crawled documents are grouped by their dates of publication in different forums (See Figure \ref{dlnd_crawl}). Event-wise statistics of the corpus are in Table \ref{DLND-cat}. \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth,height= 4 cm]{dlnd_crawled.png} \caption{Temporal Crawling} \label{dlnd_crawl} \end{figure} \begin{table}[h] \begin{center} \begin{tabular}{ |c|c| } \hline \bf Features & \bf Statistics \\ \hline Crawling period & Nov'16 - Nov'17 \\ \hline Number of events & 223 \\ \hline Number of sources per event & 3 \\ \hline Total novel documents & 2736 \\ \hline Total non-novel documents & 2704 \\ \hline Total documents in TAP-DLND 1.0 & 6109 \\ \hline Average number of sentences &15\\\hline Average number of words&353\\\hline \end{tabular} \end{center} \caption{\label{DLND-stats} Statistics of TAP-DLND 1.0 corpus. Here, average number of sentences and words is per document.} \end{table} \subsection{Preprocessing} As the data were crawled from various web sources\footnote{List of few news sources : www.ndtv.com, indianexpress.com, timesofindia.indiatimes.com, indiatoday.intoday.in, thehindu.com, news18.com, firstpost.com, dnaindia.com, deccanchronicle.com, financialexpress.com, business-standard.com, sify.com, newskerala.com, mid-day.com, thedailystar.net, theweek.in, tribuneindia.com } we perform some manual preprocessing steps, such as removal of headlines, news source, date, time, and noise (advertisements, images, hyperlinks), and convert the data into the desired shape.
\subsection{Source Document Selection} To enforce the \textit{Relevance} and \textit{Relativity} criteria, we select three documents for each event as the seed source documents. They are usually selected from the initial dates of reporting and are also chosen such that they represent different facets of information regarding that particular event (\textit{information coverage}). These source documents serve as the reference against which we asked the annotators to tag a target document (chosen from the remaining crawled documents for that event) as \textit{novel} or \textit{non-novel}. The source documents could be perceived as the memory of the reader, or information already known, against which it is to be determined with a reasonable level of certainty whether a target document contains sufficient new information to be labeled as \textit{novel}. \begin{table} \begin{center} \begin{tabular}{ |c|c|c|c| } \hline \bf Category & \bf \# Events & \bf \# N & \bf \# NN \\ \hline \hline ACC & 10 & 231 & 272\\ \hline PLT & 97 & 669 & 685\\ \hline BUS & 35 & 202 & 264 \\ \hline ART & 21 & 397 & 258\\ \hline CRM & 10 & 237 & 174\\ \hline NAT & 10 & 87 & 250 \\ \hline TER & 18 & 255 & 468\\ \hline GOV & 15 & 405 & 219\\ \hline SPT & 2 & 39 & 51\\ \hline SOC & 5 & 214 & 63\\ \hline \end{tabular} \end{center} \caption{\label{DLND-cat} Event-wise statistics of TAP-DLND 1.0, $\# N\rightarrow$ Number of Novel documents, $\# NN\rightarrow$ Number of Non-Novel documents} \end{table} \subsection{Renaming files} For ease of information retrieval, we rename each document in the corpus. A certain document bearing \textbf{'ACCE005SRC003.txt'} as its file name indicates that it is the 3rd source document of the 5th event in the \textit{accident category}. For target documents 'SRC' is replaced by 'TGT'. \subsection{Meta files} We generate meta files (.xml) for each document in the corpus.
These meta files contain background information regarding a source/target document within structured XML tags and have the same file name as that of the corresponding document. The information contained in the meta files is: \textit{date of publishing, publisher, title of reporting, source id, event id, event name, category, Document Level Annotation (DLA), number of words and sentences}. We develop a semi-automatic \textit{meta file generator interface} where attribute values are automatically captured from the hierarchically organized data (See Figure \ref{dlnd}). Stanford CoreNLP \cite{manning2014stanford}, integrated with our interface, gave us the field values for the \textit{sentence} and \textit{word} counts. We asked our annotators to provide their judgments for the \textit{DLA} attribute based on the guidelines specified in the next section. \subsection{Annotation} Three annotators with post-graduate level knowledge of English were involved in labeling the DLND target documents. Having read the source document(s), the annotators were asked to annotate an incoming on-event document as \textit{non-novel} or \textit{novel} solely based on the information coverage in the source documents. The \textbf{annotation guidelines} were simple: \begin{enumerate} \item To annotate as \textit{non-novel} a document whose semantic content significantly overlaps with the source document(s) (maximum redundant information). \item To annotate as \textit{novel} a document whose semantic content as well as intent (direction of reporting) significantly differs from the source document(s) (minimum or no information overlap). It could be an update on the same event or describe a post-event situation. \item To leave out the ambiguous cases (for which the human annotators were not sure about the label). \end{enumerate} Two annotators independently labeled the target documents. The third annotator resolved the differences via majority voting.
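The label-resolution and agreement computations just described can be sketched as follows. This is a minimal illustration over toy labels, not the authors' actual annotation tooling; with two independent annotators plus an adjudicator, Cohen's kappa over the two annotators is one natural reading of the reported agreement figure.

```python
from collections import Counter

def resolve_labels(ann1, ann2, adjudicator):
    """Majority vote over two independent annotators; the third
    annotator's label decides the disagreement cases."""
    return [a if a == b else c for a, b, c in zip(ann1, ann2, adjudicator)]

def cohen_kappa(ann1, ann2):
    """Chance-corrected agreement between two annotators."""
    n = len(ann1)
    observed = sum(a == b for a, b in zip(ann1, ann2)) / n
    c1, c2 = Counter(ann1), Counter(ann2)
    # Expected agreement under independent labeling with each
    # annotator's empirical label distribution.
    expected = sum(c1[l] / n * c2[l] / n for l in set(c1) | set(c2))
    return (observed - expected) / (1 - expected)

# Toy example: six target documents, two annotators, one adjudicator.
a1 = ["novel", "novel", "non-novel", "novel", "non-novel", "non-novel"]
a2 = ["novel", "non-novel", "non-novel", "novel", "non-novel", "novel"]
adj = ["novel", "novel", "novel", "novel", "non-novel", "novel"]
gold = resolve_labels(a1, a2, adj)
```

The adjudicator's label is consulted only where `a1` and `a2` disagree, which is exactly the majority-vote scheme described above.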
We found that novel items with respect to the source documents mostly appeared in reports published on subsequent dates on the same event, whereas non-novel items appeared in reports published by different agencies on the same date as the source documents. This is in line with the \textit{Temporality} criterion we discussed earlier. The inter-annotator agreement ratio was found to be \textbf{0.82} in terms of the \textbf{Kappa coefficient}~\cite{fleiss1971measuring}, which is considered good as per~\cite{landis1977measurement}. The final structure of DLND is in Figure \ref{dlnd}. \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth,height=6 cm]{dlnd_structure.png} \caption{The DLND corpus structure} \label{dlnd} \end{figure} \section{Evaluation} \begin{savenotes} \begin{table*}[ht] \begin{center} \begin{tabular}{ |c|c|p{11 cm}| } \hline \bf Type & \bf Features & \bf Description\\ \hline Semantic & Paragraph Vector (pv) + Cosine & We represent the source and target documents in terms of \textit{paragraph vectors}\footnote{Distributed Bag-Of-Words (DBOW) paragraph vector model trained on Wikipedia articles.}\cite{le2014distributed}. Then we take the maximum of the cosine similarity between the source-target pairs.\\ \hline Semantic & Concept Centrality & To identify the central theme of a document we use the \textit{TextRank} summarization algorithm by \cite{mihalcea2004textrank}. Thereafter we vectorize the ranked summary for each source and target document by simple \textit{word2vec}\footnote{Trained on Google News Corpus of 100 billion words. 300-dimensional vectors using the CBOW model}\cite{mikolov2013distributed} concatenation. Finally we take the maximum of the cosine similarity between the source and target vectors. \\ \hline Lexical& n-gram similarity& We compute the lexical overlap of target \textit{n-grams} with respect to the source documents for $n$ = 2, 3, and 8.
We use octagrams (8-grams) to put emphasis on phrase overlap.\\ \hline Lexical & Named Entities and & As Named Entities\footnote{Entities were extracted using the Stanford Tagger.} and Keywords\footnote{Using the Rapid Automatic Keyword Extraction (RAKE) algorithm.} play a significant role in determining\\ & Keywords match (kw-ner) & \textit{relevance}, we give additional weight to them by considering their match (target w.r.t. sources) as a separate feature.\\ \hline Lexico- & New Word Count & The number of new words could be an effective indicator of the amount of\\ Semantic & (nwc) & novel information content in the target document w.r.t. the source(s) given. Here, for calculating new words, along with the surface forms, we consider their synonyms\footnote{Obtained from English WordNet~\cite{fellbaum1998wordnet}} as well to establish semantic relatedness.\\ \hline Language & Divergence& We use this feature to measure the dissimilarity between two documents\\ Model& (kld) & represented as language models. We concatenate all the source documents into one and then measure the Kullback-Leibler Divergence with the target. \\ \hline \end{tabular} \end{center} \caption{\label{feature_set} Feature Set} \end{table*} \end{savenotes} We frame document level novelty detection as a binary classification problem and choose features in line with the objective nature of the texts in our experiments. We develop a binary classifier based on the Random Forest\footnote{RF of 100 trees with minimum number of instances per leaf set to 1, implemented in the WEKA machine learning toolkit} (RF)~\cite{Breiman2001} algorithm that classifies a document as either \textit{novel} or \textit{non-novel}. Our key focus is on extracting features that capture the semantics of a document. The set of features that we use for training and/or testing the RF is listed in Table \ref{feature_set}.
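Two of the lexical features can be sketched directly. This is a minimal illustration on pre-tokenized text; the helper names are ours, and the WordNet synonym expansion used for the actual new-word-count feature is omitted:

```python
def ngram_set(tokens, n):
    """All n-grams of a token list, as a set of tuples."""
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_overlap(source_tokens, target_tokens, n):
    """Fraction of target n-grams that also occur in the source."""
    tgt = ngram_set(target_tokens, n)
    if not tgt:
        return 0.0
    return len(tgt & ngram_set(source_tokens, n)) / len(tgt)

def new_word_count(source_tokens, target_tokens):
    """Surface-form new words in the target w.r.t. the source."""
    return len(set(target_tokens) - set(source_tokens))
```

For instance, a target sharing two of its three bigrams with the source scores 2/3 on bigram overlap.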
As is evident from the discussion in Section 3, TAP-DLND 1.0 calls for text representations at different levels (lexical as well as semantic). We first consider a simple yet popular lexical baseline: Jaccard similarity with unigrams between the source documents and the target \cite{zhang2003expansion}. We train a Logistic Regression (LR) classifier on the Jaccard score to classify a document based on its overlap with the source documents. Table \ref{dlnd_res} clearly indicates that this lexical baseline fails badly at identifying \textit{non-novel} documents. Next, we evaluated three approaches of \cite{zhang2002novelty} for novelty detection at the document level. The first, Set Difference, is essentially the count of new words in the target document with respect to the set of source document(s). For this we concatenate the source document(s) of each event to form one source against each target. The second, Geometric Distance, measures the cosine similarity between two documents represented as \textit{tf-idf} vectors. For the three source documents against one target document in TAP-DLND 1.0, we take the maximum of the cosine similarity scores. The third approach measures the Kullback-Leibler divergence between the concatenated source document(s) and the prospective target document, where a document $d$ is represented as a probabilistic unigram word distribution (language model $\theta_d$). Instead of setting a fixed threshold as in \cite{zhang2003expansion}, we train a Logistic Regression classifier on these measures to automatically determine the decision boundary. Another approach, by \cite{karkali2013efficient}, based on novelty scoring via Inverse Document Frequency (IDF), performed poorly in recognizing novel/non-novel documents in TAP-DLND 1.0. We also compare our method with the more recent approach of \cite{dasgupta2016automatic} on our data.
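The language-model baseline above can be sketched as follows. This is a minimal version with add-one smoothing over the joint vocabulary; the smoothing scheme of the original work may differ:

```python
import math
from collections import Counter

def kl_divergence(source_tokens, target_tokens):
    """KL(target || source) between add-one-smoothed unigram
    language models, with the source documents already concatenated."""
    vocab = set(source_tokens) | set(target_tokens)
    src, tgt = Counter(source_tokens), Counter(target_tokens)
    n_src = len(source_tokens) + len(vocab)
    n_tgt = len(target_tokens) + len(vocab)
    kld = 0.0
    for w in vocab:
        p = (tgt[w] + 1) / n_tgt   # target model theta_d
        q = (src[w] + 1) / n_src   # concatenated-source model
        kld += p * math.log(p / q)
    return kld
```

Identical documents give a divergence of zero; the score grows as the target's word distribution drifts away from the source's.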
This entropy-based approach produces a novelty score ($NS$) for a document $d$ with respect to a collection $C$. We adapt their threshold criteria and infer that documents with a novelty score above (\textit{average + standard deviation}) are \textit{novel} and those with a novelty score below (\textit{average - standard deviation}) are \textit{non-novel}. We left out the remaining (average novelty) cases in our experiments. The numbers in Table \ref{dlnd_res} clearly show that our method surpasses the baselines and the purported \textit{state-of-the-art} by a substantial margin. We attribute this to the choice of semantic features for our experiments (see Figure \ref{feature significance}). The lexico-semantic feature new word count has the maximum contribution, for which we argue that novel events in the context of newspaper articles contain new entities, concepts and numbers, whereas non-novel documents consist of identical or synonymous entities. Semantic features also play a vital role, which indicates that the detection of novelty extends beyond lexical characteristics of text.
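The mean-plus/minus-standard-deviation rule adapted above for the entropy-based baseline can be written out as a small decision function (sketch; `scores` stands for the novelty scores $NS$ over the collection):

```python
import statistics

def threshold_label(score, scores):
    """Label by the mean +/- one-standard-deviation rule; middle
    (average-novelty) cases are left out, as in our adaptation."""
    mu = statistics.mean(scores)
    sigma = statistics.pstdev(scores)
    if score > mu + sigma:
        return "novel"
    if score < mu - sigma:
        return "non-novel"
    return None  # ambiguous: excluded from the experiment
```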
\begin{table*}[ht] \begin{center} \begin{tabular}{ |c|c|c|c|c|c|c|c| } \hline \bf Systems & \bf P(N) & \bf R(N) & \bf $F_1$(N) & \bf P(NN) & \bf R(NN) & \bf $F_1$(NN) & \bf Accuracy\\ \hline Jaccard+LR (Baseline) & 52.2 & 96.1 & 67.6 & 74.0 & 10.9 & 19.0 & 53.8\\ \hline Set Difference+LR & & & & & & &\\ \cite{zhang2002novelty} & 74.3 & 71.5 & 72.8 & 72.2 & 74.9 & 73.5 & 73.2\\ \hline Geometric Distance+LR & & & & & & &\\ \cite{zhang2002novelty} & 65.6 & 84.3 & 73.7 & 84.2 & 55.3 & 66.7 & 69.8\\ \hline Language Model (KLD)+LR & & & & & & &\\ \cite{zhang2002novelty} & 73.2 & 74.9 & 74.1 & 74.0 & 72.3 & 73.1 & 73.6\\ \hline Novelty (IDF)+LR & & & & & & &\\ \cite{karkali2013efficient} & 52.5 & 92.1 & 66.9 & 66.5 & 15.9 & 25.6 & 54.2\\ \hline \cite{dasgupta2016automatic} & 65.1 & 63.8 & 64.4 & 64.1 & 65.3 & 64.6 & 64.5\\ \hline \hline \bf Proposed Approach (RF) & \bf 77.6 & 82.3 & \bf 79.8 & 80.9 & \bf 76.1 & \bf 78.4 & 79.2\\ \hline \end{tabular} \end{center} \caption{\label{dlnd_res} \textit{10-fold} cross-validation results on TAP-DLND 1.0 (in \%), $P\rightarrow Precision$, $R\rightarrow Recall$, $N\rightarrow Novel$, $NN\rightarrow Non-Novel$, $LR\rightarrow$ Logistic Regression, $IDF\rightarrow$ Inverse Document Frequency, $KLD\rightarrow$ Kullback-Leibler Divergence} \end{table*} \begin{figure}[h!] \centering \includegraphics[width=0.5\textwidth]{ig_tapdlnd1.png} \caption{\label{feature significance} Significance of features \textit{based on Information Gain (IG)}. The length of each bar (X-axis) corresponds to the average merit, in terms of IG, of the feature (Y-axis).} \end{figure} \section{Conclusion}In this work we put forward a benchmark resource for \textit{document level novelty detection} and an evaluation scheme for the same. Our resource has extensive coverage of ten different news categories and also inherently includes the \textit{relevance, relativity,} and \textit{temporality} criteria within its schema.
Along with straightforward lexical characteristics, its gold labels also manifest the high-level semantic understanding of the human annotators, which is essential for detecting semantic-level redundancy. We hope that TAP-DLND 1.0 will evolve into a benchmark resource for experiments on document level novelty detection and provide valuable insights into the problem. In the future we plan to annotate the TAP-DLND 1.0 corpus at the sentence level to gain a finer-grained perception of the amount of new information required to deem a document \textit{novel}. We also intend to include more target documents in data-scarce categories. \section{Acknowledgements} The first author is supported by the Visvesvaraya PhD Scheme, an initiative of the Ministry of Electronics and Information Technology (MeitY), Government of India. The work is a product of the Elsevier Centre of Excellence, Indian Institute of Technology Patna. \section{Bibliographical References} \label{main:ref} \bibliographystyle{lrec}
\section{Introduction} Connections between coverings and properties of solutions to the heat equation have been studied previously by various authors for both Riemannian manifolds \cite{Bro81, LS84, Bro85, Li86, Gri99, PS12} and graphs \cite{CY99, DM06}, among other works. Here, we contribute to this investigation by looking at three recently developed properties of interest for the heat equation on infinite weighted graphs, namely, stochastic incompleteness, the Feller property and uniform transience, and by studying how these properties behave with respect to coverings. All three of these properties involve heat escaping to infinity in some sense. Stochastic incompleteness (or non-conservativeness) concerns a loss in the total amount of heat at some time. This has been studied rather thoroughly for manifolds, see \cite{Gri99} for an overview, and, more recently, for graphs \cite{DM06, Woj08, Woj09, Web10, Hua11, Woj11, Hua12, KL12, GHM12, KLW13, Hua14, Fol14b, HL17, MW}. The Feller property concerns heat vanishing at infinity. It has been investigated for manifolds in \cite{Aze74, Yau78, Dod83, Hsu89, Dav92, PS12}, among other works, and, again more recently, for graphs in \cite{Woj17}. Uniform transience is a strengthening of transience (as well as of the Feller property) which was recently introduced for graphs in \cite{KLSW17}, following previous work in \cite{BCG01, Win10, Kas10, Kas13}. It is also related to the notion of uniform subcriticality which has been studied for elliptic operators defined on domains in Euclidean space in \cite{Pin88} and, more recently, for Schr{\"o}dinger operators on weighted graphs in \cite{KPP}. Intuitively, if transience means that heat (or a random walker) escapes to infinity eventually, uniform transience means that it does so in all directions. The only connection in general between these properties is that uniform transience always implies the Feller property, see \cite{KLSW17}.
\smallskip In this note, we investigate how these three properties propagate between a base space and a regular covering of the space. As we will see, in general, a base graph is stochastically incomplete if and only if the cover is stochastically incomplete, while for the Feller property and for uniform transience we show that if the base satisfies these properties, then so does the covering, but not the other way around, see Theorem~\ref{t:covering}. In the case of finitely many sheets, all three statements become equivalences. For both stochastic incompleteness and the Feller property, we are guided in this by previous work on Riemannian manifolds. For stochastic incompleteness, this result is known for manifolds but the proof found in the literature uses stochastic partial differential equations, see, for example, \cite{Elw82}. It was asked in \cite{PS12} to find a deterministic proof of this result. In this note, we provide such a proof in Theorem~\ref{thm:scequ}. We use a result of Li, see Theorem~\ref{thm:pli}, which reveals a simple connection between the heat kernel on the base and the heat kernel on the cover and can be derived from the arguments found in \cite{Bor00, Li12}. We first develop this approach for graphs, see Theorem~\ref{t:equality} and Theorem~\ref{t:covering}~(i). Likewise, for the Feller property on manifolds, the recent paper \cite{PS12} offers such a result with a slightly more difficult proof, while it is rather a direct consequence of the connection between the heat kernels mentioned above, see Theorem~\ref{t:covering}~(ii). For uniform transience, we are not guided by any work on manifolds but rather exploit a connection between the various equivalent statements for uniform transience found in \cite{KLSW17} and the Green's function. Before establishing these connections, we further explore the Feller property for graphs. We first show that the Feller property enjoys a certain uniformity with respect to time in Lemma~\ref{l:uniformity}.
We then give some new conditions for the Feller property to hold. Specifically, we first utilize the elliptic characterization of the Feller property to give a condition in terms of inner degree growth in Theorem~\ref{t:Feller1}. This greatly improves a result found in \cite{Woj17} where a parabolic viewpoint is used to show that a uniform bound on the vertex degree implies the Feller property, in analogy to \cite{Yau78, Dod83}. Given that many criteria for the Feller property on Riemannian manifolds involve lower bounds on the Ricci curvature, it is surprising that such lower bounds do not imply the Feller property in the discrete setting, as we show in Proposition~\ref{p:curvature_feller}. We then use the heat kernel estimates proven in \cite{BHY17} to give another growth condition for the Feller property which connects the decay of the vertex measure and the growth of an intrinsic metric on the graph in Theorem~\ref{t:Feller2}. The concept of an intrinsic metric was first introduced in full generality in \cite{FLW14} and has found numerous applications in proving results analogous to those on Riemannian manifolds in the graph setting, see \cite{Fol11, BHK13, HKMW13, HKW13, Fol14a, Fol14b, Hua14, HK14, BKW15} and \cite{Kel15} for a survey of results in this direction. \bigskip The structure of the paper is as follows. In Section~\ref{s:setting} we introduce our main setting of infinite weighted graphs and define the concepts of stochastic incompleteness, the Feller property and uniform transience in this context. We also discuss the notion of a regular covering for weighted graphs. In Section~\ref{s:Feller} we take a closer look at the Feller property and prove the uniformity in time as well as the improved criteria for the Feller property mentioned above. In Section~\ref{s:coverings} we study the connections between a base and its covering with respect to these properties and also give some spectral consequences in this setting.
In Section~\ref{sec:manifold} we give a proof of the equivalence of stochastic incompleteness of a manifold and that of its cover. \section{Setting and basic definitions} \label{s:setting} \subsection{Laplacians and the heat equation} We consider weighted graphs and graph Laplacians as in \cite{KL12} with no killing term and with the additional assumption that our graphs are locally finite. That is, a \emph{graph} $G=(X,b,m)$ is a triple where $X$ is a countable set of \emph{vertices}, $b:X \times X \to [0,\infty)$ is an \emph{edge weight} which satisfies $b(x,x)=0$, $b(x,y)=b(y,x)$ and $| \{y \ | \ b(x,y)>0 \} | < \infty$, and $m: X \to (0,\infty)$ is a \emph{vertex measure} which can be extended to all subsets of $X$ by countable additivity. For $x \in X$, we let the \emph{weighted degree} of $x$ be given by \[ {\mathrm{Deg}}(x) = \frac{1}{m(x)}\sum_{y \in X} b(x,y). \] If $b(x,y)>0$, we say that $x$ and $y$ are \emph{connected} by an edge with weight $b(x,y)$ and write $x \sim y$. For $x \in X$, we call the set $\{ y \ | \ y \sim x\}$ the \emph{neighborhood} of $x$. We assume that all graphs are \emph{connected} in the usual sense of paths, that is, for all $x, y \in X$, there exists a sequence of vertices $(x_i)_{i=0}^n$ such that $x=x_0$, $y=x_n$ and $x_i \sim x_{i+1}$ for all $i=0,1,\ldots, n-1$. We denote the usual combinatorial graph metric by $d$, that is, $d(x,y):= \inf\{n\ | \ x=x_0\sim \ldots \sim x_n=y\}$. Likewise, we will say that a subset of $X$ is \emph{connected} if it is connected in the sense of paths which remain in the subset. If $b(x,y) \in \{0,1\}$ and $m=1$, then we say that the graph has \emph{standard edge weights and measure}. We let $C(X) = \{ f: X \to {\mathbb R} \}$ denote the space of all real-valued functions on $X$ and let $\mathcal{L}:C(X) \to C(X)$ denote the \emph{formal Laplacian} which is given by \[ \mathcal{L} f(x) = \frac{1}{m(x)} \sum_{y \in X} b(x,y) (f(x) - f(y)).
\] If $C_c(X)$ denotes the finitely supported functions in $C(X)$ and \[ \ell^2(X,m) = \{ f \in C(X) \ | \ \sum_{x \in X} f^2(x) m(x) < \infty \}\] with inner product $\as{f,g} = \sum_{x \in X} f(x)g(x)m(x)$ and associated norm $\| f \| = \as{f,f}^{1/2}$ denotes the Hilbert space of square summable functions with respect to $m$, then we let $L$ denote the smallest self-adjoint extension of $\mathcal{L}$ restricted to $C_c(X)$, see \cite{KL12, HKLW12, HKMW13} for more details. For $t>0$, we let $e^{-tL}$ denote the heat semigroup of $L$ and let $p_t(x,y)$ denote the \emph{heat kernel} of the graph which is defined by \[ e^{-tL}f(x) = \sum_{y \in X} p_t(x,y) f(y) m(y) \] for all functions $f \in \ell^2(X,m)$. We note that $u(x,t) = e^{-tL}f(x)$ is the minimal solution to the heat equation $(L + \partial_t)u =0$ with initial condition $u(x,0)=f(x)$ whenever $f\geq0$. In particular, $p_t(x,y)$ is the smallest non-negative function which satisfies $(L+ \partial_t)p_t(x,y)=0$, where the Laplacian is applied in either variable, and $p_0(x,y) = \oh{1}_x(y)$ where $\oh{1}_x = 1_x/m(x)$ is the delta function at $x$ divided by the measure at $x$. Furthermore, as we assume that the graph is connected, $p_t(x,y)>0$ for all $t>0$, $x,y \in X$, see \cite{KL12}. By monotone approximation, the heat semigroup can be extended to all $\ell^p(X,m)$ for $p \in [1,\infty]$, see \cite{KL12} for details. In particular, the heat semigroup can be applied to the constant function $1$, which is $1$ at all vertices. This fact will be needed for the definition of stochastic incompleteness given below. For vertices $x, y \in X$ we let $g(x,y)$ denote the \emph{Green's function} which is defined by \[ g(x,y) = \int_0^\infty p_t(x,y) dt. \] Note that this function is either always infinite or always finite. In the first case, the graph is called \emph{recurrent}; in the second, \emph{transient}.
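On a finite weighted graph these objects can be computed directly with a matrix exponential, which is a useful sanity check for the conventions above (a sketch; note that a finite graph always conserves heat, so the phenomena studied in this paper are genuinely infinite-graph effects):

```python
import numpy as np
from scipy.linalg import expm

# A weighted path on three vertices with non-uniform measure.
b = np.array([[0., 1., 0.],
              [1., 0., 2.],
              [0., 2., 0.]])
m = np.array([1., 2., 1.])

# Matrix of the formal Laplacian: (Lf)(x) = (1/m(x)) sum_y b(x,y)(f(x)-f(y)).
L = (np.diag(b.sum(axis=1)) - b) / m[:, None]

t = 0.7
P = expm(-t * L)        # P[x, y] = p_t(x, y) * m(y)
p_t = P / m[None, :]    # the heat kernel itself

assert np.all(p_t > 0)                  # connectedness gives positivity
assert np.allclose(P.sum(axis=1), 1.0)  # sum_y p_t(x,y) m(y) = 1
assert np.allclose(p_t, p_t.T)          # symmetry of the heat kernel
```

The last assertion reflects that $L$ is self-adjoint on $\ell^2(X,m)$, even though the matrix `L` itself is not symmetric when $m$ is non-constant.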
An alternative definition for the Green's function is given via resolvents as follows: \[ g(x,y) = \lim_{\alpha \to 0^+} (L+ \alpha)^{-1} \oh{1}_x(y). \] For a sequence of vertices $(x_n)$, we write $x_n \to \infty$ as $n \to \infty$ if $(x_n)$ leaves every finite set eventually. Furthermore, we let \[ C_0(X) = \ov{C_c(X)}^{\| \cdot \|_\infty} \] denote the set of functions \emph{vanishing at infinity} where $\| f \|_\infty = \sup_{x \in X} |f(x)|$. Hence, $f \in C_0(X)$ if and only if $f(x_n) \to 0$ for every $x_n \to \infty$. With these preparations we can define the three properties of the heat equation which we consider in this paper. \begin{definition} A graph $G=(X,b,m)$ is said to satisfy \begin{itemize} \item[(SI)] \emph{Stochastic incompleteness} if for some (all) $x \in X$, some (all) $t >0$ \[ \sum_{y \in X} p_t(x,y)m(y) < 1.\] \item[(FP)] The \emph{Feller property} if for some (all) $x \in X$, some (all) $t>0$ \[ p_t(x,y_n) \longrightarrow 0 \textup{ as } y_n \to \infty. \] \item[(UT)] \emph{Uniform transience} if there exists a constant $C>0$ such that for all $x \in X$ \[ g(x,x) \leq C. \] \end{itemize} \end{definition} \begin{remark}\label{r:introduction} \begin{itemize} \item[(i)] Note that all three properties have to do with heat escaping at infinity. However, all of the properties have quite a different flavor. In particular, both (SI) and (FP) depend strongly on the measure while (UT) does not. In fact, if the inequality in the definition of (UT) holds for one measure $m$, then it holds for all measures (with the same constant). Furthermore, while (SI) and (UT) require a large growth on the graph, (FP) can happen in the case of both large and small growth. The only general implication that holds between these properties is that (UT) $\Longrightarrow$ (FP) as noted in \cite{KLSW17} where (UT) is systematically introduced and studied. 
However, note that the Green's function does not appear in \cite{KLSW17} but (UT) is rather introduced via several other equivalent conditions. In particular, (UT) is equivalent to $\inf_x {\mathrm{cap}}(x) >0$ where ${\mathrm{cap}}(x)$ denotes the \emph{capacity} of $x$ which is defined via \[ {\mathrm{cap}}(x) = \inf_{\varphi \in C_c(X), \varphi(x)=1} Q(\varphi) \] where $Q(\varphi) = \frac{1}{2} \sum_{x,y \in X} b(x,y) (\varphi(x) - \varphi(y))^2$ denotes the \emph{energy} of $\varphi$. Furthermore, (UT) is also equivalent to the existence of a constant $C>0$ such that $C \| \varphi \|_\infty \leq Q(\varphi)$ for all $\varphi \in C_c(X)$. As pointed out by M. Schmidt, either of these conditions is easily seen to be equivalent to our definition of (UT) by using general principles such as the resolvent formulation of the Green's function and the Green's formula. \item[(ii)] An equivalent formulation for (FP) is that $e^{-tL}:C_0(X) \to C_0(X)$, as such, this is also called the $C_0$-\emph{conservativeness} property. \item[(iii)] The fact that (UT) $\Longrightarrow$ (FP) mentioned above follows from another characterization of (UT) given in \cite{KLSW17}. Namely, (UT) is equivalent to the fact that the domain of the form associated to $L$ is contained in $C_0(X)$ for all measures $m$. That is, \[ D(Q) = \ov{C_c(X)}^{\| \cdot \|_Q} \subseteq C_0(X) \] for all measures $m$ where $\| \varphi \|_Q = (\|\varphi\|^2 + Q(\varphi))^{1/2}$. Hence, if a graph satisfies (UT), then $e^{-tL}(C_c(X)) \subseteq D(Q) \subseteq C_0(X)$ and (FP) follows by continuity of the semigroup with respect to the sup norm. \item[(iv)] It is always true that $\sum_{y \in X} p_t(x,y)m(y) \leq 1$. In particular, if $\inf_x m(x) > 0$, then a graph automatically satisfies (FP) as pointed out in \cite{Woj17}. We will improve this result below to allow some decay to 0 on the part of $m$. 
\item[(v)] We mention that there are elliptic viewpoints for both (SI) and (FP) as follows: (SI) is equivalent to the existence of a positive, bounded function $v$ such that $\mathcal{L} v \leq \lambda v$ for some $\lambda <0$, see \cite{KL12}. (FP) is equivalent to the existence of a positive function $v$ which vanishes at infinity such that $\mathcal{L} v \geq \lambda v$ for some $\lambda<0$, see \cite{Woj17}. We will return to this later. \item[(vi)] It follows from the semigroup property and maximum principles that if the graph satisfies (SI) for some $x$ and some $t$, then it satisfies (SI) for all $x$ and all $t$, see \cite{KL12}. From the elliptic viewpoint for (FP), it is clear that (FP) satisfies the same property with respect to $x$. We will establish that (FP) satisfies an even stronger uniformity with respect to $t$ below, see Lemma~\ref{l:uniformity}. \end{itemize} \end{remark} \subsection{Regular coverings} We now make precise the notion of a regular covering in the setting of weighted graphs. We consider a graph as a 1-dimensional simplicial complex which is a metric space with respect to the combinatorial graph metric. \begin{definition} We say that a graph $\ow{G}= (\ow{X},\ow{b},\ow{m})$ is a \emph{regular covering} of $G=(X,b,m)$ if $(\ow{X},\ow{b},\ow{m})$ is a regular covering space in the topological sense and if the edge weights and measures are such that the deck transformations are graph isomorphisms. That is, there exists an onto map \[ \pi: \ow{X} \to X\] which, for every $\ow{x} \in \ow{X}$, is a graph isomorphism on the neighborhood of $\ow{x}$ and which satisfies \[ \ow{b}(\ow{x},\ow{y})= b(x,y) \qquad \textup{ and } \qquad \ow{m}(\ow{x}) = m(x)\] for all $\ow{x},\ow{y} \in \ow{X}$ with $\ow{x} \sim \ow{y}$, $\pi(\ow{x}) = x$ and $\pi(\ow{y}) = y$. We call $\ow{G}$ the \emph{cover} and $G$ the \emph{base} in this case. 
The set $\pi^{-1}(x)$ is called the \emph{fiber} over $x\in X$ and the cardinality of this set is referred to as the \emph{number of sheets} of the covering. \end{definition} In particular, note that $\ow{\mathcal{L}}(f \circ \pi)(\ow{x}) = \mathcal{L} f(x)$ for all $x \in X, \ow{x} \in \ow{X}$ such that $\pi(\ow{x})=x$ and all $f \in C(X)$ where $\ow{\mathcal{L}}$ ($\mathcal{L}$, respectively) denotes the Laplacian on $\ow{G}$ ($G$, respectively). Furthermore, as the covering is regular, for every $x \in X$ and all $\ow{x}_1, \ow{x}_2 \in \pi^{-1}(x)$, there exists a deck transformation $\gamma$ which is a graph isomorphism such that $\gamma(\ow{x}_1) = \ow{x}_2$. We will denote the set of all deck transformations by $\Gamma$. In particular, \[ \ow{p}_t(\ow{x},\ow{y}) = \ow{p}_t(\gamma (\ow{x}), \gamma (\ow{y})) \] for all $\gamma \in \Gamma, t \geq 0$ and $\ow{x}, \ow{y} \in \ow{X}$. \section{The Feller property}\label{s:Feller} In this section we take a closer look at the Feller property (FP). First, we show that (FP) satisfies a uniformity in both space and time. We then give some new criteria for (FP) to hold on graphs. \subsection{Uniformity} We show that if the heat kernel vanishes at infinity for one $t$, then it does so for all $t$. In fact, we show that (FP) is equivalent to an even stronger statement with respect to time. In order to do so, we utilize the semigroup property which states that $e^{-(s+t)L} = e^{-sL}e^{-tL}$ or, in terms of the heat kernel, \[ p_{s+t}(x,y) = \sum_{z \in X} p_s(x,z)p_t(z,y)m(z). \] Recall that a graph satisfies (FP) if $p_t(x, y_n) \to 0$ for all $t>0$ and all $x\in X$ as $y_n \to \infty$. We will say that a graph satisfies the \emph{uniform Feller property} (UFP) if \[ \max_{t \in [0,T]} p_t(x,y_n) \longrightarrow 0 \textup{ as } y_n \to \infty \] for all $x \in X$ and all $T>0$. We now show that for both of these properties, it suffices that the heat kernel vanishes at infinity at only one time and one vertex.
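The semigroup identity in this weighted convention, with the factor $m(z)$ inside the sum, can be verified numerically on a small graph (sketch):

```python
import numpy as np
from scipy.linalg import expm

# Weighted triangle with non-uniform measure m;
# heat kernel p_t(x, y) = expm(-tL)[x, y] / m(y).
b = np.array([[0., 1., 2.],
              [1., 0., 1.],
              [2., 1., 0.]])
m = np.array([1., 3., 2.])
L = (np.diag(b.sum(axis=1)) - b) / m[:, None]

def heat_kernel(t):
    return expm(-t * L) / m[None, :]

s, t = 0.4, 1.1
lhs = heat_kernel(s + t)
# p_{s+t}(x,y) = sum_z p_s(x,z) p_t(z,y) m(z)
rhs = heat_kernel(s) @ (m[:, None] * heat_kernel(t))
assert np.allclose(lhs, rhs)
```

Without the measure factor `m[:, None]` the identity fails whenever $m$ is non-constant, which is exactly why it appears in the displayed formula.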
\begin{lemma}\label{l:uniformity} Let $G=(X,b,m)$ be a graph with heat kernel $p$. The following statements are equivalent: \begin{itemize} \item[(i)] $p_{t_0}(x_0,y_n) \longrightarrow 0$ as $y_n \to \infty$ for some $t_0>0$, some $x_0 \in X$. \item[(ii)] $G$ satisfies (FP). \item[(iii)] $G$ satisfies (UFP). \end{itemize} \end{lemma} \begin{proof} (i) $\Longrightarrow$ (ii): Let $x \in X$ and $t < t_0$. It then follows from the semigroup property that \begin{align*} p_{t_0}(x_0,y_n) &= \sum_{z \in X} p_{t_0-t}(x_0,z) p_t(z,y_n) m(z)\\ &\geq p_{t_0-t}(x_0,x)p_t(x,y_n) m(x). \end{align*} Therefore, \[ p_t(x,y_n) \leq \frac{p_{t_0}(x_0,y_n)}{p_{t_0-t}(x_0,x)m(x)} \longrightarrow 0 \textup{ as } y_n \to \infty \] for all $x \in X$ and $t < t_0$. Taking finite sums yields $e^{-tL}(C_c(X)) \subseteq C_0(X)$ for $t < t_0$. A density argument yields $e^{-tL}: C_0(X) \to C_0(X)$ for $t<t_0$ and, finally, the semigroup property gives $e^{-tL}: C_0(X) \to C_0(X)$ for all $t\geq 0$. (ii) $\Longrightarrow$ (iii): For a fixed $x \in X$, note that $u(y,t) = e^{-t{\mathrm{Deg}}(x)}1_x(y)$ is a subsolution for the heat equation, that is, $(\mathcal{L} + \partial_t) u(y,t) \leq 0$, with $u(y,0) = 1_x(y)$. By applying a maximum principle such as Proposition~2.2 in \cite{Woj17}, it follows that $e^{-t{\mathrm{Deg}}(x)}1_x \leq e^{-tL}1_x$, that is, \[ e^{-t{\mathrm{Deg}}(x)}1_x(y) \leq p_t(x,y)m(y). \] Now, for $T>0$ and $t \in [0,T]$, we get that \begin{align*} p_T(x,y_n) &= \sum_{z \in X}p_{T-t}(x,z)p_t(z,y_n)m(z) \\ &\geq e^{-(T-t){\mathrm{Deg}}(x)}p_t(x,y_n). \end{align*} Therefore, \[ p_t(x,y_n) \leq e^{(T-t){\mathrm{Deg}}(x)}p_T(x,y_n) \leq e^{T{\mathrm{Deg}}(x)}p_T(x,y_n) \] for all $t\in [0,T]$. The conclusion then follows. (iii) $\Longrightarrow$ (i): This is clear. \end{proof} \subsection{Degree criteria} We now prove criteria for the Feller property (FP) involving vertex degree quantities. 
Any graph for which the weighted degree ${\mathrm{Deg}}(x)= \frac{1}{m(x)} \sum_{y \in X} b(x,y)$ is a bounded function on $X$ satisfies (FP), see Theorem~4.2 in \cite{Woj17}. We note that this condition is equivalent to $L$ being a bounded operator on $\ell^2(X,m)$, see \cite{KL12}. This result was obtained from the parabolic perspective and gives a counterpart to the result on manifolds which states that if the Ricci curvature is uniformly bounded from below, then the manifold satisfies (FP), see \cite{Yau78, Dod83}. However, in the manifold case, the optimal result for Ricci curvature is obtained by using probabilistic methods and allows for some rate of decay, see \cite{Hsu89} and the further discussion in Subsection~\ref{s:curvature}. \bigskip In order to prove our criteria, we take advantage of the elliptic perspective on the Feller property, first pointed out for manifolds in \cite{Aze74}, which formally carries over to the graph setting. That is, by combining Theorems~3.3~and~3.6 in \cite{Woj17}, we obtain that a graph satisfies (FP) if and only if there exists a positive function $v \in C_0(X)$ such that \[ \mathcal{L} v \geq \lambda v \] for some $\lambda <0$. For a vertex $x_0 \in X$, we let $S_r := S_r(x_0) := \{ x \ | \ d(x,x_0)=r \}$ where $d$ denotes the standard combinatorial graph metric. We let \[ D(r) = \max_{x \in S_r} {\mathrm{Deg}}(x) \] denote the maximal degree on a sphere. For $x \in S_r$, we let ${\mathrm{Deg}}_{\pm}(x) = \frac{1}{m(x)}\sum_{y \in S_{r\pm1}} b(x,y)$ denote the \emph{outer} and \emph{inner degrees} of $x$ and let \[ D_{\pm}(r) = \max_{x \in S_r} {\mathrm{Deg}}_\pm(x) \quad \textup{ and } \quad d_{\pm}(r) = \min_{x \in S_r} {\mathrm{Deg}}_\pm(x). \] \begin{theorem}\label{t:Feller1} Let $G=(X,b,m)$ be a graph. \begin{itemize} \item[(i)] If for some vertex $x_0 \in X$ \[ \sum_{r} \frac{1}{D_-(r)} = \infty, \] then the graph satisfies (FP).
\item[(ii)] If for some vertex $x_0 \in X$ \[ \sum_r \frac{D(r)-d_{-}(r)+1}{d_{-}(r)} < \infty, \] then the graph does not satisfy (FP). \end{itemize} \end{theorem} This yields the following immediate corollary. \begin{corollary}Let $G=(X,b,m)$ be a graph. If for some vertex $x_0 \in X$ \[ D_-(r)=O(r), \quad \mathrm{as}\ r\to \infty,\] then the graph satisfies (FP). \end{corollary} \begin{remark} We contrast the conditions in Theorem~\ref{t:Feller1} with some related criteria for (SI). Namely, in \cite{Woj08, Woj09}, it is shown that for graphs with standard edge weights and measure, if $\sum_r \frac{D_{-}(r)+1}{d_+(r)} < \infty$, then the graph satisfies (SI). This was later improved in \cite{Hua11} to $\sum_r \max_{x \in S_r} \frac{{\mathrm{Deg}}_-(x)}{{\mathrm{Deg}}_+(x)} <\infty$ by using the weak Omori-Yau maximum principle for (SI). Furthermore, in \cite{Woj11} it is shown that if $\sum_r \frac{1}{D_+(r)} = \infty$, then the graph does not satisfy (SI). Note that the conditions for (FP) and (SI) are opposite in some sense. The reason is that (SI) requires large growth while (FP) holds either due to large growth or small growth. The conditions presented here for (FP) have to do with small growth. \end{remark} \begin{proof}[Proof of Theorem~\ref{t:Feller1}] As mentioned above, (FP) is equivalent to the existence of a positive function $v \in C_0(X)$ such that $\mathcal{L} v \geq \lambda v$ for $\lambda<0$. For the proof of (i), we construct such a function depending only on the distance to $x_0$. That is, let $v(x_0) = 1$ and for any $x \in S_r$ with $r\geq 1$, let \[ v(r):= v(x) := \prod_{i=1}^r \frac{D_-(i)}{D_-(i) - \lambda}. \] As $\lambda<0$, $v$ is decreasing as the distance to $x_0$ increases and as \[ \frac{1}{v(x)} = \prod_{i=1}^r \left( 1 - \frac{\lambda}{D_-(i)} \right) \longrightarrow \infty \textup{ as } r \to \infty \] by the assumption that $\sum_r \frac{1}{D_-(r)}=\infty$, it follows that $v \in C_0(X)$.
We observe that $\mathcal{L} v(x_0) \geq 0 \geq \lambda v(x_0)$. Finally, for $x \in S_r$, $r\geq1$, we get that \begin{align*} \mathcal{L} v(x) &= {\mathrm{Deg}}_+(x)(v(r)-v(r+1)) + {\mathrm{Deg}}_-(x)(v(r)-v(r-1)) \\ &\geq {\mathrm{Deg}}_-(x)v(r)\left(1-\frac{D_-(r) - \lambda}{D_-(r)}\right) \\ &= {\mathrm{Deg}}_-(x)v(r)\left( \frac{\lambda}{D_-(r)} \right) \\ &\geq \lambda v(r) = \lambda v(x). \end{align*} This finishes the proof of (i). For (ii), let $v>0$ satisfy $\mathcal{L} v \geq \lambda v$ for $\lambda<0$. Such a positive function exists as the resolvent is positivity improving in the case that the graph is connected, see \cite{KL12}. Let $w(0) = v(x_0)$ and, for $r\geq1$, \[ w(r) = \prod_{i=1}^r \left( \frac{d_-(i)}{D(i)-\lambda} \right) v(x_0). \] Note that as \[ \frac{1}{w(r)} = \prod_{i=1}^r \left( 1 + \frac{D(i) - d_-(i) - \lambda}{d_-(i)} \right) \not \longrightarrow \infty \textup{ as } r \to \infty \] since $\sum_r \frac{D(r)-d_{-}(r)+1}{d_{-}(r)} < \infty$ it follows that $w \not \in C_0(X)$. We now claim by induction on $r$ that $v \geq w$. For $r=0$ this is clear by definition. Now, assume that $v(y) \geq w(r-1)$ for all $y \in S_{r-1}$ and let $x \in S_r$. Then \begin{align*} \lambda v(x) \leq \mathcal{L} v(x) &\leq {\mathrm{Deg}}(x) v(x) - \frac{1}{m(x)} \sum_{y \in S_{r-1}} b(x,y) v(y) \\ &\leq {\mathrm{Deg}}(x) v(x) - {\mathrm{Deg}}_-(x) w(r-1) \end{align*} from which it follows that \[ v(x) \geq \left(\frac{{\mathrm{Deg}}_-(x)}{{\mathrm{Deg}}(x)-\lambda} \right)w(r-1) \geq \left( \frac{d_-(r)}{D(r)-\lambda} \right) w(r-1) = w(r). \] This completes the proof as it follows that $v \not \in C_0(X)$. \end{proof} \subsection{Curvature and the Feller property}\label{s:curvature} As noted above, Theorem~\ref{t:Feller1} (i) extends Theorem 4.2 from \cite{Woj17} which gives (FP) in the case of uniformly bounded degree. 
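As a quick illustration, the test function $v$ from the proof of Theorem~\ref{t:Feller1}~(i) can be checked numerically on the half-line with standard weights, where $D_-(r)=1$ and $v(r)$ collapses to a geometric sequence (a sketch with $\lambda = -1$ and a finite truncation):

```python
lam = -1.0
R = 50
# Standard path graph on {0, ..., R}: b(r, r+1) = 1, m = 1, so D_-(r) = 1.
D_minus = [1.0] * (R + 1)  # index 0 unused

v = [1.0]  # v(x_0) = 1
for r in range(1, R + 1):
    # v(r) = prod_{i<=r} D_-(i) / (D_-(i) - lam); here each factor is 1/2.
    v.append(v[-1] * D_minus[r] / (D_minus[r] - lam))

# Verify L v >= lam * v at interior vertices, where
# L v(r) = Deg_+(r)(v(r) - v(r+1)) + Deg_-(r)(v(r) - v(r-1)).
for r in range(1, R):
    Lv = (v[r] - v[r + 1]) + (v[r] - v[r - 1])
    assert Lv >= lam * v[r] - 1e-12
```

Here $v(r) = 2^{-r}$, so $v$ vanishes at infinity and the inequality $\mathcal{L}v \geq \lambda v$ holds with room to spare, as the proof guarantees.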
The result on bounded degree in \cite{Woj17}, at least in terms of the proof, is an analogue to \cite{Yau78, Dod83} stating that any Riemannian manifold with Ricci curvature uniformly bounded from below will satisfy (FP). An optimal criterion for (FP) in terms of Ricci curvature in the setting of Riemannian manifolds is proven via probabilistic techniques in \cite{Hsu89}. Namely, Hsu shows that if $\kappa(r)$ is a lower bound on the Ricci curvature of a geodesic ball of radius $r$, then \[ \int^\infty \frac{1}{\sqrt{|\kappa(r)|}} dr = \infty \] implies (FP). Recently, there has been a tremendous interest in various notions of curvature, and especially of Ricci curvature, for graphs, see, for example, \cite{Sch99, LY10,LLY11,BJL12, JL14, BHLLMY15, HL16, Mun17, MW, johnson2015discrete, Oll09, ni2015ricci,maas2017entropic,rubleva2016ricci, eldan2017transport,kempton2017relationships, yamada2017curvature,gao2016one,lin2012ricci, liu2017rigidity,gao2016curvature,liu2017distance, lin2013ricci,lin2015equivalent,munch2014li, liu2016bakry,cushing2016bakry,saucan2009combinatorial, bhattacharya2015exact,fathi2016entropic, ollivier2012curved,paeng2012volume, sandhu2015graph,wang2014wireless,chung2014harnack, fathi2018curvature,erbar2012ricci}. In particular, two notions have been most prominently explored for finding analogues to results in the setting of Riemannian manifolds for graphs: the Bakry-{\'E}mery approach having its origins in \cite{BE85} and the coarse Ollivier-Ricci curvature originating in the work \cite{Oll09}. As such, given the results on Riemannian manifolds mentioned above, it would seem natural to ask if there is a Ricci curvature criterion for (FP) using these new curvature notions. In this subsection we discuss that this is, in general, not the case. In particular, we show that there exist graphs which satisfy arbitrary lower curvature bounds for both the Ollivier-Ricci and Bakry-{\'E}mery curvatures but do not satisfy (FP). 
\bigskip We first briefly discuss the definitions of the two curvatures mentioned above. First, the Ollivier-Ricci curvature originally defined via optimal transport in \cite{Oll09} and modified in \cite{LLY11} was recently extended to general graph Laplacians in \cite{MW}. Although we do not give the definition here, we mention that this curvature can be calculated explicitly for large classes of graphs. In particular, for any graph satisfying $X = {\mathbb N}_0$ with $b(x,y)>0$ if and only if $|x-y|=1$, it follows that $\kappa(r):=\kappa(r-1,r)$, the curvature between adjacent vertices $r-1$ and $r$, can be calculated as \begin{align}\label{e:curvature} \kappa(r) &= \frac{b(r-1,r)-b(r-1,r-2)}{m(r-1)} - \frac{b(r,r+1)-b(r,r-1)}{m(r)}\\ &=\big(d_+(r-1) - d_-(r-1) \big) - \big( d_+(r)-d_-(r) \big)\nonumber \end{align} where we let $d_+(r)=b(r,r+1)/m(r)$ and $d_-(r)=b(r-1,r)/m(r)$ as above and let $d_-(0)=0$. Second, Bakry-{\'E}mery curvature is defined via a Bochner formula as follows. The maximal lower Bakry-{\'E}mery curvature bound $K_{BE} \in C(X)$ is given by \[ K_{BE}:= \sup\{K \in C(X): \Gamma_2(f,f) \geq K \Gamma_1(f,f), \forall f \in C(X)\} \] where $\Gamma_0(f,g):=f\cdot g$ and for $k\geq 1$, \[ \Gamma_k(f,g) := - \mathcal{L} \Gamma_{k-1}(f,g) + \Gamma_{k-1}(\mathcal{L} f, g) + \Gamma_{k-1}( f, \mathcal{L} g). \] Using these formulas, we can construct examples of graphs which do not satisfy (FP) with both Ollivier-Ricci and Bakry-{\'E}mery curvature satisfying arbitrary lower bounds. \begin{proposition}\label{p:curvature_feller} For every sequence $(k_r)_{r \in {\mathbb N}}, k_r \in {\mathbb R}$, there exists a graph with $X= {\mathbb N}_0$ which does not satisfy (FP) such that both $\kappa(r) \geq k_r$ and $K_{BE}(r) \geq k_r$. \end{proposition} \begin{proof} We let $X = {\mathbb N}_0$ with $b(x,y)>0$ if and only if $|x-y|=1$.
As above, we let $d_+(r)= b(r,r+1)/m(r)$ and $d_-(r)=b(r-1,r)/m(r)$ and we first choose $b$ and $m$ to satisfy $d_+(r)=1$ and $\sum_r \frac 1 {d_-(r)}<\infty.$ Then, applying Theorem~\ref{t:Feller1} shows that the graph does not satisfy (FP). Furthermore, for every $k_r \in {\mathbb R}$, we can additionally choose $d_-$ so that $d_-(r)-d_-(r-1) \geq k_r$. Then, $\kappa(r) \geq k_r$ follows from \eqref{e:curvature}. For Bakry-{\'E}mery curvature, we further refine the choices of $d_-(r)$ to be increasing and to satisfy $d_-(r)-d_-(r-1) \geq 2(k_r \vee k_{r-1})$. A straightforward computation gives that $K_{BE}(r) \geq k_r$ is equivalent to $W_-(r) \geq 0$ and $W_+(r) \geq 0$ and $W_-(r)W_+(r) \geq 4d_-(r)d_+(r)$ with \[ W_-(r) := -d_-(r-1) + 3d_+(r-1) + d_-(r) - d_+(r) - 2k_r \] and \[ W_+(r) := -d_+(r+1) + 3d_-(r+1) + d_+(r) - d_-(r) - 2k_r, \] see Section~2 in \cite{HM17}. Since $d_-(r) - d_-(r-1) \geq 2 k_r$, we have $W_-(r) \geq 2$. Since $d_-(r+1) - d_-(r) \geq 2 k_r$, we have $W_+(r) \geq 2d_-(r+1)$. Since $d_-(r)$ is increasing, we obtain $W_-(r)W_+(r) \geq 4d_-(r)d_+(r)$ which proves that $K_{BE}(r) \geq k_r$. \end{proof} \begin{remark} We note that in the example above the curvatures turn out to be positive. In fact, with a bit more effort, we can show that for any sequence $k_r \in {\mathbb R}$, there exist graphs as above which do not satisfy (FP) and which have Ollivier-Ricci curvature $\kappa(r)=k_r$. This is surprising given the manifold case where, for example, all Cartan-Hadamard manifolds satisfy (FP), see \cite{Aze74, PS12}. To show this, we again take $X = {\mathbb N}_0$ with $b(x,y)>0$ if and only if $|x-y|=1$. We first set $m(0)=1$ and $b(0,1)=2$ giving ${\mathrm{Deg}}(0)=2$. Observe that by iterating \eqref{e:curvature}, $\kappa(r)=k_r$ for all $r \in {\mathbb N}$ is equivalent to \[ \frac{b(r,r+1) - b(r,r-1)}{m(r)} = {\mathrm{Deg}}(0)- \sum_{j=1}^{r} k_j =: C_r.
\] The idea of the construction relies on the following two observations: First, if we choose $m(r)$ small enough, then we can guarantee $b(r,r+1)$ is uniformly bounded above and below by a constant. We can, for example, set $m(r)$ such that $|C_r| m(r) \leq 2^{-r}$ and $b(r,r+1):=b(r,r-1) + C_r m(r)$ yielding $|b(r,r+1)-b(r-1,r)| \leq 2^{-r}$ which gives $b(r,r+1) \in \left[1, 3\right]$ for all $r \in {\mathbb N}_0$. Moreover, the inductive definition of $b(r,r+1)$ guarantees that $\kappa(r)=k_r$. Second, if we choose $m(r)$ small enough, then, we can guarantee \[ \sum_{r=1}^\infty m(\{r,r+1,\ldots\}) < \infty. \] This, in particular, holds true if $m(r)<2^{-r}$. To satisfy both of these conditions, we can simply set $m(r):= \frac{2^{-r}}{1+|C_r|}$. We easily see that $\sum_r \frac 1 {b(r,r+1)} = \infty$ since $b(r,r+1) \leq 3 $. Moreover, since $b(r,r+1)\geq 1$, \[ \sum_r \frac {m(\{r,r+1,\ldots\})}{b(r,r-1)} \leq \sum_r m(\{r,r+1,\ldots\}) < \infty. \] This implies that the graph does not satisfy the Feller property due to Theorem~4.13 in \cite{Woj17}. \end{remark} \subsection{An intrinsic metric criterion} As previously mentioned, if the vertex measure is uniformly bounded from below by a positive constant, then the graph satisfies (FP). We now improve this by allowing the measure to decay to 0. The rate of decay will involve the use of intrinsic metrics and a heat kernel estimate obtained in \cite{BHY17}. \bigskip We first introduce some concepts related to intrinsic metrics. For the full theory of intrinsic metrics for non-local Dirichlet forms, which extends the framework of local Dirichlet forms with killing term in \cite{Stu94}, see \cite{FLW14}. For other applications for graphs, see the survey \cite{Kel15}. \begin{definition} We say that a metric $\rho:X \times X \to [0,\infty)$ is \emph{intrinsic} if \[ \sum_{y \in X} b(x,y) \rho^2(x,y) \leq m(x) \] for all $x \in X$. 
We say that an intrinsic metric has \emph{finite jump size} $j>0$ if $\rho(x,y) \leq j$ for all $x \sim y$. Finally, an intrinsic metric is called \emph{proper} if all balls defined with respect to $\rho$ are finite. \end{definition} \begin{example}\label{e:intrinsic} A standard example for graphs first found in \cite{Hua11a} is to let \[ \rho(x,y) = \left( \max \{ {\mathrm{Deg}}(x), {\mathrm{Deg}}(y) \} \right) ^{-1/2} \] for all $x \sim y$ and then extend this to all vertices via paths. It will have finite jump size if the weighted degree is uniformly bounded from below and will be proper given that the weighted degree does not grow too rapidly. \end{example} We now recall a heat kernel estimate which follows from the Davies-Gaffney-Grigor'yan Lemma for graphs proven in \cite{BHY17} (see also \cite{Pan93, Dav93, Dav93b, Del99, MS00, Sch02, Fol11, BHY15} for earlier work). Namely, if $\rho$ is a proper intrinsic metric with finite jump size $j$, we get the following estimate for the heat kernel: \begin{equation}\label{e:estimate} p_t(x,y) \leq \frac{1}{\sqrt{m(x)m(y)}} \exp{(- \zeta_j(t,\rho(x,y)))} \end{equation} where \[ \zeta_j(t,r) = \frac{1}{j^2}\left( jr \cdot \textup{arsinh} \left(\frac{jr}{t}\right) - \sqrt{t^2 + (jr)^2} + t \right). \] We note that $\rho$ need not be proper for the result above to hold. However, for our result below we need that $y_n \to \infty$ if and only if $\rho(x, y_n) \to \infty$, and for this we need the metric to be proper. \begin{theorem}\label{t:Feller2} Let $G=(X,b,m)$ be a graph with a proper intrinsic metric $\rho$ with finite jump size $j>0$. If for some $x_0 \in X$ and some $C>0$ one has \begin{equation*} {- \log m(y)} \leq \frac{2 \rho(x_0,y)}{j} (\log \rho(x_0,y) + C) \end{equation*} for all $y \in X$, then $G$ satisfies (FP). \end{theorem} \begin{proof}[Proof of Theorem~\ref{t:Feller2}] We aim to show that $p_t(x_0,y) \to 0$ as $y \to \infty$ by using \eqref{e:estimate}.
We remark that due to Lemma~\ref{l:uniformity}, it suffices to show that $p_t(x_0,y) \to 0$ for some small $t>0$. First, we note that \[ \zeta_j(t,r) \geq \frac r j \log \left(\frac {jr}{t}\right) - \frac r j \geq \frac r j (\log r + C+1) \] if $t>0$ is chosen small enough, where the first inequality follows from $\textup{arsinh}(\alpha) \geq \log \alpha$ and $\alpha - \sqrt{\alpha^2 + \beta^2} \geq -\beta$ for $\alpha,\beta >0$. Hence by \eqref{e:estimate}, \begin{align*} \log p_t(x_0,y) &\leq -\frac 1 2 \log m(x_0) - \frac 1 2 \log m(y) - \zeta_j(t,\rho(x_0,y)) \\ &\leq -\frac 1 2 \log m(x_0) - \frac 1 2 \log m(y) - \frac {\rho(x_0,y)}j (\log \rho(x_0,y) + C+1)\\ & \leq -\frac 1 2 \log m(x_0) - \frac {\rho(x_0,y)}j \end{align*} where the last estimate follows by assumption. This implies that $p_t(x_0, \cdot) \in C_0(X)$ since $\rho$ is proper which finishes the proof. \end{proof} \begin{example} We give an example for which Theorem~\ref{t:Feller1} does not apply but Theorem~\ref{t:Feller2} does. For two positive functions $f, g: X \to (0,\infty)$ we will write $f \sim g$ if there exist positive constants $c_1, c_2>0$ such that $c_1 f(x) \leq g(x) \leq c_2 f(x)$ for all $x \in X$. Let $X={\mathbb N}$ with $b(x,y) = 1$ if $|x-y|=1$ and 0 otherwise and $m(r) \sim 1/r^2$ for $r \in {\mathbb N}$. Then ${\mathrm{Deg}}_-(r) \sim r^2$ so that Theorem~\ref{t:Feller1} does not apply. However, by using the intrinsic metric in Example~\ref{e:intrinsic}, we get that $\rho(r,r+1) \sim 1/r$ so that $\rho(0,r)\sim \log r$. Hence, the metric is proper and has finite jump size $j$. Therefore, \[ -\log m(r) \sim \log r \leq C \rho(0,r) \] for all $r \in {\mathbb N}$ so that Theorem~\ref{t:Feller2} shows that such a graph satisfies (FP). \end{example} \section{Coverings and the heat equation}\label{s:coverings} We now prove some connections between properties of the heat kernel on a graph and a covering of the graph. 
We will see that a graph is stochastically incomplete if and only if a covering of the graph is stochastically incomplete and the same is true for the Feller property and uniform transience in the case of finitely many sheets. In contrast, for coverings with infinitely many sheets, only one implication holds for both the Feller property and for uniform transience, namely, if the base satisfies (FP) or (UT), then the cover will also satisfy (FP) or (UT). We show by example that the other implications do not hold. \subsection{Heat kernel on the base and that on the cover} In order to prove these results, we show a very simple relation between the heat kernel on the base and the heat kernel on the cover following work on manifolds found in \cite{Bor00, Li12}. \bigskip Given a regular covering $\pi: \ow{X} \to X$ and the heat kernel $\ow{p}$ on $\ow{G}$, we define a new function on the base graph as follows: \[ q_t(x,y) = \sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t(\ow{x},\ow{y}) \] for $t\geq0$, $x,y \in X$ where $\ow{x} \in \pi^{-1}(x)$. We will prove that $q_t(x,y) = p_t(x,y)$ where $p$ is the heat kernel on $G$. Throughout, we will let $L$ denote the Laplacian on $G$ while $\ow{L}$ will denote the Laplacian on $\ow{G}$. We will also put a subscript to indicate in which variable the Laplacian is being applied when necessary. We first show that $q_t(x,y)$ is well-defined. That is, if $\ow{x}_1,\ow{x}_2 \in \pi^{-1}(x)$, then we must show that $\sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t(\ow{x}_1,\ow{y})= \sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t(\ow{x}_2,\ow{y})$. As the covering is regular, there exists a deck transformation $\gamma$ such that $\gamma(\ow{x}_1) = \ow{x}_2$. 
It then follows that \begin{align*} \sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t(\ow{x}_2,\ow{y}) &= \sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t(\gamma (\ow{x}_1),\ow{y}) \\ &= \sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t( \ow{x}_1,\gamma^{-1} (\ow{y})) = \sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t( \ow{x}_1, \ow{y}) \end{align*} since $\gamma$ is an isomorphism and $\Gamma$ acts transitively on each fiber. Thus, $q$ is well-defined. Furthermore, note that $q$ is finite as \[ q_t(x,y) = \sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t(\ow{x},\ow{y}) = \frac{1}{m(y)} \sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t(\ow{x},\ow{y}) \ow{m}(\ow{y}) \leq \frac{1}{m(y)}. \] Next we will show that $q$ satisfies the heat equation on $G=(X,b,m)$. In order to do so, we need to justify the interchange of the derivative and summation via the use of the dominated convergence theorem. Therefore, we need to bound the absolute value of the derivatives of $\ow{p}$ by a summable function independently of $t$. This is a general phenomenon which does not depend on coverings, so we state it in general and then apply it to $q$. \begin{lemma}\label{l:uniform} For every $T>0$ and $x \in X$, there exists $f_x:=f_{x,T} \in \ell^1(X,m)$ such that \[ \max_{t \in [0,T]} p_t(x,y) \leq f_{x}(y) \] for all $y \in X$. \end{lemma} \begin{proof} Let $\oh{1}_x(y) = 1_x(y)/m(x)$ and note that $p_0(x,y) = \oh{1}_x(y)$. As $p_t(x,y)$ satisfies the heat equation in either variable, \begin{align*} p_t(x,y) &= \oh{1}_x(y) + \int_0^t \partial_s p_s(x,y) ds \\ &\leq \oh{1}_x(y) + \int_0^T | \partial_s p_s(x,y)| ds \\ &= \oh{1}_x(y) + \int_0^T | L_x p_s(x,y)| ds \\ &\leq \oh{1}_x(y) + \int_0^T \left({\mathrm{Deg}}(x) p_s(x,y) + \frac{1}{m(x)} \sum_{z \in X} b(x,z)p_s(z,y) \right) ds \\ &=: f_x(y). \end{align*} Since this holds for all $t \in [0,T]$, it is clear that $\max_{t \in [0,T]} p_t(x,y) \leq f_x(y)$.
Now, by applying Fubini's Theorem and using that $\sum_{y \in X} p_s(x,y) m(y) \leq 1$, we get that \begin{align*} \sum_{y \in X} f_x(y)m(y) \leq 1 + 2T {\mathrm{Deg}}(x) \end{align*} so that $f_x \in \ell^1(X,m)$. \end{proof} We now use the lemma above to show that $q$ satisfies the heat equation on $G$. \begin{lemma}\label{l:heat_equation} For all $x,y \in X$, $t\geq0$, we have that \[ (L + \partial_t) q_t(x,y) = 0 \] where the Laplacian is applied in either variable. Furthermore, \[ q_0(x,y) = p_0(x,y). \] \end{lemma} \begin{proof} Recall that by the definition of the covering, we have that $(L + \partial_t) (\ow{p} \circ \pi^{-1})=0$ where $L$ is applied in either variable. Therefore, in order to show that $q$ satisfies the heat equation on $X$, it suffices to show that \[ \partial_t \sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t(\ow{x},\ow{y}) = \sum_{\ow{y} \in \pi^{-1}(y)} \partial_t \ow{p}_t(\ow{x},\ow{y}), \] that is, that the summation and derivative commute. For this, it suffices to observe that $\ow{p}$ is differentiable in $t$, that, as noted above, $\sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t(\ow{x},\ow{y}) \leq 1/m(y)$ for all $t$ and that for any $T>0$ and $t \in [0,T]$ with $f_{\ow{x}}$ as in Lemma~\ref{l:uniform}, we have \begin{align*} | \partial_t \ow{p}_t(\ow{x}, \ow{y}) | &= | \ow{L}_{\ow{x}} \ow{p}_t(\ow{x},\ow{y}) | \\ &\leq {\mathrm{Deg}}(x) f_{\ow{x}}(\ow{y}) + \frac{1}{\ow{m}(\ow{x})} \sum_{\ow{z} \in \ow{X}} \ow{b}(\ow{x},\ow{z})f_{\ow{z}}(\ow{y}). \end{align*} Since the upper bound is a finite sum of functions which are in $\ell^1(\ow{X}, \ow{m})$, with norm independent of $t$, and since \[ \sum_{\ow{y} \in \pi^{-1}(y)} |f(\ow{y}) | = \frac{1}{m(y)} \sum_{\ow{y} \in \pi^{-1}(y)} | f(\ow{y}) | \ow{m}(\ow{y})\] for any $f \in \ell^1(\ow{X}, \ow{m})$ the first statement follows. 
For the second statement, note that by applying Lemma~\ref{l:uniform} again we get that \[ \lim_{t \to 0^+} \sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t(\ow{x},\ow{y}) = \sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_0(\ow{x},\ow{y}) = \sum_{\ow{y} \in \pi^{-1}(y)} \oh{1}_{\ow{x}}(\ow{y}) = \oh{1}_x(y) = p_0(x,y). \] \end{proof} As $p_t(x,y)$ is the minimal non-negative solution to the heat equation on $G$ by \cite{KL12}, it follows that \[ p_t(x,y) \leq q_t(x,y). \] We now show that the opposite inequality is also true. In order to do so, we note the following summation formula. If $\ow{D} \subseteq \ow{X}$, $D = \pi(\ow{D})$ and $\varphi \in C_c(X)$, then \begin{equation}\label{e:summation} \sum_{y \in D} \left( \sum_{\ow{y} \in \pi^{-1}(y) \cap \ow{D}} \ow{p}_t(\ow{x},\ow{y}) \right)\varphi(y) m(y) = \sum_{\ow{y} \in \ow{D}}\ow{p}_t(\ow{x},\ow{y}) (\varphi \circ \pi)(\ow{y}) \ow{m}(\ow{y}), \end{equation} which is a discrete version of the co-area formula for the map $\pi$. Now, we take any exhaustion sequence $\ow{D}_i$ of $\ow{X}$. That is, the $\ow{D}_i$ are finite, connected, increasing subsets of $\ow{X}$ such that $\ow{X} = \cup_i \ow{D}_i$. We let $\ow{p}^i$ denote the Dirichlet heat kernels on $\ow{D}_i$. These satisfy the heat equation on \[ {\mathrm{int }} \ow{D}_i = \{ \ow{x} \in \ow{D}_i \ | \ \ow{y} \in \ow{D}_i \textup{ for all } \ow{y} \sim \ow{x} \}, \] the \emph{interior} of $\ow{D}_i$, and vanish on the boundary $\partial \ow{D}_i = \ow{D}_i \setminus {\mathrm{int }} \ow{D}_i.$ For convenience, one may extend $\ow{p}^i$ to the entire graph by setting $\ow{p}^i(\ow{x},\ow{y})=0$ whenever $\ow{x}$ or $\ow{y}\in \ow{X}\setminus \ow{D}_i$. By maximum principle arguments it follows that $\ow{p}^i \to \ow{p}$ monotonically as $i \to \infty$, see \cite{Woj08,KL12} for more details.
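We briefly indicate why the summation formula \eqref{e:summation} holds. The sets $\pi^{-1}(y) \cap \ow{D}$ for $y \in D$ partition $\ow{D}$ and, as the edge weight and measure are lifted via $\pi$, we have $\ow{m}(\ow{y}) = m(y)$ and $(\varphi \circ \pi)(\ow{y}) = \varphi(y)$ for every $\ow{y} \in \pi^{-1}(y)$. Hence, \[ \sum_{y \in D} \left( \sum_{\ow{y} \in \pi^{-1}(y) \cap \ow{D}} \ow{p}_t(\ow{x},\ow{y}) \right)\varphi(y) m(y) = \sum_{y \in D} \sum_{\ow{y} \in \pi^{-1}(y) \cap \ow{D}} \ow{p}_t(\ow{x},\ow{y}) (\varphi \circ \pi)(\ow{y}) \ow{m}(\ow{y}) = \sum_{\ow{y} \in \ow{D}}\ow{p}_t(\ow{x},\ow{y}) (\varphi \circ \pi)(\ow{y}) \ow{m}(\ow{y}). \]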
We fix $x \in X,$ choose $\ow{x}\in \pi^{-1}(x)$ and define \[ q_t^i(y) = \sum_{\ow{y} \in \pi^{-1}(y)\cap \ow{D}_i} \ow{p}_t^i(\ow{x},\ow{y}) \] and note that by the monotone convergence theorem $q_t^i(y) \to q_t(x,y)$ as $i \to \infty$. We furthermore note that if $D_i = \pi(\ow{D}_i)$, then $q_t^i(y) = 0$ for $y \in \partial D_i$ and that $q_0^i(y) \leq p_0(x,y)$. We now show that $q^i$ is a subsolution for the heat equation. The following is adapted from \cite{Bor00}. \begin{lemma}\label{l:heat_equation_2} For $y \in D_i = \pi (\ow{D}_i)$ we have that \[ (L + \partial_t) q_t^i(y) \leq 0. \] \end{lemma} \begin{proof} Let $\oh{1}_y = 1_y/m(y)$. By Green's formula and using the summation equality (\ref{e:summation}) twice, we have that \begin{align*} Lq_t^i(y) &= \as{L q_t^i, \oh{1}_y} = \as{q_t^i, L \oh{1}_y} = \sum_{z \in D_i} q_t^i(z) L \oh{1}_y(z) m(z)\\ &= \sum_{z \in D_i} \left( \sum_{\ow{z} \in \pi^{-1}(z) \cap \ow{D}_i} \ow{p}_t^i(\ow{x},\ow{z}) \right) L \oh{1}_y(z) m(z) \\ &= \sum_{\ow{z} \in \ow{D}_i} \ow{p}_t^i(\ow{x},\ow{z}) (L \oh{1}_y \circ \pi)(\ow{z})\ow{m}(\ow{z}) \\ &= \sum_{\ow{z} \in \ow{D}_i} \ow{p}_t^i(\ow{x},\ow{z}) \ow{L}(\oh{1}_y \circ \pi)(\ow{z})\ow{m}(\ow{z}) \\ &= \sum_{\ow{z} \in \ow{D}_i} \ow{L}_{\ow{z}} \ow{p}_t^i(\ow{x},\ow{z}) (\oh{1}_y \circ \pi)(\ow{z})\ow{m}(\ow{z}) \\ &= \sum_{\ow{z} \in {\mathrm{int }} \ow{D}_i} -\partial_t \ow{p}_t^i(\ow{x},\ow{z}) (\oh{1}_y \circ \pi)(\ow{z})\ow{m}(\ow{z}) \\ & \qquad - \sum_{\ow{z} \in \partial \ow{D}_i} \sum_{\ow{w} \in \ow{D}_i} \ow{b}(\ow{z},\ow{w}) \ow{p}_t^i(\ow{x},\ow{w}) (\oh{1}_y \circ \pi)(\ow{z})\\ &\leq -\partial_t \sum_{\ow{z} \in \ow{D}_i} \ow{p}^i_t(\ow{x},\ow{z}) (\oh{1}_y \circ \pi)(\ow{z})\ow{m}(\ow{z}) \\ &=- \partial_t \sum_{z \in D_i} q_t^i(z) \oh{1}_y(z)m(z) \\ &= - \partial_t q_t^i(y). \end{align*} \end{proof} \begin{theorem}\label{t:equality} Let $\ow{G}=(\ow{X},\ow{b},\ow{m})$ with heat kernel $\ow{p}$ be a regular covering of $G=(X,b,m)$ with heat kernel $p$.
For all $t\geq0$, $x,y \in X$ and $\ow{x} \in \pi^{-1}(x)$, let $q_t(x,y) = \sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t(\ow{x},\ow{y})$. Then, \[ q_t(x,y) = p_t(x,y). \] \end{theorem} \begin{proof} As $q_t(x,y)$ satisfies the heat equation on $X$ with initial condition $p_0(x,y)$ by Lemma~\ref{l:heat_equation}, it follows that $p_t(x,y) \leq q_t(x,y)$ as $p_t(x,y)$ is the minimal non-negative solution. On the other hand, by Lemma~\ref{l:heat_equation_2} and using a maximum principle as, for example, Proposition~2.2 in \cite{Woj17}, it follows that $q_t^i(y) \leq p_t(x,y)$. Since $q_t^i(y) \to q_t(x,y)$ as $i \to \infty$, it follows that $q_t(x,y) \leq p_t(x,y)$ and the conclusion follows. \end{proof} We note from the above that it is always true that $\ow{p}_t(\ow{x},\ow{y}) \leq p_t(x,y)$ for all $t \geq 0$, $\ow{x} \in \pi^{-1}(x)$ and $\ow{y} \in \pi^{-1}(y)$. In the case of finitely many sheets, we get that the other inequality holds as well on the diagonal up to a multiple of the number of sheets. \begin{lemma}\label{l:heatKernelFiniteSheets} Let $\ow{G}=(\ow{X},\ow{b},\ow{m})$ be a regular covering of $G=(X,b,m)$ with $n$ sheets. Then, \[ p_t(x,x) \leq n \cdot \ow{p}_t (\ow{x},\ow{x}) \] for all $\ow x \in \ow X$ and $x=\pi(\ow x)$. \end{lemma} \begin{proof} We first note that if $\ow{x}_1,\ow{x}_2 \in \pi^{-1}(x)$ for $x \in X$ and if $\gamma \in \Gamma$ is a deck transformation such that $\gamma(\ow{x}_1)=\ow{x}_2$, then \[ \ow{p}_t(\ow{x}_1,\ow{x}_1) = \ow{p}_t(\gamma(\ow{x}_1),\gamma(\ow{x}_1)) = \ow{p}_t(\ow{x}_2,\ow{x}_2). \] Furthermore, as the semigroup is a positive operator, letting $\oh{1}_x = 1_x / m(x)$ we immediately get that \[ 0 \leq \as{e^{-t \ow{L}}(\oh{1}_{\ow{x}_1} - \oh{1}_{\ow{x}_2}), \oh{1}_{\ow{x}_1} - \oh{1}_{\ow{x}_2} }= \ow{p}_t(\ow{x}_1,\ow{x}_1) + \ow{p}_t(\ow{x}_2,\ow{x}_2) - 2\ow{p}_t(\ow{x}_1, \ow{x}_2)\] so that $\ow{p}_t(\ow{x}_1,\ow{x}_2) \leq \ow{p}_t(\ow{x}_1, \ow{x}_1)$.
Therefore, if $\pi^{-1}(x) = \{ \ow{x}_i \}_{i=1}^n$, then by Theorem~\ref{t:equality} we get that \[ p_t(x,x) = \sum_{i=1}^n \ow{p}_t(\ow{x}_1,\ow{x}_i) \leq n \cdot \ow{p}_t(\ow{x}_1,\ow{x}_1). \] \end{proof} \subsection{Main result} The equality $q_t(x,y) = p_t(x,y)$ gives our main results on coverings of graphs and the heat equation as presented below. We note that for (ii) and (iii) the other implication does not hold in the case of coverings with infinitely many sheets, as we will show by example below. \begin{theorem}\label{t:covering} Let $\ow{G}= (\ow{X},\ow{b},\ow{m})$ be a regular covering of $G=(X,b,m)$. \begin{itemize} \item[(i)] $\ow{G}$ satisfies (SI) if and only if $G$ satisfies (SI). \item[(ii)] If $G$ satisfies (FP), then $\ow{G}$ satisfies (FP). \item[(iii)] If $G$ satisfies (UT), then $\ow{G}$ satisfies (UT). \end{itemize} Furthermore, \textup{(ii)} and \textup{(iii)} become equivalences when the regular covering has finitely many sheets. \end{theorem} \begin{proof} For (i), note that by Theorem~\ref{t:equality} \begin{align*} \sum_{y \in X} p_t(x,y)m(y) &= \sum_{y \in X} q_t(x,y) m(y) \\ &= \sum_{y \in X} \sum_{\ow{y} \in \pi^{-1}(y)} \ow{p}_t(\ow{x},\ow{y}) m(y) = \sum_{\ow{y} \in \ow{X}}\ow{p}_t(\ow{x},\ow{y}) \ow{m}(\ow{y}) \end{align*} from which the conclusion follows immediately. For (ii), note that Theorem~\ref{t:equality} implies that $\ow{p}_t(\ow{x},\ow{y}) \leq p_t(x,y)$ for all $t\geq 0$ and $\ow{x} \in \pi^{-1}(x), \ow{y} \in \pi^{-1}(y)$. Fix some $\ow{x}\in \ow{X}$ and $t>0.$ It suffices to show that for any sequence $\ow{y}_n \to \infty$ in $\ow{X},$ there is a subsequence $\ow{y}_{n_k}$ such that \[ \ow{p}_t(\ow{x},\ow{y}_{n_k}) \longrightarrow 0, \quad \mathrm{as}\ k \to \infty. \] Let $y_n = \pi(\ow{y}_n).$ If $y_n \to \infty$ in $X$, then the result is clear as we assume that $G$ satisfies (FP).
If not, one can extract a subsequence, still denoted by $y_n,$ such that $\{ y_n \}_{n=1}^\infty \subseteq D$ for some finite set $D$ in $X.$ We may further choose a subsequence $y_{n_k}$ of $y_n$ such that $\ow{y}_{n_k}$ are contained in the same fiber. It follows that $\ow{p}_t(\ow{x},\ow{y}_{n_k}) \to 0$ as $k \to \infty$ as these terms are the tail of a convergent series $q_t(x, y_{n_k})$. Therefore, the conclusion of (ii) follows. In the case of finitely many sheets, if $\ow{G}$ satisfies (FP) and $y_n \in X$ satisfies $y_n \to \infty$, then for any choice $\ow{y}_n \in \pi^{-1}(y_n)$ it follows that $\ow{y}_n \to \infty$ in $\ow{X}$. As $\ow{G}$ satisfies (FP), it follows that $\ow{p}_t(\ow{x},\ow{y}_n) \to 0$ as $\ow{y}_n \to \infty$. Therefore, by Theorem~\ref{t:equality}, \[ p_t(x,y_n) = q_t(x,y_n) = \sum_{\ow{y}_n \in \pi^{-1}(y_n)} \ow{p}_t(\ow{x},\ow{y}_n) \longrightarrow 0 \] as $y_n \to \infty$ since the number of terms in the sum above is always equal to the number of sheets. For (iii), it is clear from the fact that $\ow{p}_t(\ow{x},\ow{y}) \leq p_t(x,y)$ that \[ g(x,x) = \int_0^\infty p_t(x,x) dt \geq \int_0^\infty \ow{p}_t(\ow{x},\ow{x}) dt = \ow{g}(\ow{x},\ow{x}) \] from which the conclusion follows immediately. Now, assume that the covering has $n$ sheets and that $\ow{G}$ satisfies (UT) so that $\ow{g}(\ow{x},\ow{x}) \leq C$ for all $\ow{x} \in \ow{X}$. Letting $x \in X$ and $\ow x \in \pi^{-1}(x)$, by Lemma~\ref{l:heatKernelFiniteSheets} we get that \[ g(x,x) = \int_0^\infty p_t(x,x) dt \leq \int_0^\infty n \cdot \ow{p}_t(\ow{x},\ow{x}) dt \leq n \cdot \ow{g}(\ow{x},\ow{x}) \leq nC \] so that $G$ satisfies (UT). \end{proof} The statement of the equality of (FP) on $G$ and $\ow{G}$ in the case of finitely many sheets gives an analogy to Proposition~8.1 in \cite{PS12}. However, neither (FP) nor (UT) become equality for the case of infinitely many sheets as the following example shows. 
\begin{example}[$\ow{G}$ (UT) $\not \Longrightarrow G$ (FP)]\label{e:converse} Let \[ G = {\mathbb Z}_{1,m} \times C_3 \times C_3\] where $C_3$ are ordinary 3-cycles with standard edge weight and measure and ${\mathbb Z}_{1,m} = ({\mathbb Z}, b, m)$ with $b(x,y) = 1$ if $|x-y|=1$ and 0 otherwise and $m(x)=m(-x)=m(r)$ if $|x|=r$ where $m$ satisfies $ \sum_{r=0}^\infty \sum_{k=r+1}^\infty m(k) <\infty. $ It follows by Theorem~4.13 in \cite{Woj17} that ${\mathbb Z}_{1,m}$ is not Feller. As such, by Theorem~3.3 in \cite{Woj17}, there exists a function $v \not \in C_0({\mathbb Z})$ with $v(0)=1$ and such that $\mathcal{L}_{{\mathbb Z}_{1,m}} v(z) = -v(z)$ for all $z \not = 0$. This function can be easily extended to $G$ by letting $w(z, x_1, x_2) = v(z)$ for all vertices $(z,x_1,x_2)$ in $G$. It follows that $w$ satisfies $\mathcal{L} w = -w$ away from $(0,x_1,x_2)$, $w(0,x_1,x_2)=1$ and $w \not \in C_0(X)$. As such, $G$ does not satisfy (FP) and, consequently, does not satisfy (UT). We now let \[\ow{G} = {\mathbb Z}_{1,m} \times {\mathbb Z} \times {\mathbb Z}\] where ${\mathbb Z}$ is the ordinary integer lattice with standard edge weight and measure. As ${\mathbb Z}$ is the universal cover of $C_3$, we let $\pi_i: {\mathbb Z} \to C_3$ denote the covering maps. Then, $\pi: \ow{X} \to X$ given by $\pi(z,\ow{x}_1,\ow{x}_2) = (z, \pi_1(\ow{x}_1), \pi_2(\ow{x}_2))$ is a regular covering. Topologically, $\ow{G}$ is homeomorphic to ${\mathbb Z}^3$, the standard 3-dimensional integer lattice, and all such graphs are uniformly transient, independently of the measure, see Corollary~2.6 in \cite{KLSW17}. Thus, $\ow{G}$ satisfies (UT) and (FP) while $G$ satisfies neither. \end{example} \subsection{Spectral applications} We now give some spectral applications of Theorem~\ref{t:equality}. 
In particular, we look at the bottom of the spectrum of the Laplacian which is given by \[ \lambda_0(L) = \inf_{\varphi \in C_c(X), \varphi \not = 0} \frac{\as{L\varphi, \varphi}}{\| \varphi \|^2} \] and give some connections between the bottom of the spectrum of a graph and its cover. We also investigate the heat kernel decay in this setting. \bigskip By applying an analogue to a theorem of Li \cite{Li86}, proven in the graph setting in \cite{HKLW12, KLVW15}, we have the following result concerning $\lambda_0(L)$. \begin{corollary} Let $\ow{G}=(\ow{X},\ow{b},\ow{m})$ be a regular covering of $G=(X,b,m)$. Let $\lambda_0(L)$ and $\lambda_0(\ow{L})$ denote the bottom of the spectrum of the Laplacian on $G$ and $\ow{G}$, respectively. Then, \[ \lambda_0(\ow{L}) \geq \lambda_0(L). \] Furthermore, we have equality if the covering has finitely many sheets. \end{corollary} \begin{proof} The heat kernel and the bottom of the spectrum are connected as follows: \[ \lim_{t \to \infty} \frac{\ln p_t(x,y)}{t} = -\lambda_0(L) \] for any $x,y \in X$, see \cite{HKLW12, KLVW15}. The result is then immediate since $\ow{p}_t(\ow{x},\ow{y}) \leq p_t(x,y)$ which follows from Theorem~\ref{t:equality}. In the case of finitely many sheets, Lemma~\ref{l:heatKernelFiniteSheets} yields that $p_t(x,x) \leq n \cdot \ow{p}_t (\ow{x},\ow{x})$ where $n$ is the number of sheets. This immediately gives $\lambda_0(\ow{L}) \leq \lambda_0(L)$ as desired. \end{proof} We also have the following analogue to a result of Chavel/Karp, see Corollary~3 in \cite{CK91}. \begin{corollary} Let $\ow{G}=(\ow{X},\ow{b},\ow{m})$ be a regular covering of $G=(X,b,m)$. Let $\lambda_0(\ow{L})$ denote the bottom of the spectrum of the Laplacian on $\ow{G}$. If the number of sheets of the covering is infinite, then \[ \lim_{t \to \infty} e^{t\lambda_0(\ow{L})}\ow{p}_t(\ow{x},\ow{y}) =0. \] \end{corollary} \begin{proof} It is always true that the limit above exists, see \cite{HKLW12, KLVW15}.
Now, if $\lambda_0(\ow{L})=0$, then $\lim_{t \to \infty} \ow{p}_t(\ow{x},\ow{y})=0$ by Corollary~8.2 in \cite{HKLW12} as $\ow{m}(\ow{X})=\infty$ since the number of sheets is infinite. If $\lambda_0(\ow{L})>0$ and the limit above is positive, then there exists a positive, normalized eigenfunction $\phi$ to $\lambda_0(\ow{L})$ in $\ell^2(\ow{X},\ow{m})$ and it turns out in this case that the eigenspace of $\lambda_0(\ow{L})$ is one-dimensional, see \cite{Sul87, HKLW12, KLVW15}. By an easy calculation using the invariance under the deck transformation group $\Gamma$, it follows that $\gamma^* \phi$ is also an eigenfunction for $\lambda_0(\ow{L})$ in $\ell^2(\ow{X},\ow{m})$ for any $\gamma \in \Gamma$ where $(\gamma^* \phi)(\ow{x}) = \phi(\gamma(\ow{x}))$. As the eigenspace of $\lambda_0(\ow{L})$ is one-dimensional, it follows that there exists $\alpha:\Gamma \to {\mathbb R}_+$ such that $\gamma^* \phi= \alpha(\gamma) \phi$ for all $\gamma \in \Gamma$. Therefore, \begin{align*} \| \phi \|^2 &= \sum_{x \in X} \sum_{\ow{x} \in \pi^{-1}(x)} \phi(\ow{x})^2 \ow{m}(\ow{x}) \\ &= \sum_{x \in X}m(x) \sum_{\gamma \in \Gamma} (\gamma^* \phi)^2(\ow{x}) \\ &= \sum_{x \in X}m(x) \phi^2(\ow{x}) \sum_{\gamma \in \Gamma} (\alpha(\gamma))^2. \end{align*} As this is independent of the choice of $\ow{x} \in \pi^{-1}(x)$, it follows that \[ \phi(\ow{x}_1) = \phi(\ow{x}_2) \] for all $\ow{x}_1, \ow{x}_2 \in \pi^{-1}(x)$. Since the measure is independent of the sheet and the number of sheets is infinite, it follows that $\phi$ cannot be in $\ell^2(\ow{X},\ow{m})$. Therefore, the limit must be zero. \end{proof} \section{Covering manifolds and heat kernels}\label{sec:manifold} In this section, we give a deterministic proof of the fact that stochastic incompleteness of a complete Riemannian manifold is equivalent to stochastic incompleteness of the cover, thus answering a question raised in \cite{PS12}.
The proof is essentially a synthesis of results found in \cite{Bor00, Li12} and the basic argument was already given for graphs in the preceding section though the technicalities are different in the manifold setting. \bigskip Let $(M,g)$ be a complete Riemannian manifold. Let $\widetilde{M}$ be a regular covering of $M,$ $\pi:\widetilde{M} \to M$ be the covering map and $\Gamma$ be the deck transformation group. We denote by $\widetilde{g}$ the lifted Riemannian metric on $\widetilde{M}$ via the map $\pi.$ One can then show that $(\widetilde{M},\widetilde{g})$ is complete. We denote by $\Delta$ ($\widetilde{\Delta}$, respectively) the (positive) Laplace-Beltrami operator on $M$ ($\widetilde{M}$, respectively). We can then define the heat kernel as follows. Note that this is essentially the same as in the graph case though the initial condition is given in a distributional sense. \begin{definition}\label{def:heat kernel} Let $M$ be a complete Riemannian manifold. We say that $H_t(x,y)$ is a \emph{heat kernel} on $M$ if $H$ is positive, symmetric in the $x$ and $y$ variables and satisfies the heat equation \begin{equation}\label{eq:heat}\left(\Delta_x + \partial_t \right) H_t(x,y)=0 \end{equation} for $y\in M$ with initial condition $$\lim_{t\to 0^+}H_t(x,y)=\delta_y(x)$$ where the limit is weak convergence in the sense of measures and $\delta_y(\cdot)$ denotes the point mass delta function at $y.$ \end{definition} We let $p_t(x,y)$ ($\widetilde{p}_t(\widetilde{x},\widetilde{y})$, respectively) denote the minimal heat kernel on $M$ ($\widetilde{M}$, respectively). These can be constructed via an exhaustion sequence regardless of the completeness of the manifold, see \cite{Dod83}. Stochastic incompleteness (SI) is then defined analogously to the case of graphs as \[ \int_M p_t(x,y) dy < 1 \] for some (all) $x \in M$ and some (all) $t>0$. 
For any $x,y\in M,$ we define $$q_t(x,y):=\sum_{\widetilde{y}\in \pi^{-1}(y)}\widetilde{p}_t(\widetilde{x},\widetilde{y})$$ for $\widetilde{x}\in\pi^{-1}(x).$ By the local Harnack inequality, one can show that $q_t(x,y)$ is finite for any $x,y\in M,$ see, for example, the proof of Corollary~16.3 in \cite{Li12}. Since the covering is regular, it is easy to show that the definition of $q_t(x,y)$ is independent of the choice of $\widetilde{x}$ in $\pi^{-1}(x)$ and that $q_t(x,y)$ is symmetric in $x$ and $y.$ Bordoni \cite{Bor00} proved the following estimate, see Lemma~\ref{l:heat_equation_2} above for the proof of the essential step in the case of graphs. \begin{proposition}[Proposition~2.4 in \cite{Bor00}] \label{p:bord} Let $\widetilde{M}$ be a regular covering of $M.$ Then, for all $t>0,x,y\in M,$ \begin{equation*} q_t(x,y)\leq p_t(x,y). \end{equation*} \end{proposition} In fact, an argument by Li yields the following identity, see the proof of Corollary~16.3 in \cite{Li12} for manifolds and Theorem~\ref{t:equality} above for the graph case. \begin{theorem}\label{thm:pli} Let $\widetilde{M}$ be a regular covering of $M.$ Then, for all $t>0,x,y\in M,$ $$q_t(x,y)= p_t(x,y).$$ \end{theorem} We will give an alternative proof of this result below. In order to do so, we will show that $q_t(x,y)$ is also a heat kernel in the sense of Definition~\ref{def:heat kernel}. This is sufficient to prove Theorem~\ref{thm:pli} by combining Bordoni's result with the minimality of $p_t(x,y)$. 
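The graph analogue of this fibre-sum identity (Theorem~\ref{t:equality}) is easy to check numerically for a finite-sheeted cover. The following minimal sketch, assuming standard weights $b \equiv 1$ and counting measure $m \equiv 1$, compares the heat kernel of the triangle $C_3$ with that of its $2$-fold cover $C_6$ under the covering map $\pi(i) = i \bmod 3$:

```python
import numpy as np

def heat_kernel(n, t):
    """Heat kernel e^{-tL} of the unweighted cycle graph C_n (b = 1, m = 1)."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
    L = np.diag(A.sum(axis=1)) - A           # combinatorial Laplacian L = D - A
    lam, V = np.linalg.eigh(L)               # spectral decomposition of L
    return V @ np.diag(np.exp(-t * lam)) @ V.T

t = 0.7
p = heat_kernel(3, t)    # heat kernel on the base graph C_3
pc = heat_kernel(6, t)   # heat kernel on the 2-fold cover C_6

# q_t(x, y): sum the lifted heat kernel over the fibre pi^{-1}(y) = {y, y + 3}
q = np.array([[pc[x, y] + pc[x, y + 3] for y in range(3)] for x in range(3)])
assert np.allclose(q, p)                     # q_t = p_t on the base graph
```

Summing $\widetilde{p}_t(\widetilde{x},\cdot)$ over each fibre indeed recovers $p_t(x,\cdot)$, and the result is independent of the chosen lift $\widetilde{x}$.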
\begin{proposition} Let $\widetilde{M}$ be a regular covering of $M.$ Then, $q_t(x,y)$ is a heat kernel on $M.$ \end{proposition} \begin{proof} First, one can use the argument in Theorem~12.4 in \cite{Li12} to verify that $q_t(x,y)$ satisfies the heat equation \eqref{eq:heat} for any $t>0$ by writing $$q_t(x,y)=\lim_{k\to\infty}\sum_{i=1}^k\widetilde{p}_t(\widetilde{x_i},\widetilde{y})$$ where $\{\widetilde{x_i}\}_{i=1}^\infty=\pi^{-1}(x).$ To complete the proof, we show that $q_t(x,y)$ satisfies the initial condition. That is, for any $\varphi\in C_c(M),$ where $C_c(M)$ denotes the compactly supported continuous functions on $M$, and any $y\in M$ \begin{equation}\label{eq:initial} \lim_{t\to 0^+} \int_Mq_t(x,y)\varphi(x)dx = \varphi(y). \end{equation} Without loss of generality, we may assume that $\varphi \geq 0$. It is well-known that there exists a fundamental domain $\widetilde{M}_1$ in $\widetilde{M}$ such that $\gamma_1 \widetilde{M}_1\cap \gamma_2 \widetilde{M}_1=\emptyset$ if $\gamma_1\neq \gamma_2$ for $\gamma_1,\gamma_2\in \Gamma,$ and $\mathrm{vol}(\widetilde{M}\setminus\cup_{\gamma\in \Gamma}\gamma \widetilde{M}_1)=0.$ For fixed $y\in M,$ there exists a ball $B_\delta(y)$ of small radius $\delta>0$ such that $B_{2\delta}(\widetilde{y_1})$ is contained in a fundamental domain $\widetilde{M}_1$ for some $\widetilde{y_1}\in \pi^{-1}(y)$ and so that $\pi:B_\delta(\ow{y}_1) \to B_{\delta}(y)$ is an isometry. 
Let $\eta$ be a smooth cut-off function on $M$ satisfying $\eta\equiv 1$ on $B_{\frac{\delta}{2}}(y)$, $0 \leq \eta \leq 1$ and $\mathrm{spt}\eta\subset B_\delta(y)$ where $\mathrm{spt}\eta$ denotes the support of $\eta.$ We may write $$\varphi=\eta \varphi+(1-\eta)\varphi.$$ Note that since $q_t(x,y) \leq p_t(x,y)$, \begin{align*} \int_Mq_t(x,y)\big( (1-\eta)\varphi \big)(x) dx &\leq \int_Mp_t(x,y)\big( (1-\eta)\varphi \big)(x)dx \\ &\to \big( (1-\eta)\varphi \big)(y)=0, \qquad t \to 0^+ \end{align*} where we have used the initial condition for $p_t(x,y).$ Hence, to prove \eqref{eq:initial}, it suffices to show that \begin{equation}\label{eq:initial_2} \lim_{t\to 0^+} \int_Mq_t(x,y)(\eta\varphi)(x)dx = \varphi(y). \end{equation} For simplicity, we write $\zeta=\eta \varphi.$ We note that $\mathrm{spt}\zeta\subset B_\delta(y)$ and $\zeta(y)=\varphi(y).$ As we assume $\varphi\geq0$, by the fact that $q_t(x,y) \leq p_t(x,y)$, we get \begin{align*} \int_{M}q_t(x,y)\zeta(x)dx-\zeta(y)&\leq \int_M p_t(x,y)\zeta(x)dx-\zeta(y) \\ & \to 0, \qquad t \to 0^+ \end{align*} On the other hand, to estimate the opposite difference, we use the fact that $\mathrm{spt}\zeta\subset B_\delta(y)$ and $B_{\delta}(\widetilde{y_1})\subset \widetilde{M}_1,$ to lift the function $\zeta$ to $\widetilde{M}_1,$ by $$\widetilde{\zeta}(\widetilde{x})=\left\{\begin{array}{ll}\zeta(\pi(\widetilde{x})),&\widetilde{x}\in \widetilde{M}_1,\\0,&\widetilde{x}\in \widetilde{M}\setminus \widetilde{M}_1.\end{array} \right.$$ Then $\widetilde{\zeta}\in C_c(\widetilde{M})$ with $\mathrm{spt}\widetilde{\zeta}\subset\widetilde{M_1}.$ Hence, \begin{align*} \zeta(y)-\int_{M}q_t(x,y)\zeta(x)dx &= \widetilde{\zeta}(\widetilde{y_1})-\int_{\mathrm{spt} \zeta}\sum_{\widetilde{x}\in \pi^{-1}(x)}\widetilde{p}_t(\widetilde{x},\widetilde{y_1})\zeta(x)dx\\ &\leq \widetilde{\zeta}(\widetilde{y_1})-\int_{\ow{M}_1}\widetilde{p}_t(\widetilde{x},\widetilde{y_1})\widetilde{\zeta}(\widetilde{x})d\widetilde{x}\\ & \to 0, \qquad t 
\to 0^+. \end{align*} By combining these two inequalities, we get \eqref{eq:initial_2} which completes the proof. \end{proof} We are now ready to give another proof of Theorem~\ref{thm:pli}, which was originally proven by Li using the Duhamel principle, see the proof of Corollary~16.3 in \cite{Li12}. \begin{proof}[Proof of Theorem~\ref{thm:pli}] Since $p_t(x,y)$ is the minimal heat kernel and $q_t(x,y)$ is a heat kernel by the proposition above, we get $$p_t(x,y)\leq q_t(x,y)$$ by Theorem~3.6 in \cite{Dod83}. This proves the theorem by combining it with Bordoni's result, Proposition~\ref{p:bord}. \end{proof} In the case when $\widetilde{M}$ is a regular covering of $M$, Elworthy \cite{Elw82} used stochastic differential equations to give a proof of the fact that $M$ satisfies (SI) if and only if $\widetilde{M}$ satisfies (SI). Pigola and Setti \cite{PS12} asked for a deterministic proof of this fact. By Theorem~\ref{thm:pli}, we may provide an affirmative answer to their question. \begin{theorem}\label{thm:scequ} Let $\widetilde{M}$ be a regular covering of $M.$ $M$ satisfies (SI) if and only if $\widetilde{M}$ satisfies (SI). \end{theorem} \begin{proof} Given $x\in M,$ we choose any $\widetilde{x}\in \pi^{-1}(x).$ By Theorem ~\ref{thm:pli}, \begin{align*} \int_M p_t(x,y)dy &= \int_M q_t(x,y)dy \\ &=\int_M\sum_{\widetilde{y}\in \pi^{-1}(y)}\widetilde{p}_t(\widetilde{x},\widetilde{y})d\widetilde{y} =\int_{\widetilde{M}}\widetilde{p}_{t}(\widetilde{x},\widetilde{y})d\widetilde{y} \end{align*} where the last equality follows from the co-area formula. The theorem is a direct consequence of the above equality. \end{proof} \bibliographystyle{alpha}
\section{\label{sec:intro}Introduction} The interest in the accurate determination of $s$-wave scattering length has increased in recent decades due to its importance in the description of systems of ultracold atoms \cite{pethick_bose-einstein_2002,pitaevskii16:book}. As the range of the interparticle interactions is usually much smaller than the average inter-particle distances, the effects of interactions can be expressed in terms of the scattering amplitude between pairs of particles. For dilute gases at ultracold temperatures, the kinetic energies are low and, therefore, the main contribution to the amplitude comes from the $s$-wave scattering at zero momentum. Particle interactions are thus determined completely by a single parameter: the $s$-wave scattering length \cite{roger_g._newton_scattering_1982,pethick_bose-einstein_2002}. In theoretical calculations, it is therefore not necessary to consider the detailed interaction potential between the particles. Instead, a pseudopotential may be chosen in a way to reproduce the desired value of the $s$-wave scattering length, which can simplify the required computations considerably \cite{pethick_bose-einstein_2002}. One of the simplest and most popular pseudopotentials is the Dirac $\delta$ potential. Its straightforward application is, however, restricted to one dimension, since in two or three dimensions it is meaningless without renormalization \cite{esry_validity_1998,rontani_configuration_2013,doganov_two_2013}. 
An alternative option is to use finite-range pseudopotentials, e.g.\ the finite square well~\cite{stecher_energetics_2008,blume_few-body_2012}, Troullier-Martins~\cite{bugnion_high-fidelity_2014,whitehead_pseudopotential_2016}, P\"oschl-Teller~\cite{forbes_resonantly_2011,galea_diffusion_2016}, or Gaussian potential \cite{stecher_energetics_2008,blume_few-body_2012,doganov_two_2013,christensson_effective-interaction_2009, klaiman_breaking_2014,beinke_many-body_2015,imran_exact_2015,Bolsinger_2016,Bolsinger_2017}. The scattering length is finite for these pseudopotentials, but an extrapolation to zero range might be necessary to avoid an unphysical shape dependence~\cite{stecher_energetics_2008,blume_few-body_2012,forbes_resonantly_2011}. The relationship between the parameter(s) of the potential and the scattering length is not always trivial. Apart from some special cases~\cite{farrell_s-wave_2010,forbes_resonantly_2011}, numerical techniques are required to determine this relation~\cite{landau_course_1977,verhaar_scattering_1985,Galea_2017}. For Gaussian potentials, no closed-form analytic expressions are available and, for this reason, numerical approaches have been applied ~\cite{christensson_effective-interaction_2009,parish_bcs-bec_2005,johnson_effective_2012,Bolsinger_2016}. In two dimensions, an approximate expression was derived by Doganov \emph{et al.}~\cite{doganov_two_2013}. These authors considered two particles in a harmonic trap, where the Gaussian interparticle interaction is treated in a perturbative framework. The obtained second order correction combined with the analytical result of the contact pseudopotential ~\cite{busch_two_1998,farrell_universality_2010} provides the approximate expression. Due to the perturbative approach, this approximation works quite well in the weakly interacting limit, but it deteriorates with increasing interaction strength. 
In this paper, we derive approximate analytical expressions for the $s$-wave scattering length of a Gaussian pseudopotential in one, two and three dimensions. These expressions qualitatively describe the singularities of the $s$-wave scattering length at the formation of the first bound state, which is problematic for purely numerical approaches. Analytical formulas for weak interactions are derived in one and two dimensions, where the $s$-wave scattering length has a singularity at zero interaction strength. In order to improve the accuracy, the approximate expressions are generalized by including the effects of additional bound states. The unknown parameters in this ansatz are determined by non-linear fitting to accurate numerical results. The obtained formulas are robust and simple and accurately provide the values for the $s$-wave scattering length in a wide regime of attractive interaction. We describe and carefully benchmark a numerical method to accurately determine the scattering length of a short-range scattering potential in one, two, and three spatial dimensions. The approach is based on previous work of Verhaar \cite{verhaar_scattering_1985} and may be useful in its own right as it is able to provide very accurate results except for the immediate vicinity of the singularities. The numerical approach is applicable for general short-range potentials and is not restricted to potentials of Gaussian shape. This paper is organized as follows: In Sec.\ \ref{sec:Numerical}, after stating the problem and discussing the required asymptotic conditions for scattering wave functions, we present an accurate numerical approach for determining the $s$-wave scattering length along with benchmark calculations for a Gaussian potential. In Sec.\ \ref{sec:Approx} approximate analytic expressions for the $s$-wave scattering length of a Gaussian potential are derived before more accurate, generalized expressions with numerically determined parameters are introduced. 
Three appendices provide additional details on the derivations and on the numerical determination of the positions of the singularities in the scattering length. \section{\label{sec:Numerical}Numerical determination of the $s$-wave scattering length} \subsection{Solution of the two-body problem and connection with the $s$-wave scattering length} \subsubsection{Two-body scattering problem}\label{sec:2bp} Let us consider a two-particle scattering process with the following $n$-dimensional Hamiltonian: \begin{eqnarray} H_{2p} &=& - \frac{\hbar^2}{2m_1}\nabla^2_1 - \frac{\hbar^2}{2m_2} \nabla^2_2 + V( | {\mathbf r}_1 - {\mathbf r}_2 |) \ , \label{2part} \end{eqnarray} where $V( | {\mathbf r}_1 - {\mathbf r}_2 |)$ is a spherically symmetric particle-particle interaction potential, and $m_i$, ${\mathbf r}_i$, and $\nabla^2_i$ are the mass, coordinate, and Laplace operator of the $i$th particle, respectively. Although our main target is the Gaussian potential, here we consider more general classes of potentials to which the numerical procedure can be applied. Specifically, we assume that the interaction is sufficiently short-ranged to justify the existence of the scattering length. This is fulfilled in $n$ dimensions if $V(r)$ obeys the condition~\cite{case50,frank71} \begin{align*} \int\limits_A^{\infty} |V(r)|r^{n-1} \mbox{d} r<\infty \end{align*} for a finite $A$. It is sufficient to assume that the potential decreases faster than $1/r^{n+\varepsilon}$ with $\varepsilon>0$ at sufficiently large distance. In addition, we suppose that the potential is regular at the origin or diverges at most as $1/r^s$ with $s<1$. This condition is necessary to uniquely define the appropriate boundary conditions at the origin for the purpose of the numerical procedure. The $s$-wave scattering length can still be defined for more strongly divergent potentials \cite{frank71,andrews76}, but the numerical procedure would have to be modified in this case. 
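As a quick consistency check (our addition, not part of the original text), the attractive Gaussian potential studied below satisfies both requirements in any dimension $n$: it is regular at the origin, and for any $A>0$,

```latex
\int_A^{\infty} e^{-r^2/L^2}\, r^{\,n-1}\, \mathrm{d}r
\leq e^{-A^2/(2L^2)} \int_0^{\infty} e^{-r^2/(2L^2)}\, r^{\,n-1}\, \mathrm{d}r < \infty ,
```

since $e^{-r^2/L^2} \leq e^{-A^2/(2L^2)}\, e^{-r^2/(2L^2)}$ for $r \geq A$, so the short-range condition above holds with room to spare.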
The eigenproblem for the Hamiltonian (\ref{2part}) can be simplified by introducing the center of mass coordinate, ${\mathbf R}=(m_1{\mathbf r}_1+m_2{\mathbf r}_2)/(m_1+m_2)$, and the relative coordinate, ${\mathbf r}={\mathbf r}_1-{\mathbf r}_2$. Then the wave function can be separated as~\cite{roger_g._newton_scattering_1982} \begin{equation} \Psi_{nD}^{2p}({\bf r}_1, {\bf r}_2) = \exp(i{\bf Q}\cdot{\bf R}/\hbar) \, \psi_{nD}( {\bf r}) \ , \nonumber \end{equation} where ${\bf Q}$ is the total momentum of the two particles. The relative wave function $\psi_{nD}( {\bf r})$ is an eigenfunction of the Hamiltonian of the relative motion, \begin{eqnarray} H\, \psi_{nD}( {\bf r}) &=& E \, \psi_{nD}( {\bf r}) \ \label{fullrelsch} \end{eqnarray} with \begin{align} H = -\frac{\hbar^2}{2 \mu} \nabla^2 + V(r) \ , \end{align} where $E$ is the scattering energy and $\mu = m_1 m_2/(m_1+m_2)$ is the reduced mass. \subsubsection{The $s$-wave scattering and boundary conditions} Due to the spherical symmetry of the potential, Eq.~\rrefsb{fullrelsch} can be further simplified by solving the angular-coordinate-dependent part separately through eigenstates of the angular momentum operator. By definition, $s$-wave scattering corresponds to zero angular momentum and a radially symmetric wave function. The radial coordinate dependence in \rref{fullrelsch} is then governed by the following differential equation in the general $n$-dimensional case~\cite{verhaar_scattering_1985,utoPhi}: \begin{align} -\frac{\hbar^2}{2\mu}\left( \frac{\mbox{d}^2}{\mbox{d}r^2} + \frac{n-1}{r} \frac{\mbox{d}}{\mbox{d}r} \right)\Phi_{nD}( r) + \left( V(r)- E \right) \Phi_{nD}( r)=0 \ , \label{relsch} \end{align} where $\Phi_{nD}(r)$ is the radial part of the relative wave function $\psi_{nD}( {\bf r})$. For ultracold atoms, only low-energy scattering processes are relevant and we may set $E=0$. 
As we will see in the following section, setting $E=0$ also provides a simple way to define the $s$-wave scattering length. Appropriate boundary conditions for the differential equation \rrefsb{relsch} can be obtained from smoothness and symmetry considerations in the limit $r \rightarrow 0$ (see the detailed description in Appendix \ref{AppendixBC}), as \begin{eqnarray} \label{eq:bcs} \Phi_{nD}(0)=1 \ , &\hspace{1cm}\mbox{and}\hspace{1cm}& \Phi_{nD}'(0)=0 \ . \label{boundarycond} \end{eqnarray} \subsubsection{Scattering length} For a short-range potential, the asymptotic behavior of the wave function $\Phi_{nD}(r)$ at distances much larger than the characteristic length scale $\ell_v$ of the potential is given by a solution of Eq.~\eqref{relsch} with $V(r)=0$ and $E=0$, which is a linear combination of a constant and $r$ in one dimension (1D), of a constant and $\mbox{ln} \left(r\right)$ in 2D, and of a constant and $1/r$ in 3D. The $s$-wave scattering length is defined through the ratio of the corresponding coefficients in this linear combination \cite{verhaar_scattering_1985,utoPhi}, \begin{equation} \mbox{if } r \gg \ell_v \begin{cases} \Phi_{1D}( r) \approx {\mathcal N}_{1D} \left( r - a^{1D}_s \right) \ , \\ \Phi_{2D}( r) \approx {\mathcal N}_{2D} \left[ \mbox{ln} \left( \frac{2r}{a^{2D}_s}\right)-\gamma \right] \ , \\ \Phi_{3D}( r) \approx {\mathcal N}_{3D} \left( 1 - \frac{a^{3D}_s}{r} \right) \ . \end{cases} \label{assPhy} \end{equation} Here $a^{nD}_s$ is the $n$-dimensional $s$-wave scattering length, $\gamma = 0.5772 \ldots$ is the Euler-Mascheroni constant, and $\mathcal{N}_{nD}$ is a scalar factor. 
The scattering length can be expressed as a limit of the function $\Phi_{nD}(r)$ and its first derivative by eliminating the unknown parameter $\mathcal{N}_{nD}$ \cite{verhaar_scattering_1985,a2ddef,utoPhi} as \begin{eqnarray} a^{1D}_s &=& \lim_{r \rightarrow \infty} \left( r-\frac{\Phi_{1D}( r)}{\Phi_{1D}'( r)} \right) \ , \label{as1dnum} \\ a^{2D}_s &=& \lim_{r \rightarrow \infty} 2r \exp{\left(-\frac{\Phi_{2D}(r)}{r\Phi_{2D}'(r)}-\gamma\right)} \ , \label{as2dnum} \\ a^{3D}_s &=& \lim_{r \rightarrow \infty} \left( r-\frac{r\Phi_{3D}( r)}{r\Phi_{3D}'( r)+ \Phi_{3D}( r)} \right) \ . \label{as3dnum} \end{eqnarray} As can be seen from the expressions above, $a^{2D}_s$ is always positive by definition, while $a^{1D}_s$ and $a^{3D}_s$ can be of either sign. In the limiting case where the scattering potential is absent the solution of Eq.~\eqref{relsch} becomes a zero-energy plane wave, i.e.~the constant 1. Therefore, we have $a^{1D}_s\to \infty$ and $a^{2D}_s\to \infty$, while $a^{3D}_s\to0$. This means that the scattering length develops a singularity when $V(r)\to0$ in one and two dimensions. \subsection{One and three dimensions} The radial Schr\"odinger equation can be simplified by introducing the functions~\cite{verhaar_scattering_1985} \begin{align} u_{3D}( r) &= r \, \Phi_{3D}( r) \quad \textrm{and} \label{u3ddef} \\ u_{1D}(r) &= \Phi_{1D}(r). \end{align} Substituting \rref{u3ddef} into the radial Schr\"odinger equation \rrefsb{relsch} and the expression for the $s$-wave scattering length \rrefsb{as3dnum}, we obtain identical equations for three and one dimensions, \begin{align} \left( -\frac{\hbar^2}{2\mu} \frac{\mbox{d}^2}{\mbox{d}r^2} + V(r)- E \right) u_{1D/3D}( r) =0 \ , \label{urelsch} \\ a^{1D/3D}_s = \lim_{r \rightarrow \infty} \left( r-\frac{u_{1D/3D}( r)}{u_{1D/3D}'( r)} \right) \ . 
\label{localas3d} \end{align} The boundary conditions are obtained by substituting \rref{u3ddef} into \rref{boundarycond} and now differ between one and three dimensions, \begin{eqnarray} u_{1D}(0)=1 \ , &\hspace{2cm}& u_{1D}'(0) = 0 \ , \label{bound1d} \\ u_{3D}(0)=0 \ , &\hspace{2cm}& u_{3D}'(0) = 1 \ . \label{bound3d} \end{eqnarray} In a numerical procedure, we may assume that the functions $u_{1D/3D}(r)$ and $u_{1D/3D}'( r)$ can only be given with limited numerical accuracy ($p$) as \begin{eqnarray*} u_{1D/3D}(r)&=&\lim\limits_{p \rightarrow \infty} \tilde{u}_{1D/3D}(r;p) \ ,\\ u_{1D/3D}'(r)&=&\lim\limits_{p \rightarrow \infty} \tilde{u}_{1D/3D}'(r;p) \ , \end{eqnarray*} where $p$ relates to the accuracy of the decimal representation and the numerical method itself. For the numerical determination of the scattering length, one should then consider the combined limit, \begin{eqnarray*} a^{1D/3D}_s &=& \lim_{r,p \rightarrow \infty} \underbrace{\left( r-\frac{\tilde{u}_{1D/3D}( r;p)}{\tilde{u}_{1D/3D}'( r;p)} \right)}_{\tilde{a}_s^{1D/3D}(r;p)} \ . \end{eqnarray*} \subsection{Two-dimensional case} In two dimensions, the original radial function $\Phi_{2D}(r)$ is used directly. Here a numerical instability is present as a result of the $1/r$ singularity in the first derivative term of the radial Schr\"odinger equation \rrefsb{relsch} for two dimensions. The instability can be avoided by giving the boundary conditions at distance $r=\epsilon$, where $\epsilon$ is chosen large enough to avoid the numerical difficulties but small enough to approximately satisfy the conditions of Eq.~\eqref{eq:bcs} \begin{eqnarray} \Phi_{2D}(\epsilon) \approx 1 \ , \hspace{2cm} \Phi'_{2D}(\epsilon) \approx 0 \ . \label{initcond} \end{eqnarray} Consequently, $\epsilon$ becomes another parameter of the numerical evaluation besides the numerical accuracy ($p$). 
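As an illustration of the finite-cutoff procedure for one and three dimensions, the following minimal sketch (a toy version in Python, not the Mathematica implementation used for the results below) integrates the zero-energy equation \eqref{urelsch} for the Gaussian well $V(r)=-(V_0/2L^2)\,e^{-r^2/L^2}$ with a classical fourth-order Runge-Kutta scheme in units $\hbar=\mu=L=1$, and evaluates $\tilde{a}_s^{3D}(r) = r - u/u'$ at the cutoff:

```python
import numpy as np

def scattering_length_3d(V0, rmax=10.0, h=1e-3):
    """Integrate u'' = 2 V(r) u at zero energy for V(r) = -(V0/2) exp(-r^2)
    (units hbar = mu = L = 1), with 3D boundary conditions u(0) = 0,
    u'(0) = 1, and return the finite-cutoff estimate r - u/u' at rmax."""
    def rhs(r, y):
        u, du = y
        return np.array([du, -V0 * np.exp(-r * r) * u])   # 2 V(r) u
    y, r = np.array([0.0, 1.0]), 0.0
    for _ in range(int(rmax / h)):        # classical RK4 steps
        k1 = rhs(r, y)
        k2 = rhs(r + h / 2, y + h / 2 * k1)
        k3 = rhs(r + h / 2, y + h / 2 * k2)
        k4 = rhs(r + h, y + h * k3)
        y = y + (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        r += h
    u, du = y
    return r - u / du
```

For $V_0=0$ the solution is $u=r$ and the estimate vanishes identically; for $0 < V_0\mu/\hbar^2 \lesssim 2.68$, i.e.\ below the first bound state, the scattering length comes out negative, as expected for a shallow attractive well.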
The scattering length is then obtained from the composite limit \begin{eqnarray} a_s^{2D} &=& \lim_{\stackrel{r,p \rightarrow \infty}{\epsilon \rightarrow 0}} \underbrace{2r \exp{\left(-\frac{\tilde{\Phi}_{2D}(r;\epsilon,p)}{r\tilde{\Phi}_{2D}'(r;\epsilon,p)}-\gamma\right)}}_{\tilde{a}_s^{2D}(r;\epsilon,p)} \label{assympexp} \ , \end{eqnarray} where $\tilde{\Phi}_{2D}(r;\epsilon,p)$ represents the approximate numerical solution of \rrefsa{relsch} and \rrefsb{initcond} with \begin{eqnarray} \Phi_{2D}(r)&=&\lim\limits_{\stackrel{p \rightarrow \infty}{\epsilon \rightarrow 0}} \tilde{\Phi}_{2D}(r;\epsilon,p) \ . \label{numr2d} \end{eqnarray} \subsection{The Gaussian potential and the convergence of numerical results} \begin{figure} \center \includegraphics[scale=0.5]{precision2D.pdf} \hspace{0.5cm} \includegraphics[scale=0.5]{r02D.pdf}\\ \includegraphics[scale=0.5]{epsilon2D.pdf} \caption{Relative error in the two-dimensional $s$-wave scattering length compared to a reference value $a_s^{2D}$ computed with parameters $p=11$, $r=10L$, $\epsilon=10^{-6}L$. Shown is the parameter dependence (a) on the numerical precision $p$, (b) on the cutoff distance $r$, and (c) on the boundary parameter $\epsilon$ for different values of the potential strength $V_0$. } \label{fig:2Dconvg} \end{figure} \begin{figure} \center \includegraphics[scale=0.5]{precision3D.pdf} \hspace{0.5cm} \includegraphics[scale=0.5]{r03D.pdf}\\ \caption{Relative error in the three-dimensional $s$-wave scattering length compared to a reference value $a_s^{3D}$ computed with parameters $p=11$, $r=10L$. Shown is the parameter dependence (a) on the numerical precision $p$ and (b) on the cutoff distance $r$ for different values of the potential strength $V_0$. 
} \label{fig:3Dconvg} \end{figure} We now apply this approach to the Gaussian potential \begin{eqnarray} V(r)=-\frac{V_0}{2L^2} e^{-\frac{r^2}{L^2}} \ , \label{Gausspot} \end{eqnarray} which depends on parameters for the potential strength $V_0$ and the length scale $L$. Since we are free to use $L$ as a scale parameter, we find that the ratio $a_s/L$ depends only on the single dimensionless parameter $V_0\mu/\hbar^2$. The results of the numerical calculations and their physical interpretation will be discussed in Sec.\ \ref{sec:Approx} along with analytic approximations. Here we discuss the details and convergence properties of the numerical approach. The numerical calculations are performed with the fourth-order Runge-Kutta method of the Mathematica program package~\cite{wolfram_research_inc._mathematica_2014}. The parameter $p$ is considered here as a composite variable. We set the parameters ``AccuracyGoal'' and ``PrecisionGoal'', which quantify the accuracy and precision of the numerical method, respectively, to the same number $p$. The ``WorkingPrecision'', which controls the number of digits in the calculations, is set to $p+5$. Ideally, we should consider the infinite limit of $p$ and $r$, and the zero limit of $\epsilon$. On a computer, these limits can only be approached with finite parameter values. The convergence properties of the numerical procedure can be seen from Figs.\ \ref{fig:2Dconvg} and \ref{fig:3Dconvg} for the $s$-wave scattering length of the Gaussian potential in two and three dimensions, respectively. We first compute a fairly accurate reference value with a fixed choice of the accuracy parameters and then plot the relative error of the scattering length compared to the reference value as a function of the accuracy parameters. In all cases, the relative error decays exponentially until the reference value of the accuracy parameter is reached. At that point, because there $\tilde{a}_s^{nD}=a_s^{nD}$ by construction, the curves abruptly drop to zero. 
Beyond that point, the relative error saturates to a constant value that corresponds to the numerical error of the reference value of the scattering length. In two dimensions (Fig.\ \ref{fig:2Dconvg}), the largest errors occur at $V_0=0.002 \hbar^2/\mu$ and $V_0=11 \hbar^2/ \mu$, close to divergences of the scattering length (see Fig.\ \ref{fig:fitting2D}). This demonstrates how the numerical accuracy is limited near the divergences of the scattering length. In three dimensions, the $s$-wave scattering length diverges near $V_0=2.683 \hbar^2/\mu$, where the largest errors are seen in Fig.\ \ref{fig:3Dconvg}(a). In the same graph, the case of $V_0=14 \hbar^2/\mu$ has the second largest numerical error. In that case, the scattering length is close to the zero crossing ($a_s^{3D} \approx 0.05L$). It is difficult to compute it accurately from \rref{localas3d}, where a difference of nearly equal numbers [the cutoff distance and the inverse logarithmic derivative of $u_{3D}(r)$] needs to be taken. This effect is even more notable as a function of the cutoff distance in Fig.\ \ref{fig:3Dconvg}(b), where the error in the case of $V_0=14 \hbar^2/\mu$ is at least one order of magnitude larger at larger distances compared to other values of the potential strength. Numerical rounding errors also explain the jumps in the cases of $V_0=-10 \hbar^2/\mu$ and $V_0=5 \hbar^2/\mu$, which are at the limit of the chosen accuracy. \vglue 1cm \section{\label{sec:Approx}Approximate expressions for Gaussian potential} \subsection{Three-dimensional case} As can be seen in the previous section, the numerical approach is accurate in most cases, but fails near the divergences of the scattering length. Here we derive analytic approximations that can handle these numerically unstable regions. An alternative derivation based on the Lippmann-Schwinger equation is given in Appendix \ref{AppendixLS}. 
In order to derive suitable approximations, we can make use of the fact that the Gaussian potential decays rapidly to zero with increasing distance. Contributions of the long-range part of the wave function therefore become negligible when multiplied by this potential, compared to the other terms of the Schr\"odinger equation, e.g., Eq.\ \eqref{urelsch}. Let us specifically consider the simplest case of a shallow Gaussian potential in three dimensions that has no bound states. Then the zero-energy wave function $u_{3D}(r)$ is nodeless and, in the product with the Gaussian potential, can be safely approximated with the leading term of its Taylor expansion around $r=0$ as \begin{eqnarray} e^{-\frac{r^2}{L^2}} \, u_{3D}( r) & \approx& e^{-\frac{r^2}{L^2}} \, r \ . \label{3dtaylor} \end{eqnarray} Note that the long-range asymptotics of $u_{3D}(r)$ that define the scattering length are (approximately) unaffected by this procedure. Substituting into \rref{urelsch} at $E=0$, we obtain the differential equation \begin{eqnarray} -\frac{\hbar^2}{2\mu} \frac{\mbox{d}^2}{\mbox{d}r^2} \bar{u}_{3D}( r) -\frac{V_0}{2L^2} e^{-\frac{r^2}{L^2}} r=0 \ . \label{approxdiffeq} \end{eqnarray} The function $\bar{u}_{3D}( r)$ can be obtained by integrating \rref{approxdiffeq} twice, \begin{eqnarray} \bar{u}_{3D}( r) &=& c_1^{3D} +c_2^{3D}r+\frac{\sqrt{\pi} V_0 \mu}{4\hbar^2} \mbox{erf} \left(\frac{r}{L}\right) \ , \label{sol3d} \end{eqnarray} where $\mathrm{erf}(x) = (2/\sqrt{\pi})\int_0^x \exp(-t^2) dt$ is the error function. The coefficients in \rref{sol3d} can be determined from the boundary conditions \rrefsb{bound3d} as \begin{eqnarray*} c_1^{3D}=0 \ , &\hspace{2cm}& c_2^{3D}=\frac{2-\frac{V_0\mu}{\hbar^2}}{2L} \ . 
\end{eqnarray*} Examining this wave function in the limit $r \to \infty$ and using the fact that $\lim_{r \rightarrow \infty} \mbox{erf} \left(\frac{r}{L}\right) =1 $ for finite $L$, we obtain the following asymptotic expression: \begin{eqnarray} \hspace{-0.5cm} \bar{u}_{3D}( r) \approx \frac{2-\frac{V_0 \mu}{\hbar^2}}{2}\left( \frac{r}{L} - \frac{\sqrt{\pi}}{2}\frac{V_0}{V_0-\frac{2\hbar^2}{\mu}}\right) \ , \ r \rightarrow \infty \ . \label{assymp3d} \end{eqnarray} Substituting \rref{assymp3d} into \rref{localas3d}, the approximate relation between the $s$-wave scattering length and the parameters of the potential is found to be \begin{eqnarray} \frac{ \bar{a}_s^{3D}}{L} &=& \frac{\sqrt{\pi}}{2}\frac{V_0}{V_0-\frac{2\hbar^2}{\mu}} \ . \label{as3d} \end{eqnarray} This approximate formula has a pole near the value of $V_0$ where the Gaussian potential well acquires its first bound state. The appearance of a pole, even though we started out by assuming a nodeless wave function, validates the procedure but also signals a limit of validity of the approximation. On closer inspection, we find that the expression \eqref{as3d} describes the scattering length near the singularity qualitatively correctly, but the position of the pole is inaccurate. As can be seen in Fig.\ \ref{fig:fitting3D}, further singularities appear when the potential well becomes deeper, and these correspond to additional bound states. Although the approximation \rrefsb{as3d} includes only the first singularity, it can be sufficient for use as a pseudopotential for ultracold atoms if only the qualitative behavior of the scattering length in the presence of up to one bound state is of interest. 
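As a consistency check (our addition, not part of the original derivation), expanding \eqref{as3d} for weak coupling, $V_0\mu/\hbar^2 \ll 1$, reproduces the first Born approximation for the Gaussian well,

```latex
\frac{\bar{a}_s^{3D}}{L} \approx -\frac{\sqrt{\pi}}{4}\frac{V_0 \mu}{\hbar^2}
= \frac{1}{L}\,\frac{\mu}{2\pi\hbar^2}\int V(r)\, \mathrm{d}^3 r \ ,
```

since $\int V(r)\,\mathrm{d}^3r = -(V_0/2L^2)\,\pi^{3/2}L^3 = -\pi^{3/2}V_0 L/2$ for the Gaussian potential, which supports the approximation in the weakly interacting regime.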
In order to reproduce the behavior of the scattering length across a larger range of potential strengths, we generalize \rref{as3d} by explicitly introducing a variable number of singularities in the following way: \begin{eqnarray} \frac{ a_s^{3D}}{L} &\approx& \sum_{i=1}^n \alpha_i \frac{V_0}{\left(V_0-W_i \right)} \ . \label{imprappexp3d} \end{eqnarray} Here, $W_i$ and $\alpha_i$ are numerically determined parameters. The parameters $W_i$ are set to the values of $V_0$ where the numerically determined scattering length diverges (and changes sign). At these values of $V_0$ weakly bound states appear (see, e.g., Ref.~\cite{book:flugge99}, problem 90). To achieve a high accuracy for approximations of the $s$-wave scattering length, it is important to use accurate values for these parameters (see the detailed description in Appendix \ref{AppendixSingularity}). The parameters $\alpha_i$ are obtained by nonlinear fitting of the approximate expression \rref{imprappexp3d} to the numerically obtained scattering length. The fitting procedure is performed on the intervals $V_0 \mu/\hbar^2 \in [0,2.68] \cup [2.69,14]$ in order to avoid the singularities. The values of the fitted parameters are shown in Table \ref{table:fittedcoeff3d}. As can be seen in Fig. \ref{fig:fitting3D}, including even one additional singularity in the model greatly improves upon \rref{as3d} and qualitatively describes the fitted region. Each additional fitting parameter further improves the relative accuracy by more than one order of magnitude. In addition, each fitting parameter also dramatically improves the approximation of $a_s^{3D}$ outside the fitted regime. \begin{figure} \center \hglue 0.5cm \includegraphics[scale=0.5]{Fitting3D.pdf} \hspace{0.5cm} \includegraphics[scale=0.54]{Fitting_diff3D.pdf} \caption{(a) Three-dimensional $s$-wave scattering length from the numerical and the approximate expression.
(b) The difference between the approximate and numerical scattering length values. The parameters of the numerical simulations are set to $p=11$ and $r=10L$. The values of the parameters for the approximate expressions can be found in Table \ref{table:fittedcoeff3d}.} \label{fig:fitting3D} \end{figure} \begin{table} \center \resizebox{.5\textwidth}{!}{ \begin{tabular}{c|c|cccc} n & $W_n \left( \hbar^2/ \mu \right)$ & $\alpha_1$ & $\alpha_2$ & $\alpha_3$ & $\alpha_4$ \\ \hline 1 & 2.68400465092 & 1.11942413969 & & & \\ 2 & 17.7956995472 & 1.12031910105 & 0.378402820446 & & \\ 3 & 45.5734799205 & 1.12034867267 & 0.322141242778 & 0.332600792963 & \\ 4 & 85.9634003809 & 1.12034897387 & 0.326461774698 & 0.135560767226 & 0.375312300726 \\ \hline $\bar{a}_{s}^{3D}$ & 2 & $\sqrt{\pi}/2 \approx$ 0.8862269 & & & \\ \end{tabular}} \caption{Numerical values of parameters for the three-dimensional approximate expression \eqref{imprappexp3d}. } \label{table:fittedcoeff3d} \end{table} \subsection{One-dimensional case} We follow an analogous procedure to the three-dimensional case by approximately solving the Schr\"odinger equation for large $r$. Although the form of the one-dimensional Schr\"odinger equation is equivalent to the three-dimensional one [\rref{urelsch}], the boundary conditions of \rrefsa{bound1d} and \rrefsb{bound3d} differ. This has the consequence that the zeroth-order term in the Taylor expansion of $u_{1D}(r)$ does not vanish and thus we may approximate the differential equation as \begin{eqnarray} -\frac{\hbar^2}{2\mu} \frac{\mbox{d}^2}{\mbox{d}r^2} \bar{u}_{1D}( r) -\frac{V_0}{2L^2} e^{-\frac{r^2}{L^2}} =0 \ .
\label{1Dzeroorder} \end{eqnarray} Equation\ (\ref{1Dzeroorder}) can be solved and provides the following approximate expression for the wave function and the $s$-wave scattering length: \begin{eqnarray} \bar{u}_{1D}(r) &=& 1-\frac{V_0\mu}{2\hbar^2} \left[ e^{-\frac{r^2}{L^2}}+\sqrt{\pi}\mbox{erf}\left(\frac{r}{L}\right)\frac{r}{L} -1\right] \ , \label{sol1d} \\ \frac{\bar{a}_s^{1D}}{L} &=& \ \frac{1}{\sqrt{\pi}} + \frac{2}{\sqrt{\pi}}\frac{\hbar^2}{V_0\mu}. \label{as1d} \end{eqnarray} Comparing the obtained expression (\ref{as1d}) with the three-dimensional result (\ref{as3d}), it can be seen that the first singularity in one dimension is located at the origin, while in three dimensions it is displaced to a finite value of the attractive potential strength. As every singularity indicates the creation of a new bound state, this reflects a well-known property: in one dimension there is always a bound state for any nonzero attractive potential, whereas in three dimensions the first bound state only appears at some finite potential strength. The approximate expression (\ref{as1d}) can be further improved if we expand the function $u_{1D}(r)$ in a Taylor series around the origin. As we are interested in the behavior of the singularity at the origin, we can consider the limit of $V_0$ approaching zero, where the coefficients of the Taylor expansion can be determined (see the detailed description in Appendix \ref{Appendix}). By examining the asymptotic properties of the wave function we obtain the following approximate formula for the scattering length: \begin{eqnarray} \frac{\bar{\bar{a}}_s^{1D}}{L} &=& \sqrt{\frac{2}{\pi}} + \frac{2}{\sqrt{\pi}}\frac{\hbar^2}{V_0\mu}, \label{tas1d} \end{eqnarray} which differs from \rref{as1d} only in the constant offset. This expression agrees better with the numerically obtained results, but it is still inaccurate at larger absolute values of the potential strength.
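The statement that the two one-dimensional approximations differ only in a constant offset is easy to verify numerically. A minimal sketch (Python, units $\hbar^2/\mu = 1$, lengths in $L$):

```python
import math

def a_s_1d(v0):
    """Leading-order 1D result, Eq. (as1d): 1/sqrt(pi) + (2/sqrt(pi))/V0."""
    return 1.0 / math.sqrt(math.pi) + (2.0 / math.sqrt(math.pi)) / v0

def a_s_1d_improved(v0):
    """Improved result, Eq. (tas1d): same 1/V0 pole at the origin,
    but with the constant offset sqrt(2/pi) instead of 1/sqrt(pi)."""
    return math.sqrt(2.0 / math.pi) + (2.0 / math.sqrt(math.pi)) / v0

# The difference between the two approximations is V0-independent:
for v0 in (0.5, 1.0, 4.0):
    print(f"V0 = {v0}: offset = {a_s_1d_improved(v0) - a_s_1d(v0):.6f}")
```

Both expressions share the $1/V_0$ divergence at the origin, mirroring the bound state that exists for any nonzero 1D attraction.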
In analogy to the three-dimensional case [\rref{imprappexp3d}], the accuracy of \rref{tas1d} can be further improved by including additional singularities, \begin{eqnarray} \frac{a_s^{1D}}{L} &\approx& \sqrt{\frac{2}{\pi}} + \frac{2}{\sqrt{\pi}}\frac{\hbar^2}{V_0\mu} + \sum_{i=1}^n \alpha_i \frac{V_0}{(V_0-W_i)} \ , \label{imprappexp1d} \end{eqnarray} where the parameters $W_i$ are obtained directly from the numerical solution of the differential equation. The parameter values $\alpha_i$ are obtained by nonlinearly fitting the expression \rrefsb{imprappexp1d} to the numerical data in the interval $V_0 \mu/\hbar^2 \in [1.0,8.0]$. A comparison of the approximate and the numerical results is shown in Fig.\ \ref{fig:fitting1D}. Similarly to the three-dimensional case, the relative error from the numerical solution gradually decreases with the number of parameter pairs. \begin{table}[h!] \center \begin{tabular}{c|c|cccc} n & $W_n \left( \hbar^2/ \mu \right)$ & $\alpha_1$ & $\alpha_2$ & $\alpha_3$ & $\alpha_4$ \\ \hline 1 & 8.6490975 & 0.52689372 & & & \\ 2 & 30.106280 & 0.51419392 & 0.35899733 & & \\ 3 & 64.193333 & 0.51460375 & 0.20675606 & 0.36766012 & \\ 4 &110.88204 & 0.51459468 & 0.24033314 & 0.040512694 & 0.44420188 \end{tabular} \caption{Numerically determined parameters for the one-dimensional approximate expression in \rref{imprappexp1d}. } \label{table:fittedcoeff1d} \end{table} \begin{figure} \center \hglue 0.5cm \includegraphics[scale=0.5]{Fitting1D.pdf} \hspace{0.5cm} \includegraphics[scale=0.54]{Fitting_diff1D.pdf} \caption{(a) One-dimensional $s$-wave scattering length from the numerical and the approximate expression. (b) The difference between the approximate and numerical values of the scattering length. The parameters of the numerical simulations are set to $p=11$, $r=10L$. The parameter values for the approximate expressions are tabulated in Table \ref{table:fittedcoeff1d}.
The $n=0$ approximation corresponds to Eq.~\eqref{tas1d}.} \label{fig:fitting1D} \end{figure} \subsection{Two-dimensional case} In two dimensions the function $\Phi_{2D}(r)$ is considered, where the corresponding Schr\"odinger equation \rrefsb{relsch} at $E_{rel}=0$ and boundary conditions \rrefsb{boundarycond} provide the following approximate differential equation: \begin{eqnarray} - \frac{\hbar^2}{2 \mu } \left( \frac{\mbox{d}^2}{\mbox{d} r^2 }+ \frac{1}{r} \frac{\mbox{d}}{\mbox{d} r} \right) \bar{\Phi}_{2D}(r) + V( r) &=& 0 \ . \label{2dapp} \end{eqnarray} Solving \rref{2dapp}, the radial function can be obtained as \begin{eqnarray} \bar{\Phi}_{2D}(r) &=& 1+\frac{V_0 \mu}{4\hbar^2} \left[ \mbox{Ei}\left(-\frac{r^2}{L^2} \right) - \gamma - 2 \, \mbox{ln}\left(\frac{r}{L}\right) \right]\ , \label{sol2d} \end{eqnarray} where Ei$(x)=-\int_{-x}^\infty \frac{e^{-t}}{t} \mbox{d}t$ is the exponential integral function and $\gamma$ is the Euler--Mascheroni constant. At large particle separation, the exponential integral function goes to zero, $\lim_{r \rightarrow \infty} \mbox{Ei} \left(-r^2/L^2 \right)=0 $, and therefore \rref{sol2d} can be approximated with the following expression: \begin{eqnarray} \bar{\Phi}_{2D}(r) &\approx& 1-\frac{V_0\mu}{4\hbar^2} \left[ \gamma + 2 \, \mbox{ln}\left( \frac{r}{L}\right) \right] \ , \hspace{0.5cm} r \rightarrow \infty \ . \end{eqnarray} Using this asymptotic formula, the approximate expression of the $s$-wave scattering length can be determined from \rref{as2dnum} as \begin{eqnarray} \frac{ \bar{a}_s^{2D}}{L} &=& 2 e^{ \frac{-3\gamma}{2}+\frac{2\hbar^2}{V_0\mu}} \ . \label{as2d} \end{eqnarray} A singularity appears again at the origin, as in the one-dimensional case, as a consequence of the fact that any arbitrarily weak Gaussian potential well in two dimensions has at least one bound state.
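In the same spirit, Eq. \eqref{as2d} can be evaluated directly; the sketch below (Python, units $\hbar^2/\mu = 1$, lengths in $L$) illustrates that the 2D result stays positive for all $V_0 > 0$ and diverges as $V_0 \rightarrow 0^+$, consistent with the bound state present for arbitrarily weak 2D attraction.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler--Mascheroni constant

def a_s_2d(v0):
    """Parameter-free 2D result, Eq. (as2d):
    a_s/L = 2 * exp(-3*gamma/2 + 2/V0), with V0 in units of hbar^2/mu."""
    return 2.0 * math.exp(-1.5 * EULER_GAMMA + 2.0 / v0)

# a_s is positive and monotonically decreasing in V0; it blows up
# as V0 -> 0+ because the exponent contains the 2/V0 pole.
for v0 in (0.1, 1.0, 10.0):
    print(f"V0 = {v0}: a_s/L = {a_s_2d(v0):.4g}")
```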
In analogy to the one-dimensional procedure of Appendix \ref{Appendix}, we can thus determine an improved prefactor to arrive at the approximation \begin{eqnarray} \frac{\bar{\bar{a}}_s^{2D}}{L} &=& \sqrt{8} e^{ \frac{-3\gamma}{2}+\frac{2\hbar^2}{V_0\mu}} \ . \label{tas2d} \end{eqnarray} This approximate formula \eqref{tas2d} is equivalent to the previously mentioned formula of Doganov \textit{et al.}~\cite{doganov_two_2013}, where \rref{tas2d} was derived in a different manner using perturbation theory. This expression is not very accurate at larger values of the potential strength and can be improved by including additional singularities in the same manner as done previously to obtain \begin{eqnarray} \frac{a_s^{2D}}{L} &\approx& \sqrt{8} e^{-\frac{3\gamma}{2}+\frac{2\hbar^2}{V_0\mu}+ \sum\limits_{i=1}^n \alpha_i \frac{V_0}{(V_0-W_i)}} \ . \label{imprappexp2d} \end{eqnarray} We determined the parameters $W_i$ with the numerical differential equation solver and fitted the parameters $\alpha_i$ on the interval $V_0 \in [1,10]\,\hbar^2/\mu$. The numerical and approximate values for the two-dimensional $s$-wave scattering length are shown in Fig.\ \ref{fig:fitting2D}. In contrast to the one- and three-dimensional results, the two-dimensional scattering length is always positive and single poles occur not in the scattering length itself but in its logarithm. Similarly to the previous cases, increasing the number of fitted parameter pairs successively improves the approximate values for the scattering length inside and outside the fitted region.
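Using the $n=2$ parameters from Table \ref{table:fittedcoeff2d}, the generalized expression \eqref{imprappexp2d} is equally simple to evaluate. The sketch below (Python, units $\hbar^2/\mu = 1$, lengths in $L$) also makes explicit that the singularities $W_i$ enter the exponent, so the 2D scattering length itself remains positive.

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler--Mascheroni constant
# n = 2 parameters of the two-dimensional fit (from the table)
W = (11.076903, 35.081301)
ALPHA = (0.30476380, 0.20423041)

def a_s_2d_fitted(v0):
    """Eq. (imprappexp2d) with n = 2; the poles W_i sit in the
    exponent, i.e. in the logarithm of a_s, not in a_s itself."""
    exponent = -1.5 * EULER_GAMMA + 2.0 / v0
    exponent += sum(a * v0 / (v0 - w) for a, w in zip(ALPHA, W))
    return math.sqrt(8.0) * math.exp(exponent)

# Sample the fitted interval V0 in [1, 10] hbar^2/mu:
for v0 in (1.0, 5.0, 10.0):
    print(f"V0 = {v0}: a_s/L = {a_s_2d_fitted(v0):.4f}")
```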
\begin{table} \center \begin{tabular}{c|c|cccc} n & $W_n \left( \hbar^2/ \mu \right)$ & $\alpha_1$ & $\alpha_2$ & $\alpha_3$ & $\alpha_4$ \\ \hline 1 & 11.076903 & 0.33553384 & & & \\ 2 & 35.081301 & 0.30476380 & 0.20423041 & & \\ 3 & 71.774188 & 0.30609585 & 0.10986740 & 0.19295017 & \\ 4 &121.10485 & 0.30605919 & 0.13171195 & 0.017845686 & 0.22077743 \end{tabular} \caption{Numerically determined parameters for the two-dimensional approximate expression in \rref{imprappexp2d}. } \label{table:fittedcoeff2d} \end{table} \begin{figure} \center \hglue 0.5cm \includegraphics[scale=0.5]{Fitting2D.pdf} \hspace{0.5cm} \includegraphics[scale=0.54]{Fitting_diff2D.pdf} \caption{(a) Two-dimensional $s$-wave scattering length from the numerical and the approximate expression. (b) The difference between the approximate and numerical scattering length values. The parameters of the numerical simulations are set to the following: $p=11$, $r=10L$, and $\epsilon=10^{-6}L$. The values of the parameters for the approximate expressions can be found in Table \ref{table:fittedcoeff2d}. The $n=0$ approximations correspond to Eq.~\eqref{tas2d}.} \label{fig:fitting2D} \end{figure} \section{Conclusion} We have introduced approximate expressions for the $s$-wave scattering length for a Gaussian potential in one, two, and three dimensions. These may be useful on their own or can improve the accuracy of a numerical determination of the scattering length by providing the correct asymptotic behavior near singularities. The lowest-level expressions can be obtained as simple parameter-free approximations derived from the two-particle Schr\"odinger equation. They can qualitatively describe the singularity at the first bound-state formation, where numerical methods usually fail or provide inaccurate answers. In one and two dimensions these expressions can be further improved analytically by examining the weakly interacting limit, where the leading terms can be given exactly.
More accurate expressions generalize the simple formulas in a straightforward way by including additional singularities, where the unknown parameters are determined from accurate numerical computations. The obtained formulas improve the accuracy over the whole range of the potential strength. In three dimensions, where the singularity due to the appearance of the first bound state occurs at a finite value of the potential strength, the accuracy of this value crucially limits the obtainable accuracy for the $s$-wave scattering length. The Gaussian potential well finds its main application as a pseudopotential in the description of ultracold atoms in the parameter regime between zero interaction and the first nontrivial bound state. In this region, the relative error of the parameterized approximate formulas reaches below $10^{-4}$ and thus they provide accurate, reliable, and simple formulas to connect the parameters of the Gaussian potential to the $s$-wave scattering length. \section{Acknowledgements} We wish to acknowledge Jonas Cremon, Christian Forss\'en, and Stephanie Reimann for providing a note on the numerical determination of the $s$-wave scattering length and we thank Ali Alavi and Tal Levy for discussion and for initiating our interest in this problem. P.J. and J.B. thank the Max Planck Institute for Solid State Research for hospitality during an extended stay where this work was completed. This work was supported by the Marsden Fund of New Zealand (Contract No. MAU1604). A.Yu.Ch. acknowledges support from the JINR--IFIN-HH projects.
\section{Introduction} Dwarf spheroidal galaxies (dSphs) have proven to be ideal targets for gamma-ray searches from particle dark matter annihilation or decay~\citep{Conrad2015JETP..121.1104C}. The Fermi-LAT combined limits from the dSphs with well-measured stellar kinematics place strong bounds on the dark matter annihilation cross section, with sensitivity to cosmologically-motivated models in the mass range $\sim 10-100$ GeV \citep{FermiLATCollaboration2010ApJ...712..147A, Geringer-Sameth2011PhRvL.107x1303G, FermiLATCollaboration2011PhRvL.107x1302A, FermiLATCollaboration2014PhRvD..89d2001A, Geringer-Sameth2015PhRvD..91h3535G, FermiLATCollaboration2015PhRvL.115w1301A}. More massive dark matter particles, $\sim 100$ GeV - $100$ TeV, are now being probed by ground-based gamma-ray observatories \citep{HESS2014PhRvD..90k2012A, MAGIC2016JCAP...02..039M, HAWCCollaboration2017arXiv170601277A, VERITAS2017PhRvD..95h2001A}. The gamma-ray flux from dark matter annihilation or decay depends on the distribution of dark matter within the system (the astrophysical component) and the annihilation cross section or decay rate, which determines how the dark matter particles convert into standard model particles (the particle physics component). The astrophysical components that govern the annihilation and decay fluxes are commonly referred to as the J and D-factors, which are, respectively, line-of-sight integrals over the square of the dark matter density and over the dark matter density within a dSph. The measured stellar kinematics of dSphs determine the dark matter density distribution, and thereby the J and D-factors. There is a large volume of literature on computing the J and D-factors.
The standard methodology for computing these factors for the dSphs involves combining the spherical Jeans equation with a Bayesian likelihood analysis to constrain model parameters~\citep{Evans2004PhRvD..69l3501E, Strigari2007PhRvD..75h3526S, Strigari2008ApJ...678..614S, Martinez2009JCAP...06..014M, Charbonnier2011MNRAS.418.1526C, Bonnivard2015MNRAS.446.3002B, Geringer-Sameth2015ApJ...801...74G, Bonnivard2015MNRAS.453..849B}. The dynamical analyses indicate that the J and D-factors are particularly well-constrained at the angular scale subtended by the approximate half-light radius of the stellar system~\citep{Walker2011ApJ...733L..46W}, which corresponds to an angular scale $\sim 0.1-0.5$ degrees for a typical dSph. The spherical Jeans formalism may be extended to consider an axisymmetric stellar distribution~\citep{Hayashi2016MNRAS.461.2914H, Klop2017PhRvD..95l3012K}. The results from the spherical Jeans modeling are consistent with simple analytic relations \citep{Evans2016PhRvD..93j3512E} including axisymmetry~\citep{Sanders2016PhRvD..94f3521S}. The astrophysical properties of the global population of dSphs may be used to better constrain the J-factors for individual systems, for example using Bayesian hierarchical modeling \citep{Martinez2015MNRAS.451.2524M}. The Bayesian techniques used in the aforementioned analyses can be compared to a Frequentist-based method, which is more compatible with the statistical modeling used in the Fermi-LAT gamma-ray analysis \citep{Chiappo2017MNRAS.466..669C}. \par In recent years, many new dSphs have been discovered in the Sloan Digital Sky Survey (SDSS), PanSTARRS \citep{Laevens2015ApJ...802L..18L, Laevens2015ApJ...813...44L}, and the Dark Energy Survey (DES) \citep{Bechtol2015ApJ...807...50B, Koposov2015ApJ...805..130K, Drlica-Wagner2015ApJ...813..109D}. These systems are sufficiently faint that it is difficult to obtain spectroscopy on a large sample of their member stars.
Because of these small data samples and their susceptibility to systematic uncertainties, the velocity dispersion, and therefore the mass and J-factor, may easily be overestimated when based on only a handful of stars~\citep{Bonnivard2016MNRAS.462..223B}. However, given the importance of the dSphs to dark matter searches, going forward it will be increasingly important to obtain reliable estimates for the J and D-factors, especially for systems without kinematic measurements. \par Because many newly-discovered dSphs lack measured stellar kinematics, previous studies have appealed to scaling relations between the J-factor and observable quantities to extract dark matter limits from them. For example, \citet{Drlica-Wagner2015ApJ...809L...4D, Albert2017ApJ...834..110A} consider a simple model in which the J-factor scales as the inverse of the distance squared to the dSph. This scaling was motivated by the fact that the dSphs have similar integrated dark matter masses, despite having a wide range of luminosities \citep{Strigari2008Natur.454.1096S, Walker2009ApJ...704.1274W, Wolf2010MNRAS.406.1220W}. As even more dSphs are discovered, it will be increasingly important to identify the optimal scaling relations between the J and D-factors and the observable properties of the dSphs. \par In this paper, we compute the J and D-factors from the dSphs with measured stellar kinematics, and perform new calculations for several systems that do not have published measurements of these quantities. We include not only the Milky Way (MW) satellites, but also dSphs that are satellites of M31, and dSphs in the local field (LF), bound to neither the MW nor M31. We perform a statistical analysis to determine the observable quantities, in particular the line-of-sight velocity dispersion ($\sigma_{\mathrm{los}}$), distance ($d$), and stellar scale ($r_{1/2}$), that the J and D-factors scale with.
We determine the appropriate scaling relations, and quantify the residuals obtained from these relations. Our statistical analysis builds on the analytic work of \citet{Evans2016PhRvD..93j3512E}, who work out relations for the J-factors in terms of parameters that describe the dark matter halo, such as the scale density and the scale radius. The structure of this paper is as follows. In Section~\ref{section:data}, we summarize the dSph properties and data sources. In Section~\ref{section:method}, we present our dynamical framework for determining the dark matter distributions and subsequent J and D-factor calculations. In Section~\ref{section:results}, we present results for the galaxy sample, derive scaling relations for the J and D-factors, and discuss some systematics. In Section~\ref{section:conclusion}, we conclude. \section{Data} \label{section:data} To determine the J and D-factors, we compile a large sample of dSph spectroscopic data from the literature. The sample properties are summarized in Table~\ref{data_table} and include the distance ($d$), azimuthally averaged half-light radius ($r_{1/2}$), line-of-sight velocity dispersion ($\sigma_{\mathrm{los}}$), absolute magnitude (${\rm M_V}$), and number of spectroscopic members (N). We have opted not to combine spectroscopic samples due to potential zero-point offsets in heliocentric velocity between different telescopes/instruments or mis-estimation of velocity errors. We use the Gaussian likelihood given in \citet{Walker2006AJ....131.2114W} to compute the average velocity, $\overline{V}$, and $\sigma_{\mathrm{los}}$. For $\sigma_{\mathrm{los}}$ we assume a Jeffreys prior with range $-2 < \log_{10}{\sigma_{\mathrm{los}}} < 2$. The resulting values of $\sigma_{\mathrm{los}}$ are listed in Table~\ref{data_table}. For galaxies not mentioned in the remainder of this section, membership was determined in the original spectroscopic study (citations are listed in the number-of-stars column of Table~\ref{data_table}).
The data for Carina, Fornax, Sculptor, and Sextans are from \citet{Walker2009AJ....137.3100W}, the Draco data are from~\cite{Walker2015MNRAS.448.2717W}, and the Ursa Minor data are from M. Spencer et al. in prep (Matt Walker private communication). For all six galaxies, we select member stars by applying a cut at $p_i > 0.95$, where $p_i$ is the membership probability determined from the expectation maximization method \citep{Walker2009AJ....137.3109W}. For Leo I \citep{Mateo2008ApJ...675..201M} and Leo II \citep{Spencer2017ApJ...836..202S} we use a $3\sigma$ clipping algorithm to select members. For Bo\"{o}tes I, we use the 37 member ``Best'' sample in \citet{Koposov2011ApJ...736..146K}. We additionally explored a larger subset of this sample and found consistent results. We have made slight membership changes to the~\citet{Simon2007ApJ...670..313S} data for the following dSphs: Canes Venatici I, Coma Berenices, Leo IV, Ursa Major I, and Ursa Major II. We have removed RR Lyrae from the following (identified after the original spectroscopic analysis): Canes Venatici I \citep[5 RR Lyrae identified in][]{Kuehn2008ApJ...674L..81K}, Coma Berenices \citep[1 star;][]{Musella2009ApJ...695L..83M}, Leo IV \citep[1 star;][]{Moretti2009ApJ...699L.125M}, and Ursa Major I \citep[3 stars;][]{Garofalo2013ApJ...767...62G}. RR Lyrae are pulsating stars with variable velocity; these stars are dSph members; however, without additional phase information we cannot determine each star's bulk velocity. In Ursa Major II, a foreground dwarf has been removed that was identified in a follow-up high-resolution spectroscopy analysis \citep{Frebel2010ApJ...708..560F}. These removals do not have a large impact as the removed stars tend to have large error bars. For Segue 1, we used the Bayesian membership probability ($p_i>0.8$) to identify Segue 1 members \citep{Simon2011ApJ...733...46S}.
The Hercules data set has 18 members \citep{Aden2009A&A...506.1147A} but we have removed one member that was later identified as a spectroscopic binary\footnote{The binary star's center-of-mass velocity was identified but there is a zero-point offset between the different studies.} \citep{Koch2014ApJ...780...91K}. Regarding Hercules, it is also worth noting that several studies indicate that this galaxy may be undergoing tidal disruption~\citep[e.g.][]{Deason2012MNRAS.425L.101D, Kupper2017ApJ...834..112K, Garling2018ApJ...852...44G}. For Leo V, we use the 8 star data set from \cite{Collins2017MNRAS.467..573C}. We note that this analysis argued that Leo V contains a large velocity gradient, a potential indication of tidal disruption. For Willman 1, we use the selection of 40 likely members but note that the dynamical state of this system is unclear \citep{Willman2011AJ....142..128W}. Furthermore, Willman 1 member selection is difficult due to the overlap in heliocentric velocity with the MW field stars \citep{Siegel2008AJ....135.2084S, Willman2011AJ....142..128W}. The satellites Segue 2 \citep{Kirby2013ApJ...770...16K} and Triangulum II \citep{Kirby2017ApJ...838...83K} have also been argued to have undergone tidal disruption due to their offset from the stellar mass-metallicity relation \citep{Kirby2013ApJ...779..102K}. We also note that the satellite Tucana III contains tidal tails \citep{Drlica-Wagner2015ApJ...813..109D, Shipp2018arXiv180103097S}. The data for Grus I and Tucana II are from~\citet{Walker2016ApJ...819...53W}. We find that simply weighting the stars by their reported mean membership values gives a $\sigma_{\mathrm{los}}$ result that disagrees with the values reported in~\citet{Walker2016ApJ...819...53W}. We speculate that this disagreement is due to our lack of a Milky Way background model; there are several stars with large errors on the dSph membership probabilities that could be driving the dispersions apart.
In order to reproduce the \citet{Walker2016ApJ...819...53W} values, we exclude (include) the two stars in Grus I (Tucana II) with large membership errors but non-zero membership, giving a sample of 5 (10) stars. We include a handful of more distant objects in our compilation: five M31 satellites and four local field (LF) objects. We selected the five Andromeda satellites with the largest sample sizes from the SPLASH survey \citep{Tollerud2012ApJ...752...45T}. Following the SPLASH team, dSph stars are selected with a membership probability cut of $p_i>0.1$. The LF objects we include here are: And XVIII \citep[][$d_{\rm M31} - d_{\rm And \, XVIII} \approx 600 \, \mathrm{kpc}$]{Tollerud2012ApJ...752...45T}, Cetus \citep{Kirby2014MNRAS.439.1015K}, Eridanus II \citep{Li2017ApJ...838....8L}, and Leo T \citep{Simon2007ApJ...670..313S}. We did not consider any other local group dSph/dIrr as our dynamical modeling assumptions are not appropriate; they contain low mass-to-light ratios \citep[e.g. Aquarius, Leo A, NGC 6822,][]{Kirby2014MNRAS.439.1015K}, large gas components, disk-like stellar components, peculiar kinematics \citep[e.g. Tucana,][]{Fraternali2009A&A...499..121F}, and/or stellar rotation \citep[e.g. Pegasus and Phoenix,][]{Kirby2014MNRAS.439.1015K, Kacharov2017MNRAS.466.2006K}. \section{Methods} \label{section:method} \par In this section we briefly outline the method for calculating the J and D-factors. To facilitate comparison with previous results, we follow standard treatments in the literature~\citep[e.g.][]{Evans2004PhRvD..69l3501E, Strigari2007PhRvD..75h3526S, Strigari2008ApJ...678..614S, Martinez2009JCAP...06..014M, Charbonnier2011MNRAS.418.1526C, Bonnivard2015MNRAS.446.3002B, Geringer-Sameth2015ApJ...801...74G, Bonnivard2015MNRAS.453..849B}, and refer to these papers for more details and discussion of systematics.
\par The J-Factor (annihilation factor) is an integral over the line-of-sight and solid angle of the dark matter density squared: \begin{equation} J(\theta_{\mathrm{max}}) = \underset{\mathrm{los}}{\iint} \rho_{\mathrm{DM}}^2 (r) \, \mathrm{d}\ell \mathrm{d}\Omega\,, \end{equation} \noindent where $\rho_{\mathrm{DM}}$ is the dark matter density profile, $\ell$ is the line-of-sight direction and $\Omega$ is the solid angle of a cone with half-angle $\theta_{\mathrm{max}}$. The relationship between $r$ and $\ell$ is: $r^2 = \ell^2 + d^2 - 2 \ell d \cos{\theta}$ and ${\rm d}\Omega=2\pi \sin{\theta}{\rm d}\theta$. The limits of the $\ell$ integration are: $\ell_{\pm}= d \cos{\theta} \pm \sqrt{r_t^2 - d^2 \sin^2{\theta}}$ and $r_t$ is the dark matter tidal radius. The D-Factor (decay factor) is: \begin{equation} D(\theta_{\mathrm{max}}) = \underset{\mathrm{los}}{\iint} \rho_{\mathrm{DM}} (r) \, \mathrm{d}\ell \mathrm{d}\Omega\,. \end{equation} \par For both the J and D-factor, the key quantity to determine from the stellar photometry and kinematic data is $\rho_{\mathrm{DM}}$. The dark matter density is constrained through the spherical Jeans equation, \begin{equation} \frac{\mathrm{d}\left(\nu \sigma_r^2\right)}{\mathrm{d}r} + \frac{2}{r} \beta(r) \nu \sigma_r^2 + \frac{\nu \mathrm{G} M(r)}{r^2} = 0\,, \label{eq:jeans} \end{equation} where $\nu$ is the tracer (stellar) density, $\sigma_r^2$ is the radial velocity dispersion, $\beta(r) = 1 - \frac{\sigma_t^2}{2 \sigma_r^2}$ is the velocity anisotropy, and $M(r)$ is the mass profile. Since the dSphs have large mass-to-light ratios (with the only exception being the central region of Fornax), we assume that the mass profile is entirely dominated by the dark matter halo. \par To compare $\sigma_r^2$ to the observed line-of-sight velocity data, it must be projected along the line of sight.
The projection is: \begin{equation} \Sigma(R) \sigma_{\mathrm{los}}^2 (R) = 2 \int_R^{\infty}\mathrm{d}r \, \left[1 - \beta(r) \frac{R^2}{r^2} \right] \frac{r \, \nu \sigma_r^2(r)}{\sqrt{r^2 - R^2}}, \end{equation} \noindent where $\sigma_{\mathrm{los}}$ is the line-of-sight velocity dispersion and $\Sigma$ the projected tracer density. We model the stellar profile with a Plummer (\citeyear{Plummer1911MNRAS..71..460P}) model. The 3D distribution is: \begin{equation} \nu_{Plummer} (r) = \frac{3}{4 \pi r_p^3} \frac{1}{\left(1 + (r/r_p)^2 \right)^{5/2}}. \end{equation} \noindent and the projected (2D) distribution is: \begin{equation} \Sigma_{Plummer} (R) = \frac{1}{ \pi r_p^2} \frac{1}{\left(1 + (R/r_p)^2 \right)^{2}}. \end{equation} \noindent The Plummer profile has a scale radius, $r_p$, which is equivalent to the stellar half-light radius. The dSphs have varying ellipticity \citep[$\epsilon\approx0.1-0.7$,][]{McConnachie2012AJ....144....4M}; to approximate spherical symmetry, we use azimuthally\footnote{Sometimes referred to as the geometric half-light radius.} averaged half-light radii: $r_{1/2} = r_{\rm half, \, azimuthal} = r_{\rm half, \,major} \sqrt{1-\epsilon}$. When referring to the half-light radius ($r_{1/2}$), we are referring to the azimuthally averaged value. Note that it is possible to consider more generalized models than the Plummer profile \citep{Strigari2010MNRAS.408.2364S}; however, for the majority of our sample (ultra-faint dwarf galaxies) Plummer profiles are adequate. We parameterize the dark matter distribution as an NFW profile \citep{Navarro1996ApJ...462..563N}: \begin{equation} \rho_{DM} = \frac{\rho_s}{x(1+x)^2}\,, \end{equation} \noindent where $x=r/r_s$, and the scale radius and density are $r_s$ and $\rho_s$, respectively. We take the velocity anisotropy to be constant with radius, $\beta(r) = \beta_0$.
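As a consistency check on the profile pair above, the projected Plummer profile must equal the Abel projection of the 3D one, $\Sigma(R) = 2\int_R^\infty \nu(r)\, r\, \mathrm{d}r/\sqrt{r^2 - R^2}$. A minimal numerical verification (Python, unit total mass, arbitrary test radii; not part of the fitting pipeline):

```python
import math

def nu_plummer(r, rp):
    """3D Plummer density, normalized to unit total mass."""
    return 3.0 / (4.0 * math.pi * rp ** 3) / (1.0 + (r / rp) ** 2) ** 2.5

def sigma_plummer(R, rp):
    """Analytic projected (2D) Plummer profile."""
    return 1.0 / (math.pi * rp ** 2) / (1.0 + (R / rp) ** 2) ** 2

def project(nu, R, rp, n=4000, umax=20.0):
    """Abel projection Sigma(R) = 2 int_R^inf nu(r) r dr / sqrt(r^2 - R^2).
    The substitution r = R*cosh(u) removes the integrable endpoint
    singularity: the integrand becomes simply 2 nu(r) r du."""
    du = umax / n
    total = 0.0
    for i in range(n):
        u = (i + 0.5) * du  # midpoint rule
        r = R * math.cosh(u)
        total += 2.0 * nu(r, rp) * r * du
    return total

R, rp = 0.7, 1.0
print(f"numeric {project(nu_plummer, R, rp):.6f} vs analytic {sigma_plummer(R, rp):.6f}")
```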
Instead of a prior in linear $\beta$ space, we parameterize the anisotropy in a symmetrized space; $\tilde{\beta} = \beta/\left(2-\beta\right)$ \citep[see Eq. 8 in][]{Read2006MNRAS.367..387R, Read2017MNRAS.471.4541R}. The symmetrized parameterization uniformly favors radial and tangential orbits whereas the linear parameterization preferentially samples tangential orbits\footnote{Another alternative anisotropy parameterization for equally favoring radial and tangential orbits is: $\beta^{\prime}=\log_{10}{\left(1-\beta \right)}$ \citep{Charbonnier2011MNRAS.418.1526C}. }. To extract the model parameters from the data, we use an unbinned likelihood function \citep{Strigari2008ApJ...678..614S, Martinez2009JCAP...06..014M, Geringer-Sameth2015ApJ...801...74G}: \begin{equation} \mathcal{L}_v (\mathcal{D}) = \prod_{i=1}^{N} \frac{p_i}{\sqrt{2 \pi (\sigma_{\mathrm{los}}^2 (R_i)+\sigma_{\epsilon, i}^2)}} \exp{\left[-\frac{1}{2} \frac{(V_i - \overline{V})^2}{\sigma_{\mathrm{los}}^2 (R_i) +\sigma_{\epsilon, i}^2}\right]}, \end{equation} \noindent Here each data point consists of the radial position, line-of-sight velocity, velocity error, and membership probability; $\mathcal{D}_i= (R_i, V_i, \sigma_{\epsilon, i}, p_i)$. For many data sets, there is only member/non-member selection; the members are assigned $p_i=1$ and the non-members discarded (see Section~\ref{section:data} and citations in the $N$ column of Table~\ref{data_table} for additional information). $\overline{V}$ is the average heliocentric velocity of the dSph. For the classical MW satellites with measured proper motions, we also account for the perspective motion effect outlined in \citet{Kaplinghat2008ApJ...682L..93K} and \citet{Walker2008ApJ...688L..75W}. Briefly, for a large extended object (i.e. a ``classical'' dSph), only at the center of the object do the line-of-sight and z-directions exactly align.
A small component of the line-of-sight velocity comes from the net proper motion of the galaxy; this effect increases with distance from the center. It is generally a small effect, corresponding to $1-2\, \mathrm{km} \, \mathrm{s}^{-1}$. To correct for it we follow the method outlined in the Appendix of \citet{Walker2008ApJ...688L..75W}. The proper motions (in ${\rm mas \, century^{-1} }$) of the classical satellites are as follows ($\mu_{\alpha} \cos{\delta}, \mu_{\delta}$)= ($22\pm9$, $15\pm9$) \citep[Carina;][]{Piatek2003AJ....126.2346P}, ($5.62\pm0.99$, $-16.75\pm1.0$) and ($2.96\pm2.09$, $-13.58\pm2.14$) \citep[Draco and Sculptor;][]{Sohn2017ApJ...849...93S}, ($47.6\pm4.5$, $-36.0\pm4.1$) \citep[Fornax;][]{Piatek2007AJ....133..818P}, ($-11.4\pm2.95$, $-12.56\pm2.93$) \citep[Leo I;][]{Sohn2013ApJ...768..139S}, ($-6.9\pm3.7$, $-8.7\pm3.9$) \citep[Leo II;][]{Piatek2016AJ....152..166P}, ($-40.9\pm5.0$, $-4.7\pm5.8$) \citep[Sextans;][]{Casetti-Dinescu2017arXiv171002462C}, and ($-50\pm17$, $22\pm16$) \citep[Ursa Minor;][]{Piatek2005AJ....130...95P}. When available, we choose proper motion results based on data from the {\it Hubble Space Telescope} over ground-based studies. We do not include this effect for the ultra-faint satellites as they do not have measured proper motions and are less extended systems. We determine the posterior distributions with the MultiNest sampling routine \citep{Feroz2008MNRAS.384..449F, Feroz2009MNRAS.398.1601F}. We assume Gaussian priors based on literature measurements for the distance, half-light radius\footnote{In some literature fits an exponential model was used instead of the Plummer model assumed here. To convert between exponential and Plummer scale radii we use: $r_{p} = 1.68\times r_{exp}$.}, ellipticity, and, if applicable, the proper motions. The literature measurements are summarized in Table~\ref{data_table}, except for the proper motions, which were summarized in the preceding paragraph.
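The magnitude of the perspective-motion term can be checked with a back-of-the-envelope estimate (our own illustration, not the full vector correction of \citealt{Walker2008ApJ...688L..75W}): the systemic transverse velocity, projected over the angular offset of a star from the galaxy center, leaks into the line of sight.

```python
import numpy as np

def perspective_dv(mu_alpha_cosdec, mu_delta, d_kpc, theta_deg):
    """Rough amplitude (km/s) of the perspective-motion term.
    Proper motions in mas/yr; 4.74 km/s per (mas/yr * kpc)."""
    v_transverse = 4.74 * np.hypot(mu_alpha_cosdec, mu_delta) * d_kpc
    return v_transverse * np.radians(theta_deg)

# Fornax-like numbers: (47.6, -36.0) mas/century converted to mas/yr,
# d ~ 147 kpc, a star ~0.3 deg from the center (values illustrative)
dv = perspective_dv(0.476, -0.360, 147.0, 0.3)
```

For these inputs the estimate lands in the quoted $1-2\, \mathrm{km\,s^{-1}}$ ballpark for the outskirts of a classical dSph.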
To approximate Gaussianity, some parameter errors represent the average of the upper and lower error bars\footnote{We note that it would be better to redraw the measured parameters directly from the posterior distributions. In most cases these are not available, but they are for the M31 distances \citep{Conn2012ApJ...758...11C} and M31 structural parameters \citep{Martin2016ApJ...833..167M}. For additional discussion on this topic see \citet{Martin2016ApJ...833..167M}.}. We assume a uniform prior range for $\overline{V}$: $-10 \leq \overline{V} - V_{\rm literature} \leq +10$\footnote{This prior range is expanded for several satellites with small sample sizes.}. Jeffreys priors are assumed for the dark matter halo parameters: $-2 \leq \log_{10}{\left(r_s/{\rm kpc}\right)} \leq 1$, and $4 \leq \log_{10}{\left(\rho_s/ {\rm M}_{\odot}\, {\rm kpc}^{-3}\right)} \leq 14$. We additionally impose the prior $r_s > r_{1/2}$ \citep[see Section 4.1 of ][and discussion in Section~\ref{section:halo_prior}]{Bonnivard2015MNRAS.446.3002B}. The prior range for the anisotropy parameter is: $-0.95 \leq \tilde{\beta} \leq 1$. In summary, we have 4 free parameters for the Jeans modeling ($\overline{V}$, $r_s$, $\rho_s$, and $\tilde{\beta}$) and 2-5 parameters with Gaussian priors to average over observational uncertainties ($d$, $r_p$, $\epsilon$, $\mu_{\alpha}$, $\mu_{\delta}$). The dark matter tidal radius, $r_t$, is required for the J and D-Factor calculation. We compute $r_t$ via: $r_t = \left[M_{sub}(r_t)/\left(\left(2 - \mathrm{d}\ln{M_{host}}/\mathrm{d}\ln{r}\right) M_{host}(d)\right) \right]^{1/3} d$ \citep[Eq. 12 of ][]{Springel2008MNRAS.391.1685S}. The MW and M31 host mass profiles are from \citet{Eadie2016ApJ...829..108E} and \citet{Sofue2015PASJ...67...75S}, respectively. The LF systems have $r_t$ fixed to $r_t =25\, \mathrm{kpc}$. The LF $r_t$ is set based on the Jacobi radii of the classical MW satellites, which have median $r_t$ values in the range $6.3-38.9 \, \mathrm{kpc}$.
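Because $r_t$ appears on both sides, the Jacobi-radius equation is solved implicitly. A sketch with an NFW subhalo and a toy isothermal host (the host normalization is hypothetical and chosen only so that $\mathrm{d}\ln{M_{host}}/\mathrm{d}\ln{r}=1$):

```python
import numpy as np
from scipy.optimize import brentq

def m_nfw(r, rho_s, r_s):
    """Enclosed NFW mass (Msun)."""
    x = r / r_s
    return 4.0 * np.pi * rho_s * r_s**3 * (np.log(1.0 + x) - x / (1.0 + x))

def tidal_radius(rho_s, r_s, d, m_host, dlnM_dlnr):
    """Root-find r_t^3 = M_sub(r_t) * d^3 / [(2 - dlnM/dlnr) * M_host(d)]."""
    c = d**3 / ((2.0 - dlnM_dlnr) * m_host(d))
    return brentq(lambda rt: rt**3 - m_nfw(rt, rho_s, r_s) * c, 1e-4, 10.0 * d)

# toy isothermal MW-like host: 1e12 Msun inside 100 kpc (hypothetical)
m_host = lambda r: 1e12 * (r / 100.0)
rt = tidal_radius(rho_s=1e8, r_s=0.5, d=80.0, m_host=m_host, dlnM_dlnr=1.0)
```

For these illustrative inputs the root lands at a few kpc, the same order as the Jacobi radii quoted above; in the analysis itself the host profiles are the cited MW and M31 mass models.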
Choosing a different value ($10\, \mathrm{kpc}$ or $5\, \mathrm{kpc}$) does not affect the J-Factor, as the majority of the `signal/flux' comes from the inner region of the satellite (this is not true for the D-Factor). Systems with an unresolved $\sigma_{\mathrm{los}}$ have $r_t$ fixed to $r_t=1\, \mathrm{kpc}$. We base this choice on the $r_t$ values of nearby ($d<60\, \mathrm{kpc}$) ultra-faints. The median $r_t$ values of these systems range from $\sim2-6\, \mathrm{kpc}$. Only a small portion of the $r_t$ posterior of the closest satellites falls below 1 kpc. \section{Results and Discussion} \label{section:results} \subsection{Unresolved Velocity Dispersions} \label{section:sigma} \begin{figure} \includegraphics[width=\columnwidth]{unresolved_summary_v2.png} \caption{Left: Example posteriors of four galaxies with different resolution of the velocity dispersion ($\sigma_{\mathrm{los}}$). The resolved-$\sigma_{\mathrm{los}}$ system is in black (Bo\"{o}tes~I), the green lines show an upper-limit $\sigma_{\mathrm{los}}$ system (Hydra~II), and the orange (Draco~II) and light blue (Pegasus~III) systems contain large and small tails to zero-$\sigma_{\mathrm{los}}$, respectively. Right: The corresponding J-Factors at 0.5\degree. All histograms are normalized to their maximum value to compare the shapes of the posteriors.} \label{fig:unresolved} \end{figure} There are several objects for which a non-negligible portion of the $\sigma_{\mathrm{los}}$ posterior tends to zero.
For some systems the $\sigma_{\mathrm{los}}$ posteriors are entirely upper limits \citep[Hydra~II, Segue~2, Triangulum~II, Tucana~III;][]{Kirby2015ApJ...810...56K, Kirby2013ApJ...770...16K, Kirby2017ApJ...838...83K, Simon2017ApJ...838...11S}, whereas others have large tails to zero-$\sigma_{\mathrm{los}}$ \citep[Draco~II, Leo~IV, Grus~I;][]{Martin2016MNRAS.458L..59M, Simon2007ApJ...670..313S, Walker2016ApJ...819...53W}, or small tails \citep[Leo V\footnote{In Leo V we find a similar small zero-$\sigma_{\mathrm{los}}$ tail in the 5 star data set from \citet{Walker2009ApJ...694L.144W}.}, Pegasus III, Pisces II;][]{Walker2009ApJ...694L.144W, Collins2017MNRAS.467..573C, Kim2016ApJ...833...16K, Kirby2015ApJ...810...56K}. The tails are easily observed in $\log_{10}{\sigma_{\mathrm{los}}}$ space and may have been previously overlooked if only examined in linear space. We have separated our sample into 4 groups based on the shape of the $\sigma_{\mathrm{los}}$ posterior: 3 groups based on how much of the posterior tends to zero (unresolved, large zero-dispersion tail, and small zero-dispersion tail) and 1 group with secure $\sigma_{\mathrm{los}}$ measurements. The distinction between large and small tails is based on the ratio of the posterior value in the tail to that at the peak. The small-tail galaxies have ratios of $\approx5\%$, whereas the large-tail objects have ratios in the range $\approx 40-60\%$. For the non-resolved systems, we have expanded the prior range of the dark matter scale density ($0 < \log_{10}{\left(\rho_s/ {\rm M}_{\odot}\, {\rm kpc}^{-3} \right)} < 13$) to better illuminate the zero J-Factor tail in the posterior (with the original prior only a small portion of the posterior corresponds to zero-$\sigma_{\mathrm{los}}$). Note that $\rho_s$ values in the expanded prior range are highly disfavored by cosmological N-body simulations \citep{Springel2008MNRAS.391.1685S}.
Figure~\ref{fig:unresolved} shows $\sigma_{\mathrm{los}}$ (left) and J-Factor (right) posteriors for representative galaxies from the four cases. Non-resolution of all or part of $\sigma_{\mathrm{los}}$ implies a similar non-resolution in the J-Factor posterior, and the two posteriors have similar shapes. Note that both posterior distributions will extend to even smaller values if the prior range is expanded. The lower limit of the $\rho_s$ prior therefore sets the inferred confidence intervals of the J and D-Factors, and confidence intervals for objects without a resolved $\sigma_{\mathrm{los}}$ should be treated with caution. For the subset of galaxies with zero-$\sigma_{\mathrm{los}}$ tails we quote maximum posterior values in Table~\ref{data_table}, with error bars that encompass the non-tail portion of the posterior. For unresolved systems we quote an upper limit at the 95.5\% confidence level after applying a cut at $\log_{10}{\rho_s }>4$. This allows for better comparison to the remainder of our sample and to previous studies. For the remainder of this work we only include systems with securely measured $\sigma_{\mathrm{los}}$. \subsection{Halo Priors} \label{section:halo_prior} \begin{figure} \includegraphics[width=\columnwidth]{rsprior_uma2_updated.png} \caption{Example of a posterior without the $r_s > r_{1/2}$ prior. The corner plot compares $\log_{10}{r_s}$, $\log_{10}{J(0.5\degree)}$ and $\log_{10}{D(0.1\degree)}$ for Ursa Major II. At small $r_s$, the J-Factor is systematically larger; this region is removed by our $r_s > r_{1/2}$ prior. In the $r_s$ panels we display $r_{1/2}$ with a blue line.} \label{fig:rsprior2} \end{figure} In our analysis we have imposed the prior $r_{s} > r_{1/2}$ \citep[as suggested by][]{Bonnivard2015MNRAS.446.3002B}.
To quantify the effect of this prior, we re-analyze several galaxies (Bo\"{o}tes~I, Carina~II, Reticulum~II, Ursa~Major~II, Draco, and Leo~II) without it and expand the $r_s$ prior range ($-3 < \log_{10}{\left( r_s/{\rm kpc} \right)} < 1.4$). While there is no change for the Draco values (the posterior favors larger $r_s$ values), all other galaxies show an increase in the J-Factor and most show a decrease in the D-Factor. This effect is generally larger in the ultra-faint galaxies, as their halo properties are more likely to be prior dominated. As an example, Figure~\ref{fig:rsprior2} shows a corner plot comparing the posteriors of $r_s$, $J(0.5\degree)$, and $D(0.1\degree)$ for Ursa~Major~II. For $r_s>r_{1/2}$, there is little to no $r_s$ trend with the J-Factor, in contrast to the $r_s<r_{1/2}$ region. With mock data sets, \citet{Bonnivard2015MNRAS.446.3002B} found that the J-Factor was systematically over-estimated without the $r_s > r_{1/2}$ prior (see their Section 4.1). Our results for prior-dominated systems agree with this finding. The prior is also consistent with galaxy formation simulations, in which galaxies form with $r_s > r_{1/2}$. In particular for the ultra-faint galaxies, small $r_s$ values are significantly disfavored by $\Lambda$CDM N-body simulations. For example, the median $r_s$ values for halos with $V_{\rm max}=5-15 \, \mathrm{km} \, \mathrm{s}^{-1}$ are $r_s=120-600 \,{\rm pc}$ \citep{Garrison-Kimmel2014MNRAS.444..222G}. The systematic over-estimation of the J-Factor, and the disagreement with N-body simulations, justify the $r_s > r_{1/2}$ prior.
\subsection{J and D-Factor Compilation} \begin{figure*} \includegraphics[width=\textwidth]{j_0d5_summary_color_all.png} \caption{J-Factor within a solid angle of 0.5\degree versus (from left to right): distance ($d$; kpc), velocity dispersion ($\sigma_{\mathrm{los}}$; $\, \mathrm{km} \, \mathrm{s}^{-1}$), azimuthally averaged half-light radius ($r_{1/2}$; pc), and luminosity (${\rm L_V}$; ${\rm L_{\odot}}$). The dSphs are separated based on whether $\sigma_{\mathrm{los}}$ is resolved (see Section~\ref{section:sigma}). The groups are resolved (black), unresolved/upper limit (green), large zero-$\sigma_{\mathrm{los}}$ tail (orange), and small zero-$\sigma_{\mathrm{los}}$ tail (light blue). Only the peak of the posterior is displayed for the points with zero-$\sigma_{\mathrm{los}}$ tails. } \label{fig:jfactor} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{j_dist_angles_v2.png} \caption{J-Factor versus distance for integration angles of 0.1\degree, 0.2\degree, 0.5\degree, and $\alpha_c$. $\alpha_c$ is the angle where the J-Factor uncertainties are minimized \citep{Walker2011ApJ...733L..46W}.} \label{fig:jfactor_angles} \end{figure*} \begin{figure*} \includegraphics[width=\textwidth]{d_0d1_summary_color_all.png} \caption{Same as Figure~\ref{fig:jfactor} except with the J-Factor replaced with the D-Factor within 0.1\degree. } \label{fig:dfactor} \end{figure*} We compute the J and D-Factors within solid angles of $\theta_{\rm max}=0.1\degree, 0.2\degree, 0.5\degree, \alpha_c$, where $\alpha_c$ is the angle within which the uncertainties are minimized for J and D-Factor analyses \citep{Walker2011ApJ...733L..46W}. For the J-Factor, $\alpha_c=2 r_{1/2}/d$, whereas for the D-Factor the optimal angle is half the J-Factor angle, $\alpha_c^D=\alpha_c/2$. We provide the first J and D-Factor analysis for the recently discovered MW dSphs Aquarius~II and Pegasus~III.
We also provide the first J and D-Factor analyses for the M31 satellites and a couple of the dispersion-supported Local Group objects (And~XVIII and Cetus). Surprisingly, even though it is at a relatively large distance, And~VII has a J-Factor comparable to some ``faint'' MW satellites and therefore will be useful in future combined gamma-ray likelihood analyses\footnote{In principle most M31 satellites and Local Group dwarf irregular galaxies could be added to a stacked analysis. This has been discussed within the dynamical framework of rotation curve modeling for local field dwarf irregular galaxies \citep{Gammaldi2017arXiv170601843G}.}. The J and D-Factor results are compiled in Table~\ref{table:jfac_table}. Figure~\ref{fig:jfactor} summarizes the integrated J-Factors within $0.5\degree$ for our sample. From these results we can test how the J-Factor scales with the following observed dSph properties: $d$, $\sigma_{\mathrm{los}}$, $r_{1/2}$, and visual-band luminosity, ${\rm L_V}$. We separate the dSphs into the four categories defined in Section~\ref{section:sigma} based on the resolution of $\sigma_{\mathrm{los}}$. The intermediate sub-sets have tails to small (effectively zero) values in the $\sigma_{\mathrm{los}}$ and J-Factor posteriors; we have removed this part of the posterior to compare the ``resolved'' portion to the other dSphs (see Section~\ref{section:sigma} for additional details). Figure~\ref{fig:jfactor_angles} compares the J-Factor at the four different integration angles (0.1\degree, 0.2\degree, 0.5\degree and $\alpha_c$) versus $d$. For several of the recently-discovered dSphs in DES (Horologium~I, Reticulum~II, and Tucana~II) the stellar parameters differ between the two independent discovery analyses \citep{Bechtol2015ApJ...807...50B, Koposov2015ApJ...805..130K}. We compute the J-Factors for these objects with both sets of stellar parameters (results for both are listed in Table~\ref{data_table}).
The differences in J-Factor are $\Delta\log_{10}(J(0.5\degree))\sim0.4, 0.1, 0.2$ for the three galaxies, respectively. The largest difference is found for Horologium I, where there is a factor of two difference in $r_{1/2}$ between the photometric studies. This suggests that there is some J-Factor scaling with $r_{1/2}$, despite the lack of an obvious trend in Figure~\ref{fig:jfactor}. Throughout the remainder of this analysis we will use J and D-Factor values derived with photometric properties from \citet{Bechtol2015ApJ...807...50B}. Deeper photometric data will be particularly important to precisely measure the structural parameters for these galaxies. In Figure~\ref{fig:dfactor} we show the D-Factor within $0.1\degree$ versus $d$, $\sigma_{\rm los}$, $r_{1/2}$, and ${\rm L_V}$. There is an inverse-square scaling with $d$, but it is less pronounced than the corresponding J-Factor trend. The more distant systems contain a larger fraction of the dark matter halo within the fixed integration angles, and the D-Factor falls off less rapidly than the J-Factor with $\theta_{\rm max}$. \subsection{J-Factor Scaling} \label{section:scaling} \begin{figure*} \includegraphics[width=\textwidth]{jfac_one_to_one_0d5.png} \caption{J-Factor models versus measured J-Factors at $\theta_{\rm max}=0.5\degree$. The models from left to right are: $\sigma_{\mathrm{los}}^4 d^{-2} r_{1/2}^{-1}$ (units-based), $\sigma_{\mathrm{los}}^{+3.8} d^{-1.8} r_{1/2}^{-1.2}$ (best-fit), and $d^{-2}$ (flux-based). For reference we show the one-to-one line with shaded bands set by the intrinsic spread of the models ($\sigma_J$). We list the $\sigma_J$ values in the upper-left hand corner. The units-based and best-fit $\sigma_J$ are the $2-\sigma$ upper limits (95.5\% confidence interval). } \label{fig:model} \end{figure*} \begin{table*} \begin{center} \caption{ Posteriors of the J and D-Factor scaling relations. We list medians and 16/84\% confidence intervals.
For upper-limits we quote the $2-\sigma$ value (95.5\% confidence interval). Rows with integer values for a $\gamma$ parameter denote a model fixed to that value. GS15 refers to our results with the \citet{Geringer-Sameth2015ApJ...801...74G} compilation and H16 refers to our results with the \citet{Hutten2016JCAP...09..047H} compilation (see Section~\ref{section:compilation_comparison}). } \label{table:scaling} \begin{tabular}{l cc cc c} \hline J(angle) & $\log_{10}{J_0}$ & $\sigma_J$ & $\gamma_{\sigma_{\mathrm{los}}}$ & $\gamma_d$ & $\gamma_{r_{1/2}}$ \\ \hline J(0.5\degree) & $17.87_{-0.03}^{+0.04}$ & $<0.10$ & 4 & -2 & -1 \\ J($\alpha_c$) & $17.78\pm0.03$ & $<0.09$ & 4 & -2 & -1 \\ J(0.5\degree) & $18.30\pm0.07$ & $0.30_{-0.06}^{+0.07}$ & 0 & -2 & 0 \\ J($\alpha_c$) & $18.16\pm0.07$ & $0.29_{-0.06}^{+0.07}$ & 0 & -2 & 0 \\ J(0.1\degree) & $17.69_{-0.08}^{+0.09}$ & $<0.13$ & $3.8_{-0.5}^{+0.6}$ & $-1.6\pm0.1$ & $ -1.2\pm0.2$ \\ J(0.2\degree) & $17.84\pm0.09$ & $<0.12$ & $3.8\pm0.5$ & $-1.7\pm0.1$ & $ -1.2\pm0.2$ \\ J(0.5\degree) & $17.96\pm0.09$ & $<0.10$ & $3.8\pm0.4$ & $-1.8\pm0.1$ & $ -1.2\pm0.2$ \\ J($\alpha_c$) & $17.74\pm0.08$ & $<0.10$ & $3.8\pm0.4$ & $-2.0\pm0.1$ & $-0.8\pm0.2$ \\ \hline \multicolumn{6}{c}{Comparison to other J-Factor Compilations, Section~\ref{section:compilation_comparison}}\\ \hline J(0.5\degree) GS15 & $18.09\pm0.12$ & $<0.18$ & $+3.8_{-0.7}^{+0.6}$ & $-1.7\pm0.2$ & $-1.4\pm0.2$ \\ J(0.5\degree) H16 & $18.40\pm0.26$ & $0.23_{-0.10}^{+0.12}$ & $+2.5\pm1.1$ & $-1.9_{-0.4}^{+0.5}$ & $-1.4\pm0.4$ \\ \hline \multicolumn{6}{c}{D-Factor Scaling, Section~\ref{section:d_factor}}\\ \hline D(angle) & $\log_{10}{D_0}$ & $\sigma_D$ & $\gamma_{\sigma_{\mathrm{los}}}$ & $\gamma_d$ & $\gamma_{r_{1/2}}$ \\ \hline D($\alpha_c/2$) & $16.57\pm0.02$ & $<0.06$ & 2 & -2 & +1 \\ D(0.1\degree) & $16.98\pm0.05$ & $<0.06$ & $1.9\pm0.2$ & $-0.5\pm0.1$ & $ -0.5\pm0.1$ \\ D(0.2\degree) & $17.41\pm0.06$ & $<0.08$ & $1.8\pm0.3$ & $-0.7\pm0.1$ & $ -0.5\pm0.1$ \\
D(0.5\degree) & $17.93_{-0.10}^{+0.09}$ & $<0.13$ & $1.7\pm0.5$ & $-0.9\pm0.2$ & $ -0.5\pm0.2$ \\ D($\alpha_c/2$) & $16.65\pm0.04$ & $<0.05$ & $1.9\pm0.2$ & $-1.9\pm0.6$ & $ +0.9\pm0.1$ \\ \hline \multicolumn{6}{c}{Luminosity J-Factor Scaling, Section~\ref{section:luminosity}}\\ \hline J(angle) & $\log_{10}{J_0}$ & $\sigma_J$ & $\gamma_{L_V}$ & $\gamma_d$ & $\gamma_{r_{1/2}}$ \\ \hline J(0.5\degree) & $18.17\pm0.11$ & $0.27_{-0.06}^{+0.07}$ & $0.23_{-0.12}^{+0.11}$ & -2 & $-0.5\pm0.4$ \\ J($\alpha_c$) & $17.96\pm0.11$ & $0.26_{-0.06}^{+0.07}$ & $0.22_{-0.11}^{+0.10}$ & -2 & $-0.3_{-0.3}^{+0.4}$ \\ \hline \end{tabular} \end{center} \end{table*} \begin{figure*} \includegraphics[width=\textwidth]{j_alpha_residual_v2.png} \caption{The residuals ($J_{\rm measured}- J_{\rm model}$; y-axis) of several different J-Factor (with $\theta_{\rm max}=\alpha_c$) models versus distance ($d$), azimuthally averaged half-light radius ($r_{1/2}$), velocity dispersion ($\sigma_{\mathrm{los}}$), and luminosity (${\rm L_V}$). The models are listed on the y-axis and from top to bottom they are: $J\propto \sigma^4 d^{-2} r_{1/2}^{-1}$ (units-based), $1/d^2$ (flux-based), and $\sigma^{+3.8} d^{-2} r_{1/2}^{-0.8}$ (best fit). The bands show the intrinsic scatter ($\sigma_J$) of each model. For the top and bottom panels the $\sigma_J$ is the $2-\sigma$ upper limit. The y-axis displays the same total range for each relationship. } \label{fig:scaling} \end{figure*} The first searches for dark matter annihilation into gamma-rays from new dwarf galaxy candidates discovered in DES \citep{Bechtol2015ApJ...807...50B, Koposov2015ApJ...805..130K, Drlica-Wagner2015ApJ...813..109D} used an empirical scaling relationship between the J-Factor and distance to estimate the J-Factor for the new discoveries \citep{Drlica-Wagner2015ApJ...809L...4D, Albert2017ApJ...834..110A}.
Since the J-Factor is essentially a ``flux,'' it scales as the inverse square of the distance, as observed in the $d$ subplot of Figure~\ref{fig:jfactor}. The $d$ relation is written as $\log_{10}{\left (J_{\rm pred}(0.5\degree)/ J_0 \right)} = -2 \log_{10}{\left(d / 100 \, \mathrm{kpc} \right)}$. The normalization, $J_0$, varies based on the J-Factor compilation, ranging between $\log_{10}{J_0} = 18.1-18.4 \,{\rm GeV^{2} \, cm^{-5}}$ \citep{Geringer-Sameth2015ApJ...801...74G, Bonnivard2015MNRAS.453..849B, Martinez2015MNRAS.451.2524M}. One of the recently discovered dSphs, Carina~II, showed a significantly lower J-Factor than the distance-scaling prediction \citep{Li2018ApJ...857..145L}, which led us to explore more general scaling relations. Guided by the analytic work of \citet{Evans2016PhRvD..93j3512E}, we examined scaling relations of the form: \begin{equation} \label{equation:scaling_model} J_{\rm model}(\theta_{\rm max}) = J_0 \left(\frac{\sigma_{\mathrm{los}}}{5 \, \mathrm{km} \, \mathrm{s}^{-1}}\right)^{\gamma_{\sigma_{\mathrm{los}}}} \left(\frac{d}{100\, {\rm kpc}}\right)^{ \gamma_d} \left(\frac{r_{1/2}}{100\, {\rm pc}}\right)^{\gamma_{r_{1/2}}}\,. \end{equation} \noindent These quantities are all observed and do not depend on the halo model. The distance scaling is required as the J-Factor is a flux, the dispersion probes the mass of the galaxy, and the half-light radius sets the inferred mass density for a given dispersion. We use a likelihood method to determine the best-fit parameters and their errors. The likelihood is: \begin{equation} -2\ln{\mathcal{L}}= \sum_{i=1}^{N} \left\{ \frac{\left( J_{\rm model}(\sigma_{{\rm los},\,i}, d_i, r_{1/2,\,i })-J_{i}\right)^2}{\sigma_J^2 + \epsilon_i^2}+\ln{\left[2 \pi (\sigma_J^2 + \epsilon_i^2)\right]} \right\}\,.
\end{equation} \noindent Here $\sigma_J$ is the intrinsic dispersion or spread of the relation, and we assume that $J_0$ and $\sigma_J$ are in $\log_{10}$ space (for the D-Factor relations we instead refer to these parameters as $D_0$ and $\sigma_D$). The summation is over the galaxies with measured J-Factors and resolved $\sigma_{\mathrm{los}}$. The posteriors of the model parameters ($J_0$, $\gamma_{\sigma_{\mathrm{los}}}$, $\gamma_{d}$, $\gamma_{r_{1/2}}$, $\sigma_J$) were determined with the {\tt emcee}\footnote{\url{http://dfm.io/emcee/current/}} Markov chain Monte Carlo python package \citep{ForemanMackey2013PASP..125..306F}. We assumed uniform priors in the following ranges: $12<J_0<23$, $0<\sigma_J<1$, $-5<\gamma_{\sigma_{\mathrm{los}}} <10$, $-5<\gamma_d <5$, and $-5<\gamma_{r_{1/2}} <5$. In general, the best-fit relation will minimize $\sigma_J$. We first examine a relation set by a units-based argument. The units of $[J G^2]$ are $\textrm{[velocity]}^4/\textrm{[length]}^3$; as a factor of $d^{-2}$ is required for a flux, the expected units-based relation is $J\propto \sigma_{\mathrm{los}}^4/(d^2 r_{1/2})$. After fixing the slope parameters, $(\gamma_{\sigma_{\mathrm{los}}}, \gamma_d, \gamma_{r_{1/2}})=(4, -2, -1)$, we found small intrinsic scatter ($\sigma_J<0.10$ at the 95.5\% confidence interval) for all angles. We explore the full parameter range as a cross check. In Table~\ref{table:scaling}, we list the posterior values for the 4 different J-Factor angles. For all 4 angles we find that the power-law slopes, $\gamma$, agree with the units-based parameter values within errors. We find the minimum $\sigma_J$ occurs at $\alpha_c$, as this angle depends on $r_{1/2}$, the radius where the mass is best estimated for dispersion-supported systems \citep{Walker2009ApJ...704.1274W, Wolf2010MNRAS.406.1220W}. The errors in the J-Factors are minimized at this angle \citep{Walker2011ApJ...733L..46W}.
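The fit can be sketched end-to-end on mock data. Here we recover the units-based slopes with a {\tt scipy} maximum-likelihood stand-in for the {\tt emcee} sampling (function names and mock values are ours, chosen only to mimic the dynamic ranges quoted in the text):

```python
import numpy as np
from scipy.optimize import minimize

def log10_j_model(theta, sig, d, rhalf):
    """Power-law scaling model evaluated in log10 space."""
    logJ0, g_sig, g_d, g_r = theta
    return (logJ0 + g_sig * np.log10(sig / 5.0)
            + g_d * np.log10(d / 100.0)
            + g_r * np.log10(rhalf / 100.0))

def neg2lnL(params, sig, d, rhalf, logJ, eps):
    """-2 ln(L) with the intrinsic scatter sigma_J added in quadrature."""
    *theta, sigJ = params
    var = sigJ**2 + eps**2
    resid = log10_j_model(theta, sig, d, rhalf) - logJ
    return np.sum(resid**2 / var + np.log(2.0 * np.pi * var))

# mock sample drawn from the units-based relation (4, -2, -1)
rng = np.random.default_rng(1)
n = 60
sig = rng.uniform(3.0, 11.0, n)       # km/s
d = rng.uniform(20.0, 1000.0, n)      # kpc
rhalf = rng.uniform(20.0, 600.0, n)   # pc
eps = np.full(n, 0.1)                 # measurement errors in dex
logJ = log10_j_model([17.87, 4.0, -2.0, -1.0], sig, d, rhalf) + rng.normal(0.0, eps)

opts = {"maxiter": 20000, "maxfev": 20000}
res = minimize(neg2lnL, x0=[18.0, 3.0, -1.5, -0.5, 0.2],
               args=(sig, d, rhalf, logJ, eps), method="Nelder-Mead", options=opts)
res = minimize(neg2lnL, res.x,  # polish from the first solution
               args=(sig, d, rhalf, logJ, eps), method="Nelder-Mead", options=opts)
```

The recovered slopes cluster around the input $(4, -2, -1)$; a full treatment samples the posterior with {\tt emcee} under the uniform priors listed above.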
We note that there are correlations observed in the posteriors of $J_0-\gamma_{\sigma_{\mathrm{los}}}$ and $\gamma_{\sigma_{\mathrm{los}}}-\gamma_{r_{1/2}}$. In Figure~\ref{fig:model}, we compare the model-predicted J-Factors to the measured J-Factors. The units-based and best-fit models give equivalent results and are much tighter relations than the flux-based (distance) model. In Figure~\ref{fig:scaling} we examine the residuals of these three models compared to the input parameters ($\sigma_{\mathrm{los}}$, $d$, $r_{1/2}$) and the visual luminosity (${\rm L_V}$). The units-based and best-fit models do not show any trends versus any of the parameters, whereas in the distance model there is a trend with $\sigma_{\mathrm{los}}$, showing the necessity of a $\sigma_{\mathrm{los}}$ scaling. The best-fit power of $\sigma_{\mathrm{los}}$ is large compared to the other parameters, yet there is no apparent trend with respect to $\sigma_{\mathrm{los}}$ in Figure~\ref{fig:jfactor}. This is due to the small dynamic range of $\sigma_{\mathrm{los}}$ in the dSphs. In particular, $\sigma_{\mathrm{los}}$ only varies by an order of magnitude ($3 \lesssim \sigma_{\mathrm{los}} \lesssim 11 \, \mathrm{km} \, \mathrm{s}^{-1}$), whereas the other parameters vary by several orders of magnitude ($20\lesssim d \lesssim 1200 \, \mathrm{kpc}$ and $20\lesssim r_{1/2}\lesssim 600 \, {\rm pc}$). As $r_{1/2}$ has the weakest scaling in the J-Factor relation, its trend is only marginally observable in Figure~\ref{fig:jfactor}, whereas the $d$ scaling is apparent. Since the range in $\sigma_{\mathrm{los}}$ between the dSphs is relatively small, the large $\sigma_{\mathrm{los}}$ power is not seen until the other trends are removed. We note that the exponent in the $\sigma_{\mathrm{los}}$ scaling has a larger uncertainty than the other parameters due to the small dynamic range of $\sigma_{\mathrm{los}}$ values.
We explore subsets of our sample at two angles (0.5\degree, $\alpha_c$) to check the robustness of our results. The subsets are: MW satellites, systems with a well measured $\sigma_{\mathrm{los}}$ (error of $\sigma_{\mathrm{los}}<1.5\, \mathrm{km} \, \mathrm{s}^{-1}$), luminosity-based subsets (removing faint galaxies at $\log_{10}{L} =3.5, 4.0, 4.5$, or removing the `brightest' galaxies by keeping $\log_{10}{L} <4.5$), the classical systems (pre-SDSS), systems with a minimum spectroscopic sample size ($N > 20$), galaxies in the \citet{Geringer-Sameth2015ApJ...801...74G} or \citet{Hutten2016JCAP...09..047H} J-Factor compilations, and distant (non-MW) galaxies. For $J(\theta_{\rm max}=\alpha_c)$ models, the median power-law parameters varied between: $3.54<\gamma_{\sigma_{\mathrm{los}}}<3.90$, $-2.30<\gamma_d<-2.08$, and $-0.72<\gamma_{r_{1/2}}<-0.60$, except for the classical ($\gamma_{\sigma_{\mathrm{los}}}=4.6$) and distant subsets ($\gamma_d=-2.7$). These ranges of median values fall within the errors of the full sample and show that the slopes differ only for extreme subsets. We find similar results at $J(\theta_{\rm max}=0.5\degree)$. The ranges of median values are: $3.34<\gamma_{\sigma_{\mathrm{los}}}<3.84$, $-2.01<\gamma_d<-1.93$, and $-1.0<\gamma_{r_{1/2}}<-0.80$, with similar exceptions for the classical and distant subsets. Increasing the number of dSphs with kinematics and improving the current precision will help determine the exact scaling. We advocate using the J-Factor scaling relation with parameters set to the units-based values. The scaling relation at $\theta_{\rm max}=0.5\degree$ with typical dSph parameters is: \begin{equation} \frac{J(0.5\degree)}{ {\rm GeV^{2} \, cm^{-5}}} \approx 10^{17.87} \left(\frac{\sigma_{\mathrm{los}}}{5 \, \mathrm{km} \, \mathrm{s}^{-1}}\right)^4 \left(\frac{d}{100\, {\rm kpc} } \right)^{-2} \left( \frac{r_{1/2}}{100 \, {\rm pc} } \right)^{-1} \\.
\label{equation:relation} \end{equation} \noindent The intrinsic spread is constrained to small values with these parameters ($\sigma_J<0.10$). We show in Section~\ref{section:analytic} that this relation can be derived analytically. \subsection{D-Factor Scaling} \label{section:d_factor} We turn now to deriving a D-Factor scaling relation. Oddly, we find that while the D-Factor relations at fixed angles (i.e. 0.1\degree, 0.2\degree, and 0.5\degree) have similar $\gamma$, they differ from the D($\alpha_c/2$) results. Moreover, the fixed-angle D-Factor relations do not have the expected $\gamma_d=-2$ `flux' scaling, whereas the D($\alpha_c/2$) scaling relation does. After fixing $\gamma_d=-2$, the fixed-angle D-Factor relations continue to disagree with the D($\alpha_c/2$) relation and differ from the non-fixed version ($\gamma_{\sigma_{\mathrm{los}}}\approx+3.4$, $\gamma_{r_{1/2}}\approx-0.05$). As the best-fit free-distance model had a different slope, it is no surprise that the other parameters change. The D($\alpha_c/2$) scaling relation is consistent with a units-based argument: $[D G]=\textrm{[velocity]}^2/\textrm{[length]}$, suggesting $D \propto \sigma_{\mathrm{los}}^2 r_{1/2}/d^2$. Fixing the $\gamma$ parameters to the units-based argument for D($\alpha_c/2$) results in a slightly larger scatter, $\sigma_D<0.06$, versus $\sigma_D<0.05$ for the free parameters. The best-fit scaling relation with typical dSph parameters and fixed $\gamma$ parameters is: \begin{equation} \frac{D(\alpha_c/2)}{{\rm GeV \, cm^{-2}}} \approx 10^{16.57} \left(\frac{\sigma_{\mathrm{los}}}{5 \, \mathrm{km} \, \mathrm{s}^{-1}}\right)^2 \left(\frac{d}{100\, {\rm kpc} } \right)^{-2} \left( \frac{r_{1/2}}{100\, {\rm pc} }\right)^{+1} \\. \end{equation} The fixed-angle scaling relations with $\gamma$ set to the units-based parameters have large scatter ($\sigma_D\approx0.37-0.53$). The only angle for which a D-Factor scaling relation applies is $\alpha_c/2$.
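The two fixed relations can be packaged as one-line predictors (a sketch; the function names are ours, and the normalizations are the table values, exact by construction at the reference point):

```python
import math

def j_pred(sigma_los, d_kpc, rhalf_pc):
    """Units-based J(0.5 deg) relation, in GeV^2 cm^-5."""
    return (10**17.87 * (sigma_los / 5.0) ** 4
            * (d_kpc / 100.0) ** -2 * (rhalf_pc / 100.0) ** -1)

def d_pred(sigma_los, d_kpc, rhalf_pc):
    """Units-based D(alpha_c/2) relation, in GeV cm^-2."""
    return (10**16.57 * (sigma_los / 5.0) ** 2
            * (d_kpc / 100.0) ** -2 * (rhalf_pc / 100.0) ** 1)

# at the reference point the relations return their normalizations
logJ = math.log10(j_pred(5.0, 100.0, 100.0))  # -> 17.87
logD = math.log10(d_pred(5.0, 100.0, 100.0))  # -> 16.57
```

This is the intended use case for newly discovered satellites: a quick J and D estimate from three observables before a full Jeans analysis is available.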
In Figures~\ref{fig:dfactor_model} and~\ref{fig:dfactor_residual}, we compare the D-Factor measurements to the D-Factor model at $\alpha_c/2$. Figure~\ref{fig:dfactor_model} provides a one-to-one comparison and Figure~\ref{fig:dfactor_residual} compares the residuals to the input parameters ($\sigma_{\mathrm{los}}$, $d$, $r_{1/2}$) and the luminosity as a cross check. Encouragingly, there are no remaining trends with the input parameters. The lack of a consistent scaling relationship at fixed angles versus $\alpha_c/2$ could be due to how the D-Factor scales with $\theta$. The shape of the D-Factor integrand with respect to $\theta$ (only line-of-sight integration) is quite different compared to the analogous J-Factor integrand. For the J-Factor the majority of the ``signal'' comes from within $r_s$ and the integrand always decreases with respect to $\theta$. For the D-Factor, however, the integrand initially increases with respect to $\theta$ and then turns over at $\theta \approx 0.4 r_s/d$. There is a significantly slower falloff with respect to $\theta$ for the D-Factor than for the J-Factor. At a fixed integration angle, the shape of the D-Factor integrand will vary between objects, whereas the variable angle, $\alpha_c/2$, is more likely to probe similar parts of the D-Factor integrand. \begin{figure} \includegraphics[width=\columnwidth]{dfac_one_to_one_single.png} \caption{Comparison of model predictions to the measured D-Factor at $\theta_{\rm max}=\alpha_c/2$.} \label{fig:dfactor_model} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{dfac_residual_v2.png} \caption{Similar to Figure~\ref{fig:scaling}, except with the D-Factor model at $\theta=\alpha_c/2$ instead of the J-Factor models.} \label{fig:dfactor_residual} \end{figure*} \subsection{Analytic Relation} \label{section:analytic} We can derive the form of our scaling relation by appealing to the analytic work of \citet{Evans2016PhRvD..93j3512E}.
They derive analytic J and D-Factors for several simple halo profiles including the NFW profile. Their analytic J and D-Factors contain two, generally valid, simplifying assumptions: first, the dwarf is distant enough to simplify the angular part of the integration (projection from infinite distance versus finite distance) and second, the dark matter halo has no truncation radius (infinite $r_t$). We find that their formula works remarkably well for the NFW J-Factor; generally, the percent error between the numerical integration and the approximate analytic calculation in our posterior distribution is $\leq0.1\%$. The D-Factor formula, however, performs quite poorly due to the infinite tidal radius assumption (percent errors range from $1-50\%$). As the D-Factor has a larger dependence on the total size of the dark matter halo than the J-Factor, the approximation tends to overestimate the D-Factor. We therefore only focus on the analytic J-Factor work. The bulk of our derivation is in Appendix~\ref{appendix:jfactor}. Briefly, we start with the analytic J-Factor formula for the NFW profile in \citet[Equation 16]{Evans2016PhRvD..93j3512E} and replace the halo scale density with observed quantities ($\sigma_{\mathrm{los}}$, $r_{1/2}$) using the half-mass estimators \citep{Walker2009ApJ...704.1274W,Wolf2010MNRAS.406.1220W}. At $\alpha_c$, the J-Factor can then be written as: \begin{equation} J(\alpha_c) = \frac{\sigma_{\mathrm{los}}^4}{G^2 d^2 r_{1/2}} F(r_{1/2}/r_s), \label{equation:analytic_sec3} \end{equation} \noindent where $F$ is an analytic function derived in the appendix. The $\sigma_{\mathrm{los}}^4$ dependence comes from the half-mass estimators ($J\propto \rho_s^2 \propto M^2 \propto \sigma_{\mathrm{los}}^4$) and the $d^{-2}$ dependence from the ``flux'' nature of the J-Factor. The remaining unit is 1/[length] and implies that $J\propto 1/r_{1/2}$.
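The substitution can be sketched in one line (a heuristic version of the appendix derivation, using the half-mass estimator $M(r_{1/2}) \simeq 2.5\, G^{-1} \sigma_{\mathrm{los}}^2 r_{1/2}$ of \citealt{Walker2009ApJ...704.1274W} and the rough scaling $\rho_s \sim M(r_{1/2})/r_{1/2}^3$): \begin{equation} J \sim \frac{\rho_s^2 r_s^3}{d^2} \propto \frac{1}{d^2} \left( \frac{\sigma_{\mathrm{los}}^2}{G\, r_{1/2}^2} \right)^2 r_s^3 = \frac{\sigma_{\mathrm{los}}^4}{G^2 d^2 r_{1/2}} \left( \frac{r_s}{r_{1/2}} \right)^3, \end{equation} \noindent which reproduces the prefactor of Equation~\ref{equation:analytic_sec3}, with the residual $(r_s/r_{1/2})^3$ dependence folded into $F(r_{1/2}/r_s)$.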
The first part of Equation~\ref{equation:analytic_sec3} is the scaling relation we find, while the remainder depends on the ratio $r_{1/2}/r_s$. With $\langle r_{1/2}/r_s \rangle \approx 0.25$, Equation~\ref{equation:analytic_sec3} has the same normalization as our scaling relations for $\alpha_c$ (see Table~\ref{table:scaling}). \subsection{Comparison with other J-Factor Compilations} \label{section:compilation_comparison} As a cross check we apply our methodology to the compilations\footnote{In both cases we exclude galaxies (Draco~II, Leo~IV, Leo~V, Pisces~II, Segue~2, and Triangulum~II) that were excluded from our sample.} of \citet{Geringer-Sameth2015ApJ...801...74G} and \citet{Hutten2016JCAP...09..047H}. Both works tabulated the $d$ and $r_{1/2}$ values used in their analyses; however, neither compiled $\sigma_{\mathrm{los}}$ for their samples. We substitute our own calculations of $\sigma_{\mathrm{los}}$, as the kinematic samples for most galaxies overlap. The \citet{Geringer-Sameth2015ApJ...801...74G} analysis used a more generalized dark matter halo (double power-law) and assumed the same stellar anisotropy and stellar density profile as our work. The \citet{Hutten2016JCAP...09..047H}\footnote{Most of the analysis in this compilation was initially presented in \citet{Bonnivard2015MNRAS.453..849B}.} study, in contrast, used more general profiles for the dark matter, stellar anisotropy, and stellar light compared to our work. Both provide J-Factors at 0.5\degree. We find $J_0=18.09\pm0.12$, $\sigma_J<0.18$, $\gamma_{\sigma_{\mathrm{los}}}=3.8_{-0.7}^{+0.6}$, $\gamma_d=-1.7\pm0.2$, and $\gamma_{r_{1/2}}=-1.4\pm0.2$ for the \citet{Geringer-Sameth2015ApJ...801...74G} compilation, and $J_0=18.4\pm0.26$, $\sigma_J=0.23_{-0.10}^{+0.12}$, $\gamma_{\sigma_{\mathrm{los}}}=2.5_{-1.1}^{+1.1}$, $\gamma_d=-1.9_{-0.4}^{+0.5}$, and $\gamma_{r_{1/2}}=-1.4\pm0.4$ for the \citet{Hutten2016JCAP...09..047H} compilation.
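To make the slope parameters above concrete, the fit of $\log_{10}J = J_0 + \gamma_{\sigma_{\mathrm{los}}}\log_{10}(\sigma_{\mathrm{los}}/5\,\mathrm{km\,s^{-1}}) + \gamma_d\log_{10}(d/100\,\mathrm{kpc}) + \gamma_{r_{1/2}}\log_{10}(r_{1/2}/100\,\mathrm{pc})$ can be sketched as an ordinary least-squares regression in log space. This is a simplified stand-in for the actual analysis, which also models the intrinsic scatter $\sigma_J$ and measurement uncertainties; the pivot values used here are assumptions for illustration.

```python
import numpy as np

def fit_scaling(logJ, sigma_los, d, r_half):
    """Least-squares fit of
       log10 J = J0 + g_s*log10(sigma/5) + g_d*log10(d/100) + g_r*log10(r/100).
    Point estimates only; the full likelihood fit additionally models the
    intrinsic scatter sigma_J and parameter uncertainties."""
    A = np.column_stack([
        np.ones_like(logJ),
        np.log10(sigma_los / 5.0),
        np.log10(d / 100.0),
        np.log10(r_half / 100.0),
    ])
    coef, *_ = np.linalg.lstsq(A, logJ, rcond=None)
    return coef  # [J0, gamma_sigma, gamma_d, gamma_r]
```

On noiseless synthetic data this recovers the input exponents exactly; on real compilations the scatter about the fit is what the text reports as $\sigma_J$.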
Both studies have larger normalizations than our study, which may be caused by the generalized halo model, and they have slightly steeper values for the $\gamma_{r_{1/2}}$ parameter. The best-fit slope parameters with the \citet{Geringer-Sameth2015ApJ...801...74G} compilation agree with the analytic and units-based arguments. In contrast, with the \citet{Hutten2016JCAP...09..047H} compilation we find a smaller $\gamma_{\sigma_{\mathrm{los}}}$ value with much larger errors and a large intrinsic scatter ($\sigma_J$). As a cross check we examined our sample with the subset of galaxies in each of these compilations and found our results were consistent. Overall, the results with the \citet{Geringer-Sameth2015ApJ...801...74G} compilation are consistent with our compilation, while the \citet{Hutten2016JCAP...09..047H} compilation results are consistent within $1\sigma$. \subsection{Scaling with Luminosity} \label{section:luminosity} In the era of deep and wide surveys, it will become increasingly difficult for the measurement of stellar kinematics within individual systems to keep up with the rate at which new systems are discovered. It is therefore important to determine how the J-factor scales with parameters other than $\sigma_{\mathrm{los}}$. For satellites without stellar kinematics, the stellar luminosity, ${\rm L_V}$, may be a potential replacement for $\sigma_{\mathrm{los}}$ in the J-factor scaling relation. In our dSph sample, there is a rough correlation between $\sigma_{\mathrm{los}}$ and ${\rm L_V}$. We explore J-factor scaling relations replacing $\sigma_{\mathrm{los}}$ with ${\rm L_V}$. The best-fit relation we find is (fixing $\gamma_d=-2$): \begin{equation} \frac{J(0.5\degree )}{{\rm GeV^{2} \, cm^{-5}}} \approx 10^{18.17} \left(\frac{\rm L_V}{10^4 L_{\odot}}\right)^{0.23} \left(\frac{d}{100\, {\rm kpc} } \right)^{-2} \left( \frac{r_{1/2}}{100 \, {\rm pc} } \right)^{-0.5}.
\label{eq:relationLv} \end{equation} \noindent This scaling relation has an intrinsic scatter of $\sigma_J=0.27$, which is larger than the scaling with $\sigma_{\mathrm{los}}$ ($\sigma_J<0.10$), but it is equivalent in scatter to a simple $d^{-2}$ scaling ($\sigma_J=0.30$). Similar results are found for $J(\alpha_c)$. We note there is a clear anti-correlation between the parameters $\gamma_{\rm L_V}$ and $\gamma_{r_{1/2}}$. We provide the best-fit parameters for these two angles in Table~\ref{table:scaling}. In Figures~\ref{fig:jfac_lum_model} and~\ref{fig:lum_jfactor_residual} we provide comparison figures for the luminosity models. Figure~\ref{fig:lum_jfactor_residual} compares the model residuals versus the model inputs ($d$, $r_{1/2}$, ${\rm L_V}$) and $\sigma_{\mathrm{los}}$. In the residuals there is a trend versus $\sigma_{\mathrm{los}}$, implying that the best-fit model should include $\sigma_{\mathrm{los}}$. As the luminosity scaling has larger scatter compared to our $\sigma_{\mathrm{los}}$ scaling relation, this is not surprising. We explored different subsets of our sample (similar to Section~\ref{section:scaling}) to check the robustness of the luminosity-based scaling relation. Most subsets yield similar results with similar $\sigma_J$. The subsets with large differences are the classical, distant, and `faint' ($\log_{10}{L_V}<4.5$) dSph subsets. All of these subsets have small sample sizes and show the same anti-correlation between $\gamma_{\rm L_V}$ and $\gamma_{r_{1/2}}$. However, the `faint' galaxies subset differs significantly ($\gamma_{\rm L_V}\approx-0.46$, $\gamma_{r_{1/2}}\approx+0.5$). There are a couple of caveats to note with the above results. Many of the M31 dSphs with smaller kinematic samples do not follow the same ${\rm L_V}-\sigma_{\mathrm{los}}$ trend as the dSphs in our sample \citep{Tollerud2012ApJ...752...45T, Collins2013ApJ...768..172C}.
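The luminosity relation of Equation~\ref{eq:relationLv} can be evaluated directly in log space; a minimal sketch (the function name is ours, and these are point estimates carrying the quoted intrinsic scatter of $\sigma_J=0.27$ dex):

```python
import math

def logJ_from_luminosity(L_V, d_kpc, r_half_pc):
    """Evaluate the L_V scaling relation (eq:relationLv):
    log10 of J(0.5 deg) in GeV^2 cm^-5, given L_V in L_sun,
    distance in kpc, and half-light radius in pc."""
    return (18.17
            + 0.23 * math.log10(L_V / 1.0e4)
            - 2.0 * math.log10(d_kpc / 100.0)
            - 0.5 * math.log10(r_half_pc / 100.0))
```

At the pivot values ($L_V=10^4\,L_\odot$, $d=100$ kpc, $r_{1/2}=100$ pc) this returns the normalization 18.17, and a factor of ten in distance shifts the prediction by exactly two dex, as fixed by $\gamma_d=-2$.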
The recently discovered diffuse galaxy, Crater II, also falls below this relationship~\citep{Torrealba2016MNRAS.459.2370T, Caldwell2017ApJ...839...20C}. However, our $\sigma_{\mathrm{los}}$ relationship predicts $\log_{10}{J(0.5\degree)}= 15.6$, while the published measurement (at a larger angle) is $\log_{10}{J(1.4\degree)}= 15.7$. Given the present data, we make predictions with the $J-{\rm L_V}$ relation for galaxy candidates without kinematics in Table~\ref{table:predictions}. Comparable predictions with the distance scaling relation can be found in Table 1 of \citet{Albert2017ApJ...834..110A}. The scatter we find in the ${\rm L_V}-r_{1/2}-d$ relation is marginally better than $d^{-2}$. It is possible that additional and more precise J-Factor measurements of faint systems can improve the predictions for systems without kinematic measurements. \begin{table} \caption{ Predictions for galaxies without kinematics based on Equation~\ref{eq:relationLv}. Citations: (a) \citep{Drlica-Wagner2015ApJ...813..109D} (b) \citep{Homma2018PASJ...70S..18H} (c) \citep{Carlin2017AJ....154..267C} (d) \citep{Kim2015ApJ...808L..39K} (e) \citep{Drlica-Wagner2016ApJ...833L...5D} (f) \citep{Bechtol2015ApJ...807...50B} (g) \citep{Laevens2015ApJ...813...44L} } \label{table:predictions} \begin{tabular}{l cc cc c} \hline Galaxy & ${\rm L_V}$ & $r_{1/2}$ & $d$ & $\log_{10}{J(0.5\degree)}$ & Citation\\ & $L_{\odot}$ & pc & kpc & $\, {\rm GeV^{2} \, cm^{-5} } $ \\ \hline Cetus II & 8.6e1 & 17 & 30 & 19.1 & a \\ Cetus III & 8.2e2 & 44 & 251 & 17.3 & b\\ Columba I & 4.1e3 & 98 & 183 & 17.6 & c\\ Grus II & 3.1e3 & 93 & 53 & 18.4 & a\\ Horologium II & 94e2 & 33 & 78 & 18.4 & d\\ Indus II & 4.5e3 & 181 & 214 & 17.3 & a\\ Pictor I & 2.6e3 & 43 & 126 & 18.0 & f\\ Pictor II & 1.6e3 & 46 & 45 & 18.9 & e\\ Phoenix II & 26e3 & 33 & 95 & 18.3 & f\\ Reticulum III & 1.8e3 & 64 & 92 & 18.2 & a\\ Sagittarius II & 1.0e4 & 33 & 67 & 18.8 & g\\ Tucana IV & 2.1e3 & 98 & 48 & 18.7 & a\\ Tucana V & 3.7e2 & 9.3 & 55 & 18.9 & a\\
Virgo I & 1.2e2 & 30 & 91 & 18.1 & b\\ \hline \end{tabular} \end{table} \begin{figure} \includegraphics[width=\columnwidth]{jfac_one_to_one_single_lum.png} \caption{Best-fit J-Factor luminosity model compared to the measurements at $\theta_{\rm max}=\alpha_c$. The shaded band represents the intrinsic scatter of the model ($\sigma_J$) and is noted in the upper-left. } \label{fig:jfac_lum_model} \end{figure} \begin{figure*} \includegraphics[width=\textwidth]{jfac_lum_residual_v2.png} \caption{Similar to Figure~\ref{fig:scaling}, except with best-fit J-Factor luminosity models at $\theta_{\rm max}=\alpha_c$. There is still a non-zero slope in the $\sigma_{\mathrm{los}}$ residual. } \label{fig:lum_jfactor_residual} \end{figure*} \subsection{Limitations} \par As emphasized above, to obtain the J and D-factors, we have implemented the standard spherical Jeans-based likelihood analysis. We have chosen this method for the simplicity in interpretation of the results, and to perform a fair comparison with previous authors who have performed similar analyses. We note, however, that there are some limitations to our likelihood model. First, we have fixed the dark matter halo profile to be an NFW profile. Several works have presented generalized dark matter profiles \citep[e.g.][]{Geringer-Sameth2015ApJ...801...74G, Bonnivard2015MNRAS.453..849B}. When allowing for a more flexible model for the dark matter halo, the general trend is toward increasing the J-factor, though there are no substantial biases in the results. A second limitation may be in our parameterization of the stellar velocity anisotropy profile. The APOSTLE hydrodynamical simulations find stellar anisotropy profiles that can be well approximated as constant profiles \citep{Campbell2017MNRAS.469.2335C}. \citet{Chiappo2017MNRAS.466..669C} have studied in particular the difference between constant and Osipkov-Merritt anisotropy profiles in J-Factor analyses.
Given the small data sets for many of the dSphs we have studied, our assumptions for the stellar anisotropy and stellar density profiles are adequate. This is particularly true for the ultra-faints, though for the brightest dSphs a more flexible model may ultimately be required. We have also assumed spherical symmetry for our dynamical models. Deviations from spherical symmetry have been studied in previous J-factor analyses, including the use of axisymmetric Jeans modeling~\citep{Hayashi2016MNRAS.461.2914H} and made-to-measure models \citep{Sanders2016PhRvD..94f3521S}. It is also worth noting that incorrect dSph membership can affect the inferred velocity dispersion, and therefore the J-Factor, in ultra-faints \citep{Bonnivard2016MNRAS.462..223B, Ichikawa2017arXiv170605481I}, though this is less of a problem in the bright classical satellites. We have modified some commonly used spectroscopic data sets, mostly by removing RR Lyrae stars (dSph members, but variable in velocity). Binary stars can inflate the velocity dispersion of dSphs and therefore inflate the J-Factors \citep{Minor2010ApJ...721.1142M, McConnachie2010ApJ...722L.209M, Kirby2017ApJ...838...83K, Spencer2017AJ....153..254S}. If binary stars have inflated the velocity dispersion, then our scaling relations imply that the J and D-Factors will simply be scaled down by $\left(\sigma_{\rm true}/\sigma_{\rm inflated}\right)^\alpha$ with $\alpha=4$ and $2$, respectively. Inflated $\sigma_{\mathrm{los}}$ values could potentially affect the derived normalizations ($J_0/D_0$) for our scaling relations, but testing this requires future multi-epoch data. \section{Conclusion} \label{section:conclusion} \par In this paper we have presented a compilation of J and D-Factors for MW, M31, and LF dSphs. In addition to adding new J and D-factor calculations for newly-discovered MW satellites (Aquarius II and Pegasus III), we provide the first calculations for satellites of M31 and within the local field between the MW and M31.
From this compilation, we derive scaling relations for the J and D-Factors. We find the scaling relation for the J-Factor to be $J \propto \sigma_{\mathrm{los}}^4/d^2/r_{1/2}$ and the respective relation for the D-Factor to be $D \propto \sigma_{\mathrm{los}}^2 r_{1/2}/d^2$. The strongest scaling for the J-Factor is with $\sigma_{\mathrm{los}}$, or dynamical mass, but due to the small relative range of $\sigma_{\mathrm{los}}$ for the MW satellites ($\sigma_{\mathrm{los}} \approx 2-13\, \mathrm{km} \, \mathrm{s}^{-1}$) this scaling is hard to pick out over the distance scaling ($d\approx 20-450\, {\rm kpc}$). We further find that there is no strong scaling with the stellar luminosity. \par While performing the dynamical modeling outlined in Section~\ref{section:method} is ideal for computing the J and D-Factors, for small data samples and for systems with unresolved velocity dispersions this full analysis may not be feasible. It is in these instances that the scaling relations are particularly important for providing an estimate of the J and D-factors. \par Improved galaxy kinematics in the ultra-low luminosity regime are required to ultimately test the scaling relations we have discussed. However, with current observational facilities, it is likely difficult to significantly improve upon the stellar kinematics of known dSphs, especially the faintest-known systems. It would be particularly interesting if a larger data sample reveals a scaling between the luminosity and the J or D-factors. Though the scaling with $L_V$ does not do better than pure distance scaling given the current data, if more data were to reveal a $J-L_V$ scaling, such a relation would be especially useful for estimating the J-Factor for galaxies before they are followed up spectroscopically.
This will likely be especially true in the LSST era, during which many low-luminosity dSphs are predicted to be discovered \citep[e.g.][]{Hargis2014ApJ...795L..13H} and spectroscopic characterization and determination of dark matter properties may be difficult before 30-meter-class facilities become available. Future multi-object spectrographs on 10-meter-class telescopes such as the Maunakea Spectroscopic Explorer \citep{McConnachie2016arXiv160600060M, McConnachie2016arXiv160600043M} and the Prime Focus Spectrograph on Subaru \citep{Takada2014PASJ...66R...1T} will significantly improve follow-up kinematic surveys and therefore the J-Factors of current and future dwarf galaxies. For future spectroscopic follow-up of dwarfs, our scaling relations are useful for selecting the objects that will maximize improvements in the indirect detection of dark matter. \section*{Acknowledgments} We thank Josh Simon and Matt Walker for providing spectroscopic data. LES acknowledges support from DOE Grant DE-SC0010813. We acknowledge generous support from Texas A\&M University and the George P. and Cynthia Woods Institute for Fundamental Physics and Astronomy. We thank the referee for their careful reading of the paper that improved the presentation and strengthened the conclusions of the paper. Databases and software: Python packages: \texttt{Astropy}\footnote{\url{http://www.astropy.org}} \citep{astropy2013}, \texttt{NumPy} \citep{numpy}, \texttt{iPython} \citep{ipython}, \texttt{SciPy} \citep{scipy}, \texttt{matplotlib} \citep{matplotlib}, \texttt{emcee}~\citep{ForemanMackey2013PASP..125..306F}, and \texttt{corner}~\citep{corner}. This research has made use of NASA's Astrophysics Data System Bibliographic Services. Posterior chains and additional plots can be found at the following webpage: \url{https://github.com/apace7/J-Factor-Scaling}. \bibliographystyle{mnras}
\section{Entropy-Isomap} Standard Isomap does not work well for dynamic process data, since data points are typically closest to other data points from the same trajectory, yet the global structure of the process depends on relations between different trajectories. When the $k$-NN neighborhoods are computed, this can result in poor mixing. Worse, how much trajectories interact can change throughout the process. For example, when trajectories come from simulations with similar initial conditions, the trajectories might interact for a while, but then diverge to explore different parts of the state space. A neighborhood size $k$ that produces good results in early stages might produce poor results later on in the process. A value of $k$ that is large enough to work for all times might include so many data points that the geodesic and Euclidean distances become essentially the same, which results in PCA-like behavior, defeating the purpose of using Isomap. To address this situation, we propose to directly measure the amount of mixing and use it to adaptively change the neighborhood size for different data points. This mitigates the shortcomings of the two methods described in the previous section, which either discard data (subsampling) or lose local information (skipping). Figure~\ref{entropy_k} shows that neighborhood entropy increases when the next nearest neighbors are added. We propose using an entropy threshold to determine the neighborhood size $k$. This modification allows the flexibility of larger neighborhoods in regions where it is necessary or desired to force mixing between~trajectories. To prevent neighborhoods that are so large as to reduce Isomap to PCA, the maximum neighborhood size $M$ is left as a parameter. This check allows processing datasets which contain trajectories in poorly sampled regions of the state space without skewing the rest of the analysis, which would otherwise result in unreasonably large neighborhood sizes.
\begin{Algorithm}[0.45\textwidth] \caption{\textsc{Entropy-Isomap}} \begin{algorithmic}[1] \setstretch{0.975} \REQUIRE $\textbf{X}$, $k$ , $\hat{H}$, $M ( = 100 )$ \ENSURE $\textbf{Y}$ \STATE $\textbf{D}_{n\times n}$ $\leftarrow$ \textsc{PairwiseDistances}($\textbf{X}$) \STATE $\textbf{G}_{n\times n}$ $\leftarrow \infty$ \FORALL{$x_i \in \textbf{X}$} \STATE $k_i \leftarrow k$; $H \leftarrow 0$ \WHILE {$H < \hat{H}$ \textbf{and} $k_i < (M + k)$} \STATE $k_i \leftarrow k_i + 1$ \STATE \textbf{kNN} $\leftarrow$ \textsc{KNN}($x_i$, $\textbf{X}$, $k_i$) \STATE $\textbf{G}_{i, j} \leftarrow \textbf{D}_{i, j}$ where $x_j \in $ \textbf{kNN}. \STATE $H \leftarrow$ \textsc{NeighborhoodEntropy}($x_i$, $k_i$, $\bar{G_i}$) \ENDWHILE \ENDFOR \STATE $\textbf{F}_{n\times n} \leftarrow$ \textsc{AllPairsShortestPaths}($\textbf{G}$) \STATE $\textbf{Y} \leftarrow \textsc{MDS}(\textbf{F})$ \RETURN{$\textbf{Y}$} \end{algorithmic} \label{alg:ent_isomap} \end{Algorithm} The proposed {\sf Entropy-Isomap} algorithm is shown in Algorithm~\ref{alg:ent_isomap}. Compared to the standard approach, the algorithm takes an additional argument, the target entropy level $\hat{H}$. This parameter is used to decide when the adaptively computed neighborhoods are producing good mixing. The initial step, computing all pairwise distances for data points in ${\bf X}$, remains the same as in the standard algorithm. Then, the entropy-based neighborhood selection is performed~\mbox{(lines 3-9)}. For each point $x_i$, the algorithm proceeds with neighborhood size~$k_i$, initially equal to some default value $k$. The $k_i$-nearest-neighbors are identified, and their neighborhood entropy is computed (lines 7-9). If the entropy threshold $\hat{H}$ is not satisfied, then $k_i$ is incremented (line~6), and the process repeats. Once the entropy threshold is reached, or a user-defined maximum of $M$ additional neighbors have been added, the process terminates.
The entire process is repeated for each $x_i$, and after all neighborhoods have been identified, the algorithm continues in the same way as standard Isomap (lines 12-14). We note that our presentation of the algorithm is simplified for clarity. In a practical implementation, the size of the neighborhood $k_i$ can be found via a simple binary search, which can further be coupled with an efficient incremental $k$-NN solver, without the need to instantiate a complete distance matrix ${\bf D}$. We applied {\sf Entropy-Isomap} to our data with $k=8$ and the maximum number of steps $M=100$. We selected this large $M$ to compute the fraction of large neighborhoods that would be required to strictly enforce mixing, in this case nearly 5\% of our data. We also varied the entropy threshold $\hat{H}$ from $0.1$ to $0.9$ to explore the effect it has on the neighborhood size distribution. An example low-dimensional representation obtained by {\sf Entropy-Isomap} is presented~in~Figure~\ref{eiso_iso3d_ngbr}~(a). \begin{figure*}[htbp!] \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=.9\textwidth,trim={4cm 1.5cm 1.75cm 1.5cm},clip]{plot_datafftR_Ent_0_3_Isomap.png} \vspace{-0.25cm} \caption{} \end{subfigure} \begin{subfigure}[t]{0.495\textwidth} \includegraphics[width=0.99\textwidth]{ent_3_ngbr.pdf} \vspace{-0.25cm} \caption{} \end{subfigure} \vspace{-0.25cm} \caption{Entropy-Isomap with $k=8$, $\hat{H}=0.3$ is run on $\chi = 3.0$ and variable $\phi\in\{0.50, 0.52, 0.54, 0.56, 0.58, 0.6\}$. (a)~The learned mapping is used to transform the data to 3-dimensions. (b)~Neighborhood cross-mixing: for each trajectory~$\Gamma$, neighbors of each point belonging to individual trajectories are aggregated and shown in stacked bar graph form. (Please~view~in~color).} \label{eiso_iso3d_ngbr} \end{figure*} We start our analysis by observing that, in the experiments, high entropy thresholds were often not reachable (see Figure~\ref{entropy_vs_t_eiso}).
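The adaptive selection loop described above can be sketched in a few lines of NumPy. Two assumptions are made for illustration: the neighborhood entropy is taken to be the Shannon entropy of trajectory membership among a point's neighbors, normalized to $[0,1]$ (the exact definition appears in an earlier section), and the helper names are ours.

```python
import numpy as np

def neighborhood_entropy(labels, n_traj):
    """Shannon entropy of trajectory membership in a neighborhood,
    normalized by log2(n_traj) so values lie in [0, 1] (assumed form)."""
    counts = np.bincount(labels, minlength=n_traj).astype(float)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(n_traj)) if n_traj > 1 else 0.0

def adaptive_neighborhood_sizes(D, traj, k=8, H_hat=0.3, M=100):
    """For each point, grow the neighborhood from the default k until the
    entropy threshold H_hat is met or M extra neighbors have been added
    (the selection loop of Entropy-Isomap, lines 3-9)."""
    n_traj = int(traj.max()) + 1
    order = np.argsort(D, axis=1)        # column 0 is the point itself
    sizes = np.empty(len(D), dtype=int)
    for i in range(len(D)):
        k_i = k
        while True:
            H = neighborhood_entropy(traj[order[i, 1:k_i + 1]], n_traj)
            if H >= H_hat or k_i >= k + M:
                break
            k_i += 1
        sizes[i] = k_i
    return sizes
```

Points whose size hits the cap $k+M$ without reaching $\hat{H}$ are exactly the non-mixing points discussed below: candidates for poorly sampled regions of the state space.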
We believe that this is because the $k$ nearest neighbors of the majority of points are in the same trajectory (see Figure~\ref{chi300_fftR_sortedpwdheat}), which leads to skewed neighborhood distributions. As a result, even when a satisfactory number of neighbors come from other trajectories, the entropy of the neighborhood might be low. Figure~\ref{eiso_iso3d_ngbr}~(b) shows that even when trajectories mix, the majority of neighbors are still from the same trajectory. Therefore, high entropy implies good mixing, but the converse is not necessarily true: large neighborhoods could produce mixing while still having low entropy. Since large neighborhoods produce poor results with Isomap, we would like to avoid them in any case. In practice, we achieved good results with entropy thresholds in the range of $\hat{H} = 0.30-0.40$, which were achievable by a large fraction of neighborhoods. This is further confirmed by the experiments in Section~\ref{sec:app}. When strictly enforcing entropy, the neighborhood sizes can become too large. Figure~\ref{entropy_iso_kvals} shows the neighborhood size distribution for $\hat{H} = 0.30$. When such neighborhoods are included for many points in the dataset, the neighborhood graph tends toward a completely connected graph, and the Isomap solution reduces to the PCA result. Recall that PCA is equivalent to classical MDS and that classical MDS is Isomap with $k=n-1$. \begin{figure} \centering \includegraphics[]{plot_datafftR_Ent_0_3_kVals.pdf} \caption{Entropy-Isomap with default $k=8$ and entropy threshold $\hat{H} = 0.3$ run for data with $\chi = 3.0$ and variable $\phi\in\{0.50, 0.52, 0.54, 0.56, 0.58, 0.6\}$. The distribution of selected neighborhood sizes $k_i$ that did not reach the maximum $M=100$ is shown.} \label{entropy_iso_kvals} \end{figure} Points that produce no mixing also end up with large neighborhoods, as Entropy-Isomap tries to increase $k_i$ in order to meet the entropy threshold.
These points occur when the dataset does not contain enough trajectories that pass near those particular states to produce good geodesic distance estimates. Interestingly, plotting entropy versus time in Figure~\ref{entropy_vs_t_eiso} reveals that trajectories can pass through poorly sampled parts of the state space and later ``meet up'' with other trajectories again. The proposed methods can be used to detect trajectories that do not interact, and also to identify which regions of the state space are poorly sampled. This information can be used either to remove such trajectories from the dataset or as a guide for deciding where to collect more process data. \begin{figure} \centering \includegraphics[]{ent_vs_t_eiso.pdf} \caption{Entropy-Isomap with default $k=8$ and $\hat{H}=0.3$ is run on $\chi = 3.0$ and variable $\phi\in\{0.50, 0.52, 0.54, 0.56, 0.58, 0.6\}$. The entropy of the discovered neighborhood is shown for the data point at each time step. (Please view in color).} \label{entropy_vs_t_eiso} \end{figure} \section{Introduction} The vast majority of current big data, especially data coming from high-performance high-fidelity numerical simulations and high-resolution scientific instruments, is the result of complex non-linear processes. While these non-linear processes can be characterized by low-dimensional sub-manifolds, the actual observable data they generate is high-dimensional. This means that the resulting data can be represented more concisely by using a latent state, and, more importantly, that physical processes described by the observed data might be better understood by discovering their underlying low dimensionality. Our focus in this work is on the second point; specifically, we propose a novel method for dimensionality reduction of {\it process data}. Here, process data means any data that represents the evolution of some process states over time (see for example Fig.~\ref{fig:morphs}). While such data are ubiquitous, they are challenging for current dimensionality reduction techniques.
This is because the input data sets are large, as each sample of a process delivers a time series of high-dimensional points; the underlying processes are highly non-linear, which rules out many methods that would otherwise be computationally feasible; and, finally, the individual data points are sampled in a highly correlated way, which can easily confuse many dimensionality reduction~techniques. In our prior work, we developed {\sf S-Isomap}, a spectral dimensionality reduction technique for non-linear big data streams~\cite{Schoeneman2017} that addresses two of the above challenges. The method can efficiently and reliably handle large non-linear data sets, but assumes that the input data is weakly correlated. Consequently, it fails when applied directly to process data. {\sf S-Isomap} is derived from the standard Isomap algorithm~\cite{Tenenbaum2000}, which is frequently used and favored in scientific computing data analysis~\cite{Lim2003,Dawson2005,Zhang2006,Rohde2008,Ruan2014,Strange2014,Samudrala2015}. Unfortunately, while there is some prior work on applying Isomap to spatio-temporal data~\cite{Jenkins2004}, the focus has been on segmentation of data trajectories rather than on discovering a continuous latent state. To the best of our knowledge, there are currently no spectral methods that can handle high-dimensional process data.
\begin{figure*}[t] \centering \scriptsize \begin{tabular}[t]{lcccc} \raisebox{\height}{$\Gamma_1 = \Gamma(\phi = 0.6, \chi = 2.2)$} & \includegraphics[scale=0.7]{BR60_CHI22_dataCH_0000_p.pdf} & \includegraphics[scale=0.7]{BR60_CHI22_dataCH_0020_p.pdf} & \includegraphics[scale=0.7]{BR60_CHI22_dataCH_0080_p.pdf} & \includegraphics[scale=0.7]{BR60_CHI22_dataCH_0150_p.pdf}\\ \raisebox{\height}{$\Gamma_2 = \Gamma(\phi = 0.6, \chi = 3.0)$} & \includegraphics[scale=0.7]{BR60_CHI30_dataCH_0000_p.pdf} & \includegraphics[scale=0.7]{BR60_CHI30_dataCH_0020_p.pdf} & \includegraphics[scale=0.7]{BR60_CHI30_dataCH_0080_p.pdf} & \includegraphics[scale=0.7]{BR60_CHI30_dataCH_0150_p.pdf}\\ \raisebox{\height}{$\Gamma_3 = \Gamma(\phi = 0.5, \chi = 3.0)$} & \includegraphics[scale=0.7]{BR50_CHI30_dataCH_0000_p.pdf} & \includegraphics[scale=0.7]{BR50_CHI30_dataCH_0020_p.pdf} & \includegraphics[scale=0.7]{BR50_CHI30_dataCH_0080_p.pdf} & \includegraphics[scale=0.7]{BR50_CHI30_dataCH_0150_p.pdf}\\ & \textsf{Time step} $0$ & \textsf{Time step} $20$ & \textsf{Time step} $80$ & \textsf{Time step} $150$\\ \end{tabular} \caption{Sample of high-dimensional dynamic process data with three trajectories and 12 points. Each trajectory $\Gamma_I$ corresponds to a different variant of the organic thin film fabrication process (described by parameters $\phi$ and $\chi$). Each image is a high-dimensional point capturing material morphology (different colors represent different types of polymer making up the material). Each trajectory describes morphology evolution over time. (Please view in color).}\label{fig:morphs} \end{figure*} The current work is motivated by the need to analyze massive and high-dimensional data sets generated from highly non-linear differential equations modeling material morphology evolution during the fabrication process of organic thin films (see Section~\ref{sec:datagen}).
The fabrication of organic thin films is a key factor controlling the properties of organic electronics, including transistors, batteries, and displays, but is computationally expensive and difficult to model precisely. Depending on the fabrication parameters, different process trajectories are possible, leading to different material properties. Scientists and engineers are interested in using dimensionality reduction on the resulting big data to explore the material design space and optimize the fabrication to make devices with desired properties. In this paper, we first show that linear techniques, such as Principal Component Analysis (PCA)~\cite{Pearson1901}, overestimate the latent dimension of the process, and that dimensionality reduction techniques that assume uniformity in sampling, including non-linear strategies, fail due to the highly correlated nature of process data. We hypothesize that the poor performance of non-linear techniques is related to a lack of {\em mixing} (or {\em cross-talk}) between different trajectories of a process, and present a remedy based on data resampling. Our main contribution is the concept of the {\em neighborhood entropy} of a point, which indicates mixing between process trajectories. We use neighborhood entropy to adaptively size the neighborhoods when computing the geodesic distance approximation in Isomap. The resulting dimensionality reduction method is both easy to implement and effective, and could likely be extended to other spectral dimensionality reduction~approaches. \section{Preliminaries} \subsection{Spectral Dimensionality Reduction}\label{sec:sdr} Spectral Dimensionality Reduction (SDR) refers to a class of methods used to identify low-dimensional structure in high-dimensional data. For a given set of points, ${\bf X}$, in a high-dimensional space~$\mathbb{R}^D$, SDR methods work by computing either the top or bottom eigenvectors (eigendecomposition) of a feature matrix derived from~${\bf X}$.
Here, the feature matrix, ${\bf F}$, captures the structure of the data through some selected property (e.g., pairwise distances). The SDR methods rely on the assumption that there exists a function $f:\mathbb{R}^d \rightarrow \mathbb{R}^D$, $d \leq D$, that maps the low-dimensional representation, $y_i \in \mathbb{R}^d$, of each data sample to the observed $x_i \in \mathbb{R}^D$. The goal then becomes to learn the inverse mapping, $f^{-1}$, that can be used to map high-dimensional $x_i$ to low-dimensional $y_i$. The SDR methods are frequently categorized based on the assumption they make about $f$ (i.e., linear vs. non-linear). Linear methods assume that the data lie on a low-dimensional subspace $\mathbb{V}^{d}$ of $\mathbb{R}^D$, and construct a set of basis vectors representing the mapping. However, in many cases the data is complex, and the linearity assumption is too restrictive. In such cases, non-linear methods work under the assumption that the data is sampled from some low-dimensional submanifold $M^{d}$, embedded within $\mathbb{R}^{D}$. Under the linearity assumption, the most commonly used methods are PCA and Multidimensional Scaling (MDS)~\cite{Cox2000}. PCA learns the subspace that best preserves covariance. The basis vectors learned by PCA, known as principal components, are the directions along which the data has the highest variance. The data $\textbf{X}$ can be transformed by first computing the covariance matrix, $\textbf{F}_{D\times D}$. In the case of MDS, the feature matrix $\textbf{F}_{n\times n}$ contains some pairwise relationships between data points in $\textbf{X}$. When these relationships are Euclidean distances, the result is equivalent to that of PCA, and this is known as classical MDS. Spectral decomposition of \textbf{F} in both methods yields eigenvectors \textbf{Q}. Taking the top $d$ eigenvectors, the data can be mapped to low dimensions as $\textbf{Y}$ by the transformation $\textbf{Y} = \textbf{X}\textbf{Q}_{d}$.
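The PCA path just described (covariance feature matrix $\textbf{F}$, eigendecomposition, projection $\textbf{Y}=\textbf{X}\textbf{Q}_{d}$) can be sketched in a few lines; this is a minimal illustration and the function name is ours:

```python
import numpy as np

def pca_transform(X, d):
    """Map X (n x D) to d dimensions: build the covariance feature matrix
    F (D x D), eigendecompose it, and project onto the top-d eigenvectors
    Q_d, i.e. Y = X Q_d."""
    Xc = X - X.mean(axis=0)                  # center the data
    F = np.cov(Xc, rowvar=False)             # covariance feature matrix
    w, Q = np.linalg.eigh(F)                 # eigenvalues in ascending order
    Qd = Q[:, np.argsort(w)[::-1][:d]]       # top-d principal components
    return Xc @ Qd
```

For data lying near a one-dimensional subspace of $\mathbb{R}^3$, a single component captures essentially all of the variance, which is exactly the subspace-recovery behavior the text attributes to linear methods.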
\begin{Algorithm}[0.45\textwidth] \caption{\textsc{Isomap}} \begin{algorithmic}[1] \REQUIRE $\textbf{X}$, $k$ \ENSURE $\textbf{Y}$ \STATE $\textbf{D}_{n\times n}$ $\leftarrow$ \textsc{PairwiseDistances}($\textbf{X}$) \STATE $\textbf{G}_{n\times n}$ $\leftarrow \infty$ \FOR{$x_i \in \textbf{X}$} \STATE {\bf kNN} $\leftarrow$ \textsc{KNN}($x_i$, $\textbf{X}$, $k$) \FOR{$x_j \in $ {\bf kNN}} \STATE $\textbf{G}_{i, j} \leftarrow \textbf{D}_{i, j}$ \ENDFOR \ENDFOR \STATE $\textbf{F}_{n\times n}\leftarrow $ \textsc{AllPairsShortestPaths}($\textbf{G}$) \STATE $\textbf{Y} \leftarrow \textsc{MDS}(\textbf{F})$ \RETURN{$\textbf{Y}$} \end{algorithmic} \label{alg:isomap} \end{Algorithm} When the data is assumed to be generated by some non-linear process, neither PCA nor MDS is robust enough to learn the inverse mapping $f^{-1}$. Although variants of PCA have been proposed to address such situations (e.g., Kernel PCA~\cite{Scholkopf1998}), the most common approach is to use Isomap~\cite{Tenenbaum2000}. Isomap constructs the feature matrix by approximating distances between input points along the manifold $M^{d}$, and then proceeds as regular MDS. This is accomplished in four steps, as shown in Algorithm~\ref{alg:isomap}. First, all $n^{2}$ pairwise distances are computed for points in $\textbf{X}$. Then geodesic distances along the manifold are approximated (lines 3--6) by first constructing a neighborhood graph, \emph{G}, in which each point $x \in \textbf{X}$ is adjacent to its $k$-nearest-neighbors, and then by computing shortest paths between all points in~\emph{G} (line 7). The resulting geodesic approximations are contained in the feature matrix \textbf{F}, which is processed by MDS to yield the final low-dimensional transformation. \subsection{Dynamic Process Data}\label{sec:dynamicprocess} When a dataset $\textbf{X} \in \mathbb{R}^D$ represents a dynamic process, the points in $\textbf{X}$ are partitioned into $T$ trajectories, $\Gamma_1$, $\Gamma_2$,..., $\Gamma_T$.
Each trajectory $\Gamma_I$ is given by a $\tau$-parameterized sequence of $m_I$ data points. In other words, $\Gamma_I$ = ($x_I(\tau_1)$, $x_I(\tau_2)$,\ldots, $x_I(\tau_{m_I})$), where $\tau_i < \tau_j$ when $i<j$. Parameter $\tau$ usually denotes time, and trajectory $\Gamma_I$ can be a function of one or more additional~parameters. In this work, we investigate the use of SDR methods in the analysis of dynamic processes. As a representative example, we use a numerical simulation of material morphology evolution during fabrication of organic thin films~\cite{WodoBaskar2012-CmpMatSc}. The input data consists of trajectories $\Gamma = \Gamma(\phi, \chi)$, where each trajectory is a function of two variables corresponding to two fabrication parameters (see Fig.~\ref{fig:morphs}): $\phi$,~which denotes the blend ratio of the polymers making up the organic film, and~$\chi$, which denotes the strength of interaction between these polymers. Each data point $x(\tau)$ is an image representing one morphology snapshot, generated by a complex non-linear differential equation solver modeling morphology evolution in time. Each image is then represented by a high-dimensional vector in $\mathbb{R}^D$, obtained by simple processing of image pixels. Example morphologies from selected trajectories are shown in Fig.~\ref{fig:morphs}, and we give a detailed description of the data and the data generation process in Section~\ref{sec:app}. The main challenge in analyzing the temporal morphology evolution data comes from the inherent bias in the exploration of possible states of the fabrication process. In essence, sampling in $\tau$ (i.e., time) is commonly unbalanced: it is much denser than sampling in the parameters $\phi$ and $\chi$. This is because the computational cost of executing the solver to generate a single trajectory (i.e., sampling in $\tau$) is too high to allow for exhaustive sampling in the space formed by the parameters $\phi$ and $\chi$.
Furthermore, data points in the same trajectory have high temporal correlation, which is reflective of how morphologies evolve. These factors strongly influence the connectivity of the neighborhood graph, $G$, and in turn affect the approximation of the manifold distances. \section{Challenges in Using SDR with Dynamic Process Data} The standard off-the-shelf approach to perform dimensionality reduction on large data is PCA. However, if the method is applied without taking into consideration the underlying assumption of data linearity, it delivers highly misleading results. Here we study the effectiveness of both PCA and Isomap when dealing with dynamic process data. \subsection{Effectiveness of State of the Art SDR Methods} A reliable way of determining the quality of the low-dimensional representation (mapping) produced by each method is to compare the original data $\textbf{X}$ in $\mathbb{R}^D$ with the mapped data $\textbf{Y}$ in $\mathbb{R}^d$, by computing the {\em residual variance}. The process of computing residual variance for PCA differs from that for Isomap, but the values are directly comparable. In PCA, each principal component (PC) explains a fraction of the total variance in the dataset. If we consider $\lambda_{i}$ as the eigenvalue corresponding to the $i^{th}$ PC and $\vert\Lambda\vert$ as the total energy in the spectrum, i.e., $\vert \Lambda \vert = \sum_{i=1}^D \lambda_i$, then the variance explained by the $i^{th}$ PC can be computed as $\frac{\lambda_{i}}{\vert\Lambda\vert}$. The residual variance can be calculated~as: \begin{equation}\label{resvardef} R = 1 - \sum_{i=1}^{d}\frac{\lambda_{i}}{\vert\Lambda\vert}.
\end{equation} In the Isomap setting, residual variance is computed by comparing the approximate pairwise geodesic distances, computed in \emph{G} and represented by matrix $\textbf{D}_{G}$ (recall that $G$ is a neighborhood graph), to the pairwise distances of the mapped data $\textbf{Y}$, represented by matrix $\textbf{D}_{Y}$: \begin{equation}\label{Iso_resvardef} R = 1 - \rho(\textbf{D}_{G}, \textbf{D}_{Y})^{2}. \end{equation} Here, $\rho$ is the standard linear correlation coefficient, taken over all entries of ${\bf D}_G$ and ${\bf D}_Y$. In the first step of our analysis, we compared the residual variance obtained using PCA and Isomap on the material morphology evolution process data (see Sections~\ref{sec:dynamicprocess} and~\ref{sec:app}) consisting of six different trajectories, each trajectory corresponding to a unique configuration of the pair $(\phi, \chi)$. Figure~\ref{chi300scree} summarizes our findings for PCA and Isomap. From the figure, we can see that PCA is unable to learn an effective low-dimensional mapping. In fact, while Isomap is able to explain about 70\% of the variance using 3 dimensions, PCA requires more than 9 dimensions. Here we note that the ability to explain most of the information in the data in two or three dimensions is highly desired by domain experts, as it permits data visualization and exploratory analysis. \begin{figure}[ht] \centering \includegraphics[]{screes_chi300.pdf} \caption{Isomap and PCA run on data with six trajectories for $\chi = 3.0$ and $\phi\in\{0.50, 0.52, 0.54, 0.56, 0.58, 0.6\}$. The quality of the Isomap manifold and PCA subspace are assessed using residual~variance.} \label{chi300scree} \end{figure} \begin{figure*}[htbp!]
\begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=.99\textwidth,trim={3.75cm 1.25cm 1.75cm 1.75cm},clip]{chi300_pca3d_fftR.png} \caption{\label{chi300_fftR_pca}} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=0.99\textwidth,trim={2.25cm 1.25cm 1.75cm 1.75cm},clip]{chi300_isomap3d_fftR.png} \caption{\label{chi300_fftR_iso}} \end{subfigure} \begin{subfigure}[t]{0.32\textwidth} \includegraphics[width=0.99\textwidth,trim={3.70cm 1.5cm 1.75cm 1.5cm},clip]{chi300_isomap3d_fftR_early.png} \caption{\label{chi300_fftR_early_iso3d}} \end{subfigure} \caption{ Six trajectories with fixed $\chi = 3.0$ and variable $\phi\in\{0.50, 0.52, 0.54, 0.56, 0.58, 0.6\}$ were selected to learn the mapping and transform the data to 3 dimensions using (a) PCA, (b) Isomap with $k=8$, and (c) Isomap with $k=8$ using only the first 30 time steps of each pathway. (Please view in color). } \label{chi300_sdr_figs} \end{figure*} To visualize the data, we used both PCA and Isomap to map the data to $d=3$ dimensions. The results are shown in Figs.~\ref{chi300_fftR_pca} and~\ref{chi300_fftR_iso}. For PCA, one of the dimensions ($D1$) describes the time aspect of the process evolution. However, the PCA visualization does not offer additional insights into the process, which we attribute primarily to PCA's inability to capture non-linearities. Since Isomap outperforms PCA in terms of residual variance, it is expected that the 3-dimensional data obtained from Isomap would offer more meaningful insights. However, as shown in Fig.~\ref{chi300_fftR_iso}, all trajectories diverge from one another in $3$ dimensions, and there is no reasonable interpretation of the empty space. This indicates that the standard application of Isomap is inadequate when working with parameterized high-dimensional time series data. We note that we obtained equally unsatisfactory results with other methods, including t-SNE~\cite{Maaten2008} and LLE~\cite{Roweis2000}.
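The two residual variance computations, Eqs.~(\ref{resvardef}) and~(\ref{Iso_resvardef}), can be sketched as follows. This is our own illustrative code; \texttt{evals} are the PCA eigenvalues, and \texttt{D\_G}, \texttt{D\_Y} are the distance matrices defined in the text:

```python
import numpy as np

def residual_variance_pca(evals, d):
    """1 minus the fraction of total spectrum energy explained by the top-d eigenvalues."""
    lam = np.sort(np.asarray(evals, dtype=float))[::-1]
    return 1.0 - lam[:d].sum() / lam.sum()

def residual_variance_isomap(D_G, D_Y):
    """1 - rho^2, where rho is the linear correlation over all matrix entries."""
    rho = np.corrcoef(D_G.ravel(), D_Y.ravel())[0, 1]
    return 1.0 - rho ** 2

# usage with a toy eigenvalue spectrum: top two PCs explain 8/10 of the energy
r = residual_variance_pca([5.0, 3.0, 1.0, 1.0], 2)  # ~0.2
```

Because both quantities are "1 minus the fraction of structure explained", the PCA and Isomap values can be plotted on the same axes, as in Fig.~\ref{chi300scree}.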
\subsection{Standard Isomap and Dynamic Process Data} To further study the reason behind Isomap's performance, we focus on the initial stage of the trajectories, where the morphologies are expected to evolve in a similar fashion. This is reflected in the Isomap visualization in Fig.~\ref{chi300_fftR_iso}, where all trajectories appear to start from a common point in the 3-dimensional space and then diverge. We applied Isomap on only the early stage data, represented by the first $30$ time steps of each trajectory (threshold selected by the domain expert). The results are shown in Fig.~\ref{chi300_fftR_early_iso3d}, where we can clearly observe that the early data points for all trajectories cluster together before quickly diverging. This leads us to the first key observation of this paper: When dealing with dynamic process data, in which the data points exhibit a strong temporal correlation within the trajectory to which they belong, but are different from data points that belong to other trajectories, Isomap cannot capture the relationships across different trajectories. Thus, the resulting mapping is dominated by the time dimension, as can be seen in Fig.~\ref{chi300_fftR_iso}. This behavior can be attributed to how neighbors are selected for each point (see Algorithm~\ref{alg:isomap}, line 4). To better illustrate the point, consider Fig.~\ref{chi300_fftR_pwdheat}, which shows the matrix, ${\bf D}$, containing the distance between every pair of points, with rows and columns ordered by trajectory and time. In Fig.~\ref{chi300_fftR_sortedpwdheat}, the same row ordering is retained; however, each row contains the sorted distances of the corresponding point to all points in the dataset, colored by the trajectory to which they belong. Both figures show that for the majority of the points, the first several nearest neighbors are always from the same trajectory.
This is problematic, because the ability of Isomap to learn an accurate description of the underlying manifold depends on how well the neighborhood matrix captures the relationship {\em across} the trajectories. We refer to this relationship as {\em cross-talk}, or {\em mixing}, among the trajectories. For any given point, the desired effect would be that the nearest neighborhood set contains points from multiple trajectories. However, the sorted neighborhood matrix indicates a lack of mixing, which essentially means that the Isomap algorithm does not consider information from other trajectories when learning the shape of the manifold in the neighborhood of one~trajectory. \begin{figure*}[htbp!] \begin{subfigure}[t]{0.49\textwidth} \centering\captionsetup{width=.95\linewidth}% \includegraphics[]{chi300_PwdHeat_fftR.pdf} \caption{\label{chi300_fftR_pwdheat}\ Pairwise distance matrix for all data grouped by $\phi$ and ordered by time step along both axes. Distances in subblocks along the main diagonal denote inter-point distances within a fixed $\phi$-value trajectory. Off-diagonal subblocks highlight distances between points lying on disjoint trajectories.} \end{subfigure} \begin{subfigure}[t]{0.49\textwidth} \centering\captionsetup{width=.95\linewidth}% \includegraphics[]{chi300_sortedPwdHeat_fftR.pdf} \caption{\label{chi300_fftR_sortedpwdheat}\ Distance matrix with rows grouped by $\phi$ and ordered by time step. Entries in row $i$ are sorted by increasing distance from $x_i$ and colored according to their $\phi$ value. Clusters of similar color nearest the left edge reflect $k$-nearest neighborhoods sharing a common $\phi$-value, even for relatively large $k$.} \end{subfigure} \caption{Pairwise distances of all points with $\chi = 3.0$ from six trajectories for $\phi\in\{0.50, 0.52, 0.54, 0.56, 0.58, 0.6\}$, visualized in two ways.
(Please view in color).} \label{chi300_heatmaps} \end{figure*} \subsection{Quantifying Trajectory Mixing} To better assess the quality of neighborhoods and understand the mixing of trajectories, we use the information-theoretic notion of entropy. For a given point $x$, let $p_i$ be the fraction of the $k$ closest neighbors of $x$ that lie on the trajectory $\Gamma_i$. Then, the entropy of the $k$-neighborhood of point $x$ is calculated as: \begin{equation}\label{entropy_eqn} H^k_x = \sum_{p_{i}\neq 0}{-p_{i} \log_2 p_{i}}. \end{equation} Similarly, we can define the $k$-neighborhood entropy for a trajectory $\Gamma$ as the average of the $k$-neighborhood entropy over all points on $\Gamma$. When the neighborhood entropy of a point is high, its nearest neighbors are uniformly distributed across all trajectories (high level of mixing). On the other hand, if the entropy of a point is low, its nearest neighbors mostly lie on a single trajectory (low level of mixing). Thus, neighborhood entropy measures the level of mixing across the trajectories, for a given neighborhood size, $k$. \begin{figure}[!ht] \centering \includegraphics[]{entropy_k.pdf} \caption{Neighborhood entropy of different trajectories as a function of $k$ ($\chi = 3.0$ and $\phi\in\{0.50, 0.52, 0.54, 0.56, 0.58, 0.6\}$).} \label{entropy_k} \end{figure} \subsection{Strategies for Inducing Trajectory Mixing} One simple way to induce more trajectory mixing is to increase $k$, since this would increase the neighborhood entropy of the points. Fig.~\ref{entropy_k} shows the average neighborhood entropy for each of the six trajectories in the data set, for different values of $k$. The neighborhood entropy increases linearly with $k$, consistently for all six trajectories. Thus, for a small value of $k$, Isomap is unable to obtain a meaningful low-dimensional representation, as evident in Fig.~\ref{chi300_fftR_iso}, where $k$ was set to 8.
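The neighborhood entropy of Eq.~(\ref{entropy_eqn}) can be computed as in the following sketch. This is our own illustrative code; it assumes the query point is a row of \texttt{X} and that \texttt{labels[i]} gives the trajectory index of point~\texttt{i}:

```python
import numpy as np

def neighborhood_entropy(x, X, labels, k):
    """k-neighborhood entropy H^k_x of point x over the trajectory labels."""
    dists = np.linalg.norm(X - x, axis=1)
    knn = np.argsort(dists)[1:k + 1]        # nearest neighbors, skipping x itself
    _, counts = np.unique(labels[knn], return_counts=True)
    p = counts / k                          # p_i: fraction of neighbors per trajectory
    return float(-(p * np.log2(p)).sum())   # trajectories with p_i = 0 never appear
```

The $k$-neighborhood entropy of a whole trajectory is then simply the mean of \texttt{neighborhood\_entropy} over its points.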
Figure~\ref{entropy_k} shows that using a large $k$ could result in the desired level of trajectory mixing. However, as discussed in the original Isomap paper~\cite{Tenenbaum2000}, the error between the true geodesic distance on the manifold between a pair of points and the approximate distance calculated using Dijkstra's algorithm (see Algorithm~\ref{alg:isomap}) grows with $k$. For large $k$, Isomap is essentially reduced to PCA, and is unable to capture the non-linearities in the underlying manifold. Another strategy to induce trajectory mixing is {\em subsampling}, i.e., selecting a subset of points from a given trajectory. However, this reduces the amount of data, which yields poor results. Alternatively, we could use {\em skipping} in the neighborhood selection, i.e., for a given point, skip the $s$ nearest points before including points in the neighborhood. Unfortunately, in experimenting with the skipping and subsampling approaches, we experienced a loss in local manifold quality or data size, respectively. Based on this, and the desirability of preserving both local and global manifold quality, we propose an entropy-driven approach in the next section. \input{entropy-isomap-section} \section{Application}\label{sec:app} The current work is motivated by the need to analyze and understand big data sets arising in the manufacturing of Organic Electronics (OE). OE is a new sustainable class of devices, spanning organic transistors~\cite{osc-transistors,osc-transistors-review}, organic solar cells~\cite{HS04,osc-book}, diode lighting~\cite{oled-review1,oled-review2}, flexible displays~\cite{displays}, integrated smart systems such as RFIDs~\cite{organic-rfdis,organic-rfdis2}, smart textiles~\cite{smartTextiles}, artificial skin~\cite{organicSkin}, and implantable medical devices and sensors~\cite{lochner2014all, zhu2014photoreconfigurable}.
The critical and highly desired feature of OE is inexpensive, rapid, and low-temperature roll-to-roll fabrication. However, many promising OE technologies are bottlenecked at the manufacturing stage -- more precisely, at efficiently choosing fabrication pathways that would lead to the desired material morphologies, and hence device properties. Final properties of OE (e.g., electrical conductivity) are a function of more than a dozen material and process variables that can be tuned (e.g., evaporation rate, blend ratio of polymers, final film thickness, solubility, degree of polymerization, atmosphere, shearing stress, chemical strength and frequency of patterning substrate), leading to a combinatorial explosion of manufacturing variants. Because the standard trial-and-error approach, in which many prototypes are manufactured and tested, is too slow and cost-inefficient, scientists are investigating {\em in silico} approaches. The idea is to describe the key physical processes via a set of differential equations, and then perform high-fidelity numerical simulations to capture the process dynamics in relation to input variables. The problem then becomes to identify and simulate some initial set of manufacturing variants, and use analytics of the resulting process data to first understand the process dynamics (e.g., rate of change in domain size, or transition between different morphological classes), and then identify new promising manufacturing~variants. \subsection{Data Generation} \label{sec:datagen} The material morphology data analyzed in this paper has been generated by a computational model based on the phase-field method to record the morphology evolution during thermal annealing of the organic thin films~\cite{WodoBaskar2012-CmpMatSc,WodoBaskar2011-JCP}. We focused on the exploration of two manufacturing parameters, blend ratio $\phi$ and strength of interaction $\chi$.
We selected these two parameters since they are known to strongly influence the properties of the resulting morphologies. For each fabrication variant ($\phi,\chi$), we generated a series of morphologies that together formed one trajectory $\Gamma(\phi, \chi)$. We selected the range of our design parameters, $\phi=[0.5,0.6]$ and $\chi=[2.2,3.0]$, to explore several factors. First, we are interested in two stages of the process: early materials phase separation and coarsening. Moreover, we would like to explore various topological classes of morphologies. In particular, we are interested in identifying fabrication conditions leading to interpenetrated structures. Finally, we seek to find the optimal annealing time that results in desired material domain sizes. In total, we generated 16 trajectories, with $180$ morphologies on average per trajectory. Each morphology was represented as an image converted into a $40,000$-dimensional space defined by pixel composition~values. \subsection{Results} From the manufacturing design perspective, there are two basic aims for dimensionality reduction of morphological pathways. First, we seek to discover the common latent variables driving the dynamic process. Second, we seek to learn the geometry of the manifold to devise the subsequent round of input parameter space exploration. Figures~\ref{results:early} and~\ref{results:late} depict the three-dimensional manifold discovered using {\sf Entropy-Isomap} for the complete set of 16 pathways. When mapped to the manifold, the pathways show an ordering according to the process variables that were varied to generate the data. In both figures, for easier inspection, we marked the pathways according to one varying variable. For example, the top row in Fig.~\ref{results:early} depicts the pathways for fixed $\phi$ and varying $\chi$. Pathways for increasing $\chi$ are ordered from right (dark) to left (light), while pathways for increasing $\phi$ are ordered from front (green) to back (blue).
The observed ordering of pathways strongly indicates that these variables are also latent variables controlling the dynamic process. More importantly, the ordering reveals that denser sampling is required in the $\phi$ space. Specifically, the pathways sharing the same $\chi$ but varying $\phi$ are spread further apart than those sharing the same $\phi$ but varying $\chi$. This observation has important implications for the design of the next round of exploration in the design space. In particular, the $\phi$ space offers higher exploration benefits, while the $\chi$ space offers a better chance for exploitation. This suggests that the $\phi$ space should be explored first, followed by a potential exploitation phase. Finally, we notice that {\sf Entropy-Isomap} mapped the data into two regions. The early stages of the process are mapped to evolve in the radial direction, while the late stages are mapped parallel to each other. This is interesting, as the underlying process indeed has two inherent time scales. In the early stage, the phase separation between the two polymers occurs. During this stage, the changes mostly result in an increase of the composition amplitude. In the second stage, the coarsening of already formed domains occurs. Here, the amplitude of the composition (signal) does not change significantly. The changes mostly occur in the frequency space, with the domain sizes increasing over time.
\begin{figure*} \centering \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{EarlyStage-30pts-coloringByPhi050.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{EarlyStage-30pts-coloringByPhi052.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{EarlyStage-30pts-coloringByPhi054.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{EarlyStage-30pts-coloringByPhi056.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{EarlyStage-30pts-coloringByChi24.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{EarlyStage-30pts-coloringByChi26.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{EarlyStage-30pts-coloringByChi28.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{EarlyStage-30pts-coloringByChi30.png} \caption{The manifold of the early stage of the morphology evolution, with the first 30 points of each trajectory. To better illustrate the discovered ordering by the two variables, we color coded the same manifold by increasing $\phi$ (top) and $\chi$ (bottom).
(Please view in color).} \label{results:early} \end{figure*} \begin{figure*} \centering \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{LateStage-80pts-coloringByPhi050.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{LateStage-80pts-coloringByPhi052.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{LateStage-80pts-coloringByPhi054.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{LateStage-80pts-coloringByPhi056.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{LateStage-80pts-coloringByChi24.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{LateStage-80pts-coloringByChi26.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{LateStage-80pts-coloringByChi28.png} \includegraphics[width=.24\textwidth,trim={3cm 1.5cm 1.75cm 1.5cm},clip]{LateStage-80pts-coloringByChi30.png} \caption{The manifold of the late stage, with the first 80 points of each trajectory. The same manifold is color coded by increasing $\phi$ (top) and $\chi$ (bottom). (Please view in color).} \label{results:late} \end{figure*} \section{Conclusions} Dynamic process data, represented by data trajectories, poses a challenge to commonly used SDR methods. This is due to the strong temporal correlations within data trajectories, which lead to a poor quality of the recovered manifold. In this work, we introduce the notion of {\em neighborhood entropy}, which quantifies the information exchange between data points in dynamic process data. Then, we present {\sf Entropy-Isomap}, a new algorithm that uses {\em neighborhood entropy} to learn more reliable geodesics, and is able to discover latent variables governing dynamic processes and learn the true manifold~geometry. We showcased our method on data capturing the morphology evolution of materials. The method ordered the trajectory data according to the two process variables.
Moreover, it exposed the need for denser sampling in one of the explored variables. This observation can be used to design the next round of simulations to generate more data for under-sampled process configurations. This demonstrates that the method can be used to guide the data exploration process, potentially reducing the number of required numerical experiments. \bibliographystyle{IEEEtran}
\section{Introduction} Low Mass X-ray Binaries (LMXBs) are systems where a companion star overflows its Roche lobe, so that material spirals down towards a compact object which can be either a black hole or a neutron star. The observed X-ray emission is powered by the enormous gravitational potential energy released by this material as it falls inwards, lighting up the regions of intense space--time curvature and giving observational tests of strong gravity. This inflow also powers outflows. Accretion disc winds are seen via blue--shifted absorption lines from highly ionised material in high inclination LMXBs (see e.g. the reviews by \citealt{Ponti2012, DiazTrigo2016}), but the driving mechanism of these winds is not well understood. The potential candidates are acceleration of gas by the Lorentz force from magnetic fields threading discs (magnetic driving: \citealt{Blandford1982, Fukumura2014}), radiation pressure on the electrons (continuum driving: \citealt{Proga2002, Hashizume2015}) and thermal expansion of the hot disc atmosphere heated by the central X-ray source, which makes a wind at radii which are large enough for the sound speed to exceed the local escape velocity (thermal driving: \citealt{Begelman1983, Woods1996}, hereafter W96). Recent work has focussed on magnetic driving, primarily because of a single observation of dramatic wind absorption seen from the black hole GRO J1655-40 at low luminosity, far below the Eddington limit where continuum driving becomes important, and with a derived launch radius which is far too small for thermal driving \citep{Miller2006, Luketic2010, Higginbottom2015}. Magnetic wind models can fit this spectrum \citep{Fukumura2017}, but then it is difficult to understand why such high column and (relatively) low ionisation winds are not seen in other observations of this object or of any other high inclination systems with similar luminosities and spectra.
Instead, the unique properties of this wind could potentially be explained if the outflow has become optically thick along the line of sight. This would suppress the observed flux from an intrinsically super--Eddington source \citep{Shidatsu2016, Neilsen2016}. Alternatively, this singular wind may be a transient phenomenon, not representative of the somewhat lower column/higher ionisation winds which are normally seen. What then powers the more typical winds? Thermal driving qualitatively fits the observed properties, as these winds are preferentially seen in systems with larger discs \citep{DiazTrigo2016}. X-rays from near the compact objects (the inner disc emission and any corona and/or boundary layer) irradiate the outer disc, and the balance between Compton heating and cooling heats the surface to the Compton temperature, defined from the luminosity weighted mean photon energy as $kT_{\mathrm{IC}} = \frac{1}{4}\int E L(E)dE/\int L(E) dE$ (\citealt{Begelman1983, Done2018}, hereafter D18). This temperature is constant with radius, as it depends only on the spectrum of the radiation from the central region, but gravity decreases with distance from the source. For a large enough disc, the isothermal sound speed from this Compton temperature is bigger than the escape velocity, so the heated material makes a transition from forming a bound atmosphere to an outflowing wind. This defines the Compton radius ($R_{\mathrm{IC}}=GMm_p/kT_{\mathrm{IC}}= (6.4\times 10^4/T_{\mathrm{IC,8}}) R_g$, where $R_g=GM/c^2$ and $T_{\mathrm{IC,8}}=T_{\mathrm{IC}}/10^8~K$) as the typical launch radius for a thermal wind (\citealt{Begelman1983}, D18). These thermal wind models give an analytic solution for the mass loss rate from the disc (\citealt{Begelman1983}, W96).
However, to calculate observables such as the column density and ionisation structure requires an understanding of the velocity as a function of two-dimensional position (or equivalently, velocity as a function of length along a streamline, together with the shape of the streamlines). D18 assume a very simple 2-dimensional density and velocity structure along radial streamlines, and show that this can match the column seen in the full hydrodynamic simulation of W96. This composite model was applied to multiple spectra of H1743--322, matching the wind seen in its disc dominated state, and predicting the disappearance of these wind absorption lines in a bright low/hard state, as observed (Shidatsu \& Done 2017, in prep). This wind disappearance does not occur from simply the change in photo-ionisation state arising from the changing illumination \citep{Miller2012}. The key difference is that thermal winds respond to changing illumination by changing their launch radius, density and velocity, as well as responding to the changing photo-ionising spectrum (Shidatsu \& Done 2017, in prep). Here we explore the thermal wind predictions in more detail, using the 3--dimensional Monte-Carlo simulation code MONACO \citep{Odaka2011} to calculate the radiation transport through the material, so as to predict the resulting emission and absorption line profiles. We calculate these for the very simple constant velocity structure assumed in D18, and then extend this to consider an accelerating wind along biconical streamlines, as a more appropriate disc wind geometry \citep{Waters2012}. We show results from both these geometries for the specific case of $L/L_{\mathrm{Edd}}=0.3, T_{\mathrm{IC,8}}=0.13, R_{out}=5R_{\mathrm{IC}}$ to compare with W96. We apply the biconical wind model to the bright neutron star binary system GX13+1.
This system is unique amongst all the black hole and neutron star binaries in showing persistently strong blueshifted absorption features in its spectrum \citep{Ueda2004,D'Ai2014}. The X-ray continuum is rather stable, with $ T_{\mathrm{IC,8}}\sim 0.13$, making it a good match to the simulations, but it has a slightly higher luminosity at $L/L_{\mathrm{Edd}}=0.5$. This means that radiation pressure should become important, decreasing the radius from which the wind can be launched and increasing its column density. We compare results from a hybrid thermal/radiative wind (calculated using the approximate radiation pressure correction from D18) to the detailed absorption line profiles seen in the third order Chandra grating data from this source. This is the first quantitative, physical model for the wind, and the first exploration of the highest spectral resolution data from this source. While magnetic driving models are always possible, our results match the majority of the observed features, showing that the wind properties are broadly consistent with hybrid thermal-radiative driving. \section{Radiative Transfer Code} We use the Monte-Carlo simulation code MONACO \citep{Odaka2011} to calculate radiative transfer through the wind. MONACO uses its original physics implementation of photon interactions \citep{Watanabe2006, Odaka2011}, while it employs the Geant4 toolkit library \citep{Agostinelli2003} for photon tracking in an arbitrary 3--dimensional geometry. We consider an azimuthally symmetric density and velocity field, then use the XSTAR photo-ionization code \citep{Kallman2001} to calculate the equilibrium population of ions from each element, assuming one-dimensional radiation transfer from the central source. We grid in radial distance, $r$, and polar angle, $\theta$, and assume the illuminating flux is the transmitted part of the central spectrum along the line of sight to the element.
This gives an ionisation parameter \begin{equation} \xi (r,\theta) = \frac{L \exp(-\tau_{abs})}{n (r,\theta) r^2} \end{equation} where $\tau_{abs}$ is the optical depth to absorption (not including electron scattering), and $n(r,\theta)$ is the density as a function of position. We then use MONACO to track photons through this ionisation structure, including their interaction with this material. Photons interacting with ions can be absorbed in photo-ionisation or photo-excitation, and photons generated via recombination and atomic de-excitation are tracked. Doppler shifts of the absorption cross-sections from the velocity structure of the material are included, as is the Compton energy change on interaction with free electrons. Ideally, this calculated radiation field should then be used as input to XSTAR, the ion populations recalculated, and the process iterated until convergence. However, for our simulations here the wind is mostly optically thin, so we do not include this self-consistent iteration. This method of radiation transfer calculation is based on that of \citet{Hagino2015}, but the geometry and the velocity/density structure we use reflect the thermally driven winds in LMXBs, whereas \citet{Hagino2015} focussed on the UV-line-driven winds in Active Galactic Nuclei (AGN). Also, as thermal winds are typically highly ionised, we consider only H-like and He-like ions of Fe and calculate the spectrum only over a restricted energy band of 6.5--7.2 keV. Table~\ref{table:lines} details the transitions used. 
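To make the ionisation parameter above concrete, a minimal numerical sketch follows; the luminosity, density and radius are illustrative placeholders, not values taken from our simulation grid.

```python
import math

# Minimal sketch of the ionisation parameter defined above,
# xi(r, theta) = L * exp(-tau_abs) / (n(r, theta) * r^2).
# L, n and r below are illustrative placeholders.

def ionisation_parameter(L, tau_abs, n, r):
    """L in erg/s, n in cm^-3, r in cm; returns xi in erg cm s^-1."""
    return L * math.exp(-tau_abs) / (n * r**2)

L = 3.8e38   # erg/s, roughly 0.3 L_Edd for a 10 Msun black hole
n = 1e12     # cm^-3
r = 1e11     # cm

print(ionisation_parameter(L, 0.0, n, r))  # unattenuated: 3.8e4
print(ionisation_parameter(L, 1.0, n, r))  # reduced by e^-1 behind tau_abs = 1
```

Attenuation by $\tau_{abs}$ simply scales $\xi$ down by $e^{-\tau_{abs}}$, which is why shielded wind elements sit at lower ionisation states.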
\begin{table} \begin{tabular}{|c|c|c|}\hline Line ID & Energy [keV] & oscillator strength \\ \hline Fe {\scriptsize XXV} He$\alpha ~\mathrm{y}$ & 6.668 & $6.57 \times 10^{-2}$ \\ Fe {\scriptsize XXV} He$\alpha~ \mathrm{w}$ & 6.700 & $7.26 \times 10^{-1} $\\ Fe {\scriptsize XXVI} Ly$\alpha_2$ &6.952& $1.36\times 10^{-1}$ \\ Fe {\scriptsize XXVI} Ly$\alpha_1$ &6.973& $2.73\times 10^{-1}$ \\ \hline \end{tabular} \caption{Detailed parameters for each line included in these MONACO simulations. Note that we list only lines with oscillator strengths larger than $10^{-3}$. } \label{table:lines} \end{table} \section{Radial streamlines: D18} \subsection{Geometry and Parameters} We first consider the radial streamline wind model of D18. This calculates the analytic mass loss rate per unit area, $\dot{m}(R)$, where $R$ denotes distance along the disc plane. Integrating over the whole disc gives the total mass loss rate in the wind, $\dot{M}$. This is assumed to flow along radial (centred on the origin) streamlines from a launch radius, which is $0.2R_{\mathrm{IC}}$ for high $L/L_{\mathrm{Edd}}$, with constant velocity set at the mass loss weighted average escape velocity. The mass loss rate along each radial streamline is weighted with angle such that $\dot{M}(\theta)\propto (1-\cos\theta)$, and then mass conservation gives $n(r,\theta) \propto ( 1-\cos\theta)/r^2$. D18 show that these assumptions lead to a total column density through the structure which matches that in the hydrodynamic simulations of W96 to within a factor of 2 (see also Section 4). 
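A short sketch of this density law and its column integral may be helpful; the normalisation and radii below are placeholders chosen only to illustrate the $(1-\cos\theta)$ equatorial weighting, not values from D18.

```python
import math

# Sketch of the radial-streamline density law n(r, theta) = A (1 - cos theta)/r^2
# and the resulting column density N_H(theta) along a radial sightline.
# A, R_LAUNCH and R_OUT are illustrative placeholders.

A = 1e33          # cm^-1, arbitrary normalisation
R_LAUNCH = 2e11   # cm  (0.2 R_IC, if we take R_IC = 1e12 cm for illustration)
R_OUT = 5e12      # cm  (5 R_IC under the same assumption)

def density(r, theta):
    return A * (1.0 - math.cos(theta)) / r**2

def column(theta, n_steps=20000):
    """Midpoint-rule integral of n dr from R_LAUNCH to R_OUT."""
    dr = (R_OUT - R_LAUNCH) / n_steps
    return sum(density(R_LAUNCH + (i + 0.5) * dr, theta) * dr
               for i in range(n_steps))

def column_analytic(theta):
    # integral of A (1 - cos theta)/r^2 dr = A (1 - cos theta)(1/R_LAUNCH - 1/R_OUT)
    return A * (1.0 - math.cos(theta)) * (1.0 / R_LAUNCH - 1.0 / R_OUT)

# Equatorial sightlines see far more column than polar ones, by construction:
print(column(math.radians(80)) / column(math.radians(10)))
```

The $(1-\cos\theta)$ factor carries through unchanged from density to column, which is why the absorption lines in Section 3.2 strengthen so steeply with inclination.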
We put this structure into MONACO for $L=0.3 L_{\mathrm{Edd}}$ with $T_{\mathrm{IC}}=1.3\times 10^7$~K and $R_{\mathrm{out}}=5R_{\mathrm{IC}}$ (mass loss rate $\dot{M}_w = 2.0 \times 10^{19}~\mathrm{g~s^{-1}}$ for a $10M_\odot$ black hole, giving a ratio of wind mass loss rate to mass accretion rate of $\dot{M}_w/\dot{M}_a = 3.9$, where $\dot{M}_a = L/(0.1c^2)$; launch radius $0.2R_{\mathrm{IC}}\approx 10^5R_g$; and mass loss weighted average velocity $v_{\mathrm{out}}=420~\mathrm{km~s^{-1}}$). We include turbulence, assuming $v_{\mathrm{turb}}=v_{\mathrm{out}}$, and calculate the rotation velocity along each streamline assuming angular momentum conservation (see Appendix A). We make a grid which follows the symmetry of the assumed structure, i.e. centred on the origin, with 20 linearly spaced spherical shells from $0.2-5R_{\mathrm{IC}}$, and 20 angles, linearly spaced in $\theta$ from $7-83^\circ$ (see below). This density structure is shown in the left panel of Fig.~\ref{fig: monaco input s}, while the right panel shows the mean Fe ion state obtained from the XSTAR calculation. This is constant along each streamline because the ionisation parameter is $\xi=L/(n r^2)$, and the constant velocity radial streamlines mean that density decreases as $1/r^2$. Fe is almost completely ionised over the whole grid, with a small fraction of hydrogen-like iron remaining only for high inclination streamlines. \begin{figure} \includegraphics[width=0.9\hsize]{./fig_allpdf/monaco_input_sp-crop.pdf} \caption{Distribution of density (left) and mean Fe ionisation state (right) for the radial streamline model.} \label{fig: monaco input s} \end{figure} \subsection{MONACO output} Fig.~\ref{fig: spectra s} shows the resulting spectra at three different inclination angles. 
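The statement that the ion state is constant along each radial streamline follows because the $r$ dependence cancels exactly; a two-line numerical confirmation, with placeholder normalisations:

```python
import math

# With n(r, theta) = A (1 - cos theta)/r^2, the ionisation parameter
# xi = L/(n r^2) = L/(A (1 - cos theta)) is independent of r, so each
# radial streamline sits at a single xi. L and A are placeholders.

L = 3.8e38
A = 1e33

def xi(r, theta):
    n = A * (1.0 - math.cos(theta)) / r**2
    return L / (n * r**2)

theta = math.radians(60)
print(xi(1e11, theta))
print(xi(4e12, theta))  # same value: xi depends only on theta
```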
These show that the emission lines are always similarly weak, and that the electron scattered continuum flux makes only a $\sim 0.5$\% contribution to the total flux, but that the absorption lines strongly increase at higher inclination angles. We calculate the equivalent width (EW) of each emission and absorption line by fitting the continuum outside the emission and absorption regions with an arbitrary function ($F(E) = aE^{bE+c}$, \citealt{Odaka2016}). The EW of each emission and absorption line is then measured by numerical integration of the difference between the model and the simulation data. The left panel of Fig.~\ref{fig: ew s} shows the EW of the He-like (red) and H-like Ly$\alpha_2$ (green) and Ly$\alpha_1$ (blue) absorption lines. The corresponding emission lines always have EW lower than $0.1$~eV so are not seen on this plot. The strong increase of the absorption line EW with inclination clearly shows that the wind is equatorial (by construction, from the $1-\cos \theta$ density dependence and constant velocity assumptions). At inclinations above $70^\circ$, the Doppler wings of the Ly$\alpha_1$ and Ly$\alpha_2$ absorption lines merge together due to the turbulent velocities, so Fig.~\ref{fig: ew s} shows only a single EW for this blend. Fig.~\ref{fig: ew s} (right) shows the outflow velocity, as measured from the energy of the deepest absorption lines (with error set by the resolution of the simulation to $\pm 0.5$~eV). These velocities are constant to within 25\% as a function of inclination, again by construction due to the assumption of constant radial velocity along radial streamlines. \begin{figure*} \includegraphics[width=\hsize]{./fig_allpdf/normalized_flux_spherical-crop.pdf} \caption{ Spectra computed for the radial streamline model with 1 eV resolution for different lines of sight. Each panel shows the spectrum in a different inclination angle bin (the angular bin sizes are indicated at the top of each panel). 
The total spectrum is shown in black (top), the spectrum of direct photons in red, and the scattered/reprocessed spectrum in blue (bottom). Note that the vertical axis is plotted linearly in the top panels but logarithmically in the bottom panels. Lines are Fe {\scriptsize XXV} ~(6.668 keV for $\mathrm{He\alpha} ~y$ and 6.700 keV for $\mathrm{He\alpha} ~w $ ) and Fe {\scriptsize XXVI}~ (6.952 keV for $\mathrm{Ly\alpha_2} $ and 6.973 keV for $\mathrm{Ly\alpha_1})$. The equatorial density structure of the wind means that the absorption is much stronger at high inclination angles. The emission is more isotropic, so it can clearly be seen at low inclination angles, but is absorbed by the wind at high inclinations. } \label{fig: spectra s} \end{figure*} \begin{figure} \includegraphics[width=0.9\hsize]{./fig_allpdf/ew_vel_sp.pdf} \caption{Left panel: EW of the absorption lines as a function of inclination angle, Fe {\scriptsize XXV} ($\mathrm{He \alpha ~w}$, red) and Fe {\scriptsize XXVI} ($\mathrm{Ly \alpha_2}$, green, and $\mathrm{Ly\alpha_1}$, blue). The EW of all absorption lines increases strongly at higher inclination, showing the assumed equatorial disc wind geometry. The Doppler wings (with width set by turbulent velocity) of the two H-like absorption lines start to merge for inclinations above $70^\circ$, so above this we show the total EW of the two lines. Right panel: the blueshifted absorption line velocity for each ion species. This clearly shows the assumed constant velocity structure of the radial streamlines.} \label{fig: ew s} \end{figure} \section{Diverging wind} In Section 3, we considered a wind model with constant velocity along radial streamlines. However, the expected thermal wind geometry is instead much more like an accelerating, diverging biconical wind \citep{Waters2012}. 
Full streamline structures which give the density and velocity of the wind at all points can only be found by hydrodynamic calculations (but see \citealt{Clarke2016} for some analytic approximations). Since modern calculations only exist for the singular case of GRO J1655-40, we follow D18 and use the W96 simulation results. W96 do not give full density/velocity structures, but do give the total column density through the wind at three different luminosities. We use these to match to our assumed streamline and velocity structure, which is the standard biconical diverging disc wind used in a variety of systems including cataclysmic variables \citep{Knigge1995,Long2002} and Active Galaxies \citep{Sim2010,Hagino2015}. \subsection{Geometry and Parameters} The geometry can be defined by 3 parameters (Fig.~\ref{fig:diverging}). \begin{enumerate} \item $R_{\mathrm{in}}=0.1R_{\mathrm{IC}}$, the distance from the source to the inner edge of the wind \item $R_{\mathrm{out}}$, the distance from the source to the outer edge of the wind \item $\alpha_{\mathrm{min}}$, the angle from the z axis to the inner edge of the wind \end{enumerate} The disc wind is fan-shaped, with a focal point offset down from the centre by a distance $d= 0.1R_{\mathrm{IC}}/\tan\alpha_{\mathrm{min}}$, so that the wind fills the angles from $\alpha_{\mathrm{min}}$ down to the disc surface at $\alpha_{\mathrm{max}}$. We use $R$ to denote distance along the disc surface, and $r,\theta$ to denote radial distance and polar angle from the origin, as before. $\alpha_{\mathrm{min}}$ (or equivalently $d$) is a free parameter, which sets the wind geometry. Streamlines are assumed to be along lines of constant angle $\alpha$ (where $\alpha_{\mathrm{min}}<\alpha<\alpha_{\mathrm{max}}$) from the focal point. Distance along a streamline which starts on the disc at radius $R$ is $l(R)$ (see Appendix~\ref{sec:calculation model A}). 
Velocity along the streamline is assumed to be of the form $v (r,\theta)=f_v c_{ch}(r)\sqrt{\frac{l(r,\theta)}{R(r)}}$, i.e. this wind accelerates with distance along the streamline, with a terminal velocity which is related via a free parameter $f_v$ to the characteristic sound speed $c_{ch}$, given by the balance between heating and cooling in the time it takes the wind to reach a height $H\sim R$ (D18). The density structure is solved from the mass conservation continuity equation along streamlines (see Appendix B). We calculate the wind properties out to a distance which is twice that from the focal point of the wind to $R_{\mathrm{out}}$. We set the free parameter values, $f_v$ and $\alpha_{\mathrm{min}}$, and calculate the total column along each line of sight, $N_H(\theta)$, to the central source for parameters matching the three W96 simulations. These are $L/L_{\mathrm{Edd}} =0.3, 0.08$ with $R_{\mathrm{out}}=5R_{\mathrm{IC}}$ and $L/L_{\mathrm{Edd}}= 0.01$ with $R_{\mathrm{out}}=12R_{\mathrm{IC}}$. We adjust $f_v$ and $\alpha_{\mathrm{min}}$ to minimise the difference between our model and W96. We find that $\alpha_{\mathrm{min}}=7^\circ$ and $f_v=0.25$ matches the results from W96 to within a factor of 2. Fig.~\ref{fig:column density} shows results with these parameters (filled circles), compared to the radial wind model of Section 3 (open circles) as well as the W96 results (solid lines). This more physically realistic geometry and velocity structure gives a similarly good match to the simulations as the D18 radial wind. 
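The assumed acceleration law is straightforward to evaluate; a sketch using $f_v=0.25$ from the fit above and a placeholder characteristic sound speed (in D18, $c_{ch}$ is set by the heating/cooling balance, which we do not reproduce here):

```python
import math

# v(l) = f_v * c_ch * sqrt(l / R) along a streamline with footpoint radius R.
# F_V is the best-fit value from the text; C_CH is an illustrative placeholder.

F_V = 0.25
C_CH = 2.0e8   # cm/s, placeholder characteristic sound speed

def velocity(l, R):
    """Outflow speed at distance l along a streamline launched at radius R."""
    return F_V * C_CH * math.sqrt(l / R)

R = 1e12  # cm, placeholder footpoint radius
for l_over_R in (0.01, 1.0, 4.0):
    print(f"l/R = {l_over_R:5.2f}: v = {velocity(l_over_R * R, R) / 1e5:7.1f} km/s")
```

The square-root law means the material close to the disc, where the density (and hence line formation) is highest, is still moving slowly; this is why the line velocities in Section 4.2 differ from the constant velocity model.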
\begin{figure} \includegraphics[width=0.9\hsize]{fig_allpdf/geometry_modelB-crop.pdf} \caption{The geometry of the diverging biconical wind model.} \label{fig:diverging} \end{figure} \begin{figure} \includegraphics[width=0.9\hsize]{fig_allpdf/column_newest-crop.pdf} \caption{The solid lines show column density as a function of the cosine of the inclination angle through the wind resulting from the hydrodynamic simulations of W96 for $L/L_{\mathrm{Edd}}=$ 0.01 (green), 0.08 (red), 0.3 (black). The filled circles show the column resulting from the diverging biconical wind (Section 4), while the open circles show the radial streamline model of D18 (Section 3).} \label{fig:column density} \end{figure} The resulting density structure from this different geometry and velocity is shown in the left panel of Fig.~\ref{fig:xout d} for $L/L_{\mathrm{Edd}}=0.3$. Comparing this with the radial wind shows that the density is higher closer to the disc, and lower further away, due to the material accelerating away from the disc rather than being at constant velocity. We run XSTAR as before, and the right panel of Fig.~\ref{fig:xout d} shows that this leads to a lower mean ionisation state of Fe than before, which is no longer constant along a radial sightline due to the different wind geometry. \begin{figure} \includegraphics[width=0.9\hsize]{./fig_allpdf/monaco_input_div-crop.pdf} \caption{Distribution of density (left) and Fe ionisation state (right) for the diverging wind geometry. 
The accelerating flow gives higher density material close to the disc compared to the constant velocity outflow model in Fig.~\ref{fig: monaco input s}, giving a lower mean ionisation state.} \label{fig:xout d} \end{figure} \subsection{MONACO output} \begin{figure*} \includegraphics[width=\hsize]{fig_allpdf/normalized_flux_div-crop.pdf} \caption{As in Fig.~\ref{fig: spectra s}, but for the diverging biconical wind geometry.} \label{fig: spectra d} \end{figure*} \begin{figure} \includegraphics[width=0.9\hsize]{./fig_allpdf/ew_vel_div.pdf} \caption{As in Fig.~\ref{fig: ew s}, but for the diverging wind model. The lower ionisation state means that there is also a contribution from the intercombination line of Fe {\scriptsize XXV} $\mathrm{He \alpha ~y}$ (black) at the highest inclinations. } \label{fig: ew d} \end{figure} We calculate the emission and absorption lines resulting from the different wind structure (Fig.~\ref{fig: spectra d}). The diverging biconical wind has higher density material closer to the source compared to the radial wind geometry, so it subtends a larger solid angle for scattering. This means that there is more emission line contribution, as well as a higher fraction of electron scattered continuum (around 2\%, see the lower panels of Fig.~\ref{fig: spectra d}). The left panel of Fig.~\ref{fig: ew d} shows the emission line EW (dotted lines) for each ion species (red: Fe {\scriptsize XXV} w, green: Fe {\scriptsize XXVI} Ly$\alpha_2$, blue: Ly$\alpha_1$). These can now be of order 1~eV for face-on inclinations, decreasing at higher inclination as they are significantly suppressed by line absorption. The corresponding absorption line EWs are shown as the solid lines (compare to Fig.~\ref{fig: ew s}). The lower mean ionisation state leads to more He-like Fe, so there is more of this ion seen in absorption than in the radial streamline model. 
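The equivalent widths quoted in Sections 3 and 4 are obtained by integrating the fractional difference between the fitted continuum and the spectrum. A minimal sketch of that measurement, using a synthetic Gaussian absorption line on a known continuum in place of the simulated spectra (all line parameters below are invented for illustration):

```python
import math

# EW = integral (1 - F(E)/F_cont(E)) dE across the line. Here the continuum
# is known exactly; in the paper it is instead fitted outside the line
# regions with F(E) = a E^(bE + c). E0, DEPTH and SIGMA are synthetic.

E0, DEPTH, SIGMA = 6.700, 0.5, 0.004   # keV

def continuum(E):
    return 10.0 * E**-2.0              # illustrative power-law continuum

def flux(E):
    line = DEPTH * math.exp(-0.5 * ((E - E0) / SIGMA)**2)
    return continuum(E) * (1.0 - line)

def equivalent_width(e_lo=6.6, e_hi=6.8, n=20000):
    dE = (e_hi - e_lo) / n
    return sum((1.0 - flux(e_lo + (i + 0.5) * dE) / continuum(e_lo + (i + 0.5) * dE)) * dE
               for i in range(n))

# For a Gaussian depth profile, EW = DEPTH * SIGMA * sqrt(2 pi) ~ 5 eV here.
print(1e3 * equivalent_width(), "eV")
```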
These absorption lines increase as a function of inclination angle as before, but now the Ly$\alpha_1$ and Ly$\alpha_2$ lines do not merge together at the highest inclination angles due to the different velocity structure (see right panel of Fig.~\ref{fig: ew d}). The lines are formed preferentially in the higher density material close to the disc. The assumed acceleration law means that the typical velocities here are lower than in the constant velocity model, as the material has only just begun to accelerate. Thus the turbulence is also lower, so the Doppler width of the absorption lines is smaller. This also means that the absorption line saturates to a constant EW at lower column density, so the EW of the absorption lines does not increase as strongly as before at the highest inclination angles. \section{Comparison with GX13+1} We now use the more physically motivated diverging biconical wind geometry to compare with observational data. An ideal source would be one which is not too different from the parameters simulated in the previous sections, as here we know the total column from W96 and know that our assumed velocity/density structure matches it. Of the sources listed in \citet{DiazTrigo2016}, the neutron star LMXB GX13+1 is the source which has the most similar $L/L_{\mathrm{Edd}}$ and $T_{\mathrm{IC}}$ to those assumed here, and it also has the advantage that it is a persistent source, with relatively constant luminosity and spectral shape, and it shows similarly strong absorption lines in multiple datasets. \subsection{Observational data} GX13+1 was observed by the Chandra High Energy Transmission Grating (HETG) 4 times in two weeks in 2010 (Table~\ref{table:obsids}). The first order data are shown in \citet{D'Ai2014} and reveal multiple absorption features from highly ionised elements (see also \citealt{Ueda2004} for similar features in an earlier observation). 
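The value of the higher grating orders used below can be seen from the (approximately) fixed wavelength resolution of the gratings; a sketch assuming the nominal HEG first-order FWHM of $\Delta\lambda \approx 0.012$~\AA, which improves linearly with order $m$ (the 0.012~\AA\ figure is the quoted instrument value, stated here as an assumption):

```python
# The HEG resolution is roughly constant in wavelength, Delta_lambda ~ 0.012 A
# FWHM in 1st order, improving as Delta_lambda/m in order m. Converting to
# energy at the Fe Ly-alpha energies shows why only 3rd order separates
# Ly-alpha_1 (6.973 keV) from Ly-alpha_2 (6.952 keV), split by 21 eV.

HC = 12.398          # keV * Angstrom
DLAMBDA = 0.012      # Angstrom, HEG 1st-order FWHM (assumed nominal value)

def resolution_eV(E_keV, order):
    lam = HC / E_keV                       # wavelength in Angstrom
    return 1e3 * E_keV * (DLAMBDA / order) / lam

separation_eV = 1e3 * (6.973 - 6.952)      # = 21 eV
print(resolution_eV(6.96, 1))              # ~47 eV: the doublet is blended
print(resolution_eV(6.96, 3))              # ~16 eV: the doublet is resolved
```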
Higher order grating spectra give higher resolution, as demonstrated for the black hole binaries by \citet{Miller2015}. Here we show for the first time the third order Chandra data for GX13+1. We extract first- and third-order HEG spectra from these observations, using CIAO version 4.9 and the corresponding calibration files. We reprocess the event files with \texttt{chandra\_repro}, and use \texttt{mktgresp} to make the redistribution and ancillary response files. We run \texttt{tgsplit} to get the HEG $\pm 3$ spectra, and run \texttt{combine\_grating\_spectra} to combine HEG plus and minus orders for each observation to derive a single 1st order spectrum (black) and a single 3rd order spectrum (red), as shown in Fig.~\ref{fig:heg spectra}. The 1st order spectra can resolve the components of the He-like Fe triplet, with a clear dip to the low energy side at the resonance line energy of 6.7~keV, but the H-like Ly$\alpha_1$ and $\alpha_2$ are blended together. The higher resolution of the 3rd order spectra is able to clearly separate the He-like intercombination and resonance lines, and even the H-like Ly$\alpha_1$ and $\alpha_2$ \citep{Miller2015}. \begin{table} \begin{tabular}{|c|c|c|c|} \hline OBSID & MODE & Date& Exposure (ks)\\ \hline 11815& TE-F & 24/07/2010 & 28 \\ 11816 & TE-F & 30/07/2010 &28 \\ 11814 & TE-F & 01/08/2010 & 28 \\ 11817 & TE-F & 03/08/2010 & 28 \\ \hline \end{tabular} \caption{List of the Chandra HETG observations} \label{table:obsids} \end{table} \begin{figure} \includegraphics[width=0.9\hsize]{fig_allpdf/gx13_heg3-crop.pdf} \caption{HEG spectra of GX 13+1 from 1st order (black) and 3rd order (red). } \label{fig:heg spectra} \end{figure} \subsection{Model of GX 13+1} We fit the contemporaneous RXTE spectrum (ObsID 95338-01-01-05) with a model consisting of a disc, Comptonised boundary layer and its reflection. 
The resulting inverse Compton temperature of the continuum (disc plus Comptonisation) is $T_{\mathrm{IC}}\sim 1.2 \times 10^7~\mathrm{K}$, almost identical to the simulation (see also \citealt{D'Ai2014}). The luminosity is $L=0.5L_{\mathrm{Edd}}$ \citep{DiazTrigo2014, D'Ai2014}, similar to the maximum simulation value of $L=0.3L_{\mathrm{Edd}}$ in W96. The simulation also requires $R_{\mathrm{out}}$, which can be calculated from the orbital period and the masses of the binary stars. GX 13+1 has a 24 day orbital period, and the neutron star and companion have masses of $1.4 M_\odot$ and $5 M_\odot$ respectively \citep{Bandyopadhyay1999, Corbet2010}. This gives a binary separation $a=4.6\times10^{12}~\mathrm{cm}$, for a Roche-lobe radius $R_R/a=0.27$. The disc size is then $R_{\mathrm{out}}= 10R_{\mathrm{IC}}$ assuming that $R_{\mathrm{out}}=0.8R_R$ \citep{Shahbaz1998}, double the value assumed in the simulations. D18 show that this increase in disc size makes the predicted column slightly larger, but the effect is fairly small (Fig.~3: D18). Fig.~\ref{fig:radiation correction} (blue line) shows the predicted column density through the wind as a function of inclination angle. This is very similar to the column predicted for the fiducial simulations (Fig.~\ref{fig:column density}). However, D18 show that radiation pressure should make a rapidly increasing contribution to the wind as $L/L_{\mathrm{Edd}}$ increases from $0.3-0.7$. The GX13+1 luminosity is midway between these two values, so radiation pressure should significantly lower the effective gravity, meaning that the wind can be launched from smaller radii. We follow D18 and estimate a radiation pressure correction to the launch radius of $\bar{R}_{\mathrm{IC}} = (1.0-0.5L_{\mathrm{Edd}}/0.71L_{\mathrm{Edd}})R_{\mathrm{IC}}=0.30R_{\mathrm{IC}}$, hence $R_{\mathrm{out}}=33 \bar{R}_{\mathrm{IC}}$, dramatically larger than assumed in the fiducial simulations. 
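The system-parameter arithmetic above can be reproduced as follows; the Eggleton approximation for the Roche lobe is our assumption for how $R_R/a=0.27$ was obtained:

```python
import math

# Kepler's third law for the binary separation of GX 13+1, the Eggleton
# Roche-lobe approximation, and the D18 radiation pressure correction to the
# launch radius. The use of the Eggleton formula is an assumption on our part.

G = 6.674e-8                  # cgs
M_SUN = 1.989e33              # g
P = 24.0 * 86400.0            # s, orbital period
M_NS, M_C = 1.4 * M_SUN, 5.0 * M_SUN

a = (G * (M_NS + M_C) * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)
print(f"a = {a:.2e} cm")                 # ~4.5e12 cm, close to the quoted 4.6e12

q = M_NS / M_C                           # mass ratio for the lobe around the NS
roche = 0.49 * q**(2/3) / (0.6 * q**(2/3) + math.log(1.0 + q**(1/3)))
print(f"R_R/a = {roche:.2f}")            # 0.28, consistent with the quoted 0.27

# Radiation pressure correction to the launch radius (D18):
s = 1.0 - 0.5 / 0.71
print(f"Rbar_IC/R_IC = {s:.2f}")         # 0.30
```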
This correction predicts a density which is $11$ times larger, and a column along any sightline which is $3.3$ times larger, assuming (as in D18) that the velocity structure is unchanged (red line, Fig.~\ref{fig:radiation correction}). This increase in $R_{\mathrm{out}}$ in units of $\bar{R}_{\mathrm{IC}}$ means that more wind is produced (as in D18), so the wind efficiency increases to 4.0 (from 2.3). The column density approaches $10^{24}$~cm$^{-2}$ at high inclinations, so electron scattering becomes important. This effect reduces the illuminating ionising flux from the central source by $e^{-\tau_T}$ along the line of sight to each wind element, and also increases the contribution of diffuse and scattered emission from the wind to the ionising continuum. We include the scattering, reducing the XSTAR illumination by $e^{-\tau_{T}}$ along each line of sight, but do not include the diffuse emission, as the timescale to integrate over the entire wind at each point is prohibitive. \begin{figure} \includegraphics[width=0.9\hsize]{fig_allpdf/colum_density_forGXmodel-crop.pdf} \caption{The column density as a function of the cosine of the inclination angle for the diverging biconical wind calculated for the system parameters of GX13+1. The blue line shows the predictions for a purely thermal wind, while the red includes a very simple treatment of radiation pressure. The source has $L/L_{\mathrm{Edd}}\sim 0.5$, so the thermal wind can be launched from closer in due to the lower effective gravity. This effect has a large impact on the predicted column, so the details of how this radiation pressure correction affects the velocity and density structure will be important in determining the line profiles.} \label{fig:radiation correction} \end{figure} We run MONACO on this wind structure to predict the detailed absorption line profiles for comparison to the 3rd order HEG data. 
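The factors of 11 and 3.3 quoted above are consistent with a simple self-similar scaling in the corrected launch radius; this reading of the D18 correction is our assumption, not a statement from D18:

```python
# If the wind structure is self-similar in the launch radius and the velocity
# field is unchanged, shrinking the launch radius by s = Rbar_IC/R_IC = 0.30
# raises the density at a given scaled position by s^-2, and the column
# (density times path length, which scales as s) by s^-1. This interpretation
# of the quoted factors of 11 and 3.3 is an assumption on our part.

s = 1.0 - 0.5 / 0.71          # = 0.30
density_factor = s**-2
column_factor = s**-1
print(round(density_factor, 1), round(column_factor, 1))  # 11.4 3.4
```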
Fig.~\ref{fig:gx model1} shows the result assuming an inclination angle of $80^\circ$ \citep{DiazTrigo2012}, which gives the best fit to the data. This gives a fairly good match to the overall absorption, except for the highest velocity material seen in the data. Lower inclination angles give higher blueshift but lower absorption line equivalent width, while higher inclinations give larger absorption line equivalent width but lower blueshift (see Fig.~\ref{fig:ew gx}). Thus it is not possible to completely reproduce the observed lines in GX13+1 with our simple radiation pressure corrected thermal wind model. This is not surprising, as radiation pressure will almost certainly change the velocity law by radiative acceleration as well as changing the launch radius. Full radiation hydrodynamic simulations are required to predict the resulting velocity and density structure. Nonetheless, our result demonstrates for the first time that hybrid thermal-radiative wind models can give a good overall match to the column and ionisation state of the wind in GX13+1, and that current data can already give constraints on the velocity and density structure of this material. \begin{figure} \includegraphics[width=0.9\hsize]{fig_allpdf/heg3_vs_05Ledd_m1_80deg_lowion-crop.pdf} \caption{The model (red) and HEG 3rd order spectrum (black). The best fit inclination angle is $i=80^\circ$. This gives roughly the correct column of Fe {\scriptsize XXV} and {\scriptsize XXVI} at low velocity, but fails to match the observed higher velocity blue wing to the absorption features. 
} \label{fig:gx model1} \end{figure} \begin{figure} \includegraphics[width=0.9\hsize]{fig_allpdf/ew_vel_gx.pdf} \caption{As in Fig.~\ref{fig: ew d}, but with the system parameters of GX 13+1 and the simplest radiation pressure correction to make a hybrid thermal/radiative wind.} \label{fig:ew gx} \end{figure} \section{Discussion and Summary} We construct a Monte-Carlo code to calculate detailed spectra from any given density and velocity distribution of highly ionised material. We use this to explore the absorption and emission lines of H- and He-like Fe for the mass loss rates predicted from thermal wind models. We first use the radial streamline, constant velocity model of D18, which is able to reproduce the column derived from the hydrodynamic calculations of W96, but then extend this to a more realistic disc wind geometry with gas accelerating along diverging streamlines, again reproducing the column from W96. The different assumed velocity and density structures for the thermal wind mass loss rates give different predictions for the overall ionisation state of the material, the resulting EW of emission and absorption lines, and their velocity shift. These show the potential of observations to test the detailed structure of the wind. We apply the biconical disc wind model to some of the best data on winds from an LMXB. The neutron star GX13+1 shows strong and persistent absorption features in Chandra first order HETG spectra \citep{Ueda2004, D'Ai2014}, but here we show for the first time the higher resolution third order data. We find that while the source is fairly well matched to the parameters of the brightest fiducial simulation in terms of $T_{\mathrm{IC}}$, the higher luminosity ($L/L_{\mathrm{Edd}}=0.5$ compared to $0.3$ for the simulation) makes a significant impact on the predicted wind properties, as it puts the source firmly into the regime where radiation pressure driving should become important. 
We use the simple radiation pressure correction suggested by D18 and calculate the line profiles from a hybrid thermal-radiative wind. The additional radiation pressure driving means that the wind can be launched from much closer to the central source, and has higher mass loss rate. This is the first detailed test of the absorption line profiles predicted by physical wind models on any source other than the singular wind seen in GRO J1655-40 \citep{Luketic2010}. Our simulations quantitatively match many of the observed features except for the highest velocity material. This is not surprising, given the simplistic assumptions about the effect of radiation pressure. In future work we will use the wind velocity and density structure determined from full radiation hydrodynamics simulations in order to properly test the thermal-radiative wind models in GX13+1. Nonetheless, our current simulations already show that the thermal-radiative winds can potentially explain all of the wind absorption features seen in GX13+1, so that there is very little room for any additional magnetically driven winds in this source. \section*{Acknowledgements} We thank K. Hagino for help in setting up the MONACO simulations. CD acknowledges STFC funding under grant ST/L00075X/1 and a JSPS long term fellowship L16581. \bibliographystyle{mnras}
\section{Acknowledgments} We would like to thank De Meng, Maryam Fazel and Mehran Mesbahi for their help. \subsection{Lemma~\ref{lemma Pythagorean} } \begin{proof} Since \(P_{ij}=0\) if \(j\notin \mathcal{N}(i)\), the optimality condition for \eqref{mirror Markov mixing dynamics} can be written as follows: for any \(u\in\mathcal{X}\), we have \begin{equation*} \sum_{j\in\mathcal{V}}P_{ij}\langle \nabla\phi(y_i^{(t)})-\nabla \phi(x_j^{(t)}), u-y_i^{(t)}\rangle\geq 0. \end{equation*} Using the three-point property \eqref{3-point property}, we have \begin{equation} \begin{aligned} &\sum_{j\in\mathcal{V}}P_{ij}B_\phi(u, x_j^{(t)})-\sum_{j\in\mathcal{V}}P_{ij}B_\phi(u, y_i^{(t)}) \\ \geq&\sum_{j\in\mathcal{V}}P_{ij}B_\phi(y_i^{(t)}, x_j^{(t)}). \end{aligned} \label{lemma Pythagoream: eqn2} \end{equation} Summing \eqref{lemma Pythagoream: eqn2} over all \(i\) completes the proof. \end{proof} \subsection{Theorem~\ref{theorem Bregman PDMM global convergence} } \begin{proof} Since \(P\) is irreducible, \(\bm{x}^\star\) satisfies \eqref{KKT: primal} if and only if there exists \(x^\star\in\mathcal{X}\) such that \(\bm{x}^\star=\mathbf{1}_m\otimes x^\star\). 
Substitute \eqref{Bregman PDMM primal update: optimality} into \eqref{definition of subgradient} we have: there exists \(g_i\in N_\mathcal{X}(x_i^{(t+1)})\) for all \(i\) such that \begin{equation} \begin{aligned} &\sum_{i\in\mathcal{V}} f_i(x_i^{(t+1)})-\sum_{i\in\mathcal{V}} f_i(x^\star)\\ \leq &\langle -\bm{\nu}^{(t)},((I_m-P)\otimes I_n)\bm{x}^{(t+1)} \rangle\\ & + \rho\sum_{i\in\mathcal{V}}\langle \nabla\phi(x_i^{(t+1)})-\nabla\phi(y_i^{(t)}), x^\star-x_i^{(t+1)}\rangle\\ & + \sum_{i\in\mathcal{V}}\delta_i\langle \nabla\varphi_i(x_i^{(t+1)})-\nabla\varphi_i(x_i^{(t)}), x^\star-x_i^{(t+1)}\rangle\\ & - \sum_{i\in\mathcal{V}}\langle g_i, x_i^{(t+1)}-x^\star\rangle\\ \overset{\eqref{definition of normal cone}}{\leq} &\langle -\bm{\nu}^{(t)},((I_m-P)\otimes I_n)\bm{x}^{(t+1)} \rangle\\ & + \rho\sum_{i\in\mathcal{V}}\langle \nabla\phi(x_i^{(t+1)})-\nabla\phi(y_i^{(t)}), x^\star-x_i^{(t+1)}\rangle\\ & + \sum_{i\in\mathcal{V}}\delta_i\langle \nabla\varphi_i(x_i^{(t+1)})-\nabla\varphi_i(x_i^{(t)}), x^\star-x_i^{(t+1)}\rangle\\ \overset{\eqref{3-point property}}{\leq} & \langle -\bm{\nu}^{(t)},((I_m-P)\otimes I_n)\bm{x}^{(t+1)} \rangle\\ &+ \rho \sum_{i\in\mathcal{V}} B_\phi(x^\star, y_i^{(t)}) + \sum_{i\in\mathcal{V}} \delta_i B_{\varphi_i}(x^\star, x_i^{(t)})\\ &-\rho \sum_{i\in\mathcal{V}}B_\phi(x^\star, x_i^{(t+1)}) - \sum_{i\in\mathcal{V}}\delta_i B_{\varphi_i}(x^\star, x_i^{(t+1)})\\ & -\rho \sum_{i\in\mathcal{V}}B_\phi(x_i^{(t+1)}, y_i^{(t)}) - \sum_{i\in\mathcal{V}}\delta_i B_{\varphi_i}(x_i^{(t+1)}, x_i^{(t)})\\ \overset{\eqref{lemma Pythagorean: eqn1} }{\leq} & \langle -\bm{\nu}^{(t)},((I_m-P)\otimes I_n)\bm{x}^{(t+1)} \rangle\\ & + \rho \sum_{i\in\mathcal{V}} B_\phi(x^\star, y_i^{(t)}) + \sum_{i\in\mathcal{V}} \delta_i B_{\varphi_i}(x^\star, x_i^{(t)})\\ & -\rho \sum_{i\in\mathcal{V}}B_\phi(x^\star, y_i^{(t+1)}) - \sum_{i\in\mathcal{V}}\delta_i B_{\varphi_i}(x^\star, x_i^{(t+1)})\\ & -\rho \sum_{i\in\mathcal{V}}B_\phi(x_i^{(t+1)}, y_i^{(t)}) - 
\sum_{i\in\mathcal{V}}\delta_i B_{\varphi_i}(x_i^{(t+1)}, x_i^{(t)})\\ & -\rho\sum_{i, j\in\mathcal{V}}P_{ij}B_\phi(y_i^{(t+1)}, x_j^{(t+1)})\\ \overset{\eqref{phi strong convexity} }{\leq}& \langle -\bm{\nu}^{(t)},((I_m-P)\otimes I_n)\bm{x}^{(t+1)} \rangle\\ & + \rho \sum_{i\in\mathcal{V}} B_\phi(x^\star, y_i^{(t)}) + \sum_{i\in\mathcal{V}} \delta_i B_{\varphi_i}(x^\star, x_i^{(t)})\\ & -\rho \sum_{i\in\mathcal{V}}B_\phi(x^\star, y_i^{(t+1)}) - \sum_{i\in\mathcal{V}}\delta_i B_{\varphi_i}(x^\star, x_i^{(t+1)})\\ & -\rho \sum_{i\in\mathcal{V}}B_\phi(x_i^{(t+1)}, y_i^{(t)}) - \sum_{i\in\mathcal{V}}\delta_i B_{\varphi_i}(x_i^{(t+1)}, x_i^{(t)})\\ & -\frac{\rho\mu}{2}\sum_{i, j\in\mathcal{V}}P_{ij}\norm{y_i^{(t+1)} - x_j^{(t+1)}}_p^2 \end{aligned} \label{theorem Bregman PDMM global convergence: eqn1} \end{equation} Similarly, substitute \eqref{KKT: dual} into \eqref{definition of subgradient} we have \begin{equation} \begin{aligned} &\sum_{i\in\mathcal{V}} f_i(x^\star)-\sum_{i\in\mathcal{V}} f_i(x_i^{(t+1)})\\ \leq & \langle \bm{\nu}^\star, ((I_m-P)\otimes I_n)\bm{x}^{(t+1)}\rangle \end{aligned} \label{theorem Bregman PDMM global convergence: eqn2} \end{equation} Notice if we let \(\phi(u)=\frac{1}{2}\norm{u}_2^2\) in \eqref{3-point property}, we can show the following using \eqref{Bregman PDMM: dual update}. 
\begin{equation} \begin{aligned} & \langle \bm{\nu}^\star-\bm{\nu}^{(t)}, ((I_m-P)\otimes I_n)\bm{x}^{(t+1)}\rangle\\ = &\frac{1}{2\tau}\norm{\bm{\nu}^\star-\bm{\nu}^{(t)}}_2^2-\frac{1}{2\tau}\norm{\bm{\nu}^\star-\bm{\nu}^{(t+1)}}_2^2\\ &+\frac{\tau}{2}\norm{((I_m-P)\otimes I_n)\bm{x}^{(t+1)}}_2^2 \end{aligned} \label{theorem Bregman PDMM global convergence: eqn3} \end{equation} Based on \eqref{theorem Bregman PDMM global convergence: eqn3}, summing \eqref{theorem Bregman PDMM global convergence: eqn1} and \eqref{theorem Bregman PDMM global convergence: eqn2} yields \begin{equation} \begin{aligned} &V(t)-V(t+1)\\ \geq &R(t+1)+\frac{\mu}{2}\sum_{i, j\in\mathcal{V}}P_{ij}\norm{y_i^{(t+1)} - x_j^{(t+1)}}_p^2\\ &-\frac{\tau/\rho+\gamma}{2}\norm{((I_m-P)\otimes I_n)\bm{x}^{(t+1)}}_2^2\\ \overset{\eqref{lemma residual & variance: eqn1} }{\geq} &R(t+1)+\frac{\mu}{2}\sum_{i, j\in\mathcal{V}}P_{ij}\norm{y_i^{(t+1)} - x_j^{(t+1)}}_p^2\\ &-\frac{\tau/\rho+\gamma}{2\sigma}\sum_{i, j\in\mathcal{V}}P_{ij}\norm{y_i^{(t+1)}-x_j^{(t+1)}}_p^2\\ \overset{\eqref{parameter Bregman PDMM} }{\geq} & R(t+1). \end{aligned} \label{theorem Bregman PDMM global convergence: eqn4} \end{equation} Notice that the properties of mirror maps ensure that \(x_i^{(t)}\) never lies on the boundary of \(\mathcal{D}\) for any \(i\in\mathcal{V}\) and \(t\); hence \(V(t)<\infty\) for all \(t\). Summing up \eqref{theorem Bregman PDMM global convergence: eqn4} from \(t=0\) to \(\infty\), we have \(\sum_{t=0}^\infty R(t+1)\leq V(0)\). Since \(R(t+1)\geq 0\), this implies \(R(t+1)\to 0\) as \(t\to \infty\), which completes the proof.
\end{proof} \subsection{Theorem~\ref{theorem Bregman PDMM 1/T convergence} } \begin{proof} Note that \begin{equation} \begin{aligned} & \langle -\bm{\nu}^{(t)}, ((I_m-P)\otimes I_n)\bm{x}^{(t+1)}\rangle\\ \overset{\eqref{Bregman PDMM: dual update} }{=} &\frac{1}{2\tau}\norm{\bm{\nu}^{(t)}}_2^2-\frac{1}{2\tau}\norm{\bm{\nu}^{(t+1)}}_2^2\\ & +\frac{\tau}{2}\norm{((I_m-P)\otimes I_n)\bm{x}^{(t+1)}}_2^2 \end{aligned} \label{theorem Bregman PDMM 1/T convergence: eqn1} \end{equation} Substituting \eqref{theorem Bregman PDMM 1/T convergence: eqn1} into \eqref{theorem Bregman PDMM global convergence: eqn1}, combined with the fact that \[\rho\sum_{i\in\mathcal{V}} B_\phi(x_i^{(t+1)}, y_i^{(t)})+\sum_{i\in\mathcal{V}}\delta_iB_{\varphi_i}(x_i^{(t+1)}, x_i^{(t)})\geq 0,\] we obtain \begin{equation} \begin{aligned} &\sum_{i\in\mathcal{V}} f_i(x_i^{(t+1)})-\sum_{i\in\mathcal{V}} f_i(x^\star)\\ \overset{\eqref{lemma residual & variance: eqn1}}{\leq} & \frac{1}{2\tau}\norm{\bm{\nu}^{(t)}}_2^2-\frac{1}{2\tau}\norm{\bm{\nu}^{(t+1)}}_2^2\\ &+\rho\sum_{i\in\mathcal{V}} B_\phi(x^\star, y_i^{(t)})-\rho\sum_{i\in\mathcal{V}} B_\phi(x^\star, y_i^{(t+1)})\\ &+\sum_{i\in\mathcal{V}} \delta_iB_{\varphi_i}(x^\star, x_i^{(t)}) -\sum_{i\in\mathcal{V}} \delta_i B_{\varphi_i}(x^\star, x_i^{(t+1)})\\ &-\frac{\mu\rho}{2}\sum_{i, j\in\mathcal{V}}P_{ij}\norm{y_i^{(t+1)} - x_j^{(t+1)}}_p^2\\ &+\frac{\tau}{2\sigma}\sum_{i, j\in\mathcal{V}}P_{ij}\norm{y_i^{(t+1)}-x_j^{(t+1)}}_p^2\\ \overset{\eqref{parameter Bregman PDMM} }{\leq} & \frac{1}{2\tau}\norm{\bm{\nu}^{(t)}}_2^2-\frac{1}{2\tau}\norm{\bm{\nu}^{(t+1)}}_2^2\\ &+\rho\sum_{i\in\mathcal{V}} B_\phi(x^\star, y_i^{(t)})-\rho\sum_{i\in\mathcal{V}} B_\phi(x^\star, y_i^{(t+1)})\\ &+\sum_{i\in\mathcal{V}} \delta_iB_{\varphi_i}(x^\star, x_i^{(t)}) -\sum_{i\in\mathcal{V}} \delta_i B_{\varphi_i}(x^\star, x_i^{(t+1)}) \end{aligned} \label{theorem Bregman PDMM 1/T convergence: eqn2} \end{equation} Sum up \eqref{theorem Bregman PDMM 1/T convergence: eqn2} for \(t=0,
\ldots, T-1\) and apply Jensen's inequality to obtain \eqref{Bregman PDMM function value 1/T}; similarly, sum up \eqref{theorem Bregman PDMM global convergence: eqn4} for \(t=0, \ldots, T-1\) and apply Jensen's inequality to obtain \eqref{Bregman PDMM residual 1/T}. \end{proof} \subsection{Corollary~\ref{corollary 1}} \begin{proof} The proof is a direct application of Theorem~\ref{theorem Bregman PDMM 1/T convergence} and the fact that \(\bm{\nu}^\star\) is in the range space of matrix \((I_m-P)\otimes I_n\) if \(\bm{\nu}^{(0)}=0\) (see Lemma 1 in \cite{meng2015proximal}). \end{proof} \section{Conclusions}\label{conclusion} To solve distributed optimization problems over a graph, we generalize PDMM to Bregman PDMM based on mirror averaging. The global convergence and iteration complexity of Bregman PDMM are established, along with its improvement over PDMM. We can further enhance its performance by designing the averaging matrix. Future directions include variants of the proposed algorithm for asynchronous and stochastic updates, extensions to time-varying graphs, and applications in multi-agent decision making. \section{Convergence}\label{convergence} In this section, we present the convergence analysis of Algorithm~\ref{Bregman PDMM}. All detailed proofs in this section can be found in the Appendix. We first define the Lagrangian of problem \eqref{consensus optimization problem} as \(L(\bm{x}, \bm{\nu})=\sum_{i\in\mathcal{V}} L_i(x_i, \bm{\nu})\) where \begin{equation} L_i(x_i, \bm{\nu})\coloneqq f_i(x_i) + \iota_{\mathcal{X}}(x_i)+\langle x_i, \nu_i-\sum_{j\in\mathcal{V}}P_{ij}\nu_j\rangle, \label{Lagrangian} \end{equation} and \(\bm{\nu}=[\nu_1^\top, \ldots, \nu_m^\top]^\top\) collects the dual variables. We group our assumptions in Assumption~\ref{basic assumption}. \begin{assumption} \label{basic assumption} \begin{enumerate}[(a)] \item For all \(i\in\mathcal{V}\), \(f_i:{\mbox{\bf R}}^n\to{\mbox{\bf R}}\cup \{+\infty\}\) is closed, proper and convex.
\item There exists a saddle point \((\bm{x}^\star, \bm{\nu}^\star)\) that satisfies the KKT conditions of the Lagrangian given in \eqref{Lagrangian}: for all \(i\in\mathcal{V}\), there exists \(g_i\in N_\mathcal{X}(x_i^\star)\) such that \begin{subequations} \begin{align} \sum_{j\in\mathcal{V}}P_{ij}x_j^\star & =x_i^\star \label{KKT: primal} \\ -\nu^\star_i+\sum_{j\in\mathcal{V}} P_{ij}\nu^\star_j -g_i& \in \partial f_i(x_i^\star) \label{KKT: dual} \end{align} \label{KKT} \end{subequations} \item Functions \(\varphi_1, \varphi_2, \ldots, \varphi_m:\mathcal{D}\to{\mbox{\bf R}}\) are strictly convex, where \(\mathcal{D}\) is an open convex set such that \(\mathcal{X}\) is included in its closure. Function \(\phi:\mathcal{D}\to{\mbox{\bf R}}\) is a mirror map and is \(\mu\)-strongly convex with respect to the \(l_p\)-norm \(\norm{\cdot}_p\) over \(\mathcal{X}\cap\mathcal{D}\), {\it i.e.}, for any \(u, v\in \mathcal{X}\cap\mathcal{D}\), \begin{equation} B_\phi(u, v)\geq \frac{\mu}{2}\norm{u-v}_p^2. \label{phi strong convexity} \end{equation} \item The matrix \(P\) is symmetric, stochastic, irreducible and positive semi-definite. \label{P matrix assumption} \end{enumerate} \end{assumption} \begin{remark} An immediate implication of the assumptions in entry~(\ref{P matrix assumption}) is that \(\lambda_2(P)<1\) due to \cite[Corollary 8.4.6]{horn2012matrix}. \end{remark} Notice that we assume a homogeneous mirror map \(\phi\) is used by all vertices in Algorithm~\ref{Bregman PDMM}, but our results can be generalized to heterogeneous mirror maps, as long as they all satisfy \eqref{phi strong convexity}. We now construct the convergence proof of Algorithm~\ref{Bregman PDMM} under Assumption~\ref{basic assumption}.
From the definition in \eqref{Lagrangian} we know that the Lagrangian \(L(\bm{x}, \bm{\nu})\) is separable in each \(x_i\); hence the KKT conditions \eqref{KKT: dual} can be obtained separately for each \(x_i\) using Lemma~\ref{lemma optimality condition}. Similarly, one can obtain the optimality condition of \eqref{Bregman PDMM: primal update}: there exists \(g_i\in N_\mathcal{X}(x_i^{(t+1)})\) such that \begin{equation} \begin{aligned} &-\nu_i^{(t)}+\sum_{j\in\mathcal{V}} P_{ij}\nu_j^{(t)}-\rho\left(\nabla\phi(x_i^{(t+1)})-\nabla\phi(y_i^{(t)})\right)\\ &-\delta_i\left(\nabla\varphi_i(x_i^{(t+1)})-\nabla\varphi_i(x_i^{(t)})\right)-g_i\in \partial f_i(x_i^{(t+1)}). \end{aligned} \label{Bregman PDMM primal update: optimality} \end{equation} Our goal is to show that as \(t\to\infty\), \(\{\bm{x}^{(t+1)}, \bm{\nu}^{(t+1)}\}\) will satisfy \eqref{KKT: primal} and reduce the conditions in \eqref{Bregman PDMM primal update: optimality} to those in \eqref{KKT: dual}. Note that if \(\bm{x}^{(t+1)}=(P\otimes I_n)\bm{x}^{(t+1)}\), then \(\bm{\nu}^{(t+1)}=\bm{\nu}^{(t)}\). Therefore, the KKT conditions \eqref{KKT} are satisfied by \(\{\bm{x}^{(t+1)}, \bm{\nu}^{(t+1)}\}\) if the following hold: \begin{equation} \bm{x}^{(t+1)}=(P\otimes I_n)\bm{x}^{(t+1)}, \quad \bm{x}^{(t+1)}=\bm{x}^{(t)}=\bm{y}^{(t)}. \label{optimality conditions} \end{equation} Define the residual of the optimality conditions \eqref{optimality conditions} at iteration \(t\) as \begin{equation} \begin{aligned} &R(t+1)\coloneqq \frac{\gamma}{2}\norm{((I_m-P)\otimes I_n)\bm{x}^{(t+1)}}_2^2\\ &+\sum_{i\in\mathcal{V}} B_\phi(x_i^{(t+1)}, y_i^{(t)})+\sum_{i\in\mathcal{V}} \frac{\delta_i}{\rho} B_{\varphi_i}(x_i^{(t+1)}, x_i^{(t)}), \end{aligned} \label{residuals} \end{equation} where \(\gamma>0\). Notice that \(R(t+1)=0\) if and only if \eqref{optimality conditions} holds. Hence \(R(t+1)\) measures a running distance to the KKT conditions \eqref{KKT}.
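As an aside (an illustration of ours, not part of the paper's development), \(R(t+1)\) is cheap to evaluate numerically; the sketch below specializes it to the entropic case \(\phi=\varphi_i=\) negative entropy, with the iterates stacked row-wise so that \(((I_m-P)\otimes I_n)\bm{x}\) corresponds to \(X - PX\):

```python
import numpy as np

def kl(u, v):
    # Bregman divergence of the negative entropy: B_phi(u, v) = KL(u || v)
    return float(np.sum(u * np.log(u / v)))

def residual(P, X_next, X, Y, gamma, rho, delta):
    """R(t+1) specialized to the entropic case.

    X_next, X, Y are (m, n) arrays whose rows are x_i^{(t+1)}, x_i^{(t)}, y_i^{(t)};
    the row-stacked form of ((I_m - P) (x) I_n) x is X_next - P @ X_next."""
    m = X_next.shape[0]
    consensus = 0.5 * gamma * np.sum((X_next - P @ X_next) ** 2)
    b_phi = sum(kl(X_next[i], Y[i]) for i in range(m))
    b_varphi = sum(delta[i] / rho * kl(X_next[i], X[i]) for i in range(m))
    return consensus + b_phi + b_varphi

# at a consensus fixed point the residual vanishes, as claimed above
P = np.array([[0.5, 0.5], [0.5, 0.5]])
Xc = np.tile([0.4, 0.6], (2, 1))
print(residual(P, Xc, Xc, Xc, gamma=0.25, rho=1.0, delta=[0.0, 0.0]))  # -> 0.0
```

The function and variable names here are ours, chosen only for this sketch.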
Define the Lyapunov function of Algorithm~\ref{Bregman PDMM}, which measures a running distance to the optimal primal-dual pair \((\bm{x}^\star, \bm{\nu}^\star)\), as \begin{equation} \begin{aligned} V(t)\coloneqq &\frac{1}{2\tau\rho}\norm{\bm{\nu}^\star-\bm{\nu}^{(t)}}_2^2 +\sum_{i\in\mathcal{V}}B_\phi(x_i^\star, y_i^{(t)})\\ &+\sum_{i\in\mathcal{V}} \frac{\delta_i}{\rho} B_{\varphi_i}(x_i^\star, x_i^{(t)}). \end{aligned} \label{Lyapunov function} \end{equation} We first establish the global convergence of Algorithm~\ref{Bregman PDMM} by showing that \(V(t)\) is monotonically non-increasing and that \(R(t+1)\to 0\) as \(t\to\infty\) (see \cite{yu2018bregman} for a detailed proof). \begin{theorem} Suppose that Assumption~\ref{basic assumption} holds. Let the sequence \(\{\bm{y}^{(t)}, \bm{x}^{(t)}, \bm{\nu}^{(t)}\}\) be generated by Algorithm~\ref{Bregman PDMM}. Let \(R(t+1)\) and \(V(t)\) be defined as in \eqref{residuals} and \eqref{Lyapunov function}, respectively. Set \begin{equation} \tau\leq \rho(\mu\sigma-\gamma), \quad 0<\gamma<\mu\sigma, \label{parameter Bregman PDMM} \end{equation} where \(\sigma=\min\{1, n^{\frac{2}{p}-1}\}\). Then \begin{equation} V(t)-V(t+1)\geq R(t+1). \label{telescop series} \end{equation} As \(t\to\infty\), \(R(t+1)\) converges to zero, and \(\{\bm{x}^{(t)}, \bm{\nu}^{(t)}\}\) converges to a point that satisfies the KKT conditions \eqref{KKT}. \label{theorem Bregman PDMM global convergence} \end{theorem} The sketch of the proof is as follows. First, apply inequality \eqref{definition of subgradient} at \(\bm{x}^{(t+1)}\) and \(\bm{x}^\star\), which yields a non-negative inner product. Then, use the identity \eqref{3-point property} to break this inner product into three parts, which contribute to \(V(t)\), \(V(t+1)\), and \(R(t+1)\), respectively.
Lemma~\ref{lemma Pythagorean}, entries (c) and (d) in Assumption~\ref{basic assumption}, together with the parameter setting in \eqref{parameter Bregman PDMM}, ensure that the intermediate terms cancel each other, and we finally reach \eqref{telescop series}. Summing up \eqref{telescop series} from \(t=0\) to \(t=\infty\), we have \(\sum_{t=0}^\infty R(t+1)=V(0)-V(\infty)\leq V(0)\). Therefore, as \(t\to\infty\), we must have \(R(t+1)\to 0\), which implies that \(\{\bm{x}^{(t)}, \bm{\nu}^{(t)}\}\) satisfies \eqref{KKT} in the limit. In general, \eqref{parameter Bregman PDMM} implies that as \(p\) increases, the step size \(\tau\) needs to decrease. See \cite[Remark 1]{wang2014bregman} for details. The following theorem establishes the \(O(1/T)\) convergence rate of Algorithm~\ref{Bregman PDMM} in an ergodic sense via Jensen's inequality. \begin{theorem} Suppose that Assumption~\ref{basic assumption} holds. Let the sequence \(\{\bm{y}^{(t)}, \bm{x}^{(t)}, \bm{\nu}^{(t)}\}\) be generated by Algorithm~\ref{Bregman PDMM}. Let \(V(t)\) be defined as in \eqref{Lyapunov function}; let \(\mu, \tau, \rho, \gamma\) satisfy \eqref{parameter Bregman PDMM}, \(\bm{\nu}^{(0)}=0\), and \(\bar{\bm{x}}^{(T)}=\frac{1}{T}\sum_{t=1}^{T}\bm{x}^{(t)}\).
Then \begin{subequations} \begin{align} &\begin{aligned} &\sum_{i\in\mathcal{V}} f_i(\bar{x}^{(T)}_i)-\sum_{i\in\mathcal{V}} f_i(x^\star_i)\\ &\leq \frac{1}{T}\left(\rho\sum_{i\in\mathcal{V}} B_\phi(x^\star_i, y_i^{(0)})+\sum_{i\in\mathcal{V}} \delta_i B_{\varphi_i}(x^\star_i, x_i^{(0)})\right), \end{aligned}\label{Bregman PDMM function value 1/T}\\ &\frac{1}{2}\norm{((I_m-P)\otimes I_n)\bar{\bm{x}}^{(T)}}_2^2\leq \frac{V(0)}{\gamma T}.\label{Bregman PDMM residual 1/T} \end{align} \label{Bregman PDMM 1/T convergence} \end{subequations} \label{theorem Bregman PDMM 1/T convergence} \end{theorem} Theorem~\ref{theorem Bregman PDMM 1/T convergence} shows that the complexity bound of Algorithm~\ref{Bregman PDMM} with respect to the dimensionality \(n\) is determined by the Bregman divergence term. The following corollary gives an example where, with a properly chosen Bregman divergence, Algorithm~\ref{Bregman PDMM} outperforms Algorithm~\ref{PDMM} by a factor of \(O(n/\ln n)\) \cite[Remark 2]{wang2014bregman}. \begin{corollary} Suppose that Assumption~\ref{basic assumption} holds. Suppose \(\norm{g_i}^2_2\leq M_0\) for all \(g_i\in\partial f_i(x^\star_i)\) and \(i\in\mathcal{V}\), where \(M_0\in{\mbox{\bf R}}_+\). Let the sequence \(\{\bm{y}^{(t)}, \bm{x}^{(t)}, \bm{\nu}^{(t)}\}\) be generated by Algorithm~\ref{Bregman PDMM}.
Let \(\gamma=1/4\), \(\tau=\rho/2\), \(\delta_{\max}=\max_i \delta_i\), \(\bm{\nu}^{(0)}=0\), \(\bm{x}^{(0)}=\mathbf{1}_m\otimes(\frac{1}{n}\mathbf{1}_n)\), and \(\bar{\bm{x}}^{(T)}=\frac{1}{T}\sum_{t=1}^{T}\bm{x}^{(t)}\); let \(\mathcal{X}\) be the probability simplex, and let \(\phi\) and \(\varphi_i\) be the negative entropy function. Then \begin{subequations} \begin{align} &\sum_{i\in\mathcal{V}} f_i(\bar{x}^{(T)}_i)-\sum_{i\in\mathcal{V}} f_i(x^\star_i)\leq \frac{m(\rho+\delta_{\max})\ln n}{T}, \label{cor1: objective function value}\\ &\begin{aligned} &\frac{1}{2}\norm{((I_m-P)\otimes I_n)\bar{\bm{x}}^{(T)}}_2^2\\ \leq &\frac{4m M_0}{ \rho^2(1-\lambda_2(P))^2T}+\frac{ 4m(\rho+\delta_{\max})\ln n}{\rho T}. \end{aligned}\label{cor1:consensus residual} \end{align} \label{cor1: eqn1} \end{subequations} \label{corollary 1} \end{corollary} Observe that \eqref{cor1:consensus residual} implies that the convergence bound on the consensus residual can be tightened by minimizing \(\lambda_2(P)\), which can be achieved efficiently via convex optimization \cite{boyd2004fastest}. \section{Numerical examples}\label{numerical examples} In this section, we present numerical examples to demonstrate the performance of Algorithm~\ref{Bregman PDMM}. Consider the following special case of \eqref{distributed optimization problem} defined over the graph \(\mathcal{G}=(\mathcal{V}, \mathcal{E})\): \begin{equation} \underset{u\in\mathcal{X}}{\mbox{minimize}} \sum_{i=1}^m \left\langle c_i, u\right\rangle, \label{experiment problem} \end{equation} where \(\mathcal{X}\) is the probability simplex. Such problems have potential applications in, for example, policy design in multi-agent decision making \cite{el2016convex, zhang2016decision}. We use Algorithm~\ref{PDMM} as a benchmark since it includes other popular variants of distributed ADMM \cite{iutzeler2013asynchronous, ling2013decentralized, shi2014linear, chang2015multi} as special cases.
Compared to Algorithm~\ref{PDMM}, which requires an efficient Euclidean projection onto the probability simplex \cite{duchi2008efficient}, Algorithm~\ref{Bregman PDMM} can solve \eqref{experiment problem} with closed-form updates suited for massive parallelism \cite{wang2014bregman}. We compare the performance of Algorithm~\ref{Bregman PDMM} with Algorithm~\ref{PDMM} on problem \eqref{experiment problem}, where the entries of \(\{c_1, \ldots, c_m\}\) are sampled from the standard normal distribution and the graph \(\mathcal{G}\) is randomly generated with edge probability \(0.2\) \cite[p.~90]{mesbahi2010graph}. We use the following parameter setting: \(\rho=1\), \(\tau=1/2\), \(\delta_i=0\) for all \(i\in\mathcal{V}\), and \(\phi\) is the negative entropy function. We demonstrate the convergence described by \eqref{cor1: eqn1} in Figure~\ref{experiment}. We observe that Algorithm~\ref{Bregman PDMM} significantly outperforms Algorithm~\ref{PDMM}, especially for large-scale problems, and that optimizing \(\lambda_2(P)\) further accelerates convergence considerably. \begin{figure} \centering \subfloat[\(m=20, n=1000\)\label{exp1}]{% \includegraphics[width=0.23\textwidth]{experiment1} } \hfill \subfloat[\(m=100, n=10000\)\label{exp2}]{% \includegraphics[width=0.23\textwidth]{experiment2} } \caption{Comparison of Bregman PDMM and PDMM with different \(\lambda_2(P)\).} \label{experiment} \end{figure} \section{Introduction} Distributed optimization arises in a variety of applications such as distributed tracking and localization \cite{li2002detection}, estimation in sensor networks \cite{lesser2012distributed}, and multiagent coordination \cite{xiao2007distributed}.
In particular, given an undirected connected graph with \(m\) vertices, distributed optimization over this graph is defined as \begin{equation} \begin{array}{ll} \underset{u\in\mathcal{X}}{\mbox{minimize}} & \sum_{i=1}^m f_i(u) \end{array} \label{distributed optimization problem} \end{equation} where \(\mathcal{X}\subseteq{\mbox{\bf R}}^n\) is a closed convex set and each \(f_i\) is a convex function known locally by vertex \(i\) only. Optimality is achieved through local optimization on each vertex and efficient communication between neighboring vertices in the graph. The alternating direction method of multipliers (ADMM) \cite{boyd2011distributed} is a primal-dual algorithm that alternately optimizes a quadratic augmented Lagrangian with respect to the split primal variables and the dual variables. There has been an increasing interest in applying multi-block variants of ADMM to solve problem \eqref{distributed optimization problem} \cite{he2012alternating, deng2017parallel, hong2017linear}. One of the main challenges of such methods is to find a separable approximation to the coupled quadratic penalty term in the augmented Lagrangian. In particular, a Gauss-Seidel approximation \cite{he2012alternating, hong2017linear} was proposed in \cite{wei2012distributed}, which results in sequential updates on the vertices. On the other hand, a Jacobian-approximation-based variant of ADMM \cite{jakovetic2013distributed, deng2017parallel} allows simultaneous updates \cite{wang2014parallel, meng2015proximal}. We call such methods parallel direction method of multipliers (PDMM) since their primal variables are updated in parallel rather than alternately. Bregman ADMM \cite{wang2014bregman} is a generalization of ADMM where the quadratic penalty function in the ADMM updates is replaced by a Bregman divergence, which can potentially exploit the problem structure.
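To make the "exploit the problem structure" point concrete, here is a small illustration of ours (not taken from \cite{wang2014bregman}): for a linear objective over the probability simplex, the Bregman proximal step with the negative-entropy divergence has a closed-form multiplicative update, avoiding the Euclidean projection that a quadratic penalty would require.

```python
import numpy as np

def entropic_prox(c, y, rho):
    """argmin_x <c, x> + rho * KL(x || y) over the probability simplex.

    Setting the gradient c + rho*(log x - log y) equal to a constant vector
    on the simplex gives the closed form x proportional to y * exp(-c / rho)."""
    x = y * np.exp(-c / rho)
    return x / x.sum()

rng = np.random.default_rng(0)
c = rng.standard_normal(5)
y = rng.random(5)
y /= y.sum()
x = entropic_prox(c, y, rho=1.0)
assert np.isclose(x.sum(), 1.0) and np.all(x > 0)  # feasible without projection
g = c + np.log(x) - np.log(y)                      # stationarity: constant vector
assert np.allclose(g, g[0])
```

The function name `entropic_prox` is ours, chosen for this sketch only.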
There have been attempts to introduce Bregman divergences as proximal terms in multi-block variants of ADMM \cite{wang2014parallel, wang2015convergence}, but they are still based on a quadratic augmented Lagrangian. To the best of our knowledge, all existing ADMM-based methods for distributed optimization use quadratic penalty functions. In this paper, we propose a new solution method, namely Bregman PDMM, for distributed optimization, which combines the advantages of PDMM and Bregman ADMM. We first propose a generalized averaging step named mirror averaging. Based on this, we develop Bregman PDMM, which replaces all the quadratic penalty functions in the PDMM updates with Bregman divergences. We establish the global convergence of the proposed algorithm and its \(O(1/T)\) convergence rate, where \(T\) is the number of iterations. Furthermore, in some cases, Bregman PDMM can outperform PDMM by a factor of \(O(n/\ln n)\), where \(n\) is the dimension of the solution variable. Finally, we show that by optimizing the spectral gap of the averaging matrix, we can enhance the performance of Bregman PDMM. The rest of the paper is organized as follows. \S\ref{preliminaries} provides a reformulation of problem \eqref{distributed optimization problem} using consensus constraints. In \S\ref{method}, we develop Bregman PDMM for problem \eqref{distributed optimization problem}, whose convergence properties are established in \S\ref{convergence} via Lyapunov analysis. \S\ref{numerical examples} presents numerical examples; \S\ref{conclusion} concludes the paper and comments on future work. \section{Bregman Parallel Direction Method of Multipliers}\label{method} In this section, we first introduce an existing PDMM that contains averaging as an implicit update. Then we generalize averaging to mirror averaging based on the idea of mirror descent, and finally propose our Bregman PDMM based on mirror averaging.
\subsection{Parallel Direction Method of Multipliers} PDMM \cite{meng2015proximal} solves \eqref{distributed optimization problem} with \(\mathcal{X}={\mbox{\bf R}}^n\) via parallel, single-loop primal updates, and links convergence behavior to graph topology \cite{makhdoumi2017convergence, francca2017distributed}. An adaptation of PDMM to formulation \eqref{consensus optimization problem} is given in Algorithm~\ref{PDMM}. Naturally, one may try to generalize the quadratic penalty in Algorithm~\ref{PDMM} to a Bregman divergence in the same way that Bregman ADMM generalizes ADMM \cite{wang2014bregman}. However, if we simply replace the quadratic penalty in Algorithm~\ref{PDMM} with the Bregman divergence induced by a strongly convex function \(\phi\), it is challenging to prove convergence, for the following reason. A crucial step in the proof provided in \cite{meng2015proximal} is to apply the three-point identity \eqref{3-point property} to a convex function \(\Psi:{\mbox{\bf R}}^{mn}\to{\mbox{\bf R}}\) that satisfies the following differential equation, \begin{equation} \nabla \Psi(\bm{u})=(P\otimes I_n)\nabla\Phi(\bm{u}), \label{Dennis ODE} \end{equation} where \(\Phi(\bm{u})=\sum_{i\in\mathcal{V}}\phi(u_i)\) with \(\bm{u} = [u_1^\top, \ldots, u_m^\top]^\top\in\mathcal{X}^m\). However, it is highly non-trivial to solve \eqref{Dennis ODE} for a convex function \(\Psi\) unless \(\phi\) is a quadratic function. Hence we cannot directly utilize the convergence proof in \cite{meng2015proximal}. Therefore, we need to take a closer look at the role of the quadratic term in \eqref{PDMM: primal update}. \begin{algorithm} \caption{Existing PDMM \cite{meng2015proximal}} \begin{algorithmic}[h] \REQUIRE Parameter \(\rho>0\); initial point \(\bm{x}^{(0)}, \bm{\nu}^{(0)}\in{\mbox{\bf R}}^{mn}\).
\FORALL{\(t=0, 1, 2, \ldots\)} \STATE{each vertex \(i\) updates \(x_i\) in parallel \begin{equation} \begin{aligned} x_{i}^{(t+1)} = & \underset{x_i}{\mathop{\rm argmin}} \,\, f_i(x_i)\\ & +\langle x_i, \nu_i^{(t)} - \sum_{j\in\mathcal{N}(i)}P_{ij}\nu^{(t)}_j\rangle\\ & + \frac{\rho}{2} \sum_{j\in\mathcal{N}(i)} P_{ij}\norm{x_i - x_j^{(t)}}_2^2 \end{aligned}\label{PDMM: primal update} \end{equation} each vertex \(i\) updates \(\nu_i\) \begin{equation} \nu_i^{(t+1)} = \nu_i^{(t)}+\rho x_i^{(t+1)}-\rho\sum_{j\in\mathcal{N}(i)}P_{ij}x_j^{(t+1)} \label{PDMM: dual update} \end{equation}} \ENDFOR \end{algorithmic} \label{PDMM} \end{algorithm} Consider the following intermediate update, \begin{equation} y_i^{(t)}\coloneqq\underset{x_i}{\mathop{\rm argmin}}\sum_{j\in\mathcal{N}(i)} P_{ij}\norm{x_i-x_j^{(t)}}_2^2. \label{averaging} \end{equation} The behavior of \eqref{averaging} is characterized by the Markov chain defined by the matrix \(P\) \cite[Proposition 3.21]{mesbahi2010graph}. In the sequel, we will generalize the quadratic function in \eqref{averaging} to a Bregman divergence; then we will introduce Bregman PDMM based on such a generalization. \subsection{Mirror Averaging} Consider the following update: for all \(i\in\mathcal{V}\), \begin{equation} y_i^{(t)} = \underset{x_i\in \mathcal{X}}{\mathop{\rm argmin}}\sum_{j\in\mathcal{N}(i)} P_{ij} B_\phi(x_i, x_j^{(t)}), \label{mirror averaging dynamics} \end{equation} where \(P\) is a symmetric, stochastic, irreducible matrix defined on \(\mathcal{G}\), and \(\phi\) is a mirror map defined on the open set \(\mathcal{D}\) such that \(\mathcal{X}\) is included in the closure of \(\mathcal{D}\). Let \(\Phi(\bm{u})=\sum_{i\in\mathcal{V}} \phi(u_i)\) with \(\bm{u}=[u_1^\top, \ldots, u_m^\top]^\top\). Using an argument similar to the one in \cite[p. 301]{bubeck2015convex}, one can obtain the following result.
\begin{proposition} Update \eqref{mirror averaging dynamics} is equivalent to \begin{subequations} \begin{align} \nabla\Phi (\bm{z}^{(t)}) &= (P\otimes I_n)\nabla \Phi(\bm{x}^{(t)}), \label{averaging in mirror}\\ \bm{y}^{(t)} &= \underset{\bm{x}\in \mathcal{X}^m}{\mathop{\rm argmin}}\,\, B_\Phi(\bm{x}, \bm{z}^{(t)}).\label{projection in mirror} \end{align} \label{mirror dynamics break down} \end{subequations} \label{proposition 1} \end{proposition} Since \eqref{averaging in mirror} has the same dynamics as the averaging step \eqref{averaging}, inspired by the idea of mirror descent, we interpret \eqref{mirror averaging dynamics} as \emph{mirror averaging}: to achieve update \eqref{mirror averaging dynamics}, we first map \(\bm{x}^{(t)}\) to \(\nabla\Phi(\bm{x}^{(t)})\), next run an averaging step via \eqref{averaging in mirror} and obtain \(\nabla\Phi(\bm{z}^{(t)})\), then apply \(\left(\nabla\Phi\right)^{-1}\) to it and obtain \(\bm{z}^{(t)}\), and finally get \(\bm{y}^{(t)}\) via the projection \eqref{projection in mirror}. \begin{remark} We provide two special cases \cite[p~301]{bubeck2015convex} where \eqref{mirror averaging dynamics} has a closed-form solution: 1) If \(\mathcal{X}={\mbox{\bf R}}^n\) and \(\phi=\frac{1}{2}\norm{\cdot}_2^2\), then \eqref{averaging in mirror} and \eqref{projection in mirror} reduce to \eqref{averaging}. 2) If \(\mathcal{X}\) denotes the probability simplex and \(\phi\) the negative entropy function, then \eqref{averaging in mirror} reduces to weighted geometric averaging and \eqref{projection in mirror} to a simple re-normalization. \end{remark} \begin{algorithm} \caption{Bregman PDMM} \begin{algorithmic}[h] \REQUIRE Parameters: \(\tau, \rho >0\), \(\delta_1, \ldots, \delta_m\geq 0\); initial point \(\bm{x}^{(0)}\in\mathcal{X}^m\cap\mathcal{D}^m\), \(\bm{\nu}^{(0)}\in{\mbox{\bf R}}^{mn}\).
\FORALL{\(t=0, 1, 2, \ldots\)} \STATE{each vertex \(i\) updates \(y_i\) and \(x_i\) in parallel \begin{subequations} \begin{align} &y_i^{(t)} = \underset{y_i\in \mathcal{X}}{\mathop{\rm argmin}}\sum_{j\in\mathcal{N}(i)} P_{ij} B_\phi(y_i, x_j^{(t)})\label{Bregman PDMM: mirror Markov update}\\ &\begin{aligned} x_{i}^{(t+1)} = \,& \underset{x_i\in\mathcal{X}}{\mathop{\rm argmin}} \,\, f_i(x_i)\\ & +\langle x_i, \nu_i^{(t)} - \sum_{j\in\mathcal{N}(i)}P_{ij}\nu^{(t)}_j\rangle \\ & + \rho B_\phi(x_i, y_i^{(t)})+\delta_iB_{\varphi_i}(x_i, x_i^{(t)})\label{Bregman PDMM: primal update} \end{aligned} \end{align} \label{Bregman PDMM: subproblem} \end{subequations} each vertex \(i\) updates \(\nu_i\) in parallel \begin{equation} \nu_i^{(t+1)} = \nu_i^{(t)}+\tau x_i^{(t+1)}-\tau\sum_{j\in\mathcal{N}(i)}P_{ij}x_j^{(t+1)} \label{Bregman PDMM: dual update} \end{equation}} \ENDFOR \end{algorithmic} \label{Bregman PDMM} \end{algorithm} We introduce the following useful lemma, whose proof can be found in the Appendix. \begin{lemma} Given update \eqref{mirror averaging dynamics}, for any \(u\in\mathcal{X}\), \begin{equation} \begin{aligned} &\sum_{i\in\mathcal{V}} B_\phi(u, x_i^{(t)})-\sum_{i\in\mathcal{V}} B_\phi(u, y_i^{(t)})\\ \geq&\sum_{i,j\in\mathcal{V}}P_{ij}B_\phi(y_i^{(t)}, x_j^{(t)}). \end{aligned} \label{lemma Pythagorean: eqn1} \end{equation}\label{lemma Pythagorean} \end{lemma} \begin{remark} Lemma~\ref{lemma Pythagorean} turns out to be a key step in our convergence proof. Notice that without the generalization from \eqref{averaging} to \eqref{mirror averaging dynamics}, we could replace Lemma~\ref{lemma Pythagorean} with Jensen's inequality for strongly convex functions, provided that the Bregman divergence is strongly convex in its second argument; however, such an assumption does not hold in general. Hence the generalization from \eqref{averaging} to \eqref{mirror averaging dynamics} is necessary.
\end{remark} \subsection{Bregman PDMM via Mirror Averaging} Based on the above observations, we propose Algorithm~\ref{Bregman PDMM} by generalizing the quadratic penalty term in Algorithm~\ref{PDMM} to a Bregman divergence. It essentially combines the parallel updates in Algorithm~\ref{PDMM} and the Bregman penalty term in Bregman ADMM \cite{wang2014bregman}. Notice that Algorithm~\ref{PDMM} is a special case of Algorithm~\ref{Bregman PDMM} with \(\phi=\frac{1}{2}\norm{\cdot}_2^2\), \(\tau=\rho\), and \(\delta_i=0\) for all \(i\in\mathcal{V}\). \section*{APPENDIX} \section*{ACKNOWLEDGMENT} The authors would like to thank De Meng and Maryam Fazel for many helpful discussions and suggestions. The anonymous reviewers are gratefully acknowledged. \bibliographystyle{IEEEtran} \section{Preliminaries and Background}\label{preliminaries} \subsection{Notation} Let \({\mbox{\bf R}}\) (\({\mbox{\bf R}}_+\)) denote the (nonnegative) real numbers, \({\mbox{\bf R}}^n\) (\({\mbox{\bf R}}^n_+\)) denote the set of \(n\)-dimensional (elementwise nonnegative) vectors. Let \(\geq(\leq)\) denote elementwise inequality when applied to vectors and matrices. Let \(\langle\cdot, \cdot\rangle\) denote the dot product. Let \(I_n\in{\mbox{\bf R}}^{n\times n}\) denote the \(n\)-dimensional identity matrix, \(\mathbf{1}_n\in{\mbox{\bf R}}^n\) the \(n\)-dimensional vector of all \(1\)s. Given matrix \(A\in{\mbox{\bf R}}^{n\times n}\), let \(A_{ij}\) denote its \((i, j)\) entry; \(A^\top\) denotes its transpose. Let \(\otimes\) denote the Kronecker product. Given the set \(\mathcal{X}\subseteq{\mbox{\bf R}}^n\), its indicator function \(\iota_\mathcal{X}:{\mbox{\bf R}}^n\to {\mbox{\bf R}}\) is defined as: \(\iota_\mathcal{X}(u)=0\) if \(u\in\mathcal{X}\) and \(+\infty\) otherwise. \subsection{Subgradients} Let \(f:{\mbox{\bf R}}^n\to{\mbox{\bf R}}\) be a convex function.
Then \(g\in{\mbox{\bf R}}^n\) is a subgradient of \(f\) at \(u\in{\mbox{\bf R}}^n\) if and only if for any \(v\in{\mbox{\bf R}}^n\) one has \begin{equation} f(v)-f(u)\geq \left\langle g, v-u\right\rangle. \label{definition of subgradient} \end{equation} We denote by \(\partial f(u)\) the set of subgradients of \(f\) at \(u\). We will use the following result. \begin{lemma}\cite[Theorem 27.4]{rockafellar2015convex} Given a closed convex set \(\mathcal{C}\subseteq{\mbox{\bf R}}^n\) and a closed, proper, convex function \(f:{\mbox{\bf R}}^n\to {\mbox{\bf R}}\), we have \(u^\star=\underset{u\in\mathcal{C}}{\mathop{\rm argmin}}\, f(u)\) if and only if there exists \(g\in N_\mathcal{C}(u^\star)\) such that \(-g\in\partial f(u^\star)\), where \begin{equation} N_\mathcal{C}(u^\star)\coloneqq\left\{g\in{\mbox{\bf R}}^n: \langle g, u^\star-v\rangle\geq 0, \forall v\in\mathcal{C}\right\} \label{definition of normal cone} \end{equation} \label{lemma optimality condition} is the normal cone of the set \(\mathcal{C}\) at \(u^\star\). \end{lemma} \subsection{Mirror maps and Bregman divergence} Let \(\mathcal{D}\subseteq{\mbox{\bf R}}^n\) be a convex open set. We say that \(\phi:\mathcal{D}\to{\mbox{\bf R}}\) is a {\em mirror map} \cite[p.298]{bubeck2015convex} if it satisfies: 1) \(\phi\) is differentiable and strictly convex, 2) \(\nabla\phi\) takes all possible values, and 3) \(\nabla\phi\) diverges on the boundary of the closure of \(\mathcal{D}\), {\it i.e.}, \(\lim_{u\to\partial \bar{\mathcal{D}}}\norm{\nabla\phi(u)}=\infty\), where \(\norm{\cdot}\) is an arbitrary norm on \({\mbox{\bf R}}^n\). The Bregman divergence \(B_\phi:\mathcal{D}\times\mathcal{D}\to{\mbox{\bf R}}_+\) induced by \(\phi\) is defined as \begin{equation} B_\phi(u, v)=\phi(u)-\phi(v)-\left\langle \nabla\phi(v), u-v\right\rangle. \label{definition of Bregman divergence} \end{equation} Note that \(B_\phi(u, v)\geq 0\) and \(B_\phi(u, v)=0\) only if \(u=v\).
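As a quick numerical check (a sketch of ours, not part of the paper), the two instances used throughout, \(\phi=\frac{1}{2}\norm{\cdot}_2^2\) and the negative entropy \(\phi(u)=\sum_i u_i\ln u_i\), recover the squared Euclidean distance and the KL divergence, respectively:

```python
import numpy as np

def bregman(phi, grad_phi, u, v):
    """B_phi(u, v) = phi(u) - phi(v) - <grad phi(v), u - v>."""
    return float(phi(u) - phi(v) - grad_phi(v) @ (u - v))

# phi = (1/2)||u||_2^2  ->  B_phi(u, v) = (1/2)||u - v||_2^2
sq = lambda u: 0.5 * (u @ u)
sq_grad = lambda u: u

# phi = negative entropy  ->  B_phi(u, v) = KL(u || v) on the simplex
negent = lambda u: np.sum(u * np.log(u))
negent_grad = lambda u: np.log(u) + 1.0

u = np.array([0.2, 0.3, 0.5])
v = np.array([0.4, 0.4, 0.2])
assert np.isclose(bregman(sq, sq_grad, u, v), 0.5 * np.sum((u - v) ** 2))
assert np.isclose(bregman(negent, negent_grad, u, v), np.sum(u * np.log(u / v)))
```

The helper name `bregman` is ours; the identities it checks follow directly from \eqref{definition of Bregman divergence}.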
\(\phi\) and \(B_\phi\) also satisfy the following three-point identity: \begin{equation} \begin{aligned} &\langle\nabla\phi(u)-\nabla\phi(v), w-u\rangle\\ =&B_\phi(w, v)-B_\phi(w, u)-B_\phi(u, v). \end{aligned}\label{3-point property} \end{equation} \subsection{Graphs and distributed optimization} An undirected connected graph \(\mathcal{G}=(\mathcal{V}, \mathcal{E})\) consists of a vertex set \(\mathcal{V}=\{1, 2, \ldots, m\}\) and an edge set \(\mathcal{E}\subseteq \mathcal{V}\times \mathcal{V}\) such that \((i, j)\in\mathcal{E}\) if and only if \((j, i)\in\mathcal{E}\) for all \(i, j\in\mathcal{V}\). Denote by \(\mathcal{N}(i)\) the set of neighbors of node \(i\), where \(j\in\mathcal{N}(i)\) if \((i, j)\in\mathcal{E}\). Consider a symmetric stochastic matrix \(P\in{\mbox{\bf R}}^{m\times m}\) defined on the graph \(\mathcal{G}\) such that \(P_{ij}>0\) implies \(j\in\mathcal{N}(i)\). Such a matrix \(P\) can be constructed, for example, from the graph Laplacian \cite[Proposition 3.18]{mesbahi2010graph}. The eigenvalues of \(P\) are real and will be ordered nonincreasingly in magnitude, \(|\lambda_1(P)|\geq |\lambda_2(P)|\geq \ldots\geq|\lambda_m(P)|\). From \cite[Theorem 8.4.4]{horn2012matrix} we know that \(\lambda_1(P)=1\) is simple, with eigenspace spanned by \(\mathbf{1}_m\). Let \(\mathcal{G}=(\mathcal{V}, \mathcal{E})\) denote the underlying graph over which the distributed optimization problem \eqref{distributed optimization problem} is defined. A common approach to solving problem \eqref{distributed optimization problem} is to create local copies \(\{x_1, x_2, \ldots, x_m\}\) of the design variable and impose the consensus constraints \(x_i=x_j\) for all \((i, j)\in \mathcal{E}\) \cite{bertsekas1989parallel, boyd2011distributed}. Many different forms of consensus constraints have been proposed \cite{wei2012distributed, jakovetic2013distributed, iutzeler2013asynchronous, shi2014linear}.
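As a quick numerical sanity check of the Bregman machinery above, the sketch below verifies nonnegativity of \(B_\phi\) and the three-point identity \eqref{3-point property} for two concrete mirror maps, the squared Euclidean norm and negative entropy (the function names and test vectors are ours, for illustration only):

```python
import numpy as np

def bregman(phi, grad_phi, u, v):
    """Bregman divergence B_phi(u, v) = phi(u) - phi(v) - <grad phi(v), u - v>."""
    return phi(u) - phi(v) - np.dot(grad_phi(v), u - v)

# Two illustrative mirror maps (our choices, not from the paper):
# squared Euclidean norm, and negative entropy on the positive orthant.
sq = lambda u: 0.5 * np.dot(u, u)
grad_sq = lambda u: u
negent = lambda u: np.sum(u * np.log(u))
grad_negent = lambda u: np.log(u) + 1.0

rng = np.random.default_rng(0)
u, v, w = (rng.random(5) + 0.1 for _ in range(3))  # strictly positive vectors

for phi, g in [(sq, grad_sq), (negent, grad_negent)]:
    # Nonnegativity, and B_phi(u, u) = 0.
    assert bregman(phi, g, u, v) >= 0
    assert abs(bregman(phi, g, u, u)) < 1e-12
    # Three-point identity:
    # <grad phi(u) - grad phi(v), w - u> = B(w, v) - B(w, u) - B(u, v).
    lhs = np.dot(g(u) - g(v), w - u)
    rhs = bregman(phi, g, w, v) - bregman(phi, g, w, u) - bregman(phi, g, u, v)
    assert abs(lhs - rhs) < 1e-10
```

For the squared-norm map, \(B_\phi\) reduces to \(\frac{1}{2}\norm{u-v}_2^2\); for negative entropy it is the generalized Kullback-Leibler divergence.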
In this paper, we consider consensus constraints of the form: \begin{equation} (P\otimes I_n)\bm{x}=\bm{x}, \label{consensus constraints} \end{equation} where \(\bm{x} =[x^\top_1, x^\top_2, \ldots, x^\top_m]^\top\), \(P\) is a symmetric, stochastic and irreducible matrix defined on \(\mathcal{G}\). We will focus on solving the following reformulation of \eqref{distributed optimization problem}, \begin{equation} \begin{array}{ll} \underset{\bm{x}\in\mathcal{X}^m}{\mbox{minimize}} & \sum_{i\in\mathcal{V}} f_i(x_i)\\ \mbox{subject to} & (P\otimes I_n) \bm{x}= \bm{x}, \end{array} \label{consensus optimization problem} \end{equation} where \(\mathcal{X}^m\) is the Cartesian product of \(m\) copies of \(\mathcal{X}\).
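To make the consensus constraint \eqref{consensus constraints} concrete, the following sketch (ours, not part of the paper) builds a symmetric stochastic \(P\) from the Laplacian of a small path graph and checks that, for a connected graph, the fixed points of \(P\otimes I_n\) are exactly the consensus vectors:

```python
import numpy as np

# Path graph on m = 4 nodes; L is the graph Laplacian.
m, n = 4, 2
A = np.zeros((m, m))
for i, j in [(0, 1), (1, 2), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

# P = I - eps * L is symmetric and stochastic for small enough eps
# (one Laplacian-based construction in the spirit of the reference cited above).
eps = 0.3
P = np.eye(m) - eps * L
assert np.allclose(P, P.T) and np.allclose(P.sum(axis=1), 1) and (P >= 0).all()

# lambda_1(P) = 1 is simple (the graph is connected), all other |lambda_i| < 1.
mags = np.sort(np.abs(np.linalg.eigvalsh(P)))[::-1]
assert np.isclose(mags[0], 1) and mags[1] < 1

# (P kron I_n) x = x  holds iff  x_1 = ... = x_m.
K = np.kron(P, np.eye(n))
x_consensus = np.tile(np.array([1.0, -2.0]), m)  # all blocks equal
x_other = np.arange(m * n, dtype=float)          # blocks differ
assert np.allclose(K @ x_consensus, x_consensus)
assert not np.allclose(K @ x_other, x_other)
```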
\section*{Acknowledgments} Thanks to Tobi Delbruck and the team at iniLabs for providing and supporting the DAVIS-346b cameras. We also gratefully acknowledge support through the following grants: NSF-IIS-1703319, NSF-IIP-1439681 (I/UCRC), ARL RCTA W911NF-10-2-0016, and the DARPA FLA program. \bibliographystyle{plainnat} \section{Conclusion} \label{sec:conclusion} In this work, we have presented a novel design for a neural network architecture that is able to accurately predict optical flow from events alone. Due to the method's self-supervised nature, the network can be trained without any manual labeling, simply by recording data from the camera. We show that the predictions generalize beyond hand-designed laboratory scenes to natural ones, and that the method is competitive with state-of-the-art frame-based self-supervised methods. We hope that this work will provide not only a novel method for flow estimation, but also a paradigm for applying other self-supervised learning methods to event cameras in the future. For future work, we hope to incorporate additional losses that provide supervisory signals from event data alone, to expose the network to scenes that are challenging for traditional frame-based cameras, such as those with high-speed motions or challenging lighting. \section{Empirical Evaluation} \label{sec:results} \input{tex/results/FlowFigs.tex} \input{tex/results/Training.tex} \input{tex/results/Ablation.tex} \input{tex/results/Tables.tex} \input{tex/results/Comparisons.tex} \input{tex/results/Testing.tex} \input{tex/results/Results.tex} \section{Optical Flow Dataset} For ground truth evaluation only, we generated a novel dataset for ground truth optical flow using the data provided in the Multi-Vehicle Stereo Event Camera dataset (MVSEC) by \citet{zhu2018mvsec}. The dataset contains stereo event camera data in a number of flying, driving and handheld scenes.
In addition, the dataset provides ground truth poses and depth maps for each event camera, which we have used to generate reference ground truth optical flow. From the pose (consisting of rotation $R$ and translation $\mathbf{p}$) of the camera at times $t_0$ and $t_1$, we make a linear velocity assumption, and estimate velocity and angular velocity using numerical differentiation: \begin{align} \mathbf{v}=&\frac{\mathbf{p}(t_1) - \mathbf{p}(t_0)}{dt}\\ \mathbf{\omega}^{\wedge}=&\frac{\text{logm}\left(R_{t_0}^{T}R_{t_1}\right)}{dt} \intertext{where $\text{logm}$ is the matrix logarithm, and $\mathbf{\omega}^{\wedge}$ is the skew symmetric matrix corresponding to the vector $\mathbf{\omega}$:} \mathbf{\omega}^{\wedge}=&\begin{bmatrix}0 & -\omega_z & \omega_y\\\omega_z & 0 & -\omega_x\\-\omega_y & \omega_x & 0\end{bmatrix} \end{align} A central moving average filter is applied to the estimated velocities to reduce noise. We then use these velocities to estimate the motion field, given the ground truth depths, $Z$, at each undistorted pixel position: \begin{align} \begin{pmatrix}\dot{x}\\\dot{y}\end{pmatrix}=&\begin{bmatrix}-\frac{1}{Z} & 0 & \frac{x}{Z} & xy & -(1+x^2) & y\\0 & -\frac{1}{Z} & \frac{y}{Z} & 1+y^2 & -xy & -x\end{bmatrix}\begin{pmatrix}\mathbf{v} \\ \mathbf{\omega}\end{pmatrix} \end{align} Finally, we scale the motion field by the time window between each pair of images, $dt$, and use the resulting displacement as an approximation to the true optical flow for each pixel. To apply the ground truth to the distorted images, we shift the undistorted pixels by the flow, and apply distortion to the shifted pixels. The distorted flow is then the displacement from the original distorted position to the shifted distorted position. In total, we have generated ground truth optical flow for the indoor$\_$flying, outdoor$\_$day and outdoor$\_$night sequences.
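The pose-differencing and motion-field computation above can be sketched as follows (an illustrative implementation: the rotation-specific logarithm stands in for a general matrix logarithm, the interaction matrix uses the standard sign convention, and all function names are ours):

```python
import numpy as np

def so3_log(R):
    """Matrix logarithm of a rotation matrix (pure-numpy stand-in for logm)."""
    theta = np.arccos(np.clip((np.trace(R) - 1) / 2, -1.0, 1.0))
    if theta < 1e-8:
        return (R - R.T) / 2  # first-order approximation near identity
    return theta / (2 * np.sin(theta)) * (R - R.T)

def vee(w_hat):
    """Inverse of the hat map: skew-symmetric matrix -> vector."""
    return np.array([w_hat[2, 1], w_hat[0, 2], w_hat[1, 0]])

def velocities(R0, p0, R1, p1, dt):
    """Linear and angular velocity from two poses by numerical differentiation."""
    v = (p1 - p0) / dt
    w = vee(so3_log(R0.T @ R1)) / dt
    return v, w

def motion_field(x, y, Z, v, w):
    """Motion field (x_dot, y_dot) at a normalized, undistorted pixel (x, y)
    with depth Z, using the standard interaction-matrix form."""
    B = np.array([[-1 / Z, 0, x / Z, x * y, -(1 + x ** 2), y],
                  [0, -1 / Z, y / Z, 1 + y ** 2, -x * y, -x]])
    return B @ np.concatenate([v, w])
```

Scaling the returned rates by the window length $dt$ gives the per-pixel displacement used as ground truth flow.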
In addition to using the indoor$\_$flying and outdoor$\_$day ground truth sets for evaluation, we will also release all sequences as a dataset. \section{Introduction} \label{sec:introduction} By registering changes in log intensity in the image with microsecond accuracy, event-based cameras offer promising advantages over frame based cameras in situations with factors such as high speed motions and difficult lighting. One interesting application of these cameras is the estimation of optical flow. By directly measuring the precise time at which each pixel changes, the event stream directly encodes fine grain motion information, which researchers have taken advantage of in order to perform optical flow estimation. For example, \citet{benosman2012asynchronous} show that optical flow can be estimated from a local window around each event in a linear fashion, by estimating a plane in the spatio-temporal domain. This is significantly simpler than image-based methods, where optical flow is performed using iterative methods. However, analysis in \citet{rueckauer2016evaluation} has shown that these algorithms require significant, hand crafted outlier rejection schemes, as they do not properly model the output of the sensor. For traditional image-based methods, deep learning has helped the computer vision community achieve new levels of performance while avoiding having to explicitly model the entire problem. However, these techniques have yet to see the same level of adoption and success for event-based cameras. One reason for this is the asynchronous output of the event-based camera, which does not easily fit into the synchronous, frame-based inputs expected by image-based paradigms. Another reason is the lack of labeled training data necessary for supervised training methods. In this work, we propose two main contributions to resolve these issues. \begin{figure}[t!] 
\centering \includegraphics[trim=0 0 0 -1cm, width=0.48\linewidth]{figs/primary_events.png} \includegraphics[trim=0 0 0 -1cm, width=0.48\linewidth]{figs/primary_flow.png} \caption{Left: Event input to the network visualizing the last two channels (latest timestamps). Right: Predicted flow, colored by direction. Best viewed in color.} \label{fig:timestamp_image} \end{figure} First, we propose a novel image-based representation of an event stream, which fits into any standard image-based neural network architecture. The event stream is summarized by an image with channels representing the number of events and the latest timestamp at each polarity at each pixel. This compact representation preserves the spatial relationships between events, while maintaining the most recent temporal information at each pixel and providing a fixed number of channels for any event stream. Second, we present a self-supervised learning method for optical flow estimation given only a set of events and the corresponding grayscale images generated from the same camera. The self-supervised loss is modeled after frame-based self-supervised flow networks such as \citet{jason2016back} and \citet{meister2017unflow}, where a photometric loss is used as a supervisory signal in place of direct supervision. As a result, the network can be trained using only data captured directly from an event camera that also generates frame-based images, such as the Dynamic and Active-pixel Vision (DAVIS) Sensor developed by \citet{brandli2014240}, circumventing the need for expensive labeling of data. These event images combined with the self-supervised loss are sufficient for the network to learn to predict accurate optical flow from events alone. For evaluation, we generate a new event camera optical flow dataset, using the ground truth depths and poses in the Multi-Vehicle Stereo Event Camera dataset by \citet{zhu2018mvsec}.
We show that our method is competitive on this dataset with UnFlow by \citet{meister2017unflow}, an image-based self supervised network trained on KITTI, and fine tuned on event camera frames, as well as standard non-learning based optical flow methods. In summary, our main contributions in this work are: \begin{itemize} \item We introduce a novel method for learning optical flow using events as inputs only, without any supervision from ground-truth flow. \item Our CNN architecture uses a self-supervised photoconsistency loss from low resolution intensity images used in training only. \item We present a novel event-based optical flow dataset with ground truth optical flow, on which we evaluate our method against a state of the art frame based method. \end{itemize} \section{Method} \label{sec:method} In this section, we describe our approach in detail. In Sec. \ref{sec:representation}, we describe our event representation, which is an analogy to an event image. In Sec. \ref{sec:loss}, we describe the self-supervised loss used to provide a supervisory signal using only the gray scale images captured before and after each time window, and in Sec. \ref{sec:architecture}, we describe the architecture of our network, which takes as input the event image and outputs a pixel-wise optical flow. Note that, throughout this paper, we refer to optical flow as the displacement of each pixel within a given time window. \input{tex/method/EventRepresentation.tex} \input{tex/method/Loss.tex} \input{tex/method/Architecture.tex} \section{Related Work} \label{sec:relatedwork} \subsection{Event-based Optical Flow} There have been several works that attempt to take advantage of the high temporal resolution of the event camera to estimate accurate optical flow. \citet{benosman2012asynchronous} model a given patch moving in the spatial temporal domain as a plane, and estimate optical flow as the slope of this plane. This work is extended in \citet{benosman2014event} by adding an iterative outlier rejection scheme to remove events significantly far from the plane, and in \citet{barranco2014contour} by combining the estimated flow with flow from traditional images. \citet{brosch2015event} present an analogy of \citet{lucas1981iterative} using the events to approximate the spatial image gradient, while \citet{orchard2014bioinspired} use a spiking neural network to estimate flow, and \citet{liu2018abmof} estimate sparse flow using an adaptive block matching algorithm. In other works, \citet{bardow2016simultaneous} present the optical flow estimation problem jointly with image reconstruction, and solve the joint problem using convex optimization methods, while \citet{zhu2017event} present an expectation-maximization based approach to estimate flow in a local patch. A number of these methods have been evaluated in \citet{rueckauer2016evaluation} against relatively simple scenes with limited translation and rotation, with limited results, with ground truth optical flow estimated from a gyroscope. Similarly, \citet{barranco2016dataset} provide a dataset with optical flow generated from a known motion combined with depths from a RGB-D sensor. \subsection{Event-based Deep Learning} One of the main challenges for supervised learning for events is the lack of labeled data.
As a result, many of the early works on learning with event-based data, such as \citet{ghosh2014real} and \citet{moeys2016steering}, rely on small, hand-collected datasets. To address this, recent works have attempted to collect new datasets of event camera data. \citet{mueggler2017event} provide handheld sequences with ground truth camera pose, which \citet{nguyen2017real} use to train an LSTM network to predict camera pose. In addition, \citet{zhu2018mvsec} provide flying, driving and handheld sequences with ground truth camera pose and depth maps, and \citet{DBLP:journals/corr/abs-1711-01458} provide long driving sequences with ground truth measurements from the vehicle such as steering angle and GPS position. Another approach has been to generate event-based equivalents of existing image-based datasets by recording images from these datasets with an event-based camera (\citet{orchard2015converting}, \citet{hu2016dvs}). Recently, there have also been implementations of neural networks on spiking neuromorphic processors, such as in \citet{amir2017low}, where a network is adapted to the TrueNorth chip to perform gesture recognition. \subsection{Self-supervised Optical Flow} Self-supervised, or unsupervised, methods have shown great promise in training networks to solve many challenging 3D perception problems. \citet{jason2016back} and \citet{ren2017unsupervised} train an optical flow prediction network using the traditional brightness constancy and smoothness constraints developed in optimization-based methods such as the Lucas-Kanade method \cite{lucas1981iterative}. \citet{zhu2017guided} combine this self-supervised loss with supervision from an optimization based flow estimate as a proxy for ground truth supervision, while \citet{meister2017unflow} extend the loss with occlusion masks and a second order smoothness term, and \citet{lai2017semi} introduce an adversarial loss on top of the photometric error.
\subsection{Network Architecture} \label{sec:architecture} The EV-FlowNet architecture closely resembles encoder-decoder networks such as the stacked hourglass (\citet{newell2016stacked}) and the U-Net (\citet{ronneberger2015u}), and is illustrated in Fig. \ref{fig:architecture}. The input event image is passed through 4 strided convolution layers, with output channels doubling each time. The resulting activations are passed through 2 residual blocks, and then four upsample convolution layers, where the activations are upsampled using nearest neighbor resampling and then convolved, to obtain a final flow estimate. At each upsample convolution layer, there is also a skip connection from the corresponding strided convolution layer, as well as another convolution layer to produce an intermediate, lower resolution, flow estimate, which is concatenated with the activations from the upsample convolution. The loss in \eqref{eq:total_loss} is then applied to each intermediate flow by downsampling the grayscale images. The tanh function is used as the activation function for all of the flow predictions. \subsection{Event Representation} \label{sec:representation} An event-based camera tracks changes in the log intensity of an image, and returns an event whenever the log intensity changes by more than a set threshold $\theta$: \begin{align} \left|\log(I_{t+1})-\log(I_t)\right| &\geq \theta \intertext{Each event contains the pixel location of the change, the timestamp of the event and the polarity:} e=&\begin{Bmatrix}\mathbf{x},&t,&p\end{Bmatrix} \end{align} Because of the asynchronous nature of the events, it is not immediately clear what representation of the events should be used in the standard convolutional neural network architecture. Most modern network architectures expect image-like inputs, with a fixed, relatively low, number of channels (recurrent networks excluded) and spatial correlations between neighboring pixels.
Therefore, a good representation is key to fully take advantage of existing networks while summarizing the necessary information from the event stream. Perhaps the most complete representation that preserves all of the information in each event would be to represent the events as an $n\times 4$ matrix, where each column contains the information of a single event. However, this does not directly encode the spatial relationships between events that are typically exploited by convolutions over images. In this work, we chose to instead use a representation of the events in image form. The input to the network is a 4-channel image with the same resolution as the camera. The first two channels encode the number of positive and negative events that have occurred at each pixel, respectively. This counting of events is a common method for visualizing the event stream, and has been shown in \citet{nguyen2017real} to be informative in a learning-based framework to regress 6-DOF pose. However, the number of events alone discards valuable information in the timestamps that encode information about the motion in the image. Incorporating timestamps in image form is a challenging task. One possible solution would be to have $k$ channels, where $k$ is the most events in any pixel in the image, and stack all incoming timestamps. However, this would result in a large increase in the dimensionality of the input. Instead, we encode the pixels in the last two channels as the timestamp of the most recent positive and negative event at that pixel, respectively. This is similar to the ``Event-based Time Surfaces'' used in \citet{lagorce2017hots} and the ``timestamp images'' used in \citet{park2016performance}. An example of this kind of image can be found in Fig.~\ref{fig:timestamp_image}, where we can see that the flow is evident by following the gradient in the image, particularly for closer (faster moving) objects.
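A minimal sketch of this four-channel input image (the function name and event-tuple layout are ours; the timestamp channels are normalized by the window length, as described in this section):

```python
import numpy as np

def event_image(events, height, width, t0, t1):
    """Build the 4-channel event image: per-pixel positive/negative event
    counts, plus the most recent positive/negative timestamp, normalized
    by the window length so the last two channels lie in [0, 1].
    `events` is a time-ordered list of (x, y, t, p) with p in {+1, -1}."""
    img = np.zeros((4, height, width))
    for x, y, t, p in events:
        c = 0 if p > 0 else 1
        img[c, y, x] += 1                        # count channels
        img[2 + c, y, x] = (t - t0) / (t1 - t0)  # latest-timestamp channels
    return img

# Toy stream: two positive events at one pixel, one negative at another.
events = [(3, 5, 0.00, +1), (3, 5, 0.08, +1), (7, 2, 0.05, -1)]
img = event_image(events, height=10, width=10, t0=0.0, t1=0.1)
```

Since later events overwrite earlier timestamps at the same pixel, only the most recent timestamp per polarity survives, matching the description above.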
While this representation inherently discards all of the timestamps but the most recent at each pixel, we have observed that this representation is sufficient for the network to estimate the correct flow in most regions. One deficiency of this representation is that areas with very dense events and large motion will have all pixels overridden by very recent events with very similar timestamps. However, this problem can be avoided by choosing smaller time windows, thereby reducing the magnitude of the motion. In addition, we normalize the timestamp images by the size of the time window for the image, so that the maximum value in the last two channels is 1. This has the effect of both scaling the timestamps to be on the same order of magnitude as the event counts, and ensuring that fast motions with a small time window and slow motions with a large time window that generate similar displacements have similar inputs to the network. \subsection{Self-Supervised Loss} \label{sec:loss} Due to the fact that there is a relatively small amount of labeled data for event based cameras as compared to traditional cameras, it is difficult to generate a sufficient dataset for a supervised learning method. Instead, we utilize the fact that the DAVIS camera generates synchronized events and grayscale images to perform self-supervised learning using the grayscale images in the loss. At training time, the network is provided with the event timestamp images, as well as a pair of grayscale images, occurring immediately before and after the event time window. Only the event timestamp images are passed into the network, which predicts a per pixel flow. The grayscale images are then used to apply a loss over the predicted flow in a self-supervised manner. The overall loss function used follows traditional variational methods for estimating optical flow, and consists of a photometric and a smoothness loss. 
To compute the photometric loss, the flow is used to warp the second image to the first image using bilinear sampling, as described in \citet{jason2016back}. The photometric loss, then, aims to minimize the difference in intensity between the warped second image and the first image: \begin{align} \ell_{\text{photometric}}(u,v;I_t, I_{t+1})&=\notag\\ \sum_{x, y} \rho(I_t(x, y)& - I_{t+1}(x+u(x,y), y+v(x,y)))\label{eq:photometric} \intertext{where $\rho$ is the Charbonnier loss function, a common loss in the optical flow literature used for outlier rejection (\citet{sun2014quantitative}):} \rho(x)=&(x^2+\epsilon^2)^{\alpha}\label{eq:charb} \end{align} As we are using frame based images for supervision, this method is susceptible to image-based issues such as the aperture problem. Thus, we follow the other works in the frame based domain, and apply a regularizer in the form of a smoothness loss. The smoothness loss aims to regularize the output flow by minimizing the difference in flow between neighboring pixels horizontally, vertically and diagonally. \begin{align} &\ell_{\text{smoothness}}(u, v)=\notag\\ &\sum_{x,y}\sum_{i,j\in \mathcal{N}(x,y)}\rho(u(x,y)-u(i, j)) + \rho(v(x,y)-v(i, j)) \end{align} where $\mathcal{N}$ is the set of neighbors around $(x, y)$. The total loss is the weighted sum of the photometric and smoothness losses: \begin{align} L_{\text{total}}=&\ell_{\text{photometric}}+\lambda \ell_{\text{smoothness}}\label{eq:total_loss} \end{align} \subsection{Ablation Studies} In addition to the described architecture (denoted EV-FlowNet$_{\text{2R}}$), we also train three other networks to test the effects of varying the input to the network, as well as increasing the capacity of the network. 
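The photometric and smoothness terms can be sketched in plain numpy as follows (a simplified illustration: single scale, horizontal and vertical neighbors only, and a basic bilinear warp; all names are ours):

```python
import numpy as np

def charbonnier(x, alpha=0.45, eps=1e-3):
    """Charbonnier penalty rho(x) = (x^2 + eps^2)^alpha."""
    return (x ** 2 + eps ** 2) ** alpha

def warp_bilinear(I, u, v):
    """Warp image I by flow (u, v) with bilinear sampling (clamped at borders)."""
    h, w = I.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x = np.clip(xs + u, 0, w - 1.001)
    y = np.clip(ys + v, 0, h - 1.001)
    x0, y0 = x.astype(int), y.astype(int)
    dx, dy = x - x0, y - y0
    return (I[y0, x0] * (1 - dx) * (1 - dy) + I[y0, x0 + 1] * dx * (1 - dy)
            + I[y0 + 1, x0] * (1 - dx) * dy + I[y0 + 1, x0 + 1] * dx * dy)

def total_loss(u, v, I_t, I_t1, lam=0.5):
    """Photometric loss on the warped next frame plus smoothness on the flow."""
    photo = charbonnier(I_t - warp_bilinear(I_t1, u, v)).sum()
    smooth = sum(charbonnier(np.diff(f, axis=a)).sum()
                 for f in (u, v) for a in (0, 1))  # diagonals omitted here
    return photo + lam * smooth
```

For a horizontal intensity ramp shifted by one pixel, the correct unit flow yields a much lower loss than zero flow, which is the gradient signal the network trains on.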
To test the contribution of each of the channels in the input, we train two additional networks, one with only the event counts (first two channels) as input (denoted EV-FlowNet$_{\text{C}}$), and one with only the event timestamps (last two channels) as input (denoted EV-FlowNet$_{\text{T}}$). In addition, we tested different network capacities by training a larger model with 4 residual blocks (denoted EV-FlowNet$_{\text{4R}}$). A single forward pass takes, on average, 40 ms for the smaller network, and 48 ms for the larger network, when run on an NVIDIA GeForce GTX 1050, a laptop-grade GPU. \subsection{Comparisons} To compare our results with other existing methods, we tested implementations of Event-based Visual Flow by \citet{benosman2014event}, an optimization-based method that works on events, and UnFlow by \citet{meister2017unflow}, a self-supervised method that works on traditional frames. As there is no open-source code by the authors of Event-based Visual Flow, we designed an implementation around the method described in \citet{rueckauer2016evaluation}. In particular, we implemented the robust Local Plane Fit algorithm, with a spatial window of $5\times 5$ pixels, vanishing gradient threshold th3 of 1e-3, and outlier distance threshold of 1e-2. However, we were unable to achieve any reasonable results on the datasets, with only very few points returning valid flow values ($<5\%$), and none of the valid flow values being visually correct. For validation, we also tested the open-source MATLAB code provided by the authors of \citet{mueggler2017event}, where we received similar results. As a result, we believe that the method was unable to generalize to the natural scenes in the test set, and so did not include the results in this paper. For UnFlow, we used the unsupervised model trained on KITTI raw, and fine-tuned on outdoor$\_$day2. This model was able to produce reasonable results on the testing sets, and we include the results in the quantitative evaluation in Tab.
\ref{tab:results}. \subsection{Results} \begin{figure}[t!] \centering \includegraphics[width=0.32\linewidth]{figs/indoor_flying1_events01813.png} \includegraphics[width=0.32\linewidth]{figs/indoor_flying1_pred01813.png} \includegraphics[width=0.32\linewidth]{figs/indoor_flying1_gt01813.png} \centering \caption{Common failure case, where fast motion causes recent timestamps to overwrite older pixels nearby, resulting in incorrect predictions. Best viewed in color.} \label{fig:failurecase} \end{figure} \subsubsection{Qualitative Results} In addition to the quantitative analysis provided, we provide qualitative results in Fig. \ref{fig:flowfigs}. In these results, and throughout the test set, the predicted flow always closely follows the ground truth. As the event input is quite sparse, our network tends to predict zero flow in areas without events. This is consistent with the photometric loss, as areas without events are typically low texture areas, where there is little change in intensity within each pixel neighborhood. In practice, the useful flow can be extracted by only using flow predictions at points with events. On the other hand, while UnFlow typically performs reasonably on the high texture regions, the results on low texture regions are very noisy. \subsubsection{Ablation Study Results} From the results of the ablation studies in Tab. \ref{tab:results}, EV-FlowNet$_{\text{C}}$ (counts only) performed the worst. This aligns with our intuition, as the only information attainable from the counts is from motion blur effects, which is a weak signal on its own. EV-FlowNet$_{\text{T}}$ (timestamps only) performs better for most tests, as the timestamps carry information about the ordering between neighboring events, as well as the magnitude of the velocity. However, the timestamp only network fails when there is significant noise in the image, or when fast motion results in more recent timestamps covering all of the older ones. 
This is illustrated in Fig.~\ref{fig:failurecase}, where even the full network struggles to predict the flow in a region dominated by recent timestamps. Overall, the combined models clearly perform better, likely as the event counts carry information about the importance of each pixel. Pixels with few events are likely to be just noise, while pixels with many events are more likely to carry useful information. Somewhat surprisingly, the larger network, EV-FlowNet$_{\text{4R}}$, actually performs worse than the smaller one, EV-FlowNet$_{\text{2R}}$. A possible explanation is that the larger-capacity network learned to overfit the training sets, and so did not generalize as well to the test sets, which were significantly different. For extra validation, both EV-FlowNet$_{\text{2R}}$ and EV-FlowNet$_{\text{4R}}$ were trained for an additional 200,000 iterations, with no appreciable improvements. It is likely, however, that, given more data, the larger model would perform better. \subsubsection{Comparison Results} From our experiments, we found that the UnFlow network tends to predict roughly correct flows for most inputs, but tends to be very noisy in low-texture areas of the image. The sparse nature of the events is a benefit in these regions, as the lack of events there would cause the network to predict no flow, instead of an incorrect output. In general, EV-FlowNet performed better on the $dt=4$ tests, while performing worse on the $dt=1$ tests (with the exception of outdoor$\_$driving1 and indoor$\_$flying3). We observed that UnFlow typically performed better in situations with very small or very large motion. In these situations, there are either few events as input, or so many events that the image is overridden by recent timestamps. However, this is a problem intrinsic to the testing process, as the time window is defined by the image frame rate.
In practice, these problems can be avoided by choosing time windows large enough so that sufficient information is available while avoiding saturating the event image. One possible solution to this would be to have a fixed number of events in the window each time. \subsection{Test Sequences} For comparison against UnFlow, we evaluated 800 frames from the outdoor$\_$day1 sequence as well as sequences 1 to 3 from indoor$\_$flying. For the event input, we used all of the events that occurred in between the two input frames. The outdoor$\_$day1 sequence spans between 222.4s and 240.4s. This section was chosen as the grayscale images were consistently bright, and there is minimal shaking of the camera (the provided poses are smoothed and do not capture shaking of the camera if the vehicle hits a bump in the road). In order to avoid conflicts between training and testing data, a model trained only using data from outdoor$\_$day2 was used, which is visually significantly different from outdoor$\_$day1. The three indoor$\_$flying sequences total roughly 240s, and feature a significantly different indoor scene, containing vertical and backward motions, which were previously unseen in the driving scenes. A model trained on both outdoor$\_$day1 and outdoor$\_$day2 data was used for evaluation on these sequences. We avoided fine tuning on the flying sequences, as the sequences are in one room, and all relatively similar in visual appearance. As a result, it would be very easy for a network to overfit the environment. Sequence 4 was omitted as the majority of the view was just the floor, and so had a relatively small amount of useful data for evaluation. 
\subsection{Metrics} For each method and sequence, we compute the average endpoint error (AEE), defined as the mean distance between the endpoints of the predicted and ground truth flow vectors: \begin{align} AEE=&\frac{1}{N}\sum_{x,y}\left\|\begin{pmatrix}u(x,y)_{\text{pred}} \\ v(x,y)_{\text{pred}}\end{pmatrix} - \begin{pmatrix}u(x,y)_{\text{gt}} \\ v(x,y)_{\text{gt}}\end{pmatrix}\right\|_2, \end{align} where $N$ is the number of pixels over which the error is evaluated. In addition, we follow the KITTI flow 2015 benchmark and report the percentage of points with EE greater than both 3 pixels and 5\% of the magnitude of the ground truth flow vector. Similarly to KITTI, 3 pixels is roughly the maximum error observed when warping the grayscale images according to the ground truth flow and comparing against the next image. However, as the input event image is relatively sparse, the network only returns accurate flow on points with events. As a result, we limit the computation of AEE to pixels in which at least one event was observed. For consistency, this is done with a mask applied to the EE for both event-based and frame-based methods. We also mask out any points for which we have no ground truth flow (i.e., regions with no ground truth depth). In practice, this results in the error being computed over 20-30\% of the pixels in each image. In order to vary the magnitude of flow observed for each test, we run two evaluations per sequence: one with input frames and corresponding events that are one frame apart, and one with frames and events four frames apart. We outline the results in Tab. \ref{tab:results}. \subsection{Training Details} Two networks were trained on the two outdoor$\_$day sequences from MVSEC. outdoor$\_$day1 contains roughly 12000 images, and outdoor$\_$day2 contains roughly 26000 images. The images are captured while driving in an industrial complex and on public roads, respectively; the two scenes are visually very different.
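The masked evaluation protocol described in the Metrics subsection can be sketched as follows. This is our own illustration, not the paper's released code: function names are ours, and we assume dense $(H, W, 2)$ flow arrays together with boolean event and ground-truth validity masks.

```python
import numpy as np

def masked_aee(flow_pred, flow_gt, event_mask, gt_mask,
               outlier_px=3.0, outlier_frac=0.05):
    """Average endpoint error over pixels with >= 1 event and valid ground truth.

    flow_pred, flow_gt: (H, W, 2) arrays of (u, v) flow vectors.
    event_mask: (H, W) bool, True where at least one event was observed.
    gt_mask:    (H, W) bool, True where ground-truth flow exists.
    Returns (AEE, percentage of masked points counted as KITTI-style outliers).
    """
    mask = event_mask & gt_mask
    ee = np.linalg.norm(flow_pred - flow_gt, axis=-1)[mask]  # endpoint error per masked pixel
    gt_mag = np.linalg.norm(flow_gt, axis=-1)[mask]
    aee = ee.mean()
    # KITTI-2015-style outlier: EE > 3 px AND EE > 5% of the GT flow magnitude
    outliers = (ee > outlier_px) & (ee > outlier_frac * gt_mag)
    return aee, 100.0 * outliers.mean()
```

The same mask is applied to both the event-based and frame-based predictions, so the two methods are scored on an identical pixel set.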
The motions are mostly straight driving and turns, with occasional independently moving objects such as other cars and pedestrians. The input images are cropped to $256\times256$, the number of output channels at the first encoder layer is 64, and the number of output channels in each residual block is 512. To increase the variation in the magnitude of the optical flow seen at training, we randomly select images up to $k$ images apart in time, and all of the events that occurred between those images. In our experiments, $k\in \{2, 4, 6, 8, 10, 12\}$. In addition, we randomly flip the inputs horizontally, and randomly crop them to achieve the desired resolution. The weight on the smoothness loss \eqref{eq:total_loss}, $\lambda$, is set to 0.5. Each of the intermediate losses is weighted equally in the final loss. For the Charbonnier loss \eqref{eq:charb}, $\alpha$ was set to 0.45 and $\epsilon$ was set to 1e-3. The Adam optimizer is used, with the learning rate initialized at 1e-5 and exponentially decayed by a factor of 0.8 every 4 epochs. The model is trained for 300,000 iterations, and takes around 12 hours to train on a 16GB NVIDIA Tesla V100.
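The loss \eqref{eq:charb} itself is not reproduced in this excerpt; assuming the standard generalized Charbonnier form $\rho(x)=(x^2+\epsilon^2)^\alpha$, a minimal sketch with the stated hyperparameters $\alpha=0.45$ and $\epsilon=10^{-3}$ is:

```python
import numpy as np

def charbonnier(x, alpha=0.45, eps=1e-3):
    """Generalized Charbonnier penalty: rho(x) = (x^2 + eps^2)^alpha.

    With alpha = 0.45 this behaves like a slightly sub-L1 robust loss;
    eps keeps the penalty (and its gradient) well behaved at x = 0.
    Works elementwise on numpy arrays as well as scalars.
    """
    return (x * x + eps * eps) ** alpha
```

With $\alpha=0.5$ and $\epsilon=0$ this reduces to the plain absolute value, which is why it is often described as a differentiable L1 surrogate.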
\section{Introduction} For $X$ a smooth projective variety over $\mathbb{C}$, let $A^j(X)$ denote the Chow group of codimension $j$ algebraic cycles on $X$ modulo rational equivalence. The intersection product makes $A^\ast(X)=\oplus_j A^j(X)$ into a graded ring, the {\em Chow ring\/} of $X$. In this note, we will be interested in the Chow ring of smooth projective surfaces $S$. What can be said about the image of the intersection product map \[ i_S\colon\ \ A^1(S)\otimes A^1(S)\ \to\ A^2(S)\ \ \ ?\] For $K3$ surfaces, the image of $i_S$ is as small as possible: it is a free abelian group of rank $1$ \cite{BV}. At the other extreme, for abelian surfaces the map $i_S$ is surjective (the same is true for the Fano surface of lines on a cubic threefold \cite{B}, and another example where this holds is given in remark \ref{compare} below). For surfaces $S\subset\mathbb{P}^3$, the rank of the image of $i_S$ can grow arbitrarily large \cite{OG}. There is a relation with the cohomology ring: if $i_S$ is surjective, then also the cup product map in coherent cohomology \[ H^1(S,\mathcal O_S)\otimes H^1(S,\mathcal O_S)\ \to\ H^2(S,\mathcal O_S)\] is surjective \cite{ESV}. The conjectural converse statement is studied in \cite{moimult}. To complete the picture, we propose the following conjecture: \begin{conjecture}\label{conj} Let $S$ be a smooth projective surface, such that the cup product map \[ H^1(S,\mathcal O_S)\otimes H^1(S,\mathcal O_S)\ \to\ H^2(S,\mathcal O_S)\] is zero. Then the intersection product map \[ j_S\colon\ \ \ A^1_{hom}(S)\otimes A^1_{hom}(S)\ \to\ A^2_{AJ}(S) \] is also zero. \end{conjecture} Here, $A^1_{hom}$ denotes homologically trivial cycles, and $A^2_{AJ}$ denotes the Albanese kernel. The point of conjecture \ref{conj} is that $A^2_{AJ}(S)$ is expected to be related to $H^2(S,\mathcal O_S)$ \cite{B}. A particular case of conjecture \ref{conj} is that the map $j_S$ should be zero for any surface with irregularity $q(S):=h^{1,0}(S)=1$.
We can prove conjecture \ref{conj} for so--called {\em Sicilian surfaces\/}. These surfaces (defined in \cite{BCF}, cf. also definition \ref{sic} below) form a $4$--dimensional family of general type surfaces with $p_g(S):=h^{2,0}(S)=q(S)=1$ and $K_S^2=6$. \begin{nonumbering}[=theorem \ref{main}] Let $S$ be a Sicilian surface as in \cite{BCF}. Then the intersection product map \[ j_S\colon\ \ \ A^1_{hom}(S)\otimes A^1_{hom}(S)\ \to\ A^2_{AJ}(S) \] is zero. \end{nonumbering} This implies that the image of $i_S$ is ``not so large'' for Sicilian surfaces: it is supported on a divisor (corollary \ref{cor}). The proof of theorem \ref{main} is an easy application of O'Sullivan's theory of {\em symmetrically distinguished cycles\/} on abelian varieties \cite{OS} (cf. also subsection \ref{ssd} below). \vskip0.6cm \begin{convention} In this note, the word {\sl variety\/} will refer to a reduced irreducible scheme of finite type over $\mathbb{C}$, and a {\sl surface\/} will mean a $2$--dimensional variety. For any variety $X$, we will denote by $A_j(X)$ the Chow group of dimension $j$ cycles on $X$. For a smooth $n$--dimensional variety $X$, we will write $A^j(X)$ for $A_{n-j}(X)$. For a smooth proper variety, $A^j_{hom}(X)$ and $A^j_{AJ}(X)$ will be used to indicate the subgroups of homologically trivial, resp. Abel--Jacobi trivial cycles. For a morphism between smooth varieties $f\colon X\to Y$, we will write $\Gamma_f\in A^\ast(X\times Y)$ for the graph of $f$, and ${}^t \Gamma_f\in A^\ast(Y\times X)$ for the transpose. The contravariant category of Chow motives (i.e., pure motives with respect to rational equivalence as in \cite{Sc}, \cite{MNP}) will be denoted $\mathcal M_{\rm rat}$. We will write $H^j(X)$ to indicate singular cohomology $H^j(X,\mathbb{Q})$. 
\end{convention} \section{Preliminaries} \subsection{Chow cohomology} For a singular variety $X$, we follow the convention of \cite{F} and write $A_\ast(X)$ for Chow groups and $A^\ast(X)$ for the {\em operational Chow cohomology} of \cite[Chapter 17]{F}. As proven in loc. cit., $A^\ast()$ is a contravariant functor from varieties to commutative rings, and for $X$ smooth the ring structure coincides with the usual intersection product. For $n$--dimensional quotient varieties $X=Y/G$ with $Y$ smooth and $G$ finite, the natural map induces isomorphisms \[ A^i(X)_{\mathbb{Q}}\ \xrightarrow{\cong}\ A_{n-i}(X)_{\mathbb{Q}}\ \ \ \forall i\ \] \cite[Example 17.4.10]{F}. The same is true for surfaces whose singularities are rational \cite[Theorem 4.1]{Vi}, \cite{Kim0}. \subsection{Finite--dimensional motives} We refer to \cite{Kim}, \cite{An}, \cite{J4}, \cite{MNP} for the definition of finite--dimensional motive. An essential property of varieties with finite--dimensional motive is embodied by the nilpotence theorem: \begin{theorem}[Kimura \cite{Kim}]\label{nilp} Let $X$ be a smooth projective variety of dimension $n$ with finite--dimensional motive. Let $\Gamma\in A^n(X\times X)_{\mathbb{Q}}$ be a correspondence which is numerically trivial. Then there is $N\in\mathbb{N}$ such that \[ \Gamma^{\circ N}=0\ \ \ \ \in A^n(X\times X)_{\mathbb{Q}}\ .\] \end{theorem} Actually, the nilpotence property (for all powers of $X$) could serve as an alternative definition of finite--dimensional motive, as shown by a result of Jannsen \cite[Corollary 3.9]{J4}. Conjecturally, any variety has finite--dimensional motive \cite{Kim}. We are still far from knowing this, but at least there are quite a few non--trivial examples. \subsection{The transcendental motive} \begin{theorem}[Kahn--Murre--Pedrini \cite{KMP}]\label{t2} Let $S$ be a surface. 
There exists a decomposition \[ \mathfrak h^2(S)= \mathfrak t^2(S)\oplus \mathfrak h^2_{alg}(S)\ \ \ \hbox{in}\ \mathcal M_{\rm rat}\ ,\] such that \[ H^\ast(\mathfrak t^2(S),\mathbb{Q})= H^2_{tr}(S)\ ,\ \ H^\ast(\mathfrak h^2_{alg}(S),\mathbb{Q})=NS(S)_{\mathbb{Q}}\ \] (here $H^2_{tr}(S)$ is defined as the orthogonal complement of the N\'eron--Severi group $NS(S)_{\mathbb{Q}}$ in $H^2(S,\mathbb{Q})$), and \[ A^\ast(\mathfrak t^2(S))_{\mathbb{Q}}=A^2_{AJ}(S)_{\mathbb{Q}}\ .\] (The motive $\mathfrak t^2(S)$ is called the {\em transcendental part of the motive\/}.) \end{theorem} \begin{remark} It would be more precise to write $H^\ast(\mathfrak h^2_{alg}(S),\mathbb{Q})=NS(S)_{\mathbb{Q}}(-1)$, taking into account the Tate twist. In this note, we will omit Tate twists from the notation. \end{remark} \subsection{Refined Chow--K\"unneth decomposition} \begin{theorem}[Vial \cite{V4}]\label{pi_2} Let $X$ be a smooth projective variety of dimension $n\le 5$. Assume that $X$ has finite--dimensional motive, and that the Lefschetz standard conjecture $B(X)$ holds (in particular, the K\"unneth components $\pi_i\in H^{2n}(X\times X)$ are algebraic). Then there is a splitting into mutually orthogonal idempotents \[ \pi_i=\sum_j \pi_{i,j}\ \ \ \in A^{n}(X\times X)_{\mathbb{Q}}\ ,\] such that \[ (\pi_{i,j})_\ast H^\ast(X) =gr^j_{\widetilde{N}} H^i(X)\ .\] (Here, $ gr^j_{\widetilde{N}} H^i(X)$ denotes the graded quotient for the {\em niveau filtration\/} $\widetilde{N}^\ast$ defined in \cite{V4}.) The motive $\mathfrak h^{i,0}(X)=(X,\pi_{i,0},0)\in \mathcal M_{\rm rat}$ is well--defined up to isomorphism. \end{theorem} \begin{proof} This is \cite[Theorems 1 and 2]{V4}. The last statement follows from \cite[Proposition 1.8]{V4} combined with \cite[Theorem 7.7.3]{KMP}. 
\end{proof} \begin{remark} In dimension $n\le 3$ (which will be the case when we apply theorem \ref{pi_2} in this note), the niveau filtration $\widetilde{N}^\ast$ coincides with the coniveau filtration $N^\ast$ of \cite{BO}. \end{remark} \begin{remark} In dimension $n=2$, the motive $\mathfrak h^{2,0}(X)$ is isomorphic to the motive $\mathfrak t^2(X)$ of theorem \ref{t2}. \end{remark} \subsection{Symmetrically distinguished cycles} \label{ssd} \begin{definition}[O'Sullivan \cite{OS}] Let $A$ be an abelian variety. Let $a\in A^\ast(A)$ be a cycle. For $m\ge 0$, let \[ V_m(a)\ \subset\ A^\ast(A^m)_{\mathbb{Q}} \] denote the $\mathbb{Q}$--vector space generated by elements \[ p_\ast \Bigl( (p_1)^\ast(a^{r_1})\cdot (p_2)^\ast(a^{r_2})\cdot\ldots\cdot (p_n)^\ast(a^{r_n})\Bigr)\ \ \ \in A^\ast(A^m)_{\mathbb{Q}} \ . \] Here $n\le m$, and $r_j\in\mathbb{N}$, and $p_i\colon A^n\to A$ denotes projection on the $i$--th factor, and $p\colon A^n\to A^m$ is a closed immersion with each component $A^n\to A$ being either a projection or the composite of a projection with $[-1]\colon A\to A$. The cycle $a\in A^\ast(A)_{\mathbb{Q}}$ is said to be {\em symmetrically distinguished\/} if for every $m\in\mathbb{N}$ the composition \[ V_m(a)\ \subset\ A^\ast(A^m)_{\mathbb{Q}}\ \to\ A^\ast(A^m)_{\mathbb{Q}}/A^\ast_{hom}(A^m)_{\mathbb{Q}} \] is injective. \end{definition} \begin{theorem}[O'Sullivan \cite{OS}]\label{os} The symmetrically distinguished cycles form a $\mathbb{Q}$--subalgebra $A^\ast_{sym}(A)_{\mathbb{Q}}\subset A^\ast(A)_{\mathbb{Q}}$, and the composition \[ A^\ast_{sym}(A)_{\mathbb{Q}}\ \subset\ A^\ast(A)_{\mathbb{Q}}\ \to\ A^\ast(A)_{\mathbb{Q}}/A^\ast_{hom}(A)_{\mathbb{Q}} \] is an isomorphism. Symmetrically distinguished cycles are stable under pushforward and pullback of homomorphisms of abelian varieties. 
\end{theorem} \begin{remark} For discussion and applications of the theory of symmetrically distinguished cycles, in addition to \cite{OS} we refer to \cite[Section 7]{SV}, \cite{V6}, \cite{Anc}, \cite{LFu2}, \cite{FV}. \end{remark} \begin{proposition}\label{sym} Let $A$ be an abelian variety of dimension $g$. \noindent (\rom1) There exists a Chow--K\"unneth decomposition $\{ \Pi^i_A\}$ that is self--dual and consists of symmetrically distinguished cycles. One has equality \[ (\Pi_A^{2i-j})_\ast A^i(A)_{\mathbb{Q}}= A^i_{(j)}(A)_{}\ \ \ \forall i,j\ ,\] where $A^\ast_{(\ast)}(A)_{}$ denotes Beauville's decomposition \cite{Beau} on Chow groups with rational coefficients. \noindent(\rom2) Assume $g\le 5$, and let $\{ \Pi^i_A\}$ be as in (\rom1). There exists a further splitting in orthogonal projectors \[ \Pi^2_A= \Pi^{2,0}_A +\Pi^{2,1}_A\ \ \ \hbox{in}\ A^g(A\times A)_{\mathbb{Q}}\ ,\] where the $\Pi^{2,i}_A$ are symmetrically distinguished and $\Pi_A^{2,i}=\pi_A^{2,i}$ in $H^{2g}(A\times A)$. Moreover, one has \[ (\Pi_A^{2,0})_\ast A^2(A)_{\mathbb{Q}} = (\Pi_A^{2})_\ast A^2(A)_{\mathbb{Q}} = A^2_{(2)}(A)_{} \ .\] \end{proposition} \begin{proof} \noindent (\rom1) An explicit formula for $\{ \Pi^i_A\}$ is given in \cite[Section 7 Formula (45)]{SV}. \noindent (\rom2) The point is that $\Pi^{2,1}_A$ is (by construction) a cycle of type \[ \sum_j C_j\times D_j\ \ \ \hbox{in}\ A^g(A\times A)_{\mathbb{Q}}\ ,\] where $D_j\subset A$ is a symmetric divisor and $C_j\subset A$ is a curve obtained by intersecting a symmetric divisor with hyperplanes. This implies $\Pi_A^{2,1}$ is symmetrically distinguished. By assumption, $\Pi^2_A$ is symmetrically distinguished and hence so is $\Pi^{2,0}_A$. For the ``moreover'' part, one notes that the projector $\Pi_A^{2,1}$ acts trivially on $A^2_{(2)}(A)_{} \subset A^2_{AJ}(A)_{\mathbb{Q}}$, for reasons of dimension.
\end{proof} \subsection{Sicilian surfaces} \begin{definition}\label{sic} A {\em Sicilian surface\/} is a minimal surface $S$ of general type satisfying: \noindent{(1)} $p_g(S)=q(S)=1$ and $K_S^2=6$; \noindent{(2)} There exists an unramified double cover $\hat{S}\to S$ with $q(\hat{S})=3$, and such that the Albanese morphism $\hat{\alpha}\colon \hat{S}\to A:=\hbox{Alb}(\hat{S})$ is birational to its image $Z$, a divisor in $A$ with $Z^3=12$. \end{definition} \begin{remark} Sicilian surfaces have an irreducible $4$--dimensional moduli space \cite[Theorem 6.1]{BCF}. Sicilian surfaces can be characterized topologically; they form a connected component of the moduli space of surfaces of general type \cite[Corollary 6.5]{BCF}. Surfaces in the families $\mathcal S_{11}$ and $\mathcal S_{12}$ constructed in \cite{BCF} are Sicilian surfaces. \end{remark} We mention in passing the following result, which will {\em not\/} be used in the proof of the main result (theorem \ref{main}). \begin{theorem}[Peters \cite{Pet}]\label{chris} Let $S$ be a Sicilian surface. Then $S$ has finite--dimensional motive. More precisely, let $A$ be the abelian threefold as in definition \ref{sic}. Then the natural map \[ \mathfrak h^{2,0}(A)\ \to\ \mathfrak t^2(S) \ \ \ \hbox{in}\ \mathcal M_{\rm rat}\] admits a right--inverse, and the natural map \[ \mathfrak t^2(S)\ \to\ \mathfrak h^4(A) \ \ \ \hbox{in}\ \mathcal M_{\rm rat}\] admits a left--inverse. \end{theorem} \section{Main result} \begin{theorem}\label{main} Let $S$ be a Sicilian surface. The map induced by intersection product \[ j_S\colon\ \ \ A^1_{hom}(S)\otimes A^1_{hom}(S)\ \to\ A^2(S) \] is the zero map. \end{theorem} \begin{proof} As the image of $j_S$ is contained in $A^2_{AJ}(S)$, which is torsion free \cite{Ro}, it will suffice to prove that $j_S\otimes\mathbb{Q}$ is the zero map. The next reduction step is to pass to the canonical model $S_{can}$. Let $f\colon S\to S_{can}$ be the canonical morphism.
There is a commutative diagram \[ \begin{array}[c]{ccc} A^1_{hom}(S)\otimes A^1_{hom}(S) & \xrightarrow{j_S} & A^2(S)\\ &&\\ \ \ \ \ \ \uparrow {\scriptstyle (f^\ast,f^\ast)}&&\ \ \ \uparrow {\scriptstyle f^\ast}\\ &&\\ A^1_{hom}(S_{can})\otimes A^1_{hom}(S_{can}) & \xrightarrow{j_{S_{can}}} & A^2(S_{can})\\ \end{array} \] where $A^2(S_{can})$ denotes operational Chow cohomology. The vertical arrows are isomorphisms (for the left vertical arrow, this is because $S_{can}$ has rational singularities, for the right vertical arrow this follows from the exact sequence of \cite[Theorem 2.3]{Kim0}). It thus suffices to prove that $j_{S_{can}}\otimes\mathbb{Q}$ is the zero map. As shown in \cite[Theorem 6.1]{BCF}, the surface $S_{can}$ admits an inclusion as an ample divisor \[ S_{can} \ \subset\ X=A/G\ ,\] where $X$ is a Bagnera--de Franchis threefold (in the sense of \cite[Section 5]{BCF}), and $A$ is the abelian threefold of definition \ref{sic} and $G\cong\mathbb{Z}_2$. Because $q(X)=q(S)=1$, the cup product map \[ H^{1}(X,\mathcal O_X)\otimes H^1(X,\mathcal O_X)\ \to\ H^2(X,\mathcal O_X) \] is the zero map. In view of the Hodge decomposition, this means that the composition \[ H^1(X,\mathbb{C})\otimes H^1(X,\mathbb{C})\ \to\ H^2(X,\mathbb{C})\ \to\ H^2(X,\mathcal O_X)\ , \] which is the same as \[ H^1(A,\mathbb{C})^G\otimes H^1(A,\mathbb{C})^G\ \to\ H^2(A,\mathbb{C})^G\ \to\ H^2(A,\mathcal O_A)^G\ , \] is the zero map. In terms of motives, this means that the composition \[ \mathfrak h^1(A)^G\otimes \mathfrak h^1(A)^G\ \xrightarrow{\Delta_A^{sm}}\ \mathfrak h^2(A)\ \xrightarrow{\pi_A^{2,0}}\ \mathfrak h^{2,0}(A) \ \ \ \hbox{in}\ \mathcal M_{\rm hom}\] is zero (where $\Delta_A^{sm}\in A^6(A\times A\times A)$ is the ``small diagonal'', and the motive $\mathfrak h^{2,0}(A)\subset \mathfrak h^2(A)$ is as in proposition \ref{sym}). 
In terms of correspondences, this means that the correspondence \[ \Gamma:= \Pi_A^{2,0}\circ \Delta_A^{sm} \circ \Bigl( \bigl(\Pi^1_A\circ (\Delta_A+\Gamma_g)\bigr)\times \bigl(\Pi^1_A\circ (\Delta_A+\Gamma_g)\bigr)\Bigr)\ \ \in\ A^{6}\bigl( (A\times A)\times A\bigr)_{\mathbb{Q}} \] is homologically trivial (i.e., it vanishes in $H^{12}(A\times A\times A,\mathbb{C})$ and hence also in $H^{12}(A\times A\times A,\mathbb{Q})$). Here, we have written $G=\{\hbox{id},g\}\cong\mathbb{Z}_2$ (i.e., $g$ is the non--trivial element of $G$), and $\Pi^1_A, \Pi_A^{2,0}$ are the projectors of proposition \ref{sym}. The involution $g\in\hbox{Aut}(A)$ is described explicitly in \cite[Theorem 6.1]{BCF}; it can be written as a group homomorphism $\sigma$ followed by a translation $t$ by a torsion element. In view of lemma \ref{same} below, the graphs $\Gamma_g$ and $\Gamma_\sigma$ are the same in the Chow group with rational coefficients. Therefore, we have equality \[ \Gamma= \Pi_A^{2,0}\circ \Delta_A^{sm} \circ \Bigl( \bigl(\Pi^1_A\circ (\Delta_A+\Gamma_\sigma)\bigr)\times \bigl(\Pi^1_A\circ (\Delta_A+\Gamma_\sigma)\bigr)\Bigr)\ \ \hbox{in}\ A^{6}\bigl( (A\times A)\times A\bigr){}_{\mathbb{Q}}\ . \] But the right--hand side (being a composition of symmetrically distinguished cycles) is symmetrically distinguished. Therefore, theorem \ref{os} implies that \[ \Gamma=0\ \ \ \hbox{in}\ A^{6}\bigl( (A\times A)\times A\bigr){}_{\mathbb{Q}}\ . \] In particular, the action on Chow groups \[ \Gamma_\ast\colon\ \ \ A^2(A\times A)_{\mathbb{Q}}\ \to\ A^2(A)_{\mathbb{Q}} \] is zero.
On the other hand, let $a,b\in A^1_{hom}(A)^G$ and consider the element \[ a\times b\ \ \in\ \hbox{Im}\Bigl(A^1_{hom}(A)^G\otimes A^1_{hom}(A)^G\ \to\ A^2(A\times A)\Bigr) \ .\] Then (by construction of $\Gamma$) we have equality \[ \Gamma_\ast(a\times b)= 4 \,(\Pi_A^{2,0})_\ast (\Delta_A^{sm})_\ast (a\times b) = 4\, (\Pi_A^{2,0})_\ast (a\cdot b)= 4 \,a\cdot b \ \ \ \hbox{in}\ A^2(A)_{\mathbb{Q}}\ .\] (Here, for the last equality we have used that $a\cdot b\in A^2_{(2)}(A)_{}$, as the Beauville decomposition of $A^\ast(A)$ is multiplicative.) The commutative diagram \[ \begin{array}[c]{ccc} A^1_{hom}(A)^G\otimes A^1_{hom}(A)^G &\xrightarrow{j_A} & A^2_{AJ}(A)^G\\ &&\\ \downarrow{\cong} &&\downarrow{\cong}\\ &&\\ A^1_{hom}(X)\otimes A^1_{hom}(X) &\xrightarrow{j_X} & A^2_{AJ}(X)\\ &&\\ \ \ \ \ \downarrow{\scriptstyle (\iota^\ast,\iota^\ast)} &&\ \ \ \downarrow{\scriptstyle \iota^\ast}\\ &&\\ A^1_{hom}(S_{can})\otimes A^1_{hom}(S_{can}) &\xrightarrow{j_{S_{can}}} & \ A^2_{AJ}(S_{can})\ ,\\ \end{array}\] plus the fact that $\iota^\ast\colon A^1_{hom}(X)\to A^1_{hom}(S_{can})$ is an isomorphism (weak Lefschetz), now ends the proof. In the above argument we have used the following, which is \cite[Lemma 2.1]{JY}: \begin{lemma}[\cite{JY}]\label{same} Let $A$ be an abelian variety of dimension $g$, and let $t\in\hbox{Aut}(A)$ be a translation by a torsion element. Then \[ \Gamma_t =\Delta_A\ \ \ \hbox{in}\ A^g(A\times A)_{\mathbb{Q}}\ .\] \end{lemma} \end{proof} \begin{corollary}\label{cor} Let $S$ be a Sicilian surface. The image of the intersection product map \[ i_S\colon\ \ A^1(S)\otimes A^1(S)\ \to\ A^2(S) \] is supported on a divisor. \end{corollary} \begin{proof} Let $D_1,\ldots,D_r$ be generators of the N\'eron--Severi group of $S$. Given arbitrary divisors $D, D^\prime\in A^1(S)$, let us write $D=\sum_{i=1}^r d_i D_i$, $D^\prime=\sum_{j=1}^r d^\prime_j D_j$ in $NS(S)$. 
This gives decompositions \[ D=\sum_{i=1}^r d_i D_i + D_0\ ,\ \ \ D^\prime= \sum_{j=1}^r d_j^\prime D_j + D_0^\prime\ \ \ \hbox{in}\ A^1(S)\ ,\] with $D_0, D_0^\prime\in A^1_{hom}(S)$. It follows from theorem \ref{main} that $D_0\cdot D_0^\prime=0$ in $A^2(S)$, and so \[ D\cdot D^\prime = \sum_{i=1}^r \sum_{j=1}^r d_i d^\prime_j D_i\cdot D_j + \sum_{i=1}^r d_i D_i\cdot D_0^\prime + \sum_{j=1}^r d^\prime_j D_j\cdot D_0 \ \ \ \hbox{in}\ A^2(S)\ .\] This implies that the image of $i_S$ is supported on the union $\cup_{j=1}^r D_j\subset S$. \end{proof} \begin{remark}\label{compare} Theorem \ref{main} applies in particular to the generalized Burniat type surfaces in the families $\mathcal S_{11}$ and $\mathcal S_{12}$ of \cite{BCF} (as shown in loc. cit., these are Sicilian surfaces). It is instructive to contrast this with the behaviour of generalized Burniat type surfaces in the family $\mathcal S_{16}$ (these have $p_g(S)=q(S)=3$). Indeed, any surface $S$ in the family $\mathcal S_{16}$ has a surjective cup product map \[ H^{1}(S,\mathcal O_S)\otimes H^{1}(S,\mathcal O_S)\ \to\ H^2(S,\mathcal O_S)\ .\] Moreover, $S$ has finite--dimensional motive \cite[Theorem 4.13]{BCF}. The main result of \cite{moimult} then implies that \[ i_S\colon\ \ A^1(S)\otimes A^1(S)\ \to\ A^2(S) \] is surjective (just as for abelian surfaces). \end{remark} \begin{remark} The argument of theorem \ref{main} applies in a more general setting: it suffices that $S$ be a surface with $q(S)=1$ obtained as the resolution of a nodal surface $S_{can}$, which can be embedded as ample divisor \[ S_{can}\ \ \subset\ X=A/G\ ,\] where $A$ is an abelian threefold, and $G$ a finite group acting by compositions of translations and group homomorphisms. It follows that theorem \ref{main} is also true for the generalized Burniat type surfaces in the families $\mathcal S_j$, $5\le j\le 12$ of \cite{BCF}.
\end{remark} \begin{remark} As Sicilian surfaces (and generalized Burniat type surfaces) are closely related to abelian varieties, it seems natural to ask whether they admit a {\em multiplicative Chow--K\"unneth decomposition\/}, in the sense of \cite[Section 8]{SV}. I hope to return to this question later. \end{remark} \vskip1cm \begin{nonumberingt} Thanks to Kai and Len, my dear colleagues at the Schiltigheim Math Research Institute. Thanks to Chris Peters for helpful discussions concerning \cite{Pet}, and to the referee for insightful comments that helped to improve this paper. \end{nonumberingt} \vskip1cm
\section{Introduction} \vspace{-2.3mm} In digital communications, data transmission typically entails source coding and channel coding. In source coding the data is mapped to a sequence of symbols where the sequence length is optimized. In channel coding redundant symbols are systematically added to this sequence so that errors introduced during data transfer can be detected or corrected at the receiver. One of the consequences of the source-channel coding theorem by Shannon \cite{sha98} is that source and channel codes can be designed separately, with no loss in optimality, for memoryless and ergodic channels when infinite block length codes are used. This is known as the separation theorem, and can be extended to a larger class of channels \cite{vem95sourChan}. Optimality of separation in Shannon's theorem assumes no constraint on the complexity of the source and channel code design. However, in practice, having very large block lengths may not be possible due to complexity and delay constraints. Therefore, many communication systems may benefit from designing the source and channel codes jointly. Some examples demonstrating this benefit include: wireless channels \cite{gol95soucChan}, video transmission over noisy channels \cite{zha05jointSCvideo}, and image transmission over noisy channels \cite{dav96jointSCimage,bur13jointSCimage}. In this work, we consider the design of joint source-channel coding for text data with constrained code lengths. In particular, our ultimate goal is to design a messaging service where sentences are transmitted over an erasure channel. The erasure channel is used here since it can model a broad class of channels where errors are detected but not corrected. One example is timing channels, where information is encoded in the time of release of packets \cite{ana96bitsQueues}.
Our proposed coding technique can be used in this channel to create a covert messaging service over packet-switched networks~\cite{dun09secureTiming,kiy13timing,muk16covertTiming,bis2017survey}. In our messaging service, instead of recovering the exact sentence at the receiver, we are interested in recovering the semantic information, such as facts or imperatives, of the sentence. Therefore, any sentence that conveys the information in the originally transmitted sentence would be considered an error-free output by the decoder, even if it differed from the exact sentence. For example, the phrases ``the car stopped'' and ``the automobile stopped'' convey the same information. One of the first works that considered joint source-channel coding using neural networks is \cite{ron03jointNN}, where simple neural network architectures were used as encoder and decoder for Gauss-Markov sources over an additive white Gaussian noise channel. There are also a number of works that use neural networks for compression without a noisy channel (i.e., only source coding). In particular, in \cite{tod16imgComp,tod16fullImgComp} image compression algorithms are developed using RNNs, which outperform other image compression techniques. Sentence and document encoding is proposed in \cite{li15hierarchical} using neural autoencoders. {\bf Contributions:} Inspired by the recent success of deep learning in natural language processing for tasks such as machine translation~\cite{wu16GoogleTranslate}, we develop a neural network architecture for joint source-channel coding of text. Our model uses a recurrent neural network (RNN) encoder, a binarization layer, the channel layer, and a decoder based on RNNs. We demonstrate that using this architecture, it is possible to train a joint source-channel encoder and decoder, where the decoder may output a different sentence that preserves the semantic information content of the input sentence.
We compare the performance of our deep learning encoder and decoder with a separate source and channel coding design\footnote{To the best of our knowledge there are no known joint source-channel coding schemes for text data over erasure channels.}. Since the channel considered here is the erasure channel, we use Reed-Solomon codes for channel coding. For source coding, we consider three different techniques: a universal source coding scheme, Huffman coding, and 5-bit character encoding. We demonstrate that the proposed deep learning encoder and decoder outperform the traditional approach in terms of word error rate (WER) when the bit budget per sentence encoding is low. Moreover, in many cases, although some words may be replaced, dropped, or added to the sentence by the deep learning decoder, the semantic information in the sentence is preserved in a qualitative sense. \vspace{-2.3mm} \section{Problem Description} \label{sec:problem_formulation} \vspace{-2.3mm} In this section, we define our system model associated with transmitting sentences from a transmitter to a receiver using a limited number of bits. Let $\mathcal{V}$ be the set of all the words in the vocabulary and let $\vec{s}=[w_1,w_2,\cdots,w_m]$ be the sentence to be transmitted, where $w_i\in\mathcal{V}$ is the \ith~word in the sentence. The transmitter converts the sentence into a sequence of bits prior to transmission using source and channel coding. Let $\vec{b} = \varphi_\ell(\vec{s})$ be a binary vector of length $\ell$, where $\varphi_\ell$ is the function representing the combined effect of the source and channel encoder. Let $\vec{o}$ be the vector of observations at the receiver corresponding to each of the $\ell$ bits in the transmission. Note that $\vec{o}$ does not necessarily need to be a binary vector; it could be a vector of real or natural numbers depending on the channel considered. Let the combined effect of the source and channel decoder be given by the function $\nu_\ell(\vec{o})$.
Then $\hat{\vec{s}} = [\hat{w}_1,\hat{w}_2, \cdots, \hat{w}_{m^\prime}]=\nu_\ell(\vec{o})$, where $\hat{\vec{s}}$ is the recovered sentence. The traditional approach to designing the source and channel coding schemes is to minimize the word error rate while also minimizing the number of transmission bits. However, jointly optimizing the source coding and the channel coding schemes is a difficult problem and therefore, in practice, they are treated separately. The problem considered in this work is designing a joint source-channel coding scheme that preserves the meaning between the transmitted sentence $\vec{s}$ and the recovered sentence $\hat{\vec{s}}$, while the two sentences may have different words and different lengths. \vspace{-2.3mm} \section{Deep Learning Algorithm} \vspace{-2.3mm} \label{sec:algorithms} Our work is motivated by the recent success of the sequence-to-sequence learning framework \cite{sut14sequence} in different tasks such as machine translation \cite{wu16GoogleTranslate,bah14align}. Our system, which is shown in Fig.~\ref{fig:encDec}, has three components: the encoder, the channel, and the decoder. The encoder takes as input a sentence $\vec{s}$, concatenated with the special end-of-sentence word $<$eos$>$, and outputs a bit vector $\vec{b}$ of length $\ell$. The channel takes an input bit vector $\vec{b}$ and produces an output vector $\vec{o}$. The effect of this module is random. The channel output $\vec{o}$ is the input to the decoder, and the output of the decoder is the estimated sentence $\hat{\vec{s}}$. We now describe each of these modules in detail. \begin{figure} \centering \includegraphics[width=1\columnwidth,keepaspectratio]{EncoderDecoderArchitecture.pdf} \vspace{-0.5cm} \caption{\label{fig:encDec} The encoder-decoder architecture.} \vspace{-0.5cm} \end{figure} \vspace{-2.3mm} \subsection{The Encoder} The first step in the encoder uses an embedding vector to represent each word in the vocabulary.
In this work, we initialize our embedding vectors using GloVe \cite{pen14glove}. Let $\vec{E} = [\vec{e}_1, \vec{e}_2, \cdots, \vec{e}_m, \vec{e}_{\text{eos}}]$ be the $m+1$ embeddings of the words in the sentence. In the second step, the embedded words are the inputs to a stacked bidirectional long short term memory (BLSTM) network \cite{gra05blstm}. The LSTM cell used in this work is similar to that used in \cite{sak14lstmpeep}. The $j^{\text{th}}$~BLSTM stack is represented by \vspace{-2.3mm} \begin{align} \vec{C}_j,\vec{H}_j = \mathrm{BLSTM}_j(\vec{H}_{j-1}), \end{align} where $\vec{C}_j$ is the cell state matrix and $\vec{H}_j$ is the output matrix. Each column of $\vec{C}_j$ and $\vec{H}_j$ represents, respectively, the cell state and output vector at one time step, and $\vec{H}_0 = \vec{E}$. Fig.~\ref{fig:encDec} shows an encoder with two stacked BLSTM layers. Let $k$ be the total number of BLSTM stacks. We concatenate the outputs at the last step and similarly the cell states at the last step of each layer using \vspace{-2.3mm} \begin{align} \vec{h} &= \vec{H}_1[m+1]\oplus\vec{H}_2[m+1]\oplus \cdots \oplus\vec{H}_k[m+1],\\ \vec{c} &= \vec{C}_1[m+1]\oplus\vec{C}_2[m+1]\oplus \cdots \oplus\vec{C}_k[m+1], \end{align} where $\oplus$ is the concatenation operator, and $\vec{H}_j[m+1]$ and $\vec{C}_j[m+1]$ are the $(m+1)^{\text{th}}$ column (i.e., the last step) of, respectively, the outputs and cell states of the $j^{\text{th}}$~stack. To convert $\vec{h}$ and $\vec{c}$ into binary vectors of length $\ell/2$ we use the same technique as in \cite{wil92binarizer,cou15binarizer,tod16imgComp}. The first step in this process uses two fully connected layers \vspace{-2.3mm} \begin{align} \vec{h}^* &= \tanh(\vec{W}_h \vec{h}+\vec{a}_h),\\ \vec{c}^* &= \tanh(\vec{W}_c \vec{c}+\vec{a}_c), \end{align} where $\vec{W}_h$ and $\vec{W}_c$ are weight matrices, each with $\ell/2$ rows, and $\vec{a}_h$ and $\vec{a}_c$ are the bias vectors.
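A minimal numerical sketch of the concatenation and projection equations above is given below; random matrices stand in for the learned weights $\vec{W}_h$ and $\vec{a}_h$ (the cell-state path through $\vec{W}_c$, $\vec{a}_c$ is identical in shape):

```python
import numpy as np

def project_states(final_states, ell, rng):
    """Concatenate the last-step states of the k BLSTM stacks and
    project them to a length-(ell/2) vector in [-1, 1].

    final_states: list of k 1-D arrays, i.e. [H_1[m+1], ..., H_k[m+1]].
    W and a are random stand-ins for the learned weights and bias.
    """
    h = np.concatenate(final_states)             # h = H_1[m+1] + ... + H_k[m+1] (concatenation)
    W = rng.standard_normal((ell // 2, h.size))  # stand-in for W_h, with ell/2 rows
    a = np.zeros(ell // 2)                       # stand-in for the bias a_h
    return np.tanh(W @ h + a)                    # h* = tanh(W_h h + a_h)
```

The $\tanh$ keeps every entry of $\vec{h}^*$ in $[-1,1]$, which is what the subsequent binarization step requires.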
Note that although here we use a single fully connected layer for each vector, it would be possible to use multiple layers where the size of $\vec{h}$ and $\vec{c}$ is increased or decreased to $\ell/2$ in multiple steps. However, the last layer's activation function must always be a tanh, to keep the output values in the interval $[-1,1]$. The second step maps the values in $\vec{h}^*$ and $\vec{c}^*$ from the interval $[-1,1]$ to binary values $\{-1, 1\}$. Define a stochastic binarization function as \vspace{-2.3mm} \begin{align} \beta(x) = x + Z_x, \end{align} where $Z_x \in\{1-x, -x-1\}$ is a random variable distributed according to $P(Z_x = 1-x) = \tfrac{1+x}{2}$ and $P(Z_x = -x-1) = \tfrac{1-x}{2}$. Then the final binarization step during training is \vspace{-2.3mm} \begin{align} \vec{b} = \beta(\vec{h}^*) \oplus \beta(\vec{c}^*) \end{align} for the forward pass. During the back-propagation step of training, the derivative with respect to the expectation $\mathbb{E}[\beta(x)]=x$ is used \cite{rai14binarytech}. Therefore, the gradients pass through the $\beta$ function unchanged. After training the network using $\beta$, during deployment or testing the stochastic function $\beta(x)$ is replaced with the deterministic function $2u(x)-1$, where $u(x)$ is the unit step function. \vspace{-2.3mm} \subsection{The Channel} To allow for end-to-end training of the encoder and the decoder, the channel must allow for back-propagation. Fortunately, some communication channels can be formulated using neural network layers. These include the additive Gaussian noise channel, the multiplicative Gaussian noise channel, and the erasure channel. In this work, we consider the erasure channel as it can model packets of data being dropped in a packet-switched network, or wireless channels with deep fades or burst errors.
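The stochastic binarization $\beta$ defined above, and its deterministic test-time replacement $2u(x)-1$, can be sketched as follows; this is an illustrative NumPy sketch, and the unbiasedness $\mathbb{E}[\beta(x)]=x$ (which is what lets gradients pass through unchanged) is checked empirically.

```python
import numpy as np

rng = np.random.default_rng(1)

def beta(x, rng):
    """Stochastic binarization: returns +1 w.p. (1+x)/2 and -1 w.p.
    (1-x)/2, so that E[beta(x)] = x for x in [-1, 1]."""
    u = rng.random(x.shape)
    return np.where(u < (1.0 + x) / 2.0, 1.0, -1.0)

def beta_test(x):
    """Deterministic replacement used at test time: 2*u(x) - 1."""
    return np.where(x >= 0, 1.0, -1.0)

x = np.array([-0.9, -0.1, 0.0, 0.4, 0.95])
b = beta(x, rng)

# Averaging many independent draws recovers x, illustrating that the
# straight-through estimator E[beta(x)] = x is unbiased.
samples = np.stack([beta(x, rng) for _ in range(20000)])
mean_est = samples.mean(axis=0)
```

During back-propagation the gradient is taken with respect to this expectation, so the random rounding does not block training.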
The erasure channel can be represented by a dropout layer \cite{sri14dropout}, \vspace{-2.3mm} \begin{align} \vec{o} = \text{dropout}(\vec{b},p_d), \end{align} where $\vec{o}$ is the vector of observations at the receiver, and $p_d$ is the probability that a bit is dropped. The elements of $\vec{o}$ are in $\{-1,0,1\}$, where 0 indicates an erasure (i.e., a dropped bit). Every bit in $\vec{b}$ may be dropped independently of the other bits with probability $p_d$. \vspace{-2.3mm} \subsection{The Decoder} At the receiver we use a stack of LSTMs for decoding. The observation vector $\vec{o}$ is the input to the decoder. Let $\ominus(\vec{x},v)$ be the inverse of the concatenation operator, where the vector $\vec{x}$ is broken into $v$ vectors of equal length. Then we have \vspace{-2.3mm} \begin{align} \vec{h}^\prime,\vec{c}^\prime = \ominus(\vec{o},2), \end{align} which contribute to the initial state $\vec{h}^{(j)}_0$ and cell state $\vec{c}^{(j)}_0$ of the $j^{\text{th}}$~LSTM stack. In particular, these initial states are given by \vspace{-2.3mm} \begin{align} \vec{h}^{(j)}_0 &= \tanh\left(\vec{W}^{(j)}_h\vec{h}^\prime+\vec{a}^{(j)}_h\right),\\ \vec{c}^{(j)}_0 &= \vec{W}^{(j)}_c\vec{c}^\prime+\vec{a}^{(j)}_c, \end{align} where $\vec{W}^{(j)}_h$ and $\vec{W}^{(j)}_c$ are weight matrices, and $\vec{a}^{(j)}_h$ and $\vec{a}^{(j)}_c$ are bias vectors. The first input to the LSTM stack is the embedding vector for a special start-of-sentence symbol $<$sos$>$. Note that after the first word $\hat{w}_1$ is estimated, its embedding vector is used as the input for the next time step. To speed up training, during the first few epochs we use the correct word $w_i$ with probability 1 as the input for the $(i+1)^{\text{th}}$ time step at the decoder; we then gradually anneal the probability with which we replace the correct word $w_i$ with the estimated word $\hat{w}_i$.
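The erasure channel defined at the start of this section can be sketched as a masking operation. Note one assumption in this sketch: unlike standard dropout, the surviving bits are not rescaled by $1/(1-p_d)$, since the receiver observes the raw values $\{-1,0,1\}$.

```python
import numpy as np

rng = np.random.default_rng(2)

def erasure_channel(b, p_d, rng):
    """Each bit in {-1, +1} is independently erased (set to 0) with
    probability p_d. No 1/(1-p_d) rescaling is applied, in contrast
    to a standard dropout layer."""
    mask = rng.random(b.shape) >= p_d
    return b * mask

# A long random bit vector, so the empirical erasure rate is near p_d.
b = rng.choice([-1.0, 1.0], size=10000)
o = erasure_channel(b, p_d=0.05, rng=rng)
```

Because the mask is a simple elementwise multiplication, gradients flow through the surviving positions during end-to-end training.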
During deployment and testing we always use the estimated words and the beam search algorithm to find the most likely sequences of words \cite{gra12beamsearch,wu16GoogleTranslate}. \vspace{-2.3mm} \section{Results} \label{sec:results} \vspace{-2.3mm} \begin{figure*}[t!] \centering \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=0.8\linewidth]{bps_word_error} \caption{Word error rate as bits per sentence varies, for a 0.05 bit erasure probability. \label{fig:bps_error}} \end{subfigure}% ~ \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=0.8\linewidth]{bdr_word_error} \caption{Word error rate as the erasure (bit-drop) rate increases, for a 400 bit encoding. \label{fig:bdr_error}} \end{subfigure}% \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=0.8\linewidth]{len_sentence_word_error} \caption{Effect of sentence length with a 400 bit encoding and a 0.05 drop rate. \label{fig:sen_len_error}} \end{subfigure}% ~ \begin{subfigure}[t]{0.5\linewidth} \centering \includegraphics[width=0.8\linewidth]{example_embedding} \caption{Sample embeddings mapped to two dimensions using multidimensional scaling with Hamming distances between codes. \label{fig:example_embedding}} \end{subfigure} \caption{Performance plots.} \vspace{-2.3mm} \end{figure*} \begin{table*}[h!] \centering \begin{tabular}{|c|p{0.8\linewidth}|} \hline Punctuation error & TX: efficiency – what efficiency ? \\ & RX: efficiency , what efficiency ? \\ \hline Rephrasing & TX: tourism serves as a source of income to totalitarian regimes . \\ & RX: tourism has become a source of income to totalitarian regimes . \\ \hline Rephrasing & TX: a few wealthy individuals compared with millions living in hunger . \\ & RX: a few wealthy individuals face with millions living in hunger . \\ \hline Tense Error & TX: a communist country riding roughshod over human rights .
\\ & RX: a communist country rides roughshod over human rights .\\ \hline An inexplicable error & TX: i listened to colleagues who mentioned bicycles . \\ & RX: i listened to colleagues who mentioned goebbels . \\ \hline Long sentence 1 & TX: there is one salient fact running through these data : the citizens want more information and have chosen television as the best means to receive that information . \\ & RX: there is one glaring weaknesses , by the communication : the citizens want more information and hold ' television as the means to receive this information . \\ \hline Long sentence 2 & TX: i hope we will be able to provide part - funding for a renovation programme for energy efficiency as a result of this decision of the eu . \\ & RX: i hope we will be able to provide for funding for the renovation programme for energy efficiency as a result of decision by the eu . \\ \hline \end{tabular} \caption{Sample sentences which were transmitted and received using the deep learning approach. \label{tab:samples}} \end{table*} In this section, we compare the deep learning approach with traditional information theoretic baselines for bit erasure channels. \vspace{-2.3mm} \subsection{The Dataset} \vspace{-2.3mm} We work with the proceedings of the European Parliament \cite{koehn2005europarl}. This is a large parallel corpus that is frequently used in statistical machine translation. The English version has around 2.2 million sentences and 53 million words. We crawl through the corpus to extract the most common words, which form our vocabulary. We pre-process the dataset by selecting sentences of lengths 4-30 where less than 20\% of the words in the sentences are unknown words (i.e., they are outside of the selected vocabulary). The corpus is split into a training and a test data set, where the training set has more than 1.2 million sentences and the test data set has more than 200 thousand sentences.
\subsection{Deep Learning Approach} \vspace{-2.3mm} We initialize 200-dimensional word embeddings using the Glove pre-trained embeddings \cite{pen14glove} for words in our vocabulary as well as a few special words (unknowns, padding, start and end symbols). We batch the sentences from the corpus based on their sentence lengths to increase the efficiency of computation, i.e., sentences of similar length are fed in batches of size 128. A two-layer BLSTM of dimension 256 with peephole connections is used for the encoder, followed by a dense layer that brings the dimension of the resultant state down to the required bit budget. The decoder has two layers of LSTM cells, each of dimension 512, with peephole connections. Note that one disadvantage of the deep learning approach is the use of a fixed number of bits for encoding all sentences of different lengths. \vspace{-2.3mm} \subsection{Separate Source-Channel Coding Baselines} \vspace{-2.3mm} We implement separate source and channel coding, which is known to be optimal in the asymptote of arbitrarily large block lengths and delays. The source coding is done using three approaches: \begin{enumerate} \item Universal compressors: We use gzip, which combines the Lempel-Ziv universal compression scheme \cite{ziv1977universal} with Huffman coding. This method works universally on all kinds of data and asymptotically reaches the entropy limit of compression. However, since this technique does not work well for single sentences, we improve its performance by jointly compressing sentences in batches of size 32 or more. Note that this gives the technique an unfair advantage, since it no longer performs source coding on single sentences. \item Huffman coding: To allow for single sentence source coding, we use Huffman coding on the characters in the sentence. Using the training corpus, we compute character frequencies, which are then used to generate the Huffman codebook.
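For illustration, a character-level Huffman codebook of the kind used in this baseline can be built as follows. This is a minimal sketch: the real baseline computes character frequencies over the full training corpus, whereas here they come from a single example string.

```python
import heapq
from collections import Counter

def huffman_codebook(text):
    """Build a character-level Huffman codebook from the character
    frequencies of `text`. Returns {char: bitstring}."""
    freq = Counter(text)
    # Heap entries: (frequency, unique tiebreaker, {char: code}).
    # The tiebreaker keeps the dicts from ever being compared.
    heap = [(f, i, {ch: ""}) for i, (ch, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {ch: "0" + code for ch, code in c1.items()}
        merged.update({ch: "1" + code for ch, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tie, merged))
        tie += 1
    return heap[0][2]

text = "the quick brown fox jumps over the lazy dog"
book = huffman_codebook(text)
encoded = "".join(book[ch] for ch in text)
```

Since Huffman coding is optimal among prefix codes, the encoding here can never be longer than the fixed 5-bit-per-character baseline for an alphabet of at most 32 symbols.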
\item Fixed length character encoding: In this approach, we use a fixed 5-bit encoding for characters (the corpus is converted to lower case) and some special symbols. Decoding gzip and Huffman codes when there are errors or corruptions in the output of the channel decoder is not trivial. However, this baseline with 5-bit encoding can still be decoded. \end{enumerate} After source encoding using the above approaches, we use a Reed-Solomon code \cite{reed1960polynomial} that can correct up to the expected number of erasures. In the comparison, we assume the channel code can exactly compensate for the erasures that occur. This assumption favors the separate source-channel coding baselines, since with non-negligible probability the actual number of bit erasures exceeds the expected number. If this occurs, the channel decoding process will have errors, which may result in irredeemable corruption when decoding the source codes (gzip or Huffman). Finally, we compare performance using a fixed bit budget per sentence. However, these schemes inherently produce encodings of different lengths. If the encoding of a sentence exceeds the bit budget, we re-encode the sentence without its last word (resulting in a word error). We repeat this procedure until the encoding is within the bit limit. \vspace{-2.3mm} \subsection{Performance} \vspace{-2.3mm} There is no better metric than a human judge to establish the similarity between sentences. As a proxy, we measure the performance of the deep learning approach as well as the baselines using the edit distance, or Levenshtein distance. This metric is commonly used to measure the dissimilarity of two strings. It is computed using a recursive procedure that establishes the minimum number of word insertion, deletion, or substitution operations that transform one sentence into another. The edit distance normalized by the length of the sentence is what we refer to as the word error rate.
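For concreteness, the word error rate can be computed with the standard dynamic program for edit distance over words; the example pair below is the "Rephrasing" row of Table \ref{tab:samples}.

```python
def word_error_rate(reference, hypothesis):
    """Word-level Levenshtein distance normalized by the reference
    length: the metric used to compare the coding schemes."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j].
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

tx = "tourism serves as a source of income to totalitarian regimes ."
rx = "tourism has become a source of income to totalitarian regimes ."
wer = word_error_rate(tx, rx)  # two substitutions out of 11 words
```

Here "serves as" is replaced by "has become", so the metric counts two word substitutions even though the meaning is essentially preserved, which is the weakness of the metric noted below.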
Word error rate is commonly used to evaluate performance in speech recognition and machine translation \cite{quirk2004monolingual,wubben2012sentence}. A downside of the metric is that it cannot capture the effect of synonyms or other aspects of semantic similarity. In Fig.\ \ref{fig:bps_error}, we study the impact of the bit budget, i.e., the number of bits per sentence, on the word error rate when the bit erasure probability is 0.05. Among the traditional baselines, gzip outperforms Huffman codes, and Huffman codes outperform the fixed length encoding. All three approaches result in no error once the bit allocation exceeds the number of bits required, because we assume the Reed-Solomon code compensates for all channel erasures. We observe that the deep learning approach is most competitive with limited bit allocations. As we enter the regime of excessive redundancy, the word error rate continues to fall. In Fig.\ \ref{fig:bdr_error}, we look at the impact of the channel on word error rates when the bit allocation is 400 bits per sentence. Among the traditional baselines, we again observe that gzip performs best, as it operates on large batches, followed by Huffman codes. Note that 400 bits is not enough to completely encode sentences even when the channel is lossless. We again observe that in stressed environments (low bit allocations and large bit erasure rates), the deep learning approach outperforms the baselines. What Fig.\ \ref{fig:bps_error} and Fig.\ \ref{fig:bdr_error} hide is the impact of varying sentence lengths. If we consider a batch of sentences in random order from the corpus, we will have both long and short sentences. The traditional baselines can allot long encodings to long sentences and short encodings to the others, leading to an average bit allocation that stays within the budget with few errors. However, the deep learning approach uses the same bit allocation for all sentences regardless of their length.
We can improve the performance of the deep learning approach here by varying the length of the embedding based on the sentence length. Fig.\ \ref{fig:sen_len_error} illustrates this very clearly. In this case, instead of having batches with sentences of different lengths, we use homogeneous batches to show the impact of sentence length on word error rates (bit allocation 400, bit erasure rate 0.05). For short sentences, we are in the excess bit allocation regime. As the sentence length increases beyond 20, the deep learning approach significantly outperforms the baselines. Another aspect to consider is that the word errors of the deep learning approach may not be true errors: they may include substitutions of words by synonyms, or rephrasings that do not change the meaning of the sentence. \vspace{-2.3mm} \subsection{Properties of the encoding} \vspace{-2.3mm} The deep learning approach results in a lossy compression of text. It is able to do this by encoding a semantic embedding of the sentence. We can watch this in action in Fig.\ \ref{fig:example_embedding}. Here, we compute the embeddings of a few sentences, groups of which are thematically linked. One group of sentences is about a girl saying something to a man, another is about a car driving, and the last is about politicians voting. We then find the Hamming distance between the embeddings and use this dissimilarity matrix with multidimensional scaling approaches \cite{borg2005modern} to view it in two dimensions. Sentences that express the same idea have embeddings that are close together in Hamming distance. We do not see such behavior in the information theoretic baselines, which do not take into account the fact that they are encoding text with semantic information. A few representative errors are shown in Table \ref{tab:samples}.
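The Hamming dissimilarity matrix underlying Fig.\ \ref{fig:example_embedding} can be sketched as follows, with toy $\pm 1$ codes in place of learned embeddings; the two-dimensional view is then obtained by applying multidimensional scaling to this matrix.

```python
import numpy as np

rng = np.random.default_rng(3)

def hamming_matrix(codes):
    """Pairwise Hamming distances between +/-1 binary embeddings.
    For +/-1 codes, hamming(x, y) = (len - x.y) / 2, so the whole
    matrix is one matrix product."""
    codes = np.asarray(codes)
    return (codes.shape[1] - codes @ codes.T) // 2

# Toy embeddings: two near-duplicate codes and one distant code,
# standing in for sentences that do or do not share a theme.
base = rng.choice([-1, 1], size=32)
similar = base.copy()
similar[:2] *= -1          # 2 bits flipped
distant = -base            # all 32 bits flipped
D = hamming_matrix([base, similar, distant])
```

The matrix `D` plays the role of the dissimilarity input to multidimensional scaling; thematically close sentences produce small entries.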
\vspace{-2.3mm} \section{Conclusion} \label{sec:conclusion} \vspace{-2.3mm} We considered the problem of joint source-channel coding of text data using deep learning techniques from natural language processing. In many applications, recovery of the exact transmitted sentence is not important as long as the main information within the sentence is conveyed. We demonstrated that our proposed joint source-channel coding scheme outperforms separate source and channel coding, especially in scenarios with a small number of bits available to describe each sentence. One drawback of the current algorithm is that it uses a fixed bit length to encode sentences of different lengths. As part of future work, we will investigate how to resolve this issue. With severe bit restrictions per sentence, we will also look at deep learning based summarization to represent information. Joint source-channel coding of other forms of structured data, such as images, audio, and video, would also be a relevant future direction. \bibliographystyle{IEEEbib}
\chapter{Background} \label{Chapter1} \lhead{Background} \section{Introduction} Research is a continuous process. The purpose of research is to introduce new ideas through scientific discourse. As more and more journal articles and conference papers are published year by year, it becomes increasingly difficult to identify research articles that are related to one’s field of interest. Furthermore, it becomes non-trivial to keep up-to-date with newly published research articles as well as to associate them to previously published articles. With the digitization of research publications, there has been a move to use computers to augment the search for related articles which are relevant to a researcher’s field of interest. Such systems are known as research paper recommendation systems. A recommender system can be taken as a black box which takes in a profile of a user and matches it against a candidate set of items in order to suggest previously unseen items for a user. These items are considered to be the most relevant recommendations for that user. \section{Recommender Systems} As explained previously, a recommender system can be most easily visualized as a system that takes as input some characteristics from a user which are processed in order to identify items which are most relevant to the user’s interests. The type of matching used commonly categorizes the approach into either a content based approach, or a collaborative filtering approach. In a content based filtering approach, the tastes and interests of a user are extracted by using the information contained in the items that the user has previously interacted with.
The exact action that is considered as an interaction depends on the specifics of the recommender system. For example, a book recommender system might choose to use the act of purchasing a book as an interaction, whereas a friend recommender system on a social media platform might choose to use the act of sending a message as a relevant interaction. The items that a user interacts with are usually summarized in an \textit{item profile}, and this item profile is then compared against the candidate set of items to provide personally tailored recommendations. In a collaborative filtering approach, no information from the items that a user interacts with is used. Instead, similarities are considered between users in terms of the items that each user has interacted with. For a new recommendation for a user, a set of most similar users is considered. Then, items that the similar users have interacted with, but that the user for whom we are providing recommendations has not interacted with, are used as recommendations. In order to improve the performance of the recommenders, there have been successful efforts to combine the traditional collaborative filtering approach with the traditional content based filtering approach. These types of recommenders are said to use hybrid recommendation approaches. One way to create a hybrid approach is to separately construct recommendation lists and then combine them. Another way is to add collaborative filtering ideas into a content based filtering framework, and vice-versa. As examples, Pandora Radio\footnote{www.pandora.com} is a content based music recommender system. IMDb\footnote{www.imdb.com} is a content based movie recommender system. Spotify\footnote{www.spotify.com} has a collaborative filtering music recommender system.
Netflix\footnote{www.netflix.com} is a good example of a movie and TV show recommender system that combines the features of traditional content based filtering and traditional collaborative filtering approaches. \section{Research Paper Recommendation} Research paper recommendation addresses the task of providing recommendations based on an abstraction of the user's profile. More than 200 research articles regarding research paper recommendation systems were published in the 16 years until 2015, and more new systems have been introduced since then, which will be described in chapter \ref{Chapter 2}. Depending on the type of information that is available to the research paper recommendation system, we can ascertain whether the system uses a collaborative filtering approach or a content based filtering approach. Usually, if a user in the recommendation system has a library of literature articles associated with him, there is a possibility that the system uses a collaborative approach. However, if this is not the case, and there is no library associated with a user, we can comfortably conclude that the system uses a content based approach. Additionally, a system may or may not use the history of literature that a user interacted with in order to recommend new articles. Literature recommendation is usually found to be associated with a reference management software, or a digital library. Usually, collaborative filtering approaches only provide new recommendations when they are used at intervals of time between recommendations. Thus, there is a distinction between a \textit{related article} feature in digital libraries, which only uses a content based approach, and a \textit{weekly recommendation} feature in a digital library, which would probably use a collaborative approach. In this work, the focus is only on identifying baselines for research paper recommendations for a related article search feature. Thus, we will look primarily at content based recommenders.
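As a minimal illustration of the content based matching that such a related article feature performs, candidate papers can be ranked by the cosine similarity between term-frequency vectors. This is a toy sketch with invented paper titles; real systems typically use tf-idf weighting and much richer paper models.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-frequency Counters."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def recommend(profile_text, candidates, top_n=2):
    """Rank candidate papers by similarity to the user's item
    profile (a bare-bones content based recommender sketch)."""
    profile = Counter(profile_text.lower().split())
    scored = [(cosine(profile, Counter(doc.lower().split())), title)
              for title, doc in candidates.items()]
    return [t for _, t in sorted(scored, reverse=True)[:top_n]]

# Hypothetical candidate pool (titles and text are invented).
candidates = {
    "P1": "collaborative filtering for movie recommendation",
    "P2": "content based research paper recommendation with tfidf",
    "P3": "deep learning for image classification",
}
ranked = recommend("research paper recommendation content based",
                   candidates)
```

The paper whose terms overlap most with the profile is ranked first, which is exactly the behavior a related article search feature relies on.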
A typical content based research paper recommendation framework is illustrated in Fig. 1. Research paper recommendation is a specific domain in the larger field of recommender systems. The items are research articles, and the users are researchers who need to identify relevant literature that pertains to their specific area of interest. \begin{figure} \centering \includegraphics[scale=0.9]{Figures/Fig1-1.pdf} \caption{Schematic of a Research Paper Recommender Framework} \label{fig:my_label} \end{figure} The framework creates candidate models for research articles in the candidate pool. \chapter{Literature Review} \label{Chapter 2} \lhead{Chapter 2. \emph{Lit. Review}} In 2015, Beel et al. published an extensive literature survey of the field of research paper recommender systems until and including the year 2013 \cite{Beel2015rlitreview}. In order to pick the papers that they included in the survey, they used the following query: "[paper | article | citation] [recommender | recommendation] [system | systems]" The query was used as a search input to the Google Scholar, ACM Digital Library, Springer Link, and ScienceDirect systems. In the literature review for this thesis, we used this same query. Beel et al. identified 217 articles in total and described the trends in the field based on which evaluation methods were applied (e.g., user studies or offline evaluations), which evaluation metrics were used (e.g., precision or recall), and the scale of the evaluation as well as the dataset or document catalog used for the evaluations. The following sections provide a summary of the work published in the field of research paper recommender systems from the end of 2013 until August 2016. It is organized according to a very broad categorization of the kind of recommendation approach used. \section{Collaborative Filtering} \subsection{Personalized Academic Paper Recommendation System} Lee et al.
\cite{lee2015personalized} used the collaborative filtering approach to construct individual recommendations for researchers. They use a simple k-nearest neighbors approach to find users similar to the target user and make recommendations for that user. They consider all the papers that a user has authored in order to construct the user model. The user is modelled by a text vectorization of his papers, and simple vector cosine is used as the similarity measure. The evaluation is carried out in both an online and an offline fashion. The offline evaluation dealt with measuring the classification accuracy of the recommendations on a corpus of IEEE Xplore and ACM Digital Library papers. The offline method primarily evaluates whether the paper recommendations fall under the same field of research that the user is focused on. There is no focus on serendipity of recommendations. The online test was a user study with three participants, the results of which cannot be representative of the method. Regardless, their methodology shows a high user satisfaction among the participants. \section{Content Based Filtering} \subsection{Sugiyama and Kan} Sugiyama and Kan published updates to their previous work on research paper recommendation systems over three articles since 2013 \cite{Sugiyama:2015:THR:2719943.2719947} \cite{Sugiyama:2013:EPC:2467696.2467701} \cite{Sugiyama2015}. All of their work is explained in \cite{Sugiyama2015}. Their approach has been to recommend papers based on an author's profile, where the author is the user in their system. In order to get serendipitous results, they linearly combine the user model of the author with the user models of one or many dissimilar users. They also introduced the idea of discovering potential citation papers, in addition to the already cited papers in an article, in order to expand the contextual network of the paper under consideration. The citation context around every citation in the paper is also a feature that they use.
Each of these papers is modelled by its terms in order to obtain the user profile, which is compared against the word models of the candidate papers. They evaluate their model on the ACM Digital Library, with the ground truth constructed based on input from 50 researchers. The dataset has been made publicly available. Their model is shown to be superior in recommendation accuracy to some state of the art models, such as Wang and Blei's topic modelling approach and Nascimento's approach. They also compare the serendipity of their models against Carbonell and Goldstein's maximal marginal relevance model as well as against a random model, to show that the nITN measure (which is not referenced anywhere else in their text) is significantly better than the competing methodologies. \subsection{Effective Academic Research Papers Recommendation for Non-profiled Users} Hanyurwimfura et al. propose a solution to recommend scientific articles to non-profiled users in their 2015 work \cite{hanyurwimfura2015effective}. This methodology is meant to avoid the problems of collaborative filtering for users for whom there is not enough data available to build a user profile. They take a content based approach, extracting both short and long queries from a single paper provided as the input. The long queries are taken from the abstract and from sections similar to the title, whereas the short queries are commonly occurring phrases in the paper as well as words from the title. These queries are weighted and used to filter candidate papers from the corpus. The recommendations are made using a simple cosine similarity between the target paper and the filtered papers. They evaluate their approach both in the sense of a topic extraction paradigm and as a recommendation system. The evaluation of the relevancy of the extracted topics was done by 20 participants who are researchers, and the average acceptance ratio was found to be 68.3\%.
To evaluate the performance as a paper recommendation tool, one paper from each researcher was used to build recommendations, and each recommendation had to be rated for its relatedness to their field of work. They reported recall and NDCG scores, using a corpus of ACM, IEEE and Science Direct papers, as being incrementally better than those of the methods of Nascimento et al. \cite{nascimento2011source} \subsection{SimSeerX} SimSeerX is a similar paper search engine built on top of the CiteSeer document corpus by Kyle Williams as part of Lee Giles' research group \cite{Williams2014}. It can take whole documents or text as a query and returns a list of the documents most similar to the query. The idea behind SimSeerX's structure is to decompose a document into a set of signatures. Each document is then indexed into the system by these signatures. A document submitted as a query is also decomposed into its signatures and then searched against the rest of the indexed documents. Any similarity function can be used in this system because the document signatures have been built to accommodate this. Currently, SimSeerX supports key phrase similarity, shingle similarity, and a SimHash similarity, by customizing the similarity functions in the Solr instance which has indexed all the documents. Since the paper was an exposition of the system, the evaluation was done in terms of the time taken for a query to return results, both in a cold-start and a cached scenario. It was shown that the system scales well up to a corpus size of 3.5 million documents. \subsection{PaperTaste} The authors, Xue et al., aim to solve recommendation as a supervised ranking problem \cite{xue2014personalized}. They split the corpus into two parts based on a time-frame. The older papers form the training set and the newer ones form the validation/test set.
A sample in the training data would be an input paper and its citation network, combined with randomly sampled uncited papers (which constitute the negative samples). The score for each node is the number of times that paper has been cited, which is 0 for the negative samples. The problem then becomes one of finding suitable features to train this ranking model. The authors choose to construct features such as the PageRank for paper, author and venue, the age of the paper, content similarity between titles and abstracts, etc. They also use a feature which they call the “author ratio”: the percentage of papers in the user’s profile which contain some or all authors of the candidate paper. Using these features, they train a Ranking SVM model. A recommendation for a new paper is made by constructing these features for each candidate paper. If serendipity is a requirement, the authors rank the top 500 recommendations and randomly sample from that list. Evaluation was done against a few baseline approaches, such as a basic CF, a basic CBF, and a PageRank-weighted CF method. In the offline evaluation, which was done on a Social Scholar dataset of 730,605 papers for 10,000 authors, it was reported that the PaperTaste system outperformed the others in terms of the NDCG$_{k}$ value. Their online evaluation consisted only of reporting statistics about the percentage of people who activated the recommendation option in the PaperTaste system and then proceeded to interact with the system. \subsection{Recommendation System Based on Fuzzy Cognitive Map} This method augments the key words present in an input paper to find a fuzzy match between these key words and words in a pre-prepared ontology of topics in the research domain \cite{liu2014recommendation}. The paper is difficult to read in terms of grammar, but it goes on to describe a Fuzzy Cognitive Map between topics in the ontology that is used to build recommendations.
Their evaluation is done using user studies in comparison with the RecULike system. Not much information is given on how this evaluation was conducted. \subsection{Keyword Based Article Recommendation System using Map-Reduce} In their 2015 paper, Singh and Ahuja \cite{singh2015article} provided a proof of concept for the utilization of Hadoop based technologies to provide paper recommendations. Their mechanism is simple to implement and only does keyword matching on the input query. They provide an evaluation in terms of the time taken per number of queries to show that using Hadoop based map-reduce infrastructures is essential for large scale recommendations. \subsection{Content-Based Approach in Research Paper Recommendation System} Philip and others, in a 2014 paper \cite{philip2014application}, use a keyword based vector space model to make article recommendations for digital libraries. They build a system that uses user interactions in order to construct a user profile. They model papers by their keywords using a tf-idf approach and use the cosine similarity measure to find relevant articles to recommend based on an input query. No evaluation of their framework is provided in the paper. \subsection{RefSeer} In 2014, Wenyi Huang authored an article \cite{Huang:2014:RCR:2740769.2740832} that encapsulates all the work done so far in building the RefSeer system as part of C. Lee Giles' research group. The paper describes the system behind RefSeer as well as giving a concise description of the recommendation methodology used. RefSeer recommends citations for queries. Usually the queries are uncited manuscripts or paragraphs of text. First, to create the global recommendations, RefSeer finds the topics in the corpus using the Cite-PLSA-LDA model \cite{huang2012recommending}. For a new query, the top 5 topics are calculated and recommendations are made for these topics.
Only with the addition of a local recommendation does the recommendation system become personalized. This local recommendation makes use of a translation model between the original manuscript and the possible citation papers. This model is learned from the corpus using an IBM-1 model with pairs of phrases from the original text and the citations for those texts. For a fresh query, the system makes use of these two frameworks to come up with the list of citations needed. The RefSeer team evaluate their work on the CiteSeer corpus and the CiteULike corpus and report the training and recommendation times. The training time is quite sizeable on the entirety of the CiteSeer dataset; however, the recommendation time for a fresh query is less than 5 milliseconds. They also report their MRR on the datasets, but no comparison against any other system is provided, along with an acknowledgment that their model is not superior to other approaches to paper recommendation. \subsection{Recommender Systems with Big Data} This paper \cite{jokar2016contextual}, published in 2016, presents a recommendation system that uses cosine similarity between a user profile and the keywords and abstracts of articles to suggest recommendations. The user profile is non-dynamic and is a function of the user’s working area. The system was evaluated through a user study with recommendations being suggested based on a corpus of IEEE documents. It does not add anything new to the field. \subsection{Investigating the User Curriculum} Magalhaes et al. built a recommendation system in 2015 that harnesses the vast repository of research articles in the Portuguese language database CV-Lattes \cite{magalhaes2015recommending}. They model papers based on terms and concepts, and the papers are then indexed based on these. Each paper is associated with concepts through different weights. The user profile is modelled by features taken from the CV-Lattes system. 
Their experiments deal with finding out how the length of the user profile (in years) affects the performance of the recommendations. Their experiments also focused on comparing the modelling of papers by terms as opposed to the concepts present in the papers. They also compared against Lopes’s recommendation methodology \cite{Lopes:2008:PRS:1666091.1666103}. They report that their system outperformed the existing system, though it uses more information than Lopes’s system. They do not compare the system against more state of the art baselines. \subsection{Recommendations using User's Preferences} Igbe et al. adapt a frequent pattern growth algorithm in order to prune out a set of recommended papers from a larger set of candidate research articles \cite{igbe2016incorporating}. To do this, they first build and extract features from each paper’s metadata. These features all have values between 0 and 1, and many of them are based on the paper’s citation data. The authors then compare the average feature score against the cut-off feature score in order to limit the scope of the recommendations. Only the articles which have average feature scores above the cut-off are taken into consideration for the next step. In the next step, the authors take user input to select an optional number of search filters to prune the intermediate set of research papers. The frequent pattern growth algorithm is used in this step to select as many papers as possible which satisfy as many filters as needed to satisfy a pre-specified minimum support value. These are the final recommendations. The user inputs are keywords and an optional number of search filters. The authors use an offline evaluation to study their approach in comparison to two other baselines. However, the baselines are the PageRank model and Sugiyama and Kan’s 2010 model \cite{sugiyama2010scholarly}, both of which have since been refined to produce improvements. 
This means that the improvements as reflected in the paper may not be improvements on the current state of the art versions of these models. Additionally, the paper does not describe the corpus that the authors have used for their comparisons. \subsection{Personalized Concept-Driven Recommender System for Scientific Libraries} De Nart et al. prototyped the idea of assigning keywords to all scientific articles in the corpus using the Dikepe Keyphrase extraction module \cite{Nart2014APC}. These keywords are represented in the form of a context graph to cluster similar keywords together; linked keywords form a network of contexts. They represent the user profile in terms of keywords of the papers that the user rates as relevant. This approach does not overcome the cold-start problem and is not much different from many existing systems. The prototype was rated in user studies by 30 graduate students after they had used it for a period of a month. They also evaluate the approach on the MovieLens dataset for movies. \subsection{Science Concierge} This system was developed as a recommendation system to recommend research articles which are presented in one particular conference \cite{achakulvisut2016science}. The recommendations are not meant to be serendipitous; instead, the aim is to recommend research articles which are as close as possible to the desired topic. The distance is judged using a human curated topic hierarchy. The idea behind Science Concierge is the vectorization of keywords and abstracts of research papers using Latent Semantic Analysis. The system then calculates the nearest neighbors of a set of input research papers. Since nearest neighbor search is expensive, the search is approximated using ball trees. In terms of performance, and using the evaluation criteria mentioned earlier, Achakulvisut et al. report scores that are generally better than when only keywords are considered to represent the document. 
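The LSA-plus-nearest-neighbor idea behind Science Concierge can be sketched as follows; the tiny document-term matrix is invented for illustration, and a brute-force cosine search stands in for the ball-tree approximation the authors use for speed:

```python
import numpy as np

# Toy document-term count matrix (4 documents x 6 terms); in Science
# Concierge the counts would come from keywords and abstracts.  Documents
# 0-1 share one vocabulary block and documents 2-3 share another,
# purely for illustration.
X = np.array([
    [2., 1., 1., 0., 0., 0.],
    [1., 2., 1., 0., 0., 0.],
    [0., 0., 0., 2., 1., 1.],
    [0., 0., 0., 1., 2., 1.],
])

# LSA: project every document onto the top-k singular directions.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
k = 2
doc_vecs = U[:, :k] * s[:k]   # one k-dimensional latent vector per document

def nearest(query_idx, vecs):
    """Brute-force cosine nearest neighbor; ball trees only make this faster."""
    q = vecs[query_idx]
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q))
    sims[query_idx] = -np.inf   # exclude the query document itself
    return int(np.argmax(sims))

# Each document's nearest neighbor is the other document from its block.
assert nearest(0, doc_vecs) == 1 and nearest(2, doc_vecs) == 3
```

The latent dimension k would be far larger in practice; the point is only that recommendation reduces to nearest-neighbor search in the LSA space.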
\subsection{PubRec} Alzoghbi’s content based model represents both the research articles and user profiles in terms of domain related keywords \cite{DBLP:conf/lwa/AlzoghbiA0L15}. Varying weights are given to keywords extracted from different sections of the text according to the relative importance of the sections. The researcher profile is built by a multivariate regression over the researcher’s previous publications, with the weights on past publications decayed based on the age of the article. Then a simple dot product of the article vector and the researcher profile vector gives the similarity between the article and the user’s tastes, according to the authors. The experiment conducted by Alzoghbi et al. was to recommend interesting papers for 50 researchers. These results were compared against Nascimento’s \cite{nascimento2011source} and Sugiyama’s \cite{sugiyama2010scholarly} earlier works in terms of MRR and NDCG. Their reported results reveal that PubRec comfortably outperforms Nascimento’s work (based on keywords) and is competitive with Sugiyama’s recent works in the field. The experiments were conducted using Sugiyama’s scholarly publication recommendation dataset. \subsection{Neural Probabilistic Model for Context Based Citation Recommendation} Huang et al., as part of Lee Giles’ group, developed a neural probabilistic model to learn a semantic embedding to represent research papers \cite{Huang:2015:NPM:2886521.2886655}. The model that they build eventually learns the probability of citing a paper given the citation context. The query to the model is a citation context, and the output is a list of possible documents to cite. This is done by projecting both the documents and the citation contexts into a shared embedded vector space. Prior to this, word representations and document representations are learned separately. 
In the final step, batches of pairs of citation contexts and their cited documents are used as training samples to train the neural probabilistic model. In the experiment they conducted, they used the CiteSeer dataset and split it into a training period and a testing period. Their aim was to predict the citations and compare the results against other models such as the Citation Translation Model (CTM) \cite{huang2012recommending} and the Cite-PLSA-LDA model \cite{huang2012recommending}. They compared the models in terms of the MRR, MAP and NDCG values. Their results conclusively showed the usefulness of this method: it outperformed every model except the CTM by a factor of 2 or 3, and outperformed CTM itself by a few percentage points. It is not clear whether this model has been incorporated into the CiteSeer system. \subsection{LDA-Based Approach} Amami et al. used an LDA approach to build the author profile \cite{DBLP:conf/nldb/AmamiPSF16}. Assuming that the author’s publications are representative of his interests, they construct the author profile by applying LDA to the abstracts of the author’s publications. Each candidate paper for recommendation is then reduced to a language model (i.e., a set of topics it contains). An important step in the construction of the user profile is the validation of the topic model using a hold-out set of published articles. The similarity between the author profile and the candidate paper is calculated in terms of KL divergence, a non-symmetric measure of the difference between two probability distributions. In this case, both the author profile and the candidate paper are represented by a probability distribution over the topics present in the texts. This approach is evaluated against Wang and Blei’s Collaborative Topic Regression model \cite{wang2011collaborative} and Zhang’s CAT model \cite{Zhang2014}. Recommendations are made on the ArnetMiner dataset\footnote{https://cn.aminer.org/} which has around 1.5 million papers. 
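The KL-divergence comparison used by Amami et al. can be sketched as follows; the three-topic distributions are invented for illustration:

```python
import math

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two discrete topic distributions.

    A small epsilon guards against zero probabilities in q, which would
    otherwise make the divergence infinite.  Note KL is not symmetric:
    KL(p || q) != KL(q || p) in general.
    """
    return sum(pi * math.log(pi / (qi + eps)) for pi, qi in zip(p, q) if pi > 0)

author_profile  = [0.6, 0.3, 0.1]    # topic proportions from LDA on the author's abstracts
candidate_paper = [0.5, 0.4, 0.1]    # topically close to the profile
unrelated_paper = [0.05, 0.05, 0.9]  # concentrated on a different topic

# Lower divergence means the candidate is closer to the author's interests.
assert kl_divergence(author_profile, candidate_paper) < \
       kl_divergence(author_profile, unrelated_paper)
```

Ranking candidates by ascending divergence from the profile then yields the recommendation list.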
However, this set of papers is whittled down and recommendations are only considered for 1600 authors. The ground truth for each of the papers is the reference list of the paper. The results reported are only for the recall at varying recommendation lengths. The model outperforms the compared models for all numbers of recommendations in terms of its recall capacity. \section{Graph Based Filtering} \subsection{Authoritative scholarly paper recommendation based on paper communities} Zhou et al. built an authoritative paper recommendation system intended to help junior researchers identify the important papers in their field of study \cite{zhou2014authoritative}. This was done by identifying communities in the citation network of the entire corpus using the Greedy Clique Expansion algorithm. Within each community, the Paper Rank algorithm was used to rank the most influential nodes in the community. These nodes are then suggested as the most authoritative papers in that community. Their evaluation was more a verification of their procedure, in that they compared the rankings within five sample communities in the HEP-TP corpus against the number of citations from within that community. No comparison against other ranking algorithms was done, and there was no information on how to combine the rankings between communities to provide, say, a top-K list of recommendations similar to the query paper. There is mention of using the input papers from the researcher in a diagram, so presumably they identify one or two communities relevant to the author and run Paper Rank in those communities only, thus reducing the search space. \subsection{Common Author Authority Propagation (CAAP)} Hsiao et al. propose a methodology \cite{hsiao2015model} to recommend highly authoritative papers which are related to a query document by making use of the common author network of that paper. 
Their CAAP method uses authority propagation in the citation network of the query paper as a backup in case the Common Author Network fails to yield any valid recommendations. Their Common Author approach is a Google Scholar search for publications by the co-authors. Out of these, using the terms extracted from the title and keywords, candidate papers are compared for similarity to try to find authors who have published a chain of 2-3 papers related to the same topic. If any such chain is found, then the Common Author approach is successful and its result is provided as a recommendation; otherwise, the authority propagation in the citation network is used as a backup. This is done on a citation network constructed by querying the Google Scholar engine for articles related to the terms extracted from the query paper. Authority propagation serves to filter out the unimportant nodes in the citation network. Evaluation was done in an offline fashion where Recall and Mean Average Precision were calculated. The results were only compared against a Unified Graph Model (UGM) \cite{Meng:2013:UGM:2541167.2507831}. The experiments showed that the CAAP approach outperformed the UGM on 300 research papers from a few of the top conferences in the Computer Science field. \subsection{Query-oriented Approach for Relevance in Citation Networks} Totti’s work \cite{totti2016query} deals with giving recommendations for an input query. His approach involves an initial content based search to get a number of similar documents, followed by expanding this candidate set with all the cited and citing papers that are ‘H’ hops away in both directions. A citation network is built within this subset of papers with each edge weighted by a combination of text similarity, query similarity and an age decay factor. However, these edge weights are only used for the IQRA-MC model, which uses a random walk approach to identify the most influential nodes in this network. 
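The random-walk (PageRank-style) ranking used by IQRA-MC, Paper Rank, and several other systems in this chapter can be sketched as a power iteration; the four-paper citation graph and its node names are hypothetical:

```python
def pagerank(adj, damping=0.85, iters=100):
    """Power-iteration PageRank over an adjacency list {node: [cited nodes]}."""
    nodes = list(adj)
    n = len(nodes)
    rank = {v: 1.0 / n for v in nodes}
    for _ in range(iters):
        new = {v: (1.0 - damping) / n for v in nodes}
        for v in nodes:
            targets = adj[v] or nodes          # dangling nodes spread evenly
            share = damping * rank[v] / len(targets)
            for w in targets:
                new[w] += share
        rank = new
    return rank

# Hypothetical citation graph: every other paper cites paper "a".
citations = {"a": [], "b": ["a"], "c": ["a", "b"], "d": ["a", "c"]}
ranks = pagerank(citations)
assert max(ranks, key=ranks.get) == "a"   # the most-cited paper ranks highest
```

IQRA-MC additionally weights the transitions by edge features (text similarity, query similarity, age decay) instead of splitting each node's score uniformly among its targets.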
Upon testing, Totti reports that a simpler approach of just recommending the top cited articles within this network yields better recommendations. He calls this the IQRA-TC approach. The evaluations were conducted on a subset of the CiteSeerX dataset with 657,000 publications. Offline evaluations were conducted and MAP as well as NDCG were reported in comparison to a number of baseline approaches including PageRank, Google Scholar, and ArnetMiner. The test set was created by domain experts by choosing 20-30 papers from the dataset for each query which was to be evaluated. Experiments were separately conducted for a subset of this dataset to evaluate the performance on survey papers and using the paper title as the query itself. They report that the IQRA-TC model outperforms all the other approaches on this gold-standard dataset. The authors have also made this dataset available publicly. \subsection{DiSCern} Chakraborty et al. developed a citation recommendation system \cite{chakraborty2015discern} in 2013, where they take a search query as the input. They find relevant articles related to the search query using the keywords listed in those papers and construct a citation graph between these articles. Then, using the DiSCern algorithm, they find the most influential nodes in this graph. The DiSCern algorithm is a variation of a vertex-reinforced random walk approach where the transition probabilities between nodes are dynamic over the course of the iterations. However, Chakraborty et al. report that there is no appreciable improvement in precision or recall between their algorithm and common alternatives such as PageRank. They do, however, say that DiSCern gives a much more diverse recommendation set as compared to other approaches, which might help with serendipitous recommendations. Their experiments were conducted on the Microsoft Academic Research Dataset and a High Energy Physics Dataset. 
For these offline experiments they compare DiSCern generated rankings with PageRank using both relevancy metrics and diversity metrics. DiSCern only shows improvements in the diversity metrics, such as the l-hop graph density and the l-expansion ratio, which are both described in the paper. \subsection{Ferosa} Chakraborty et al. created another paper recommendation system after their work with DiSCern, in 2016 \cite{chakraborty2016ferosa}. Ferosa not only recommends papers but also tags each recommendation according to which parts of the recommended paper are the basis for the recommendation for the given input paper. These tags are: alternate approach, background, methods, experiments etc. This is useful because it allows the researcher to quickly search for the kind of recommendations they want. This type of recommendation is labelled a faceted recommendation by Chakraborty in his paper. Based on the initial input paper, a network is created by considering all the citing and cited papers. Then, using a random walk approach with restarts, a subset of this network is combined with papers that have content similarity with the target article to generate the final recommendations. This is done for each tag separately. However, in order to compare against existing recommendation systems, they also use an aggregation method to select the overall top recommendations in a method called r-Ferosa. Their evaluation is done on the AAN corpus of papers using user studies. They measure the Overall Precision and Overall Impression. They compare against Google Scholar, Microsoft Academic Search and another graph based approach proposed by Liang et al. \cite{DBLP:conf/waim/LiangLQ11}. The authors present user studies whose results favor the r-Ferosa framework of paper recommendations. 
An evaluation was also conducted to show the effectiveness of the faceted recommendations in comparison with two other self-constructed baselines, as the authors claim that no other faceted recommendation system currently exists to compare against. \subsection{ClusCite} ClusCite is a citation recommendation system that operates on the entire corpus at once \cite{ren2014cluscite}. The approach taken is to cluster the citation network of the entire corpus into soft clusters and then to learn a recommendation function for each of these clusters separately. This is done by extracting meta-information from the graph as features and optimizing an equation which derives the relative authority of papers within each cluster group. When a query paper is submitted, they classify this new paper into one of these clusters and, using the devised function, identify the relevant papers to be recommended. Experiments are conducted on the Pub-Med\footnote{\url{https://github.com/shanzhenren2/PubMed\_subset}} and DBLP corpora\footnote{\url{http://arnetminer.org/DBLP\_Citation}} and they show substantial improvements over existing approaches in an offline evaluation. In their evaluation, Ren et al. report performance improvements against systems like rank-SVM \cite{Nie:2005:ORB:1060745.1060828}, an LDA approach, Link-PLSA-LDA \cite{Nallapati:2008:JLT:1401890.1401957}, and authority propagation \cite{Joachims:2002:OSE:775047.775067}. The authors also present results from a case study to emphasize how their approach is superior to the other compared approaches by listing the most authoritative venues, authors and papers in a few sample clusters. Their approach, being a citation recommendation system, is lacking in the area of serendipitous recommendations. \subsection{Exploiting Social Relations} Huynh et al. developed a method that leverages users’ professional relations in the academic field to improve research paper recommendations \cite{huynh2016exploiting}. 
They reduced the task of recommendation to the task of training a function from the domain of the product of the set of papers and researchers into the range of a ranked list of papers. The method they have developed is offline, which means that the training time is all that is considered; once the recommendations are made, they are stored in memory and made available when needed to the users. Their approach focuses on extracting features from the citation and co-author network in the academic graph. To study their approach, they made use of the Microsoft Academic Search corpus and computed the recommendations for 1000 authors. They mention that the training time is large, which is why they picked only 1000 authors to demonstrate their procedure. They compared their results against the results from Sugiyama’s work \cite{Sugiyama:2015:THR:2719943.2719947} in an offline evaluation. For this they prepared the dataset by selecting the papers published prior to 2006 as the training set and the papers published after that as the test set. Their results show that this “Trend Trust” approach is slightly superior to Sugiyama’s work, and vastly superior to the vanilla baseline approaches of Content Based and Collaborative Filtering. \subsection{BABEL: EigenFactor Recommends} BABEL is a web application developed by Wesley-Smith et al., released in 2016, that aims to be a platform for researchers to test out new algorithms for paper recommendations \cite{wesley2015experimental}. It was developed using the SSRN social science corpus, but has tools to use other corpora as well. The platform captures metrics such as the click-through rate in an attempt to quantify the performance of the experimental algorithms which are hosted on the platform. One of these experimental algorithms is EigenFactor Recommends, developed and tested by the authors. 
This method involves harnessing the citation graph to calculate eigenvector centrality in each sub-topic of the corpus at an article level. This way, key papers within each sub-field are identified in an offline fashion. Their method has two variants: one optimizes for authoritative papers, and the other optimizes for serendipitous recommendations. The authors evaluated the system for a week by tracking the system’s click-through rate (CTR) for both EigenFactor variants against a control of the co-download recommendation system. Their results show that the co-download system performed 4-5 times better than either of the two variants in terms of CTR. \subsection{Academic Paper Recommendation Based on Heterogeneous Graph} Pan et al. convert the recommendation problem into a graph similarity problem in this work \cite{pan2015academic}. They use the AAN corpus for their research. They construct a citation graph and a word-word similarity graph (using WordNet). The two graphs are interconnected using the tfidf scores for the words in the papers. The authors then use graph similarity to calculate a similarity score between every two papers in the dataset, which is a computationally expensive task. Their final recommendations are the n most similar recommendations to the target paper. Because the process is computationally expensive, the authors only evaluate this method using 15 input papers in an offline evaluation. They compare their results against other graph based models such as CC-IDF, Co-citation, HITS and a purely content based method. Their approach performs substantially better in terms of NDCG and MRR scores; however, as explained before, the process is computationally infeasible for large scale recommendation. \section{Hybrid Recommendation Systems} \subsection{AHITS-UPT} Devised by Lu Meilian et al., AHITS-UPT stands for Advanced Hyperlink Induced Topic Search with User Paper Topic network \cite{meilian2015ahits}. 
It was developed with the aim of giving serendipitous recommendations with content based filtering. AHITS is an improvement to an iterative algorithm (HITS) which is meant to assign author and paper values to each author and paper in a network given the constraints of the network. This is more or less a graph ranking algorithm (one that is meant to identify important nodes in the graph). Their framework is a hybrid model because, after using LDA to find the topics of the papers an author is interested in, they find other users who have similar topic profiles. For this, they have kept track of user data. If among these users there are topics which are not explored by the target user, the system finds the top papers from the top authors in those topics. Finally, the top recommendations are the candidate papers most similar (in terms of cosine similarity) to the target paper. The system was evaluated in three different ways. A time complexity analysis was done. A comparison was done between the authoritative author rate and the high quality paper rate. Finally, the authors compared the accuracy of the AHITS-UPT recommendation system in an offline fashion. The experiments were done on a Microsoft Academic Search dataset comprising 160,000 articles for 10 unique users over 829 user interactions. The comparison was against the HITS recommendation method, the MHITS recommendation method and a traditional content based filtering method, although the content based filtering method was left out of the final results. They show that the AHITS method was superior to the other HITS based methods, as well as being computationally less expensive. \subsection{Document Recommender Agent Based on Hybrid Approach} This approach presented by Chekima et al. is a simple hybrid model in the sense that it divides the recommendations between a collaborative approach and a content based approach which is based on bigrams \cite{chekima2014document}. 
The user profile for the collaborative filter is constructed by keeping track of all the articles that a user interacts with through the recommender agent. A recommendation is made by finding similar users who have also browsed the paper that the target user is currently browsing and considering the papers that those similar users have also read. Then the weights of these candidate papers are increased or decreased depending on the category that the paper falls into (using the ACM category keywords), by matching the bigrams from the article’s keywords with the keywords of the articles that the target user has already browsed. In case there are no similar users, Chekima’s system will recommend articles from the same ACM categories as the predominant category into which most of the papers that the user has browsed fall. This approach does not deal with the issue of a cold start, as no recommendations will be made if the user has not browsed any article yet. The evaluation was carried out against very basic content based filtering and collaborative filtering approaches to show that a hybrid model constituting a mixture of the features of the separate approaches performs better. The authors did not provide information on the dataset that they used for the above comparison. \subsection{Rec4LRW} This is a complete literature review framework built by Sesagiri Raamkumar et al. in 2015, meant to be an all in one tool for research \cite{raamkumar2015rec4lrw}. The framework comprises three separate steps, which happen at three different points of the research process. Initially, when a researcher wants exposure to a new topic, he is to provide keywords and an optional list of research articles as the basis for the recommendation. This is the initial input. The framework then uses a keyword similarity method to find an initial set of similar papers. 
This initial set of papers is re-ranked based on coverage and citation count, and the 20 top papers are shortlisted for the first phase. In the second phase, the original list of 20 papers is expanded by making use of the citation network and a content based recommender which utilizes the BM25 algorithm \cite{Jones:2000:PMI:364119.364120}. For the third phase, the framework takes in two additional inputs from the user: the type of research article, and the potential keywords that the user will be using in their manuscript. In combination with the final reading list of the user for that project, the third phase recommends a list of citations based on the reading list, the potential keywords and the preferred type of research article. This is done by making use of an item-based collaborative filtering approach to get the candidate set of papers. The candidate set is ranked based on textual similarity with articles in the reading list and the titles of the papers in the reading list to select the top 20 papers as citation recommendations for the project. The evaluation for this system was done over a period of three months with user studies \cite{raamkumar2016papers}, and the results were published in a follow up article in 2016. The user study consisted of 116 participants equally split between staff and students. The study reported agreement percentages for 7 qualitative attributes of the system such as Relevance, Usefulness etc. This paper is an interesting and comprehensive approach to article recommendation. \subsection{Hybrid Parallel Approach for Personalized Literature Recommendation System} This system prototyped by Ma et al. addresses the issue of overcoming cold-starts in the collaborative filtering approach by first categorizing each document in the corpus using topic models \cite{ma2014hybrid}. A scraper and parser collects all the publicly available documents and categorizes them into various fields. 
For a new user, a common stereotype set of recommendations is made. The prototype then logs user actions in order to build a user profile over time. This user profile is used in collaborative filtering to create new recommendations for the user. No evaluation of their system was provided in their work. \section{Baselines for Research Paper Recommendation} In this literature review, there was no common set of approaches that were used as baselines for research paper recommender systems. So the focus was expanded to look for baselines in a more general field, and to apply the principles used in these larger fields to the sub-field. In the broader field of Information Retrieval (IR), Muehleisen et al. illustrate how IR systems which are reported to be implementing the same baseline approach can have differing results on the same dataset \cite{Muhleisen}. The differences might be caused by different implementations of the backend, or different parameters in the algorithms that were used by the backend. Lin et al. address the gap in reproducibility by making available a repository containing all the code needed to run a set of standardized open-source IR baselines for the TREC dataset. The baselines are executed with one common script on a virtual machine to ensure reproducibility of the selected algorithms \cite{DBLP:conf/ecir/LinCTCCFIMV16}. The baselines used in their work were widely available, simple to implement, and relevant to the techniques being studied. An analogous common set of approaches is missing in the research paper recommendation domain, to the best of our knowledge. Beel et al. have shown that similar recommendation approaches with only minor variations may perform vastly differently in evaluations against different datasets \cite{Beel2016z}. One way to account for the performance differences would be to use a set of baselines, each of which uses a different recommendation idea. 
For example, we could use a baseline set comprising CBF, collaborative filtering, and graph based approaches. \chapter{Methodology} \label{Chapter3} \lhead{Methodology} \section{What is a Good Baseline?} For literature recommendation, the set of baseline approaches should vary depending on the novel approach that is being evaluated. For example, it would not be wise to evaluate a citation based approach against a stereotype and a content based approach but not against another citation based approach. Likewise, for related article search, where the user model constitutes just one document, it would be ill-advised to use a citation based or collaborative filtering baseline that requires a richer user history. Thus, a good baseline approach should be: \begin{enumerate} \item \textbf{Easy to Reproduce:} The implementation of baselines to compare novel approaches against should not be a stumbling block in the process of research. The parameters in the approaches, if any, should be made clear to aid reproducibility. \item \textbf{Relevant to the task:} The baselines should fit the scope of the evaluation. If the novel approach is a CBF approach, it is advisable to compare it against at least one other CBF baseline, as well as other types of recommendations. \item \textbf{System agnostic:} An approach that works for a wider set of documents has an advantage over an approach that has limited operability to a subset of the corpus, as the former approach can be used in a wider variety of recommendation systems. For example, baselines that are inherently multilingual in nature can be used both in scenarios where the document corpus is multilingual, and in scenarios where the corpus is of only one language. \end{enumerate} \section{Choosing Baselines} The baselines that we chose use only the information from the documents. 
We avoided the use of external information, thus allowing our suggested approaches to remain compatible with other document catalogs when they are implemented on other recommender systems. Research paper recommendation approaches can be categorized into content-based filtering (CBF), collaborative filtering (CF), graph-based filtering, stereotype, and hybrid models. The baselines we chose have to be relevant to the task of recommending for a related article search. CF, graph-based, and hybrid approaches were eliminated from the shortlist because they are not suitable for related article search. \subsection{Random} This approach randomly picks the set of documents to recommend to the user. We experiment with this approach by randomly choosing to apply a language filter 50\% of the time. With the language filter, the recommended documents share the language of the input document. The average time complexity of this approach is O(n), where n is the number of recommended documents. \subsection{Lucene's MoreLikeThis (MLT)} This is the most common approach used for comparison against new CBF approaches. The approach concatenates and tokenizes the title, abstract, keywords, and journal name using Apache Lucene's\footnote{http://lucene.apache.org/core} out-of-the-box \textit{Standard Tokenizer}. The tokens are then indexed, and recommendations are made using Lucene's \textit{More Like This} feature. We chose this approach because it consistently provides recommendations and can be easily implemented by researchers looking for a standard content-based filtering baseline. This approach takes O($|$D$|$ * $|$T$|$) time, where $|$T$|$ is the size of the term vocabulary and $|$D$|$ is the size of the document corpus. \subsection{Stereotype} Stereotyping uses a very primitive user modeling strategy with fixed recommendation classes. Users are classified, or stereotyped, into generic groups, and each group is assigned the same set of recommendations.
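Because the stereotype classes and their recommendation lists are fixed in advance, the approach reduces to a precompiled lookup. A minimal sketch (the class and document names here are illustrative, not those used in the live system; the real implementation is in Java):

```python
# Precompiled mapping from stereotype class to recommendation list.
# The class name "researcher" and the document titles are illustrative only.
STEREOTYPE_RECOMMENDATIONS = {
    "researcher": [
        "How to Write an Academic Paper",
        "Good Experimental Practices",
    ],
}

def recommend_for(stereotype):
    """Return the fixed recommendation list for a stereotype class.

    A single dictionary lookup, independent of corpus size.
    """
    return STEREOTYPE_RECOMMENDATIONS.get(stereotype, [])
```

The constant-time lookup is what makes this approach cheap to serve compared to the similarity-based baselines.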
In this study, we stereotyped all users as researchers and selected documents that are of common research interest, such as documents related to 'Academic Writing' or 'Experimental Practices'. The Stereotype approach has an average time complexity of O(1), as the recommendations are precompiled. \subsection{Most Popular} The most popular research documents according to our partner's most-viewed and most-exported lists are provided as recommendations to the users. This approach is also a database read in our implementation and can be done in constant time. \subsection{Key-phrase Based} This is an advanced approach that adapts the key-phrase based approach of Ferrara et al.\cite{Ferrara2011} Whereas the original approach requires the full text of a paper to build acceptable key-phrases, we adapted it to do so even with only the title of the paper as input. We parametrized this approach, and the best parameters were empirically found to be a similarity search using three unigram and three trigram key-phrases computed from the title and, if available, the abstract\cite{keyphrase}. Using Lucene, this approach has an average-case time complexity of O($|$D$|$ * $|$K$|$), where $|$K$|$ is the size of the keyphrase vocabulary and $|$D$|$ is the size of the document corpus. In Table 3.1, we describe how each of our chosen baselines coheres with the criteria that we recommended in the previous section. For the convenience of the reader, each approach is rated as Moderate, High, or Very High.
\begin{table} \centering \caption{Coherence of Baselines with Criteria} \label{tab:baseline-criteria} \begin{tabular}{l|l|l|l} \textbf{Approach} & \textbf{Ease to Reproduce} & \textbf{Task Relevancy} & \textbf{System Agnostic} \\ \hline \textbf{Stereotype} & \begin{tabular}[c]{@{}l@{}}Moderate -- \\ As long as\\ stereotype \\ classes \\ are the same\end{tabular} & \begin{tabular}[c]{@{}l@{}}High -- It is a\\ comparison against\\ a human-curated \\ recommendation list\end{tabular} & \begin{tabular}[c]{@{}l@{}}High -- Stereotypes\\ can be created\\ for any data corpus\end{tabular} \\ \hline \textbf{Random} & Very High & Moderate & Very High \\ \hline \textbf{Lucene MLT} & \begin{tabular}[c]{@{}l@{}}High -- \\ Out-of-the-box\\ Lucene setup\end{tabular} & \begin{tabular}[c]{@{}l@{}}High -- uses all fields\\ available for a related \\ paper search\end{tabular} & \begin{tabular}[c]{@{}l@{}}Very High -- Applicable\\ to all documents\\ in the corpus\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}\textbf{Most} \\ \textbf{Popular}\end{tabular} & \begin{tabular}[c]{@{}l@{}}High --\\ Over a period \\ of time\end{tabular} & \begin{tabular}[c]{@{}l@{}}Moderate -- Raises the\\ question: Are\\ recommendations\\ supposed to be\\ serendipitous or not?\end{tabular} & \begin{tabular}[c]{@{}l@{}}Very High -- Does not\\ depend on the\\ document for which \\ recommendations\\ are requested\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}\textbf{Keyphrase}\\ \textbf{Based}\end{tabular} & \begin{tabular}[c]{@{}l@{}}High -- \\ Pipeline is \\ available on GitHub\end{tabular} & \begin{tabular}[c]{@{}l@{}}High -- Related paper\\ search favors \\ CBF approaches\\ such as this one\end{tabular} & \begin{tabular}[c]{@{}l@{}}Moderate -- Current\\ version not \\ multilingual\end{tabular} \end{tabular} \end{table} \section{Data Catalog} \subsection{Description} We apply and evaluate the suggested baseline algorithms in a recommender system, which is integrated into a digital library containing more than 9.5 million
documents called Sowiport, which is hosted by GESIS. The documents span at least 59 languages. Almost 40\% of the documents, around 4 million, have an abstract; 70\% of these abstracts, covering 3.3 million documents, are in English. 6 million documents only have a title associated with them, of which 2 million are in English.\\ \\ \subsection{Fields Available} Each document in the catalog contains the title, the journal it was published in, the publication year, keywords as assigned by the author, and an optional abstract. \begin{table} \caption{Multilingualism of the catalog} \begin{tabular}{llr} \hline\noalign{\smallskip} Language & Title & Abstract \\ \noalign{\smallskip} \hline \noalign{\smallskip} English & 5,356,952 \ \ \ \ \ \ \ \ \ \ \ & 3,353,406\\ German & 2,045,562 & 641,263\\ No Language Specified & 1,470,385 & 0\\ All 57 Other Languages & 632,846 & 3,300\\ \hline\noalign{\smallskip} Total & 9,505,745 & 3,998,029\\ \hline\noalign{\smallskip} There are 5,667,917 documents without an abstract \\ \hline \end{tabular} \end{table} \section{Experiment} \subsection{Baseline Comparison} The ongoing experiments are conducted by providing recommendations based on our partner's catalogue and displaying them on our partner's digital library. We track the recommendation requests for documents and the clicks on the recommended documents. Online evaluations incorporate the human aspect of recommendations, whereas offline evaluations, such as Mean Average Precision, use less information than the CTR does; Beel et al. have also shown that offline results do not reliably predict online performance \cite{Beel:2013:CAO:2532508.2532511}. We therefore use CTR as the measure for evaluating the baseline approaches. \begin{equation} CTR =\frac{Number\ of\ Clicks\ Recorded}{Number\ of\ Recommendations\ Delivered} \end{equation} We compare the performance of the five baseline algorithms over a period of a month.
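In code, the CTR defined above is a simple ratio of clicks to delivered recommendations. A minimal sketch (the function name is ours; the figures in the comment are illustrative):

```python
def click_through_rate(clicks, delivered):
    """CTR as a percentage: clicks recorded / recommendations delivered."""
    if delivered == 0:
        raise ValueError("no recommendations delivered")
    return 100.0 * clicks / delivered

# e.g. 7,700 clicks on 3.6 million delivered recommendations
ctr = click_through_rate(7700, 3_600_000)
```

Because the denominator counts every delivered recommendation, not every request, a system that delivers six recommendations per request has six chances per request to earn a click.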
For each request, the recommendations use one of the five suggested approaches. If the input document is in English, the Keyphrase approach is chosen 70\% of the time, Stereotype 4\%, Most Popular 4\%, Random 2\%, and Lucene's MLT 20\%. Whenever the Keyphrase approach returns no related documents, we retry once with Lucene's MLT approach. For non-English documents, we currently do not use the Keyphrase approach and instead default to Lucene's MLT. This distribution of approaches was chosen because of the need to provide a high standard of recommendations to the users: it would not be ideal if they often received totally random recommendations, or the same stereotype recommendations every tenth time. The clicks on the recommended documents are logged and used to calculate and compare the CTRs of the set of baselines. For each recommendation request, we recommend six documents. \subsection{Analysis of Keyphrase Based Approach} We conducted three different experiments using the extracted keyphrases: \begin{enumerate} \item By providing recommendations using keyphrases generated from the title only, as opposed to keyphrases constructed from the title and abstract of the document. \item By using different combinations of unigram, bigram, and trigram keyphrases. For instance, we provide recommendations using unigrams and trigrams constructed from the title only. \item By randomly choosing the number of keyphrases used in the similarity calculations. If we chose to use 5 keyphrases for a recommendation, we would use the 5 keyphrases with the highest keyphraseness score. This way, we could compare, for example, the effectiveness of recommendations provided using five unigrams and five trigrams constructed from the title and the abstract of a document. \end{enumerate} To evaluate the recommendations that Mr. DLib provides, we use the Click-Through Rate, the percentage of delivered recommendations that have been clicked by the user.
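The per-request approach selection described above (70/20/4/4/2 for English input documents, MLT for everything else) can be sketched as a weighted random draw. This is an illustrative sketch, not the production Java code; the approach names are ours:

```python
import random

# Percentages for English input documents, taken from the experiment design.
WEIGHTS = {
    "keyphrase": 70,
    "lucene_mlt": 20,
    "stereotype": 4,
    "most_popular": 4,
    "random": 2,
}

def pick_approach(is_english, rng):
    """Choose the recommendation approach for one request.

    Non-English documents always fall back to Lucene MLT, mirroring
    the experiment setup.
    """
    if not is_english:
        return "lucene_mlt"
    names = list(WEIGHTS)
    return rng.choices(names, weights=[WEIGHTS[n] for n in names], k=1)[0]
```

The empty-result fallback (Keyphrase returning nothing, retried with MLT) would wrap this selection in the serving path; it is omitted here for brevity.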
All data collected will be shared on the Harvard Dataverse. All comparisons are done using a t-test, and results significant at a p-value of 0.05 are reported. To create baselines against which to compare the performance of the Keyphrase approach, we implemented two other algorithms in Mr. DLib: \begin{enumerate} \item We used Lucene's MLT feature to recommend documents related to a query document. In using this feature, we indexed all textual metadata related to a document in Lucene, such as title, abstract, and author-provided keywords. Related documents are then recommended using Lucene's built-in BM25 similarity metric. All other settings of the MLT function were left at their out-of-the-box values in Lucene 6.3.0. \item A completely random baseline by which we selected random English-language documents as recommendations for any query document. This was conceived as a control in the experiment. \end{enumerate} \chapter{Implementation} \label{Chapter4} \lhead{Implementation} \section{Mr. DLib - Literature Recommendation As A Service} Mr. DLib is a machine-readable digital library\cite{beel2011introducing} which has recently been extended with a function that allows third parties to request literature recommendations\cite{beelmrjdcl} from Mr. DLib. These recommendations can then be displayed on their own websites. Mr. DLib currently has two partners which use the API for recommendations. The first is the Sowiport digital library, maintained by GESIS\footnote{www.gesis.org} in Germany. The recommendations are displayed on each article's page using a JavaScript element which dynamically requests and loads the recommendations. This process is described further below in this section. The second partner is the reference management application JabRef\footnote{www.jabref.org}. Mr. DLib was integrated through a related-articles tab which dynamically calls the API and processes the XML returned from the query\cite{Feyer2017}.
The integration was done so that a majority of the control over the display of the recommendations remains with Mr. DLib. This was achieved by returning HTML snippets that contain the formatting as well as the content of each delivered recommendation. The formatting can, of course, be changed on Mr. DLib's end, thus allowing live experimentation with different formatting and display options without needing to change the code in the JabRef project. Maintaining and building a recommender system as a service such as Mr. DLib comes with many benefits and challenges\cite{DBLP:conf/ecir/BeelD17}. The benefits include the ability to simultaneously compare the performance of many different recommendation approaches in a real-world setting; the data collected can be considered more representative than data from lab experiments. Another benefit is that approaches can be tweaked based on historical performance as well as on the comparative performance of other approaches. For example, the same approach can be tested using various parameter settings to find what can be considered the optimal setting. A third advantage is that recommendation approaches can be compared across different document corpora to see if there are changes in performance across corpora. The challenges include having to deal with extremely noisy data and creating a scalable way to implement a distribution for the randomization of approaches. Mr. DLib's architecture is depicted in Fig. 4.1. Mr. DLib is mostly developed in Java and uses standard tools and libraries whenever possible. The central element of Mr. DLib is its Master Storage, namely a MySQL database. This database contains all relevant data, including documents' metadata and statistics on delivered recommendations. Metadata of documents includes: \begin{enumerate} \item Mr.
DLib’s document ID \item Partner’s document ID \item Title \item Authors \item Abstract \item Keywords \item Published in (generic field for journal name, conference, etc.) \item Language \item Publication year \item Document type (journal article, conference article, …) \end{enumerate} \begin{figure} \centering \includegraphics[scale=0.2]{Figures/MrDLib_Architecture.pdf} \caption{Mr. DLib system's architecture} \label{fig:4.1} \end{figure} The “Content Retriever” downloads the partners’ content once a month. Currently, only one partner, GESIS, provides a document corpus of 9.5 million documents, delivered as a Solr XML export. The XML files are backed up on Mr. DLib's server, and the relevant metadata of the documents is then stored in the database. Although GESIS provides full texts for some documents, Mr. DLib does not yet utilize them in its recommender system due to storage and CPU constraints. To generate recommendations, Mr. DLib uses Apache Solr/Lucene as its recommendation framework. Lucene offers an integrated “More like this” function that calculates content-based document similarities. This standard approach is applied to the documents’ titles and abstracts to find related documents for a given input document. While currently this simple standard approach is used, it is our highest priority to increase recommendation effectiveness. Therefore, we are currently experimenting with increasing ranking accuracy based on Mendeley readership data, utilizing semantic modelling, and applying the knowledge from our previous research. In the long run, more than one recommendation framework will be used by Mr. DLib. Potential further recommendation frameworks are, for instance, Apache Mahout and LensKit. The prospect of using different recommendation frameworks is also the reason why we decided to have one master storage with all information.
This way, every recommendation framework can retrieve the required data from this central storage. Storing the partner’s data directly from the XML files in the various recommendation frameworks would be error prone. The creators of the recommendation platform were also among the creators of the recommender system in Docear, a reference management software, so the architecture of Mr. DLib borrows from the architecture of the recommender system in Docear\cite{Beel2014bdocearArch}. The ‘Scientometric Data Fetcher’ gathers data from external sources to enhance the recommendation process. Currently, Mr. DLib requests for each document the readership statistics from Mendeley’s API. In the future, further data such as citation counts might be fetched, e.g. from Google Scholar. The readership statistics are used to re-rank recommendations based on a document’s popularity on Mendeley. Incorporating bibliometrics into a research paper recommender system is meant to identify the most reliable and important research articles from the larger set of relevant recommendations\cite{DBLP:conf/ecir/SiebertDF17}. Mr. DLib offers a REST API, i.e. partners may send requests as HTTP requests (typically GET). To retrieve recommendations, the partner calls https://api.mr-dlib.org/v1/documents/<partner-document\_id>/related\_documents/ and retrieves an XML response containing a list of related documents (JSON support is planned). Mr. DLib’s web service is realized with Apache Tomcat and Java Jersey. The proprietary “API Manager” writes some statistics to the database and forwards the requests to the proprietary “Recommendation Manager”. The “Recommendation Manager” (Java) handles all processes related to the recommendations. It looks up required data from the database (e.g. matches the partner’s document ID from the URL with Mr.
DLib’s internal document ID), decides which recommendation framework to use, calculates and stores statistics, and re-ranks recommendation candidates based on scientometrics\footnote{The description of the architecture is copied from Mr. DLib's official documentation.}. \section{Recommendations to the Digital Library} The ongoing experiments are conducted by providing recommendations using the Sowiport catalogue and displaying the recommendations on our partner's digital library. We track the recommendation requests for documents and the clicks on the recommended documents. As seen in Fig. 4.2, when a user accesses a document in the digital library, he/she is provided with all available fields pertaining to the document, such as title, abstract, year of publication, publication journal, keywords, and citations. Additionally, a number of documents which are supposed to be related to the original document are displayed on the left-hand side. Henceforth, we will refer to the accessed document as the requested document. All recommended documents form a recommendation set. A recommendation set usually contains six related documents, and fewer if the system could not find six related documents. The question of how many recommendations to display relates to the problem of choice overload and has been analyzed\cite{beierle2017exploring} as part of the Mr. DLib project. The findings of that experiment were lower click-through rates for higher numbers of displayed recommendations, but twice as many clicked recommendations when displaying ten related articles instead of one. These results indicate that users might quickly feel overloaded by choice. With this in mind, for the experiments that we conducted, we decided to use six as the size of the recommendation set.
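On the partner side, consuming the REST API amounts to fetching the XML response and extracting the related documents. The sketch below parses a hand-written sample response; the element names and the click URL are illustrative assumptions, since the real schema is the one returned by Mr. DLib's API (shown in Fig. 4.3) and may differ:

```python
import xml.etree.ElementTree as ET

# Illustrative sample response; the actual element names used by
# Mr. DLib's API may differ from this sketch.
SAMPLE = """\
<related_articles>
  <related_article id="42">
    <title>Example related paper</title>
    <click_url>https://example.org/click/42</click_url>
  </related_article>
</related_articles>"""

def parse_recommendations(xml_text):
    """Extract (id, title, click URL) triples from an XML response."""
    root = ET.fromstring(xml_text)
    recs = []
    for node in root.iter("related_article"):
        recs.append({
            "id": node.get("id"),
            "title": node.findtext("title"),
            "click_url": node.findtext("click_url"),
        })
    return recs
```

Routing each recommendation's hyperlink through a Mr. DLib click URL is what lets the server log the click before redirecting to the document's landing page.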
\begin{figure} \centering \includegraphics[scale=0.5]{Figures/Sowieport_landing.PNG} \caption{Landing page on Sowiport for a research article} \label{fig:4.2} \end{figure} Each time a document is accessed in the digital library, Mr. DLib's partner, GESIS, sends a recommendation request to Mr. DLib's servers. The recommendation request is processed, and an XML output is returned, as shown in Fig. 4.3. This XML output has all the information needed to generate the display on the side of the digital library. \begin{figure} \centering \includegraphics[scale=0.5]{Figures/Xml_output.PNG} \caption{XML output from Mr. DLib's system} \label{fig:4.3} \end{figure} The XML output is then parsed by our partner and rendered in a column format on the left-hand side of the landing page, as seen in Fig. 4.4. When a user is interested in one of these displayed recommendations, he/she clicks on the hyperlink attached to the recommendation. The click is forwarded to Mr. DLib's servers in order to record it in the database, and the user is then redirected to the landing page of the clicked document. The clicked document is opened in a new tab, in order to allow the user the opportunity to click on other recommendations as well. \begin{figure} \centering \includegraphics[scale=0.5]{Figures/Recommendations.PNG} \caption{The XML output is parsed and displayed on the left-hand side of the requested document in Sowiport} \label{fig:4.4} \end{figure} \section{Implementation of Keyphrase approach} We extracted keyphrases, which are automatically extracted keywords, as per the methodology described in Ferrara's work\cite{Ferrara2011}, from the title of all documents, as well as from the title and abstract of the 3.3 million documents with English abstracts. The extraction of keyphrases is a multi-step process using the open-source Distiller framework[22].
Distiller provides an easy-to-use pipeline that automates the extraction of keyphrases by specifying the steps in the pipeline. A flowchart of the steps can be seen in Fig. 4.5. The title and, if used, the abstract of each document is tokenized, POS-tagged, stripped of stop words, and stemmed using the Porter Stemmer to generate a stream of tokens. From these tokens, contiguous combinations which match pre-specified POS patterns are selected as candidate keyphrases. \begin{figure} \centering \includegraphics[scale=1.5]{Figures/keyphrases.png} \caption{Steps involved in constructing keyphrases from a document} \label{fig:4.5} \end{figure} For instance, two pre-specified patterns are “NN” and “NN/NN/NN”, which stand for ‘noun, singular or mass’ and ‘a sequence of three nouns’, respectively. Consider an example title: “Research Paper Recommender Systems – A quantitative study of performance”. Fig. 4.6 shows the results of processing this title through the pipeline, and Fig. 4.7 presents a few candidate keyphrases that can be extracted from this title. \begin{figure} \centering \includegraphics[scale=0.5]{Figures/Exampleskeyphrases.png} \caption{Example of Steps in Keyphrase Extraction Pipeline} \label{fig:4.6} \end{figure} Next, each candidate is annotated with statistical information such as the depth in the document at which it was extracted, the portion of the document spanned by the first and last occurrences of the keyphrase, called its lifespan, and the number of occurrences of the keyphrase. Each candidate is then given a score corresponding to a concept known as document phrase maximality, which expresses how much that candidate keyphrase is a concept in its own right.
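The candidate-generation step above can be illustrated in a much-simplified form. The toy POS lexicon below stands in for a real tagger, and stop-word removal and Porter stemming are omitted; the actual pipeline uses the Distiller framework:

```python
import re

# Toy POS lexicon standing in for a real POS tagger (assumption for this
# sketch); words not listed are treated as unmatchable.
POS = {
    "research": "NN", "paper": "NN", "recommender": "NN",
    "systems": "NN", "study": "NN", "performance": "NN",
    "quantitative": "JJ",
}

# Contiguous POS sequences accepted as candidates, e.g. "NN" and "NN/NN/NN".
PATTERNS = [("NN",), ("NN", "NN"), ("NN", "NN", "NN")]

def candidate_keyphrases(title):
    """Collect contiguous token windows whose POS tags match a pattern."""
    tokens = re.findall(r"[a-z]+", title.lower())
    tagged = [(t, POS.get(t)) for t in tokens]
    candidates = set()
    for pat in PATTERNS:
        n = len(pat)
        for i in range(len(tagged) - n + 1):
            window = tagged[i:i + n]
            if tuple(tag for _, tag in window) == pat:
                candidates.add(" ".join(word for word, _ in window))
    return candidates
```

Running this on the example title yields candidates such as "research paper" and "paper recommender systems", matching the kind of output shown in Fig. 4.7.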
\begin{figure} \centering \includegraphics[scale=0.5]{Figures/candidatekeyphrases.png} \caption{Examples of candidate keyphrases} \label{fig:4.7} \end{figure} \begin{figure} \centering \includegraphics[scale=0.5]{Figures/weights.png} \caption{Weights used to score candidate keyphrases} \label{fig:4.8} \end{figure} Finally, these scores are weighted and summed up to constitute a final score known as the keyphraseness score; the weights are shown in Fig. 4.8. We used different weights, chosen a priori, for extracting keyphrases from titles only as opposed to keyphrases from the title and abstract. The original work\cite{Ferrara2011} used an a priori cutoff on the keyphraseness score to choose keyphrases. In contrast, we retained the top 19 keyphrases in each of the unigram, bigram, and trigram categories, in descending order of keyphraseness score. These extracted keyphrases are used to calculate the similarity between documents. To calculate document similarities, we indexed each document’s keyphrases using Lucene 6.3.0. \chapter{Results} \label{Chapter5} \lhead{Results} \section{Data Collection} We recorded 3.6 million requests for recommendations, approximately 35,000 requests yielding 210,000 recommendations per day. In total, we received 7,700 clicks on the recommended documents. The CTR was 0.21\%. \section{Comparison of Baselines} We first compared the performance of the suggested baselines without accounting for the language of the input document. As illustrated in Fig. 5.1, in the first case, Lucene’s MLT approach, with a CTR of 0.229\%, outperforms every other approach. It was surprising that the Keyphrase approach (0.148\%) did not do better than even the random approach, with and without the language restriction (0.149\%, 0.159\%). The stereotype approach had a strong performance with a CTR of 0.194\%, confirming the usefulness of incorporating stereotypes as a baseline.
Since the initial results did not bring clarity to the usefulness of the Keyphrase approach, we repeated the analysis considering only recommendations for which the input document had an English title. As seen in Fig. 5.1, in the second case, it cannot be said with statistical significance that Lucene’s MLT, with a CTR of 0.169\%, outperforms the Keyphrase approach (CTR = 0.148\%). The recommendation performance of the suggested approaches also depends on the type of user requesting recommendations. This inference could explain the observation of Beel et al. that the performance of an approach varies from corpus to corpus\cite{beel}. For example, an explanation for these results might be that German users are more likely to click on a recommendation than users looking exclusively at English documents. It should also be noted that the Stereotype approach maintained its CTR at 0.204\%. On further study of the stereotype and most-popular recommendations over a longer period of time and 28 million delivered recommendations, it was found that most-popular recommendations achieved a CTR of 0.11\% and stereotype recommendations achieved a CTR of 0.124\%, compared to a “random recommendations” baseline (CTR 0.12\%) and a content-based filtering baseline (CTR 0.145\%)\cite{beel2017stereotype}. \section{Comparison By Language} We further drilled down into the data by comparing only recommendation requests for documents with English titles and English abstracts. In this case, the results favored the Keyphrase approach (0.170\%) over the other CBF approach, Lucene MLT (0.120\%), as well as over the random approaches. We infer that the Keyphrase approach does a better job of removing the clutter from the abstracts while retaining the meaning of the text when compared to Lucene’s MLT. Again, the stereotype approach performed well in relative terms, with a CTR of 0.178\%.
We cannot yet draw definitive conclusions separating the two random approaches because we do not have enough data. However, preliminary data shows that the random approach can serve as a useful baseline to evaluate other approaches against, because of its steady performance and ease of implementation. Finally, the differences in CTRs caused solely by the language of the input documents support our reasoning behind using a set of baselines as opposed to just one or two. \begin{figure} \centering \includegraphics[scale=0.85]{Figures/exp} \caption{Comparisons of Approaches by Recommendation Request Type} \end{figure} \section{Analysis of Keyphrase Approach} \subsection{Overall Comparison} The CTR for all 31 million recommendations Mr. DLib delivered in the period between 17th October 2016 and 17th January 2017 is 0.138\%. Of these, close to 24 million recommendations were made using Lucene’s MLT function, at a CTR of 0.147\%. These were primarily recommendations for German-language documents. The right two columns in Fig. 5.2 illustrate the number of delivered recommendations and the CTRs for recommendations delivered by Mr. DLib for English documents. The Keyphrase algorithm (CTR = 0.067\%) performed worse than the Lucene MLT implementation (CTR = 0.085\%). However, the Keyphrase algorithm did perform better than the completely random recommendations (0.055\%). \begin{figure} \centering \includegraphics[scale=0.7]{Figures/overallresults.png} \caption{Overall Comparison of Recommendation Algorithms} \end{figure} Fig. 5.2 also presents the split between the recommendations that Mr. DLib delivered for documents which had English titles only, as opposed to documents which had English titles and abstracts.
There is a big difference in CTRs between corresponding algorithms in the two categories; for instance, the Keyphrase algorithm recorded a CTR of 0.123\% for recommendations for documents with just the title in English, whereas the CTR dropped to 0.040\% for documents having English titles and abstracts. In both cases, the Lucene MLT implementation (CTR = 0.15\% and 0.047\%) outperformed the Keyphrase algorithm (CTR = 0.123\% and 0.040\%). While the outlook seems better when the Keyphrase algorithm is used only on English-titled documents, it did not outperform the random baseline, which also had a CTR of 0.123\%. \subsection{Inclusion of Abstract} Fig. 5.3 presents the number of delivered recommendations, click counts, and CTRs for Keyphrase-based recommendations made for documents which had English abstracts available. There is no statistically significant difference in the CTRs: 0.409\% when we used only the title, and 0.393\% when we used the title and abstract. \begin{figure} \centering \includegraphics[]{Figures/source.png} \caption{Comparison of Keyphrase Algorithm based on Source of Keyphrases} \end{figure} \subsection{Effect of N-gram Type} As there were noticeable differences between the CTRs for documents with an English abstract available compared to documents with only an English title, we split the analysis by the type of extracted keyphrase into two sections. The left side of Fig. 5.4 compares the CTRs of recommendations provided using different combinations of n-grams. The results show that no one particular combination of n-grams was more effective than the others. The range of CTRs was about 0.012 percentage points, with the lowest being trigrams at 0.068\% and the highest being a combination of unigrams and bigrams at 0.0803\%. When we provided recommendations for documents having an abstract, the average CTRs were lower, as described in Section 5.4.2. The results again do not indicate that one combination of n-grams had any advantage over the others.
\begin{figure} \centering \includegraphics[]{Figures/ngram.png} \caption{CTR comparison based on the n-gram type of extracted keyphrase} \end{figure} \subsection{Effect of Keyphrase Count} Fig. 5.5 outlays the experimental results for recommendations that used differing numbers of keyphrases, henceforth termed keyphrase\_count, to compare the similarity between documents. We grouped the counts for comparative purposes, as the data was sparse and skewed toward lower keyphrase\_counts. Once again, we have split the comparison based on whether the query document had an abstract or not. On the left-hand side, we display the results from the recommendations provided using keyphrases built out of just the title. As expected, with most titles being between 5 and 10 words, not many recommendations were delivered with keyphrase\_count > 10; most recommendations were made with a keyphrase\_count of 1 or 2 only. Additionally, the sparsity of the data when keyphrase\_count > 10 means that the high CTRs recorded in these cases might be anomalous. There is also no significant difference between the CTRs up to keyphrase\_count = 5. \begin{figure} \centering \includegraphics[]{Figures/keyphrasecount.png} \caption{Comparison of CTRs based on number of keyphrases used for document similarity} \end{figure} In the second case, where documents had abstracts available, the distribution of keyphrase\_count is more uniform because more candidate keyphrases are available. We observe that although the CTR is highest when keyphrase\_count is between 9 and 13 (0.054\%), the difference is not statistically significant compared to the CTR when 3 < keyphrase\_count < 5 (0.045\%). However, both these settings perform better than keyphrase\_count < 3 (0.035\%) or keyphrase\_count between 5 and 8 (0.035\%).
\chapter{Future Work} \label{Chapter6} \lhead{Future Work} Through the study of the data we collected in this research, we learned that there are significant variations in the click-through rate of recommendations depending on the language of the document for which we provided the recommendations. This means that there is scope to improve click-through rates by providing cross-language recommendations of literature documents. First, we will focus on the German-English language barrier. The steps in this process include: \begin{enumerate} \item Identify tools and frameworks to translate abstracts, titles, and keywords to English \item Translate documents and store the translations in our database \item Calculate keyphrases for the translated documents \item Re-index Solr (both for keyphrases and for the translated titles, abstracts, and keyphrases used by the Lucene MLT recommender) \item Rewrite code to experiment with cross-language recommendations \end{enumerate} We have identified three main types of machine translation frameworks: rule-based machine translation, statistical machine translation, and hybrids of these techniques. Usually, rule-based machine translation does not require much processing, but the translations thus formed generally lack grammatical correctness. Statistical machine translation, on the other hand, has generally been proven to be more effective in the accuracy of the translation, in the sense of the grammar and the meaning of the sentence. Thus, for the purpose of literature recommendations, it would be interesting to study which of these two methods provides the better result in terms of click-through rates. Of course, CTR might not be the ideal metric for this comparison; further study has to be conducted to identify relevant metrics for this field.
\chapter{Summary} \label{Chapter8} \lhead{Summary} The ever expanding digitization of research literature has created a need for efficient search, information retrieval and recommendation approaches in order to correctly identify and distribute relevant and personalized research documents to researchers. After identifying the need to progress the state of the art in the area of research paper recommendation for digital libraries, this thesis described the existing literature in this respect over the last four years. This was an addition to Beel's extensive literature survey for this field \cite{Beel:2013:CAO:2532508.2532511}. The literature survey in Chapter \ref{Chapter2} identified the lack of a common set of baselines to compare incremental innovations in related article search for research documents. This was a stumbling block in the speed with which the state of the art advanced in this area. In Chapter \ref{Chapter3}, we set out the methodology for this thesis. The basis of the experiments was described by identifying the features that we thought to be relevant in classifying a baseline as a ``good'' one or not. With these features in mind, five different recommendation approaches were selected and described that satisfied the aforementioned criteria. Finally, we explained the procedure of the experiment and the idea behind it. In Chapter \ref{Chapter4}, the implementation of the platform for the experiment, Mr. DLib, was illustrated and explained. Knowing the architecture of the system is a good way to source feedback and criticism to improve the performance of the system. Chapter \ref{Chapter4} also addressed the process of a literature recommendation and showcased the format by which these literature recommendations were displayed on our partner's digital library. Chapter \ref{Chapter5} discusses the results of the experiment and the conclusions that we were able to draw from it.
Although not entirely conclusive, the results were a great starting point in illustrating the need for a common set of baselines. It was observed that the Click Through Rates (CTR) differed not only by the choice of recommendation approach, but also by the inherent characteristics of the document for which we were recommending literature. One such characteristic that affected the CTR was the language of the original document. We observed that, Sowieport being a German-based digital library, there was an overall higher CTR for German documents and a correspondingly lower CTR for English documents. Thus, we believe that to fully understand the performance of an approach, particularly in a multilingual digital library such as Sowieport, it is important to make our recommendations cross language boundaries. Thus, in Chapter \ref{Chapter6} we set out the future work, which involves implementing cross-language literature recommendations on Mr. DLib. This involves translating abstracts and titles in order to make it possible to identify similar documents across language boundaries. There is also a focus on implementing citation based recommendations, provided there is access to document citations and references. As this is an experiment in a live setting, the results will become more and more concrete as time passes.
\section{Introduction} Gamma (face-centered cubic) iron exists in nature in a relatively narrow temperature interval from 1185 to 1660 K. In this temperature interval it is known to show Curie-Weiss behavior of the uniform magnetic susceptibility with a large negative Weiss temperature \cite{Susc1,Susc2,Susc3}. In Cu precipitates $\gamma$-iron can be stabilized down to very low temperatures, which allows studying its low-temperature magnetic properties. Early experimental studies have shown that this substance is a weak itinerant antiferromagnet with a Neel temperature of the order of $100$~K \cite{Neel1,Neel2,Neel3}. Later it was found \cite{Q1,Q2,Q3} that the corresponding incommensurate wave vector ${\bf Q} \approx 2\pi (1, 0.13, 0)$ in units of the inverse lattice parameter $a$ is close to the so-called AFM-I magnetic structure. Therefore, in contrast to $\alpha$-iron, which possesses short-range ferromagnetic correlations above the Curie temperature, $\gamma$-iron is expected to have short-range antiferromagnetic order above the Neel temperature. The stability of various ground states in $\gamma$-iron was analyzed theoretically within density functional theory (DFT) approaches \cite{BS1,BS2,BS3,BS4,BS5,BS6,BS7,GammaFM1,GammaFM2,Herper99,BS8,Zhang11,BS9,BS10}, which allowed one to reproduce the experimental wave vector \cite{BS5,BS6,BS7} at the lattice parameter corresponding to low temperatures (or precipitates), while at sufficiently large lattice parameter the ferromagnetic phase was shown to be stable \cite{GammaFM1,GammaFM2,Herper99,BS8,Zhang11}. These first-principles approaches also allowed one to obtain the lattice-constant dependence of the magnetic moment of $\gamma$-iron \cite{GammaFM1,GammaFM2,Herper99,Zhang11} and the corresponding magnetic exchange interactions \cite{BS9,BS10}.
The {\it ab initio} DFT approaches do not, however, allow one to treat correlation and temperature effects, which are often crucially important in strongly correlated materials such as iron. These effects may be especially pronounced in the presence of local magnetic moments, which appear in particular due to Hund's exchange interaction (in the so-called Hund's metals \cite{Hund1,Hund2,Hund3,HundOur}). Recent dynamical mean-field theory (DMFT) studies \cite{OurGamma} have shown partly formed local moments in $\gamma$-iron at not very low temperatures, allowing one to consider it as a Hund's metal in some temperature range (see also Ref. \cite{HundOur}). In particular, the inverse local susceptibility is approximately linear in temperature above $T^*\sim 500$~K, corresponding to a crossover temperature scale from local-moment to itinerant behavior. At temperatures below $T^*$ the local moments in $\gamma$-iron are screened by itinerant electrons. Indeed, fitting the inverse local susceptibility of Ref. \cite{OurGamma} at temperatures $T>T^*$ by the dependence $\chi_{\rm loc}^{-1}\propto T+2T_\textrm{K}$, determining the single-site Kondo temperature $T_\textrm{K}$, below which the local moments are screened \cite{Wilson}, yields $T_\textrm{K} \sim T^{*}$. At the same time, the local moments do not strongly decay at not very low temperatures, which is confirmed by the calculated temperature dependence of the dynamic local magnetic susceptibility \cite{OurGamma}. On the other hand, due to thermal expansion, at high temperatures $\gamma$-iron is expected to exhibit stronger ferromagnetic than antiferromagnetic correlations, as indicated by the DFT approaches \cite{GammaFM1,GammaFM2,Herper99,BS8,Zhang11} and experimental data \cite{GammaFechiq}. According to the comparison of the energies of antiferromagnetic and ferromagnetic phases in \textit{ab initio} studies (see, e.g., Refs.
\cite{GammaFM1,GammaFM2,Herper99,BS8}), the transition between these phases occurs at the value of the lattice constant corresponding to the temperature $T\sim 1000$~K, which is close to the $\alpha$-$\gamma$ transition. Therefore, one can expect a strong change of the magnetic properties of $\gamma$-iron from an itinerant antiferromagnet to a local-moment substance with dominating antiferromagnetic or ferromagnetic correlations with changing temperature. Although the dependence of the magnetic properties (in particular, exchange parameters) at zero temperature on the lattice constant was studied previously within DFT calculations, it seems important to investigate the effect of temperature and electronic correlations on the magnetic properties of this substance. In the present paper we apply the DFT+DMFT approach \cite{DFT+DMFT,Alpha_supercell} to study magnetic properties of $\gamma$-iron in a broad temperature range. In contrast to the previous study \cite{OurGamma}, we vary the lattice constant with changing temperature and, more importantly, use the supercell DMFT approach, considered previously for $\alpha$-iron \cite{Alpha_supercell}, to extract the momentum dependence of the magnetic susceptibility and exchange interaction, including local vertex corrections. We find that the character of magnetic fluctuations indeed changes from dominating antiferromagnetic ones at low temperatures to ferromagnetic ones at temperatures closer to the $\alpha$-$\gamma$ structural transition. We also obtain the corresponding magnetic exchange parameters. The plan of the paper is the following. In Sect. II we discuss the method, in Sect. III we present the results, and in Sect. IV we draw conclusions.
\section{Method and computational details} \label{sec:computational_details} \subsection{Supercell calculation of susceptibilities in DFT+DMFT} \label{SectIIB} First, we have performed DFT calculations using the full-potential linearized augmented-plane-wave method implemented in the ELK code, supplemented by the Wannier function projection procedure (Exciting-plus code). The Perdew-Burke-Ernzerhof form of GGA was used. The calculations were carried out with the experimental temperature dependence of the lattice constant in the temperature range where $\gamma$-iron exists in nature, ${a(T)=a_0 + a_1 T}$, where ${a_0=3.5519}$~\AA\ and $a_1=8.1593\! \times\! 10^{-5}$~\AA/K~\cite{Seki2005}; in the following we extrapolate this dependence to lower and higher temperatures. The convergence threshold for the total energy was set to $10^{-6}$~Ry. The integration in the reciprocal space was performed using an 18$\times$18$\times$18 $\textbf{k}$-point mesh for the unit cell, while 15$\times$15$\times$15 and 12$\times$12$\times$12 meshes were used for supercells with 2 and 4 atoms, respectively. From the converged DFT results we have constructed effective Hamiltonians in the basis of Wannier functions, which were built as a projection of the original Kohn-Sham states onto site-centered localized functions as described in Ref.~\cite{Korotin08}, considering $3d$, $4s$ and $4p$ states. In the DMFT calculations we use the Hubbard parameter ${U\equiv F^0=4}$~eV and Hund's rule coupling ${I\equiv (F^2+F^4)/14=0.9}$~eV, where $F^0$, $F^2$, and $F^4$ are the Slater integrals as obtained in Ref.~\cite{Belozerov_UJ} by constrained DFT in the basis of $spd$ Wannier functions. The on-site Coulomb interaction was considered in the density-density form.
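As a quick sanity check (not part of the DFT code itself), the linear fit $a(T)$ can be evaluated at the inverse temperatures used in the DMFT runs; the only added constant is the eV-to-Kelvin conversion. The resulting values reproduce the lattice constants quoted in the figure captions below ($a=3.583$~\AA\ at $\beta=30$~eV$^{-1}$, $a=3.647$~\AA\ at $\beta=10$~eV$^{-1}$):

```python
EV_IN_K = 11604.5           # Boltzmann conversion, Kelvin per eV
a0, a1 = 3.5519, 8.1593e-5  # Angstrom, Angstrom/K (linear fit of Ref. [Seki2005])

def lattice_constant(beta_ev):
    """a(T) in Angstrom for inverse temperature beta given in 1/eV."""
    t_kelvin = EV_IN_K / beta_ev
    return a0 + a1 * t_kelvin

for beta in (30, 20, 10):
    print(f"beta = {beta} eV^-1  ->  a = {lattice_constant(beta):.3f} A")
```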
The corresponding matrix of Hund's exchange can be expressed via the Coulomb interaction matrix $U^{mm'}_{\sigma,\sigma'}$ as $I^{mm'}=(U^{mm'}_{\sigma,-\sigma}-U^{mm'}_{\sigma,\sigma})(1-\delta_{mm'})$, where $m$ and $\sigma$ are orbital and spin indices. The double-counting correction was taken in the fully localized limit. The impurity problem was solved by the hybridization expansion continuous-time quantum Monte Carlo method~\cite{CT-QMC}. In our calculations we neglect the redistribution of charge density on the DFT level caused by the self-energy from DMFT, since iron is a moderately correlated metal, in which the $3d$ states are only weakly hybridized with $4s$ and $4p$ states; previous charge self-consistent studies of iron (e.g., Refs.~\cite{Pourovskii2013,Kvashnin2015}) did not result in any significant discrepancies with other DFT+DMFT studies. The non-uniform static spin susceptibility \begin{equation} \chi^{mm'}_{\mathbf q}=\frac{1}{N}\int_0^\beta d\tau \sum_{ij} \langle s^z_{im}(0) s^z_{jm'}(\tau) \rangle e^{i {\bf q}({\bf R}_j-{\bf R}_i)}, \end{equation} where ${\mathbf s}_{im}=c^{+}_{im\sigma} \mbox{\boldmath $\sigma $}_{\sigma \sigma'} c_{i m \sigma'}/2$ are electronic spin operators and $c_{im\sigma}$ ($c^{+}_{im\sigma}$) are electron destruction (creation) operators ($i$ is the site index), can be obtained by calculating the response to a small staggered external field introduced in the DMFT part in a suitable supercell. Namely, for the orbital-resolved magnetic susceptibility we have $\overline{\chi}_{\mathbf{Q}_i}^{mm^{\prime}} =4 \mu_B^2 {\chi}_{{\mathbf{Q}_i}}^{mm^{\prime}}= \partial M_{\mathbf{Q}_i}^{m^{\prime}}/\partial H_{\mathbf{Q}_i}^m$, where $H_{\mathbf{Q}_i}^m$ is the magnetic field applied to orbital~$m$ and corresponding to the wave vector $\mathbf{Q}_i$, and $M_{\mathbf{Q}_i}^{m^{\prime}}$ is the magnetization of orbital $m^{\prime}$.
In real space, the applied field takes the form ${\mathbf{H}_{\mathbf{R}_j}^{m,i} = \mathbf{H}_0\, \cos(\mathbf{Q}_i \mathbf{R}_j)}$, where $\mathbf{R}_j$ is the position vector of site $j$ and $\mathbf{H}_0$ is a constant small field. In practice, we have used a magnetic field corresponding to a splitting of the single-electron energies by 0.02 eV. This field was checked to provide a linear response and was considered small enough to neglect the redistribution of charge density on the DFT level. For high-symmetry wave vectors the corresponding supercells are compact, and therefore can be studied by the real-space extension of DMFT (see, e.g., Refs.~\cite{Potthoff1999_1,Potthoff1999_2}). In this extension, the self-energy is still local but assumed to be site dependent. As a result, several single-impurity problems have to be solved at each self-consistency loop. Note that the neglect of the non-local components of the self-energy may yield an underestimate of the non-local components of the susceptibility. We expect, however, that because of strong on-site electronic correlations, the non-local components of the self-energy do not change the obtained results substantially. To calculate the non-uniform susceptibilities we have constructed supercells containing up to four atoms and corresponding to seven high-symmetry points. In particular, for the wave vector ${\mathbf{Q}_{\textrm{X}_1}=(0,0,2\pi)/a}$ we considered supercells containing two nearest-neighbor atoms at $(0,0,0)$ and $(0,a/2,a/2)$ in Cartesian coordinates with lattice vectors ${\{0,a,0\}}$, ${\{0,0,a\}}$, and ${\{a/2,a/2,0\}}$. The same atoms were used to construct a supercell for ${\mathbf{Q}_{\textrm{L}}=(\pi,\pi,\pi)/a}$ with lattice vectors ${\{a,a,0\}}$, ${\{a/2,-a/2,0\}}$, and ${\{0,-a/2,a/2\}}$. For ${\mathbf{Q}_{\textrm{W}_1}=(\pi,2\pi,0)/a}$, we built a supercell with four atoms by including two extra atoms at $(a,0,0)$ and $(-a/2,a/2,0)$.
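The finite-field extraction of a static susceptibility, $\chi \approx M(H_0)/H_0$ at a small probe field, can be illustrated on a toy model: a single Ising spin with $M=\mu\tanh(\beta\mu H)$, for which the analytic answer is $\chi=\beta\mu^2$. This is only an illustration of the linear-response check described above, not the DMFT solver:

```python
import math

# Toy model: single Ising spin in a field, M(H) = mu * tanh(beta * mu * H).
def magnetization(h, beta=10.0, mu=1.0):
    return mu * math.tanh(beta * mu * h)

h0 = 1e-4                         # small probe field: linear-response regime
chi_est = magnetization(h0) / h0  # finite-field estimate of the susceptibility
print(chi_est)                    # ~10.0, the analytic chi = beta * mu^2
```

Doubling the probe field barely changes the ratio, confirming that the response is linear at this field strength, analogous to the check performed for the 0.02 eV splitting.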
The lattice vectors for this supercell are ${\{a,a/2,a/2\}}$, ${\{2a,0,0\}}$, and ${\{0,a,0\}}$. For ${\mathbf{Q}_{\textrm{X}_2}=(2\pi,0,0)/a}$, ${\mathbf{Q}_{\textrm{X}_3}=(0,2\pi,0)/a}$, ${\mathbf{Q}_{\textrm{W}_2}=(2\pi,\pi,0)/a}$, and ${\mathbf{Q}_{\textrm{W}_3}=(2\pi,0,\pi)/a}$ the supercells have been constructed in a similar manner by permutation of the corresponding components. The orbital-resolved results for the wave vectors ${\bf Q}_{\textrm{X}_i}$ and ${\bf Q}_{\textrm{W}_i}$ with different $i$ are not equivalent because of the orientation of the $d$-orbitals in certain directions in real space. Their rotation by point group operations then yields the off-diagonal (in the orbital space) components of the spin operators ${\mathbf s}_i^{mm'}=c^{+}_{im\sigma} \mbox{\boldmath $\sigma$}_{\sigma \sigma'} c_{i m' \sigma'}/2$, yielding non-Heisenberg components of the exchange interaction, which are not considered here. \subsection{Formulas for magnetic exchange} The orbital-resolved exchange interaction $\mathcal J_{ij}^{mm'}$ can be represented in the RKKY-like form; its Fourier transform reads \cite{Alpha_supercell} \begin{equation} {\mathcal J}_{\mathbf{q}}^{mm^{\prime}}=2I^{mm^{\prime\prime}}\left( \chi_{\mathbf{q}}^{m^{\prime\prime}m^{\prime\prime\prime}}\right) _{\mathrm{irr}}I^{m^{\prime\prime\prime}m^{\prime}}, \label{Eq:Jq} \end{equation} where the summation (i.e.,
matrix product) over repeated indices is assumed and the (transverse) irreducible parts of the non-uniform electronic susceptibilities $(\chi^{mm'}_{\mathbf q})_{\rm irr}$ are related to the magnetic susceptibilities $\chi^{mm'}_{\mathbf q}$ by the Hund's exchange interaction, \begin{equation} \left( \chi_{\mathbf{q}}^{mm^{\prime}}\right) _{\mathrm{irr}}=\left[ \left( 2\chi_{\mathbf{q}}^{mm^{\prime}}\right) ^{-1}+I^{mm^{\prime}}\right] ^{-1}, \label{chi_irr} \end{equation} where $\left[ ...\right] ^{-1}$ denotes the matrix inverse with respect to the orbital indices and the factor of $2$ accounts for the difference between the transverse and longitudinal susceptibilities. While the components of the exchange interaction ${\mathcal J}_{\mathbf{Q}_{i}}$ can be determined from the obtained irreducible susceptibilities, to interpolate between different points $\mathbf{Q}_{i}$ we consider the expansion \begin{align} {\mathcal J}_{\mathbf{q}}^{mm^{\prime}} & ={\mathcal J}^{mm^{\prime},(0)}+\overline{\mathcal J}_{\mathbf{q}}^{mm^{\prime}},\label{EqJ1}\\ \overline{\mathcal J}_{\mathbf{q}}^{mm^{\prime}} & ={\mathcal J}^{mm^{\prime},(1)}_{xy}\cos(aq_{x}/2)\cos (aq_{y}/2)\notag \\ &+\mathcal J^{mm^{\prime},(1)}_{xz}\cos(aq_{x}/2)\cos (aq_{z}/2)\notag\\ &+\mathcal J^{mm^{\prime},(1)}_{yz}\cos(aq_{y}/2)\cos (aq_{z}/2)\nonumber\\ & +\mathcal J^{mm^{\prime},(2)}_x \cos(aq_{x})+\mathcal J^{mm^{\prime},(2)}_y\cos(aq_{y})\notag\\ &+\mathcal J^{m m^{\prime},(2)}_z\cos(a q_{z})+\mathcal J^{mm^{\prime},(3)}\left[\cos(aq_{x})\cos(aq_{y})\right.\notag\\ &+\left.\cos(aq_{y})\cos(aq_{z}) +\cos(aq_{z})\cos(aq_{x})\right],\label{EqJ2} \end{align} such that the $\mathcal J^{mm^{\prime},(r)}$ are determined by the $\mathcal J_{\mathbf{Q}_{i}}^{mm^{\prime}}$. To determine the eight matrices $\mathcal J^{mm',(0)}$, $\mathcal J^{mm',(3)}$, $\mathcal J_{ab}^{mm',(1)}$, and $\mathcal J_{a}^{mm',(2)}$ we consider the irreducible susceptibilities for the eight wave vectors ${\bf Q}_\Gamma=(0,0,0)$, ${\bf Q}_L$, ${\bf Q}_{X_i}$, and ${\bf Q}_{W_i}$.
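Determining the eight coefficients from the eight symmetry points amounts to solving an $8\times 8$ linear system built from the cosine basis of Eqs.~(\ref{EqJ1}) and (\ref{EqJ2}). A minimal sketch (units $a=1$; the exchange values at the $\mathbf{Q}_i$ are arbitrary test numbers, not the computed ones):

```python
import numpy as np

def basis(q):
    """Cosine basis of the expansion: 1, three (1)-terms, three (2)-terms, (3)-term."""
    qx, qy, qz = q
    c = np.cos
    return np.array([
        1.0,
        c(qx / 2) * c(qy / 2), c(qx / 2) * c(qz / 2), c(qy / 2) * c(qz / 2),
        c(qx), c(qy), c(qz),
        c(qx) * c(qy) + c(qy) * c(qz) + c(qz) * c(qx),
    ])

pi = np.pi
Q = [(0, 0, 0), (pi, pi, pi),                      # Gamma, L
     (0, 0, 2 * pi), (2 * pi, 0, 0), (0, 2 * pi, 0),   # X_1..X_3
     (pi, 2 * pi, 0), (2 * pi, pi, 0), (2 * pi, 0, pi)]  # W_1..W_3

A = np.array([basis(q) for q in Q])     # 8x8 design matrix, invertible
J_at_Q = np.array([0.03, -0.01, 0.02, 0.021, 0.019, 0.005, 0.004, 0.006])
coeffs = np.linalg.solve(A, J_at_Q)     # J^(0), J^(1)_ab, J^(2)_a, J^(3)
```

By construction the interpolation then reproduces the input values exactly at the eight symmetry points.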
Because of the neglect of the off-diagonal spin operators ${\mathbf s}_i^{mm'}$, the present treatment is only approximate; as we will see in the following, however, the crystal symmetry breaking in the final results for the exchange interaction is sufficiently small and can be neglected. To extract the physical exchange from the obtained matrices $\mathcal J^{mm',(i)}_{ab}$, we calculate it as \begin{equation} J^{(i)}_{ab}=\sum_{mm'} \mathcal J^{mm',(i)}_{ab} \mu^2_{mm'}/\sum_{mm'} \mu^2_{mm'}, \label{JiAv} \end{equation} where $\mu^2_{mm'}=3[d(1/\chi^{mm'}_{\rm loc})/dT]_a^{-1}$ is the matrix of squares of local moments, $\chi^{mm'}_{\rm loc}$ are the orbital-resolved local susceptibilities, and the index $a$ indicates that the lattice constant is kept constant when evaluating the derivative. \section{Results and discussion} \begin{figure}[t] \includegraphics[width=0.454\textwidth]{Fig1a_chiirr_beta30_new.eps}\vspace{0.3cm} \includegraphics[width=0.46\textwidth]{Fig1b_chiirr_beta10_new.eps} \caption{(Color online) \label{Fig:chi_irr} Momentum dependence of the irreducible susceptibility, summed over all orbitals (blue solid line), $t_{2g}$ orbitals (green short-dashed line), $e_g$ orbitals (red long-dashed line), and $t_{2g}$-$e_g$ contributions (purple dotted line) for $\beta=30$ eV$^{-1}$ ($a=3.583$~\AA, top) and $\beta=10$ eV$^{-1}$ ($a=3.647$~\AA, bottom). } \end{figure} In Fig.~\ref{Fig:chi_irr} we present the resulting momentum dependences of the irreducible susceptibilities, summed over all or part of the orbitals; the interpolation between symmetric points is performed by calculating the exchange interactions in Eqs.~(\ref{EqJ1}) and (\ref{EqJ2}) and then inverting Eq.~(\ref{Eq:Jq}).
Although the obtained dependences are qualitatively similar to those obtained earlier from the bare bubble in DMFT \cite{OurGamma}, the numerical values of the susceptibilities are approximately two times larger (similarly to the previous study of $\alpha$-iron \cite{Alpha_supercell}) because of the vertex corrections. At low temperatures ($\beta=30$ eV$^{-1}$) the maximum of the obtained susceptibility is at the X point, which shows dominant antiferromagnetic correlations. The susceptibility is, however, weakly momentum dependent, such that these correlations compete with fluctuations with other wave vectors, in particular $\Gamma$ (i.e., ferromagnetism), W, and K. As can be seen from the partial contributions, the weak momentum dependence appears as a result of a compensation of the $e_g$ and mixed $t_{2g}$-$e_g$ contributions, while the $t_{2g}$ contribution is itself almost momentum independent. For $\beta=10$ eV$^{-1}$ the momentum dependence of the total irreducible susceptibility becomes even weaker; the susceptibility at the $\Gamma$ point becomes close to that at the X point, which shows that ferromagnetic correlations are as strong as the antiferromagnetic ones at this temperature. Note that the weak momentum dependence of the magnetic susceptibility and the close competition of ferro- and antiferromagnetic correlations at temperatures $T\sim 1200$~K, at which $\gamma$-iron exists in nature, agree with the experimental results \cite{GammaFechiq}. Such a momentum dependence of the susceptibility at not too low temperatures, which is qualitatively different from the low-temperature behavior, is obtained entirely due to the supercell DMFT approach, which accounts for the vertex corrections to the magnetic susceptibility, and it was not found in the calculation of the momentum dependence of the bubble of Green functions in the DMFT approach of Ref. \cite{OurGamma}.
\begin{figure}[t] \includegraphics[width=0.456\textwidth]{Fig2_susc_with_local_with_moments.eps} \caption{(Color online) \label{Fig:chiQT} Temperature dependences of the inverse uniform and staggered magnetic susceptibilities (top panel), obtained within the (supercell) DFT+DMFT approach, together with the experimental data for the uniform susceptibility. The inverse local magnetic susceptibility is shown in the middle panel. The instantaneous average ${\langle m_z^2\rangle}$ and local magnetic moments from the local susceptibility are shown in the bottom panel.} \end{figure} The temperature dependence of the uniform and staggered susceptibilities $\overline{\chi}_{\mathbf Q}=\sum_{m,m'} \overline{\chi}_{\mathbf Q}^{m m'}$, corresponding to ${\mathbf Q}=0$ and ${\mathbf Q}={\mathbf Q}_X$, respectively, is shown in Fig.~\ref{Fig:chiQT}(a) (as mentioned above, the susceptibilities corresponding to different ${\mathbf Q}={\mathbf Q}_{X_i}$ are slightly different; the difference is, however, small). In agreement with previous calculations \cite{OurGamma}, the inverse uniform susceptibility decreases with increasing temperature at low temperatures $T$. The value of the inverse uniform susceptibility obtained in the present study is approximately half of that found previously \cite{OurGamma}, mainly due to the larger (and more realistic) choice of the Coulomb interaction, and agrees well with the experimental data. The slope of the temperature dependence of the inverse uniform susceptibility near the experimental temperature of the $\alpha$-$\gamma$ structural transition is not obtained correctly, but one should take into account that the Curie and structural transition temperatures are overestimated in the considered theory, which treats the Hund's exchange in its Ising symmetry \cite{Leonov}.
The obtained slope of the inverse susceptibility at the expected theoretical temperature of the $\alpha$-$\gamma$ transition $1.2T_{\textrm{C},\alpha}^{\rm LDA+DMFT}\simeq 2600$~K (the Curie temperature of $\alpha$-iron $T_{\textrm{C},\alpha}^{\rm LDA+DMFT}$, obtained within the LDA+DMFT analysis, was taken from Refs. \cite{Sangiovanni,Alpha_supercell}) yields better agreement with the experimental data for the slope. On the other hand, the staggered susceptibility increases with decreasing temperature and approximately fulfills the Curie-Weiss law. The corresponding Weiss temperature $\theta_{\rm stagg}\approx -340$~K is, however, negative, such that no long-range magnetic order is obtained at low temperatures (at least from the extrapolation of the obtained inverse susceptibility). The long-range order in copper precipitates may occur due to a temperature dependence of the lattice constant somewhat different from the considered one, the surface/volume anisotropy effects of $\gamma$-iron nanoparticles, as well as the anisotropic dipole-dipole interaction. It is important to note that despite the negative Weiss temperature, both ferro- and antiferromagnetic correlations at $T\sim 1000$~K are sufficiently strong. In particular, the corresponding values of the inverse staggered and uniform susceptibilities are comparable to the inverse uniform susceptibility of the $\alpha$-phase at the $\alpha$-$\gamma$ transition temperature, as follows from previous theoretical results for the uniform susceptibility of $\alpha$-iron \cite{Alpha_supercell} at $T=1.2T_{\textrm{C},\alpha}^{\rm LDA+DMFT}$. The temperature dependence of the inverse local susceptibility is shown in Fig.~\ref{Fig:chiQT}(b). In agreement with previous results \cite{OurGamma}, in the considered temperature range it is approximately linear for both fixed and temperature-dependent lattice constants.
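The extrapolation behind $\theta_{\rm stagg}$ can be illustrated by fitting the Curie-Weiss form $1/\chi=(T-\theta)/C$ and extrapolating the line to $1/\chi=0$. The data below are synthetic, constructed to mimic a Weiss temperature of $-340$~K; they are not the actual DMFT susceptibilities:

```python
import numpy as np

theta_true, curie_const = -340.0, 1.5     # K; arbitrary Curie constant
T = np.linspace(400.0, 1600.0, 13)        # temperatures of the "measurements"
inv_chi = (T - theta_true) / curie_const  # synthetic 1/chi data (exactly linear)

slope, intercept = np.polyfit(T, inv_chi, 1)
theta_fit = -intercept / slope            # zero crossing of the linear fit
print(round(theta_fit))                   # -340
```

A negative zero crossing of the fitted line is exactly the statement that no ordering temperature is reached upon extrapolation to low $T$.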
The temperature dependences of the instantaneous and static local moments, extracted from the average $\langle m_z^2\rangle$ (which is almost site independent due to the site-diagonal form of the self-energy), where $m_z=2 \mu_B \sum_m s^z_m$, and from the derivative of the inverse susceptibility, $\mu^2_{\rm loc}=3[d(1/\chi_{\rm loc})/dT]_a^{-1}$ with $\chi_{\rm loc}=\sum_{m,m'}\chi^{mm'}_{\rm loc}$, respectively, are shown in Fig.~\ref{Fig:chiQT}(c). One can see that at fixed lattice constant the average $\langle m_z^2\rangle$ is weakly temperature dependent; the local moment $\mu^2_{\rm loc}$ shows a somewhat stronger temperature dependence, especially at low temperatures, reflecting a tendency toward the destruction of static local moments at lower temperatures \cite{OurGamma}. The suppression of the local moments is not pronounced in the considered temperature range, and, therefore, they are well formed above the lowest considered temperature $T=1/30$ eV. The same characteristics of the local moments, calculated with the temperature-dependent lattice constant, show stronger temperature dependences, reflecting the effect of the changing lattice constant. At not too high temperatures $T<1500$~K we find a weak effect of the lattice constant change on $\mu_{\rm loc}^2$. The obtained value of the magnetic moment $\mu_{\rm loc}\approx 3.8\mu_B$ at $T=1200$~K agrees with the previous DMFT study \cite{OurGamma}, but is somewhat larger than that obtained in DFT approaches in both the low-spin (antiferromagnetic) and high-spin (ferromagnetic) phases \cite{GammaFM1,GammaFM2,Herper99}. On the other hand, for the saturated magnetic moment $\mu_{\rm sat}$, defined by $\mu_{\rm loc}^2=\mu_{\rm sat}(\mu_{\rm sat}+2\mu_B)$, we find the value $\mu_{\rm sat}\approx 2.9\mu_B$, which is closer to the high-spin-state DFT result.
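The quoted value of $\mu_{\rm sat}$ follows from solving the quadratic $\mu_{\rm loc}^2=\mu_{\rm sat}(\mu_{\rm sat}+2\mu_B)$ for its positive root (all moments below are in units of $\mu_B$):

```python
import math

def saturated_moment(mu_loc):
    """Positive root of mu_sat^2 + 2*mu_sat - mu_loc^2 = 0 (units of mu_B)."""
    return -1.0 + math.sqrt(1.0 + mu_loc ** 2)

print(round(saturated_moment(3.8), 2))  # 2.93, i.e. mu_sat ~ 2.9 mu_B for mu_loc = 3.8 mu_B
```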
\begin{figure}[b] \vspace{-0.2cm} \includegraphics[trim=0cm 0cm 0cm 0cm,width=0.47\textwidth]{Fig3_FigJT2_new.eps} \caption{(Color online) \label{Fig:JT} Temperature dependence of the magnetic exchange integrals $J^{(i)}$ in the first three coordination spheres; the upper axis shows the respective lattice constants and unit cell volumes for the considered temperatures. The error bars show only the uncertainty related to the Heisenberg form of the magnetic interaction, see text. } \end{figure} \begin{figure}[t] \includegraphics[width=0.456\textwidth]{Fig4a_J_beta30_av1_new.eps}\vspace{0.3cm} \includegraphics[width=0.456\textwidth]{Fig4b_J_beta20_av1_new.eps}\vspace{0.3cm} \includegraphics[width=0.456\textwidth]{Fig4c_J_beta10_av1_new.eps} \caption{(Color online) \label{Fig:Jq} Momentum dependence of the magnetic exchange integral for $\beta=30$ eV$^{-1}$ (top), $\beta=20$ eV$^{-1}$ (middle), and $\beta=10$ eV$^{-1}$ (bottom). } \end{figure} Let us consider the results for the magnetic exchange. The temperature dependence of the orbital-averaged exchange parameters $J^{(i)}$, obtained according to Eq.~(\ref{JiAv}), is shown in Fig.~\ref{Fig:JT} (we also average the results with respect to the space indices $a,b$ and show the corresponding spread of the values obtained for different $a,b$ by error bars, which correspond physically to assuming the Heisenberg form of the exchange interaction, as discussed above; the on-site contribution $J^{(0)}\approx 0.96$~eV is found to be only weakly temperature dependent). One can see that the exchange $J^{(3)}$ remains small in the considered temperature range and the dominant contribution comes from the first two coordination spheres. In this situation (provided $J^{(2)}>0$) the type of the ground-state magnetic configuration (and of the dominant magnetic correlations at finite temperature) is determined by the sign of $J^{(1)}$: it is ferromagnetic for $J^{(1)}>0$ and antiferromagnetic with the wave vector $(0,0,2\pi)/a$ for $J^{(1)}<0$.
One can see that the nearest-neighbor exchange is antiferromagnetic at low temperatures and favors the $(0,0,2\pi)/a$ short-range order, in agreement with the analysis of the susceptibilities (weak deviations from the wave vector ${\bf Q}_X$ cannot be treated within the considered supercell approach). Approaching the temperature $\beta=10$~eV$^{-1}$, which is closer to the $\alpha$-$\gamma$ structural transition, we obtain, however, an almost vanishing nearest-neighbor exchange, such that the system appears on the boundary between the regimes with strong ferro- and antiferromagnetic correlations, also in agreement with the analysis of the susceptibility above. We note that DFT calculations yield a change of sign of the nearest-neighbor exchange at the unit cell volumes 11.8~\AA$^3$ \cite{BS10} or 11.4~\AA$^3$ \cite{BS9}, which are substantially smaller than the unit cell volume $V_0=12.1$~\AA$^3$ at $\beta=10$~eV$^{-1}$. Therefore, the present theory allows one to obtain better agreement with the experimental data of Ref. \cite{GammaFechiq}. Although magnetic exchanges beyond the third neighbors are not considered in the present approach (and the third-neighbor exchange is small), in the DFT calculations \cite{BS9,BS10} the third- and longer-range magnetic exchanges almost compensate each other in the vicinity of the unit cell volume $V_0$. The resulting momentum dependence of the magnetic exchange $J_{\bf q}$, calculated analogously to $\overline{\mathcal J}_{\mathbf{q}}$ in Eq.~(\ref{EqJ2}) with the obtained exchange integrals $J^{(i)}$ substituted instead of $\mathcal J^{(i)}_{mm'}$, is shown in Fig.~\ref{Fig:Jq}. In agreement with the results discussed above, we obtain $J_{{\bf Q}_X}>J_0$ at low temperatures and $J_{{\bf Q}_X}\approx J_0$ at $\beta=10$ eV$^{-1}$.
The magnetic exchange $J_0=0.032$ eV at $\beta=10$ eV$^{-1}$ (which in our approach is provided mainly by the next-nearest-neighbor interaction), multiplied by the square of the effective spin $3/2$ (corresponding to our magnetic moment $\mu_{\rm loc}^2\approx 15\mu_B^2$), is comparable to (though somewhat larger than) the exchange $J_0=0.05$ eV between unit spin vectors obtained in the recent DFT approach \cite{BS9}. \section{Conclusion} \label{sec:conclusions} We have studied the magnetic properties and magnetic exchange interactions in paramagnetic fcc iron by a combination of density functional theory and dynamical mean-field theory (DFT+DMFT). By using the supercell approach and interpolating the values of the magnetic susceptibility between the symmetric points of the Brillouin zone with the expansion of the magnetic exchange in coordination spheres up to third nearest neighbors, we have obtained a weak momentum dependence of the magnetic susceptibility. In agreement with previous theoretical results and the experimental data, we find that antiferromagnetic correlations with the wave vector close to $(0,0,2\pi)/a$ dominate at low temperatures. At the same time, antiferromagnetic and ferromagnetic correlations closely compete at the temperatures $T\sim 1000$~K, where $\gamma$-iron exists in nature. Although this latter result is also in agreement with the experimental data \cite{GammaFechiq}, to our knowledge it has not been reproduced theoretically before. The analysis of the inverse uniform susceptibility shows an improvement of the agreement with the experimental data in comparison with the previous theoretical study due to the more realistic Coulomb interaction; the obtained inverse staggered susceptibility shows a linear temperature dependence at low temperatures, with a negative Weiss temperature $\theta_{\rm stagg} \approx -340$~K. The inverse local susceptibility is also found to be linear at not too low temperatures, showing well formed local moments.
Analysis of the magnetic exchange between these local moments shows that the dominant contribution comes from the first two coordination spheres; the nearest-neighbor exchange is found to be antiferromagnetic at low temperatures, while at the temperature of the $\alpha$-$\gamma$ structural phase transition its absolute value becomes small, and the system appears on the boundary between the regimes with strongest antiferro- and ferromagnetic correlations. At higher temperatures the nearest- and next-nearest-neighbor exchanges are ferromagnetic. We note that in our study the crossover between the regimes with strongest ferro- and antiferromagnetic correlations is due to a change of the preferred orientation of local moments of weakly varying size $\mu_{\rm loc}$, in contrast to the transition from a low- to a high-spin itinerant state in DFT. In our calculations we have used the density-density form of Hund's exchange, which was shown to significantly overestimate the $\alpha$-$\gamma$ structural phase transition temperature \cite{Leonov,our_alpha_gamma}. However, our results are expected to remain qualitatively unchanged for the SU(2) symmetric form, since at high temperatures the ferromagnetic correlations are found to be strongly pronounced. The obtained results extend and deepen the previous understanding of the magnetic properties of $\gamma$-iron and stress the important role of ferromagnetic correlations in this substance at not too low temperatures. Although the ferromagnetic instability at large lattice parameter was studied previously within band structure calculations \cite{GammaFM1,GammaFM2,Herper99,BS8,Zhang11}, using dynamical mean-field theory allows us to consider the evolution of magnetic properties with increasing temperature and to describe their change from $\gamma$-iron in Cu precipitates at low temperatures to the $\gamma$-iron existing in nature.
The obtained close competition of ferro- and antiferromagnetic correlations (including possible phase separation into short-range-ordered ferro- and antiferromagnetic regions) may also help to explain the anti-Invar behavior of $\gamma$-iron, beyond the high- and low-spin state mechanism proposed previously \cite{AIGamma}. \newpage \begin{acknowledgments} The work was supported by the Russian Science Foundation (Project No. 14-22-00004). \end{acknowledgments}
\section{Introduction} Precise predictions for experiments are the backbone of physics. In particle physics these take the form of scattering cross-sections, which are assembled out of scattering amplitudes that are computed perturbatively, as well as experimentally determined input such as parton distribution functions. This inherent interest in computing scattering amplitudes is becoming ever more important due to the absence of a smoking-gun observation of physics beyond the well-established standard model at the Large Hadron Collider (LHC) at CERN: since the energy frontier will not move in the short term, precision physics is the most probable vector for near-future discovery. The frontiers of the state of the art are loosely measured in the number of external particles and internal loops. For non-supersymmetric Yang-Mills theory, the first (semi-)numerical computations of the planar five gluon amplitudes at two loops were reported very recently in \cite{Badger:2017jhb, Abreu:2017hqn}, while analytically a special, equal-helicity amplitude is known at two loops through seven external gluons \cite{Badger:2013gxa, Badger:2016ozq, Dunbar:2017nfy}. Beyond phenomenological interest, scattering amplitudes also attract formal interest as a basic output of quantum field or string theory that displays structures which may not be obvious from their original formulation. A prime example of this are the Kawai-Lewellen-Tye relations \cite{Kawai:1985xq}, discovered first in string theory, which relate a certain sum over products of gluon scattering amplitudes to graviton scattering amplitudes. These are referred to generally as ``double copy'' type relations, see e.g. \cite{Bern:2008qj}. Scattering amplitudes obey in general physical constraints such as gauge and global symmetries, locality and unitarity, which are to an extent mutually redundant \cite{Arkani-Hamed:2016rak, Rodina:2016mbk}.
In \cite{Boels:2016xhc}, drawing on ideas in \cite{Barreiro:2013dpa}, it was shown that these constraints can be solved systematically. In \cite{Boels:2017gyc} (see also \cite{Bern:2017tuc}) specific solutions for four-particle scattering were constructed and used to (re-)compute loop-level scattering amplitudes with gluons and gravitons using unitarity. Earlier Feynman-graph based computations in \cite{Glover:2003cm, Gehrmann:2011aa} as well as the well-known computation of the gyromagnetic factor \cite{Peskin:1995ev} employ similar technology. Algebraic complexity has prohibited practical applicability thus far. Here this is solved by obtaining solutions to the physical constraints as certain multi-copies of simpler one- and two-gluon building blocks. The resulting coefficients and integrals are typically reduced further by using the linear integration-by-parts (IBP) identities to express the amplitudes in a much smaller basis of so-called master integrals. This step is a very well-known bottleneck due to its overwhelming intermediate complexity. The multi-copy basis allows a particularly clean view of this reduction and its output. As a result, we uncover the need for intricate relations among the coefficients of the different masters, which follow from the absence of residues at non-physical poles. These relations can be derived from differential equations and can be used as a powerful internal consistency check as well as a tool to derive integral coefficients. To demonstrate the potential of the methods explored here we showcase analytic applications to the planar five gluon, two loop amplitude and the planar four gluon, three loop amplitudes in pure Yang-Mills theory, expressed in a master integral basis; the five point result is closely related to the very recent semi-numerical results in \cite{Badger:2017jhb, Abreu:2017hqn}. Furthermore, we briefly show how to extend our techniques to massive matter.
Throughout this Letter we work in dimensional regularisation to regulate the divergent loop integrals, in the scheme where all internal and external particles are in $d$ dimensions. \section{External kinematics from a multi-copy} Scattering amplitudes with spinning matter are Lorentz scalars and little group tensors. Every particle is associated with a polarisation tensor, which embeds a copy of the appropriate little group representation into the Lorentz group. For massless bosons for instance, which will constitute our main example for illustration, this involves products of the polarisation vector $\xi_{\mu}^I(p)$, where the Roman index indicates a little group index and the Greek index a Lorentz index. The polarisation vectors have to obey transversality, $p^{\mu} \xi_{\mu}^I(p) = 0$, and the corresponding scattering amplitude on-shell gauge invariance \cite{Noether:1918zz}, \begin{equation} A(\{\xi_i\rightarrow p_i\}) \rightarrow 0\, , \end{equation} for each individual massless particle of the amplitude. In addition, there is momentum conservation for the external momenta, which are all taken here to be complex and inward pointing. Since Poincar\'e symmetry is exact, scattering amplitudes are multilinear in the polarisation vectors to all orders. All the mentioned constraints are therefore linear and can be solved at least in principle, as explained in \cite{Boels:2016xhc}. The solution space is spanned by a set of tensor structures. Every scattering amplitude can be expressed as a linear combination of these solutions, \begin{equation} \mathcal A = \sum_{i} \alpha_i B_i \, . \end{equation} In this form the scalar coefficients involve only (integrals over) Lorentz invariants constructed out of internal and external momenta.
Given any form of $A$, the coefficients can be determined from multiplication with $B_j$ and summing over all helicities, \begin{equation}\label{eq:Pmatrixfirst} \sum_{\textrm{helicities}} B_j \mathcal A = \sum_{i} \alpha_i \left( \sum_{\textrm{helicities}} B_j B_i \right) \equiv \sum_i P_{ji} \alpha_i \, , \end{equation} which gives a scalar, linear problem after using the completeness relation to sum over helicities, \begin{equation} \sum_{\textrm{helicities}} \xi_{\mu} \xi_{\nu} = \eta_{\mu\nu} - \left(\frac{p_{\mu} q_{\nu} + p_{\nu} q_{\mu} }{ q\cdot p} \right)\, , \end{equation} where $q$ is a gauge choice that drops out of the result. The matrix $P_{ji}$ is invertible in general as the $B_i$ form a basis. However, as can be seen from the table in \cite{Boels:2016xhc}, the size of the spaces involved grows very quickly: six gluon scattering already involves $2364$ solutions! Computing, let alone inverting, a polynomial matrix of this size is generally infeasible. This problem is solved here by a good basis choice. Consider the parity-even scattering of one gluon and $n-1$ scalar particles. There are $n-2$ independent contractions of external momenta with the polarisation vector of the single gluon. Gauge invariance yields one constraint. The solution space of dimension $n-3$ can be spanned by objects \begin{equation} A_i(j,k) = (p_k \cdot p_i) \,p_j \cdot \xi_i - (p_j \cdot p_i) \,p_k \cdot \xi_i \, , \end{equation} for instance by the set \begin{equation}\label{eq:Aset} \{ A_i(j) = A_i(i+j, i+j+1) | j \in \{1,\ldots, n-3 \} \} \, , \end{equation} with particle momenta identified cyclically. Next, consider the scattering of two gluons and $n-2$ scalar particles. A special class of solutions to the physical constraints is given by multiplying copies of the solutions found in the single gluon case: \begin{equation} A_1(j) A_2(k) \qquad j,k \in \{1,\ldots, n-3 \} \, .
\end{equation} This set is however not complete, as there is also \begin{equation} C_{i,j} = (\xi_{i} \cdot \xi_j)(p_i \cdot p_j)- (p_i \cdot \xi_j) (p_j \cdot \xi_i) \, , \end{equation} which is proportional to two contracted linearised field strength tensors, $F_{\mu\nu}(\xi_1) F^{\mu\nu}(\xi_2)$. The inner product of polarisation vectors makes it manifestly independent from the set of two copies of $A$'s. No additional solutions exist in the two gluon case. For more gluons, a set of solutions can always be obtained by multiple copies of lower gluon number solutions. We conjecture that this set, constructed from all possible $A$ and $C$ type building blocks for a given number of gluons, is both linearly independent and complete in general dimensions. This was explicitly checked through six gluon amplitudes. The total number of basis elements with $n$ gluons and no scalars is \begin{equation} N_n = \sum_{k=0}^{\left \lfloor{n/2}\right \rfloor } \frac{n! (n-3)^{(n-2 k)}}{2^k k! (n-2 k)!} \, , \end{equation} which agrees with the numbers obtained in \cite{Boels:2016xhc} through $n=7$. To solve equation \eqref{eq:Pmatrixfirst} first construct a new tensor \begin{equation} D_{i,j} = C_{i,j} - \sum_{k,l=1}^{n-3} X_{ij}(k,l) A_{i}(k) A_{j}(l) \, , \end{equation} and require that it is ``orthogonal'' to the A-tensors as \begin{equation} \sum_{h_i} A_{i}(k) D_{i,j} = 0 = \sum_{h_j} A_{j}(k) D_{i,j}\,\, \forall k \, , \end{equation} by summing over the helicities of particles $i$ or $j$ respectively. To find the unique solution for $X$, first consider \begin{equation} P^A_i(k,l) = \sum_{h_i} A_{i}(k) A_{i}(l) \, . \end{equation} The set $A$ in equation \eqref{eq:Aset} is mapped to linear combinations of itself by Bose permutations of the legs other than $i$. This severely constrains the matrix $P^A_i$, as well as its inverse.
Now construct the dual vector or projector \begin{equation} A^i(k) \equiv \sum_l (P^A_i)^{-1}(k,l) A_i(l) \, , \end{equation} which obeys \begin{equation} A^i(k) A_i(l) \equiv \sum_{\textrm{helicities},i } A^i(k) A_i(l) = \delta(k,l) \, . \end{equation} With this notation, \begin{equation} D_{i,j} = C_{i,j} - \sum_{k,l=1}^{n-3} A_i(k) A_j(l) \left(A^m(k) A^n(l) C_{m,n}\right) \, , \end{equation} holds. For a given subset of legs $s$ with $|s|$ elements ($|s|$ is even) one can construct all $(|s|-1)!!$ matrices obtained by multiplying $\frac{|s|}{2}$ $D$ matrices, which we will denote $D^{s}$. For this set consider \begin{equation} P^{D^{s}}_{ij} = \sum_{\textrm{helicities}} D^{s}_i D^{s}_j, \quad i,j=1,\dots, (|s|-1)!! \, . \end{equation} This computation is straightforward as for instance \begin{equation} \sum_{\textrm{helicities}}D_{i,j} D_{i,j} = (p_i \cdot p_j)^2 (d-n+1) \, , \end{equation} holds. Bose permutations simply act on the labels of the $D$ matrices and therefore permute the entries of the $P^{D^{s}}$-matrix as well as its inverse: a small number of elements determines the whole matrix. The inverse of this matrix is observed, up to at least $6$ particles, to be very simple: the entries of the inverse are the entries of the original matrix inverted up to a function of $d$. This allows one to efficiently construct a projector, $D^{s,i}$, such that \begin{equation} \sum_{\textrm{helicities}} D^{s,i} D^{s}_j = \delta^i{}_j \, . \end{equation} Since the $D$ and $A$ tensors are orthogonal, the projectors onto specific basis elements factorise into sets with different numbers of $D$ tensors as well as within these sets into choices of which particles are on the $A$-type tensors, see figure \ref{fig:5ptPmatrillust} for an illustration in the five point case. Individual projectors can efficiently be computed, which was explicitly checked through six points.
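Two statements above lend themselves to quick numerical checks: the completeness relation used to sum over helicities (here in $d=4$, with a mostly-plus metric and a light-like reference vector $q$ as illustrative choices), and the count of basis elements, which can be compared against the values quoted in the text ($10$ for four gluons, $142$ for five, $2364$ for six). A sketch, assuming $n-3$ A-type tensors per gluon:

```python
import numpy as np
from math import factorial

# Helicity sum of xi_mu xi_nu for massless p equals
# eta_mu_nu - (p_mu q_nu + p_nu q_mu)/(q.p), with q a light-like reference.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])      # mostly-plus metric (a convention choice)
E = 2.0
p = np.array([E, 0.0, 0.0, E])            # massless momentum
q = np.array([E, 0.0, 0.0, -E])           # light-like reference vector
xi = [np.array([0.0, 1.0, 0.0, 0.0]),     # the two physical polarisations of p
      np.array([0.0, 0.0, 1.0, 0.0])]

lower = lambda v: eta @ v                 # lower a Lorentz index
qp = q @ eta @ p
lhs = sum(np.outer(lower(x), lower(x)) for x in xi)
rhs = eta - (np.outer(lower(p), lower(q)) + np.outer(lower(q), lower(p))) / qp
assert np.allclose(lhs, rhs)

# Basis count: k gluon pairs carry C/D-type tensors and each of the remaining
# n - 2k gluons carries one of the n - 3 A-type tensors.
def N(n):
    return sum(factorial(n) * (n - 3) ** (n - 2 * k)
               // (2 ** k * factorial(k) * factorial(n - 2 * k))
               for k in range(n // 2 + 1))

assert (N(4), N(5), N(6)) == (10, 142, 2364)
```

The count reproduces, in particular, the $142$ tensor basis coefficients used below for the planar five gluon amplitude.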
\begin{figure}[t] \includegraphics[scale=0.3]{FivePtPMatrixPlot-eps-converted-to.pdf} \caption{\label{fig:5ptPmatrillust}P-matrix for five gluons for the basis constructed from $A$ and $D$ tensors, evaluated on random integer values.} \end{figure} \section{Necessity of relations among master integral coefficients} Applying the projectors to a given loop amplitude yields a sum over scalar integrals, \begin{equation} \left(\mathcal I\right)_{\textrm{proj}} = \sum_k \int d\, l_i \frac{f_k(l_i, p_j)}{\prod K_m} \, , \end{equation} where $f$ is a function of all independent inner products of external and internal momenta. The propagators $K_m$ are scalar functions, quadratic in internal momenta. Coordinates to identify integrals have to be chosen, but are only determined up to linear shifts of the loop momenta. Infinitesimally, this freedom induces so-called integration by parts (IBP) identities \cite{Chetyrkin:1981qh} among the integrals. The linear IBP identities can be solved systematically by Gaussian elimination after choosing an ordering on the vector space \cite{Laporta:2001dd}, basically aiming to solve complicated integrals in terms of simpler ones. Several public codes exist to perform this step such as {\tt FIRE} \cite{Smirnov:2008iw, Smirnov:2013dia, Smirnov:2014hma}, {\tt Kira} \cite{Maierhoefer:2017hyi}, {\tt Reduze} \cite{vonManteuffel:2012np} and {\tt LiteRed} \cite{Lee:2012cn, Lee:2013mka}. The output is a sum over a much smaller basis of integrals referred to as ``master integrals'' \begin{equation} \left(\mathcal I\right)_{\textrm{proj}} = \sum_i c_i {\rm{MI}}_i \, . \end{equation} As emphasised in \cite{Boels:2017gyc} (see also \cite{BjerrumBohr:2007vu}), these coefficients can contain unphysical poles whose residues have to vanish between contributions of different master integrals. This type of singularity is known to occur in differential equations for the master integrals w.r.t.
a Mandelstam invariant, \begin{equation} \frac{\partial {\rm{MI}}_i}{\partial s} = M_i{}^j \, {\rm{MI}}_j \ , \end{equation} see for instance \cite{Henn:2014qga}. The matrix $M$ is a function of the external kinematic invariants and the dimension. If all masters are finite in a certain kinematic limit (or already matched on all logarithmic singularities), say $u\rightarrow 0$, the differential equation can be used to derive a series of relations from a Laurent expansion. As an example, consider the case of a massive one loop contribution of a scalar or fermion to the color-ordered four gluon amplitude with ordering $1234$. There are $6$ master integrals. The physical problem does not allow poles in the cross-channel invariant $u=(p_1+p_3)^2$. The coefficients of the master integrals except the massive tadpole integral can be determined from unitarity cuts, and their explicit forms contain non-physical poles up to $\frac{1}{u^4}$. The massive tadpole coefficient cannot be determined from unitarity cuts, see e.g. \cite{Badger:2017gta}. Using the differential equation-derived constraints on the Laurent expansion around $u=0$ shows that the residues of the unphysical poles cancel between the unitarity-derived master integrals down to $u^0$, \emph{up to the undetermined massive tadpole coefficient}. Taken together with gauge invariance this fixes the massive tadpole coefficient, up to a freedom associated with coupling constant renormalisation. In this very simple example, one can also study the large mass limit. For more complicated cases it is likely advantageous to transform the differential equations to so-called Fuchsian form (see e.g. \cite{Henn:2013pwa}) where all kinematic singularities are simple poles, for instance using \cite{Gituliar:2017vzm}. \section{Application to gluon scattering amplitudes} A good coordinate system to parametrise loop integrals is crucial.
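As an aside, the simplest instance of the IBP identities discussed above is the massive tadpole, $I_a=\int d^dl\,(l^2+m^2)^{-a}$ (Euclidean conventions, an assumption of this sketch): the identity $\int d^dl\,\partial_\mu\big[l^\mu (l^2+m^2)^{-a}\big]=0$ gives $I_{a+1}=\frac{2a-d}{2a\,m^2}\,I_a$, which can be checked against the closed form $I_a=\pi^{d/2}\,\Gamma(a-d/2)/\Gamma(a)\,(m^2)^{d/2-a}$:

```python
from math import gamma, pi, isclose

def tadpole(a, d, msq):
    """Closed form of the Euclidean massive tadpole integral I_a."""
    return pi ** (d / 2) * gamma(a - d / 2) / gamma(a) * msq ** (d / 2 - a)

# IBP recursion I_{a+1} = (2a - d)/(2a m^2) I_a, obtained by integrating
# the total derivative d/dl^mu [ l^mu / (l^2 + m^2)^a ] = 0.
d, msq = 3.7, 1.3    # non-integer dimension, as in dimensional regularisation
for a in (2, 3, 4):  # start at a = 2 so both sides converge for d = 3.7
    lhs = tadpole(a + 1, d, msq)
    rhs = (2 * a - d) / (2 * a * msq) * tadpole(a, d, msq)
    assert isclose(lhs, rhs)
```

A Laporta-style ordering that prefers lower powers $a$ then reduces every $I_a$ to a single master, the pattern the codes above implement at scale.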
For planar integrals with $l$ loops and $n$ legs, choose $l$ copies of ``$n$-gon'' one loop integrals, adding the squares of differences of loop momenta, $(l_i - l_j)^2$ for $i<j$. This choice minimises the number of different internal momenta per propagator. In the one-loop basis one should minimise the number of external momenta per propagator. Integral labels can be derived by adding internal propagators to the one-loop progenitor, see figure \ref{fig:5ptillus}. Extensions to massive matter for planar amplitudes are straightforward. \begin{figure}[t] \includegraphics[scale=0.25]{PentagramParam.png} \caption{\label{fig:5ptillus} Deriving a two loop integral parametrisation of a five point scattering problem from a one loop topology.} \end{figure} \subsection{Planar, five point, two loops} A good set of integral coordinates is \begin{multline}\label{eq:fivepointtwoloop} \left\{(l_1 - l_2)^2, (l_1)^2, (l_1 - p_2)^2,( l_1 - p_1 - p_2)^2, \right. \\ ( l_1 + p_3 + p_4)^2, ( l_1 + p_3)^2, (l_2)^2, (l_2 - p_2)^2,\\ \left.( l_2 - p_1 - p_2)^2,( l_2 + p_3 + p_4)^2, ( l_2 + p_3)^2 \right\} \, . \end{multline} The separate pentagons are chosen to never have more than two external momenta. This set has a single manifest graph symmetry since the map $l_1 \leftrightarrow l_2$ induces a permutation of the elements. A cyclic transformation $p_i \rightarrow p_{i+1}$ as well as an inversion such as $(p_1,p_2,p_3,p_4,p_5) \rightarrow (p_5,p_4,p_3,p_2,p_1)$ induce a permutation of the elements after a shift of the loop momentum. The latter transformations do not leave integrals invariant, but map between different integrals parametrised with the above coordinates. Proceed by a unitarity computation as outlined in \cite{Boels:2017gyc}, taking inspiration from especially \cite{Bern:1994cg}.
This requires computation of all cyclically independent cuts by multiplying tree amplitudes, summing over internal polarisations and projecting on the external kinematics using the basis constructed above. The needed tensor algebra is straightforward to implement even on a laptop. There are $142$ tensor basis coefficients. The IBP identities can be solved using {\tt FIRE}, running on large computing resources (details on request). The most complicated reductions needed are ``pentabox'' integrals with five powers of irreducible numerators. Although all five distinct pentaboxes can be parametrised by the coordinates in equation \eqref{eq:fivepointtwoloop}, one of these contains fewer terms in its propagators than the other four, see figure \ref{fig:5ptillus}. This seemingly minor simplification was for us the key to solving the IBP identities. The master integrals for the case at hand are expressible in harmonic polylogarithms \cite{Gehrmann:2015bfy, Papadopoulos:2015jft}. Possible cross-channel poles of the coefficients can be read off from the projectors for the basis choice, \begin{equation}\nonumber A_i \propto \frac{1}{G (p_i + p_{i+1})^2 (p_i + p_{i-1})^2} \quad D_{i,j} \propto \frac{ 1}{G (p_i + p_{j})^2} \, , \end{equation} where $G$ is the determinant of the Grammian matrix for the independent momenta. Cyclic symmetry can be used to simplify the unitarity computation. For the choice of tensor structure basis above, cyclicity and inversion act as a permutation on the basis. A complete set of master integrals with the same property for cyclic permutations can be chosen, forming a set with $31$ orbits of length $5$. If the coefficient of one representative of a particular orbit is known for all tensor structures, then the coefficients of the other integrals in this orbit can be obtained from cyclic symmetry. There are four topologically distinct cuts: one triple cut and three double-double cuts.
The triple cut determines $27$ of the orbits, while the double cuts then can be used to fix the remaining four orbits. The thus-obtained results have been verified to match those of \cite{Abreu:2017hqn, Badger:2017jhb} on a phase-space point by reproducing the master integral coefficients of the result in \cite{Abreu:2017hqn} for all independent helicity configurations. \subsection{Planar, four point, three loops} A good set of integral coordinates is \begin{multline} \left\{( l_1 - l_2)^2,( l_2 - l_3)^2,( l_1 - l_3)^2,\right. \\ ( l_1)^2, ( l_1 - p_3)^2, ( l_1 + p_1 + p_2)^2, ( l_1 + p_2)^2,\\ ( l_2)^2, ( l_2 - p_3)^2, ( l_2 + p_1 + p_2)^2, ( l_2 + p_2)^2,\\ \left. (l_3)^2, ( l_3 - p_3)^2, ( l_3 + p_1 + p_2)^2, ( l_3 + p_2)^2 \right\} \, . \end{multline} This set has a manifest order $6$ permutation symmetry from exchanging the three loop momenta. A cyclic transformation $p_i \rightarrow p_{i+1}$ or an inversion $(p_4 \leftrightarrow p_1), (p_2 \leftrightarrow p_3)$ induces a permutation of the elements of this basis for integrals which follows directly from their one loop origin. The factorised external kinematics project (cuts of) loop amplitudes onto scalar basis coefficients. For four gluons there are $10$ different basis coefficients, falling into $5$ distinct orbits of the cyclic group. The projectors are the source of cross-channel poles, where both $A$ and $D$ are proportional to the determinant of the Grammian matrix for all independent external momenta, which here is the product of Mandelstam invariants, $\propto \, s t u $. {\tt FIRE5} can be used to address the IBP problem. There are in total $81$ master integrals, for which an analytic expression is known in principle \cite{Henn:2013fah}.
\subsection{Planar and non-planar leading singularities through four loops} An interesting quantity that has played a central role in maximally supersymmetric Yang-Mills theory, for instance in the amplituhedron \cite{Arkani-Hamed:2013jha} picture, is the notion of leading singularity \cite{Cachazo:2008vp}: the residue of a scattering amplitude after cutting all propagators. For pure Yang-Mills theory as well as for Einstein-Hilbert gravity this is a product of three point amplitudes, sewn together according to a cut trivalent graph at the appropriate loop order. Since cutting external propagators corresponds to tree singularities, one can restrict to cutting $1$-particle irreducible graphs. We have checked, using the graph generator DiaGen \cite{diagen}, that the tensor algebra for all (planar and non-planar) leading singularities of the five point three loop and four point four loop gluon amplitudes is straightforward. This shows tensor algebra is no longer a bottleneck for any further progress. \section{Discussion} This Letter presents fresh insight into an old problem: how to compute scattering amplitudes in physically interesting theories. In particular, the factorised multi-copy basis constructed here cleanly exposes kinematic singularities in master integral coefficients, which clearly deserve further study. Most pressing is the question to what extent leading singularities determine complete amplitudes. By the relations uncovered in this Letter, these coefficients certainly constrain the non-leading-singularity master integrals. Since the relations require only knowledge of differential equations, this involves a much simpler IBP reduction problem, potentially bypassing a current bottleneck for computation. Knowing in advance the kinematic singularity structure of master integral coefficients alone is useful.
It would furthermore be interesting to explore the connection through Fuchsian form differential equations to the activity launched by \cite{Henn:2013pwa}: currently much more is known about integrals than about complete scattering amplitudes. Although the examples in this Letter have focused on planar amplitudes of massless gluons, extensions to more general matter are straightforward. Massive scalars are already covered by the analysis above. An easy further conjecture is that graviton scattering amplitudes can be expressed in a factorised tensor structure basis as well, including $D$-type elements combining the `left' and `right' polarisations. This is certainly correct for four point amplitudes, with $513$ basis elements and a factorised inverse of the P-matrix constructed as for gluons. Massive matter, a subject whose surface was only scratched here for tadpole coefficients, should be the focus of major attention. Experimental needs also motivate the study of complete cross-sections: the presented technology can elucidate analytic computations, especially for the needed intricate cancellation of divergences beyond the one loop order. The structure of the integral coordinates is intriguing in this context. While this Letter already presents cutting-edge applications for analytic computations, there is considerable room for improvement, especially for IBP reduction. Our results were obtained with an older public code, paired with our observation on good integral coordinates for IBP reduction. Several groups have been working on IBP reductions with promising first results, especially where finite-field methods and cuts are employed, see e.g. \cite{Ita:2015tya, Larsen:2015ped, Georgoudis:2016wff, Abreu:2017hqn, Boehm:2017wjc, Boehm:2018fpv}. Combining our insights for tensor structures with these developments has the potential to truly revolutionise calculational power for explicit as well as collider-relevant quantum field theory predictions.
\begin{acknowledgments} The authors would like to thank Yang Zhang for discussions as well as Ben Page and Harald Ita for help in numerical confirmation of results for planar five point two loop amplitudes. RB would like to thank Vladimir Smirnov, Sven-Olaf Moch, Oleksandr Gituliar and Bernd Kniehl for feedback and encouragement. QJ is supported in part by the Chinese Academy of Sciences (CAS) Hundred-Talent Program, by the Key Research Program of Frontier Sciences, CAS, and by Project 11647601 supported by NSFC. This work was supported by the German Science Foundation (DFG) within the Collaborative Research Center 676 ``Particles, Strings and the Early Universe''. \end{acknowledgments}
\section{\label{sec:intro}Introduction} The topic of plasma expansion along a diverging magnetic field is of interest in a number of active research fields in astrophysical \cite{ref_1,ref_2} and laboratory plasmas \cite{ref_prop2,ref_4,ref_9}. Plasma flow along a diverging magnetic field leading to magnetic reconnection is also a well-known astrophysical event \cite{ref_3,ref_rec}. Of particular interest is the helicon plasma source with expanding plasma geometry and diverging magnetic field, due to its potential application as a plasma thruster \cite{ref_prop1,ref_prop2,ref_prop3} for space vehicles. The efficiency of thrust generation in such a device critically depends on the radial profiles of plasma density and temperature, and of the electric and magnetic fields \cite{ref_azimu_Jd_1}. These profiles, except that of the magnetic field, have been studied experimentally \cite{ref_7,ref_8,ref_9} and found to vary substantially with the generation of diamagnetic current, Hall current and additional ionization in the expanding diffusion chamber. In the magnetic nozzle geometry, one of the interesting aspects observed \cite{ref_4,ref_5,ref_17} and supported by particle-in-cell simulation \cite{ref_18} is that a hollow density profile is generated in the expansion chamber. This is of serious concern, as such a structure within a magnetically expanding plasma causes a reduction in the total thrust \cite{ref_7,ref_8}; it is therefore necessary to avoid or remove the hollowness of the density profile at the nozzle throat to increase the thrust efficiency. However, the understanding of the resulting conical density profile in the expanding plasma is still not comprehensive and demands an exploration of the individual roles played by the Hall current, the diamagnetic current and the source of ionization. In the helicon plasma source with expanding plasma geometry and diverging magnetic field, an off-axis energetic electron component has been observed experimentally \cite{ref_4,ref_5}.
These electrons, created by skin heating \cite{ref_15,ref_16} in the source region near the location of the rf antenna, are transported from the source to the expansion chamber via the last diverging magnetic field lines emerging from the open exit of the source chamber. They are speculated to play a role in the formation of the hollow density profile \cite{ref_4,ref_5} through off-axis ionization. However, since the ionization length of these fast electrons is substantially larger than the system dimensions, the proposed explanation needs further investigation. The formation of the hollow structure is also explained in an alternative manner by the radial transport of plasma induced by the radial electric field generated because of magnetized electrons and unmagnetized ions in the expansion chamber. In this mechanism an azimuthal current is proposed to be driven by the $\bold E\times \bold B$ drift (Hall current), which in the presence of the axial magnetic field causes radial plasma transport. This explanation has been supported in one experiment \cite{ref_17} and by particle-in-cell simulation \cite{ref_18}. On the other hand, in a previous experiment \cite{ref_19} with our helicon plasma device \cite{ref_20,ref_21} the hollow density structure was observed in the expansion chamber; however, in contrast to the above explanation, it was seen that the hollow profile is created even in the absence of the radial electric field. In that experiment \cite{ref_19} the distinct roles of the magnetic and geometric expansions were studied, and it was found that it is the magnetic expansion that plays the dominant role in the hollow density formation in the presence of the energetic tail electrons. In the magnetic expansion region these electrons rotate rapidly in the azimuthal direction due to the gradient-B drift. As a consequence their confinement is sufficiently enhanced to allow impact ionization of the neutrals, which causes the hollow density structure.
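The speed of this azimuthal rotation can be estimated from the grad-B drift, $v_{\nabla B}=W_\perp/(eBL_B)$ with gradient scale length $L_B=B/|\nabla B|$. The numbers below (tail-electron energy, field strength, gradient scale) are illustrative assumptions, not measured values from this experiment:

```python
import math

e = 1.602e-19   # elementary charge, C

# Illustrative parameters (assumptions): a ~20 eV tail electron in a ~100 G
# field whose gradient scale length L_B = B/|grad B| is ~10 cm.
W_perp = 20 * e   # perpendicular kinetic energy, J
B = 0.01          # magnetic field, T (100 G)
L_B = 0.1         # gradient scale length, m

# Grad-B drift speed: v = (m v_perp^2 / 2) / (e B) * |grad B| / B = W_perp / (e B L_B)
v_gradB = W_perp / (e * B * L_B)

# Time for one azimuthal revolution at r = 5 cm: of order ten microseconds,
# so tail electrons can circulate many times while in the expansion region.
r = 0.05
tau = 2 * math.pi * r / v_gradB
print(f"v_gradB ~ {v_gradB:.2e} m/s, revolution time ~ {tau:.1e} s")
```

For these assumed numbers the drift speed is of order $10^4$ m/s, consistent with a fast azimuthal circulation of the energetic electrons and hence the enhanced confinement described above.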
After the crucial role of the gradient-B drift in the hollow density formation was established in the previous experiment \cite{ref_19}, it is interesting to ask whether the absolute strength of the magnetic field has any role to play. In the present experiment the magnetic field is varied, which effectively changes the ion Larmor radii in both the source and the expansion chambers, and the effect is examined. We observe that the center-peaked radial plasma density profile in the magnetically expanding plasma transforms to the hollow profile when the external magnetic field strength is sufficiently increased, keeping the other experimental operating parameters the same. The transition occurs when the ions become magnetized in the expansion chamber, i.e., when the ion Larmor radius becomes smaller than the chamber radius. So, effectively, by changing the magnetic field, the plasma density profile can be made either center-peaked, flat, or hollow. On the other hand, the density in the source chamber always remains center-peaked, irrespective of the magnetic field in our experiment. Our observations may lead to an effective control of the plasma density profile with the external magnetic field, which may help to improve the understanding and efficiency of helicon source based thrusters. In the next section, the experimental device and the setup for the experiment are described along with the diagnostics. Section III presents the experimental results obtained for the electron parameters in both the source and expansion chambers. The results are discussed in Section IV, and finally a summary and conclusions are given in Section V. \section{\label{sec:setup}Experimental Setup} The present experiment is carried out in a linear device based on a helicon plasma source, shown schematically in Fig. \ref{fig:1}. The details are described in Refs. 19 and 20. The vacuum system consists of two cylindrical chambers.
The source chamber, 9.5 cm in inner diameter and 70 cm long, is made of borosilicate glass and closed at one end with an insulating (pyrex) plate. The other end is connected to a 50 cm long stainless steel expansion chamber of 20 cm inner diameter, which is closed at the far end by a grounded SS plate. The whole system is evacuated to a base pressure of $1\times10^{-6}$ mbar using a diffusion pump connected to the expansion chamber. Argon is used as the working gas in the pressure range of $0.7 - 3\times10^{-3}$ mbar. An 18 cm long right-helical antenna, placed around the source chamber and energized by a $13.56$ MHz radio frequency power generator through an L-type impedance matching network, produces the plasma. The reflected power is kept below 2$\%$ for all the experiments. The location of the antenna center is defined as $z = 0$ and all other axial locations are referenced to the antenna center, as shown in Fig. \ref{fig:1}. \begin{figure} \centering \includegraphics[width = \linewidth]{systemfigure_new.eps} \caption{Schematic of the helicon plasma experimental setup.} \label{fig:1} \end{figure} Four forced water-cooled electromagnet coils, as shown in Fig. \ref{fig:1}, are used to generate an axial magnetic field ($B_0$). They produce a diverging magnetic field in the expansion chamber, as shown in Fig. \ref{fig:2}. In the present experiment the electromagnet coil current (direct current), $I_B$, is varied up to 174A, yielding a maximum magnetic field strength of 325G at $z = 15$ cm and 290G at the antenna center ($z = 0$ cm). The magnetic field lines simulated using the Poisson Superfish software and the magnetic field strength for $I_B$ = 174A are shown in Figs. \ref{fig:2}a and \ref{fig:2}b, respectively.
\begin{figure} \centering \includegraphics[width = \linewidth]{174A_field_new.eps} \caption{(a) Simulated magnetic field lines and (b) field strength at 174A DC current.} \label{fig:2} \end{figure} In order to understand the physics behind the formation of the hollow density profile in the expansion chamber, the radial behavior of the electron density, $n_e$, and temperature, $T_e$, is determined using an rf compensated Langmuir probe \cite{ref_22}. The probe tip is made of a cylindrical tungsten wire, 1 mm in diameter and 4 mm in length. For rf compensation, another floating electrode is placed near it to sample the local plasma fluctuations. This compensating electrode, 2.5 mm in diameter and 20 mm in length, is made of several close windings of 0.125 mm tungsten wire over the ceramic holder of the cylindrical probe tip. The compensating electrode feeds the rf plasma fluctuations to the probe tip through a 10nF capacitor. Two tiny self-resonating chokes (at the working frequency $f_1$ = 13.56 MHz and its second harmonic 2$f_1$ = 27.12 MHz) and the compensating electrode are placed as close as possible to the probe tip to minimize the stray capacitance at rf frequencies. These self-resonating chokes block any rf current from entering the current measurement circuit. The dimensions of the compensating electrode are chosen such that it produces minimal disturbance to the local plasma while still working at the moderate plasma density of $\sim$ $10^{16}$ $m^{-3}$ within the limits set by the tiny self-resonating rf chokes. Fig. \ref{fig:3} shows the physical dimensions of the rf compensated probe. \begin{figure} \centering \includegraphics[width = \linewidth]{probe_image.eps} \caption{Physical dimensions of the probe tip, auxiliary electrode and self-resonating chokes.} \label{fig:3} \end{figure} Two different rf compensated Langmuir probes of the same collection area are used to measure profiles perpendicular to the external magnetic field in the present experiments.
An L-shaped rf compensated Langmuir probe is used in the expansion chamber; rotation and translation of the probe position it in the radial and axial directions, respectively. \begin{figure} \centering \includegraphics[width= 7cm]{merge.eps} \caption{Langmuir probe trace in the expansion chamber, at $200W, 130A, (r,z) = (5,50)cm$, Argon fill pressure $ = 1\times10^{-3}mbar$. (a) I-V characteristic with ion contribution (red dash line in inset), (b) semi-logarithmic plot of the electron current and linear fits of the hot (red dashed line) and cold (blue dotted line) electron populations (c) Measured EEPF for the same condition.} \label{fig:4} \end{figure} The current-voltage characteristic of the Langmuir probe is collected by sweeping the probe bias from -120V to +70V at a frequency of 2.2Hz. The data is acquired using a 14-bit data acquisition system at a sampling rate of 100kHz with a record length of 50k samples. Fig. \ref{fig:4}a shows a typical I-V trace of the rf compensated Langmuir probe taken in the expansion chamber at an Argon fill pressure of $1\times10^{-3}$ mbar, rf power of 200W and coil current of 130A. It is obtained at the off-axis location of $(r,z) = (5,50)$ cm in the expansion chamber. The electron temperature is estimated from the linear fit to the natural log of the exponential region of the electron current after subtracting the square-fitted ion current (Fig. \ref{fig:4}b). Two separate electron populations, bulk and high energy tail, are identified by the two straight-line regions of the semi-logarithmic plot (Fig. \ref{fig:4}b). The measurement of the electron energy probability function (EEPF) using an analog differentiator also shows two separate electron populations at the off-axis location in the expansion chamber (Fig. \ref{fig:4}c). The temperature of the high energy tail electrons ($T_h$) is determined by fitting a straight line in the linear region of the semi-logarithmic plot for biases much more negative than the plasma potential.
This fitted line is extrapolated to the plasma potential (red dash line in Fig. \ref{fig:4}b) and subtracted from the total electron current to obtain the bulk electron current. Fitting the remaining trace to another straight line yields the bulk electron temperature ($T_c$), as shown by the blue dotted line in Fig. \ref{fig:4}b. The bulk and high energy tail electron densities are found from the respective electron currents at the plasma potential. The plasma potential is measured at the zero crossing of the second derivative of the I-V curve, obtained with an analog differentiator. The fraction of electrons in the high energy tail $(\alpha)$ is equal to the current at the plasma potential due to the tail divided by the electron saturation current \cite{ref_probe_ana1,ref_probe_ana2}. Based on the kinetic definition of temperature, \begin{equation} T_e = \frac{1}{3} m_e \int_{-\infty}^{\infty} v^2 f(v)dv \end{equation} an effective electron temperature can be derived by assuming the electron energy distribution to be a bi-Maxwellian $f(v) = (1-\alpha)f_c(v)+\alpha f_h(v)$, where $f_c(v)$ is the bulk electron Maxwellian distribution and $f_h(v)$ is another Maxwellian distribution function accounting for the enhanced tail of the EEDF. \begin{equation} T_{e,eff} = (1-\alpha) T_c + \alpha T_h \end{equation} From Fig. \ref{fig:4} we get $T_c$ = 5.2 eV, $T_h$ = 12.7 eV and $\alpha$ = 0.21; the resulting effective electron temperature is $T_{e,eff}\simeq$7 eV. In our experiments $T_{e,eff}$ nearly matches (within 10$\%$) the electron temperature ($T_e$) obtained without subtracting the high energy tail contribution, as shown by the black dotted-dash line in Fig. \ref{fig:4}b. In our experiments the ratio of the probe radius to the Debye length, also known as the Debye number, varies within the range 3 to 10. Laframboise theory \cite{ref_Lafram} is used to determine the ion density from the ion current, as it is valid for the above-mentioned values of the Debye number \cite{ref_sayak}.
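The bi-Maxwellian weighting above can be verified with a trivial calculation using the measured values quoted in the text; this is a sketch, and the function name is ours, not from the paper.

```python
# Minimal numerical check of the effective-temperature formula
# T_eff = (1 - alpha)*T_c + alpha*T_h, with the measured values
# T_c = 5.2 eV, T_h = 12.7 eV, alpha = 0.21 quoted in the text.
def effective_temperature(t_c, t_h, alpha):
    """Bi-Maxwellian effective electron temperature in eV."""
    return (1.0 - alpha) * t_c + alpha * t_h

t_eff = effective_temperature(5.2, 12.7, 0.21)
print(f"T_e,eff = {t_eff:.2f} eV")  # ~6.8 eV, i.e. ~7 eV as stated in the text
```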
The algorithm adopted for the determination of the ion density from the ion current using the Laframboise theory is described in Ref. 25. \section{\label{sec:results} Experimental Results:} The radial profiles of the electron density, $n_e$, and temperature, $T_e$, determined using the rf compensated Langmuir probe, are obtained at two different axial locations, namely z = 31 cm and 50 cm. These locations are chosen such that one lies before ($z_{before}$ = 31 cm), and the other after ($z_{after}$ = 50 cm), the magnetic and geometric expansions (Fig. \ref{fig:1}). For an electromagnet coil current of $I_B$ = 174A, the magnetic field strength at these two axial locations is 300G and 190G, respectively. To observe the evolution of the radial profiles, these are obtained for different applied magnetic field strengths and are shown in Fig. \ref{fig:6}. Four values of the electromagnet coil current are chosen for this purpose: $I_B$ = 45A, 87A, 130A and 174A. Figs. \ref{fig:6}a and b show the profiles of the electron density and temperature at $z_{before}$, whereas Figs. \ref{fig:6}c and d show those at $z_{after}$, at a fixed rf power of 200W and argon pressure of $1\times 10^{-3}$ mbar. From Fig. \ref{fig:6}a it is seen that the radial profiles of the plasma density in the source chamber at $z_{before}$ have a peak on axis for all values of $I_B$, and the ratio of the center and edge ($r\sim4$ cm) densities decreases as $I_B$ is increased. Here the effective electron temperature is nearly the same on axis for all values of $I_B$ and gradually increases towards the edge. The effective electron temperature peaks at an outer radial location and the peak value increases with $I_B$ (Fig. \ref{fig:6}b). The increase in the plasma density at the outer edge follows the increase in the effective electron temperature there. It is seen from Figs. \ref{fig:6}a and b that the radial profiles of the plasma parameters in the source chamber are not substantially different above an $I_B$ value of 87A.
\begin{figure} \centering \includegraphics[width= 7cm]{B0_new2.eps} \caption{Plasma density (a,c) and electron temperature (b,d) at z = 31cm and z = 50cm, respectively, at rf power 200W and $1\times10^{-3}$ mbar argon fill pressure, for $I_B$ = 45A (solid triangles), 87A (solid circles), 130A (solid squares) and 174A (solid stars). Data are spline fitted for representation.} \label{fig:6} \end{figure} On the other hand, in the expansion chamber at $z_{after}$, the plasma density remains peaked at the center for $I_B$ = 45A but becomes hollow for $I_B$ values greater than 80 A (Fig. \ref{fig:6}c). The density peak occurs off axis at $r\sim~$5 cm. At this position ($z_{after} = 50$ cm) the magnetic field is diverging (Fig \ref{fig:2}). The degree of hollowness, that is, the ratio of the peak-to-center density, also increases with $I_B$. The electron temperature profiles follow the same behavior with $I_B$ as in the source chamber, that is, the temperature peaks at an outer radial location (Fig. \ref{fig:6}d). The ratio of the peak-to-center electron temperature increases with $I_B$. The peak location more or less coincides with that of the density here at z = 50 cm. The electron temperature remains nearly the same (about 4.5 $\pm$ 1 eV) on axis at the locations before and after the magnetic expansion; however, it becomes significantly different at outer radial locations. It seems that the electrons with temperatures of 9-12 eV at the outermost radial location ($r\sim$4 cm) in the source chamber are not available in the expansion chamber for any value of $I_B$. The electrons in the expansion chamber at z = 50 cm have a maximum temperature of 6 - 8 eV at $r\sim$5 cm, which corresponds to the electron temperature at $r$ = 3 - 3.5 cm in the source chamber (z = 31 cm).
\begin{figure} \centering \includegraphics[width= 7cm]{Power_var_new2.eps} \caption{Plasma density (a,c) and electron temperature (b,d) at z = 31cm and z = 50cm, respectively, for fixed $I_B$ = 130A and argon fill pressure $1\times10^{-3}$mbar, for different rf powers: 100W (solid triangles), 200W (solid circles) and 300W (solid squares). Data are spline fitted for representation.} \label{fig:7} \end{figure} Having established that the electron density profile is center-peaked in the expansion chamber at low magnetic field but becomes hollow above a minimum magnetic field, we explore whether this hollow nature depends on the other relevant parameters of the plasma source, such as rf power and fill pressure. Fig. \ref{fig:7} and Fig. \ref{fig:8} show the results of these investigations. First, we fix the magnet coil current $I_B$ and the filling pressure of argon at 130A and $1\times10^{-3}$ mbar, respectively, and vary the operating rf power. The radial variations of the plasma density and electron temperature are shown in Fig. \ref{fig:7}. In the source chamber the radial profiles of the plasma density remain center-peaked, as before, for all rf power values; however, the density increases throughout the profile with rf power (Fig. \ref{fig:7}a). The effective electron temperature peaks at a radially outward location (at around 4 cm) and does not vary substantially with rf power (Fig. \ref{fig:7}b). In the expansion chamber the plasma density shows a hollow profile for all rf powers (Fig. \ref{fig:7}c), though the overall density increases to some extent with rf power. The electron temperature is also peaked radially outward at r $\sim$ 5 cm (Fig. \ref{fig:7}d) and likewise does not vary much with rf power.
\begin{figure} \centering \includegraphics[width= 7cm]{Pressure_var_sir.eps} \caption{Plasma density (a,c) and electron temperature (b,d) at z = 31cm and z = 50cm, respectively, for rf power 200W and $I_B$ = 130A, for different pressures: $0.7\times10^{-3}$ mbar (solid triangles), $1\times10^{-3}$ mbar (solid circles) and $3.3\times10^{-3}$ mbar (solid squares). Data are spline fitted for representation. } \label{fig:8} \end{figure} Next, the magnet coil current $I_B$ and the rf power are fixed at 130A and 200W, respectively, and the filling pressure of argon is varied. The resulting radial plasma density and electron temperature profiles are depicted in Fig. \ref{fig:8}. In the source chamber (z = 31 cm) the plasma density remains center-peaked, as before, for all pressures (Fig. \ref{fig:8}a), and the density increases throughout the radius with increasing filling pressure, as expected. The electron temperature profiles (Fig. \ref{fig:8}b) also remain peaked at the radially outer location, but the temperature is reduced throughout the radius with increasing pressure. In the expansion chamber the hollowness in the density profile remains and the density increases over the whole profile with filling pressure, but with increasing pressure the hollowness tends to be suppressed (Fig. \ref{fig:8}c). Similar behavior is observed for the electron temperature (Fig. \ref{fig:8}d), which remains peaked off-axis but decreases with increasing pressure at all radii. \section{\label{sec:discussion} Discussion:} The experimental results presented in Fig. \ref{fig:6} to Fig.
\ref{fig:8} can be summarized into two broad aspects: A) irrespective of the value of the electromagnet coil current, the radial profiles of the electron temperature remain peaked off-axis in both the source and expansion chambers, though the values are different, and B) the electron density is center-peaked in the source chamber irrespective of the coil current; however, in the expansion chamber it becomes hollow above a critical value of the coil current. In the following we elaborate on these findings and explore the underlying physics behind these observations. \subsection{\label{sec:offaxis} Off-axis electron heating:} The region of radially outward electron temperature peaking depends solely on the operating conditions of the helicon plasma discharge. Depending on the input operating parameters (rf power, gas pressure and applied magnetic field), a helicon discharge can be in the capacitive (E), inductive (H) or helicon wave (W) mode \cite{ref_10,ref_11,ref_12,ref_13}. The radial structures of the plasma density and electron temperature are also dictated by the dominant mode of discharge operation. In our operating conditions the discharge is predominantly in the inductive mode. The off-axis peaking of the electron temperature in the source chamber at z = 31 cm (Fig. \ref{fig:6}-\ref{fig:8}) indicates that a local electron heating mechanism must exist near the radial plasma boundary. In an insulating source discharge tube with the helicon antenna placed around it, a skin layer is formed near the radial boundary where most of the rf power is transferred to electrons \cite{ref_15,ref_16} and the plasma is produced by electron impact ionization. The collisionless skin depth ($\delta = \frac{c}{\omega_p}$) at a plasma density of $3\times10^{16}m^{-3}$ is $\sim$2.5 cm, which is nearly half the radius of our source chamber.
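The quoted skin depth can be reproduced with a short back-of-the-envelope calculation. This is a sketch using CODATA constants; the modest difference from the ~2.5 cm quoted in the text is within the rounding and density uncertainty.

```python
import math

# Collisionless skin depth delta = c / omega_pe for the quoted
# plasma density n_e = 3e16 m^-3.
C = 2.998e8       # speed of light, m/s
E = 1.602e-19     # elementary charge, C
ME = 9.109e-31    # electron mass, kg
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def skin_depth(n_e):
    """Collisionless skin depth c/omega_pe in meters."""
    omega_pe = math.sqrt(n_e * E**2 / (EPS0 * ME))
    return C / omega_pe

delta = skin_depth(3e16)
print(f"delta = {delta*100:.1f} cm")  # a few cm, comparable to half the source radius
```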
Because of the external magnetic field, the bulk electrons are unable to diffuse freely out of the skin layer; in other words, the electrons present in the skin layer are restricted by the cross-field diffusion mechanism. This is why the electron temperature profiles in the source region (Fig. \ref{fig:6} - \ref{fig:8}) are peaked off-axis. The increase of the off-axis electron temperature with the external magnetic field (Fig. \ref{fig:6}b) is due to the confinement of these electrons in the skin layer by their small gyroradii, which are about 0.44 mm for an average energy of 12 eV in 300G (z = 31 cm). It is generally seen that some of the electrons in this region, instead of losing energy by ionization, acquire high energy and form a tail in the electron energy distribution in the source chamber near the antenna location. These high energy electrons are transported by the last diverging peripheral magnetic field lines emerging from the open exit of the source into the expansion chamber, as shown in Fig. \ref{fig:2}a. The electrons that are very close to the source radial boundary at r $\sim4$ cm are not transported into the expansion chamber: being tied to the magnetic field lines, they follow the field curvature (Fig. \ref{fig:2}a), hit the source peripheral wall near the open exit and are lost. Hence the electron energy distribution including the high energy tail (Fig. \ref{fig:4}) in the expansion chamber at r $\sim$5 cm corresponds to that at the radial location r $\sim$ 3 - 3.5 cm in the source chamber (shown by the circle in Fig. \ref{fig:2}a). The external magnetic field plays an important role in maintaining the electron temperature gradient in our experiment, since the inhibition of cross-field diffusion restricts interaction among the electrons of the different regions.
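The small electron gyroradius invoked above can likewise be estimated directly; a sketch, using the 12 eV average energy and 300 G field quoted in the text (the ~10% difference from the quoted 0.44 mm reflects how the average perpendicular speed is defined).

```python
import math

# Electron Larmor radius r_L = m_e * v / (e * B) for a 12 eV electron in 300 G,
# taking v from the full kinetic energy, v = sqrt(2*E_kin/m_e).
E = 1.602e-19   # elementary charge, C
ME = 9.109e-31  # electron mass, kg

def electron_gyroradius(energy_eV, B_tesla):
    """Electron Larmor radius in meters for the given kinetic energy."""
    v = math.sqrt(2.0 * energy_eV * E / ME)
    return ME * v / (E * B_tesla)

r_L = electron_gyroradius(12.0, 300e-4)  # 300 G = 0.03 T
print(f"r_Le = {r_L*1e3:.2f} mm")  # ~0.4 mm, i.e. far smaller than the skin layer
```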
\begin{figure} \centering \includegraphics[width= 7cm]{Vp_B0.eps} \caption{Radial plasma potential at z = 50cm in the expansion chamber at 200W and $1\times10^{-3}$mbar argon pressure, for $I_B$ = 45A (solid triangles), 87A (solid circles), 130A (solid squares) and 174A (solid stars). Data are spline fitted for representation.} \label{fig:12} \end{figure} \subsection{\label{sec:hollow} Hollow density formation:} The formation of the hollow density profile in the expansion chamber can be due either to radially outward transport of plasma or to additional off-axis ionization. Fig. \ref{fig:12} shows the local plasma potential profiles under the same conditions as in Figs. \ref{fig:6}c and d. The radial electric field in the expansion chamber at z = 50 cm is very weak, $ < 0.7 V/cm$, and it decreases with increasing external magnetic field. So, the possibility of outward radial transport of plasma due to the $\bold E\times \bold B$ drift can be ruled out in our experiment. In reference 18, we established the presence of additional off-axis ionization in the expansion chamber. It was shown there that the azimuthal rotation of electrons due to the gradient-B drift plays a crucial role in the off-axis ionization. The mechanism is briefly stated here. The temperature of the high energy tail electrons in the expansion chamber ranges from 12-15 eV and their population generally \cite{ref_probe_ana1,ref_probe_ana2} is about 15 - 20 $\%$ of the total electron density. So, there is a substantial number of electrons with energies of 20 - 50 eV present off-axis. However, as their ionizing collision length at an argon pressure of $1\times10^{-3}$ mbar is $\sim150$cm, much larger than the system size, they cannot produce additional plasma off-axis while moving along the axis. On the other hand, the hollowness in the density profile is found only after the magnetic divergence \cite{ref_19}, where a strong gradient in the external magnetic field ($\nabla B$) is present.
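The ~150 cm ionizing collision length quoted above can be checked from the ideal-gas neutral density; this is a rough sketch, and the ionization cross section used here is an assumed representative value for tens-of-eV electrons in argon, not a number from the paper.

```python
# Ionization mean free path lambda_iz = 1 / (n_g * sigma_iz) at 1e-3 mbar argon.
KB = 1.381e-23      # Boltzmann constant, J/K
P_PA = 0.1          # 1e-3 mbar expressed in Pa
T_GAS = 300.0       # room-temperature neutrals, K (assumption)
SIGMA_IZ = 2.5e-20  # assumed argon ionization cross section, m^2 (illustrative)

n_g = P_PA / (KB * T_GAS)     # neutral gas density, m^-3 (~2.4e19)
lam = 1.0 / (n_g * SIGMA_IZ)  # ionization mean free path, m
print(f"n_g = {n_g:.2e} m^-3, lambda_iz = {lam*100:.0f} cm")  # order 150 cm
```

Since the device is well under a meter across, such electrons indeed cannot ionize appreciably in a single axial transit, consistent with the argument in the text.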
It was shown there that these high energy electrons undergo a rapid rotational motion in the azimuthal direction due to the gradient-B drift. The motion is localized near the magnetic field divergence and provides sufficient time for ionizing collisions with the neutrals, leading to additional off-axis ionization within a few rotations. \begin{figure} \centering \includegraphics[width=7cm]{ion_larmor.eps} \caption{Ion Larmor radius at z = 31cm in the source chamber (open squares) and z = 50 cm in the expansion chamber (open circles) for different values of the coil current $I_B$. The magnetic field strength scales as $\sim$1.75 G/A and $\sim$1.1 G/A at z = 31 cm and 50 cm, respectively. The two horizontal dotted-dash and dash lines correspond to the expansion and source chamber radii, respectively.} \label{fig:5} \end{figure} However, the interesting result obtained in the present experiment is that there is a critical value of the magnetic field above which the hollowness in the plasma density in the expansion chamber is found (Fig. \ref{fig:6}c). The radial profile of the plasma density in the expansion chamber is center peaked for a magnetic field of 50G (corresponding to a coil current $I_B$ = 45A). So it seems the grad-B drift effect is an essential condition, but not a sufficient one, to form the radial hollow density profile. In order to investigate the existence of the critical magnetic field, we plot in Fig. \ref{fig:5} the calculated ion Larmor radii at the two axial locations (where the experimental results of Figs. \ref{fig:6}-\ref{fig:8} are obtained) for different coil currents $I_B$. The values of the source and expansion chamber radii are also indicated in the figure for reference. Interestingly, it should be noted here that out of the four values of $I_B$ (45A, 87A, 130A and 174A) for which the profiles of Figs.
\ref{fig:6}-\ref{fig:8} are obtained, only for $I_B$ = 45A is the ion Larmor radius in the expansion chamber close to the chamber radius, that is, the ions are not magnetized there; for all other coil currents the ions are magnetized. The change in the electron density profile depends on the confinement of the ions, i.e., on the ion gyroradius relative to the expansion chamber radius of 10 cm. For a magnetic field of 50G at z = 50 cm, the ion gyroradius is $\sim$ 9 cm, that is, close to the system radius. So, although the high energy electrons are confined and the gradient-B effect produces off-axis ionization, the ions are not magnetized; as a result the extra off-axis plasma is lost to the wall via the ambipolar field. The density remains center-peaked due to quasi-neutrality and the unmagnetized ions. For a magnetic field of 95G (corresponding to 87A) the ion gyroradius becomes $\sim$ 4.8 cm; here the ions are magnetized and the density is flattened on that scale. For still higher values of the magnetic field the ion gyroradius becomes much smaller than the system dimension and the electron density starts to become hollow. \begin{figure} \centering \includegraphics[width=7cm]{ele_pre.eps} \caption{Radial electron pressure $p_e$ profile at z = 50cm in the expansion chamber at 200W rf power and $1\times10^{-3}$mbar argon fill pressure, for $I_B$ = 45A (solid triangles), 87A (solid circles), 130A (solid squares) and 174A (solid stars). Data are spline fitted for representation. } \label{fig:13} \end{figure} The hollowness in the plasma density and the off-axis peaking of the electron temperature lead to a hollow electron pressure ($p_e = n_e k_B T_e$) profile. The radial profile of the electron pressure, calculated from the measured profiles of the plasma density and electron temperature, is shown in Fig. \ref{fig:13}.
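Returning to the ion-magnetization argument above, the quoted gyroradii can be reproduced with a rough estimate. This is a sketch: the ion temperature is not reported in the text, so $T_i$ = 0.25 eV is our assumption, and the fields at z = 50 cm follow the ~1.1 G/A scaling given in the caption of Fig. 5.

```python
import math

# Thermal argon-ion Larmor radius r_Li = m_i * v_th / (e * B),
# with v_th = sqrt(2*T_i/m_i) and an ASSUMED T_i = 0.25 eV.
E = 1.602e-19    # elementary charge, C
M_AR = 6.64e-26  # argon ion mass, kg
T_I_EV = 0.25    # assumed ion temperature, eV (not from the paper)

def ion_gyroradius(B_tesla, t_i_eV=T_I_EV):
    """Thermal Larmor radius in meters for a singly charged argon ion."""
    v_th = math.sqrt(2.0 * t_i_eV * E / M_AR)
    return M_AR * v_th / (E * B_tesla)

# Approximate fields at z = 50 cm for I_B = 45, 87, 130, 174 A (~1.1 G/A).
for B_gauss in (50, 95, 143, 190):
    r = ion_gyroradius(B_gauss * 1e-4)
    print(f"B = {B_gauss:3d} G -> r_Li = {r*100:4.1f} cm")
```

With this assumed $T_i$ the estimate gives ~9 cm at 50 G (comparable to the 10 cm chamber radius, ions unmagnetized) and ~4.8 cm at 95 G (ions magnetized), matching the transition described in the text.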
The electron diamagnetic current ($j_{De} = \frac{1}{B_z}\frac{\partial p_e}{\partial r}$) flows in the anti-diamagnetic direction at around $r\sim5$cm due to the hollow pressure profiles ($I_B$ = 130 and 174A), while $j_{De}$ at $r>5$cm is in the diamagnetic direction. This is consistent with recent measurements \cite{ref_azimu_Jd_1}. The formation of a hollow structure within a magnetically expanding plasma causes a reduction of the total thrust due to the anti-diamagnetic direction of the azimuthal current, since it reverses the role of the Lorentz force in the presence of the radial component of the applied diverging magnetic field \cite{ref_7,ref_Jd_direction}. \section{\label{sec:summary} Summary and Conclusion:} In our helicon plasma source based linear device with expanding plasma geometry and diverging magnetic field, the radial density profile is found to be peaked on-axis and evolves into a hollow profile in the expansion chamber as the diverging magnetic field at that axial location is increased beyond a critical value. It is seen that the hollow density profile is formed only when both the electrons and ions are magnetized at this location, where the presence of the magnetic divergence helps to increase the confinement of the hot electrons through the azimuthal rotation caused by the grad-B drift. On the other hand, irrespective of the plasma operating conditions of the experiment, the radial density profile in the source is center-peaked. The effective electron temperature is peaked radially outward in the source region for all values of the magnetic field due to the rf skin heating effect near the helicon antenna, where a hot electron component is also generated. The hot electrons and the off-axis peaking of the temperature profile are carried to the expansion chamber along the divergent magnetic field lines, and the hollowness becomes more prominent at higher magnetic field. \nocite{*}
\section{Introduction} The \textit{proceedings} are the records of a conference.\footnote{This is a footnote} ACM seeks to give these conference by-products a uniform, high-quality appearance. To do this, ACM has some rigid requirements for the format of the proceedings documents: there is a specified format (balanced double columns), a specified set of fonts (Arial or Helvetica and Times Roman) in certain specified sizes, a specified live area, centered on the page, specified size of margins, specified column width and gutter size. \section{The Body of The Paper} Typically, the body of a paper is organized into a hierarchical structure, with numbered or unnumbered headings for sections, subsections, sub-subsections, and even smaller sections. The command \texttt{{\char'134}section} that precedes this paragraph is part of such a hierarchy.\footnote{This is a footnote.} \LaTeX\ handles the numbering and placement of these headings for you, when you use the appropriate heading commands around the titles of the headings. If you want a sub-subsection or smaller part to be unnumbered in your output, simply append an asterisk to the command name. Examples of both numbered and unnumbered headings will appear throughout the balance of this sample document. Because the entire article is contained in the \textbf{document} environment, you can indicate the start of a new paragraph with a blank line in your input file; that is why this sentence forms a separate paragraph. \subsection{Type Changes and {\itshape Special} Characters} We have already seen several typeface changes in this sample. You can indicate italicized words or phrases in your text with the command \texttt{{\char'134}textit}; emboldening with the command \texttt{{\char'134}textbf} and typewriter-style (for instance, for computer code) with \texttt{{\char'134}texttt}. 
But remember, you do not have to indicate typestyle changes when such changes are part of the \textit{structural} elements of your article; for instance, the heading of this subsection will be in a sans serif\footnote{Another footnote here. Let's make this a rather long one to see how it looks.} typeface, but that is handled by the document class file. Take care with the use of\footnote{Another footnote.} the curly braces in typeface changes; they mark the beginning and end of the text that is to be in the different typeface. You can use whatever symbols, accented characters, or non-English characters you need anywhere in your document; you can find a complete list of what is available in the \textit{\LaTeX\ User's Guide} \cite{Lamport:LaTeX}. \subsection{Math Equations} You may want to display math equations in three distinct styles: inline, numbered or non-numbered display. Each of the three are discussed in the next sections. \subsubsection{Inline (In-text) Equations} A formula that appears in the running text is called an inline or in-text formula. It is produced by the \textbf{math} environment, which can be invoked with the usual \texttt{{\char'134}begin\,\ldots{\char'134}end} construction or with the short form \texttt{\$\,\ldots\$}. You can use any of the symbols and structures, from $\alpha$ to $\omega$, available in \LaTeX~\cite{Lamport:LaTeX}; this section will simply show a few examples of in-text equations in context. Notice how this equation: \begin{math} \lim_{n\rightarrow \infty}x=0 \end{math}, set here in in-line math style, looks slightly different when set in display style. (See next section). \subsubsection{Display Equations} A numbered display equation---one set off by vertical space from the text and centered horizontally---is produced by the \textbf{equation} environment. An unnumbered display equation is produced by the \textbf{displaymath} environment. 
\section{Discussion} Our adaptive multi-resolution approach learns models at a fixed \ti{global} resolution at each stage. An open problem remains how to learn at different resolutions \ti{locally}, i.e., by fine-graining individual cells rather than all of them.
Such methods could be used to learn the dynamics in systems that intrinsically behave differently at different resolutions, as many dynamical systems do. Local gradient statistics could be used in such cases to decide when, and which, cells to fine-grain to optimize model performance. \section{Introduction} \label{sec:intro} We study the problem of learning high-dimensional tensor models from large-scale high-resolution spatial data. Such models can compactly describe \emph{multi-way} correlations between predictive features that are spatially distributed. For example, in competitive basketball play, given the positions of all players on the court, we can predict \emph{whether a player will shoot at the basket} using a high-dimensional tensor model. Previous research on individual player models has demonstrated the effectiveness of matrix latent factor models \cite{Yue2014}. For joint team behavior, we can model a basketball player's profile by learning the latent factors among offensive team positions, defending team positions and the player's own profile, which can be captured by a \emph{tensor latent factor model}. This paper addresses two major challenges for learning spatial tensor models. First, we study \emph{scalability}, as high-resolution spatial tracking data is large-scale. Computational methods usually involve fine-grained spatial discretization, leading to high-dimensional tensors that are computationally slow and expensive to train. Second, we are interested in learning \emph{interpretable} models: apart from accurate prediction, we also wish to learn latent factors that can provide insights for domain experts. For instance, the latent factors in basketball player models could correspond to player performance profiles, revealing hidden traits and commonalities among players.
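As a concrete sketch of such a model, the prediction contracts a weight tensor with two discretized spatial feature vectors; the rank-$k$ factored form below is a CP-style special case of the factorizations discussed later, and all array names and sizes are invented for illustration:

```python
import numpy as np

def predict_full(W, b, phi, psi):
    """Full-rank model: f(x; W)_a = sum_{b,c} W[a,b,c] * phi[b] * psi[c] + b[a]."""
    return np.einsum('abc,b,c->a', W, phi, psi) + b

def predict_factored(A, B, C, b, phi, psi):
    """Rank-k factored model with W[a,b,c] = sum_k A[a,k] * B[b,k] * C[c,k]."""
    return (A * (B.T @ phi) * (C.T @ psi)).sum(axis=1) + b

rng = np.random.default_rng(0)
n_a, m, m2, k = 3, 8, 6, 2                # outputs, two spatial feature sizes, rank
A = rng.normal(size=(n_a, k))
B = rng.normal(size=(m, k))
C = rng.normal(size=(m2, k))
b = rng.normal(size=n_a)
phi, psi = rng.normal(size=m), rng.normal(size=m2)

W = np.einsum('ak,bk,ck->abc', A, B, C)   # full tensor reconstructed from factors
assert np.allclose(predict_full(W, b, phi, psi),
                   predict_factored(A, B, C, b, phi, psi))
```

The factored form never materializes the full tensor, which is what makes high spatial resolutions affordable: its cost is linear in the feature sizes rather than in their product.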
Towards scalable learning of interpretable tensor latent factor models, a key technical challenge is their non-convex nature \cite{yu2016learning}: optimization algorithms can converge to different local optima, depending on the model initialization. Hence, a good initialization is critical for obtaining accurate and interpretable latent factors. For instance, Figure \ref{fig:bb_bad_init_noise} shows two basketball shooting profiles with similar prediction accuracy, learned using the same optimization algorithm (gradient descent). Using a good initialization yields an interpretable solution, while a random initialization does not. \begin{figure}[!t] \centering \includegraphics[width=0.45\linewidth]{clean} \includegraphics[width=0.45\linewidth]{dirty} \caption{ Comparison of learned latent factors for basketball shot prediction. Left: interpretable latent factors learned by our multi-resolution method. Right: uninterpretable factors learned by training at a fixed resolution. Despite similar accuracy, the uninterpretable profile is noisy and not spatially smooth, whereas the interpretable profile shows basketball shooting hotspots that are relevant for prediction.} \label{fig:bb_bad_init_noise} \end{figure} \paragraph{Our Contributions.} We thus ask how to \ti{efficiently} learn tensor models from high-resolution spatial data that are both \ti{accurate} and \ti{interpretable}. In this paper, we present a novel meta-learning algorithm, multi-resolution tensor learning (\texttt{MRTL}{}), that can 1) learn a good initialization, 2) automatically control when to fine-grain, and 3) easily scale to high-dimensional tensors. More importantly, the multi-resolution learning scheme yields interpretable solutions that are spatially smooth and relevant for the prediction tasks. \texttt{MRTL}{} is based on three key insights.
First, to obtain good initializations, instead of directly learning the latent factors from a random initialization, we can first learn a full-rank parameter tensor and use the factors of this parameter tensor as initialization. An analogous approach for matrix latent factor models has been shown in \cite{Miller2014,Yue2014} to yield interpretable factors capturing cohesive spatial semantics. Second, to avoid the curse of dimensionality, we leverage the characteristics of spatial data and learn iteratively at \ti{multiple resolutions}. At a high level, the multi-resolution training has two stages: in the first stage, we learn a full tensor model at multiple resolutions and factorize it as initialization for the latent factor model. In the second stage, we train the latent factor model at multiple resolutions, starting at a coarse resolution and fine-graining the latent factors during training. We prove that the resulting algorithm is faster than fixed-resolution training by a factor logarithmic in the contraction factor of the optimization and in the terminal estimation error. Third, we investigate fine-graining criteria. One simple criterion is to measure training loss convergence. However, it requires a preset threshold, which does not easily reflect the spatial distribution at different granularities. We thus take an information-theoretic approach by monitoring the entropy of the gradient distribution for every grid cell, which we refer to as \ti{spatial entropy}. We found this fine-graining criterion to be more effective than loss convergence. In summary, our main contributions are: \iitem{ \item We present \texttt{MRTL}{}: a multi-resolution meta-algorithm to learn spatial latent factor models, and a number of instantiations using various fine-graining criteria. \item We theoretically analyze \texttt{MRTL}{} and show that it converges to a similar accuracy faster than training at a fixed resolution. \item We empirically show that using gradient statistics (e.g.
gradient entropy) is an effective transition-control method across hyperparameters and can outperform loss convergence. \item We empirically demonstrate orders-of-magnitude faster learning than conventional fixed-resolution training. \item We show that our approach reliably and efficiently yields interpretable spatial latent factor models on real-world basketball and animal tracking data. } \section{Multi-resolution Learning} \label{sec:mrl} We now describe our approach for training interpretable spatial latent factor models, such as the tensor model $\T{W}$ in \refn{eq:tensor}. In this section, for simplicity we consider only fine-graining a single dimension of $\T{W}$, although our analysis can be easily generalized. \paragraph{Initialization from Factorization.} The non-convex nature of the tensor model requires a good initialization. We obtain this by factorizing a trained full-rank tensor model as in \refn{eq:tucker} and using the factors as initializations. This approach has been shown to be effective in learning predictive spatial patterns \cite{Miller2014,Yue2014}. We make similar observations in our experiments. \paragraph{Iterative Fine-graining.} Learning a high-dimensional tensor latent factor model directly at high resolution is generally intractable in memory. To make training feasible, the main idea of our algorithm is iterative fine-graining. We will denote the \emph{resolution (i.e., size of a grid cell)} as $d_i$, which is inversely proportional to the number of grid cells $m_i$. During the first (full-rank) phase, we start training with a \ti{full-rank model} $\T{W}_{d_0}$ that operates at a coarse resolution $d_0$ and increase the spatial resolution of the data and model using a sequence of resolutions $d_0, d_1,\ldots,d_f$. That is, instead of learning $\T{W} \in {\mathbb R}^{\ldots\times m_f \times \ldots}$, we learn a sequence of $\{\T{W}_{d_i} \in {\mathbb R}^{\ldots\times m_i \times \ldots} \}$.
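The transition between resolutions can be sketched with a simple upsampling step when the resolution doubles. The sketch below assumes each coarse grid cell splits into two fine cells and the coarse weight is copied into both; whether an additional rescaling is needed depends on how the features are discretized, so this is only one plausible instantiation:

```python
import numpy as np

def finegrain(W, axis=-1):
    """Initialize the next-resolution weights by copying each coarse grid cell
    into the two fine cells it splits into along the spatial axis."""
    return np.repeat(W, 2, axis=axis)

W_coarse = np.arange(6.0).reshape(2, 3)   # 2 non-spatial rows x 3 spatial cells
W_fine = finegrain(W_coarse, axis=1)      # 2 x 6: spatial resolution doubled
assert W_fine.shape == (2, 6)
assert np.allclose(W_fine[:, 0], W_fine[:, 1])   # each coarse cell copied twice
```

Because the coarse solution is already close to the fine-resolution optimum, only a few corrective iterations are needed after each such transition.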
When moving to a higher resolution, we use the learned weights $\T{W}_{d_i}$ to initialize the next weights $\T{W}_{d_{i+1}}$, by scaling up $\T{W}_{d_i}$ along the spatial dimension. We leverage the fact that spatial data can be viewed at multiple resolutions: lower-resolution models provide good initializations for higher-resolution models. To determine when to transition, we study different fine-graining criteria, which we discuss in Section \ref{ss:sgd}. In the second phase, we apply a tensor factorization to $\T{W}_{d_f}$ at resolution $d_f$ as initialization, and use a similar fine-graining procedure for the latent factor model up to the final resolution $d_n$. The resolution $d_f$ is determined by considering the memory requirements of the model at different resolutions. This meta-algorithm is outlined in Algorithm \ref{alg:mmt} and depicted in Figure \ref{fig:diagram}. \subsection{Computational Complexity Analysis} We analyze the computational complexity of \texttt{MRTL}{} (Algorithm \ref{alg:mmt}). Intuitively, as most of the training iterations are spent on coarser resolutions with fewer parameters, multi-resolution training is more efficient than fixed-resolution training. In this section, we formalize this intuition and give a rigorous analysis of \texttt{MRTL}{}. The analysis is based on the multi-grid method \cite{stuben2001review} commonly used in partial differential equation analysis. We denote the label $y \in {\mathbb R}^{n_a}$, features $\phi(x) \in {\mathbb R}^{m}$, $\psi(x) \in {\mathbb R}^{m'}$ and weight tensor $\T{W} \in {\mathbb R}^{n_a\times m \times m'}$. The prediction model is: \eq{ f(x;\T{W})_a = \sum_{bc} \T{W}_{abc}\phi(\mathbf{x})_b \psi(\mathbf{x})_c+b_a. } Denoting the objective function by $\loss(x,y, f(x;\T{W}))$, \texttt{MRTL}{} (Algorithm \ref{alg:mmt}) aims to solve the optimization problem: \begin{eqnarray} \min_{\T{W}} {\mathbb E} _{(x,y)\sim P} [\loss(x,y, f(x;\T{W}))].
\end{eqnarray} where $\loss$ is the logistic loss. \texttt{MRTL}{} follows gradient descent: \eq{ \T{W} \leftarrow \T{W}- \lambda \nabla \loss (\T{W}). } We first start with the coarsest resolution $d_0$ (discretization size), compute an initial estimate $\T{W}_{d_0}$, and keep iterating until $\T{W}_{d_0}$ satisfies certain fine-graining criteria. Then we replace $d_0$ with $d_0/2$ and use $\T{W}_{d_0}$ as an initialization for the next level. In general, this iterative fine-graining procedure yields a sequence of discretization sizes $[d_0, d_1, \cdots, d_n]$. Suppose for each resolution $d$ we use the following as the fine-graining criterion: \eq{ \| \T{W}^{t(d)} - \T{W}^{t(d)-1}\| \leq \fr{C_0 d}{\alpha (1-\alpha)}, \label{eqn:fine-grain} } where $t(d)$ is the number of iterations needed at level $d$. The algorithm terminates when the estimation error reaches $\fr{C_0 d}{(1-\alpha)^2}$. We now show that \texttt{MRTL}{} reduces the number of computation steps. This is done by first formulating a gradient descent update as a fixed point iteration operator $F$\footnote{\emph{Stochastic} gradient descent converges to a noise ball instead of a fixed point.}: \eq{ \T{W} \leftarrow F( \T{W}), \hspace{10pt} F:=I-\lambda \nabla \loss. } Assume that $F$ is Lipschitz continuous with a contraction constant $\alpha \in (0,1)$, meaning: \eq{ \| F(\T{W} ) - F(\T{W}') \|\leq \alpha \| \T{W} - \T{W}'\|. } \begin{algorithm}[t!] \caption{\texttt{MRTL}{}: Memory-efficient multi-resolution training with spatial entropy control} \label{alg:mmt} \begin{small} \begin{algorithmic}[1] \STATE Input: Tensor weights $\T{W}_{d_0}$ (e.g. Equation \refn{eq:tensor}), data $D$, features $\Psi$. \FOR{each resolution $d_i \in \brckcur{d_1, \ldots, d_f}$} \STATE \# For definition of \emph{resolution}, see Section \ref{sec:mrl}. \STATE \texttt{SGD-se}{}$\brck{\T{W}_{d_i}}$ \# see Algorithm \ref{alg:sgdse}.
\ENDFOR \STATE \tb{TensorFactors}$_{d_f}$ = \texttt{Factorize}($\T{W}_{d_f}$) \# e.g. $A,B,C$ in Equation \refn{eq:tucker}. \FOR{each resolution $d_i \in \brckcur{d_{f+1}, \ldots, d_n}$} \STATE \texttt{SGD-se}{}$\brck{\tb{TensorFactors}_{d_i}}$ \ENDFOR \\ \RETURN $\tb{TensorFactors}_{d_n}$ \end{algorithmic} \end{small} \end{algorithm} \begin{algorithm}[t!] \caption{\texttt{SGD-se}{}: Stochastic gradient descent with spatial entropy control} \label{alg:sgd-se} \begin{small} \begin{algorithmic}[1] \STATE Input: $\T{W}_{d}$, data-set $D$, features $\Psi$, length-$T$ rolling window gradient buffer $H$. \WHILE{termination condition (e.g. Condition \refn{eq:ent_cond}) not true} \STATE Gradient descent step on $\T{W}_{d_i}$ using minibatch $B$ \STATE Add gradient $\brckcur{g(x,y)}_{(x,y)\in B}$ to $H$ \ENDWHILE \STATE \texttt{Finegrain}($\T{W}_{d_i}$) \\ \RETURN $\T{W}_{d_{i+1}}$ \end{algorithmic} \end{small} \label{alg:sgdse} \end{algorithm} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{images/algorithm.png} \vspace{-0.05in} \caption{Depicting our multi-stage training, which starts with a coarse-grained full-rank model and concludes with a fine-grained latent factor model. See Algorithm \ref{alg:mmt} and Algorithm \ref{alg:sgd-se} for more details.} \label{fig:diagram} \vspace{-0.13in} \end{figure} The following main theorem characterizes the speed-up gained by multi-resolution training w.r.t. the contraction factor $\alpha$ and the terminal estimation error $\epsilon$. \begin{theorem} Suppose the fixed point iteration operator (gradient descent) for the optimization algorithm has a contraction factor (Lipschitz constant) of $\alpha$. Then the multi-resolution training procedure is faster than the fixed-resolution algorithm by a factor of $ \log\fr{1}{(1-\alpha) \epsilon}$, with $\epsilon$ as the terminal estimation error. \label{thm:mmt} \end{theorem} We prove several useful lemmas before proving the main Theorem \ref{thm:mmt}.
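The contraction assumption on $F$ can be checked numerically for a quadratic loss, where $F = I - \lambda\nabla\loss$ is linear and its contraction factor is $\alpha = \max_i |1 - \lambda\sigma_i|$ over the Hessian eigenvalues $\sigma_i$. The toy problem below is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
H = M @ M.T + 0.5 * np.eye(5)              # positive-definite Hessian of a quadratic loss
lam = 1.0 / np.linalg.eigvalsh(H).max()    # step size lambda
alpha = np.abs(1.0 - lam * np.linalg.eigvalsh(H)).max()  # contraction factor of F

def F(w):
    """Fixed-point operator F = I - lam * grad(loss) for loss(w) = 0.5 w^T H w."""
    return w - lam * (H @ w)

w0 = rng.normal(size=5)
wp = rng.normal(size=5)
# F is a contraction: distances shrink by at least a factor alpha < 1
assert alpha < 1.0
assert np.linalg.norm(F(w0) - F(wp)) <= alpha * np.linalg.norm(w0 - wp) + 1e-9

# iterating F converges to the fixed point w* = 0 at a geometric rate alpha^t
w = w0.copy()
for _ in range(50):
    w = F(w)
assert np.linalg.norm(w) <= alpha**50 * np.linalg.norm(w0) + 1e-9
```

The geometric rate $\alpha^t$ is exactly what makes the iteration counts in the lemmas below logarithmic in the target accuracy.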
We first analyze the computational cost of the \emph{fixed-resolution} algorithm. \begin{lemma} Given a fixed point iteration operator with a contraction factor of $\alpha$, the computational complexity of fixed-resolution training for a $p$-order tensor with rank $r$ is \eq{ T = \mathcal{O}\brck{ \fr{1}{|\log \alpha|} \cdot \log \fr{1 }{(1-\alpha)\epsilon} \cdot\brck{\fr{rp}{(1-\alpha)^2\epsilon}}}. } \label{lemma:fixed} \end{lemma} \proof At a high level, we prove this by choosing a small enough resolution $d$ such that the approximation error is bounded with a fixed number of iterations. Let $\T{W}_d^\star$ be the optimal estimate at level $d$ and $\T{W}^t$ be the estimate at step $t$. Then we have \eq{ \| \T{W}^\star -\T{W}^t \| \leq \| \T{W}^\star - \T{W}_d^\star \| + \|\T{W}_d^\star - \T{W}^t \| \leq \epsilon. } Choose a fixed resolution $d$ that is small enough such that \eq{\| \T{W}^\star-\T{W}^\star_d\| \leq \fr{\epsilon}{2},} then the termination criterion $\| \T{W}^\star-\T{W}^\star_d\| \leq \fr{C_0 d}{(1-\alpha)^2}$ gives $d = \Omega ((1-\alpha)^2 \epsilon)$. Initialize $\T{W}^0 =0$ and iterate $t$ times such that \eq{ \fr{\alpha^t}{2(1-\alpha) } \|F(\T{W}^0) \| \leq \fr{\epsilon}{2}. } Since $\T{W}^0 =0$ and $\|F(\T{W}^0) \| \leq 2C$, we obtain \eq{ t\leq \fr{1}{|\log \alpha|} \cdot \log\fr{2C}{(1-\alpha)\epsilon}. } Note that for an order-$p$ tensor with rank $r$, the computational complexity of every iteration in \texttt{MRTL}{} is $ \mathcal{O}(rp/d)$ with $d$ as the discretization size. Hence, the computational complexity of the fixed-resolution training is \eq{ T &= \mathcal{O}\brck{ \fr{1}{|\log \alpha|} \cdot \log\fr{1}{ (1-\alpha)\epsilon} \cdot\brck{\fr{rp}{d}} } \notag\\ &= \mathcal{O}\brck{ \fr{1}{|\log \alpha|} \cdot \log \fr{1 }{(1-\alpha)\epsilon} \cdot\brck{\fr{rp}{(1-\alpha)^2\epsilon}} }.
\notag\qed } In the multi-resolution setting, the spatial weights that are learned can be seen as a distribution that is approximated by its values at finitely many points. Given a spatial discretization $d$, we can construct an operator $F_d$ that learns discretized tensor weights. The next lemma relates the estimation error to the resolution: \begin{lemma}\cite{nash2000multigrid} For each resolution level in $[d_0, d_1, \cdots, d_n]$, there exist constants $C_1$ and $C_2$ such that the fixed point iteration with discretization size $d$ has an estimation error: \eq{\label{lemma:disc} \|F(\T{W}) - F_d(\T{W})\| \leq (C_1 + \alpha C_2 \|\T{W}\|)\, d. } \end{lemma} \proof See \cite{nash2000multigrid} for the details. \qed We have obtained the discretization error for the fixed point operation at any resolution. Next we analyze the number of iterations $t(d)$ needed at each resolution $d$ before fine-graining. \begin{lemma} \label{lemma:iter_level} For every resolution $d \in [d_0, d_1, \cdots, d_n]$, there exists a constant $C'$ such that the number of iterations $t(d)$ before fine-graining satisfies: \eq{ t(d) \leq C'/|\log \alpha|.} \end{lemma} \proof We know that $d_0 = 2d_1=4d_2=\ldots$. At resolution $d$, by using the estimate from the last level as initialization, we have: \eq{ \T{W}_{2d}^{t(2d)} = \T{W}_d^0, } where we use the subscript to index the solution at a given resolution.
By combining Lemma \ref{lemma:disc} and the fine-graining criterion in (\ref{eqn:fine-grain}), we can guarantee the estimation error per iteration: \eqn{ &\|F_d(\T{W}_d^0) - \T{W}_d^0\| = \|F_d(\T{W}_{2d}^{t(2d)}) - \T{W}_{2d}^{t(2d)} \| \notag\\ &\leq \|F_d(\T{W}_{2d}^{t(2d)}) -F(\T{W}_{2d}^{t(2d)})\| + \|F(\T{W}_{2d}^{t(2d)}) - \T{W}_{2d}^{t(2d)} \| \notag\\ &\leq (C_1 + \alpha C_2 \|\T{W}_{2d}^{t(2d)} \|)d \notag\\ & + \|F(\T{W}_{2d}^{t(2d)}) -F_{2d}(\T{W}_{2d}^{t(2d)}) \| + \|F_{2d}(\T{W}_{2d}^{t(2d)})- \T{W}_{2d}^{t(2d)} \| \notag\\ &\leq (C_1 + \alpha C_2 \|\T{W}_{2d}^{t(2d)} \|)d +(C_1 + \alpha C_2 \|\T{W}_{2d}^{t(2d)} \|)2d + \alpha\fr{C_0d}{\alpha (1-\alpha)}. } According to the fixed point iteration definition, we have: \eq{ \|F_d(\T{W}^{t(d)}) - \T{W}^{t(d)} \| \leq \alpha^{t(d)-1} \| F_d(\T{W}_d^0) - \T{W}_d^0 \| \leq C' d. \nonumber } Thus the number of iterations satisfies $t(d) \leq C'/|\log \alpha|$ for all resolutions. \qed \paragraph{Proof of Theorem \ref{thm:mmt}.} By combining Lemma \ref{lemma:iter_level} with the computational cost per iteration, we can compute the total computational cost of our \texttt{MRTL}{} algorithm, which is proportional to the total number of iterations over all resolutions: \eq{ T &= \mathcal{O}\brck{\fr{1}{|\log \alpha|}\brcksq {(d_n/rp)^{-1} +(2 d_n/rp)^{-1} + (4 d_n/rp)^{-1} + \cdots} } \notag\\ &=\mathcal{O}\brck{\fr{1 }{|\log \alpha| }\brck{\fr{rp}{d_n}}\brcksq{1 + \fr{1}{2} + \fr{1}{4} +\cdots} } \notag\\ &=\mathcal{O}\brck{\fr{1 }{|\log \alpha|} \brck{ \fr{rp}{d_n} } \brcksq{ \fr{1-(\fr{1}{2}) ^{n}}{1-\fr{1}{2}} } } \notag\\ &= \mathcal{O}\brck{\fr{1 }{|\log \alpha|} \brck{\fr{rp}{(1-\alpha)^2\epsilon}}}, } where the last step uses the termination criterion in (\ref{eqn:fine-grain}), $d_n = \Omega ((1-\alpha)^2\epsilon )$. Comparing with the complexity analysis for the fixed-resolution algorithm in Lemma \ref{lemma:fixed}, we complete the proof. \qed \begin{figure}[ht!]
\centering \includegraphics[width=\linewidth]{images/finegrain_grad_example.pdf} \vspace{-0.05in} \caption{Gradient distribution for a toy model during multi-resolution training. At the coarse level, the distribution has $\mu=0$ at convergence, whereas at a higher resolution, the distributions for the finegrained weights have non-zero mean.} \label{fig:gradientdist} \vspace{-0.13in} \end{figure} \subsection{Fine-Graining Criteria} \label{ss:sgd} Given the structure of \texttt{MRTL}{} (Algorithm \ref{alg:mmt}), the key technical question is which fine-graining criterion to use at each stage. In Theorem \ref{thm:mmt}, we showed the improved convergence speed when using loss convergence as a transition criterion. We now propose several fine-graining criteria to instantiate \texttt{MRTL}{} that empirically outperform loss convergence. \paragraph{Loss convergence.} Intuitively, one simple fine-graining condition is when training at the present resolution converges (which mimics the termination condition for iterative training of sparse models \cite{johnsonblitz}). That is, at training step $t$, we check whether: \eq{\label{eq:loss_cond} |\loss^t - \bar{\loss}^{t}| < \tau_L, } where $\bar{\loss}^{t}$ is the historical mean loss and $\tau_L$ is a threshold. However, since the model at each resolution is used to initialize the training for the next resolution, the converged model at the current resolution might not be the best initialization for the next resolution (i.e., training might be overfitting to the coarser resolution). \paragraph{Using Gradient Statistics.} Given the spatial nature of the model weights \refn{eq:tucker}, we can use more fine-grained information by analyzing the gradient distribution during training. Intuitively, spatial resolution $d_i$ is too coarse when the data prefers much more fine-grained curvature, as evidenced by substantial disagreement in the gradients between resolution $d_i$ and $d_{i+1}$.
For example, consider learning a linear binary classifier $f$ as in \refn{eq:linear}: \eq{ \mathbf{w}^* = \argmi{\mathbf{w}} \mathbb{E}_{(x,y)\sim P}\brcksq{\loss(x,y,f(x;\mathbf{w}))}, } where $x\in\mathbb{R}$ and we consider two resolutions: $m_1 = 1$ cell and $m_{2} = 2$ cells. Suppose we optimize $\mathbf{w}$ using \eq{ \mathbf{w} \leftarrow \mathbf{w} + \Delta\mathbf{w}, \hspace{5pt} \Delta \mathbf{w} = - \lambda h(g_\mathbf{w}; \theta), } where $\lambda$ is the learning rate, $h$ is a (nonlinear) optimizer with parameters $\theta$ and the true gradient is \eq{\label{eq:trueg} g_\mathbf{w} = \mathbb{E}_{(x,y)\sim P}\brcksq{ \nabla_\mathbf{w} L(x,y,f(x;\mathbf{w}))}. } Consider the situation where 1) during training at resolution $d_1$ the model has converged to $\mathbf{w} = 0$ and 2) the ground truth model at resolution $d_{2}$ is $\mathbf{w}^* = (+1,-1)$. In this case, at the coarse resolution $d_1$ the distribution $P(g_\mathbf{w}(x,y))$ of the point-wise gradient $g_\mathbf{w}(x,y) = \nabla_\mathbf{w} L(x,y,f(x;\mathbf{w}))$ has mean $\mu\approx 0$ (and some variance $\sigma > 0$). However, if we fine-grain the model, the (distributions of) gradients for the higher-resolution weights $w_1$ and $w_2$ have non-zero mean: the mean of $g_{w_1}(x,y)$ will be negative and that of $g_{w_2}(x,y)$ positive. This is illustrated in Figure \ref{fig:gradientdist}. More generally, we can quantify disagreement between gradients at multiple resolutions via the statistics $\mu, \sigma$ and entropy $S$ of the (empirical) gradient distribution of \ti{all} weights $\mathbf{w}$ at resolution $d_i$: \eq{\label{eq:ent} S\brck{g_{\mathbf{w}\ddd{i}}} = -\mathbb{E}_{ g_\mathbf{w}(x,y) } \brcksq{ \log P\brck{g_\mathbf{w}(x,y)}}. } Intuitively, when the entropy $S\brck{g_{\mathbf{w}\ddd{i}}}$ is high, the gradients at the current discretization increasingly disagree with each other and training can benefit from fine-graining.
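A minimal sketch of this criterion, assuming the per-example gradients of each weight are collected in a rolling buffer and the entropy in \refn{eq:ent} is estimated with a fixed-bin histogram (bin edges and thresholds here are illustrative choices, not values used in our experiments):

```python
import numpy as np

BIN_EDGES = np.linspace(-3.0, 3.0, 61)   # shared bins so entropy reflects spread

def gradient_entropy(grads):
    """Histogram estimate of S = -E[log P(g)] for one weight's per-example gradients."""
    counts, _ = np.histogram(np.clip(grads, -3.0, 3.0), bins=BIN_EDGES)
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def should_finegrain(grad_buffer, tau, frac=0.5):
    """Fine-grain when >= frac of the weights have gradient entropy above tau."""
    entropies = np.array([gradient_entropy(g) for g in grad_buffer])
    return bool((entropies > tau).mean() >= frac)

rng = np.random.default_rng(2)
spread = rng.normal(0.0, 1.0, size=1000)   # disagreeing gradients -> high entropy
tight = rng.normal(0.3, 0.01, size=1000)   # concentrated gradients -> low entropy
assert gradient_entropy(spread) > gradient_entropy(tight)
```

For instance, with six high-entropy weights and four low-entropy ones, `should_finegrain([spread]*6 + [tight]*4, tau=2.0)` triggers the transition, since a majority of weights exceed the margin.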
More formally, we can express this via the information gain: \eq{\label{eq:ig} I_{i,i+1} = S\brck{g_{\mathbf{w}\ddd{i}}} - S\brck{g_{\mathbf{w}\ddd{i+1}}}. } If $I_{i,i+1} > 0$, the gradients will have lower entropy at the higher resolution and fine-graining could be beneficial.\footnote{This is analogous to the use of information gain for decision tree regularization.} However, in practice it is infeasible to compute the information gain, as this requires gradient statistics at a higher resolution. As a proxy to \refn{eq:ig} we can instead use a simpler condition: fine-grain when for $\geq p\%$ of the weights in $\mathbf{w}$ the entropy $S$ exceeds a margin $\tau$: \eq{\label{eq:ent_cond} \textrm{for } \geq p \textrm{\% weights } w_j: S\brck{g_{w_j\ddd{i}}} > \tau, } where $\tau$ and $p$ are tunable hyperparameters. \paragraph{Moment-based thresholds.} We can define other criteria based on gradient statistics: fine-grain if for at least $p\%$ of the weights: \eq{\label{eq:mom_cond} \sigma\textrm{-threshold: }& \sigma_{t} > \tau_\sigma, \\ \mu,\sigma\textrm{-threshold: }& \sigma_{t} > \tau_\sigma \textrm{ and } |\mu_{t}| < \tau_\mu, } where $\mu_{t}$, $\sigma^2_{t}$ are the gradient mean and variance at step $t$ and the $\tau_\cdot$s are tunable hyperparameters. $\sigma$-thresholding is a coarser statistic than the entropy $S$: if the gradient distribution is non-Gaussian or multimodal, $S$ can capture higher-order statistics as well. $\mu,\sigma$-thresholding also tracks whether training has converged ($\mu \approx 0$) according to the gradients. \section{Analysis of Interpretable Solutions} \label{sec:interpret} We now qualitatively evaluate a learned model \refn{eq:mt}, demonstrating that our meta-algorithm \texttt{MRTL}{} (Algorithm \ref{alg:mmt}) can learn compact representations of semantic knowledge. We show that: \iitem{ \item The latent factors are interpretable: they capture characteristic basketball shooting profiles and spatial fly behavior.
\item Smooth and sparse latent factors correspond to various types of basketball behavior and spatial configurations of fruit flies. } \subsection{Basketball shot prediction} \paragraph{Ball handler shooting profiles.} In competitive basketball play, shots at the basket happen throughout the court and peak at certain hot-spot locations at short range (close to the basket), medium range (between the basket and the 3-point line) and long range (at the 3-point line or beyond). Inspecting the smooth shooting profiles in Figure \ref{fig:lfs_basketball}, we see that all profiles are spatially cohesive and have small peaks around different subsets of hot-spots. Profiles 1, 2 and 3 capture medium-range and long-range hot-spots, while profile 4 covers the short-range zone. In contrast, the sparse shooting profiles are more spatially concentrated and activate only on specific hot-spots. For example, profile 5 covers two hot-spots to the far left and right side at the back of the court, and profiles 7 and 8 only activate on the short-range zone directly around the basket. This leads to an attractive semantic interpretation: smooth shooting profiles describe \textit{inconsistent players} that tend to shoot from many locations throughout the basketball court, whereas sparse shooting profiles capture \textit{consistent players} that shoot from only specific hot-spots. Figure \ref{fig:basketball_activations} shows the latent factor activations $A_{ak}$ and $U_{ak}$ of 6 players that can be grouped as such: players 1, 2, and 3 are \ti{consistent}; players 4, 5, and 6 are more \ti{inconsistent}. \paragraph{Defender influence profiles.} The bottom row of Figure \ref{fig:lfs_basketball} depicts four dense defender profiles $C$ and four sparse defender profiles $W$. The first two dense profiles capture defender suppression in front of the ballhandler across the entire court, since the companion shooting profiles (1 and 2) are diffused throughout.
In contrast, the first two sparse defender profiles (5 and 6) describe defender influence specifically at long range, which has a more peaked behavior. The last two sparse profiles (7 and 8) describe ballhandlers that are prone to shoot with a defender close by. This is likely due to confusing correlation with causation, since players close to the basket tend to shoot, even though typically a defender is close by. \subsection{Fruit fly behavior} \paragraph{Configuration profiles.} Figure \ref{fig:fvf_profile} depicts a sample dense and a sparse learned profile that are interpretable. In the dense profile, the fly extends \ti{at least one wing}, which characterizes \textit{wing extension}, a sign of courtship. The sparse profile similarly captures more concentrated spatial characteristics, when the flies are close. Such wing configurations are important in both aggressive and courtship behavior, when the two flies interact closely with each other. \paragraph{Actions and behavioral types.} Figure \ref{fig:fvf_task_activations} provides a more holistic view across all actions in the \emph{Fly-vs-Fly}{} dataset. For example, we see that \ti{lunge} and \ti{wing threat} have similar activations. This is natural: the former is active physical aggression, while the latter can be interpreted as a fly showing a preview of physical aggression to intimidate the other fly. We observe that both dense and sparse profiles are useful for modeling a wide range of actions/tasks. \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{lfs_set1_bb_activations-crop} \vspace{-10pt} \caption{Basketball player latent factor activation weights $A_{ak}$ for the same model as in Figure \ref{fig:lfs_basketball} with 10 smooth factors ($k = 1\ldots 10$) and 10 sparse factors ($k = 11\ldots 20$).
} \label{fig:basketball_activations} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{lfs_set1_fvf_activations} \vspace{-10pt} \caption{Fly action latent factor activation weights $A_{ak}$ for a model with 10 smooth ($k = 1\ldots 10$) and 10 sparse latent factors ($k = 11\ldots 20$). Similar actions activate similar factors.} \label{fig:fvf_task_activations} \end{figure} \begin{figure}[t!] \centering \includegraphics[height=100pt]{fvf_dense_v4} \includegraphics[height=100pt]{fvf_sparse_v4} \caption{ Left: Smooth normalized profile in $B_{bk}C_{ck}$ over \ti{facing angles} and \ti{wing angles}. Right: Sparse profile. Red (blue) means positive (negative). Example spatial configurations of a pair of flies are shown for the extremal regions of the profile. The reference fly (green) always points to the right, with the second fly at various facing angles. The dense profile describes a fly that extends exactly one wing: the maximal (minimal) wing angle is large (small) and the extended wing is turned towards the other fly. The sparse profile activates highly when the legs of the two flies are close and the reference fly keeps both wings close to its body. } \label{fig:fvf_profile} \end{figure} \section{Benchmark Experiments} \label{sec:quant} To validate our approach, we demonstrate that our multi-resolution method converges faster than fixed-resolution training on two real-world datasets: basketball shots and fruit-fly behavior prediction. Furthermore, we show that using spatial gradient statistics, such as the gradient entropy, outperforms using loss convergence, and we provide a sensitivity analysis of the various transition criteria. Finally, in Section \ref{sec:interpret}, we show that our learned models are interpretable.
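To make the entropy-based transition criterion \refn{eq:ent_cond} concrete, the following minimal sketch estimates per-weight gradient entropies from a rolling window of minibatch gradients and checks whether at least $p\%$ of them exceed the margin $\tau$. The function names, window layout and histogram bin count are our own illustrative choices, not the implementation used in the experiments:

```python
import numpy as np

def gradient_entropies(grad_window, n_bins=20):
    """Per-weight entropy of the empirical gradient distribution.

    grad_window: (T, n_weights) array of minibatch gradients
    collected over a rolling window of T training steps.
    """
    entropies = []
    for j in range(grad_window.shape[1]):
        # Histogram the T gradient samples of weight j, normalize to a pmf.
        hist, _ = np.histogram(grad_window[:, j], bins=n_bins)
        p = hist / hist.sum()
        p = p[p > 0]                       # drop empty bins (0 log 0 := 0)
        entropies.append(-np.sum(p * np.log(p)))
    return np.array(entropies)

def should_finegrain(grad_window, tau, frac=0.10, n_bins=20):
    """Entropy-threshold criterion: fine-grain when at least a
    fraction `frac` of the weights have gradient entropy above `tau`."""
    S = gradient_entropies(grad_window, n_bins)
    return np.mean(S > tau) >= frac
```

Here each row of `grad_window` holds one minibatch gradient; in practice the window would be refreshed as training proceeds, alongside the moment statistics $\mu$ and $\sigma$ used by the other criteria.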
For both datasets, we learn a tensor model for \ti{multi-task} prediction, where we instantiate the model \refn{eq:tucker} as a sum of low-rank $\T{W}^L$ and sparse $\T{W}^S$ tensors: \eq{\label{eq:mt} P( y_{a} | \mathbf{x} ) &= \sum_{abc} (\T{W}^L_{abc} + \T{W}^S_{abc}) \phi_{b}(x) \psi_{c}(x),\\ \T{W}^L_{abc} &= \sum_{k=1}^K A_{ak}B_{bk}C_{ck}, \hspace{5pt} \T{W}^S_{abc} = \sum_{k=1}^K U_{ak}V_{bk}W_{ck},\notag } where $a\in\mathcal{A}$ indexes the tasks. A key motivation for this decomposition is that $L,S$ can capture semantically meaningful profiles (spatially dense and smooth, or sparse and peaky). \begin{figure}[t!] \centering \includegraphics[width=0.15\textwidth]{fly_pose_spat_feature.png} \includegraphics[width=0.15\textwidth]{court_players.pdf} \caption{Left: \emph{Fly-vs-Fly}{} angles: \ti{facing angles} (1), (3), inter-fly \ti{angle} (2), \ti{minimal} (4) and \ti{maximal wing angle} (5). Right: sample frame with ballhandler (red) and defensive players (green). Only players close to the ballhandler are used.} \label{fig:fvf_features_explanation} \end{figure} \paragraph{Empirical Gradient Distribution.} During training, we estimate the gradient distribution $P(g_\mathbf{w}(x,y))$ by recording the empirical minibatch gradients and their statistics $\mu,\sigma,S$ over a rolling window of $T$ training steps. While collecting gradients, the weights $\mathbf{w}$ are typically updated too, introducing a bias in the gradient statistics, as gradients are computed for different models at each step. However, we found empirically that using \refn{eq:ent_cond} remains effective. \begin{figure*}[!h] \centering \includegraphics[height=115pt]{full20_80_500_2000} \includegraphics[height=115pt]{3factor_500_2000} \includegraphics[height=115pt]{gold/sensitivity-fvf-parafac.pdf} \caption{Left: training a full-rank tensor model using \texttt{MRTL}{} (Algorithm \ref{alg:mmt}) with $\omega=100$ on \emph{BBShot}{}. Colors indicate training stages.
For visual clarity, only the best runs for fixed-resolution, loss-convergence and entropy-thresholding are shown. Baseline 1 (dark green): training a fixed-resolution fine-grained model is orders of magnitude slower. Baseline 2 (indicated by *): fine-graining using gradient entropy control (green, purple) with $20$ histogram bins converges faster than loss-convergence (red, orange). Middle: training a factored tensor model using iterative fine-graining outperforms a fixed-resolution approach. Right: Learning a factorized model \refn{eq:mt} on \emph{Fly-vs-Fly}{}. Using gradient statistics converges $\approx$ $50\%$ faster than fixed-resolution learning. } \label{fig:full_fixed_vs_cascade} \end{figure*} \paragraph{Experiments.} First, we compared \texttt{MRTL}{} with fixed-resolution training. Second, we compared the various fine-graining criteria (entropy divergence, loss convergence, $\sigma$-threshold, $\mu,\sigma$-threshold) as described in Section \ref{ss:sgd}. We instantiated \texttt{MRTL}{} by adaptively upscaling the model $B,C,V,W$ and features $\phi_b$ and $\psi_c$ as in Algorithm \ref{alg:mmt}, checking the transition criteria every $\omega=100$ steps for $p=10\%$ and various $\tau$. All models were trained with Adam using cross-entropy loss and $L_2$ ($L_1$) regularization for the dense (sparse) factors. We selected optimal hyperparameters for all models using a grid search (see Table \ref{tab:hyperp}). For the sensitivity analysis, we used the $\tau$s as in Table \ref{tab:hyperp}. \subsection{Datasets} \paragraph{Basketball tracking data.} We evaluated our method on \ti{basketball shot prediction}, where the tasks correspond to unique basketball players, indexed by $a$, and our multi-task model predicts \ti{whether player $a$ will shoot at the basket immediately after frame $\mathbf{x}_\alpha$} ($\alpha$ indexes the dataset).
We used a large player tracking dataset \emph{BBShot}{} \cite{Yue2014,zheng2016generating} that includes hundreds of players and covers millions of game frames captured during competitive basketball gameplay. For each frame $\mathbf{x}_\alpha$, we have binary labels $y_{\alpha,a} \in \{-1,+1\}$: whether player $a$ will (not) shoot at the basket in frame $\mathbf{x}_\alpha$. The features are 1-hot vectors $\phi(\mathbf{x}_\alpha)\in\brckcur{0,1}^{2000}$ for the location $b$ of the ballhandler and $\psi(\mathbf{x}_\alpha)\in\brckcur{0,1}^{144}$ for the defenders in a $12\times 12$ grid around the ballhandler. At full resolution, the court is discretized using a $50\times 40$ grid of 2000 cells of size $1\times 1$. For lower resolutions, we used $2\times 2, 3\times 3$ and $4\times 4$ cells. \paragraph{Caltech Fly-vs-Fly.} The Caltech Fly-vs-Fly behavior dataset \emph{Fly-vs-Fly}{} \citep{Eyjolfsdottir2014} consists of several million video-frames of 2 fruit-flies, labeled by neuro-biologists with 10 behavioral fly actions: \ti{touch}, \ti{wing threat}, \ti{charge}, \ti{lunge}, \ti{hold}, \ti{tussle}, \ti{wing extension}, \ti{circle}, \ti{copulation attempt}, \ti{copulation}. The goal is to predict whether either fly performs any of these 10 actions in video frame $\mathbf{x}_\alpha$. Here, the tasks are the actions indexed by $a$ and the binary labels $y_{\alpha,a}\in\{-1,+1\}$ correspond to whether an action $a$ is present in frame $\mathbf{x}_\alpha$ or not. The dataset includes 12 pose and spatial features, including the \ti{velocity and the wing angles} of a single fly, and pairwise features, such as the \ti{distance and angle} between the two flies (see Figure \ref{fig:fvf_features_explanation}). 
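The 1-hot spatial features described above can be sketched as follows; `onehot_position` is a hypothetical helper that maps a court position to a grid-cell indicator at a chosen resolution. The $50\times 40$ full-resolution court follows the dataset description, while the ceiling-based handling of cell sizes that do not divide the court evenly (such as $3\times 3$) is our own assumption:

```python
import numpy as np

def onehot_position(x, y, cell_size, court_w=50, court_h=40):
    """Discretize a 2-D court position into a 1-hot grid-cell indicator.

    cell_size selects the resolution level (1 = full resolution:
    a 50 x 40 grid of 2000 cells; coarser levels use larger cells).
    """
    n_cols = int(np.ceil(court_w / cell_size))
    n_rows = int(np.ceil(court_h / cell_size))
    # Clamp to the last cell so boundary positions stay in range.
    col = min(int(x // cell_size), n_cols - 1)
    row = min(int(y // cell_size), n_rows - 1)
    phi = np.zeros(n_rows * n_cols)
    phi[row * n_cols + col] = 1.0
    return phi
```

Fine-graining then simply means re-encoding the same raw positions with a smaller `cell_size`, so a coarse-level cell splits into several fine-level cells.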
\begin{table}[t] \centering \resizebox{\linewidth}{!}{% \begin{tabular}{l|cc|cc} & \multicolumn{2}{c|}{Full-rank (loss $\leq 0.62$)} & \multicolumn{2}{c}{Factor (loss $\leq 0.64$)} \\ \hline Criterion & Min.\ time (s) & Mean & Min.\ time (s) & Mean \\ \hline Fixed Resolution & 18358 & - & 106 & - \\ Loss-conv & $639$ & $10141$ & $96$ & $99$ \\ Entropy-threshold & $541$ & $4258$ & $\bm{40}$ & $\bm{42}$ \\ $\sigma$-threshold & $356$ & $2453$ & $55$ & $56$ \\ $\mu, \sigma$-threshold & $\bm{292}$ & $\bm{677}$ & $880$ & $894$ \\ \end{tabular}} \caption{Minimal and mean time to reach the loss at which overfitting starts on \BB{}-200k{} across 20 runs, with the full and factorized model \refn{eq:tucker} (as in Figure \ref{fig:sensitivity}). Using gradient statistics consistently reaches the loss threshold significantly faster.} \label{tab:sens} \end{table} \subsection{Accuracy and Convergence Speed} We compare \texttt{MRTL}{} with fixed-resolution learning. Moreover, we compare a number of fine-graining criteria: fine-graining when the loss has converged versus spatial entropy control. For the latter, we used condition \refn{eq:ent_cond} to determine when to fine-grain. Figure \ref{fig:full_fixed_vs_cascade} and Table \ref{tab:sens} show our results using models with total latent dimension 20.\footnote{The performance trend is stable for a wide range of latent dimensions, omitted for brevity.} The left plot shows the results for the full-rank model, which is usually the more computationally intensive part. We see that our multi-stage approach dramatically outperforms (by multiple orders of magnitude) a naive approach which only uses the finest resolution.\footnote{Note that for more complex models, the naive approach would not even fit in memory.} Moreover, spatial entropy control outperforms, by an order of magnitude, using the loss as a termination criterion. The right plot shows the performance of \texttt{MRTL}{} on the latent factor model after initializing with a factorized model of the previous stage.
We see that the learning objective continues to decrease as learning enters what is essentially a fine-tuning phase.\footnote{Note that the absolute objective of the un-factorized model is lower than that of the latent-factor model because the former has more degrees of freedom and is overfitting.} \begin{figure}[t] \centering \includegraphics[width=0.68\linewidth]{gold/sensitivity-bb-full-redux.pdf} \\ \includegraphics[width=0.68\linewidth]{gold/sensitivity-bb-parafac.pdf} \caption{Sensitivity of \texttt{MRTL}{} for 4 transition criteria (e.g. \refn{eq:ent_cond}) to hyperparameters $\tau$ and $p=10\%$ on \BB{}-200k{} using a full-rank (top) and a factored model \refn{eq:tucker} (bottom). For each criterion, we sampled 20 $\tau$s uniformly and show the mean and variance of the converging runs as they start to overfit.} \label{fig:sensitivity} \end{figure} \subsection{Sensitivity Analysis} To evaluate the efficiency and transition behavior of the transition criteria from Section \ref{ss:sgd}, we measured their sensitivity to the threshold parameters $\tau$. For this, we used \BB{}-200k{} with 200k \emph{BBShot}{} examples and evaluated Algorithm \ref{alg:mmt} using a random search over $\tau$. The results are in Table \ref{tab:sens} and Figure \ref{fig:sensitivity}. We see that using gradient statistics (\refn{eq:ent_cond}, \refn{eq:mom_cond}) consistently outperforms loss convergence in the short-term ($\sigma$-threshold) and long-term (entropy-threshold). We empirically find that the $\mu,\sigma$-threshold is harder to stabilize than the other criteria, although it can outperform the other methods. \section{Related Work} Modeling spatial data enjoys a long history in the data mining community \cite{miller2009geographic}. Recent applications include urban air quality prediction \cite{hsieh2015inferring}, social media recommendation \cite{yin2016spatio}, and event detection \cite{zhao2016hierarchical}, among others.
In this work, we study multi-agent tracking data extracted from high-fidelity tracking cameras. Such spatial data contain high-resolution information and are large-scale (cameras operate at high frequency, and every frame is one data point). Latent factor models have been a popular method for spatial data modeling \cite{reich2010latent,Yue2014,deng2016latent}. By projecting the raw data into a low-dimensional space, latent factor models can encode patterns such as spatial clustering. However, most existing latent factor models have only focused on capturing two-way interactions (such as user-location matrix models \cite{koren2009matrix}). For multi-agent behavior modeling, we extend the matrix latent factor model to capture multi-way correlations among agents, resulting in a tensor latent factor model. For instance, we can build a ``player $\times$ offense position $\times$ defense position'' tensor model for basketball plays. Tensor models have recently gained considerable attention (cf. \cite{Jenatton2012,takeuchi2013non,Quattoni2014}), but are still of limited applicability due to computational, data sparsity, and memory efficiency concerns. There are many efforts to scale up tensor computation, such as random projection (sketching) \cite{wang2015fast}, parallel computing \cite{austin2016parallel}, or utilizing sparsity structure \cite{perros2017spartan}. In this work, we take a top-down approach to learn a tensor model at multiple resolutions. The multi-resolution training procedure takes advantage of the spatial characteristics of the raw data, and is generically applicable to different tensor models and optimization algorithms. Our approach bears affinity to other multi-stage meta-algorithms (e.g., \cite{johnsonblitz}) and the multi-grid method in PDE analysis. For instance, \citep{chow1991optimal} studies multi-grid methods in the context of dynamic programming.
In the machine learning community, our method is also related to multi-resolution sparse approximation \cite{mallat1989multiresolution}. For example, \cite{kondor2014multiresolution} studies multi-resolution matrix factorization based on wavelet theory. \cite{schifanella2014multiresolution} proposes a multi-resolution heuristic for tensor factorization. In contrast, we study multi-resolution learning for \emph{tensor} models and also provide a theoretical analysis of its convergence. \section{Supplementary material} \subsection{Computational Complexity Analysis} \begin{lemma} Given a fixed point iteration operator with a contraction factor of $\alpha \in (0,1)$, the computational complexity of fixed-resolution training for a $P$-dimensional tensor of rank $r$ is \[\mathcal{O}\brck{ \fr{1}{|\log \alpha|} \cdot \fr{1 }{\log (1-\alpha)\epsilon} \cdot\brck{\fr{1}{rp(1-\alpha)^2\epsilon}}}\] \label{lemma:fixed} \end{lemma} \proof At a high level, we prove this by choosing a small enough resolution $d$ such that the approximation error is bounded with a fixed number of iterations. Let $\T{W}_d^\star$ be the optimal estimate at level $d$ and $\T{W}^t$ be the estimate at step $t$. Then we have \eq{ \| \T{W}^\star -\T{W}^t \| \leq \| \T{W}^\star - \T{W}_d^\star \| + \|\T{W}_d^\star - \T{W}^t \| \leq \epsilon. } Choose a fixed resolution $d$ that is small enough such that \eq{\| \T{W}^\star-\T{W}^\star_d\| \leq \fr{\epsilon}{2},} then the termination criterion \eq{ \| \T{W}^\star-\T{W}^\star_d\| \leq \fr{C_0 d}{(1-\alpha)^2} } gives \eq{ d = \Omega ((1-\alpha)^2 \epsilon).
} Initialize $\T{W}^0 = 0$ and iterate $t$ times such that: \eq{ \fr{\alpha^t}{2(1-\alpha) } \|T_d(\T{W}^0) \| \leq \fr{\epsilon}{2}. } Since $\T{W}^0 = 0$, we have $\|T_d(\T{W}^0) \| \leq 2C$, and we obtain \eq{ t\leq \fr{1}{|\log \alpha|} \cdot \log\fr{2C}{(1-\alpha)\epsilon}, } % the computational complexity of the fixed-resolution training is \eq{ t &= \mathcal{O}\brck{ \fr{1}{|\log \alpha|} \cdot \fr{1}{\log (1-\alpha)\epsilon} \cdot\brck{\fr{1}{drp}} } \\ &= \mathcal{O}\brck{ \fr{1}{|\log \alpha|} \cdot \fr{1 }{\log (1-\alpha)\epsilon} \cdot\brck{\fr{1}{rp(1-\alpha)^2\epsilon}} }. } \begin{lemma}\cite{nash2000multigrid} For each resolution level $[d_0, d_1, \cdots, d_n]$, there exist constants $C_1$ and $C_2$ such that the fixed point iteration with discretization size $d$ has an estimation error: \eq{\label{lemma:disc} \| T(\T{W}) - T_d(\T{W}) \| \leq C_1 + \alpha C_2 \|\T{W}\|_d. } \end{lemma} \proof The approximation error of a function with discretization is bounded if the target function is Lipschitz continuous. We have \eq{ T(\T{W} + \Delta \T{W}) = T(\T{W}) +\alpha \Delta \T{W}, } and \eq{ \| T (\T{W} ) - T( \Delta \T{W}) \|\leq \alpha \| \T{W} - \Delta \T{W} \|. } Here $T(\T{W}) $ is bounded by $C_1$ and $ \Delta \T{W}$ is bounded by $C_2 d$. By definition of the fine-graining criterion, we know that for every resolution $d \in [d_0, d_1, \cdots, d_n]$, the discretized operator $T_d$ satisfies: \eq{ \| T_d (\T{W} ) - \T{W} \|\leq \fr{C_0d}{\alpha(1-\alpha)}. } If the estimation error is small enough, the algorithm terminates. Otherwise, we fine-grain and use $\T{W}^d$ to initialize the algorithm at the next resolution. Let the termination criterion at level $d_n$ be \eq{ \fr{C_0 d_n}{(1-\alpha)^2} \leq \fr{\epsilon}{2}. } \section{Tensor Latent Factor Models} We motivate the tensor latent factor model using the example of competitive basketball play.
Suppose we have $n_a$ players and want to predict whether a basketball player $a$ ($a=1\ldots n_a$) takes a shot at the basket (binary label $y_a = \pm 1$). We first discretize the court using a grid with $m$ cells (positions $\V{x} \in {\mathbb R}^m$) and then construct a binary feature vector $\phi(\mathbf{x})\in\mathbb{R}^m$ using the positions of all other players on the court. The prediction problem can be formulated as: \eq{\label{eq:linear} P(y_a = 1 | \mathbf{x}) = \brcka{\mathbf{w}_a,\phi(\mathbf{x})} +\V{b}_a, \hspace{10pt} \phi(\mathbf{x}) = (0 \ldots 1 \ldots 0), } where $\mathbf{w}_a \in {\mathbb R}^m$ are the weight parameters for the grid cells and $\V{b}_a$ is the bias unit. The weights $\mathbf{w}_a$ encode how likely player $a$ is to shoot from a given position on the basketball court. \paragraph{Matrix Latent Factor Models.} In matrix form, (\ref{eq:linear}) is equivalent to the following model: \eq{\label{eq:matrix} f_a(\mathbf{x}) &= \sum_{b} W_{ab} \phi_{b}(\mathbf{x})+ \V{b}_a, } where $f_a(\mathbf{x})$ is the prediction function for player $a$, and the matrix $W$ concatenates the parameter vectors $\{ \V{w}_a \}$. Latent factor models assume there exist low-dimensional representations of the weights. Hence the parameter matrix factorizes into two matrices $A$ and $B$ with $K$ components: \eq{W_{ab} = \sum_k A_{ak} B_{bk}.} These latent factors can represent players' behavioral profiles at different positions on the court. \paragraph{Tensor Latent Factor Models.} The limitation of the matrix latent factor model in (\ref{eq:matrix}) is that it can only capture the correlation between the ballhandler and all other players. In order to learn rich behaviors between teams, we need to explicitly model multi-way dependencies and generalize to high-order models.
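Before moving to tensors, the factorized matrix model $W_{ab} = \sum_k A_{ak}B_{bk}$ can be sketched as follows. This is a minimal sketch with illustrative names; the sigmoid link turning the score $f_a(\mathbf{x})$ into a probability is our own assumption (consistent with cross-entropy training), not something the model equations above prescribe:

```python
import numpy as np

def matrix_lf_predict(A, B, bias, phi):
    """Matrix latent factor prediction with W = A B^T, never formed explicitly.

    A: (n_tasks, K) per-player factors, B: (m, K) spatial factors,
    bias: (n_tasks,) bias units, phi: (m,) 1-hot position feature.
    """
    # f_a(x) = sum_b W_ab phi_b + b_a, computed as A (B^T phi) + b.
    logits = A @ (B.T @ phi) + bias
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid link (our assumption)
```

Evaluating through the factors costs $O((n_a + m)K)$ instead of the $O(n_a m)$ needed for an explicit $W$, which is the same trick the tensor model below relies on.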
In the basketball example, we separately construct a feature vector $\phi(\mathbf{x}), \psi(\mathbf{x}) \in\mathbb{R}^d$ for the offense and defense team positions, respectively, and model the multi-way interactions as: \eq{\label{eq:tensor} f_a(\mathbf{x}) &= \sum_{bc} \T{W}_{abc} \phi_{b}(\mathbf{x}) \psi_{c}(\mathbf{x}) + \V{b}_a. } Here $\T{W}$ is an order 3 weight tensor. To encode low-dimensional structure, we assume the weight tensor $\T{W}$ factorizes: \eq{\T{W}_{abc} = \sum_{k=1}^K A_{ak}B_{bk}C_{ck},} which corresponds to a CP tensor model \cite{Kolda2009}. In this case, $K$ is called the \ti{tensor rank}. In general, given some input $\mathbf{x}$ and feature transformation functions $\phi(\cdot)$ and $\psi(\cdot)$, \ti{tensor latent factor models} aim to learn a function $f_a(\mathbf{x})$ for each prediction task $a$, such that: \eq{\label{eq:tucker} f_a(\mathbf{x}) &= \sum_{bc} \sum_{k=1}^K A_{ak}B_{bk}C_{ck} \phi_{b}(\mathbf{x})\psi_{c}(\mathbf{x}) + \V{b}_a, } where $A, B, C$ are the latent factors, $\V{b}_a$ is a bias unit and $a$ indexes prediction tasks. It is straightforward to generalize (\ref{eq:tucker}) to other tensor models and higher orders. Figure \ref{fig:model} depicts the model details. The dimension of the weight tensor relates to the discretization size of the spatial data. The main motivation for considering discretizations is that they allow us to learn flexible non-parametric models. In contrast, traditional parametric models such as spatial point processes \cite{Miller2014} have strong assumptions on the form of the spatial correlations (e.g. the generating process is a distribution $P(y|x)$ with multiple modes or has a cyclic factorization structure), which can be hard to learn using conventional learning methods. Tensor models explicitly capture higher-order correlations among the tasks and spatial features. 
The learned latent factors can also be more interpretable due to the multi-linear nature (in the spatial feature dimensions). For example, if we see the prediction function $f(\mathbf{x}) = P(y|\mathbf{x})$ as a distribution and $\phi_b(\mathbf{x}), \psi_c(\mathbf{x})$ are 1-hot vectors that encode spatial occupancy, the columns $B_{\cdot k}$ and $C_{\cdot k}$ (for a fixed $k$) can be interpreted as smooth spatial probability distributions, representing players' shooting profile. The cost of using such models, however, is that they lead to a hard non-convex optimization problem and can suffer from multiple local optima. Moreover, different initializations can lead to more or less interpretable models. They are also computationally expensive to train, making them infeasible for large-scale high resolution spatial data. Hence, the goal of our multi-resolution method is to efficiently find accurate and interpretable solutions in this setting.
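As a closing illustration, the CP tensor model $f_a(\mathbf{x}) = \sum_{bc}\sum_{k=1}^K A_{ak}B_{bk}C_{ck}\phi_b(\mathbf{x})\psi_c(\mathbf{x}) + \V{b}_a$ can be evaluated directly in factored form, which is exactly what makes such models tractable at high resolution. A minimal sketch with illustrative names:

```python
import numpy as np

def cp_predict(A, B, C, bias, phi, psi):
    """CP tensor model prediction in factored form.

    A: (n_tasks, K) task factors, B: (n_b, K) offense factors,
    C: (n_c, K) defense factors, phi: (n_b,), psi: (n_c,) features.
    Computes f_a = sum_k A_ak (B^T phi)_k (C^T psi)_k + b_a without
    ever materializing the (n_tasks, n_b, n_c) weight tensor W.
    """
    return (A * (phi @ B) * (psi @ C)).sum(axis=1) + bias
```

The factored evaluation costs $O((n_a + n_b + n_c)K)$ per example instead of the $O(n_a n_b n_c)$ needed to contract an explicit $\T{W}$, so memory and compute stay manageable even at fine spatial resolutions.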
\section{Introduction} Image sharing in social networks has increased exponentially in recent years. For example, according to official Instagram stats, there are 600 million Instagrammers uploading around 100 million photos and videos per day. Although the analysis of trends, topics and brands in social networks is mostly based solely on texts, the analysis of such a vast amount of images is starting to play an important role for understanding and predicting human decision making, while becoming essential for digital marketing and customer understanding, among others. Previous works have proven the relation between text and the personality of the authors \cite{golbeck2011predicting, iacobelli2011large}, and recent studies have also shown that some image features can be related to the personality of users in social networks \cite{cristani2013unveiling}. The main hypothesis of this work is that the relation between text and personality observed by researchers like Yarkoni \cite{yarkoni2010personality} translates well into a relation between images and personality when we consider the images conditioned on specific word use. In his work, Yarkoni proved that there exist words that correlate with different personality traits with statistical evidence (see Table \ref{tab:yarkoni_words}). For example, a neurotic personality trait correlates positively with negative emotion words such as 'awful' or 'terrible', whereas an extroverted person correlates positively with words reflecting social settings or experiences like 'bar', 'drinking' and 'crowd'. Considering this proven relation between text and personality, and the fact that posted images have a relation with their accompanying texts, we propose a methodology which, taking advantage of such existing text-personality correlation, exploits the relation between texts and images in social networks to determine those images most correlated with personality traits.
The final aim is to use this set of images to train a personality model with performance similar to previous work based on texts or images alone. \begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \caption{List of words highly related to each personality trait, extracted from \cite{yarkoni2010personality}} \centering \begin{tabular}{|l|c|p{10cm}|} \hline \textbf{Trait} & \textbf{Correl.} & \textbf{Related Words} \\ \hline \multirow{2}{*}{O} & High & culture, films, folk, humans, literature, moon, narrative, novel, poet, poetry, sky \\ \cline{2-3} & Low & anniversary, detest, diaper, hate, hatred, hubby, implore, loves, prayers, thankful, thanks \\ \hline \multirow{2}{*}{C} & High & achieved, adventure, challenging, determined, discipline, persistence, recovery, routine, snack, vegetables, visit\\ \cline{2-3} & Low & bang, bloody, boring, deny, drunk, fool, protest, soldier, stupid, swear, vain \\ \hline \multirow{2}{*}{E} & High & bar, concert, crowd, dancing, drinking, friends, girls, grandfather, party, pool, restaurant\\ \cline{2-3} & Low & blankets, books, cats, computer, enough, interest, knitting, lazy, minor, pages, winter \\ \hline \multirow{2}{*}{A} & High & afternoon, beautiful, feelings, gifts, hug, joy, spring, summer, together, walked, wonderful\\ \cline{2-3} & Low & asshole, bin, cost, drugs, excuse, harm, idiot, porn, sexual, stupid, violence \\ \hline \multirow{2}{*}{N} & High & annoying, ashamed, awful, horrible, lazy, sick, stress, stressful, terrible, upset, worse\\ \cline{2-3} & Low & completed, county, ground, later, mountain, oldest, poem, road, southern, sunset, thirty \\ \hline \end{tabular} \label{tab:yarkoni_words} \end{table*} In the computer vision community, the relationship between language and images has been exploited to automatically generate textual descriptions from pictures \cite{vinyals2015show,karpathy2015deep,vinyals2016show}.
Indeed automatic captioning can be understood as a sampling process of words from a text distribution $t$ given an image $I$, or $p(t|I)$. In this paper we aim at the opposite: we want to determine $p(I|t)$, that is, the images most related to a specific word that is strongly associated with a particular personality trait. Once the set of images is defined, the potential relation between personality and images will be modeled using a state-of-the-art deep neural network. Classification results will suggest whether there is psychological content in certain images, as has been previously observed for certain texts and image features. In our work, the human personality characterization called the \emph{Big Five} model of personality is considered \cite{digman1990personality,barrick1991big,goldberg1990alternative}. The Big Five model is a well-researched personality description, which has been proven to be consistent across age, gender, language and culture \cite{mccrae1992introduction,schmitt2007geographic}. In essence, this model distinguishes five main different human personality dimensions: Openness to experience (O), Conscientiousness (C), Extraversion (E), Agreeableness (A) and Neuroticism (N), hence it is often referred to as \textit{OCEAN}. Personality traits are characterized in the OCEAN model by the following features: \begin{itemize} \item \emph{Openness}: Appreciates art and ideas, imaginative, aware of feelings. People with this trait tend to have artistic interests and have a certain level of intellectuality. \item \emph{Conscientiousness}: Disciplined, dutiful, persistent, compulsive and perfectionist as opposed to spontaneous and impulsive. People with this trait tend to strive for something, and to be hard-workers and organized. \item \emph{Extraversion}: Warm, assertive, action-oriented, and thrill-seeking. Individuals with high levels of extraversion tend to be friendly, sociable, cheerful and fond of being in the company of other people.
\item \emph{Agreeableness}: Compassionate, cooperative, considerate. Agreeable people tend to be trusting, modest and optimistic. \item \emph{Neuroticism}: Emotional instability, anxious, hostile, prone to depression. Neurotics tend to be frustrated, anxious and experience negative emotions. \end{itemize} The five personality traits have already been related to text \cite{yarkoni2010personality} and images \cite{segalin2016social} uploaded by users. Therefore, personality might be an important factor in the underlying distribution of the user's public posts in social media, and thus, it is possible to infer some degree of personality knowledge from such data. In this work we go a step beyond the works in \cite{yarkoni2010personality} and \cite{segalin2016social}, showing that personality remains invariant to changes from the text domain to the image domain. Concretely, our contributions are: \begin{itemize} \item A new framework for estimating users' personality from images, based on \emph{MindPics}, a set of images retrieved using a refined set of the tags proposed in \cite{yarkoni2010personality}. \item A personality inference model that uses \emph{MindPics}, as a proof that personality remains invariant across textual and visual domains. \end{itemize} \section{Related work} The increasing growth and significance of social media in our lives has attracted the attention of researchers, who use this data in order to infer the personality, interests, and behavior of users. Regarding personality, its inference has mainly been based on (i) text uploaded by users, and (ii) uploaded images. \subsection{Text-based personality inference} The relationship between language and personality has been studied extensively. As commented before, Yarkoni \cite{yarkoni2010personality} performed a large-scale analysis of personality and word use in a large sample of blogs whose authors answered questionnaires to assess their personality.
This way, by analyzing the text written by users whose personality is known, the author could investigate the relation between word use and personality. The results of the analysis concluded that the usage of some specific words is correlated with the personality of the blogs' authors. Iacobelli \textit{et al.} \cite{iacobelli2011large} used a large corpus of blogs to perform personality classification based on the text of the blogs. They proved that both the structure of the text and the words used are relevant features to estimate the personality from text. Also, Oberlander \textit{et al.} \cite{oberlander2006whose} studied whether the personality of blog authors could be inferred from their personal posts. To do so, the personality profile of 71 different users was collected through a questionnaire. By using word n-grams as features and SVM classifiers, they successfully managed to infer the personality of the authors of the posts. Besides blogs, where long texts are common, personality studies based on language have also focused on much shorter texts, such as those shared on the social network \textit{Twitter}, where the text shared by users is limited to a maximum of 140 characters. In a similar way, Golbeck \textit{et al.} \cite{golbeck2011predicting} showed that the personality of users from Twitter could be estimated from their \textit{tweets}, also taking into account other information such as the number of followers, mentions or words per \textit{tweet}. \subsection{Image-based personality inference} An early attempt to model personality from images was presented by Steele \textit{et al.} \cite{steele2009your}. This work explored the characteristics of profile pictures and their relationship with the impression that other users had of the owners of these pictures. Their findings show that users better agreed with the targets’ self-reported personalities when the profile picture was a photo of a human, taken outdoors instead of indoors, and the face appeared smiling.
Cristani \textit{et al.} \cite{cristani2013unveiling} proved that there are visual patterns that correlate with the personality traits of 300 Flickr users, and thus that the personality traits of those users could be inferred from the images they tagged as favorite. To do so, two categories of visual features were extracted from the images: \textit{"Aesthetic"} and \textit{"Content"} features. Aesthetic features focus on the aesthetic information of the images, aiming to represent the user preferences. These features include the amount of certain colors, the number of edges in the image, the entropy, and the level of detail, among others. Content features focus on describing what appears in the image by counting the number of faces and identifying the different objects using the GIST descriptor \cite{oliva2001modeling}. Guntuku \textit{et al.} \cite{guntuku2015personality} improved the low-level features used in previous work by changing the usual Features-to-Personality (F2P) approach to a two-step approach: Features-to-Answers (F2A) + Answers-to-Personality (A2P). Instead of building a model that directly maps features extracted from an image to a personality, with this approach the features are first mapped to the answers of the BFI-10 personality assessment questionnaire \cite{rammstedt2007measuring}. Then, the answers are mapped to a personality. Besides this two-step approach, they also added new semantic features to extract from the images, such as \textit{Black \& White vs. Color image}, \textit{Gender identification} and \textit{Scene recognition}. Later, Segalin \textit{et al.} \cite{segalin2016pictures} proposed a new set of features that better encode the information of the image used to infer the personality of the user who favourited it. They proposed to describe each image with 82 different features, divided into four major categories: \textit{Color}, \textit{Composition}, \textit{Textural Properties} and \textit{Faces}.
These groups of features are similar to the ones proposed by \cite{machajdik2010affective}, but excluding the \textit{content} group and using instead the number of faces as a feature of the image. The reason behind this choice is that faces are very common in the images and that the human brain is specifically wired to perform accurate face detection and recognition. Their method proved to be suitable for mapping an image to a personality trait, but it worked better for attributed personality traits than for self-assessed personality. Since a Convolutional Neural Network (CNN) won the ImageNet competition in 2012 \cite{krizhevsky2012imagenet}, the computer vision field has moved from designing hand-crafted image features to learning them in an end-to-end deep learning model. Likewise, the feasibility of deep learning for automatically learning features suited to estimating personality traits from pictures has already been proven by the work of Segalin \textit{et al.} \cite{segalin2016social}. In their work, they presented the \textit{PsychoFlickr} dataset, which consists of a collection of images favourited by 300 users of the site \textit{Flickr.com}, each user tagging 200 images as favorite, adding up to a total of 60,000 images. Additionally, the Big Five personality profile of each user is provided. There are two different versions of the personality profile for each user, one collected through a self-assessment questionnaire answered by the user, and one attributed by a group of 12 assessors who had evaluated the image set of the user. Subsequently, the authors fine-tuned a CNN pre-trained on the large ImageNet object classification dataset \cite{imagenet_cvpr09} to capture the aesthetic attributes of the images, in order to be able to estimate the personality traits associated with those images. For each of the Big Five traits they trained a CNN model with a binary classifier.
Then, each CNN estimates whether the images are ``high'' or ``low'' for the trait the model has been trained for. \\ The study of the personality conveyed by images has not only been used to infer the personality of users, but also to analyze how brands express and shape their identity through social networks. Ginsberg \emph{et al.} \cite{ginsberg2015instabranding} analyzed the pictures posted on Instagram by the leading food brands and classified them into different categories: product, person and product, people and product, humor and product, world events, recipes, campaign with no products, user-generated, celebrity, and video. Then, the analysis of the types of images posted by the brands was used to interpret the identity of each brand along five dimensions of personality: sincerity, excitement, competence, sophistication, and ruggedness. \begin{table*}[!t] \centering \caption{Classification accuracies (\%) reported in the literature using texts or images} \label{tab:tab_soa} \begin{tabular}{|l||c|c||c|c|} \hline & \multicolumn{2}{c||}{Word use only} & \multicolumn{2}{c|}{Image use only} \\ \hline & Golbeck \cite{golbeck2011predicting} & Iacobelli \cite{iacobelli2011large} & Segalin \cite{segalin2016social} & Guntuku \cite{guntuku2015personality} \\ \hline \hline O & 75.50 & 84.36 & 61.00 & 66.10 \\ \hline C & 61.70 & 79.18 & 67.00 & 70.50 \\ \hline E & 58.60 & 71.68 & 65.00 & 69.70 \\ \hline A & 69.70 & 78.31 & 64.00 & 72.30 \\ \hline N & 42.80 & 70.51 & 69.00 & 61.50 \\ \hline \hline \textbf{Avg} & 61.66 & 76.80 & 65.20 & 68.02 \\ \hline \end{tabular} \end{table*} \subsection{Integrating text and image for personality inference} Table \ref{tab:tab_soa} contains the recognition accuracy for each of the OCEAN traits for four different methods based on text or images. In this paper, we want to determine whether the correlations found between personality and texts or images separately also hold when combining image and word use.
Indeed, all previous visual-based approaches take advantage of the many ways in which users interact with images in social networks, such as posting an image, liking it or commenting on it. Specifically, most of the works described above consist of assessing the personality of users based on the images they have liked. For example, the main difference between \cite{segalin2016social} and our approach is that in that work the personality is inferred based on which images have been tagged as \textit{favorite}, thus becoming a study of the relation between aesthetic preferences and personality. In contrast, in our case, we directly explore the images shared by users, selected through accompanying texts strongly related to personality traits, so here the relationship between images and personality arises from the mere act of posting a picture in a social network as a process of communication with others. The whole procedure is detailed next. \section{Methodology} As proven by Yarkoni \textit{et al.} \cite{yarkoni2010personality}, there exists a relationship between the personality of people and the language they use. In other words, the language that we use can reveal our personality traits, as there is statistical evidence that the use of specific words correlates with the personality of online users. Based on that, we design a set of images $S$ conditioned by those words most related to specific personality traits. This can be seen as sampling images from a distribution of images $I$ conditioned on text $t$, or $S \sim p(I|t)$. For each trait of the Big Five personality traits defined before, we have selected the list of words that correlate most positively or negatively with the trait, as suggested by \cite{yarkoni2010personality}. The positively correlated words have been used to identify those images most associated with the strong presence of a trait, and the negatively correlated words are used to determine those images most associated with its absence.
From this set of images $S$ we can train a deep learning model that learns to extract a personality representation from a picture, and use it to automatically infer the personality that the picture conveys. \subsection{Finding \emph{Mind{P}ics} in Social Networks} Based on the aforementioned relationship between text and personality, the proposed personality model is built considering a large quantity of images, called \emph{Mindpics}, tagged with specific personality-related words. These words are the ones most correlated with each personality trait, as presented in \cite{yarkoni2010personality}. In Table \ref{table:words} we show the words most related to each personality trait, which have also been used to identify the set of images for each trait used to train the neural network. Thus, each image related to a tag will correspond to one of the five personality traits, and within the trait it will represent the \textit{high} or the \textit{low} presence of such trait. As the aim of this paper is to evaluate whether any of the author's personality information is embedded in real-world images, we have considered images posted on Instagram \cite{ginsberg2015instabranding,hu2014we,souza2015dawn}. In this social network the users take and share images by posting them together with words or \emph{hashtags}. To build the \emph{Mindpics} dataset, we first crawl a large collection of publicly-shared photos using the Instagram API. The reason for choosing Instagram as the source of images for training the neural network is threefold: (i) images, rather than text, are the main content of Instagram. Contrary to more text-based social networks such as Twitter, most of the content and information shared by an Instagram user is conveyed in the image, so it is reasonable to think that such images embed personality to some extent.
(ii) The fact that the images can be accompanied by text makes it easy to identify those images that appear together with specific words. (iii) By considering in our experiments the public pictures posted by hundreds of users, these users are not aware of being part of a psychological experiment. \begin{figure*}[!t] \centering \includegraphics[width=0.6\linewidth]{figures/all_ocean.jpg} \caption{\textbf{\emph{Mind{P}ics} Samples.} Each column contains 10 \emph{Mind{P}ics} of each personality trait. From left to right: High openness, Low openness ; High conscientiousness, Low conscientiousness ; High extraversion, Low extraversion ; High agreeableness, Low agreeableness ; High neuroticism, Low neuroticism.} \label{fig:all_ocean} \end{figure*} Instagram images are very interesting because there are no boundaries on the kind of pictures that users use to communicate with their followers. Thus, different kinds of users will post different kinds of pictures, as proven by Hu \textit{et al.} \cite{hu2014we}. In their work, they show that the pictures posted on Instagram can be classified into eight main categories, and the users can be divided into five different groups, depending on what kind of pictures they post. These eight main picture categories are: friends, food, gadget, captioned pictures, pets, activity, selfies, and fashion. The difference in the type of images posted can be influenced by the city of the users \cite{hochman2012visualizing} or their age \cite{jang2015generation}. In order to determine the \emph{MindPics} set of images, we used the words from Table \ref{table:words} to query images. For each personality trait, 22 words were used, 11 for each component, and about 1,100 images were selected for each word. The total number of \emph{MindPics} images used for training the personality model is $121,000$.
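The assembly of the \emph{MindPics} set can be sketched as follows. This is an illustrative outline, not the actual crawler: \texttt{tags\_for} and \texttt{query\_images} are hypothetical placeholders standing in for the tag lists of \cite{yarkoni2010personality} and the Instagram API queries.

```python
# Sketch of the MindPics assembly: 5 traits x 2 components x 11 tags,
# keeping about 1,100 images per tag. tags_for() and query_images()
# are hypothetical placeholders, not the paper's actual code.

TRAITS = ["openness", "conscientiousness", "extraversion",
          "agreeableness", "neuroticism"]
TAGS_PER_COMPONENT = 11   # positively / negatively correlated words
IMAGES_PER_TAG = 1100

def build_mindpics(tags_for, query_images):
    """tags_for(trait, component) -> list of tags;
    query_images(tag) -> list of image ids for that tag."""
    dataset = {}
    for trait in TRAITS:
        for component in ("high", "low"):
            for tag in tags_for(trait, component):
                # Keep at most IMAGES_PER_TAG images per queried word.
                dataset[(trait, component, tag)] = query_images(tag)[:IMAGES_PER_TAG]
    return dataset

# 5 traits x 22 tags x ~1,100 images gives the balanced total:
total = len(TRAITS) * 2 * TAGS_PER_COMPONENT * IMAGES_PER_TAG
print(total)  # 121000
```

With these counts the set is balanced by construction, which matches the $121,000$ total reported above.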
Because we used the same number of words per trait and the same number of images per word, the number of training images is balanced across the 5 personality traits and also across the tags used, each trait thus being trained from around $24,000$ images. In Figure \ref{fig:all_ocean} we show 10 random samples of each personality trait obtained with the procedure described above. From left to right the classes are: high Openness, low Openness, high Conscientiousness, low Conscientiousness, high Extraversion, low Extraversion, high Agreeableness, low Agreeableness, high Neuroticism, low Neuroticism. By observing these pictures we can see that, despite the challenging variability of the data, some visual patterns emerge, like pictures containing more people and crowds for \textit{High Extraversion} than for \textit{Low Extraversion}, or \textit{Low Neuroticism} correlating with images depicting more nature and landscape patterns than \textit{High Neuroticism}. As can be seen, images of the same class present a lot of variability; see, for example, how \textit{High Openness} contains images of different objects like books, faces, text, drawings, and landscapes, among others. Also, we can see how images of people appear in several classes. This huge variability makes the problem very challenging, because it is harder to recognize unique patterns in the images of each class. Summarizing, despite the huge intra-class and inter-class variability of the images associated with each of the personality traits, we can show that there is consistency between the images of the same class, whose hashtags are related to the words suggested by \cite{yarkoni2010personality}. \subsection{Building the Personality model} Once we have defined the procedure to determine which set of images $S$ is most related to each personality trait, we next describe how we can model this relationship between \emph{Mind{P}ics} and personality.
In this work we have used a neural network model that maps an input image to a desired output by learning a set of parameters that produce a good representation of the input. This model is hierarchical, \emph{i.e.}, it consists of several layers of feature detectors that build a hierarchical representation of the input, from local edge features to abstract concepts. The final layer consists of a linear classifier, which projects the last-layer features into the label space. Let $x$ be an input image and $f(x;\theta)$ a parametric function that maps this input to an output, where $\theta$ are the parameters. Because a neural network model is a hierarchical combination of computation layers, the output is a composition of non-linear functions: \begin{equation} f(x;\theta) = f^N(f^{N-1}(\ldots f^2(f^1(x;\theta_1);\theta_2);\theta_{N-1});\theta_N) \end{equation} where $N$ is the number of layers in the model, and each computation layer corresponds to a non-linear function $f^n$ \cite{Goodfellow-et-al-2016} with its own parameters $\theta_n$. We find $\theta$ by empirical risk minimization: \begin{equation} \theta = \arg\min_{\theta} \mathcal{L}(y, \hat{y}), \end{equation} where $\mathcal{L}$ is the cross-entropy loss function, $y$ is the output of the model defined as $y = f(x;\theta)$ and $\hat{y}$ is the correct output for input $x$. This function is minimized iteratively by means of Stochastic Gradient Descent (SGD), and the process of minimizing the loss function over a set of images $S$ is referred to as \emph{training} the model. There are different types of deep learning architectures; in this work we use a Convolutional Neural Network (CNN) model \cite{lecun1998gradient}, \cite{krizhevsky2012imagenet}, since CNNs are especially suited for 2D data. After training, the output of the CNN for an image is a vector of scores for each of the personality traits.
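The composition $f(x;\theta) = f^N(\ldots f^1(x;\theta_1)\ldots;\theta_N)$ can be made concrete with a toy two-layer example in NumPy. The layer sizes and the ReLU non-linearity are arbitrary illustrative choices, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy parameters theta = (theta_1, theta_2); sizes are arbitrary.
theta1 = (rng.standard_normal((16, 8)), np.zeros(8))
theta2 = (rng.standard_normal((8, 10)), np.zeros(10))

def f1(x, theta):
    # First computation layer: affine map followed by a ReLU non-linearity.
    W, b = theta
    return np.maximum(0.0, x @ W + b)

def f2(h, theta):
    # Final layer: a linear classifier projecting features into label space.
    W, b = theta
    return h @ W + b

def f(x):
    # f(x; theta) = f^2(f^1(x; theta_1); theta_2): a composition of layers.
    return f2(f1(x, theta1), theta2)

scores = f(rng.standard_normal(16))   # one score per (hypothetical) class
```

Training then amounts to adjusting $\theta_1, \theta_2$ by SGD so that the cross-entropy loss over $S$ decreases.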
For predicting the \emph{high} and \emph{low} scores for each of the personality traits, we propose two different strategies: one model for each trait, and an all-in-one model. \subsubsection{One model for each trait} Having one model for each trait consists of training five CNNs, using five different sets of parameters $\theta_i, i\in [1,5]$, each one to predict one of the Big Five personality traits. Namely, each CNN receives an image as input and produces a two-dimensional vector $o$ as output. This vector $o$ is used as input to the softmax function, which produces $p$, a score vector indicating the probability of an individual personality trait being inferred from the image. For an output vector $ o $ of two components, the softmax function for each component $i$ is defined as: \begin{equation} \label{eqn:softmax} p_i(o_i)=\frac{e^{o_i}}{\sum_{j=1}^{2} e^{o_j}} \end{equation} Note that $ \sum p = 1 $. Because the task to solve is a classification task, the Multinomial Logistic Loss is used as the loss function to minimize. The loss for one sample is defined as: \begin{equation} \label{eqn:mlogloss} \mathcal{L} = -\log(p_{l}), \end{equation} where $p_{l}$ is the probability assigned by the CNN model to the true class $l$ of the input. Note that, although training one model for each trait might seem conceptually easier, it increases the computation by a factor of five, and does not benefit from feature reuse, which is important for the model to generalize. To address these shortcomings, we propose the all-in-one model. \subsubsection{All-in-one model} We propose to use the same CNN for all five traits, but with five different classifiers on the last layer, one for each of the Big Five traits. Each output layer is independent of the others, and consists of a binary classifier like the one described before, with its own loss function. A scheme of this model is depicted in Figure \ref{fig:all_in_one}.
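The softmax of Equation \ref{eqn:softmax} and the per-sample loss of Equation \ref{eqn:mlogloss} are straightforward to state in code; a minimal NumPy sketch (the max-shift is a standard numerical-stability trick, not part of the paper's formulas):

```python
import numpy as np

def softmax(o):
    # Eq. (softmax): p_i = exp(o_i) / sum_j exp(o_j),
    # shifted by max(o) for numerical stability (mathematically identical).
    e = np.exp(o - np.max(o))
    return e / e.sum()

def multinomial_logistic_loss(o, l):
    # Eq. (mlogloss): L = -log(p_l), the loss for one sample with true class l.
    return -np.log(softmax(o)[l])

o = np.array([2.0, 0.5])                # two-way ("high"/"low") output vector
p = softmax(o)                          # probabilities; p.sum() == 1
loss = multinomial_logistic_loss(o, 0)  # loss when the true class is "high"
```

The loss is small when the probability assigned to the true class is close to one, and grows without bound as that probability approaches zero.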
\begin{figure}[!t] \centering \includegraphics[width=0.8\linewidth]{figures/all_in_one.jpg} \caption{\textbf{All-in-one-model.} Scheme of the all-in-one model that jointly trains 5 different classifiers that share parameters, one for each personality trait.} \label{fig:all_in_one} \end{figure} The loss used for each of the 5 independent binary classifiers is the Multinomial Logistic Loss from Equation \ref{eqn:mlogloss}, as in the single-classifier model, with one minor modification. For the multi-classifier setup we must consider that we only want to backpropagate errors made by the classifier responsible for the ground-truth label. For instance, if the true label of an image is \textit{"high Openness"}, the only error backpropagated should be the error produced by the \textit{"Openness"} classifier. Hence, we modify Equation \ref{eqn:mlogloss} so that the loss for each classifier is zero if the classifier does not have to be considered: \begin{equation} \label{eqn:mlogloss_multi} \mathcal{L}_c = \begin{cases} -\log(p_{l}) &\text{correct classifier}\\ 0 &\text{ignored classifier} \end{cases} \end{equation} This equation defines the loss function computed for one of the independent classifiers $c$. Each classifier is independent of the others, so each classifier will compute its own loss function $\mathcal{L}_c$ and backpropagate the error. The rationale behind Equation \ref{eqn:mlogloss_multi} is that, by setting the loss to 0 for the classifiers that do not have to be considered for the input image, the gradients backpropagated to the previous layers by those classifiers will also be 0. Hence, the only classifier that will take part in updating the network parameters is the desired one. \section{Experiments} In this section we describe the different models that we have tested to infer personality traits from images and explain the details of the training process.
We finish the section by presenting the quantitative results of our approach and a qualitative analysis of the model trained for personality inference. \textbf{CNN models.} There are some common CNN architectures \cite{krizhevsky2012imagenet, simonyan2014very, szegedy2015going, he2016deep} that are well established for computer vision tasks. In our experiments, we have used two of these CNN models. The first model is the same as proposed by Krizhevsky \textit{et al.} \cite{krizhevsky2012imagenet}, with the only difference being that we replace the 1000-dimensional output layer with the two configurations explained before. This architecture receives an input image of $227\times227$ pixels, which is run through five consecutive convolutional layers. Then, the output of these convolutional layers is run through two Fully Connected layers before feeding the output layer. From now on this model will be referred to as \textit{Alexnet} for brevity. The other CNN model we have used is a Residual Network presented in \cite{he2016deep}, abbreviated to \textit{ResNet}. In this case we also change only the output layer, to adapt the architecture to our task. The main difference between these two models is that the latter uses residual connections between the network layers, allowing for an easier optimization process of deep networks and thus allowing their depth to be increased. Another difference is that the ResNet architecture uses Batch Normalization \cite{ioffe2015batch} after every convolutional layer. This technique reduces the internal covariate shift in the distribution of the model activations by normalizing the inputs of each layer. In the experiments we use a ResNet of 50 layers, whereas the Alexnet model consists of 8 layers. For both models we test the two output configurations explained previously.
\textbf{Fine-tuning.} We also test two different options for initializing the network's weights: the first option is to train the model from scratch starting from random weights, and the other is the fine-tuning approach, which consists in initializing the network with the weights learned for another task. In this case, we initialized the network with the weights of a model trained on ImageNet \cite{russakovsky2015imagenet}. The fine-tuning approach has been proven \cite{oquab2014learning} to be useful for training neural networks on small datasets with superior performance. The idea behind this approach is that the network first learns how to extract good visual features on a large dataset and then uses these features to learn a classifier on a smaller dataset. \textbf{Training setup.} To train the models, we used the deep learning framework Caffe \cite{jia2014caffe} and an NVIDIA GTX 770 GPU with 4GB of memory. The optimization algorithm used to train the network is Stochastic Gradient Descent with a momentum of 0.9. For the Alexnet model we use a batch size of 128, whereas for the ResNet network we set the batch size to 32. The learning rate is set to $0.01$ when training a network from scratch and to $0.001$ when fine-tuning, increasing the learning rate by a factor of $10$ at the new layers. Dropout \cite{srivastava2014dropout} is used in the Fully Connected layers with a probability $p=0.5$. $L_2$ regularization of the weights is also used, with a factor of $0.0005$. Additionally, during the training stage we apply data augmentation to the input images. Namely, we randomly apply horizontal mirroring to the images and crop a random patch of $224\times224$ pixels of the original $256\times256$ images. The cropped patch is used as the input to the network. The only pre-processing of the images is mean subtraction. The mean image to be subtracted is computed on the training set.
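The augmentation and pre-processing steps above (mean subtraction, random horizontal mirroring, random $224\times224$ crop from the $256\times256$ original) can be sketched in NumPy. The actual pipeline runs inside Caffe's data layer; this is only an illustrative re-implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, mean_image, crop=224):
    """Training-time augmentation as described in the text:
    mean subtraction, random horizontal mirror, and a random
    crop x crop patch. image and mean_image are HxWxC arrays."""
    out = image - mean_image               # mean subtraction (only pre-processing)
    if rng.random() < 0.5:                 # random horizontal mirroring
        out = out[:, ::-1, :]
    h, w = out.shape[:2]
    top = rng.integers(0, h - crop + 1)    # random 224x224 patch location
    left = rng.integers(0, w - crop + 1)
    return out[top:top + crop, left:left + crop, :]

img = rng.random((256, 256, 3))            # stand-in for a training image
mean = np.zeros_like(img)                  # stand-in for the training-set mean
patch = augment(img, mean)                 # shape (224, 224, 3)
```

At test time one would typically skip the random operations and use a deterministic crop instead.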
A random split of the Mindpics dataset is used to divide the images into non-overlapping training and testing sets: $80\%$ of the images are used for training and $20\%$ for testing. \begin{table*}[!t] \centering \caption{Classification accuracies (\%) for each personality trait on the Mindpics dataset} \label{tab:mindpics_results} \begin{tabular}{c c c c c c | c} \toprule \textbf{Model} & \textbf{O} & \textbf{C} & \textbf{E} & \textbf{A} & \textbf{N} & \textbf{Avg} \\ \midrule Alexnet independent (from scratch) & 61.6 & 62.9 & 63.4 & 62.5 & 64.0 & 62.9 \\ Alexnet all-in-one (from scratch) & 62.3 & 63.5 & 63.6 & 62.1 & 65.4 & 63.4 \\ \midrule Alexnet independent (fine-tuned) & 67.8 & 68.3 & 73.5 & 67.4 & 69.1 & 69.2 \\ Alexnet all-in-one (fine-tuned) & 66.9 & 69.2 & 73.6 & 67.8 & 69.4 & 69.4 \\ ResNet independent (fine-tuned) & 69.5 & \textbf{72.8} & 76.9 & 69.1 & \textbf{70.3} & 71.7 \\ ResNet all-in-one (fine-tuned) & \textbf{69.8} & 72.4 & \textbf{77.7} & \textbf{69.8} & 69.6 & \textbf{71.9} \\ \bottomrule \end{tabular} \end{table*} \begin{figure*}[!t] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{figures/roc_curve.jpg} \caption{ROC curves for all traits} \label{fig:roc_curve} \end{subfigure} \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{figures/pr_curve.jpg} \caption{PR curves for all traits} \label{fig:pr_curve} \end{subfigure} \caption{Receiver operating characteristic and Precision-recall curves for all the personality traits. The ROC plot also shows the area under the curve for each trait, and the PR plot shows the average precision for each trait. } \label{fig:curves} \end{figure*} \subsection{Quantitative Results} Table \ref{tab:mindpics_results} shows the accuracies on personality recognition for each trait obtained by the different models and configurations we tested. The first two rows show the performance of the Alexnet model trained from scratch with random weight initialization.
The first row shows the results obtained by training an independent model for each trait, and the second row the results obtained by training just one model that shares most of the computation and has an independent output layer for each trait. As can be seen in the table, the \textit{all-in-one} model obtains better accuracies on all the traits except for Agreeableness, and its average accuracy is $0.5\%$ higher than that of the five independent models. The rest of the results correspond to fine-tuning the Alexnet and ResNet models starting from the weights learned on the ImageNet dataset. The results show that pre-training the network on a larger dataset in order to learn to extract better features improves performance on all the personality traits, increasing the average accuracy by $6\%$. It can also be seen that the \textit{all-in-one} model performs better in this scenario too, both for the Alexnet and ResNet neural networks. In these cases, however, the performance increase is smaller: the average accuracy increases by $0.2\%$ in both cases. The \textit{all-in-one} configuration learns better features because the personality classifiers share the feature extraction layers and each classifier contributes to the learning, whereas a model that only has one classifier does not see as many different images and therefore learns worse features. However, when pre-training on ImageNet, both networks already start with good feature extractors, so this effect is not as important as when the networks are trained from scratch. Besides the increase in accuracy, the \textit{all-in-one} network is also much more efficient, because it shares most of the image processing steps for all traits, thus reducing the amount of computation by a factor of five.
Finally, the results also show that the ResNet network is significantly better than the shallower Alexnet network, achieving up to $2.5\%$ higher average accuracy and consistently increasing the performance for all traits. This increase in performance can be explained by the increased depth of the model, which allows it to learn better representations. To further explore the performance results, in Figure \ref{fig:curves} we show the Receiver Operating Characteristic (ROC) and Precision-Recall (PR) curves for each trait obtained by the best model, along with the area under the ROC curve (AUC) and Average Precision (AP). The predictions used to plot the curves are obtained from the ResNet all-in-one CNN. As can be seen, the Extraversion trait obtains the best scores, followed by Conscientiousness. The dashed line in the ROC curves figure represents a hypothetical random classifier with an AUC of 0.5. As can be seen, the ROC curves for each personality trait are far from random, indicating that the relation between text and images exists and that the model is able to learn a mapping from image to personality. \subsection{Qualitative Results} In this section we analyze the qualitative results obtained with our deep personality model based on \emph{Mind{P}ics}. As previously seen, the best models are those pre-trained on ImageNet, which learn good features in the pre-training process, and the \textit{all-in-one} models, which learn to extract better features by sharing the feature extractor layers between the different personality classifiers. The relationship between the quality of the extracted features and the final classification performance has already been explored in the literature. For instance, \cite{guntuku2015personality} showed that, for personality recognition, improving the hand-crafted features extracted from images leads to an improvement over the results obtained with more basic features \cite{cristani2013unveiling}.
Therefore, we can assume that the increase in performance of the best models is due to the network being able to extract a more suitable representation of the image in the feature space, a representation that allows the classifier to better discriminate the images. To gain better insight into what the new deep features are detecting, we visualize and analyze the images that maximally activate the output of the network for each personality trait. Namely, in order to know which images best represent each personality trait, we find those pictures that maximally activate a specific output of our model. Similarly to \cite{girshick2014rich}, we input all the images to our model, inspect the activation values of a specific neuron, and look for the images that produce the maximum activations. In our case, we inspect the output units associated with each personality trait. For example, by looking at the output of our Extraversion classifier for each of the \emph{Mind{P}ics}, we can know which ones are classified as High Extraversion with the most confidence. In Figures \ref{fig:max_openness}, \ref{fig:max_cont}, \ref{fig:max_extraversion}, \ref{fig:max_agre} and \ref{fig:max_neuro} the most representative \emph{Mind{P}ics} of the High and Low scores for each trait are shown.
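The search for maximally activating images reduces to ranking all images by the activation of one output unit and keeping the top few. A minimal sketch follows; the \texttt{model} function, the $2048$-dimensional features and the linear read-out are illustrative stand-ins for the trained network, not the paper's code:

```python
import numpy as np

def top_activating(model, images, unit, k=10):
    """Return the indices of the k images that maximally activate a
    given output unit (e.g. the "high Extraversion" output).
    `model` maps one image (or feature vector) to per-unit activations."""
    acts = np.array([model(img)[unit] for img in images])
    return np.argsort(acts)[::-1][:k]   # indices sorted by decreasing activation

# Toy stand-in for the trained network: a linear read-out over
# 2048-dimensional features, with 10 hypothetical output units.
rng = np.random.default_rng(0)
W = rng.standard_normal((2048, 10))
fake_features = [rng.standard_normal(2048) for _ in range(100)]
model = lambda feat: feat @ W

best = top_activating(model, fake_features, unit=3, k=10)
```

The same routine, pointed at each of the ten trait outputs in turn, produces the grids shown in the figures below.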
\begin{figure*}[!t] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/max_High_Openness.jpg} \caption{High Openness} \label{fig:max_high_o} \end{subfigure} \hspace{0cm} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/max_Low_Openness.jpg} \caption{Low Openness} \label{fig:max_low_o} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/tsne_open.jpg} \caption{t-SNE visualization} \label{fig:tsne_open} \end{subfigure} \caption{\emph{Mind{P}ics} that maximally activate Openness.} \label{fig:max_openness} \end{figure*} \begin{figure*}[!t] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/max_High_Conscientiousness.jpg} \caption{High Conscientiousness} \label{fig:max_high_c} \end{subfigure} \hspace{0cm} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/max_Low_Conscientiousness.jpg} \caption{Low Conscientiousness} \label{fig:max_low_c} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/tsne_con.jpg} \caption{t-SNE visualization} \label{fig:tsne_con} \end{subfigure} \caption{\emph{Mind{P}ics} that maximally activate Conscientiousness.} \label{fig:max_cont} \end{figure*} \begin{figure*}[!t] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/max_High_Extraversion.jpg} \caption{High Extraversion} \label{fig:max_high_e} \end{subfigure} \hspace{0cm} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/max_Low_Extraversion.jpg} \caption{Low Extraversion} \label{fig:max_low_e} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/tsne_extr.jpg} \caption{t-SNE visualization} \label{fig:tsne_extr} \end{subfigure} \caption{\emph{Mind{P}ics} that maximally activate Extraversion.} \label{fig:max_extraversion} \end{figure*}
\begin{figure*}[!t] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/max_High_Agreeableness.jpg} \caption{High Agreeableness} \label{fig:max_high_a} \end{subfigure} \hspace{0cm} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/max_Low_Agreeableness.jpg} \caption{Low Agreeableness} \label{fig:max_low_a} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/tsne_agre.jpg} \caption{t-SNE visualization} \label{fig:tsne_agre} \end{subfigure} \caption{\emph{Mind{P}ics} that maximally activate Agreeableness.} \label{fig:max_agre} \end{figure*} \begin{figure*}[!t] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/max_High_Neuroticism.jpg} \caption{High Neuroticism} \label{fig:max_high_n} \end{subfigure} \hspace{0cm} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/max_Low_Neuroticism.jpg} \caption{Low Neuroticism} \label{fig:max_low_n} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width=\textwidth]{figures/tsne_neuro.jpg} \caption{t-SNE visualization} \label{fig:tsne_neuro} \end{subfigure} \caption{\emph{Mind{P}ics} that maximally activate Neuroticism.} \label{fig:max_neuro} \end{figure*} Among the \emph{MindPics} that maximally activate the High Openness trait we can see pictures of books, the moon and the sky, while for Low Openness the most relevant pictures are love-related. For High Conscientiousness most of the images are photographs of food, especially healthy food, whereas for Low Conscientiousness we mostly see pictures of people. For Extraversion, a clear distinction can be seen between the images that maximally activate the High and Low outputs. The High Extraversion output is mostly activated by pictures of many people, whereas the Low Extraversion output reacts to cats, books, and knitting images.
In High Agreeableness we mostly see flower pictures, whereas the Low score responds to pictures with text and naked torsos. Lastly, for the Neuroticism trait we observe that the High score is maximally activated by pets, whereas for the Low score we see pictures of landscapes and sunsets. \begin{figure*}[t] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{figures/all_high_50.jpg} \caption{High scores} \label{fig:tsne_all_high} \end{subfigure} ~ \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{figures/all_low_50.jpg} \caption{Low scores} \label{fig:tsne_all_low} \end{subfigure} \caption{t-SNE projection of the \emph{MindPics} that maximally activate the high (left) and low (right) scores of each trait. The colored border of the images indicates their personality trait: Openness images are denoted by orange, Conscientiousness by blue, Extraversion by green, Agreeableness by black and Neuroticism by red.} \label{fig:tsne_all} \end{figure*} In order to further analyze the behavior of our model, we study the image representations that the model produces. These representations are obtained by extracting the activations of the last feature detector of our model (the 50-layer ResNet), a high-level feature vector that describes the input image. In our case, the last layer is a pooling layer, which produces a 2048-dimensional vector. Given that these extracted features reside in a high-dimensional manifold, we cannot directly visualize the images in feature space. By projecting the high-dimensional features to a 2D space, we are able to observe the underlying structure of the image representations. To compute this projection we use t-Distributed Stochastic Neighbor Embedding (t-SNE) \cite{maaten2008visualizing}, a technique for visualizing high-dimensional data. This method preserves both the local and the global structure of the data, so that similar images appear close together in the 2D space.
It works by translating similarities between data points into joint probabilities, and minimizing the Kullback-Leibler divergence between the probability distribution over the high-dimensional points and the probability distribution over the low-dimensional points. In Figure \ref{fig:tsne_all}, we have projected together the 50 images that maximally activate the High and Low scores of each trait, shown in Figures \ref{fig:tsne_open}, \ref{fig:tsne_con}, \ref{fig:tsne_extr}, \ref{fig:tsne_agre} and \ref{fig:tsne_neuro}. In the per-trait t-SNE figures we show with an orange border the pictures belonging to the High score and with a blue border the images belonging to the Low score of each trait. For the t-SNE visualization we used a perplexity value of $30$ and a random initialization of the embedding. The projection of the image features onto a 2D plane shows that, for all traits, the pictures belonging to the High and Low scores are in general well separated, with only a few errors. We must remark that t-SNE does not use the labels of the images in the projection, so these clusters arise purely from the image representations produced by the model. Besides this intra-class separation, we also observe some inter-class clusters within the projected images. For example, in the Openness t-SNE projection, Figure \ref{fig:tsne_open}, we can see three different types of High Openness images: on the right-hand side of the figure, a cluster of images of books and writings; in the middle, a small cluster of pictures of the moon; and on the left, a cluster of landscape and sky images. When the \emph{MindPics} that maximally activate each trait are projected together, as in Figure \ref{fig:tsne_all}, we can see that even though the model is not explicitly trained to distinguish images of different traits, the representations are discriminative enough that images of the same trait cluster together in the feature space.
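The similarity-to-probability step described above can be sketched in a few lines. This is a simplified illustration rather than the implementation used in our experiments: it uses a single Gaussian bandwidth \texttt{sigma} instead of the per-point bandwidths that t-SNE calibrates from the perplexity, and \texttt{features} stands for an array of the 2048-dimensional ResNet activations.

```python
import numpy as np

def joint_probabilities(features, sigma=1.0):
    """High-dimensional similarities -> symmetric joint probabilities P (simplified t-SNE)."""
    n = features.shape[0]
    sq_dists = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    logits = -sq_dists / (2.0 * sigma ** 2)
    np.fill_diagonal(logits, -np.inf)        # a point is not a neighbour of itself
    cond = np.exp(logits)
    cond /= cond.sum(axis=1, keepdims=True)  # conditional probabilities p_{j|i}
    return (cond + cond.T) / (2.0 * n)       # symmetrised joint probabilities p_{ij}

def kl_divergence(P, Q, eps=1e-12):
    """KL(P || Q): the objective t-SNE minimises over the low-dimensional points."""
    return np.sum(P * np.log((P + eps) / (Q + eps)))
```

The full algorithm additionally searches, for each point, the bandwidth that matches the chosen perplexity (here $30$), and then moves the 2D embedding by gradient descent on this Kullback-Leibler objective.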
\begin{figure*}[!t] \centering \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{figures/high_o.jpg} \caption{Images from the High Openness class} \label{fig:rand_high_o} \end{subfigure} ~ \begin{subfigure}[b]{0.49\textwidth} \includegraphics[width=\textwidth]{figures/low_o.jpg} \caption{Images from the Low Openness class} \label{fig:rand_low_o} \end{subfigure} \caption{Random samples of the Openness trait.} \label{fig:rand_openness} \end{figure*} \section{Discussion} Social media is the product of the expression of its users by means of text, images, and speech. This is a great opportunity for companies and researchers not only to learn about the content of the images being shared, but also about the users who create and interact with those images. One interesting characteristic to infer about the users is their personality. In fact, different methods have already been proposed to extract users' personality from social media text and favorited images. This indicates that personality might be an underlying factor in the distribution of the users' data. In this work we took a step towards confirming this hypothesis, directly focusing on the authors of the pictures and showing that personality remains invariant when moving from the text to the image domain. In particular, we showed that, given the images uploaded by users who used words correlated with the different personality traits \cite{yarkoni2010personality}, it is possible to retrieve their personality from the images. Thus, images and text are correlated, which can be explained if both depend on a third variable: personality. In this study, we do not recover the full spectrum of personality traits of one user; instead, we infer the specific personality trait that a single image conveys.
The underlying hypothesis is that when a user posts a picture in a social network, the picture does not express everything the author has in mind, but only the specific message the author intends. In the same way, a picture does not describe the whole personality of the user, but a portion of it. So, in order to get an estimation of the whole personality profile of a user, one could analyze all the different images posted by them, because each image conveys only partial information about their personality. Quantitative and qualitative results were provided, showing that a neural network is indeed able to successfully learn to map \emph{MindPics} to personality traits. Moreover, the images retrieved with our model directly reflect the descriptions of the Big Five traits. For instance, high Conscientiousness, which is described in \cite{digman1990personality,barrick1991big,goldberg1990alternative} as ``Conscientiousness is a tendency to display self-discipline, act dutifully, and strive for achievement against measures or outside expectations...'', highly correlates with images of vegetables and people exercising. Another example of our findings is that images of groups of people, concerts, and social settings such as a bar or a restaurant are associated with high confidence to high Extraversion, in contrast to pictures related to activities that one tends to do alone, such as reading or knitting, which are associated with low Extraversion. Lastly, in Figure \ref{fig:rand_openness} we show some random samples of the Openness trait obtained with the procedure described in this paper, with the left side showing high-scoring images and the right side low-scoring ones. As can be seen, images of the same class present a lot of variability; for example, \textit{High Openness} contains images of different objects such as books, faces, text, drawings, and landscapes, among others.
This is due to the fact that some tags correspond to quite abstract concepts, which leads to very different content in the related images. Besides the challenge of this large intra-class variability, we found that even images from different classes can be visually similar. For example, in Figure \ref{fig:rand_openness} we can see pictures of text and people in both the \textit{High} and \textit{Low} scores of Openness. This is caused by the huge variability of real-world images, and by the fact that many different images are tagged with the same word. \section{Conclusion} A new framework for user personality inference has been proposed. Differently from previous approaches, the proposed framework directly infers the personality of the authors of social media images retrieved according to the tags proposed in \cite{yarkoni2010personality}, which have been shown to be highly correlated with each of the Big Five personality traits. We then showed that it is possible to model the personality traits assigned to the retrieved images using different strategies based on deep learning. Quantitative and qualitative results were shown, suggesting that personality is an underlying factor of the social media data distribution of users. These results open new directions of research for improving the proposed personality model by considering more words and specific images, evaluated with the supervision of psychology experts. \vspace{6pt} \section*{Acknowledgements} The authors acknowledge the funding received from the European Union's H2020 SME Instrument project under grant agreement 728633, the Spanish project TIN2015-65464-R (MINECO/FEDER), the 2016FI\_B 01163 and 2017-SGR-1669 grants by the CERCA Programme/Generalitat de Catalunya, and the COST Action IC1307 iV\&L Net (European Network on Integrating Vision and Language) supported by COST (European Cooperation in Science and Technology). The authors would especially like to thank Ms.
Daniela Rochelle Kent and Mr. Alvaro Granados Villodre for their invaluable help with the ontology of words and the images used in the experiments, respectively. Lastly, we gratefully acknowledge the support of NVIDIA Corporation with the donation of a Tesla K40 GPU and a GTX TITAN GPU used for this research.
\section{Introduction} \label{sec:Intro} The question of what boundary conditions are satisfied by an electromagnetic field has been touched upon in the mathematics literature a number of times since the work by Leontovich in the early 1940's, see \cite{Leontovich}, in relation to wave propagation near the surface of the earth. The related setup of an interface between the free half-space (or its curved analogue) and an electromagnetic medium with a large refractive index is well studied from the physics, analysis, and numerics perspectives, and does not present a challenge with the modern computational power available, see {\it e.g.} \cite{Senior_Volakis}. However, the question of what form the Leontovich condition takes when neither of the two media is nearly perfectly conducting is worth exploring, as replacing the interface conditions by ``effective'' conditions on the boundary of a dielectric medium can lead to a reduction in computational demand, and the information about the dispersion relation for such waves may be exploited for the design of new transmission devices. The wider subject of the derivation of effective conditions along an interface between electromagnetic media has been studied extensively during the last few decades, see {\it e.g.} \cite{Slepyan1}, \cite{Slepyan2}, although in many situations the analysis remains outstanding, especially when the media in contact are heterogeneous or frequency-dispersive. As often noted in the existing literature, interfacial and surface waves are often amenable to a direct analysis in view of the simplified geometry, as the solutions admit a separation into oscillations along the surface and exponential decay away from it. This observation prompts us to try and obtain closed-form dispersion relations for the surface of contact between a dielectric and a general Lorentz medium, at least in the case of a flat interface. 
The related analysis should admit further generalisation towards the curved case when the typical radius of curvature is assumed small compared to the wavelength, see \cite{BabichKuznetsov} for the case of the classical Leontovich condition, which we later explore as a limit of a more general condition. Irrespective of the question of effective boundary conditions, the subject of electromagnetic surface wave propagation, {\it i.e.} wave motion localised at the interface between two electromagnetic media, has received a great amount of attention in the physics and applied mathematics literature since the mid-20th century, see \cite{PML} for a detailed overview of the subject and an extensive bibliography. In particular, the case of stratified dielectric media in contact with a homogeneous dielectric was introduced in \cite{YYH}, with a number of papers discussing its applications following, see {\it e.g.} \cite{YYC}, \cite{Robertson_May}, \cite{Robertson}. The case when the homogeneous dielectric half-space is replaced by a frequency-dispersive (say, Lorentz) medium remains largely unexplored from the point of view of the analysis of interfacial modes. In the present paper we combine the above two open directions and discuss the case of a two-component stratified dielectric in contact with a Lorentz medium. In particular, we derive the corresponding version of the Leontovich condition, and analyse the associated waves along the contact surface. Assuming that the dielectric properties are periodic in the direction orthogonal to the contact surface allows us to consider a non-homogeneous case, while retaining the ability to carry out an explicit analysis by virtue of the Floquet theory, see Section \ref{Maxwell_section}.
Using an effective boundary condition on the surface of contact with the complementary half-space occupied by the Lorentz medium, obtained in Section \ref{sec:LowLor} by imposing the condition of decay of the wave amplitude away from the surface into this half-space, in Section \ref{sec:EqHSProb} we derive a dispersion relation between the wavenumber and frequency for waves propagating along the interface and decaying in amplitude away from it. In Section \ref{sec:EqHSProb} we also investigate how the wavenumber-frequency pairs depend on the loss parameter of the Lorentz half-space, and how the dispersion diagrams for the lossless case depend on the ratio between the plasma and resonant frequencies. Finally, in Section \ref{GeneralToClassical} we discuss the relation between our effective condition and the standard Leontovich condition. \section{Problem setup} \label{sec:Setup} \subsection{Maxwell system} \label{sec:Maxwell_Setup} The Maxwell equations of electromagnetism are, see {\it e.g.} \cite{Jackson}: \begin{equation} \label{eq:Maxwell} \nabla \wedge \vE = -\dfrac{\partial \vB}{\partial t}, \quad\quad\quad \nabla \wedge \vH = \dfrac{\partial \vD}{\partial t}, \end{equation} where $t$ is time, $\vE$, $\vH$ are the electric and magnetic fields, $\vD$, $\vB$ are the electric displacement and magnetic flux density (or ``magnetic induction''), and $\wedge$ denotes the standard 3-vector cross product: $\nabla\wedge\mathbf{A}:=\mathrm{curl}\mathbf{A}$ for a vector field $\mathbf{A}.$ The displacement $\vD$ and induction $\vB$ are related to the electric $\vE$ and magnetic $\vH$ fields via the constitutive laws \begin{equation} \vD=\varepsilon\vE,\quad\vB=\mu\vH, \label{constitutive_laws} \end{equation} where the permittivity $\varepsilon$ (``electric permittivity'', or ``dielectric constant'') and the permeability $\mu$ (``magnetic permeability'') are material parameters, which may depend on the frequency $\omega,$ see Section \ref{Lorentz_section}.
In what follows we denote by $\epsO$ (respectively $\muO$) the permittivity (respectively, permeability) of free space (``vacuum''). We consider the system (\ref{eq:Maxwell}) either in a half-space $\{\mathbf{x}=(x_1, x_2, x_3)\in\mathbb{R}^3: x_{3}>0\},$ with a boundary condition at $\{x_{3}=0\}$ and a decay condition as $x_3\to\infty$, or in the full space ${\mathbb R}^3$ with an interface condition between two materials at $\{x_{3}=0\}$ and decay conditions as $x_3\to\pm\infty.$ Without loss of generality, we seek waves propagating in the $x_1$ direction on the $\{x_{3}=0\}$ surface (or interface), {\it i.e.} solutions to (\ref{eq:Maxwell}) of the form \begin{equation} \label{eq:SolForm} \vE (\mathbf{x},t)= \begin{pmatrix} E_{1}(x_3) \\[0.4em] E_{2}(x_3) \\[0.4em] E_{3}(x_3) \end{pmatrix} \exp\bigl({\rm i}(k x_{1} -\omega t)\bigr), \quad\quad \vB (\mathbf{x},t)= \begin{pmatrix} B_{1}(x_3) \\[0.4em] B_{2}(x_3) \\[0.4em] B_{3}(x_3) \end{pmatrix} \exp\bigl({\rm i}(k x_{1} -\omega t)\bigr),\quad x_1,x_3\in{\mathbb R}, \end{equation} where $k$ is the so-called wavenumber, which has the dimensions of inverse length. In what follows, we choose to work with the amplitude components continuous across the interface: $E_{1},$ $E_{2},$ $D_{3},$ $H_{1},$ $H_{2},$ $B_{3}.$ \subsection{Lorentz materials} \label{Lorentz_section} In the Lorentz oscillator model for the optical properties of materials \cite{AlmogEtAl, Nussenzveig, Rosenfeld}, electrons are considered bound to the nuclei, and the binding force interaction is represented by a ``mass-on-a-spring'' system, under the assumption that the nucleus is far more massive than the electron and hence does not change its position. A damping term is introduced to account for the inherent loss of energy as the (charged) electron accelerates, and the system is subject to a driving force of the same frequency as the incident electromagnetic radiation.
The result of this construction is an $\omega$-dependent relative permittivity $\epsL(\omega):=\varepsilon/\varepsilon_0:$ \begin{equation} \label{eL_er} \epsL(\omega)=\epsR(\omega)+{\rm i}\frac{\sigma(\omega)}{\omega},\qquad\omega>0, \end{equation} \begin{equation} \label{eq:lorEps} \epsR(\omega): = 1 + \frac{\wP^2(\wO^{2}-\omega^{2})}{(\wO^{2}-\omega^{2})^{2}+(\omega\gamma)^{2}},\qquad \sigma(\omega):=-\frac{(\wP\omega)^2\gamma}{(\wO^{2}-\omega^{2})^{2}+(\omega\gamma)^{2}}, \end{equation} where $\wP$ and $\wO$ are the so-called plasma and resonant frequencies and $\gamma$ is the loss factor, all of which are material constants with the dimension of frequency. Note that $\epsR(\omega)$ and $\sigma(\omega)$ are real-valued whenever $\omega$ is real. In what follows we assume that $\mu=\mu_0,$ although the Lorentz theory can also be used to obtain the relative permeability $\muL=\mu/\mu_0$ as a function of $\omega$, for any imperfectly conducting material that admits polarisation by an external magnetic field. In addition, we note that a similar theory for the dependence of permittivity and permeability on $\omega$ has been developed for metals \cite[Chapter 7]{Jackson}, which yields the form \Eref{lorEps} with $\wO=0.$ The convention of expressing $\gamma$ in terms of the mean travel time between electron collisions is also commonly adopted in this case. \section{Maxwell system in a homogeneous space} \label{sec:HomMaxwell} In this section we discuss solutions to two auxiliary problems associated with the system (\ref{eq:Maxwell})--(\ref{constitutive_laws}) that describe the propagation of waves in a homogeneous medium, either dielectric (Section \ref{sec:SingLayMax}) or Lorentz (Section \ref{sec:LowLor}). 
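For numerical purposes it is convenient to note that (\ref{eL_er})--\Eref{lorEps} combine into the single fraction $\epsL(\omega)=1+\wP^{2}/(\wO^{2}-\omega^{2}+{\rm i}\gamma\omega)$. The following sketch (in arbitrary frequency units, with purely illustrative parameter values) evaluates both forms:

```python
import numpy as np

def eps_lorentz(omega, w_p, w_0, gamma):
    # eps_L = eps_R + i*sigma/omega, with eps_R and sigma as in the text
    D = (w_0 ** 2 - omega ** 2) ** 2 + (omega * gamma) ** 2
    eps_r = 1.0 + w_p ** 2 * (w_0 ** 2 - omega ** 2) / D
    sigma = -(w_p * omega) ** 2 * gamma / D
    return eps_r + 1j * sigma / omega

def eps_lorentz_compact(omega, w_p, w_0, gamma):
    # equivalent single-fraction form of the relative permittivity
    return 1.0 + w_p ** 2 / (w_0 ** 2 - omega ** 2 + 1j * gamma * omega)
```

In the lossless limit $\gamma\to0$ both forms reduce to the real-valued expression $1+\wP^{2}/(\wO^{2}-\omega^{2}).$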
\subsection{Classical solution in free space} \label{sec:SingLayMax} In view of our objective to study the Maxwell system in a stratified half-space in Section \ref{Maxwell_section}, we consider a homogeneous, isotropic dielectric material of permittivity $\varepsilon$ and permeability $\mu$ occupying the region $\{\mathbf{x}\in\mathbb{R}^3:x_{3}\in(0,a)\}$ for some constant $a>0,$ which will represent the thickness of individual layers in Section \ref{Maxwell_section}. Using the ansatz \Eref{SolForm} yields the two systems \begin{equation} \left\{\begin{array}{lll} E_{1}'= {\rm i}\omega\mu H_{2} + \dfrac{{\rm i}k}{\varepsilon}D_{3},\\[0.7em] H_{2}'= {\rm i}\omega\varepsilon E_{1},\\[0.9em] -\omega D_{3} = kH_{2},\end{array}\right.\qquad\qquad \left\{\begin{array}{lll} E_{2}'= -{\rm i}\omega\mu H_{1}, \\[0.7em] H_{1}'= \dfrac{{\rm i}k}{\mu}B_{3} - {\rm i}\omega\varepsilon E_{2}, \\[0.9em] \quad \omega B_{3} = kE_{2}, \end{array}\right. \end{equation} each consisting of two differential equations and one algebraic equation. Our analysis henceforth focuses on the transverse electric (TE) polarisation\footnote{Solutions to the transverse electric system, or ``polarisation'', are also referred to as the ``electric waves''. Correspondingly, the transverse magnetic solutions are sometimes referred to as the ``magnetic waves'' in the literature, see {\it e.g.} \cite{BabichKuznetsov}.} involving the field components $E_{1}, H_{2}, D_{3}$, which can be expressed in matrix form as \begin{equation} \label{eq:TESys} \begin{pmatrix} E_{1}' \\[0.4em] H_{2}' \end{pmatrix} = \begin{pmatrix} 0 & -\dfrac{{\rm i}\alpha^{2}}{\omega\varepsilon} \\[0.7em] {\rm i}\omega\varepsilon & 0 \end{pmatrix} \begin{pmatrix} E_{1} \\[0.4em] H_{2} \end{pmatrix},\qquad \alpha^{2} := k^{2}-\omega^{2}\mu\varepsilon.
\end{equation} The above $(2\times2)$-system for $\vU:=(E_1, H_2)^\top$ is solved by diagonalising its coefficient matrix, which we denote by $A:$ \begin{equation*} \label{eq:FormSolU} \vU(x_{3})= \exp(A x_{3})\vU(0) = \begin{pmatrix} \cosh(\alpha x_{3}) & -\dfrac{{\rm i}\alpha}{\omega\varepsilon}\sinh(\alpha x_{3}) \\[0.8em] \dfrac{{\rm i}\omega\varepsilon}{\alpha}\sinh(\alpha x_{3}) & \cosh(\alpha x_{3}) \end{pmatrix} \vU(0),\quad\ \ x_3\in(0,a). \end{equation*} We shall use this form of the solution when considering the Maxwell system in a stratified material in Section \ref{Maxwell_section}. For completeness, we note that calculations for the transverse magnetic (TM) polarisation, analogous to those above for the TE case, result in a system of the form (\ref{eq:TESys}), with $\vU$ replaced by $\mathbf{V}:=\left(H_{1}, E_{2}\right)^\top$ and the pair $(\varepsilon, \mu)$ replaced by $(-\mu, -\varepsilon).$ \subsection{Decaying solution in the Lorentz half-space} \label{sec:LowLor} As preparation for considering the full-space problem, consider \Eref{Maxwell} in the half-space $\{x_{3}<0\}$, occupied by a Lorentz material with permittivity described in (\ref{eL_er})--\Eref{lorEps}. We impose a decay condition away from the boundary, seeking solutions that tend to zero as $x_{3}\rightarrow-\infty$, which yields \begin{equation*} E_{1}(x_{3}) = -C\frac{{\rm i}\aL}{\omega\epsL\epsO}\exp(\aL x_{3}),\qquad H_{2}(x_{3}) = C\exp(\aL x_{3}),\qquad x_3<0,\qquad\quad C\in{\mathbb C}, \end{equation*} \begin{equation} \aL(k,\omega):=\sqrt{k^{2}-\omega^{2}\epsO\epsL\muO\muL},\qquad \arg(\aL)\in\left(-\pi/2,\pi/2\right]. \label{alpha_L} \end{equation} In particular, the following condition at $x_{3}=0$ is satisfied: \begin{equation} \label{eq:GeneralLeontovich} E_{1}(0) =-\frac{{\rm i}\aL}{\omega\epsL\epsO}H_{2}(0), \end{equation} which is similar to the classical Leontovich impedance boundary condition \cite{Senior}.
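The closed-form propagator above can be checked numerically against the exponential of the coefficient matrix computed through its eigendecomposition; the sketch below uses normalized units and purely illustrative parameter values. Note that the closed form is independent of the branch chosen for $\alpha$, since all of its entries are even functions of $\alpha.$

```python
import numpy as np

def layer_matrix_closed_form(x3, k, omega, eps, mu):
    # exp(A*x3) for the TE system U' = A U, U = (E1, H2), via cosh/sinh
    alpha = np.sqrt(complex(k ** 2 - omega ** 2 * mu * eps))
    c, s = np.cosh(alpha * x3), np.sinh(alpha * x3)
    return np.array([[c, -1j * alpha * s / (omega * eps)],
                     [1j * omega * eps * s / alpha, c]])

def layer_matrix_eig(x3, k, omega, eps, mu):
    # the same propagator via numerical diagonalisation A = V diag(w) V^{-1}
    alpha2 = complex(k ** 2 - omega ** 2 * mu * eps)
    A = np.array([[0.0, -1j * alpha2 / (omega * eps)],
                  [1j * omega * eps, 0.0]])
    w, V = np.linalg.eig(A)
    return V @ np.diag(np.exp(w * x3)) @ np.linalg.inv(V)
```

Since $A$ is traceless, the propagator has unit determinant, which provides an additional sanity check.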
The quantity $\aL/(\omega\epsL\epsO)$ has the physical dimensions of Ohms and plays a r\^{o}le analogous to the impedance in the classical condition; in what follows we refer to it as the generalised impedance. In Section \ref{GLC} we show how the condition (\ref{eq:GeneralLeontovich}) is used in the analysis of boundary-value problems, and in Section \ref{GeneralToClassical} we explore its relation to the classical Leontovich condition. To conclude this section, we note that considering similarly the TM polarisation yields the boundary condition \begin{equation*} \label{eq:GLC_TM} E_{2}(0) = \frac{{\rm i}\omega\muO\muL}{\aL} H_{1}(0), \end{equation*} where $\muL$ is the $\omega$-dependent relative magnetic permeability. \section{Maxwell system for stratified media} \label{Maxwell_section} Consider the upper half-space $\{x_{3}>0\}$ occupied by a stratified dielectric, {\it i.e.} a medium consisting of alternating layers of materials A and B, parallel to the plane $\{x_{3}=0\}$, with permittivities $\epsA$ and $\epsB$, and permeabilities $\muA$ and $\muB$, respectively. We denote the period of the structure by $d,$ so that A-layers have thickness $dh$ and B-layers have thickness $d(1-h)$ for some $h\in(0,1)$. We wish to obtain solutions of the Maxwell system \Eref{Maxwell} that decay as $x_{3}\rightarrow\infty,$ subject to interface conditions (continuity of the fields) at $x_{3}=dh,d,d(1+h),2d,\dots.$ \subsection{Floquet analysis of arbitrary whole-space solutions} \label{Floquet_section} In this section we review results on matrix differential equations and Floquet theory, which we shall invoke when solving the Maxwell system in the region $\{x_3>0\}.$ These are specific to the polarisation and geometry we consider in the present paper, but can be developed for other setups, based on the general analytical approach, see {\it e.g.} \cite{Eastham}.
In the stratified half-space the Maxwell system has the form \begin{equation} \label{eq:Flo1} \vU'(x)=A(x)\vU(x),\qquad x>0, \end{equation} where we write $x$ instead of $x_3$ for brevity, $\vU$ is a 2-vector, and $A$ a piecewise-constant $(2\times2)$-matrix \begin{align*} A(x) = \begin{cases} A_{1}, &0<x<dh,\\[0.3em] A_{2}, &dh<x<d, \end{cases} \end{align*} extended $d$-periodically. The matrices $A_{1}$ and $A_{2}$ have the form of the matrix in \Eref{TESys}, with the general $\varepsilon$ and $\mu$ replaced by the constants specific to the materials A and B. Each of them has two distinct eigenvalues and hence $A_j = T_j\Lambda_jT_j^{-1},$ $j=1,2,$ for diagonal $\Lambda_j$ and transformation matrices $T_j,$ whose columns are eigenvectors of $A_j.$ It follows that \begin{align*} \vU(x) = \begin{cases} T_{1}\exp(\Lambda_{1}x)T_{1}^{-1}\vU(0), &0<x\leq dh, \\[0.4em] T_{2}\exp\bigl(\Lambda_{2}(x-dh)\bigr)T_{2}^{-1}T_{1}\exp(\Lambda_{1}dh)T_{1}^{-1}\vU(0), &dh<x\leq d, \end{cases} \end{align*} by solving (\ref{eq:Flo1}) in each layer and using the continuity condition at $\{x_{3}=dh\}.$ In particular, \begin{equation} \label{eq:EndpointEqns} \begin{aligned} &\vU(dh) = T_{1}\exp(\Lambda_{1} dh)T_{1}^{-1}\vU(0), \\[0.3em] &\vU(d)= T_{2}\exp\bigl(\Lambda_{2}d(1-h)\bigr)T_{2}^{-1}T_{1}\exp(\Lambda_{1}dh)T_{1}^{-1}\vU(0). \end{aligned} \end{equation} The general theory of systems of linear ODEs implies the existence of an invertible matrix function $\Phi$ (``fundamental matrix'') such that for any solution $\vU$ to (\ref{eq:Flo1}) one has $\vU(x)=\Phi(x)\Phi(0)^{-1}\vU(0),$ $x>0.$ Taking $\Phi(\cdot)\Phi(0)^{-1}$ instead of $\Phi(\cdot)$ if necessary shows that one can always choose $\Phi$ so that $\Phi(0)=I$ ({\it i.e.} $\Phi$ is the ``canonical fundamental matrix''), which we do henceforth.
In what follows we show that there exists a fundamental matrix $\widetilde{\Phi}$ for \Eref{Flo1} of the form \begin{equation} \widetilde{\Phi}(x)=\widetilde{\Psi}(x)\mathrm{diag}\bigl\{\exp(\widetilde{\lambda}_{1}x), \exp(\widetilde{\lambda}_{2}x)\bigr\},\quad x>0, \label{Phi_tilde_form} \end{equation} where $\widetilde{\lambda}_1,$ $\widetilde{\lambda}_2\in{\mathbb C},$ and $\widetilde{\Psi}(x)$ is a $d$-periodic matrix. The matrix ({\it cf.} (\ref{eq:EndpointEqns})) \begin{align*} \Phi(d) &= T_{2}\exp\bigl(\Lambda_{2}d(1-h)\bigr)T_{2}^{-1}T_{1}\exp(\Lambda_{1}dh)T_{1}^{-1} \end{align*} is referred to as the monodromy matrix (or ``transfer matrix''). We write $\Phi(d)={\mathbb T}\,\mathrm{diag}\left(\lambda_{1},\lambda_{2}\right){\mathbb T}^{-1},$ where $\lambda_{1},\lambda_{2}$ are the eigenvalues of $\Phi(d),$ so that ${\mathbb T}$ is a matrix whose columns are the corresponding eigenvectors of $\Phi(d),$ and define the matrix function $\Psi$ as follows: \begin{equation*} \Psi(x):=\Phi(x){\mathbb T}\,\mathrm{diag}\biggl\{\exp\left(-\frac{x}{d}\ln\lambda_{1}\right), \exp\left(-\frac{x}{d}\ln\lambda_{2}\right)\biggr\}{\mathbb T}^{-1},\qquad x\in(0,d], \label{Psi_form} \end{equation*} where we use the principal value of the logarithm.
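The monodromy matrix $\Phi(d)$ and the associated Floquet multipliers $\lambda_{1},\lambda_{2}$ can be assembled numerically from the closed-form layer propagators of Section \ref{sec:SingLayMax}; a sketch in normalized units with purely illustrative parameter values:

```python
import numpy as np

def monodromy(k, omega, d, h, eps_a, mu_a, eps_b, mu_b):
    # Phi(d) = exp(A2*d*(1-h)) @ exp(A1*d*h) for the layered TE system
    def layer(t, eps, mu):
        alpha = np.sqrt(complex(k ** 2 - omega ** 2 * mu * eps))
        c, s = np.cosh(alpha * t), np.sinh(alpha * t)
        return np.array([[c, -1j * alpha * s / (omega * eps)],
                         [1j * omega * eps * s / alpha, c]])
    return layer(d * (1 - h), eps_b, mu_b) @ layer(d * h, eps_a, mu_a)
```

Since $A_1$ and $A_2$ are traceless, $\det\Phi(d)=\lambda_{1}\lambda_{2}=1$; for positive permittivities, when both layer exponents are real (the evanescent regime), the trace exceeds $2$ and the multipliers form a real reciprocal pair, which is the case relevant to decaying solutions.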
Denote by $\widehat{\Psi}$ the $d$-periodic extension of $\Psi$ to $(0,\infty).$ We claim that the function \begin{equation*} \label{eq:ClaimForm} \widehat{\Phi}(x):=\widehat{\Psi}(x){\mathbb T}\,\mathrm{diag}\biggl\{\exp\left(\dfrac{x}{d}\ln\lambda_{1}\right),\exp\left(\dfrac{x}{d}\ln\lambda_{2}\right)\biggr\}{\mathbb T}^{-1},\qquad x>0, \end{equation*} coincides with $\Phi.$ To see this, note first that $\widehat{\Phi}'=A\widehat{\Phi}$ everywhere except at $d, 2d, 3d,\dots,$ and is continuous at these points, since \begin{equation*} \widehat{\Psi}(d)=\Psi(d)=\Phi(d){\mathbb T}\,\mathrm{diag}\bigl\{\exp\bigl(-\ln\lambda_{1}\bigr), \exp\bigl(-\ln\lambda_{2}\bigr)\bigr\}{\mathbb T}^{-1} = \Phi(d)\Phi(d)^{-1}=I=\widehat{\Psi}(0), \end{equation*} and hence $\widehat{\Psi}$ is continuous. Since $\widehat{\Phi}(0)=\widehat{\Psi}(0)=I=\Phi(0),$ one has $\widehat{\Phi}=\Phi$ by the uniqueness theorem for (\ref{eq:Flo1}), see {\it e.g.} \cite{Coddington_Levinson}. Multiplying both sides of (\ref{eq:ClaimForm}) by ${\mathbb T},$ we find that the fundamental matrix (\ref{Phi_tilde_form}) is given by \begin{equation*} \widetilde{\Phi}(x) :=\Phi(x){\mathbb T} =\widehat{\Psi}(x){\mathbb T}\,\mathrm{diag}\biggl\{\exp\left(\frac{x}{d}\ln\lambda_{1}\right), \exp\left(\frac{x}{d}\ln\lambda_{2}\right)\biggr\}, \quad x>0.
\end{equation*} where $\widetilde{\Psi}:=\widehat{\Psi}{\mathbb T}$ is $d$-periodic by the construction of $\widehat{\Psi}.$ An immediate consequence of the above is that any solution $\vU$ to \Eref{Flo1} has the form \begin{equation} \label{eq:solForm} \vU(x) = \widetilde{\Phi}(x) {\mathbf{C}} ={c}_{1} \exp\left(\dfrac{x}{d}\ln\lambda_{1}\right)\widetilde{\Psi}_{1}(x) + {c}_{2}\exp\left(\dfrac{x}{d}\ln\lambda_{2}\right)\widetilde{\Psi}_{2}(x),\quad x>0, \end{equation} with a constant vector $\mathbf{C}=({c}_{1}, {c}_{2})^\top$ and $\widetilde{\Psi}_j,$ $j=1,2,$ denoting the $j^{\mathrm{th}}$ column of the matrix $\widetilde{\Psi}.$ \subsection{Decaying solution in the stratified half-space} \label{sec:UppHSSol} As the matrices $A_1,$ $A_2$ are traceless, {\it cf.} \Eref{TESys}, one has $\lambda_{1}\lambda_{2}=1,$ which gives two possible cases: $\lambda_{1},\lambda_{2}=\lambda_1^{-1}\in \mathbb{R}$ with $\vert\lambda_2\vert>1,$ or $\lambda_{2}=\overline{\lambda}_1=\lambda_1^{-1}$ with $|\lambda_1|=|\lambda_2|=1.$ In the second case all solutions (\ref{eq:solForm}) are non-decaying and oscillatory, and therefore irrelevant to our study. In the first case $\vU$ is a linear combination of exponentials decaying at $-\infty$ or $\infty.$ The condition of decay as $x\rightarrow\infty$ implies that \begin{equation} \label{eq:DecayCond} \vU(x) ={c}_{1} \exp\left(\dfrac{x}{d}\ln\lambda_{1}\right)\widetilde{\Psi}_{1}(x), \quad x>0. \end{equation} Imposing a specific boundary condition\footnote{As we discuss in Section \ref{Classical_Leontovich_cond_sec}, it is customary to impose a Leontovich condition $(E_1, E_2)^\top= Z(\mathbf{n} \wedge (H_1, H_2)^\top)$ on the boundary, see \cite{BabichKuznetsov}, \cite{Senior}.
Alternatives include the ``metallic'' condition $E_{1}=E_{2}=0,$ obtained by setting the impedance $Z$ to zero.} at $x=0$ thus links the two components of the vector $\widetilde{\Psi}_1(0)={\mathbb T}_1,$ {\it i.e.} the first column of $\mathbb T.$ This provides an equation describing the set of pairs $(k, \omega)$ for which there is a surface wave satisfying the required boundary condition. \section{Leontovich condition at the boundary of a stratified half-space} \label{GLC} \subsection{Half-spaces in contact} \label{FullSpaceProb} We consider the situation, see \Fref{FSDiagram}, where the half-space $\{x_{3}<0\}$ is occupied by a Lorentz material with $\omega$-dependent permittivity $\epsL$ as in \Eref{lorEps} and constant permeability $\mu$, while the complementary half-space $\{x_{3}\ge 0\}$ is occupied by a stratified dielectric as described in Section \ref{Maxwell_section}. \begin{figure} [h] \centering \includegraphics[width=0.82\textwidth]{Fig1_TikzDiagram.pdf} \caption{Diagram of the full-space Maxwell system. \label{fig:FSDiagram}} \end{figure} We study the Maxwell problem in the entire space, where the coefficients $\varepsilon, \mu$ in \Eref{Maxwell} take the values corresponding to the material occupying each region of the space, and seek interfacial wave solutions of the form \Eref{SolForm}. At each interface we impose the standard conditions that the quantities $\vE \wedge \mathbf{n}$, $\vH \wedge \mathbf{n}$, $\vD \cdot \mathbf{n}$, $\vB \cdot \mathbf{n}$ be continuous, where $\mathbf{n}$ denotes the normal to the interface. Seeking a wave on the interface between the two media, we impose the condition of (exponential) decay away from the plane $\{x_{3}=0\}.$ We focus on the TE polarisation, when $\vU=(E_1, H_2)^\top$.
We solve the half-space problem in each of the two media and couple the solutions at the shared boundary $\{x_{3}=0\}.$ \subsection{Equivalent problem in a single half-space, and dispersion relations} \label{sec:EqHSProb} Having obtained the solution in the half-space $\{x_3<0\},$ see Section \ref{sec:LowLor}, we find that the full-space problem supports an interfacial wave when the fields are related at $\{x_{3}=0\}$ via \Eref{GeneralLeontovich} whenever $\Re(\aL)>0$. The condition \Eref{GeneralLeontovich} captures the effect of the Lorentz material on the full-space system: we may solve the equivalent half-space problem for a stratified dielectric, using (\ref{eq:GeneralLeontovich}) as a boundary condition and seeking surface wave solutions in the stratified half-space propagating along $\{x_{3}=0\}$. We apply the boundary condition \Eref{GeneralLeontovich} to the solution (\ref{eq:DecayCond}) in the half-space $\{x_3>0\}$ obtained in Section \ref{Maxwell_section}. The corresponding monodromy matrix, see Section \ref{Floquet_section}, is given by \[ {\mathbb T}=\left(\begin{array}{cc}C_{\rm B}C_{\rm A}+\dfrac{\chi_{\rm B}\varepsilon_{\rm A}}{\chi_{\rm A}\varepsilon_{\rm B}}S_{\rm B}S_{\rm A}&-\dfrac{{\rm i}\chi_{\rm A}}{\varepsilon_{\rm A}}C_{\rm B}S_{\rm A}-\dfrac{{\rm i}\chi_{\rm B}}{\varepsilon_{\rm B}}S_{\rm B}C_{\rm A}\\[1.6em] \dfrac{{\rm i}\varepsilon_{\rm A}}{\chi_{\rm A}}C_{\rm B}S_{\rm A}+\dfrac{{\rm i}\varepsilon_{\rm B}}{\chi_{\rm B}}S_{\rm B}C_{\rm A}&C_{\rm B}C_{\rm A}+\dfrac{\chi_{\rm A}\varepsilon_{\rm B}}{\chi_{\rm B}\varepsilon_{\rm A}}S_{\rm B}S_{\rm A} \end{array}\right), \] where we use the following expressions involving the dimensionless wavenumber $\widehat{k}=dk$ and phase velocity ${v_{\rm p}}=\omega/k:$ \begin{equation*} \begin{aligned} \chiA&:= \sqrt{1-{v_{\rm p}^2}\muA\epsA}, \quad \chiB = \sqrt{1-{v_{\rm p}^2}\muB\epsB},\\[0.4em] S_{\rm A}&:= \sinh(\chiA\widehat{k}h), \quad S_{\rm B}:= \sinh\bigl(\chiB\widehat{k}(1-h)\bigr), \quad 
C_{\rm A}:= \cosh(\chiA\widehat{k}h), \quad C_{\rm B}: = \cosh\bigl(\chiB\widehat{k}(1-h)\bigr). \end{aligned} \end{equation*} Following the argument of Section \ref{sec:UppHSSol}, we are interested in the values of $k$ and $\omega$ for which $(-{\rm i}\alpha_{\rm L}/(\omega\varepsilon_{\rm L}\varepsilon_0), 1)^\top$ is an eigenvector of the matrix ${\mathbb T}$ with an eigenvalue whose absolute value is smaller than one. As a result, we obtain the dispersion relation for interfacial wave solutions: \begin{align} \label{eq:DispRelFullGeneral} &\left[ \frac{\chiA\epsB}{\chiB\epsA} - \frac{\chiB\epsA}{\chiA\epsB} \right]{S}_{\rm A}{S}_{\rm B} + \left[\frac{\chiL\epsA}{\epsL\epsO\chiA} - \frac{\epsL\epsO\chiA}{\chiL\epsA}\right]{S}_{\rm A}{C}_{\rm B} + \left[\frac{\chiL\epsB}{\epsL\epsO\chiB}- \frac{\epsL\epsO\chiB}{\chiL\epsB}\right]{S}_{\rm B}{C}_{\rm A}=0,\\[0.5em] &\chiL:= \sqrt{1-{v_{\rm p}^2}\muO\muL\epsL\epsO}.\nonumber \end{align} The admissible $(\widehat{k},\vp)$ must further satisfy the conditions $\Re(\aL)>0,$ where $\aL$ is given by (\ref{alpha_L}), and \begin{equation} \biggl\vert\biggl(\dfrac{\varepsilon_{\rm B}}{\chi_{\rm B}}S_{\rm B}C_{\rm A}+\dfrac{\varepsilon_{\rm A}}{\chi_{\rm A}}C_{\rm B}S_{\rm A}\biggr)\frac{\chi_{\rm L}}{\varepsilon_{\rm L}\varepsilon_0}+\dfrac{\chi_{\rm A}\varepsilon_{\rm B}}{\chi_{\rm B}\varepsilon_{\rm A}}S_{\rm B}S_{\rm A}+C_{\rm B}C_{\rm A}\biggr\vert<1. \label{lambda_cond} \end{equation} The TM polarisation gives a dispersion relation similar to the above, see a discussion at the end of Section \ref{sec:SingLayMax}. The condition \Eref{DispRelFullGeneral} in general provides two constraints on $\widehat{k}$ and ${v_{\rm p}},$ namely that the real and imaginary parts of the expression vanish. 
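Since the underlying transfer system is traceless, the monodromy matrix has unit determinant and its Floquet multipliers are reciprocal; a decaying solution exists precisely when a multiplier of modulus less than one is available. The following sketch checks this numerically (the parameter values are illustrative assumptions only, not taken from the paper):

```python
import numpy as np

# Illustrative (assumed) nondimensional parameters, eps0 = mu0 = 1.
eps_A, eps_B = 5.0, 10.0       # layer permittivities
mu_A = mu_B = 1.0              # layer permeabilities
h, k_hat, v_p = 0.5, 2.0, 0.3  # filling fraction, k*d, phase velocity omega/k

chi_A = np.sqrt(1 - v_p**2 * mu_A * eps_A)
chi_B = np.sqrt(1 - v_p**2 * mu_B * eps_B)
SA, CA = np.sinh(chi_A * k_hat * h), np.cosh(chi_A * k_hat * h)
SB, CB = np.sinh(chi_B * k_hat * (1 - h)), np.cosh(chi_B * k_hat * (1 - h))

# TE monodromy matrix over one period, as given above.
T = np.array([
    [CB*CA + (chi_B*eps_A)/(chi_A*eps_B)*SB*SA,
     -1j*chi_A/eps_A*CB*SA - 1j*chi_B/eps_B*SB*CA],
    [1j*eps_A/chi_A*CB*SA + 1j*eps_B/chi_B*SB*CA,
     CB*CA + (chi_A*eps_B)/(chi_B*eps_A)*SB*SA],
])

assert abs(np.linalg.det(T) - 1) < 1e-12   # traceless system: det T = 1
lam = np.linalg.eigvals(T)
assert abs(lam[0] * lam[1] - 1) < 1e-10    # reciprocal Floquet multipliers
lam_decay = lam[np.argmin(np.abs(lam))]    # multiplier of the decaying solution
print(abs(lam_decay) < 1)                  # a decaying branch exists here
```

For these values both multipliers are real, corresponding to the first case of Section \ref{sec:UppHSSol}; the decay condition then selects the multiplier with modulus smaller than one.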
With the exception of the case of a homogeneous dielectric occupying $\{x_{3}>0\},$ the real part in \Eref{DispRelFullGeneral} does not vanish identically in any region of $(\gamma, \omega, k).$ Should the imaginary part vanish identically, one obtains dispersion branches provided by a single equation\footnote{We can interchange between the pairs $(k,\omega)$ and $(\widehat{k},\vp)$ freely, so a dispersion relation between one pair gives an equivalent one between the other.} in $(k,\omega)$ for a fixed value of $\gamma.$ One such situation arises when the Lorentz material is lossless ($\gamma=0$), in which case one obtains an explicit dispersion relation possessing multiple dispersion branches, see Section \ref{LosslessSystem}. In general, the imaginary part in \Eref{DispRelFullGeneral} is a function of $\gamma, \omega, k:$ one can express (at least locally) $\gamma$ in terms of $\omega,$ $k$ and substitute it into the real part of \Eref{DispRelFullGeneral}, to obtain a dispersion relation in $k$ and $\omega$ only. We do not pursue this approach analytically; however, an example of a numerical solution $(\gamma, \omega, k)$ to \Eref{DispRelFullGeneral} is provided in Figure \ref{3Dfigure}. \begin{figure} [h!] \center \includegraphics[width=0.82\textwidth]{TriplesPlot_AllVars.png} \caption{{\sc Solutions $(\gamma, \omega, k)$ to \Eref{DispRelFullGeneral}.} The plot shows the dependence of $(\omega, k)$ on the values of $\gamma$ such that $\log_{10}(\gamma/\omega_0)\in[-15,15].$ Parameter values used: $h=0.5$, $\muA=\muB=\muO$, $\epsA=5\epsO$, $\epsB=10\epsO,$ $\mu_{\rm L}=1.$ The value $\wP/\wO=2.13$ is the same as in \cite{AlmogEtAl}. \label{3Dfigure}} \end{figure} \begin{figure} [h!]
\centering \subfloat[Projection of the curve $(\gamma,\omega,\hat{k})$ onto the $(\gamma, \hat{k})$-plane.]{\includegraphics[width=0.45\textwidth]{TriplesPlot_kg.png}} \hfill \subfloat[Projection of the curve $(\gamma,\omega,\hat{k})$ onto the $(\omega, \hat{k})$-plane.]{\includegraphics[width=0.45\textwidth]{TriplesPlot_kw.png}} \hfill \subfloat[Projection of the curve $(\gamma,\omega,\hat{k})$ onto the $(\gamma, \omega)$-plane.]{\includegraphics[width=0.45\textwidth]{TriplesPlot_wg.png}} \caption{{\sc Solutions $(\gamma, \omega, k)$ to \Eref{DispRelFullGeneral} in projections.} The plots show the projections of the curve in Figure \ref{3Dfigure} onto different coordinate planes in the $(\gamma, \omega, k)$-space. The range for $\gamma$ and the parameter values are the same as in Figure \ref{3Dfigure}.} \end{figure} \begin{remark} We have so far assumed the magnetic permeability $\muL$ of the Lorentz medium to be a fixed material constant. However, it can be more generally modelled as $\omega$-dependent, when it takes a form analogous to $\epsL$ in \eqref{eL_er}--\eqref{eq:lorEps} and is treated as a function of the frequency $\omega,$ see Section \ref{Lorentz_section}. \end{remark} \subsection{Interfacial waves for a lossless Lorentz medium and stratified dielectric} \label{LosslessSystem} In the case of a lossless Lorentz material, $\varepsilon_{\rm L}=\varepsilon_{\rm r}$, {\it cf.} (\ref{eL_er})--(\ref{eq:lorEps}). As before, the pairs $(\widehat{k},\vp)$ that satisfy (\ref{eq:DispRelFullGeneral}) are also subject to the conditions $\Re(\alpha_{\rm L})>0$ and (\ref{lambda_cond}). The corresponding dispersion branches on the region $\omega\le50\omega_0,$ $dk\le10$ are shown in \Fref{LLDWOKall}. Notably, the lowest branch possesses cut-on values of the frequency and wavenumber, $\omega/\omega_0=1,$ $\hat{k}=0.526,$ below which no interfacial waves exist. \begin{figure}[!h] \centering \subfloat[ The value $\wP/\wO=2.13$ is the same as in Figure \ref{3Dfigure}.
] {\includegraphics[width=0.45\textwidth]{DispBranchesRatio2-13.png}} \hfill \subfloat[The ratio of the plasma and resonant frequencies is increased to $\wP/\wO=5.$ ] {\includegraphics[width=0.45\textwidth]{DispBranchesRatio5-00.png}} \hfill \subfloat[The ratio of the plasma and resonant frequencies is increased further to $\wP/\wO=10.$ A long-wave solution appears at a finite frequency. ] {\includegraphics[width=0.45\textwidth]{DispBranchesRatio10-00.png}} \hfill \subfloat[The ratio of the plasma and resonant frequencies is increased further to $\wP/\wO=25.$ Long-wave solutions appear at a larger number of finite frequencies. ] {\includegraphics[width=0.45\textwidth]{DispBranchesRatio25-00.png}} \caption{{\sc Dispersion for lossless Lorentz half-space.} Dispersion branches that support decaying surface waves, when a lossless Lorentz material occupies the lower half-space. Parameter values used in each panel: $h=0.5$, $\muA=\muB=\muO$, $\epsA=5\epsO$, $\epsB=10\epsO,$ $\mu_{\rm L}=1,$ $\gamma=0.$ \label{fig:LLDWOKall}} \end{figure} \section{Half-space impedance condition} \label{GeneralToClassical} \subsection{Classical Leontovich condition as a limit of (\ref{eq:GeneralLeontovich})} \label{Classical_Leontovich_cond_sec} It is common to make approximations to the exact interface conditions in systems similar to those in Section \ref{GLC}, by invoking a boundary condition at $\{x_{3}=0\}$ for one of the two half-spaces. One such approximation is the classical Leontovich (or impedance) boundary condition, which requires that at the interface $\{x_{3}=0\}$, the tangential components of the $\vE$ field ($\vE_{\mathrm{t}}$) and the $\vH$ field ($\vH_{\mathrm{t}}$) are related via \begin{equation} \label{eq:Leontovich} \vE_{\mathrm{t}} = Z(\mathbf{n} \wedge \vH_{\mathrm{t}}). \end{equation} Here, $\mathbf{n}$ denotes the normal vector to the interface (pointing out from the material that is to be neglected, into the remaining material).
The quantity $Z$ represents an impedance, and more generally has the form $Z=\sqrt{\mu / \varepsilon}$ for a permittivity $\varepsilon$ and permeability $\mu$. In this section we show that (\ref{eq:GeneralLeontovich}) can be used to recover the classical Leontovich condition. One can express the generalised impedance in (\ref{eq:GeneralLeontovich}) as follows: \begin{align*} -\frac{{\rm i}\aL}{\omega\epsL\epsO} = -\frac{\rm i}{\omega\epsL\epsO}\sqrt{k^{2}-\omega^{2}\muL\muO\epsL\epsO} = -{\rm i}\sqrt{\frac{\muL\muO}{\epsL\epsO}}\sqrt{\frac{k^{2}}{\omega^{2}\muL\muO\epsL\epsO}-1}, \end{align*} in the case of TE polarisation, and \begin{align*} -\frac{{\rm i}\omega\muO\muL}{\aL} &= -{\rm i}\sqrt{\frac{\muO\muL}{\epsO\epsL}}\left(\frac{k^{2}}{\omega^{2}\muO\muL\epsO\epsL} -1 \right)^{-\frac{1}{2}} \end{align*} for TM polarisation. For the case of constant $\muL$ and provided \begin{equation} \abs{\frac{k^{2}}{\omega^{2}\epsL}}\ll 1, \label{new_cod} \end{equation} we obtain the classical Leontovich condition with impedance $Z = \sqrt{\muO\muL/\epsO\epsL}$ as an approximation, up to the order $O\bigl(\bigl\vert \omega^{-2}\epsL^{-1}k^{2}\bigr\vert\bigr),$ to the condition \Eref{GeneralLeontovich}. To conclude, we note that in \cite{Senior} the Leontovich condition is purported to be derived under the condition that $\Mod{\epsL}\gg1,$ which coincides with (\ref{new_cod}) under the assumption that $\omega/k$ is bounded above and below. 
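The reduction can be illustrated numerically. In the sketch below, the nondimensional units ($\epsO=\muO=1$), the sign convention for the loss term in $\epsL$, and the parameter values are all assumptions made for illustration:

```python
import cmath

# Assumed nondimensional parameters (eps0 = mu0 = 1), TE polarisation.
eps_L = 100 - 10j    # |eps_L| >> 1; the sign of the loss term is a convention
mu_L = 1.0
omega, k = 1.0, 0.5  # chosen so that |k^2/(omega^2 eps_L)| << 1

# alpha_L = sqrt(k^2 - omega^2 mu_L mu_0 eps_L eps_0), branch with Re(alpha_L) > 0.
alpha_L = cmath.sqrt(k**2 - omega**2 * mu_L * eps_L)
if alpha_L.real < 0:
    alpha_L = -alpha_L

Z_gen = -1j * alpha_L / (omega * eps_L)  # generalised impedance (TE case)
Z_cl = cmath.sqrt(mu_L / eps_L)          # classical Leontovich impedance

small = abs(k**2 / (omega**2 * eps_L))
rel_err = abs(Z_gen - Z_cl) / abs(Z_cl)
print(rel_err < small)  # agreement to order |k^2/(omega^2 eps_L)|
```

Halving `k` reduces the relative discrepancy by roughly a factor of four, consistent with the stated order of the approximation.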
\subsection{Homogeneous half-space with the classical Leontovich condition} \label{sec:BabichRecovery} Under the assumption that the stratified dielectric half-space is actually homogeneous, \Eref{DispRelFullGeneral} is reduced to the equation \begin{align*} &\sinh\bigl(\chiA\widehat{k}\bigr)\left(\frac{\epsL\epsO\chiA}{\epsA\chiL} - \frac{\epsA\chiL}{\epsL\epsO\chiA}\right)=0, \end{align*} by setting\footnote{To obtain a homogeneous half-space as a limit of the stratified system, one could also take the limit $h\rightarrow0.$} $\epsA=\epsB$ and $\muA=\muB.$ This equation has solutions when either factor vanishes. The $\sinh$ factor vanishes only when $\chiA=0$ (and hence $\chiB=0$), which does not correspond to a wave decaying into the dielectric (the expression in (\ref{lambda_cond}) has modulus one, and the solution (\ref{eq:DecayCond}) has constant amplitude). Setting the second factor to zero and rearranging, we obtain the dispersion relation \begin{align*} \omega^{2} &= \frac{k^{2}}{\muA\epsO\epsL-\muO\muL\epsA}\left(\frac{\epsO\epsL}{\epsA}-\frac{\epsA}{\epsO\epsL}\right). \labelthis\label{eq:DispRelHomoHS2} \end{align*} Note that for a general Lorentz material \Eref{DispRelHomoHS2} still has non-zero real and imaginary parts, due to the presence of $\epsL$. Therefore, the discussion of Section \ref{sec:EqHSProb} is still applicable here, although it is now possible to rearrange and obtain $\gamma$ as a function of $\omega$ and $k$. One can also obtain results concerning homogeneous dielectric systems with classical Leontovich boundary conditions from the more general system presented in Section \ref{GLC}, using the fact that the dispersion relation collapses to \Eref{DispRelHomoHS2}.
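A quick numerical check of \Eref{DispRelHomoHS2}, in assumed nondimensional units $\epsO=\muO=1$ and with an illustrative large, real, non-dispersive $\epsL$, confirms that for $\vert\epsL\vert\gg1$ the branch approaches the light line $\omega=k/\sqrt{\muA\epsA}$ of the dielectric, consistently with the leading-order reduction discussed next:

```python
import math

# Assumed nondimensional parameters (eps0 = mu0 = 1); lossless, non-dispersive eps_L.
eps_A, mu_A, mu_L = 5.0, 1.0, 1.0
k = 1.0
eps_L = 1.0e6  # |eps_L| >> 1

# Dispersion relation for the homogeneous dielectric half-space (DispRelHomoHS2).
omega2 = k**2 / (mu_A * eps_L - mu_L * eps_A) * (eps_L / eps_A - eps_A / eps_L)
omega = math.sqrt(omega2)

omega_limit = k / math.sqrt(mu_A * eps_A)      # light line of the dielectric
print(abs(omega - omega_limit) / omega_limit)  # ~ eps_A/(2 eps_L), i.e. tiny
```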
In the regime of bounded $k/\omega$ and with $\abs{\epsL}\gg 1$, the dispersion relation \Eref{DispRelHomoHS2} reduces, to leading order, to the relation obtained in \cite[Eq.\,(9)]{BabichKuznetsov}: \begin{align*} \omega = \dfrac{kc}{\sqrt{\dfrac{\epsA\muA}{\epsO\muO}\left(1-\dfrac{\muO\muL}{\epsO\epsL}\dfrac{\epsA}{\muA}\right)}}. \end{align*} \section{Conclusions} In our analysis of the full-space problem, we have demonstrated the possibility of obtaining a dispersion relation, as in \Eref{DispRelFullGeneral}, that in general has non-trivial real and imaginary parts, thereby providing only finitely many points $(\omega, k)$ that support surface waves for each value of the loss parameter $\gamma.$ The imaginary component of the dispersion relation has the form $F(\gamma, \omega, k)=0$, and if it can be solved for $\gamma$ in terms of the frequency $\omega$ and wavenumber $k$, one can obtain dispersion branches via substitution into the real part of the dispersion relation. Each point of such a dispersion relation corresponds to an individual value of the parameter $\gamma.$ In general, however, the equation $F(\gamma, \omega, k)=0$ is unlikely to admit a closed form for $\gamma$ as a function of $\omega,$ $k.$ In the case of a lossless Lorentz material (see Section \ref{LosslessSystem}), {\it i.e.} when $\gamma=0,$ one can obtain dispersion branches $(\omega, k)$ that support (decaying) surface waves, see \Fref{LLDWOKall}. These exhibit two qualitative differences from the case of a stratified medium in contact with a non-dispersive dielectric (see Appendix), namely the presence of an additional low-frequency branch with a frequency cut-on at approximately the resonant frequency, as well as the presence of an increasing number of long-wave propagating modes for larger values of the ratio between the plasma and resonant frequencies.
In the course of our analysis of the full-space Maxwell system, the interface condition \Eref{GeneralLeontovich} has been obtained for the Maxwell system \Eref{Maxwell} in the case when the lower (Lorentz) half-space has $\omega$-dependent permittivity as in \eqref{eL_er} and constant permeability. This condition plays an analogous r\^{o}le to the classical Leontovich condition \Eref{Leontovich}, in that it allows one to reduce a full-space problem with an interface to a half-space problem, with a boundary condition derived from one of the constituent media. The expression for the generalised impedance comes from the exterior Lorentz material, and other material parameters emerge from the stratified half-space to which the problem is reduced. In this sense, the approach can be viewed as a combination of the perspectives of \cite{Senior} and \cite{BabichKuznetsov}: in the former, the impedance boundary condition is derived for the Maxwell equations in what would be the analogue of our Lorentz half-space, while in the latter the condition is postulated in the complementary dielectric half-space. In conclusion, we note that the results of the present paper can be generalised to the context of linearised elasticity, non-local constitutive relations (such as those discussed in \cite{Chebakov_et_al}), as well as the case of a thin interfacial layer between heterogeneous and/or dispersive media. We postpone the related analysis to future publications. \section*{Appendix: Lorentz medium replaced by a non-dispersive dielectric} When the half-space occupied by the Lorentz material is filled with a non-dispersive dielectric instead, we obtain the dispersion curves shown in Figure \ref{fig:DispBranchesDSD_Bbigger}. \begin{figure} [h!]
\center \includegraphics[width=0.66\linewidth]{DispBranchesRatio0-00.png} \caption{{\sc Dispersion curves for a homogeneous dielectric half-space in contact with a stratified dielectric.} Parameter values used: $h=0.5, \muA=\muB=\mu_0,$ $\epsA=5\epsO,$ $\epsB=10\epsO,$ and $\varepsilon_{\rm L}=\mu_{\rm L}=1.$ The value $\wO=6.077\times10^{15}\,{\rm s}^{-1}$ is used to obtain the non-dimensional frequency $\omega/\omega_0.$ \label{fig:DispBranchesDSD_Bbigger}} \end{figure} The convergence, as $\omega_{\rm p}/\omega_0\to0,$ of the dispersion diagrams for waves along the interface between a lossless Lorentz dielectric half-space and a stratified dielectric is illustrated in Figure \ref{fig:DispBranchesDSD_Bbigger1}. \begin{figure}[!h] \centering \subfloat[The ratio $\wP/\wO$ is set to unity. ] {\includegraphics[width=0.45\textwidth]{DispBranchesRatio1-00.png}} \hfill \subfloat[Lowest dispersion branch. The ratio $\wP/\wO$ is set to unity, as in the panel (a). ] {\includegraphics[width=0.45\textwidth]{LowestBranchRatio1-00.png}} \hfill \subfloat[Lowest dispersion branch. The ratio of the plasma and resonant frequencies is decreased to $\wP/\wO=0.1.$ ] {\includegraphics[width=0.45\textwidth]{LowestBranchRatio0-10.png}} \hfill \subfloat[Lowest dispersion branch. 
The ratio of the plasma and resonant frequencies is decreased further to $\wP/\wO=0.01.$ ] {\includegraphics[width=0.45\textwidth]{LowestBranchRatio0-01.png}} \hfill \caption{{\sc Waves along the interface between a dispersive half-space and a stratified dielectric, as $\omega_{\rm p}/\omega_0\to0.$} Parameter values used in each panel: $h=0.5,$ $\muA=\muB=\mu_0,$ $\epsA=5\epsO,$ $\epsB=10\epsO,$ $\mu_{\rm L}=1,$ and $\gamma=0.$ The value $\wO=6.077\times10^{15}\,{\rm s}^{-1}$ is used to obtain the non-dimensional frequency $\omega/\omega_0.$ \label{fig:DispBranchesDSD_Bbigger1}} \end{figure} \section*{Acknowledgements} KC is supported by Engineering and Physical Sciences Research Council: Grant EP/L018802/2 ``Mathematical foundations of metamaterials: homogenisation, dissipation and operator theory". WG is supported by a scholarship from the EPSRC Centre for Doctoral Training in Statistical Applied Mathematics at Bath (SAMBa), under the project EP/L015684/1.
\section{Introduction} \label{sec:Intro} The first attempts to classify asteroids, mostly based on multi-color photometry and spectroscopy at visible wavelengths, led to the identification of the so-called S and C ``complexes''. Later on, taxonomy based on low-resolution spectra in the visible domain by \citet{Bus_1999}, \citet{Xu_1995}, and \citet{Bus_Bin} (we will refer to it as SMASS taxonomy hereafter) resulted in the partition of the S-complex into several sub-classes, based on differences in spectral slope and drop of reflectance at wavelengths above 0.72-0.76~${\rm \mu}$m. Among them, the L-class includes asteroids having the smallest drop of reflectance and a relatively steep slope. The first goal of taxonomy is to differentiate asteroids based on their composition. However, the differences between the L and other classes (S, K, and A) identified using visible-wavelength data are sometimes not very sharp and can lead to compositional misclassification. Another SMASS class, similar to the L, but exhibiting a slightly steeper spectral slope, was also introduced and named Ld \citep{Bus_Bin}. More recently, \citet{Dem_2009} extended the SMASS taxonomy to the near-infrared region (from 0.82 to 2.45~${\rm \mu}$m), including the whole 1~${\rm \mu}$m silicate absorption band and extending up to the other major absorption band of silicates around 2~${\rm \mu}$m. Based on the observed behaviour in this larger wavelength range, most of the previously identified L-class asteroids retained an L-classification. These asteroids were found to exhibit a strong absorption feature at wavelengths around 2~${\rm \mu}$m and a near absence of the 1~${\rm \mu}$m band. However, several differences were also found with respect to the SMASS classification. Some objects previously classified as K- or A-class were found to belong to the new L-class, whereas some SMASS L-class objects were moved to the S-, D- or X-class in the new classification.
The SMASS Ld-class was found to be almost fully contained in the new L- and D-classes \citep{Dem_2009}. For the sake of clarity, in the rest of this paper the SMASS taxonomy will be referred to as (SMASS), while the more recent DeMeo taxonomy will be referred to as (DM). The degree of linear polarization of sunlight scattered by asteroid surfaces is a function of the phase angle, namely the angle between the directions asteroid-Sun and asteroid-observer. The resulting phase-polarization curves share a common general morphology, with variations that are mostly albedo-dependent \citep[for a recent analysis of the subject, see][and references therein]{Cel_2015a}. The plane of linear polarization is almost always found to be either coincident with or perpendicular to the scattering plane \citep{Dol_1989,Muinonenetal2002}. The state of linear polarization of asteroids is usually described using the $P_{\rm r}$ parameter, whose modulus is equal to the measured degree of linear polarization, with a sign added to indicate whether the plane of polarization is found to be perpendicular (positive sign) or parallel (negative sign) to the scattering plane. Observations show that the range of phase angle for which $P_{\rm r}$ is negative extends from zero up to an \textit{inversion angle} ($\alpha_{\rm inv}$) generally found around $20^\circ$. This region of the phase-polarization curve is commonly called the ``negative polarization branch''. At larger phase angles, the sign of $P_{\rm r}$ becomes positive (``positive polarization'').
The details of the morphology of phase-polarization curves (the extreme value of negative polarization $P_{\rm min}$, the phase angle $\alpha(P_{\rm min})$ at which it occurs, the value of the inversion angle $\alpha_{\rm inv}$, and the slope of the curve around it) are not only diagnostic of the albedo \citep{Cel_2015a}, but have also been found to be useful to discriminate among different taxonomic classes based on reflectance spectra \citep{Penttila05}. Fig. \ref{fig:PP_Curve} shows the typical asteroid phase-polarization curve, using the asteroid (1)~Ceres as an example. The locations of $P_{\rm min}$, $\alpha(P_{\rm min})$, and $\alpha_{\rm inv}$ are displayed and labelled. \begin{figure} \centering \includegraphics[width=8cm]{PP_Curve_Param.eps} \caption{Example of the typical phase-polarization curve of (1)~Ceres.} \label{fig:PP_Curve} \end{figure} \citet{b5} reported the discovery of the anomalous polarimetric properties exhibited by the asteroid (234)~Barbara. The phase-polarization curve of this object possesses an unusually wide negative polarization branch, extending up to an inversion angle around $30^{\circ}$. Such behaviour had not been previously observed and was not predicted by theoretical models \citep{Shk_1994}. Because (234) Barbara had been classified as Ld in the SMASS taxonomy, other asteroids of this or similar taxonomic classes were subsequently observed, and 18 objects sharing the same polarimetric behaviour as Barbara were found by \citet{b6}, \citet{b31}, \citet{Gil_2011}, \citet{Gil_2014}, \citet{Cel_2014}, \citet{Bag_2015}, and \citet{Dev_2017a}. They were collectively named \textit{Barbarians} after the asteroid (234)~Barbara. Barbarians are a class of rare objects which do not exhibit any preferred location within the asteroid main belt, apart from the presence of the Watsonia dynamical family \citep{Cel_2014}, and possibly a few others, whose members are Barbarians.
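Once a phase-polarization curve has been fitted, the three morphological parameters can be extracted numerically. The sketch below uses the exponential-linear empirical representation $P_{\rm r}(\alpha)=A\,(e^{-\alpha/B}-1)+C\alpha$ with arbitrary coefficients; both the functional form and the values are assumptions of this sketch, not necessarily the model adopted in Section \ref{sec:Data_Anal}:

```python
import math

# Assumed exponential-linear model with illustrative coefficients
# (P_r in percent, alpha in degrees).
A, B, C = 2.0, 8.0, 0.1

def P_r(alpha):
    return A * (math.exp(-alpha / B) - 1.0) + C * alpha

# Inversion angle: positive root of P_r, located by bisection on [1, 40] deg.
lo, hi = 1.0, 40.0  # P_r(lo) < 0 < P_r(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if P_r(mid) < 0:
        lo = mid
    else:
        hi = mid
alpha_inv = 0.5 * (lo + hi)

# Deepest negative polarization: dP_r/dalpha = 0 gives exp(-alpha/B) = C*B/A.
alpha_min = -B * math.log(C * B / A)
P_min = P_r(alpha_min)

print(round(alpha_inv, 2), round(alpha_min, 2), round(P_min, 3))
```

With these illustrative coefficients the inversion angle falls near $18^{\circ}$, i.e. in the range typical of non-Barbarian asteroids.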
In this paper, we will call ``Barbarian'' any asteroid exhibiting an inversion angle above $25^{\circ}$, independently of any other physical or dynamical property. Tentative explanations have been proposed for the unusual polarimetric properties of the Barbarians. They include peculiar surface composition and/or texture properties, and/or the presence of large concavities that might introduce an unusual distribution of the directions of the incidence and emergence angles of scattered sunlight, with respect to the case of purely convex bodies. Barbara itself, in particular, has been extensively studied and was actually found to have a fairly irregular shape including large-scale concavities. The rotation period was also found to be unusually long ($P = 26.4744 \pm 0.0001$ h) \citep{Tan_2015} compared to other asteroids of the same size. Other known Barbarian asteroids also exhibit slow rotation rates. In \citet{Dev_2017b} we studied in more detail the role of possible concavities and also the significance of an anomalous abundance of slow rotators among Barbarians. However, it seems that we do not have enough evidence yet to draw any definitive conclusion. Available reflectance spectra suggest that Barbarians have spinel-enriched surfaces. The first indication of this came from an analysis of the reflectance spectra of (387) Aquitania and (980) Anacostia performed by \citet{Bur_1992}, well before the discovery of the Barbarian polarimetric properties of these two objects by \citet{b6} and \citet{b31}. \citet{Bur_1992} noted that these two asteroids, previously classified as S, show reflectance spectra clearly different from those of typical S-class asteroids. Both present a strong 2~${\rm \mu}$m absorption feature and a nearly absent absorption feature around 1~${\rm \mu}$m, while a typical S-class spectrum is characterized by a 1~${\rm \mu}$m band stronger than the 2~${\rm \mu}$m one.
These authors interpreted such behaviour as being due to the presence on the surface of unusually high amounts of the spinel mineral. Spinels ($\left[{\rm Fe,Mg}\right]{\rm Al_2O_4}$) are important components of the conglomerates of different minerals called Calcium-Aluminium-rich Inclusions (CAIs) found in meteorites. Even a small fraction, typically from $10\%$ to $30\%$, of FeO-enriched aluminous spinel (${\rm Mg Al_2 O_4}$) in CAIs can produce a strong absorption feature around 2~${\rm \mu}$m, similar to what we see in Barbarian spectra. The CO chondrite meteorites exhibit the highest known abundance of CAIs, but never exceeding $13\%$ in volume. CV3 chondrites possess the greatest diversity of CAIs, but abundances are lower than $10\%$. \citet{Bur_1992} had originally suggested an abundance between $5$ and $10\%$ of CAIs on the surfaces of (387)~Aquitania and (980)~Anacostia, considering an immature regolith. More recently, considering mature regolith, an analysis of the spectra of asteroids (234), (387) and (980) compared with laboratory spectra led \citet{b15,Sun_2008a} to conclude that a fraction of spinel-bearing CAIs of the order of $\sim30\%$ in volume is needed to fit the observed near-infrared spectra of these asteroids. No known example of such high CAI abundances can be found in current meteorite collections. A high concentration of a particular type of CAI, called fluffy-type A, suggests a formation in an environment characterized by a high concentration of CAIs, and an absence of strong thermal alteration after formation. FeO-enriched aluminous spinel has a relatively high and variable real part of the refractive index in the visible part of the spectrum \citep{Hos_2008}. It is found to be $n=1.83$ in blue light, and decreases at longer wavelengths, down to $n=1.78$ in the red and near-infrared (NIR), where it is almost constant.
\citet{Sun_2008b} suggested that a high refractive index might possibly be responsible for an uncommonly large inversion angle of polarization. In the case that a high abundance of spinel-bearing CAIs can be proven to be the correct explanation of the wide negative polarization branch observed for Barbarians, we would have good reasons to believe that these asteroids accreted in a nebula rich in refractory materials, and contain the most ancient mineral assemblage currently found in the inner solar system. Based on the facts mentioned above, we started an observational campaign of L-class asteroids (both SMASS and DM) and known Barbarian asteroids. We present in Section \ref{sec:Obs} our new polarimetric observations done in the framework of the Calern Asteroid Polarimetric Survey (CAPS) \citep{Dev_2017a} and at the Rozhen observatory, as well as new spectroscopic observations carried out at the NASA InfraRed Telescope Facility (IRTF) using the near-infrared spectrograph SpeX \citep{Ray_2003}. Section \ref{sec:Data_Anal} is devoted to the description of the models used to analyse the phase-polarization curves and the spectra of L-type asteroids. A Hapke model involving the space weathering process is also described. In Section \ref{sec:Res_Disc}, the results relative to the spectral classification, the phase-polarization curves, the geometric albedo, the spectral fitting, asteroid families, and the identification of new Barbarians are presented. Section \ref{Sec:Inter_Disc} is devoted to the interpretation and discussion of the relation between the (DM) L-type and the Barbarians, the composition of L-class asteroids, and possible interpretations of the high polarimetric inversion angle of the Barbarians. Finally, Section \ref{sec:Conc_Persp} presents our conclusions and perspectives for future work. \section{Observations} \label{sec:Obs} In this work, 36 targets were observed in polarimetry, spectroscopy or both.
They were selected on the basis of satisfying one or more of the following criteria: \begin{itemize} \item Being a known Barbarian; \item Belonging to (SMASS) L- or Ld-class; \item Belonging to (DM) L-class; \item Being a member of one of the following dynamical families known or suspected to include Barbarians and/or L-class (SMASS or DM) members: Watsonia, Henan \citep{Nes_2015}, and Tirela \citep{Mot_2008}, renamed Klumpkea by \citet{Mil_2014}. \end{itemize} \subsection{Polarimetric observations} A high-priority goal of this work was to understand the reason for the abnormally large inversion angle of Barbarians. However, we also included some targets which were already known to be non-Barbarians. These targets belong to (SMASS) taxonomic classes which have been found in the past to include Barbarians. Their characterization allows us to better understand the relationship between spectroscopy and polarimetry. The polarimetric data were acquired in two distinct observatories. Most of them were taken at the C2PU (Centre p\'edagogique Plan\`ete et Univers) facility of the Calern station of the Observatoire de la C\^ote d'Azur (Nice, France). The ToPol (Torino Polarimeter) was used to carry out the observations, which were part of the CAPS program started in early 2015. The ToPol is mounted on the Cassegrain focus (F/12.5) of the $1.04$~m West telescope of the C2PU facility. It involves a Wedged-Double Wollaston prism, a configuration yielding the polarimetric reduced Stokes parameters $q$ and $u$ in a single exposure. All data were processed using classical aperture photometry, but using a curve-of-growth procedure \citep{Bag_2011, Bag_2015}. This procedure consists of measuring the fluxes of the four replicas of the target using gradually increasing aperture sizes. The optimal aperture size is then selected by visual inspection of the values of $q$ and $u$ as a function of the aperture size.
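The essence of the procedure can be sketched as follows. Assuming the four replicas of a wedged-double-Wollaston design correspond to polarization directions $0^{\circ}$, $90^{\circ}$, $45^{\circ}$ and $135^{\circ}$, $q$ and $u$ follow from flux ratios at each trial aperture; the Gaussian point-spread function and the noise-free fluxes below are assumptions made purely for illustration:

```python
import math

# Assumed synthetic source: Gaussian PSF (sigma in pixels), total flux F,
# true reduced Stokes parameters q_true, u_true; no noise or sky background.
F, sigma = 1.0e5, 2.0
q_true, u_true = 0.02, -0.01

def encircled(r):
    """Fraction of a Gaussian PSF's flux inside an aperture of radius r."""
    return 1.0 - math.exp(-r**2 / (2.0 * sigma**2))

for r in [1, 2, 4, 8, 12]:  # gradually increasing apertures (curve of growth)
    f0   = 0.25 * F * (1 + q_true) * encircled(r)  # replica at   0 deg
    f90  = 0.25 * F * (1 - q_true) * encircled(r)  # replica at  90 deg
    f45  = 0.25 * F * (1 + u_true) * encircled(r)  # replica at  45 deg
    f135 = 0.25 * F * (1 - u_true) * encircled(r)  # replica at 135 deg
    q = (f0 - f90) / (f0 + f90)
    u = (f45 - f135) / (f45 + f135)
    print(r, round(q, 5), round(u, 5))
```

With noise-free Gaussian replicas the flux ratios are aperture-independent; on real frames, noise and sky background make small apertures unstable, which is why the plateau of $q(r)$ and $u(r)$ is what is retained.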
A full description of the instrument and of the reduction techniques used for it is given in \citet{SPIE2012} and \citet{Dev_2017a}. Some fainter targets were observed using the 2-Channel Focal Reducer Rozhen (FoReRo2), used in polarimetric mode, mounted at the 2~m telescope of the Bulgarian National Astronomical Observatory (Rozhen, Bulgaria). See \citet{Joc_2000} for a full description of the instrument. A single Wollaston prism is used to measure on each CCD acquisition either the $q$ or the $u$ reduced Stokes parameter. A retarder half-wave plate, not described in the original paper, was recently added to more easily rotate the observed polarization angles. Table \ref{Pola_meas} lists all the polarimetric observations presented in this paper. All the observations were done between December 20, 2014 and January 14, 2017. All measurements were done in the standard $V$ Johnson-Cousins band. \subsection{Near-infrared spectroscopic observations} \label{sec:NIR_Spec} The new spectroscopic data presented in this work were obtained during two nights (September 21, 2014 and January 22, 2015). The asteroids were observed from 0.8 to 2.5~${\rm\mu}$m, using the SpeX instrument \citep{Ray_2003} in the low-resolution (R $\sim$ 200) PRISM mode mounted on the 3-meter NASA InfraRed Telescope Facility (IRTF) telescope on Mauna Kea. All the targets were observed near the meridian, and solar analogue stars were observed near the target just after or before the target to calibrate out telluric absorptions and to correct for differences from the solar spectrum. A nodding procedure was used for each set of exposures. This procedure consists of acquiring a pair of spectra at two distinct locations on the CCD field (referred to as the A and B positions). A $0.8 \times 15$ arcsecond slit aligned north-south was used for all the observations. Flat-field images were obtained by illuminating an integrating sphere.
Spectra of an argon lamp were also taken immediately before or after the observation of the targets for wavelength calibration. The extraction and first reduction of the spectra were carried out using the IRTF pipeline SpexTool \citep{Cus_2004}. This pipeline performs sky subtraction using the A-B pair, corrects for the flat field, calibrates the wavelength using spectra taken at the beginning and end of each target observation, and finally extracts the reduced spectra. The removal of telluric absorptions was performed using the ATmospheric TRAnsmission (ATRAN) model \citep{Lord_1992} on each individual spectrum. This correction constitutes a very important step, since water vapour has strong absorption bands around 1.4 and 2~${\rm\mu}$m. The final spectrum of an asteroid is constructed by averaging all the individual observed spectra. A sigma-clipping procedure is used to reject outliers that may occur due to cosmic-ray contamination. The newly obtained NIR spectra were merged with SMASS visible spectra whenever available (see Sec.~\ref{sssec:V-IR Merging} for details on the merging procedure). Table \ref{Sum} summarizes all the spectroscopic observations presented in this work. For each target, the magnitude, number of AB pairs, total exposure time and airmass at mid-observation are listed. The solar analogue star used for the reduction of each individual target is also listed, together with its airmass at mid-observation. All the reduced spectra, merged with the visible part when available, are displayed in Appendix~\ref{app:spec}.
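The sigma-clipped averaging step can be sketched as below; the clipping threshold, number of iterations, and number of spectra are illustrative choices, not the settings of the actual reduction:

```python
import numpy as np

def sigma_clipped_mean(spectra, nsigma=3.0, iters=3):
    """Average individual spectra wavelength by wavelength, iteratively
    rejecting points deviating by more than nsigma standard deviations
    (e.g. cosmic-ray hits)."""
    data = np.ma.masked_invalid(np.asarray(spectra, dtype=float))
    for _ in range(iters):
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        data = np.ma.masked_where(np.abs(data - mean) > nsigma * std, data)
    return data.mean(axis=0).filled(np.nan)

# Example: twelve identical flat spectra, one contaminated by a spike.
# (With only a handful of spectra a single extreme spike cannot exceed
# 3 sigma, so a lower threshold or a robust estimator would be needed.)
spectra = np.ones((12, 100))
spectra[2, 50] = 50.0          # simulated cosmic-ray hit
combined = sigma_clipped_mean(spectra)
```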
\begin{table*} \centering \begin{tabular}{| lccccc | cc |} \hline \multicolumn{6}{|c|}{Asteroids} & \multicolumn{2}{c|}{Solar Analogues} \\ Name & $m_v$ & \#AB pairs & $t_{\rm exp}$ [s] & Run & Airmass & Name & Airmass \\ \hline (12)~Victoria & 11.9 & 5 & 200 & 2 & 1.30 & SAO42382 & 1.86 \\ (122)~Gerda & 13.9 & 5 & 900 & 2 & 1.31 & BS4486 & 1.18 \\ (172)~Baucis & 12.3 & 6 & 1080 & 1 & 1.48 & SA 115-271 & 1.15 \\ (458)~Hercynia & 13.0 & 8 & 960 & 1 & 1.22 & SA 93-101 & 1.07\\ (611)~Valeria & 11.4 & 5 & 600 & 1 & 1.12 & SA 93-101 & 1.42 \\ (753)~Tiflis & 14.8 & 5 & 1000 & 2 & 1.05 & Hyades 64 & 1.07 \\ (1372)~Haremari & 14.6 & 7 & 1400 & 2 & 1.08 & SA 98-978 & 1.10\\ (2354)~Lavrov & 15.8 & 6 & 1440 & 1 & 1.30 & SA 115-271 & 1.15\\ (4917)~Yurilvovia & 16.2 & 8 & 1880 & 1 & 1.04 & SA 115-271 & 1.06\\ (8250)~Cornell & 17.7 & 10 & 2400 & 2 & 1.14 & Hyades 64 & 1.07 \\ (15552)~Sandashounkan & 17.1 & 10 & 2400 & 1 & 1.05 & SA 93-101 & 1.07\\ (19369)~1997 YO & 15.9 & 10 & 2400 & 2 & 1.06 & SA 102-1081 & 1.07\\ (26219)~1997 WO21 & 16.9 & 14 & 3360 & 2 & 1.26 & SA 102-1081 & 1.07\\ (67255)~2000 ET109 & 16.5 & 10 & 2400 & 1 & 1.05 & SA 115-271 & 1.06\\ \hline \end{tabular} \caption{Observing conditions at the IRTF telescope. The first column corresponds to the number and the name of the observed asteroid, the second column gives the V magnitude of the target at the time of observation, \#AB pairs stands for the number of AB pairs taken, $t_{\rm exp}$ is the total exposure time, the first run corresponds to the night of September 21, 2014 and the second run to the night of January 22, 2015, the Airmass columns give the airmass at mid-observation. The ``Solar analogues'' frame gives the solar analogue star used for calibration, and its airmass at mid-observation.} \label{Sum} \end{table*} \section{Data analysis} \label{sec:Data_Anal} In this section, we present the data analysis tools that were used to interpret the polarimetric and the spectroscopic data. 
\subsection{Phase-polarization curve} \label{ssec:Phase_Pol} As already discussed in Section \ref{sec:Intro}, the phase-polarization curves exhibited by asteroids share a general morphology characterized by the presence of a so-called negative polarization branch, in which $P_{\rm r}$ has negative values. Negative polarization reaches an extreme value (conventionally called $P_{\rm min}$ in the literature) at a phase angle $\alpha(P_{\rm min})$. For increasing phase angles, $P_{\rm r}$ decreases in absolute value up to the inversion angle $\alpha_{0}$, around $20^{\circ}$ for regular (non-Barbarian) asteroids, where it becomes null again. Beyond the inversion angle, $P_{\rm r}$ takes positive values and generally shows a linear increase up to the largest phase angles attainable on asteroid orbits (see Fig. \ref{fig:PP_Curve}). A frequently used model of the phase-polarization curve is the so-called ``Exponential-Linear'' model \citep{Mui_2009}: \begin{equation} \label{eq:Pr2} P_{\rm r}(\alpha) = A \cdot [e^{(-\alpha/B)} -1] + C \cdot \alpha \end{equation} where $\alpha$ is the phase angle, and $A$, $B$, and $C$ are parameters to be derived by least-squares techniques. In two recent papers \citep{Cel_2015a, paper2} the computation of the best-fit parameters for a large number of asteroids was carried out using a genetic algorithm. Having determined the values of $A$, $B$, and $C$ for any given object, the inversion angle was derived by a simple numerical method, while $P_{\rm min}$ and $\alpha(P_{\rm min})$ were derived through a computation of the first derivative of Eq.~\ref{eq:Pr2} using the values of the parameters determined by means of the genetic algorithm. In this paper, the strategy described by \citet{Cel_2015a, paper2} is applied to a larger dataset which includes recent observations obtained at Calern and Rozhen.
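Fitting Eq.~\ref{eq:Pr2} by ordinary least squares (as an alternative to the genetic algorithm) can be sketched as follows; the parameter values below are invented for illustration and are not fitted to real data:

```python
import numpy as np
from scipy.optimize import curve_fit

def exp_linear(alpha, A, B, C):
    """'Exponential-linear' phase-polarization model of Eq. (Pr2)."""
    return A * (np.exp(-alpha / B) - 1.0) + C * alpha

# Synthetic phase-polarization data generated from known parameters.
A_true, B_true, C_true = 1.4, 9.0, 0.1   # illustrative values only
alpha = np.linspace(1.0, 30.0, 30)       # phase angles in degrees
P_r = exp_linear(alpha, A_true, B_true, C_true)

# Least-squares best fit from a rough initial guess.
popt, pcov = curve_fit(exp_linear, alpha, P_r, p0=[1.0, 5.0, 0.05])
A, B, C = popt
```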
In addition, to increase the robustness of our results, we re-derived the best-fit values of the unknown parameters in Eq.~\ref{eq:Pr2} using another, independent approach. In particular, the inversion angle of the phase-polarization curve was analytically determined by evaluating \begin{equation} \label{eq:Alpha_0} \alpha_{0} =\frac{ BC {W}\left(-\frac{A\exp{\frac{-A}{BC}}}{BC}\right) +A}{C} \end{equation} for values of phase angle different from 0, where ${W}(x)$ is the principal branch of the Lambert function. The $1\sigma$ uncertainty of the inversion angle computed in this way was estimated using a Markov chain Monte Carlo (MCMC) fitting procedure, by fitting a normal distribution to the histogram of the derived $\alpha_{0}$ values. The same procedure was used for the other parameters, such as the degree of minimum polarization $P_{\rm min}$ and the phase angle at which it occurs. The results of the computations of polarimetric parameters based on the two independent approaches described above were found to be in excellent mutual agreement, the differences being within the formal error bars computed by the two algorithms. \subsection{Spectral fitting} \label{ssec:Fit_Spectra} \subsubsection{Visible-NIR spectral merging} \label{sssec:V-IR Merging} In our analysis of the spectral reflectance properties, the first step was to merge together available spectra covering the visible and NIR regions. Note that we did not limit our analysis to the objects observed by us at IRTF, but also analysed L-type spectra taken from the literature. The major sources of visible spectra are the SMASS \citep{Bus_1999} and SMASS II \citep{Bus_Bin} databases, while for the NIR, most available spectra were taken at IRTF in the framework of the MIT-UH-IRTF Joint Campaign for NEO Spectral Reconnaissance, complemented by the new data presented in this paper.
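Eq.~\ref{eq:Alpha_0} can be evaluated directly with SciPy's Lambert $W$ implementation. A minimal sketch with illustrative (not fitted) parameter values; the check verifies that $P_{\rm r}$ indeed vanishes at $\alpha_{0}$:

```python
import numpy as np
from scipy.special import lambertw

def inversion_angle(A, B, C):
    """Analytical inversion angle of the exponential-linear model:
    the non-trivial root of P_r(alpha) = 0, via the principal branch
    of the Lambert W function (valid when A > B*C, i.e. when a
    negative polarization branch exists)."""
    arg = -(A / (B * C)) * np.exp(-A / (B * C))
    return float((B * C * lambertw(arg, 0).real + A) / C)

# Illustrative best-fit parameters (degrees and %/degree):
A, B, C = 1.4, 9.0, 0.1
alpha0 = inversion_angle(A, B, C)
# Consistency check: P_r must vanish at the inversion angle.
P_r_at_alpha0 = A * (np.exp(-alpha0 / B) - 1.0) + C * alpha0
```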
Merging visible and NIR spectra turned out to be a non-straightforward task. In particular, we almost systematically found noticeable differences in spectral slope between the red end of available visible spectra and the blue end of NIR data. This is not an unusual problem in asteroid spectroscopy. Moreover, the same object can sometimes exhibit bands that are visible in one spectrum, but not in others taken at different epochs. As an example of a difficult case, in the left part of Fig.~\ref{fig:Barb_reco} we show the asteroid (234)~Barbara, which was observed both in SMASS and SMASS II as well as, previously, in the Eight-Color Asteroid Survey (ECAS, \citealt{8C}). We can see that the red ends of the visible spectra are quite different. We can also notice that none of these spectra merge well with NIR data. However, the older ECAS data, which cover a wider range of NIR wavelengths, link very well to IRTF data from the MIT-UH-IRTF Joint Campaign and are compatible with SMASS I data in the blue part of the spectrum. In the case of asteroid (236)~Honoria (see the right part of Fig.~\ref{fig:Barb_reco}), SMASS II and ECAS data are available, as well as two spectra in the near-infrared from SMASS IR \citep{Bur_2002} and IRTF (MIT-UH-IRTF Joint Campaign). One can see that in this case the two NIR spectra show good mutual agreement and are also compatible with the available ECAS data. \begin{figure*} \includegraphics[width=18cm]{Reconnection.eps} \caption{Example of visible--near-infrared merging issues for (234)~Barbara (left part) and (236)~Honoria (right part). SMASS I data are represented by red squares, SMASS II by blue circles, SMASS IR by green triangles, IRTF by purple diamonds, and ECAS data by black dots.} \label{fig:Barb_reco} \end{figure*} In general, we find that the NIR spectra tend to show less variability than the visible spectra.
In our procedure, we always rescale the NIR spectrum so that its blue end merges with the red end of the visible spectrum at a common wavelength (0.82~${\rm \mu}$m), simply removing visible data covering longer wavelengths. This is justified by the fact that the red end of visible spectra corresponds to a drop in sensitivity of the CCD detectors. However, different merging methods can lead to spectra showing different behaviour near the merging point. As a general rule, therefore, we assess the quality of the fit by looking at the morphology of the absorption features in the NIR region beyond the merging wavelength. In particular, in some cases an apparent absorption feature around 0.7-0.8~${\rm \mu}$m cannot be modelled by our procedure and is not taken into account. \subsubsection{Fitting techniques} The general approach we adopted was to model the obtained spectra as a combination of a small number of candidate mineral components, using a simplified Hapke spectral mixing model \citep{Hap_1981,Hap_1984, Hap_1986}. The idea is to linearly combine the spectra of several candidate end-members to find satisfactory best-fits of the observed asteroid spectrum. A technical problem with the adopted approach is that, working in terms of spectral reflectance ($r$), the spectra of intimately mixed materials do not combine linearly \citep{Nas_1974}. On the other hand, working in terms of single scattering albedo ($w$), the combination of different spectra is linear even when the materials are intimately mixed. The fitting procedure was then carried out in the following steps: \begin{enumerate} \item Convert the spectral reflectance of the end-member spectra into single scattering albedo. \item Linearly combine the single scattering albedos of the different end-members. \item Convert the single scattering albedo back into spectral reflectance. \item Compare the combined reflectance spectrum with the spectrum of the asteroid to be fitted.
\item Repeat the above steps using an optimization procedure until an acceptable fit is obtained. \end{enumerate} As a result of the fitting procedure, the relative abundance of each end-member in the obtained mixture is simply the coefficient used to linearly combine the single scattering albedos of the considered end-members. The spectral reflectance of asteroid spectra is usually normalized to the value measured at the wavelength of 0.55~${\rm \mu}$m. The absolute reflectance could therefore be computed, in principle, by equating the reflectance at 0.55~${\rm \mu}$m to the geometric albedo ($p_{\rm V}$) of the object, when this is known. According to its definition, $p_{\rm V}$ is equal to the ratio between the object brightness measured at zero phase angle and that of an ideal, flat and perfectly Lambertian disk having the same projected surface as the object, where the brightness is measured at the wavelength of 0.55~${\rm \mu}$m, and both the asteroid and the Lambertian disk are assumed to be located at unit distance from the Sun and from the observer. However, asteroid albedos are usually known with noticeable uncertainty. Most asteroid albedos come from the WISE \citep{Mas_2011}, NEOWISE \citep{Mai_2016}, IRAS \citep{Ted_2002,Rya_2010}, and AKARI \citep{Usui_2012} surveys. Comparing the values derived by these surveys reveals differences as high as $20\%$, and even more in many cases. Given the difficulty of assigning a well-determined value of the geometric albedo to our objects, we allow it to vary in our optimization procedure. Step (1) of the fitting procedure described above makes use of the equation linking the bidirectional reflectance $r_c$ and the single scattering albedo $w$ given by \citet{Hap_1981}.
This equation can be written as: \begin{equation} \label{eq:hapke} r_c = \frac{w}{4}\frac{1}{\mu_0+\mu}\left[\left(1+B(g)\right)P(g)+H(\gamma,\mu_0)H(\gamma,\mu)-1\right], \end{equation} where $\mu_0$ and $\mu$ are respectively the cosines of the incidence and emergence angles, $g$ is the phase angle (\textit{i.e.}, the angle between the incident and reflected beams), and \begin{equation} \gamma = \sqrt{1-w}. \label{eq:gamma} \end{equation} Eq.~\ref{eq:hapke} involves the three functions $B(g)$, $P(g)$ and $H(\gamma,x)$, which deserve some explanation. $B(g)$ is the backscatter function, which describes the increase in brightness of a rough surface with decreasing phase angle. This effect is known as the opposition effect. According to \citet{Mus_1989}, this function can be set to zero for phase angles greater than $15^{\circ}$. Since all the laboratory spectra used in this work were taken at a phase angle around $30^{\circ}$, $B(g)$ can be safely set to zero. $P(g)$ is the single-particle phase function. \citet{Mus_1989} found that if one assumes isotropic scattering (\textit{i.e.}, setting $P(g)$ equal to one for all $g$), the resulting errors are of the order of a few percent. Since we do not know this function for all the end-members, we set $P(g)$ to one. $H(\gamma,x)$ is the so-called Chandrasekhar isotropic $H$ function, which can be approximated by the analytical expression: \begin{equation} H(\gamma,x) = \frac{1+2x}{1+2\gamma x} \label{eq:H} \end{equation} where $x$ represents $\mu$ or $\mu_0$. Laboratory spectra are normalized to a reference spectrum, taken at the same incidence and emergence angles, for which the single scattering albedo $w$ can be assumed to be $1$ (implying that all particle extinction is due to scattering).
Eq.~\ref{eq:hapke} can then be simplified and becomes \citep{Hap_1993,Hap_2001}: \begin{equation} \label{eq:hapke_rc} \Gamma(\gamma) = \frac{r_c(\rm sample)}{r_c(\rm standard)} = \frac{1-\gamma^2}{(1+2\gamma\mu_0)(1+2\gamma\mu)}. \end{equation} Eq.~\ref{eq:hapke_rc} can be solved for $\gamma$: \begin{equation} \label{eq:hapke_gamma} \gamma = \frac{ \sqrt{ \Gamma^2 \left( \mu - \mu_0 \right)^2 + \Gamma \left( 4 \mu \mu_0 - 1 \right) + 1 } - \Gamma \left( \mu + \mu_0 \right) }{4 \mu \mu_0 \Gamma + 1}. \end{equation} From $\gamma$, the single scattering albedo of each end-member is then immediately derived (Eq.~\ref{eq:gamma}). Once the single scattering albedos of the end-members have been combined linearly, the bidirectional reflectance of the mixture is computed back using the same equations to obtain the composite spectrum. The set of end-member abundances that best fits the asteroid spectrum is then derived using a Levenberg-Marquardt optimization technique. \subsubsection{Space weathering} \label{sssec:Space_Weath} Space weathering is a general process resulting from the chemical and physical mechanisms that affect the surface of an airless body exposed to the space environment. It is the result of the exposure of an asteroid regolith to micrometeoritic impacts and irradiation (solar wind and cosmic rays). It has been observed that space weathering affects different bodies in different ways. In general, the effects of space weathering on bright, silicate-rich asteroids tend to increase their spectral slope (reddening of the spectra), reduce the optical geometric albedo (darkening), and decrease the depth of the absorption bands \citep{Cha_1996,Hap_2001,Bru_2013,Bru_2014}. However, recent studies have shown that the effect of space weathering on dark asteroid surfaces is nearly the opposite: the reflectance spectrum tends to become bluer and brighter than that of fresh material \citep{Lan_2017}.
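Eq.~\ref{eq:hapke_gamma} is simply the positive root of the quadratic in $\gamma$ obtained by rearranging Eq.~\ref{eq:hapke_rc}. A minimal round-trip sketch (the angles and albedo are illustrative):

```python
import numpy as np

def ratio_from_gamma(gamma, mu0, mu):
    """Forward model (Eq. hapke_rc):
    Gamma = (1 - g^2) / ((1 + 2*g*mu0) * (1 + 2*g*mu))."""
    return (1.0 - gamma**2) / ((1.0 + 2.0 * gamma * mu0)
                               * (1.0 + 2.0 * gamma * mu))

def gamma_from_ratio(Gamma, mu0, mu):
    """Invert Eq. (hapke_rc) for gamma: positive root of
    (4*Gamma*mu*mu0 + 1)*g^2 + 2*Gamma*(mu + mu0)*g + (Gamma - 1) = 0."""
    a = 4.0 * Gamma * mu * mu0 + 1.0
    b = 2.0 * Gamma * (mu + mu0)
    c = Gamma - 1.0
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)

# Round trip: reflectance ratio -> gamma -> single scattering albedo.
gamma_true = 0.5
mu0, mu = np.cos(np.radians(30.0)), np.cos(np.radians(0.0))
Gamma = ratio_from_gamma(gamma_true, mu0, mu)
gamma = gamma_from_ratio(Gamma, mu0, mu)
w = 1.0 - gamma**2          # Eq. (gamma)
```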
It is believed that these effects are due to the progressive implantation of nanophase metallic iron particles (${\rm npFe^0}$) into regolith grains, as the effect of micrometeoritic impacts and solar wind sputtering. \citet{Hap_2001} proposed a model which takes into account the optical constants of iron within the host material to modify a spectrum in the framework of his reflectance and scattering theory. This model is based on the idea of computing the absorption coefficients of the host material ($\alpha_{\rm h}$, from the laboratory spectra) and of the ${\rm npFe^0}$ particles ($\alpha_{\rm Fe}$). The total absorption coefficient ($\alpha$) of the space-weathered material is considered to be simply given by the sum of the two: \begin{equation} \alpha = \alpha_{\rm h} + \alpha_{\rm Fe}. \end{equation} In this work, we made use of this model to modify the spectra of each of our chosen end-members, to make them more similar to what we expect for space-weathered material on the surfaces of celestial objects. The first step is the computation of the single scattering albedo of each end-member using the procedure already explained above. Then, $\alpha_{\rm h}$ is determined as: \begin{equation} \alpha_{\rm h} = \frac{1}{D}\ln{ \left[ S_i + \frac{\left( 1- S_e \right) \left( 1 - S_i \right) }{w-S_e}\right] } \end{equation} where $D$ is the effective size of the particles in the medium, $n$ is the refractive index of the end-member, and $S_e$ is the Fresnel reflection coefficient of the particle surface averaged over all angles of incidence for light incident from outside the particle, while $S_i$ is the same, but for light incident from inside the particle. They are given by: \begin{equation} S_e = \frac{\left( n-1 \right)^2}{ \left( n+1 \right)^2} +0.05 \end{equation} and \begin{equation} S_i = 1.014 - \frac{4}{n\left( n+1 \right)^2}. \label{eq:S_i} \end{equation} The above expressions are useful approximations of the exact integral expressions given by \citet{Hap_1993}.
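The conversion between single scattering albedo and host absorption coefficient implied by these expressions can be sketched as follows, together with a self-consistency check against the inverse relation used below for the weathered spectrum (the values of $n$, $D$, and $w$ are illustrative; no ${\rm npFe^0}$ is added, so $\alpha = \alpha_{\rm h}$):

```python
import numpy as np

def fresnel_coeffs(n):
    """Approximate average Fresnel coefficients (Hapke 1993; Lucey 1998)."""
    S_e = (n - 1.0)**2 / (n + 1.0)**2 + 0.05
    S_i = 1.014 - 4.0 / (n * (n + 1.0)**2)
    return S_e, S_i

def alpha_host(w, n, D):
    """Absorption coefficient of the host material from its single
    scattering albedo w and effective grain size D."""
    S_e, S_i = fresnel_coeffs(n)
    return np.log(S_i + (1.0 - S_e) * (1.0 - S_i) / (w - S_e)) / D

def albedo_from_alpha(alpha, n, D):
    """Inverse relation: w from the (possibly augmented) absorption
    coefficient, via Theta = exp(-alpha * D)."""
    S_e, S_i = fresnel_coeffs(n)
    Theta = np.exp(-alpha * D)
    return S_e + (1.0 - S_e) * (1.0 - S_i) * Theta / (1.0 - S_i * Theta)

# Self-consistency round trip: w -> alpha_h -> w.
n, D, w0 = 1.7, 25e-6, 0.6     # n as for the Allende matrix, D = 25 micron
alpha = alpha_host(w0, n, D)
w1 = albedo_from_alpha(alpha, n, D)
```

Adding $\alpha_{\rm Fe}$ before inverting, instead of reusing $\alpha_{\rm h}$ alone, yields the weathered albedo and hence the weathered spectrum.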
In the case of $S_i$, \citet{Hap_1993} used 1 instead of 1.014. However, \citet{Lucey_1998} found that, in the range of refractive indices $n = 1.5$ to 2, Eq.~\ref{eq:S_i} is a better approximation. As for the contribution of ${\rm npFe^0}$, we have the relation: \begin{equation} \alpha_{\rm Fe} = \frac{36\pi}{\lambda}\phi z \end{equation} where \begin{equation} z = \frac{n_{\rm h}^3 n_{\rm Fe} k_{\rm Fe}}{\left( n_{\rm Fe}^2 - k_{\rm Fe}^2 + 2n_{\rm h}^2 \right)^2 + \left( 2 n_{\rm Fe} k_{\rm Fe} \right)^2} \end{equation} and $\phi$ is the volume fraction of the ${\rm npFe^0}$ particles embedded in the host material. $n$ and $k$ are the refractive index and absorption coefficient, respectively, of the host material (h) (end-member) and of iron (Fe). If the iron particles are uniformly distributed, $\phi = \rho_{\rm h}f/\rho_{\rm Fe}$, where $\rho_{\rm Fe}$ and $\rho_{\rm h}$ are the solid densities of iron and of the host material, respectively. Finally, $f$ is the bulk mass fraction of iron. The last step is to compute the new bidirectional reflectance of the space-weathered material using the following relations: \begin{equation} \Theta = \exp{\left(-\alpha D\right)} \end{equation} and \begin{equation} w = S_e + \left( 1-S_e \right) \frac{1-S_i}{1-S_i\Theta}\Theta \end{equation} from which, by using Eqs. \ref{eq:hapke_rc} and \ref{eq:gamma}, one can compute the final space-weathered spectrum. An example is shown, in the case of the fluffy type A CAI, in Fig.~\ref{fig:SW_Ex}. \begin{figure} \includegraphics[width=8.8cm]{FTA_SW.eps} \caption{Example of the computation of the space weathering effect (see text) on the reflectance spectrum of the fluffy-type A CAI considered in our analysis. } \label{fig:SW_Ex} \end{figure} \subsubsection{End-members} \label{sssec:End_Members} We describe here the end-members used in our analysis. All end-member spectra were obtained from the NASA REflectance experiment LABoratory (RELAB) spectral database.
In our case, the classical mixture of olivine and pyroxene cannot be assumed. We therefore have to assume the presence of spinel-bearing CAIs \citep{Bur_1992}, like those found in CV3 meteorites, to model the 2~${\rm \mu}$m band. Convincing arguments supporting this choice can be found in \citet{Bur_1992} and \citet{Sun_2008a}. A complete review of the mineralogy of CV meteorites can be found in \citet{Clo_2012}. We took into consideration different types of CAIs, which are known to produce large absorptions at wavelengths around 2~${\rm \mu}$m. In addition, we assumed the presence of the olivine mineral, one of the most important silicates found in many meteorites, as well as two examples of the meteorites in which CAIs are found as inclusions. In particular, we chose the matrices of the well-known Allende and the Y-86751 meteorites. Note that we are aware that other possible materials could be taken into consideration. Moreover, we did some preliminary tests in which we also included pyroxene among the end-members, but these tests did not give satisfactory results. In what follows, we give a brief description of our selected end-members, while Fig.~\ref{fig:end-members} shows the spectrum of each of them in terms of absolute reflectance. \begin{figure} \includegraphics[width=8.8cm]{EndMembers.eps} \caption{Spectra of all the end-members used to model the spectra of the asteroids studied in our analysis.} \label{fig:end-members} \end{figure} {\it Calcium Aluminium rich Inclusions} - Three distinct types of CAIs (A, B, and C), classified on the basis of petrography and geochemistry, are known to exist. Types B and C show evidence of melting by transient heating events before accretion. By contrast, fluffy type A CAIs (FTAs) do not show any evidence of melting. FTAs are found in all chondritic meteorites, whereas types B and C are found only in CV3 meteorites.
FTAs and B-type CAIs are both dominated by an absorption feature around 2~${\rm \mu}$m, but this absorption is much stronger in the case of FTAs. This stronger absorption feature is due to a much higher concentration of FeO in the aluminous spinel present in these CAIs. In this study we have considered the same spectra of CAIs, three FTAs and three B-type CAIs, already analyzed by \citet{Sun_2008a} (RELAB: TM-TJM-001 to 005, and 007). The three FTA samples contain from $3$ to $14\%$ of FeO by mass, while type B CAIs contain only a fraction of a percent. The strength of the 2~${\rm \mu}$m absorption band is directly correlated with the abundance of FeO \citep{Sun_2008a}. Considering that FeO is excluded from CAI minerals during condensation, and that the CAI types showing the highest percentage of FeO also show an abundance of alteration phases, the FeO present in FTAs and B-type CAIs should come from post-accretion enrichment. In our procedure, we always used as end-member the FTA sample showing the highest percentage of FeO. Type B CAIs were always found to result in poorer fits than FTAs, as already noticed by \citet{Sun_2008a}. \textit{MgO-rich olivine} - Olivine is an important component of many meteorites, and is most abundant in chondrites. Differences in abundance and composition of olivines are an important criterion for the classification of meteorites \citep{Mas_1963}. Olivine is a magnesium-iron silicate. The most general formula is $\left( {\rm Mg}, {\rm Fe} \right)_2 {\rm SiO}_4$. The end-members are called forsterite (Fo) (${\rm Mg}_2 {\rm SiO}_4$) and fayalite (Fa) (${\rm Fe}_2{\rm SiO}_4$). Olivine is usually described by the relative fractions of Fo and Fa (${\rm Fo}_x$ and ${\rm Fa}_x$). Forsterite is olivine with ${\rm Fo}_x$ between $100\%$ and $90\%$, whereas fayalite is olivine with ${\rm Fo}_x$ between $0\%$ and $10\%$.
The spectrum of olivine shows a broad 1~${\rm\mu}$m absorption band which slightly depends upon the ${\rm Fo}_x$ content. The real part of the refractive index of olivine is highly dependent on the iron content. \citet{Lucey_1998} determined that the refractive index of olivine satisfies the relation $n = 1.827 - 0.192\, {\rm Fo}_x$ (with ${\rm Fo}_x$ expressed as a fraction). This value of $n$ appears to be constant over the whole interval of visible wavelengths. Olivine is already present in the matrix of the meteoritic components that we are considering as end-members (see next paragraph). However, the composition of the olivine varies with the degree of alteration. Unaltered olivine should be Mg-rich (forsterite), and the fraction of Fe then increases as a function of the alteration. The olivine present in the considered meteoritic components possesses a moderate to large amount of Fe. On the other hand, some of our asteroids show less alteration than Allende or Y-86751 and should contain more MgO-rich olivine. As a consequence, we consider forsterite olivine as an end-member (RELAB: PO-CMP-076), which has a refractive index $n = 1.635$. The presence of forsterite in our model can then be seen as an indicator of post-accretion alteration. \textit{CV3 meteorites} - CAIs are among the inclusions found in the matrix of material that constitutes the bulk composition of CV meteorites. The matrix must therefore be taken into account as one of the important constituents of any possible asteroidal composition. In this work, we used two different CV3 meteorites. First, we considered the matrix of the Allende meteorite from which CAIs were removed \citep{Sun_2008a} (RELAB: MT-TJM-071). We also used spectral measurements of the Y-86751 meteorite (RELAB: MP-TXH-009). The composition of the matrix of the Allende meteorite, the largest carbonaceous chondrite found on Earth and one of the most studied examples of primitive meteorites, was measured by \citet{Bla_2004}.
They found that it is composed of more than $80\%$ olivine. The Allende matrix is also found to be pyroxene-poor, with only $5.9\%$ of enstatite. The refractive index of the Allende matrix (after removing CAI inclusions) is unknown. \citet{Zub_2015} estimated a value between 1.68 and 1.83 for the Allende meteorite using the polarimetric inversion angle as a proxy. They found a value of 1.7 by fitting the light-scattering response of Allende meteorite particles. In this work we used $n = 1.7$ for the Allende matrix. The Y-86751 meteorite is of the same type as Allende and possesses the same bulk composition \citep{Pal_1993}. Characterization using an optical microscope to measure transmitted and reflected light shows a flow texture possibly due to aqueous alteration \citep{Gyo_2011}. It is known to contain CAIs in which the spinel is more FeO-rich than in Allende (18-$25\%$ \citep{Mur_1994} instead of 4 to $14\%$). Its matrix also contains fine-grained aluminous spinel \citep{Mur_1994}. \subsubsection{Optimization procedure} The MATLAB \textit{fmincon}\footnote{More information about this function can be found on the Mathworks website: https://nl.mathworks.com/help/optim/ug/fmincon.html} function was used for the optimization. This function allows the user to constrain the values of the optimized parameters. In our case, we constrained the end-member abundances so that their sum is always equal to 1. A standard reduced chi-square cost function was used. This function can be weighted in order to give priority to certain wavelengths, which is useful in the case of a doubtful visible-NIR merging. In our case, only the near-infrared region, from $0.82$ to 2.5~${\rm \mu}$m, was considered to constrain the fitting procedure. This prevents the featureless visible part of the spectrum from playing a significant role in case of a possibly wrong visible-NIR merging.
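An open-source equivalent of this constrained \textit{fmincon} setup can be sketched with SciPy's SLSQP solver; the two end-member albedo spectra below are synthetic stand-ins, and the 30/70 mixture is invented for the example:

```python
import numpy as np
from scipy.optimize import minimize

# Toy "single scattering albedo" spectra of two end-members and a
# target built from a known 30/70 mixture (all values illustrative).
wavelength = np.linspace(0.82, 2.5, 50)
w_a = 0.6 + 0.1 * np.sin(2 * np.pi * wavelength)
w_b = 0.4 + 0.05 * wavelength
x_true = np.array([0.3, 0.7])
w_target = x_true[0] * w_a + x_true[1] * w_b

def cost(x):
    """Chi-square between the linear mixture and the target spectrum."""
    model = x[0] * w_a + x[1] * w_b
    return np.sum((model - w_target)**2)

# Bounded abundances with the sum-to-one equality constraint.
res = minimize(cost, x0=[0.5, 0.5],
               bounds=[(0.0, 1.0), (0.0, 1.0)],
               constraints=[{"type": "eq",
                             "fun": lambda x: np.sum(x) - 1.0}],
               method="SLSQP")
abundances = res.x
```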
\section{Results} \label{sec:Res_Disc} In this section we present the results of our analysis of polarimetric and spectroscopic data using the methods described in the previous Section. Table \ref{tab:Param} lists the 43 objects analysed in this work taking advantage of the new spectroscopic and/or polarimetric data obtained in our observing campaigns. This table also summarizes some physical properties of the objects, and indicates family membership, if any. Bold entries correspond to values directly determined in this work. Albedo values are taken from the NEOWISE (Near-Earth Object Wide-field Infrared Survey Explorer) catalogue \citep{Mai_2016}, and can be affected by noticeable uncertainties, as suggested by \citet{Cel_2015a} and by the fact that in some cases more than one albedo estimate exists for the same object, with noticeable differences. This suggests that the nominal albedo uncertainties listed in Table \ref{tab:Param} may in many cases be fairly optimistic. \begin{table*} \centering \begin{tabular}{| l | lll | l | l | l | l | } \hline Asteroid & Tholen & SMASS & DM & Barbarian & D (NEOWISE) & $p_V$ (NEOWISE) & Family \\ & & & & & [km] & & \\ \hline (12)~Victoria & S & L & \textbf{D} & N & $115.1 \pm 1.2$ & $0.16 \pm 0.03$ & \\ (122)~Gerda & ST & L & \textbf{S} & \textbf{N} & $70.7 \pm 0.9$ & $0.25 \pm 0.04$ & \\ (172)~Baucis & S & L & \textbf{L} & Y & $63.5 \pm 3.1$ & $0.13 \pm 0.02$ & \\ (234)~Barbara & S & Ld & L & Y & $45.5 \pm 0.2$ & $0.20 \pm 0.03$ & \\ (236)~Honoria & S & L & L & Y & $77.7 \pm 1.2$ & $0.16 \pm 0.02$ & \\ (387)~Aquitania & S & L & L & Y & $97.3 \pm 3.4$ & $0.20 \pm 0.03$ & \\ (402)~Chloe & S & K & L & Y & $55.4 \pm 1.7$ & $0.16 \pm 0.03$ & \\ (458)~Hercynia & S & L & \textbf{L} & Y & $36.7 \pm 0.4$ & $0.43 \pm 0.07$ & \\ (460)~Scania & & K & L & & $19.7 \pm 0.1$ & $0.26 \pm 0.06$ & \\ (478)~Tergestre & S & L & & N & $80.7 \pm 1.0$ & $0.17 \pm 0.04$ & \\ (599)~Luisa & S & K & L & Y & $70.2 \pm 0.5$ & $0.12 \pm 0.03$ & \\ (606)~Brangane &
TSD & K & L & \textbf{Y} & $35.0 \pm 0.2$ & $0.10 \pm 0.01$ & 606 \\ (611)~Valeria & S & L & \textbf{L} & \textbf{Y} & $57.5 \pm 0.2$ & $0.12 \pm 0.01$ & \\ (642)~Clara & S & L & & & $38.2 \pm 5.3$ & $0.11 \pm 0.02$ & \\ (679)~Pax & I & K & L & Y & $63.0 \pm 0.4, 63.9 \pm 0.2 $ & $0.11 \pm 0.01, 0.10 \pm 0.02$ & \\ (729)~Watsonia & STGD & L & L & Y & $50.0 \pm 0.4$ & $0.13 \pm 0.01$ & 729 \\ (753)~Tiflis & S & L & \textbf{S} & \textbf{N} & $20.9 \pm 0.8$ & $0.33 \pm 0.05$ & \\ (824)~Anastasia & S & L & L & \textbf{Y} & $32.5 \pm 0.3$ & $0.11 \pm 0.03$ & \\ (908)~Buda & & L & D & \textbf{N} & $30.8 \pm 0.5$ & $0.09 \pm 0.01$ & \\ (980)~Anacostia & SU & L & L & Y & $74.7 \pm 0.6$ & $0.23 \pm 0.06$ & \\ (1040)~Klumpkea & & & & & $22.3 \pm 0.2$ & $0.24 \pm 0.04$ & 1400 \\ (1284)~Latvia & T & L & & \textbf{Y} & $41.1 \pm 0.5$ & $0.08 \pm 0.02$ & \\ (1372)~Haremari & & L & \textbf{L} & Y & $26.5 \pm 0.3$ & $0.04 \pm 0.01$ & 729 \\ (1406)~Komppa & & Ld & D & \textbf{N} & $24.2 \pm 0.4$ & $0.17 \pm 0.05$ & \\ (1702)~Kalahari & D & L & & & $34.6 \pm 0.1$, $32.7 \pm 0.2$ & $0.06 \pm 0.01$, $0.06 \pm 0.01$&\\ (2085)~Henan & & L & L & \textbf{Y} & $13.4 \pm 0.1$ & $0.30 \pm 0.06$ & 2085 \\ (2354)~Lavrov & & L & \textbf{L} & & $13.3 \pm 0.1$ & $0.23 \pm 0.05$ & 2085 \\ (2448)~Sholokhov & & L & L & \textbf{N} & $38.5 \pm 0.2$ & $0.16 \pm 0.02$ & \\ (2732)~Witt & & A & L & & $11.0 \pm 0.3$ & $0.30 \pm 0.02$ & \\ (3269)~Vibert Douglas & & & & & $11.7 \pm 0.2$ & $0.14 \pm 0.03$ & 729 \\ (4917)~Yurilvovia & & Ld & \textbf{L} & & $8.0 \pm 2.4$ & $0.21 \pm 0.21$ & \\ (8250)~Cornell & & & & & $9.0 \pm 0.3$ & $0.34 \pm 0.08 $ & 1400 \\ (15552)~Sandashounkan & & & & & $7.6 \pm 0.1$ & $0.37 \pm 0.04$ &1400 \\ (19369)~1997 YO & & & & & $14.3 \pm 0.1$, $13.6 \pm 0.1$ & $0.16 \pm 0.03$, $0.18 \pm 0.02$ &1400 \\ (26219)~1997 WO21 & & & & & $7.6 \pm 0.2$ & $0.22 \pm 0.03$ & 1400 \\ (67255)~2000 ET109 & & & & & $6.6 \pm 0.1$ & $0.28 \pm 0.02$ & 1400 \\ \hline \hline (673)~Edda & S & S & L & &
$37.6 \pm 0.4$ & $0.09 \pm 0.03$& \\ (3734)~Waland & & Ld & L & & $9.0 \pm 0.2$ & $0.20 \pm 0.05$ & \\ (3844)~Lujiaxi & & L & L & & $15.0 \pm 0.7$ & $0.17 \pm 0.03$& 2085 \\ (4737)~Kiladze & & L & L & & $8.8 \pm 0.1$ & $0.15 \pm 0.02$& \\ (5840)~Raybrown & & Ld & L & & $9.7 \pm 0.1$ & $0.22 \pm 0.04$& \\ (7763)~Crabeels & & L & L & & $7.8 \pm 0.1$ & $0.22 \pm 0.02$& \\ \hline \end{tabular} \caption{List of the targets observed (spectroscopy and/or polarimetry) during the different campaigns (upper part). Some targets that were not observed by us but are discussed in this work are added in the lower part. Bold entries denote results determined in this work. The first column gives the number and name of the asteroid. The columns Tholen \citep{Tholen84}, SMASS \citep{Bus_Bin,Mot_2008} and DM \citep{Dem_2009,Bus_2009} give the taxonomic class in these three taxonomies. The Barbarian column indicates whether the asteroid is considered a Barbarian \citep{b5,b6,b31}. D (NEOWISE) and $p_V$ (NEOWISE) correspond to the diameter and the geometric albedo, respectively, as given by the NEOWISE catalogue \citep{Mai_2016}. Finally, the Family column indicates the number of the parent body of the family to which the asteroid belongs (606 for the Brangane, 729 for the Watsonia, 1400 for the Tirela/Klumpkea, and 2085 for the Henan family). } \label{tab:Param} \end{table*} \subsection{Spectroscopy} \subsubsection{Spectral classification} \label{ssec:Spec_Class} Using IRTF, we obtained new NIR spectra between 0.82 and 2.45~${\rm \mu}$m for 14 objects. For 9 of them, the visible part of the spectrum is available in the SMASS database and was merged with our data to produce combined visible + NIR spectra. For each of them we derived a taxonomic classification according to the criteria used by \citet{Dem_2009}. Six objects are found to be (DM) L-class. The remaining three objects do not belong to the (DM) L-class.
(12)~Victoria is a (DM) D-type, while (122)~Gerda and (753)~Tiflis are (DM) S-types. The derived taxonomic classes of the asteroids observed in this work are included in Table \ref{tab:Param} (bold entries). The other five observed asteroids, for which no visible spectrum is available, seem to be compatible with the (DM) L-class. However, no definitive classification can be made in the absence of the visible part of the spectrum. \subsubsection{Space weathering} Using space-weathered end-member spectra allowed us to model simultaneously the visible (even though the visible region is not taken into account in the optimization procedure) and near-infrared regions of the asteroid spectra. However, applying our space weathering correction requires some information about the optical properties of the end-members, namely the real part of the refractive index $n$ and the effective size $D$ of the end-member particles. We chose the values $n = 1.635$ for the Mg-rich olivine \citep{Lucey_1998} and $n = 1.7$ for the meteoritic component (see Section \ref{sssec:End_Members}). In the case of the spinel-bearing CAIs, we used the wavelength-dependent values of $n$ derived by \citet{Hos_2008}. However, since \citet{Hos_2008} derived $n$ only up to 1.4~${\rm \mu}$m, we considered $n$ to be constant at longer wavelengths. According to \citet{Gun_2013}, the average size of regolith particles may depend on the size $D$ of the asteroid, ranging from $10$ to 100~${\rm \mu}$m for large asteroids ($D > 100$ km) and from millimetre to centimetre for asteroids smaller than 100 km. However, the finest fraction of regolith particles is responsible for the principal optical effect of space weathering \citep{Pie_1993}. We therefore chose the effective size of the particles to be equal to 25~${\rm \mu}$m.
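In a Hapke-type space-weathering correction, the choices of $n$ and particle size above enter through an extra absorption coefficient produced by the nanophase iron embedded in the host grains. The following sketch illustrates that coefficient; the iron optical constants and densities used here are illustrative assumptions, not the values adopted in our fitting procedure.

```python
import numpy as np

def npfe0_absorption(wavelength_um, n_host, f_mass,
                     n_fe=2.9, k_fe=3.0, rho_host=3.3, rho_fe=7.87):
    """Extra absorption coefficient (um^-1) produced by nanophase iron
    (npFe0) embedded in a host grain, following a Hapke-type
    space-weathering model.  The iron optical constants (n_fe, k_fe)
    and the densities are illustrative placeholders."""
    phi = f_mass * rho_host / rho_fe          # mass -> volume fraction
    z = (n_host**3 * n_fe * k_fe
         / ((n_fe**2 - k_fe**2 + 2.0 * n_host**2)**2
            + (2.0 * n_fe * k_fe)**2))
    return 36.0 * np.pi * phi * z / wavelength_um

# e.g. an olivine-like host (n = 1.635) with f = 8% npFe0 at 0.55 um
alpha_v = npfe0_absorption(0.55, n_host=1.635, f_mass=0.08)
```

The $1/\lambda$ dependence is what darkens and reddens the weathered end-members preferentially in the visible.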
The model also yields the mass fraction of nanophase iron particles embedded in the regolith particles (see the $f$ column of Table \ref{Tab:Fit_result2}). This parameter can be interpreted in terms of the extent to which an asteroid spectrum is affected by space weathering. We show in Fig.~\ref{fig:A_SL_SW} an example of the best fit obtained for asteroid (729)~Watsonia when using fresh (left panel) and weathered (right panel) end-members. \begin{figure*} \includegraphics[width=18cm]{Wat_Fit_New.eps} \caption{Example of the fits performed for asteroid (729) Watsonia using fresh end-members (left panel) and weathered end-members (right panel).} \label{fig:A_SL_SW} \end{figure*} \subsubsection{Spectral fitting} \label{ssec:Spec_Fit} \begin{table*} \centering \begin{tabular}{| c | cccccc | cccccc |} \hline & \multicolumn{6}{ c |}{Allende matrix} & \multicolumn{6}{ c |}{Y-86751 bulk} \\ Asteroid & FTA & Olivine & Allende & $f$ & $p_V$ & $\chi^2$ & FTA & Olivine & Y-86751 & $f$ & $p_V$ & $\chi^2$ \\ \hline (172)~Baucis & $29$ & $32$ & $39$ & 0.078& 0.15 & 2.96 & $17$ & $29$ & $53$ & 0.115& 0.13 &2.45 \\ (234)~Barbara & $25$ & $62$ & $13$ & 0.039& 0.24 & 3.26 & $20$ & $65$ & $15$ & 0.047& 0.24 & 3.05 \\ (236)~Honoria & $12$ & $29$ & $58$ & 0.061& 0.09 & 3.82 & $0$ & $51$ & $49$ & 0.093& 0.15 &3.79 \\ (387)~Aquitania & $31$ & $35$ & $34$ & 0.031& 0.18 & 5.25 & $19$ & $34$ & $47$ & 0.066& 0.16 &3.56 \\ (402)~Chloe & $32$ & $17$ & $50$ & 0.012& 0.16 & 5.18 & $6$ & $0$ & $94$ & 0.106& 0.10 &2.56 \\ (458)~Hercynia & $32$ & $14$ & $53$ & 0.019& 0.13 & 3.29 & $9$ & $1$ & $90$ & 0.121& 0.09 &1.86 \\ (599)~Luisa & $24$ & $29$ & $47$ & 0.074 & 0.15 & 3.29 & $8$ & $30$ & $61$ & 0.121& 0.13 &2.32 \\ (611)~Valeria & $3$ & $11$ & $85$ & 0.085 & 0.19 & 1.60 & $0$ & $68$ & $32$ & 0.080& 0.08 &3.09 \\ (673)~Edda & $34$ & $0$ & $66$ &0.014 & 0.12 & 2.29 & $10$ & $3$ & $87$ & 0.100& 0.10 & 2.10 \\ (679)~Pax & $26$ & $20$ & $53$ & 0.036& 0.14 & 4.12 & $0$ & $0$ &
$100$& 0.186& 0.08 &1.24 \\ (729)~Watsonia & $6$ & $26$ & $68$ & 0.083 & 0.10 & 2.58 & $0$ & $61$ & $39$ & 0.092& 0.17 &3.64 \\ (824)~Anastasia & $45$ & $9$ & $46$ & $0.000$ & $ 0.19$ & $9.33$ & $10$ & $0$ & 90 & 0.030 & 0.12 & 4.82 \\ (980)~Anacostia & $48$ & $28$ &$23$ & 0.042& 0.17 & 5.20 & $36$ & $0$ & $64$ & 0.106& 0.11 &3.64 \\ (1372)~Haremari & $13$ & $56$ & $31$ & 0.085& 0.12 & 2.44 & $0$ & $44$ & $56$ & 0.137& 0.12 &1.85 \\ (2085)~Henan & $36$ & $6$ & $58$ & 0.000 & 0.13 & 1.88 & $10$ & $0$ & $90$ & 0.075& 0.10 &1.49 \\ (2354)~Lavrov & $41$ & $35$ & $25$ & 0.028& 0.18 & 1.47 & $26$ & $20$ & $54$ & 0.065& 0.14 &1.35 \\ (2732)~Witt & $27$ & $56$ & $17$ & 0.045& 0.22 & 2.47 & $22$ & $58$ & $20$ & 0.058& 0.21 & 2.17 \\ (3734)~Waland & $40$ & $16$ & $44$ & 0.017& 0.15 & 1.75 & $17$ & $0$ & $83$ & 0.088& 0.10 & 1.50 \\ (3844)~Lujiaxi & $51$ & $12$ & $37$ & 0.000 & 0.19 & 1.78 & $28$ & $0$ & $72$ & 0.031& 0.13 & 1.32 \\ (4737)~Kiladze & $49$ & $0$ & $51$ & 0.020 & 0.14 & 2.46 & $29$ & $0$ & $71$ & 0.073& 0.12 & 1.92 \\ (4917)~Yurilvovia & $20$ & $45$ & $35$ & 0.061& 0.15 & 8.10 & $5$ & $32$ & $62$ & 0.140& 0.11 & 6.05 \\ (5840)~Raybrown & $39$ & $30$ & $31$ & 0.010& 0.18 & 3.82 & $22$ & $0$ & $78$ & 0.075& 0.10 & 3.04 \\ (7763)~Crabeels & $49$ & $11$ & $39$ & 0.022 & 0.18 & 1.16 & $32$ & $0$ & $68$ & 0.071& 0.13 & 1.09 \\ (8250)~Cornell & $23$ & $76$ & $1$ & 0.027 & &1.17 & $23$ & $76$ & $1$ & 0.028& & 1.17 \\ (15552)~Sandashounkan & $21$ & $70$ & $9$ & 0.040 & & 2.21 & $18$ & $73$ & $9$ & 0.043& &2.28 \\ (19369)~1997 YO & $8$ & $0$ & $92$ & 0.103 & & 1.13 & $2$ & $67$ & $31$ & 0.079& &1.96 \\ (26219)~1997 WO21 & $18$ & $77$ & $5$ & 0.017 & & 1.06 & $19$ & $77$ & $5$ & 0.019& & 1.07 \\ (67255)~2000 ET109 & $22$ & $45$ & $33$ & 0.062 & & 1.51 & $13$ & $57$ & $30$ & 0.075 & &1.69 \\ \hline \end{tabular} \caption{Result of the Hapke fitting procedure of reflectance spectra. 
For each object, identified by its asteroid number, we give the relative abundances (in percent) of each of the three considered end-members. The column $f$ gives the mass fraction of npFe$^0$. The column $p_V$ gives the reflectance at 0.55~${\rm \mu}$m of the fitted asteroid spectrum (no value when the visible part of the spectrum is missing).} \label{Tab:Fit_result2} \end{table*} The results of the spectral fitting procedure explained in section \ref{ssec:Fit_Spectra} are shown in Table \ref{Tab:Fit_result2}. All fits are shown in Appendix~\ref{app:fit}. The red continuous lines correspond to the results obtained using the Allende matrix, while the blue dashed lines correspond to the results obtained with the spectrum of the Y-86751 meteorite. Below each fit, the residuals are plotted. In the majority of cases, the residuals are smaller using Y-86751 than using the Allende matrix. All the spectra were normalized to unity at 0.55~${\rm \mu}$m, except those for which no visible counterpart was available; in those cases, the spectra were normalized to unity at 1~${\rm \mu}$m. For three of those, we have also plotted the available Sloan Digital Sky Survey (SDSS) \citep{Ive_2002} measurements. \citet{Sun_2008a} modelled the spectra of (234)~Barbara, (387)~Aquitania, and (980)~Anacostia using a slightly different approach. They found for these asteroids high CAI abundances never observed in meteorite samples, equal to $22\%$, $25\%$ and $39\%$ of spinel-bearing CAIs, respectively. In their analysis, they used four different end-members: fluffy type A CAIs, CAI-free Allende matrix, MgO-rich olivine, and the spectrum of (2448)~Sholokhov (to simulate the typical slope of a (DM) L-class asteroid). In our analysis, we find relatively similar values for the same three asteroids: our CAI abundances are $25\%$, $31\%$, and $48\%$, respectively, when using the Allende matrix, and $20\%$, $19\%$, and $36\%$ using Y-86751.
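At its core, the fitting behind Table \ref{Tab:Fit_result2} is a constrained $\chi^2$ minimization: find non-negative end-member abundances, summing to one, that best reproduce the observed spectrum. The sketch below uses simple linear (areal) mixing of reflectances for brevity, whereas the actual procedure of section \ref{ssec:Fit_Spectra} mixes the components within the Hapke model; the function name and the synthetic data are ours.

```python
import numpy as np
from scipy.optimize import minimize

def fit_mixture(observed, end_members):
    """Abundances (>= 0, summing to 1) of the end-member spectra
    (rows of `end_members`) that minimize chi^2 against `observed`.
    Linear (areal) mixing sketch only."""
    m = len(end_members)
    res = minimize(lambda x: np.sum((observed - x @ end_members) ** 2),
                   x0=np.full(m, 1.0 / m),
                   bounds=[(0.0, 1.0)] * m,
                   constraints=[{"type": "eq",
                                 "fun": lambda x: np.sum(x) - 1.0}])
    return res.x, res.fun

# synthetic check: recover known abundances of three fake end-members
rng = np.random.default_rng(1)
E = rng.uniform(0.5, 1.5, size=(3, 60))   # stand-ins for FTA / olivine / meteorite
abund, chi2 = fit_mixture(np.array([0.2, 0.5, 0.3]) @ E, E)
```

With bounds and an equality constraint, `scipy.optimize.minimize` defaults to SLSQP, which handles the simplex constraint directly.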
As already mentioned by \citet{Sun_2008a}, only the fluffy type A CAIs seem able to explain the observed spectra. Among the three different FTA spectra adopted as possible end-members, the one showing the highest fraction of FeO ($\sim14\%$) within spinel was always found to give the best fit. In no case did the use of B-type CAIs lead to satisfactory fits of the spectra. We also attempted to include pyroxene as a possible end-member (see Fig. \ref{fig:Pyr_CAI}). Even though in a very few cases a very small amount of pyroxene ($1\%$ or below) could be added as a possible component, we found that, in general terms, pyroxene does not help to improve the fits of our spectra. \begin{figure*} \includegraphics[width=18cm]{Pyr_CAI2.eps} \caption{Best-fit models for the reflectance spectrum of asteroid (387)~Aquitania. The left panel shows the best result obtained assuming the presence of pyroxene without CAIs. The right panel shows the opposite case, namely CAIs and no pyroxene.} \label{fig:Pyr_CAI} \end{figure*} The composition of L-type asteroids seems to show some diversity, as we observe different kinds of behaviour. First, we notice that there are only two asteroids for which the Allende matrix provides a better fit (based on the $\chi^2$). Of these, (611)~Valeria is the only convincing case. In the case of (729)~Watsonia, the fit using the Allende matrix shows a shallow absorption band around 2~${\rm \mu}$m which is not present in the asteroid spectrum. In the following, we will therefore always discuss the results associated with Y-86751. Only four spectra exhibit a positive slope from 1.5 to 2.5~${\rm \mu}$m (236, 611, 729, 1372). These spectra are also characterized by the total, or nearly total, absence of a 2~$\rm \mu$m absorption band. As a result, our model does not need the addition of CAIs, but includes a large fraction of meteoritic component. Since the meteorite itself contains some CAIs, we can still consider that these asteroids are not CAI-free.
We note that (729) and (1372) belong to the Watsonia family. Another member of the Watsonia family, (599)~Luisa, also shows a low fraction of CAIs, together with a shallow absorption band around 2~${\rm \mu}$m associated with a flat slope. We note that the visible part of the spectrum of (599)~Luisa is not properly modelled. Two asteroids possess very large amounts of CAIs ($>30\%$). Of those, the spectrum of (980) is relatively well fitted in both the visible and NIR regions. However, the fit could still be improved around the 2~{$\rm \mu$}m absorption band, which is shifted toward shorter wavelengths in our modelling. The visible part of (7763) is not properly fitted, and the fit of the 2~$\rm \mu$m region is also not ideal. For four asteroids, the meteorite Y-86751 is almost the only component ($>90\%$), associated with no (or almost no) olivine and a few percent of CAIs. These spectra are characterized by a monotonically decreasing slope after 2~${\rm \mu}$m, a high degree of space weathering, and a low albedo. Two of them, (402) and (679), show poor agreement in the visible region, while the other two, (458) and (2085), show good agreement. Even though our fits do not include a significant separate CAI component for these asteroids, the CAIs present in the Y-86751 composition still provide roughly $10\%$ of CAIs. On the other hand, Y-86751 and Allende are almost absent ($<10\%$) for three asteroids. All three, (8250), (15552), and (26219), belong to the Tirela family. The (NIR-only) spectra of these asteroids are well modelled with the highest fractions of olivine found in our sample and moderate fractions of CAIs. The asteroid (824)~Anastasia is a particular case. The model fails when using the Allende meteorite and produces a rather poor fit when using the Y-86751 bulk. The model yields 90\% of meteoritic component associated with 10\% of CAIs. Although the fit is quite poor, the Y-86751 bulk seems able to reproduce some very specific features, such as a small absorption band around 1.1 ${\rm \mu}$m.
The spectrum of (824)~Anastasia also presents a very steep slope in the visible, which can only be approximately reproduced. Although our model seems to be missing some end-member needed to satisfactorily reproduce the spectrum of Anastasia, we clearly see that the Y-86751 bulk appears to be the major component. Our model fails in reproducing the spectra of (2448) and (2732). The composition of these asteroids should then be different from that of the rest of the L-class. In the case of (2448), this result agrees with the fact that this asteroid does not show a large inversion angle. We note that (2732) was previously classified as an A-type in the SMASS taxonomy. \subsection{Polarimetry} \subsubsection{Phase-polarization curve} \label{ssec:Phase_Pol_Curv} In this work, $109$ polarimetric measurements of 32 individual asteroids are presented. All the observations were acquired using a standard Johnson $V$ filter. The individual measurements $P_{\rm r}$ are listed in Appendix~\ref{app:pola}. Phase-polarization curves were built by using data from the literature to complement our new data. Literature data are available at the PDS web site\footnote{Planetary Data System. The data are available at the URL address http://pds.jpl.nasa.gov/ (files maintained by D.F. Lupishko and I.N. Belskaya)}, while others were taken from recent papers \citep{Gil_2014}. The obtained phase-polarization curves were fitted using Eq.~\ref{eq:Pr2}, whenever enough data were available, using the techniques explained in Section~\ref{ssec:Phase_Pol}. Fig.~\ref{fig:All_Pola} presents all the polarimetric measurements available for the asteroids studied in this work. In this figure, and in all following figures presenting polarimetric measurements, full symbols represent data obtained by us, while empty symbols correspond to data retrieved from the literature. In Fig.~\ref{fig:All_Pola}, asteroids were subdivided into three groups based on their (DM) taxonomy.
Those belonging to the (DM) L-class are plotted as black squares. Their phase-polarization curves are characterized by a $P_{\rm min}$ of around $-1.5\%$, occurring between $15^{\circ}$ and $20^{\circ}$ of phase angle, and an inversion angle between $25^{\circ}$ and $30^{\circ}$. Asteroids which do not belong to the L-class are shown as blue circles. They display different phase-polarization curves, which are not compatible with the one displayed by the L-class. Those for which the (DM) classification is unknown are displayed as red diamonds. Some of these measurements are compatible with the phase-polarization curves of L-class objects, whereas others are not. \begin{figure} \includegraphics[width=9cm]{All_Pola.eps} \caption{All polarimetric measurements of asteroids studied in this work. Full symbols correspond to data obtained by us, while empty symbols correspond to measurements retrieved from the literature. The black squares stand for (DM) L-types, the blue circles for asteroids which do not belong to the (DM) L-class, and the red diamonds for asteroids for which the (DM) classification is unknown.} \label{fig:All_Pola} \end{figure} Among the 32 observed asteroids, 11 (172, 234, 236, 387, 402, 458, 599, 679, 729, 980, and 1372) were already known to be Barbarians \citep{b5,b6,b31,Gil_2011,Gil_2014,Bag_2015,Dev_2017a}. Our measurements allowed us to improve the coverage of their phase-polarization curves. For seven of them, the phase angle coverage is sufficient to derive a phase-polarization curve. The measurements and derived phase-polarization curves for these asteroids are presented in Fig.~\ref{fig:AI_Inv_pola}. The derived polarimetric parameters are summarized in Table~\ref{tab:Pol_info}. Of these, (172), (234), (236), (387), and (980) had been previously modelled \citep{b6,Cel_2015a,Bel_2017}. Our results are in agreement with previously published phase-polarization models.
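Deriving the parameters of Table~\ref{tab:Pol_info} amounts to a non-linear least-squares fit of the phase-polarization curve, followed by a root search for $\alpha_{\rm inv}$ and a minimization for $P_{\rm min}$. A sketch using the common exponential-linear representation (an assumed stand-in; the actual Eq.~\ref{eq:Pr2} may differ):

```python
import numpy as np
from scipy.optimize import curve_fit, brentq, minimize_scalar

def pr_model(alpha, A, B, C):
    # exponential-linear representation of P_r(alpha), in percent;
    # assumed form, used here only for illustration
    return A * (np.exp(-alpha / B) - 1.0) + C * alpha

def polarimetric_parameters(alpha_deg, pr_obs, p0=(2.0, 8.0, 0.1)):
    """Fit the phase-polarization curve and return
    (alpha_inv, alpha(P_min), P_min)."""
    popt, _ = curve_fit(pr_model, alpha_deg, pr_obs, p0=p0)
    f = lambda a: pr_model(a, *popt)
    alpha_inv = brentq(f, 5.0, 45.0)               # zero crossing of P_r
    res = minimize_scalar(f, bounds=(0.1, alpha_inv), method="bounded")
    return alpha_inv, res.x, res.fun

# synthetic Barbarian-like curve with an inversion angle near 29 deg
a = np.linspace(2.0, 33.0, 25)
alpha_inv, a_min, p_min = polarimetric_parameters(a, pr_model(a, 2.0, 6.0, 0.0684))
```

Measurement uncertainties can be propagated through the `sigma` argument of `curve_fit`; the bracketing interval passed to `brentq` assumes the fitted curve actually crosses zero in that range.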
The only major difference concerns (234)~Barbara, for which the inversion angle was found to be $2^{\circ}$ lower by \citet{b6} and \citet{Bel_2017}. This is mainly due to one measurement at $\alpha = 30.4^{\circ}$, where $P_{\rm r} = 0.72$ \citep{b6}, which is not compatible with two more recent measurements at $30.8^{\circ}$ and $33.54^{\circ}$, which give $P_{\rm r}$ equal to $0.017 \pm 0.027$ and $0.62 \pm 0.046$, respectively. More observations of (234)~Barbara at high phase angles are needed to establish whether these discrepancies are due to measurement errors or to intrinsic variations of the polarization degree measured in different observing circumstances. \begin{figure*} \includegraphics[width=19cm]{Pola_PP.eps} \caption{Phase-polarization curves derived for Barbarian asteroids using new polarimetric data presented in this work. These asteroids are the same as those of Fig.~\ref{fig:CAI_Inv}. Full symbols refer to data obtained in this work while open symbols refer to data retrieved from the literature.} \label{fig:AI_Inv_pola} \end{figure*} \begin{table} \begin{tabular}{c c c c} \hline Asteroid & $\alpha_{\rm inv}$ & $\alpha(P_{\rm min})$ & $P_{\rm min}$ \\ & [$^{\circ}$] & [$^{\circ}$] & [\%] \\ \hline (12)~Victoria & $23.0 \pm 0.2$ & $11.3 \pm 0.1$ & $-0.81 \pm 0.01$ \\ (122)~Gerda & $18.4 \pm 2.3$ & $8.2 \pm 0.5 $ & $ -0.73 \pm 0.05 $ \\ (172)~Baucis & $28.1 \pm 0.1$ & $12.8 \pm 0.2$ & $-1.42 \pm 0.02 $ \\ (234)~Barbara & $29.4 \pm 0.1$ & $12.5 \pm 0.4$ & $-1.57 \pm 0.04 $ \\ (236)~Honoria & $26.6 \pm 0.2$ & $13.0 \pm 0.1$ & $-1.27 \pm 0.01 $ \\ (387)~Aquitania & $28.2 \pm 0.5$ & $13.7 \pm 0.2$ & $-1.46 \pm 0.02 $ \\ (402)~Chloe & $32.0 \pm 0.4$ & $15.7 \pm 0.2$ &$-1.68 \pm 0.02$ \\ (679)~Pax & $28.2\pm0.1$ & $13.7 \pm 0.1$ & $-1.59 \pm 0.02$ \\ (980)~Anacostia & $28.6 \pm 0.5$ & $12.5 \pm 0.2$ &$ -1.24 \pm 0.01$ \\ (1284)~Latvia & $26.1 \pm 0.4$ & & \\ \hline \end{tabular} \caption{Summary of the phase-polarization curve parameters for some asteroids studied in this work.
$\alpha_{\rm inv}$ is the inversion angle, $\alpha(P_{\rm min})$ is the phase angle in the negative polarization branch at which the linear polarization reaches its largest (negative) value $P_{\rm min}$, listed in the last column.} \label{tab:Pol_info} \end{table} For some targets, our measurements are the first polarimetric observations. In some cases, the new data allow us to obtain a first estimate of the inversion angle, enabling us to decide whether these objects are Barbarians or not (this information is included in Table~\ref{tab:Param}). \subsubsection{Identification of new Barbarians} The new polarimetric measurements presented in this work allow us to identify some new Barbarian asteroids. {\it (606)~Brangane}. For this asteroid, we have two polarimetric measurements at phase angles around $20^{\circ}$. The corresponding $P_{\rm r}$ values are around $-1.2\%$, which is clearly diagnostic of Barbarian behaviour. {\it (611)~Valeria}. For this asteroid, we have two polarimetric measurements at similar phase angles, around $20^{\circ}$. The corresponding $P_{\rm r}$ values are around $-0.8\%$, which is clearly diagnostic of Barbarian behaviour. {\it (824)~Anastasia}. For this asteroid, we have five polarimetric measurements at phase angles ranging from $3.35^{\circ}$ to $18.11^{\circ}$. The observed $P_{\rm r}$ value at $\alpha = 18.11^{\circ}$ is $-1.71 \pm 0.18\%$, which is an indication of Barbarian behaviour. {\it (1284)~Latvia}. Four polarimetric measurements are available at medium and high phase angles ($\alpha = 9.23^{\circ}$, $22.89^{\circ}$, $22.95^{\circ}$, and $25.20^{\circ}$), with $P_{\rm r}$ values equal to $-1.93$, $-0.55$, $-0.53$, and $-0.19$, respectively. The three measurements at phase angles higher than $22^{\circ}$, although one of them has a large error bar, nicely indicate an inversion angle around $26^{\circ}$, which is clearly Barbarian-like. {\it (1372)~Haremari}.
We have only a single measurement for this asteroid, at a phase angle of $19.96^{\circ}$, for which the value of $P_{\rm r}$ is $-0.911\pm0.124\%$. As for (606)~Brangane, this negative value at a phase angle as high as $19.96^{\circ}$ is by itself strong evidence of its Barbarian nature. Haremari belongs to the dynamical family of Watsonia. This single available measurement for Haremari is in agreement with the (still fairly noisy) phase-polarization curve of Watsonia, the largest remnant of this family, which was the first family identified among the Barbarians \citep{Cel_2014}. In addition to the above-mentioned objects, we also have one polarimetric observation of (2085)~Henan, for which we find $P_{\rm r} = -1.920 \pm 0.090$ at a phase angle of $15.82^{\circ}$. This observation is still far from the inversion angle, but the polarization is strongly negative. This makes (2085)~Henan a reasonable Barbarian candidate, to be confirmed by future measurements. Fig.~\ref{fig:new_Barb} shows the polarimetric data for the asteroids discussed in this Section. The phase-polarization curve of (234)~Barbara is also shown for comparison. \begin{figure} \includegraphics[width=9cm]{New_Barb2.eps} \caption{Polarimetric data for (606)~Brangane (red diamonds), (824)~Anastasia (yellow pentagrams), (1284)~Latvia (magenta squares), (1372)~Haremari (large blue disk), and (2085)~Henan (green triangle). Polarimetric data of (234)~Barbara are also shown as small black points for comparison.} \label{fig:new_Barb} \end{figure} \section{Discussion} \label{Sec:Inter_Disc} \subsection{The abundance of CAIs} In our sample of asteroids, we observe a wide range of CAI abundances, from almost 0\% (including the CAIs present inside the Y-86751 bulk) to more than 36\%. This wide range suggests that the L-type asteroids did not all form at the same time and/or in the same location in the Solar nebula.
This supports the hypothesis that CAIs were spread inhomogeneously in the Solar nebula after their formation close to the Sun. \subsection{Relation between L-class (DM) and Barbarians} We already saw in Fig.~\ref{fig:All_Pola} that L-class asteroids display a distinct phase-polarization behaviour, characterized by the deep and very wide negative polarization branch typical of Barbarians. The first Barbarian asteroids were discovered when asteroid taxonomy was based only on reflectance spectra limited to the visual region. We have already mentioned that, according to the SMASS taxonomy, these first Barbarians belonged to the L-, Ld- and K-classes. Later, it was found that the known Barbarians belong to the (DM) L taxonomic class, defined by taking into account also the NIR region of the spectrum. Some asteroids belonging to the (SMASS) L-class were included in our sample even though there was no evidence that they belong to the (DM) L-class, in order to obtain more definitive evidence that the Barbarian polarimetric behaviour is indeed uniquely associated with the (DM) L-class. These objects are (12)~Victoria, (122)~Gerda, and (753)~Tiflis. The new NIR spectroscopic measurements presented in this work allow us to conclude that they belong to the (DM) D-class (Victoria) and the (DM) S-class (Gerda and Tiflis). Our polarimetric measurements for these three asteroids confirm that they do not exhibit a large inversion angle and certainly are not Barbarians (see Fig.~\ref{fig:Non_barb_pola}). Another asteroid in our sample, (908)~Buda, was also classified as (SMASS) L-class, but was later reclassified as (DM) D-class \citep{Dem_2009}. Our two polarimetric observations of this asteroid at high phase angles ($P_{\rm r} = -0.381 \pm 0.033$ at $\alpha = 22.01^{\circ}$ and $P_{\rm r} = 0.640 \pm 0.166$ at $\alpha = 25.36^{\circ}$) suggest an inversion angle around $23^{\circ}$.
This value is fairly high in general terms, but still too low to be considered clearly diagnostic of a Barbarian asteroid; new data are needed to confirm or rule out this hypothesis. Asteroid (1406)~Komppa was already found to belong to the (DM) D-class (MIT-UH-IRTF survey), although it had previously been classified as a (SMASS) Ld asteroid. Our single polarimetric measurement indicates $P_{\rm r} = -0.44 \pm 0.10$ at a phase angle of $11.06^{\circ}$. This phase angle is too low to draw any conclusion about the inversion angle, but the degree of negative linear polarization seems too low for this object to be considered a likely Barbarian candidate. Two asteroids, (478)~Tergeste and (1702)~Kalahari, are found to belong to the (SMASS) L-class, but no NIR spectra are available to determine their (DM) taxonomic classification. (478)~Tergeste does not possess a large inversion angle and most probably is not a (DM) L-type asteroid. In the case of (1702)~Kalahari, our polarimetric observations were taken at phase angles too low to draw any reliable conclusion. \begin{figure} \includegraphics[width=8.8cm]{Non_Barb_Pola.eps} \caption{Polarimetric data for (12)~Victoria (red diamonds), (122)~Gerda (black squares), (753)~Tiflis (blue circles), (908)~Buda (magenta triangles), and (234)~Barbara (green left-oriented triangles). The first four asteroids are (SMASS) L-types, but are not L-types in the (DM) taxonomy. (234)~Barbara is an L-type in the (DM) taxonomy and a Barbarian; it is displayed here as a reference for the typical behaviour of (DM) L-type/Barbarian asteroids.} \label{fig:Non_barb_pola} \end{figure} On the other hand, asteroids (606), (824), (1372), and (with some more uncertainty) (2085) are found in this work to be Barbarians, and all of them are found to belong to the (DM) L-class.
Based on these results, and on the fact that no Barbarian identified so far belongs to a taxonomic class other than the (DM) L-class, while only one peculiar (DM) L-class asteroid, (2448)~Sholokhov, does not show the Barbarian polarimetric behaviour (see Sec.~\ref{ssec:Spec_Fit} for a discussion of this peculiar case), we can confidently confirm the existence of a nearly one-to-one relation between the (DM) L-class and the Barbarians. \subsection{Aqueous alteration} We found evidence in our modelling procedure that the meteoritic sample Y-86751 almost always provided better results than the Allende matrix. As already discussed, these two meteorites are of similar type and similar bulk composition. Y-86751 contains CAI inclusions with more FeO-rich spinels (18-25\% compared to 4-14\% for Allende). Neither Allende nor Y-86751 presents hydrated minerals in its matrix, but according to \citet{Gyo_2011}, the material in the groundmass around the chondrules of Y-86751 may show a flow texture due to aqueous alteration. The possible presence of aqueous alteration on L-type asteroids was already suggested by \citet{Sun_2008a} to explain the apparent absence of the igneous differentiation that would have destroyed the observed FTAs \citep{Gri_1989}. Moreover, \citet{Riv_1998} observed a 3~${\rm \mu}$m feature in the spectrum of (387)~Aquitania which is characteristic of aqueous alteration. However, the improvement of the modelling with Y-86751 is not proof of the presence of hydration on L-type asteroids. The improvement could as well be due to slight modifications in composition, FeO enrichment of the spinel, and/or different preparation of the measured laboratory samples. Indeed, most of the differences observed between Allende and Y-86751 arise in the visible part of the spectrum and around the 2~${\rm \mu}$m absorption band.
We do not see the typical 0.7~${\rm \mu}$m absorption feature associated with hydrated silicates in the spectrum of Y-86751, in that of Allende, or in any L-type studied in this work, and the 2~${\rm \mu}$m band is not diagnostic of aqueous alteration. Only a spectroscopic survey of L-types around 3~${\rm \mu}$m could properly assess the aqueous alteration of L-types. \subsection{Interpretation of the high polarimetric inversion angle of Barbarian asteroids} \label{ssec:CAI_Barb} In this section, different possibilities are discussed to explain the large polarimetric inversion angle of Barbarian asteroids. \subsubsection{Regolith size} In a recent paper, \citet{paper2} have shown an updated version (see Fig.~9 of the above-mentioned paper) of a classical plot \citep[see, for instance,][]{Dol_1989} showing the distribution of the asteroids in the space of the polarimetric parameters $P_{\rm min}$ versus inversion angle. The authors show that the Barbarians display a distinctly different behaviour with respect to ``regular'' asteroids: they occupy a region of the $\alpha_{\rm inv}$ - $P_{\rm min}$ space corresponding to the domain occupied, according to laboratory experiments, by very finely divided silicaceous powders and thin lunar fines, whereas regular asteroids are usually found in a region of the plot corresponding to pulverized rocks and meteorites with coarser grains, having sizes between 30~${\rm \mu}$m and 300~${\rm \mu}$m. \citet{paper2} also noted that regular asteroids tend to group together, in the above-mentioned space, according to their albedo. In turn, the albedo is known to be a parameter related both to composition and to regolith properties \citep[see also, in this respect,][]{Vesta}. This polarimetric property therefore suggests that the Barbarian behaviour is related to anomalous surface regolith properties, in particular regolith particle sizes much smaller than usual.
\citet{paper2} also noted that another non-Barbarian asteroid, (21)~Lutetia, visited by the \textit{Rosetta} probe, is found in a location close to that occupied by Barbarians in the $(\alpha_{\rm{inv}},P_{\rm{min}})$ space. They noted that this asteroid is unusual in several respects, and there are reasons to believe that Lutetia is a primitive asteroid \citep{Cor_2011,Sie_2011}. The \textit{Rosetta} instruments VIRTIS and MIRO also found evidence of very low thermal inertia \citep{Gul_2012,Oro_2012}. This is in general agreement with the polarimetric data suggesting that Lutetia's surface could be rich in fine dust. Surfaces rich in fine dust could also be interpreted in terms of age, through the cumulative effect of a long exposure to moderate impacts, unable to totally disrupt the body but more than sufficient to finely pulverize its surface. This is a possible interpretation deserving further scrutiny, because we have already seen that the likely composition, and possibly also the slow rotations, of Barbarians suggest that these asteroids could be extremely old. However, the regolith particle size is believed to depend on the asteroid size, and the fact that Barbarian asteroids are also found among small members of dynamical families argues against a simple relation between regolith size and Barbarian behaviour. \subsubsection{High abundance of spinel-bearing CAIs} Fig.~\ref{fig:CAI_Inv} shows the relative abundance of fluffy type A CAIs (obtained by our analysis) against the inversion angle of the phase-polarization curve for the Barbarian asteroids for which we have adequate phase-polarization curves. The asteroids (402) and (679) are represented using a different symbol (blue triangles): as already discussed in Sec.~\ref{ssec:Fit_Spectra}, these two objects seem to differ from the other known Barbarians.
Based on our model, they are the only ones (out of the 7) to be composed almost exclusively of the meteoritic component ($100\%$ for (679) and $96\%$ for (402)). These asteroids also possess very high degrees of space weathering. We also note that the visible part of their spectrum is not well fitted in either case. With the exception of (402) and (679), a correlation between the modelled abundance of CAIs and the inversion angle seems to be apparent, in spite of the fairly large uncertainties in the determinations of CAI abundances. The interpretation of such a correlation is not trivial, however. The results suggest that the polarimetric inversion angle of Barbarians tends to increase with increasing abundance of CAIs. According to current understanding, the only active phase of CAIs in determining the 2~${\rm \mu}$m absorption band in the reflectance spectrum is spinel. Moreover, there are reasons to believe that the strength of the absorption band is determined primarily by the FeO content of the spinel. So, when we plot the resulting abundances of the fluffy A-type CAI component in our modelled mineralogic compounds, we are also indirectly dealing with the FeO content of the spinel assumed to be present in the modelled CAI component. One should note that even if (236) seems not to possess any CAIs, its spectrum is still modelled using meteoritic material which possesses CAI inclusions in which spinel has as much as $25\%$ of FeO. Taking into account an abundance of CAIs of $\sim10\%$ in this material, we can assume a CAI abundance of $\sim 5\%$ for (236). \begin{figure} \includegraphics[width=8.8cm]{CAI_INV_Y.eps} \caption{Plot of the polarimetric inversion angles for 7 Barbarian asteroids studied in this work, as a function of the derived CAI abundances obtained from fitting their reflectance spectra.} \label{fig:CAI_Inv} \end{figure} The most optically active compound inside CAIs is the FeO-bearing spinel.
This material has a high refractive index which is strongly wavelength-dependent in the visible range \citep{Hos_2008}. Because olivine, and also the meteoritic components (which are mainly composed of olivine), all have a refractive index that is mostly constant in the visible, an indirect proof of the presence of FeO-bearing spinel would be a wavelength dependence of the polarimetric properties measured at different visible wavelengths \citep{Zub_2015}. Most polarimetric observations of asteroids have historically been made in the $V$-band only, but there are a few exceptions for particularly bright and/or interesting asteroids. Among them is (234)~Barbara. Fig.~\ref{fig:Barb_refind} shows our computations of the inversion angle of the asteroid (234)~Barbara using data taken in the Johnson-Cousins $B$, $V$, $R$ and $I$ standard filters. In doing so, we use some $BRI$ data that are still unpublished and were obtained in the past, mostly at the CASLEO observatory (San Juan, Argentina). In producing this plot, we chose to plot on the horizontal axis not the wavelength (which would be the direct observable, being based on the known properties of the standard photometric filters), but the value of the refractive index of spinel taken at the effective wavelength of each filter. As an indication, the corresponding wavelengths are indicated on the upper horizontal axis. The value of the inversion angle is shown on the vertical axis. One can notice that, in spite of all uncertainties, there is a clear trend of increasing inversion angle with increasing refractive index. The error bars for the refractive index in Fig.~\ref{fig:Barb_refind} have been estimated based on the FWHM of each filter and the wavelength dependence of the refractive index. The errors are larger at shorter wavelengths, since the refractive index varies quickly in this region while it is almost constant in the $I$ filter.
One should also take into account that the sensitivity of most detectors decreases quickly in the blue spectral region, justifying the larger error bars for the inversion angle derived with the $B$ filter. A similar variation of the inversion angle with wavelength was also found for the other Barbarian (599)~Luisa. One spectropolarimetric measurement was acquired by \citet{Bag_2015} at high phase angle ($26.9^{\circ}$). This measurement shows $VRI$ polarization of $-0.39$, $-0.30$, and $-0.16\%$, respectively. This confirms that the inversion angle increases with decreasing wavelength. However, in the case of Luisa, this variation is smaller. Based on our phase-polarization curve, Barbara would have $VRI$ polarization of $-0.41$, $-0.18$, and $0.15\%$, respectively, at a phase angle equal to $26.9^{\circ}$. One should note that the CAI abundance derived in this work for Luisa is much lower than the one found for Barbara ($8$ and $20$\%, respectively). This could explain the differences we observe between these two asteroids in spectropolarimetry. \begin{figure} \includegraphics[width=8.8cm]{Ref_ind_Inv_Wavel.eps} \caption{Inversion angle of (234)~Barbara as a function of the refractive index of spinel and wavelength.} \label{fig:Barb_refind} \end{figure} The variations of the polarimetric inversion angle as a function of wavelength and of the derived abundance of CAIs suggest that the large inversion angles of Barbarian asteroids can be a consequence of a higher-than-normal and wavelength-dependent refractive index of the surface regolith, possibly to be interpreted as due to the presence of a high abundance of spinel-bearing minerals, fluffy A-type CAIs being our preferred candidates to explain the available observational evidence. \subsubsection{Space weathering} Space weathering certainly affects the spectroscopic properties of the objects, but it is expected to affect some polarimetric properties as well.
The most direct effect of space weathering (in the case of S-type asteroids) is a darkening of the surface, and it is known that polarimetry is highly sensitive to the albedo. Fig.~\ref{fig:SW_Pmin} shows an apparent relation between the derived amount of nano-phase iron particles needed to fit the reflectance spectra, and the extreme value of negative polarization, $P_{\rm min}$. This effect could possibly be interpreted as due to an increase of the imaginary part of the refractive index, according to \citet{Zub_2015}. Since ${\rm npFe^0}$ has a high imaginary refractive index, this interpretation is consistent with our results. As already noted for the relation between the inversion angle and the abundance of CAIs, the asteroids (402) and (679) (blue triangles) seem to follow a different behaviour. \begin{figure} \includegraphics[width=8cm]{SW_MIN_Y.eps} \caption{Abundance of ${\rm npFe^0}$ (mass fraction) vs. $P_{\rm min}$.} \label{fig:SW_Pmin} \end{figure} \subsection{Geometric albedos} \label{ssec:Alb_Det} Geometric albedos have been obtained for a large number of asteroids using the thermal radiometry technique applied to thermal IR data obtained by the WISE \citep{Mas_2011} and AKARI \citep{Usui_2012} satellites. The results tend to suggest that (DM) L-class objects, including the new ones identified in this paper, have an albedo distribution which appears to be bimodal, peaking around $0.11$ and $0.18$, as seen in Fig.~\ref{fig:LType_Alb}. One should note that small L-type asteroids tend to have a higher albedo than larger ones. Fig.~\ref{fig:ALB_VS_DIAM} shows the albedo of L-type asteroids as a function of the diameter derived by the NEOWISE survey. Indeed, all asteroids with a size below $20$ km have an albedo higher than $0.15$, while asteroids bigger than this threshold tend to have albedos lower than $0.15$. This property is expected if space weathering acts as a surface darkening process.
Since the collisional lifetime decreases with the size of an asteroid \citep{Far_1992,Bin_2004}, smaller asteroids are expected to have a younger surface than larger ones. This hypothesis was strengthened by the observation of a size dependence of the spectral slope of S-type asteroids \citep{Gaf_1993,Car_2016}. \citet{Bin_2004} also observed a correlation between the size and the spectral slope of near-Earth asteroids. Since space weathering is also expected to increase the slope of asteroid spectra, a relation between size, slope, and albedo should also be expected. \begin{figure} \includegraphics[width=8.8cm]{Alb_Bimod.eps} \caption{Histogram of the albedo of L-type asteroids studied in this work.} \label{fig:LType_Alb} \end{figure} \begin{figure} \includegraphics[width=8.8cm]{Alb_Diam_Err.eps} \caption{Albedo of (DM) L-type asteroids with respect to their derived IRAS diameter.} \label{fig:ALB_VS_DIAM} \end{figure} Families supposed to be populated by (SMASS) L-type asteroids, such as the Henan (2085) and Tirela (1400) families, possess high albedos ($0.22 \pm 0.08$ and $0.28 \pm 0.11$, respectively). However, only three of these asteroids were considered in the histogram presented in Fig.~\ref{fig:LType_Alb}. It is then possible that more high-albedo (DM) L-type asteroids will be identified in the near future. The darkening effect of space weathering can be seen in our modelling. Fig.~\ref{fig:Alb_SpaceW} shows the fraction of nano-phase iron particles derived by our modelling procedure as a function of the NEOWISE albedo. This figure strongly suggests that the albedo decreases with space weathering in the case of L-type asteroids. However, according to \citet{Cel_2010} this property does not seem to apply in the case of S-type asteroids. \begin{figure} \includegraphics[width=8cm]{Alb_SpaceW.eps} \caption{Abundance of ${\rm npFe^0}$ (mass fraction) vs.
the NEOWISE albedo for L-type asteroids.} \label{fig:Alb_SpaceW} \end{figure} \subsection{Asteroid families} \label{ssec:Ast_Fam} Asteroid families are groups of asteroids that share very similar orbital proper elements, and are interpreted as swarms of fragments issued from the disruption of single parent bodies.
The asteroids in our sample include some objects belonging to different dynamical families. \citet{Cel_2014} identified for the first time the existence of an asteroid family composed of Barbarian asteroids, namely the Watsonia family. Two other families are suspected to include Barbarian asteroids, namely the families of Henan and Tirela/Klumpkea. Most members of these families are too faint to be observed by means of ToPol. However, we present here some polarimetric measurements and a few NIR spectra of candidate family members. \subsubsection{The Watsonia family} \label{sssec:Watsonia} Asteroid (729)~Watsonia was identified as the largest member of a dynamical family by \citet{Nov_2011}. This is a high-inclination family (proper inclination $\sim 30^{\circ}$) located at a heliocentric distance of $2.76$ AU. The parent body was found to be a Barbarian by \citet{Gil_2014}. By means of VLT polarimetric observations of a sample of Watsonia family members, \citet{Cel_2014} discovered the first known case of a family consisting of Barbarian asteroids. The polarimetric measurements of the parent body (729)~Watsonia presented in this work are shown in Fig.~\ref{fig:Watsonian_Pola}, together with our measurements of the other largest asteroid in this family, (1372)~Haremari, and of (3269)~Vibert-Douglas. Our data confirm that they seem to share the same polarimetric properties as Watsonia (see Fig.~\ref{fig:Watsonian_Pola}). However, in the case of (3269)~Vibert-Douglas the single available measurement is insufficient to confirm a Barbarian behaviour. The asteroid (599)~Luisa can also be considered as a member of the Watsonia family \citep{Cel_2015a}. A more ancient super-family including also the nearby Barbarians (387)~Aquitania and (980)~Anacostia is also suspected to exist \citep{Cel_2014}. Our polarimetric measurement of Luisa was taken at a phase angle too low to conclude that Luisa is another confirmed Barbarian.
However, \citet{Bag_2015} presented one spectropolarimetric measurement of this asteroid at high phase angle, confirming its Barbarian nature. The family members exhibit similar spectra (see Fig.~\ref{fig:Watsonian_Spec}), with relatively low CAI abundances ($0\%$ for (729) and (1372), and $8\%$ for (599)). These results strengthen the hypothesis that these bodies are genetically related. Figs.~\ref{fig:Watsonian_Pola} and \ref{fig:Watsonian_Spec} show the available polarimetric and spectroscopic data, respectively, for asteroids belonging to the Watsonia family. \begin{figure} \includegraphics[width=8.8cm]{Watsonian_Pola2.eps} \caption{Polarimetric data of asteroids belonging to the Watsonia family.} \label{fig:Watsonian_Pola} \end{figure} \begin{figure} \includegraphics[width=8.8cm]{Watsonia_Spec_2.eps} \caption{Spectra of (599)~Luisa, (729)~Watsonia, and (1372)~Haremari normalized to one at 0.55~${\rm \mu}$m.} \label{fig:Watsonian_Spec} \end{figure} \subsubsection{The Henan family} \label{sssec:Henan} \citet{Bro_2013} identified a family of 946 asteroids with (2085)~Henan as the largest member, but \citet{Mas_2013} did not classify this as a family because it is too dispersed and probably contaminated by many interlopers. This controversial family was the first one found to include some (SMASS) L-class asteroids, by \citet{Bus_1999}. As discussed above, we have only one polarimetric measurement of (2085)~Henan and one measurement of the second largest member, (2354)~Lavrov, the latter having been observed at a phase angle too small to draw any conclusion about its behaviour. The single measurement of Henan, however, indicates a very high probability of its being a Barbarian. A few NIR spectra of Henan candidate members exist in the literature. Most of them are (DM) L-class and look similar, strengthening the possibility of a common origin.
According to our modelling attempts, these objects are characterized by CAI abundances ranging from 10 to 28\%, and display moderate degrees of space weathering. Fig.~\ref{fig:Henan_Spec} shows the spectra of (2085)~Henan, (2354)~Lavrov, and (3844)~Lujiaxi. A spectrum of (1858)~Lobachevski is also available; however, this asteroid was classified as an S-type and possesses an albedo of 0.37. This makes it probably an interloper inside the Henan family. Some differences are noticed between the Henan and the Watsonia families. Henan family spectra exhibit a negative slope in the near-infrared region, whereas members of the Watsonia family display a positive slope. However, the visible spectral region shows identical behaviour. \begin{figure} \includegraphics[width=8.8cm]{Henan_Spec_2.eps} \caption{Spectra of (2085)~Henan, (2354)~Lavrov, and (3844)~Lujiaxi normalized at 0.55~${\rm \mu}$m.} \label{fig:Henan_Spec} \end{figure} \subsubsection{The Tirela/Klumpkea family} \label{sssec:Tirela} The Tirela family was first identified by \citet{Nes_2005}. It is located at the edge of the outer belt (proper semi-major axis $a_p = 3.12$ AU) and possesses high eccentricity and inclination ($e_p = 0.20$ and $i_p = 16.8^{\circ}$). This family is characterized by high geometric albedos ($0.2-0.3$), whereas nearby asteroids in the same region generally have low albedos. This family was found to include (SMASS) L/Ld-class members by \citet{Mot_2008}. \citet{Mil_2014} also found a family in this region, but they assigned a different membership and called it the Klumpkea family. We observed (1040)~Klumpkea in polarimetry, but only at low phase angles, which cannot provide a diagnostic of Barbarian properties. In spectroscopy, five Tirela family members, (8250), (15552), (19369), (26219), and (67255), were observed, but only in the NIR. However, for some of them, spectro-photometric data in the visible domain are available in the SDSS database.
Their NIR spectra are characterized by a strong 2~${\rm \mu}$m absorption band, which leads to a high derived abundance of CAIs. Only (19369) lacks a strong 2~${\rm \mu}$m absorption band. This asteroid also possesses a lower albedo than all the other Tirela/Klumpkea family asteroids observed in this work. We suspect this asteroid to be an interloper inside this family. For the other ones, the spectral modelling provides CAI abundances ranging from 13 to 23\%, associated with almost no meteoritic component and a high fraction of olivine. \section{Conclusions and perspectives} \label{sec:Conc_Persp} Our comprehensive analysis of the evidence coming from polarimetric and spectroscopic data allows us to draw some robust conclusions, as well as some more tentative interpretation attempts based on current observational evidence. The most robust result is the proven equivalence between the polarimetric Barbarian behaviour and the taxonomic classification as L-class objects according to the \citet{Dem_2009} taxonomy. This correlation between polarimetric and spectroscopic behaviour had already been suggested in the past; in this work we provide very convincing observational proof of it. Another important result is that we confirm the preliminary conclusions by \citet{Sun_2008a}: we find that the spectra of (DM) L-class objects can be successfully modelled using primitive materials, including primarily CAIs, MgO-rich olivine, and the mineral compounds forming CV3 meteorites. We tried two CV3 meteorites in our Hapke model. We obtained better results when using the CV3 displaying CAIs with more FeO-rich spinels and showing some possible clues of aqueous alteration (Y-89751). We could also rule out the presence of large amounts of pyroxene. Our fits of the available reflectance spectra were generally good, both in the NIR and the visible spectral regions.
An essential feature of our modelling exercises is that the presence of fluffy type A CAIs is needed to obtain acceptable fits of the reflectance spectra. We found evidence of a relation between the relative abundance of CAIs on the surface of these asteroids and the large polarimetric inversion angle which characterizes the Barbarian behaviour. Such a relation seems to be strengthened by the observed variation of the inversion angle of asteroid (234)~Barbara as a function of wavelength. This variation can be interpreted as due to the wavelength-dependent variation of the refractive index of the spinel mineral. Other possible explanations of the Barbarian behaviour, however, cannot be ruled out, including the possibility that Barbarians have surface regoliths formed by very fine particles, as suggested by \citet{paper2}. Of course, different possible explanations are not necessarily mutually exclusive. Indeed, the high abundance of fluffy type A CAIs suggests that Barbarian asteroids could be extremely old and primitive. The important role played by space weathering processes was also stressed by the results of our investigations. A tentative relation was found between the estimated abundance of nano-phase iron, believed to be a characteristic outcome of space weathering, and the extreme value of negative polarization $P_{\rm min}$. Polarimetric and NIR reflectance spectra of a few members of dynamical families known to include L-class members were also obtained. We could confirm an L-classification for some of these family members. This is the first step of an investigation that deserves to be pursued making use of large telescopes. We also plan to extend our analysis in the future by setting up laboratory activities, including polarimetric measurements of CAI material found in meteorite samples.
These laboratory measurements will allow us to definitively understand the polarimetric behaviour of CAIs and to provide more robust answers to the enigma represented by Barbarian asteroids. \section{Acknowledgements} \label{sec:Ackn} The authors wish to thank J. de Leon for her constructive review and remarks, which improved the paper. MD thanks the Li{\`e}ge University for their financial support during his scientific missions in Calern. The Torino polarimeter was built at the INAF - Torino Astrophysical Observatory and funded by INAF in the framework of INAF PRIN 2009. Part of the polarimetric data in this work have been obtained on the C2PU facility (Calern Observatory, O.C.A.). Part of this work by MD was supported by the COST Action MP1104 ``Polarization as a tool to study the Solar System and beyond''. This work is based on data collected with the 2-m RCC telescope at Rozhen National Astronomical Observatory. The authors gratefully acknowledge observing grant support from the Institute of Astronomy and Rozhen National Astronomical Observatory, Bulgarian Academy of Sciences. The near-infrared data were acquired by MD and PT as Remote Astronomers at the Infrared Telescope Facility, which is operated by the University of Hawaii under contract NNH14CK55B with the National Aeronautics and Space Administration. Data were also obtained and made available by the MIT-UH-IRTF Joint Campaign for NEO Reconnaissance. The IRTF is operated by the University of Hawaii under Cooperative Agreement no. NCC 5-538 with the National Aeronautics and Space Administration, Office of Space Science, Planetary Astronomy Program. The MIT component of this work is supported by NASA grant 09-NEOO009-0001, and by the National Science Foundation under Grants Nos. 0506716 and 0907766. We also acknowledge support from the French ``Programme National de Plan\'etologie''.
GB gratefully acknowledges observing grant support from the Institute of Astronomy and Rozhen National Astronomical Observatory, Bulgarian Academy of Sciences. JL acknowledges support from the AYA2015-67772-R (MINECO, Spain) project. The asteroid diameters and albedos based on IRAS and NEOWISE observations were obtained from the Planetary Data System (PDS).
\section{Introduction} \label{pno_eom_introduction} Accurate description of the electronic spectra of medium-sized ($<100$ atoms) and large ($>100$ atoms) molecular systems has always been a challenge for quantum chemistry. Time-dependent density functional theory (TDDFT) is the most popular method for the analysis of excited states due to its computational efficiency, being capable of treating systems with hundreds and thousands of atoms. Although TDDFT provides moderate accuracy for one-electron excitations, its accuracy can be limited for certain types of excited states (e.g. Rydberg or charge transfer)\cite{Dreuw2005}, and in general it depends strongly on the density functional\cite{Jacquemin2009}. In contrast to TDDFT, multiconfiguration/multireference (MR) wave function models, such as MR perturbation theory methods (e.g. complete active space perturbation theory (CASPT2)\cite{Andersson1992} and n-electron valence state perturbation theory (NEVPT2)\cite{Angeli2001}) and MR configuration interaction,\cite{Werner1988,Szalay2012} can recover both static and dynamic electron correlation, can treat multiple electronic states on an equal footing, and attain high accuracy, albeit for rather small systems.\cite{Schreiber2008} Among the challenges of the MR approaches are the need to select the active space and the exponential growth of complexity with the size of the active space. Although the latter can be avoided for certain types of systems by numerical approximations such as the density matrix renormalization group\cite{Chan2011,Schollwock2005} and other tensor network approaches, the MR methods are generally difficult for nonspecialists to use. Accurate treatment of dynamical electron correlation in the context of MR methodologies is an ongoing direction of research. In this work we focus on the treatment of excited states by the coupled-cluster method.
The highly robust coupled-cluster hierarchy provides unparalleled accuracy for ground states by systematically including two-, three- and higher-body correlation effects from a single determinant reference. The CC ansatz can be extended to excited states through the use of linear-response (LR) theory,\cite{Monkhorst1977} the symmetry-adapted cluster configuration interaction (SAC-CI) method,\cite{Nakatsuji1983,Nakatsuji1979} or the equation-of-motion coupled-cluster (EOM-CC) method.\cite{Sekino1984,Stanton1993} However, the high-order scaling of the coupled-cluster methods limits their application to small molecules. Even with truncation to singles and doubles excitations, the excited-state CCSD methods still exhibit steep polynomial scaling, \bigO{N^6}, and are constrained to systems containing only 20-30 atoms without access to campus-level or national computing resources. Recently, the development of {\em reduced scaling} variants of the coupled-cluster methods has been reinvigorated by Neese's introduction\cite{Neese:2009db} of Pair Natural Orbitals (PNOs) in the context of the local correlation formalisms of CC initiated by Pulay\cite{Pulay1983} and pursued by Werner\cite{Hampel1996,Korona2003} and others.\cite{DanielCrawford2002,Russ2004} PNOs were originally proposed in the 1960s under the name pseudo-natural orbitals.\cite{Edmiston1966,Meyer1971} Truncation of PNOs significantly reduces the number of unoccupied orbitals while introducing only small errors in correlation energies in post-Hartree-Fock calculations. However, the demanding computational cost of the pair-specific integral transformation to the PNO space, which scales as $\bigO{N^7}$ if there is no truncation of the PNO space, long prevented the development of PNO-based electronic structure theories.
In 2009, Neese {\em et al.}~revived local PNOs (LPNOs) for the CEPA\cite{Neese:2009db} and CCSD\cite{Neese2009a} methods, making use of density fitting approximations to accelerate the integral transformation process. This made large-scale coupled-cluster calculations possible for systems with up to 100 atoms using a small workstation. The LPNO approach was improved by imposing block sparsity on the cluster operator amplitudes via domains of projected atomic orbitals (PAOs).\cite{Riplinger2013a,Riplinger2013} The DLPNO-CC method was subsequently improved via linear-scaling density fitting\cite{Pinski2015,Riplinger2016} and the introduction of F12 explicit correlation to reduce the basis set error,\cite{Pavosevic2017,Pavosevic2016,Pavosevic2014} culminating in a linear-scaling explicitly correlated CCSD(T) method for ground states. These developments were pursued in parallel by several other groups, with a polynomial-scaling PNO-CCSD(T) code demonstrated by H\"{a}ttig and co-workers\cite{Schmitz2016} and a scalable implementation of a linear-scaling PNO-CCSD(T)-F12 demonstrated by Werner and co-workers.\cite{Ma2017} The key ideas of modern reduced-scaling coupled-cluster methods apply not only to ground states but also to excited states. Two competing visions of how to formulate a reduced-scaling excited-state methodology have been explored. H\"attig and Helmich have explored excited-state coupled-cluster methods by introducing the $\bigO{N^4}$ scaling PNO-EOM-CC2 with state-specific PNOs,\cite{Helmich2013} as well as PNO-based CIS(D)\cite{Helmich2011} and ADC(2).\cite{Helmich2014} The idea common to these developments is the use of state-specific PNOs to compress the cluster operator (computed in the ground-state CC equation) and the excited-state wave operators for each state, with excited states computed one at a time. Thus the total number of PNOs grows linearly with the number of excited states.
Recently, Dutta \textit{et al.}\ presented a PNO-based coupled-cluster method for excited states utilizing the similarity-transformed EOM (STEOM) CCSD framework.\cite{Dutta2016,Dutta2018} In their approach the use of PNOs is limited to the ground state only, with the DLPNO-CCSD amplitudes subsequently transformed to the canonical basis and used to evaluate the bt-PNO-STEOM-CCSD energies of manifolds of states, at \bigO{N^6} complexity, reducible to \bigO{N^5} with additional improvements. However, because this approach back-transforms the PNOs to the canonical space in the equation-of-motion CCSD calculations, it limits the size of the systems that can be considered. We should also note that the use of local correlation ideas (namely, PAO domains) without the PNO-style compression has been explored in the context of coupled-cluster methods for excitation energies, such as local EOM-CC2\cite{Kats2006} and local EOM-CCSD.\cite{DanielCrawford2002, Korona2003} In this work, we present a PNO-based approach suitable for robust treatment of manifolds of excited states with the EOM-CCSD method. The key idea is to use state-averaged PNOs, constructed in analogy to the ground-state PNOs but from guess pair densities averaged over the target excited-state manifold. To quickly explore the performance of our approach we simulated it using a massively parallel EOM-CCSD implementation. The new massively parallel EOM-CCSD was implemented in the Massively Parallel Quantum Chemistry (MPQC) package\cite{mpqc4} using the TiledArray\cite{tiledarray} framework, based on the ground-state CCSD implementation described previously.\cite{Peng2016} The new implementation exhibits good strong-scaling parallel performance and allows the calculation of excitation energies for systems with more than 50 atoms and more than 1000 basis functions; this is crucial for the exploration of the state-averaged PNO ansatz for systems of realistic size.
In Section \ref{pno_eom_methods}, the theory and implementation of state-averaged PNOs are discussed. Section \ref{pno_eom_computational_detail} describes the computational details as well as the computing resources used. Section \ref{pno_eom_result} demonstrates the performance of the parallel EOM-CCSD code and the accuracy of state-averaged PNOs. \section{Methods} \label{pno_eom_methods} The coupled-cluster ground-state wave function, \begin{equation} \Psi_{(0)} \equiv e^{\hat{T}} \ket{0}, \end{equation} where $\ket{0}$ stands for the zeroth-order reference wave function (usually a Hartree-Fock determinant), is determined by projection of the Schr\"odinger equation against excited determinants, \begin{align} 0 & = \bra{\overline{^a_i}} \bar{H} \ket{0}, \\ 0 & = \bra{\overline{^{ab}_{ij}}} \bar{H} \ket{0}, \end{align} with $\bar{H} \equiv e^{-\hat{T}} H e^{\hat{T}}$ the usual similarity-transformed Hamiltonian. Within the equation-of-motion coupled-cluster method\cite{Sekino1984,Stanton1993} the $k$th excited-state wave function is obtained in a CI fashion, by the action of a linear excitation operator on the ground-state CC wave function: \begin{equation} \Psi_{(k)} \equiv \hat{R}_{(k)} e^{\hat{T}} \ket{0}. \end{equation} $\hat{R}_{(k)}$ and the corresponding energies $E_{(k)}$ are obtained by diagonalizing the similarity-transformed Hamiltonian: \begin{equation} \bar{H} \hat{R}_{(k)} \ket{0} = E_{(k)} \hat{R}_{(k)} \ket{0}. \end{equation} In practice the ground and excited states are represented in terms of single and double excitations only: \begin{align} \hat{T} & \overset{\text{CCSD}}{\equiv} \hat{T}_1 + \hat{T}_2, \\ \hat{T}_{1} & \equiv T^{i}_{a} E_{i}^{a} , \\ \hat{T}_{2} & \equiv \frac{1}{2} T^{ij}_{ab} E_{ij}^{ab}, \\ \hat{R}_{(k)} & \overset{\text{CCSD}}{\equiv} \delta_{k0} + \hat{R}_{1(k)} + \hat{R}_{2(k)}, \\ \hat{R}_{1(k)} & \equiv B^{i}_{a(k)} E_{i}^{a} , \\ \hat{R}_{2(k)} & \equiv \frac{1}{2} B^{ij}_{ab(k)} E_{ij}^{ab}.
\end{align} The storage and operation costs of the CCSD and EOM-CCSD methods are $\bigO{N^4}$ and $\bigO{N^6}$, respectively. Two-body amplitude tensors, $T^{ij}_{ab}$ and $B^{ij}_{ab}$, can be efficiently rank-compressed by transforming each $ij$ block into an $ij$-specific subspace. For the ground-state amplitudes the optimal pair-specific subspaces are robustly approximated by the truncated singular subspaces of the corresponding ground-state pair densities, $\textbf{D}_{(0)}^{ij}$, computed from guess amplitudes: \begin{align} \textbf{D}_{(0)}^{ij} = \frac{2}{1 + \delta_{ij}} (\textbf{T}^{ij} \tilde{\textbf{T}}^{ij \dagger} + \textbf{T}^{ij \dagger} \tilde{\textbf{T}}^{ij} ), \end{align} where $\left( \textbf{T}^{ij} \right)_{ab} \equiv T^{ij}_{ab}$ and $\tilde{\textbf{T}}^{ij} \equiv 2 \textbf{T}^{ij} - \textbf{T}^{ij \dagger}$. (Semi)canonical MP1 amplitudes, \begin{equation} (T^{(1)})^{ij}_{ab} \equiv \frac{G^{ij}_{ab}}{f_{i}^{i} + f_{j}^{j} - f_{a}^{a} - f_{b}^{b}}, \label{eq:T1} \end{equation} are typically used as the guess\cite{Neese2009a} (in Eq. \eqref{eq:T1}, $f$ denotes the matrix elements of the Fock operator and $G$ the two-electron integrals). PNOs span the singular subspace of $\textbf{D}_{(0)}^{ij}$; they are obtained by solving the eigensystem \begin{equation} \textbf{D}_{(0)}^{ij} \textbf{U}_{(0)}^{ij} = \textbf{U}_{(0)}^{ij} \textbf{n}_{(0)}^{ij}, \end{equation} where $\textbf{n}_{(0)}^{ij}$ are the PNO occupation numbers. PNOs with occupation numbers less than a user-provided threshold $T_\text{CutPNO}$ are omitted; hence the number of PNOs per pair is independent of the system size (i.e., $\bigO{1}$). One-body amplitudes, $T^{i}_{a}$ and $B^{i}_{a}$, are compressed similarly to their two-body counterparts by transforming into the basis of orbital-specific virtuals (OSVs).\cite{Yang2011} 
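The construction just described amounts to one symmetric eigenproblem per occupied pair. The algebra can be sketched in a few lines of Python/NumPy (a toy illustration only, not the MPQC implementation; the pair amplitudes and dimensions are placeholders):

```python
import numpy as np

def pair_pnos(T_ij, diagonal=False, t_cut_pno=1e-8):
    """Truncated PNOs for one occupied pair (i,j) from guess amplitudes.

    T_ij     : (nvir, nvir) guess (e.g., MP1) doubles amplitudes of the pair
    diagonal : True if i == j
    Returns (U, n): PNO coefficients (nvir, npno) and occupation numbers.
    """
    T_tilde = 2.0 * T_ij - T_ij.T                  # contravariant amplitudes
    prefac = 2.0 / (1.0 + float(diagonal))
    # Pair density D^ij; symmetric by construction, so eigh applies
    D = prefac * (T_ij @ T_tilde.T + T_ij.T @ T_tilde)
    n, U = np.linalg.eigh(D)                       # ascending eigenvalues
    keep = n >= t_cut_pno                          # drop small occupations
    return U[:, keep], n[keep]
```

Because the retained PNO count is set by the occupation-number threshold rather than by the basis size, the per-pair dimension stays $\bigO{1}$ as the system grows.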
OSVs are traditionally defined to be identical to the PNOs of diagonal pairs but truncated according to a different threshold, $T_\text{CutOSV}$. As pointed out by H\"attig and Helmich,\cite{Helmich2011} the optimal singular subspaces for the ground- and excited-state amplitudes differ; as a result, the PNOs and OSVs must be constructed separately for the ground and excited states. H\"attig and Helmich proposed the use of state-specific PNOs, where the PNOs for each state are constructed using the CIS(D) doubles amplitudes of that state: \cite{Helmich2013} \begin{equation} B^{ij}_{ab (k)} = \frac{K^{ij}_{ab (k)}}{\omega_{(k)} + f_{i}^{i} + f_{j}^{j} - f_{a}^{a} - f_{b}^{b}}, \label{eq:cis_d} \end{equation} \begin{equation} K^{ij}_{ab (k)} = B_{c (k)}^{i} G_{ab}^{cj} + B_{c (k)}^{j} G_{ab}^{ic} - B_{a (k)}^{l} G_{lb}^{ij} - B_{b (k)}^{l} G^{ij}_{al} , \end{equation} where $B^{i}_{a (k)}$ and $B^{ij}_{ab (k)}$ are the CIS singles amplitudes and CIS(D) doubles amplitudes for excited state $k$, and $\omega_{(k)}$ is the CIS excitation energy. The state-specific PNOs for the excited states are then obtained from the state-specific pair densities built from the CIS(D) doubles amplitudes, in analogy with the ground-state construction: \begin{equation} \textbf{D}^{ij}_{(k)} = \frac{2}{1 + \delta_{ij}} (\textbf{B}^{ij}_{(k)} \tilde{\textbf{B}}^{ij \dagger}_{(k)} + \textbf{B}^{ij \dagger}_{(k)} \tilde{\textbf{B}}^{ij}_{(k)} ). \end{equation} Such a definition of excited-state PNOs yields good accuracy in the context of the PNO-EOM-CC2 method.\cite{Helmich2013} However, several factors prompted us to look beyond state-specific PNOs. First and foremost, the cost of PNO construction and integral transformation grows linearly with the number of states. 
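The denominator in Eq. \eqref{eq:cis_d} is diagonal in the semicanonical basis, so the doubles amplitudes of state $k$ follow from an element-wise division once $K^{ij}_{ab(k)}$ is assembled. A minimal NumPy sketch with hypothetical array shapes (broadcasting builds the four-index denominator):

```python
import numpy as np

def cis_d_doubles(K_k, f_occ, f_vir, omega_k):
    """CIS(D)-style doubles amplitudes: B^{ij}_{ab}(k) = K / denominator.

    K_k     : (no, no, nv, nv) tensor K^{ij}_{ab}(k) built from CIS singles
    f_occ   : (no,) diagonal Fock elements f_i^i (semicanonical occupied)
    f_vir   : (nv,) diagonal Fock elements f_a^a (semicanonical virtual)
    omega_k : CIS excitation energy of state k
    """
    # omega_k + f_i + f_j - f_a - f_b, shaped (no, no, nv, nv) by broadcasting
    denom = (omega_k
             + f_occ[:, None, None, None] + f_occ[None, :, None, None]
             - f_vir[None, None, :, None] - f_vir[None, None, None, :])
    return K_k / denom
```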
This is particularly notable since the cost of PNO-based methods is often dominated by the cost of the integral transformation, even when domain approximations are employed.\cite{Pinski2015,Riplinger2016} Second, state-specific PNOs make it difficult to deal with degenerate state manifolds (which ideally need to be expressed in the same basis). Lastly, the use of state-specific PNOs increases the complexity of the formalism and implementation. Thus we decided to investigate a PNO-EOM-CCSD method that uses one set of PNOs for all excited states; in particular, we propose to use {\em state-averaged} PNOs. The state-averaged PNOs are defined as the eigenvectors of the pair densities averaged over an $N$-state manifold: \begin{equation} \label{eq:eom:sa_density} \textbf{D}^{ij} = \frac{1}{N} \sum_{k}^{N} \textbf{D}^{ij}_{(k)}. \end{equation} (State-averaged OSVs are defined in this work as the PNOs of the diagonal pairs, in complete analogy with the construction of the ground-state OSVs.) Although work is underway in our group to develop a production implementation of reduced-scaling CC, here our goal is more modest: we aim to evaluate the proposed state-averaged PNO formulation in the context of EOM-CCSD. Hence we initially implemented a simulation of PNO-EOM-CCSD based on a newly developed massively parallel canonical (i.e., \bigO{N^6}) EOM-CCSD program in the MPQC code. Note that such simulation has been used previously for the initial evaluation of locally-correlated PAO-based EOM-CCSD by Russ and Crawford\cite{DanielCrawford2002,Russ2004} and by Korona and Werner \cite{Korona2003}. Werner {\em et al.}~and Crawford {\em et al.}~also used simulation to compare PAO-, OSV-, and PNO-based formulations of CCSD.\cite{Krause2012,McAlexander2016} It should also be noted that H\"attig and Helmich have demonstrated a {\em production}-level \bigO{N^4} PNO-EOM-CC2 method,\cite{Helmich2013} but PNO-EOM-CCSD had not been reported at the time of writing this manuscript. 
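The averaging of Eq. \eqref{eq:eom:sa_density} adds essentially no cost on top of forming the per-state densities, and yields a single set of PNOs per pair. A toy sketch (the per-state densities here are hypothetical symmetric matrices):

```python
import numpy as np

def state_averaged_pnos(pair_densities, t_cut_pno=1e-8):
    """State-averaged PNOs of one pair from per-state pair densities D^ij_(k).

    pair_densities : list of (nvir, nvir) symmetric matrices, one per state
    """
    D_avg = sum(pair_densities) / len(pair_densities)
    n, U = np.linalg.eigh(D_avg)        # one diagonalization serves all states
    keep = n >= t_cut_pno
    return U[:, keep], n[keep]
```

One set of PNOs then serves the whole manifold, so the PNO construction and integral transformation are performed once rather than once per state.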
The canonical closed-shell EOM-CCSD program in MPQC was implemented on top of the TiledArray tensor framework following the formalism of Bartlett and Stanton \cite{Stanton1993}. The implementation details generally follow the ground-state explicitly correlated CCSD implementation reported previously.\cite{Peng2016} All amplitudes and intermediates are distributed in memory, and contractions are evaluated using the communication-optimal implementation of the distributed-memory scalable universal matrix multiplication algorithm (SUMMA) in TiledArray.\cite{Calvin2015a,Calvin2015b} As in the ground-state CCSD, the largest intermediate needed in EOM-CCSD is the $W_{ab}^{cd}$ term with four virtual indices: \begin{equation} W_{ab}^{cd} \equiv G_{ab}^{cd} - T^{i}_{b} G_{ai}^{cd} - T^{i}_{a}G_{ib}^{cd} + (T_{ab}^{ij} + T_{a}^{i}T_{b}^{j})G_{ij}^{cd}. \end{equation} When contracting with $B_{ab}^{ij}$, storage of this intermediate can be avoided through a back-transformed intermediate: \begin{align} W_{ab}^{cd}B_{cd}^{ij} &= X^{ij}_{\rho \sigma} C^{\rho}_{a} C^{\sigma}_{b} - X^{ij}_{\sigma \rho} C^{\rho}_{k} C^{\sigma}_{a} T_{b}^{k} - X^{ij}_{\rho \sigma} C^{\rho}_{k} C^{\sigma}_{b} T_{a}^{k} + X^{ij}_{\rho \sigma} C^{\rho}_{k} C^{\sigma}_{l} (T_{ab}^{kl} + T_{a}^{k}T_{b}^{l}) ,\\ X^{ij}_{\rho \sigma} &= (B_{cd}^{ij} C_{\mu}^{c} C_{\nu}^{d}) G_{\rho \sigma}^{\mu \nu}. \end{align} Computing the intermediate $X$ requires evaluating the atomic-orbital two-electron integrals on the fly. In this way, the storage requirements of the EOM-CCSD program are reduced, allowing us to carry out calculations on systems with over 1000 basis functions. The same technique has been used by Ku{\'{s}} {\em et al.}{} in ACES \rom{3}. 
\cite{Kus2009} The ground-state PNO-CCSD simulation was implemented by modifying the Jacobi update in the following manner: \begin{enumerate} \item After the CCSD amplitude residuals $\textbf{R}_{1}$ and $\textbf{R}_{2}$ are computed, $\textbf{R}_{1}$ is transformed into the semi-canonical OSV basis and $\textbf{R}_{2}$ is transformed into the semi-canonical PNO basis: \begin{equation} \bar{\textbf{R}}^{i} = \textbf{U}^{i \dagger} \textbf{R}^{i}, \end{equation} \begin{equation} \bar{\textbf{R}}^{ij} = \textbf{U}^{ij \dagger} \textbf{R}^{ij} \textbf{U}^{ij}, \end{equation} where $\textbf{R}^{i}$/$\textbf{R}^{ij}$ are the corresponding orbital/pair blocks of the $\textbf{R}_{1}$/$\textbf{R}_{2}$ residuals, and $\textbf{U}^{i}$/$\textbf{U}^{ij}$ are the ground-state OSV/PNO bases. \item The amplitude updates are computed through a Jacobi step in the OSV and PNO spaces: \begin{equation} \bar{\Delta}^{i}_{a_{i}} = \frac{\bar{R}^{i}_{a_{i}}} {f_{i}^{i} - \bar{f}_{a_{i}}^{a_{i}} }, \end{equation} \begin{equation} \bar{\Delta}^{ij}_{a_{ij}b_{ij}} = \frac{\bar{R}^{ij}_{a_{ij}b_{ij}}} {f_{i}^{i} + f_{j}^{j} - \bar{f}_{a_{ij}}^{a_{ij}} - \bar{f}_{b_{ij}}^{b_{ij}} }, \end{equation} where $a_{i}$ and $a_{ij}$ label unoccupied orbitals in the truncated OSV and PNO bases, respectively. \item The updates are extrapolated with DIIS and back-transformed into the canonical basis: \begin{equation} \mathbf{\Delta}^{i} = \textbf{U}^{i} \bar{\mathbf{\Delta}}^{i}, \end{equation} \begin{equation} \mathbf{\Delta}^{ij} = \textbf{U}^{ij} \bar{\mathbf{\Delta}}^{ij} \textbf{U}^{ij \dagger}. \end{equation} \item The new CCSD amplitudes are formed as an update to the current amplitudes: \begin{equation} \textbf{T}^{i}_{n+1} = \textbf{T}^{i}_{n} + \mathbf{\Delta}^{i} , \end{equation} \begin{equation} \textbf{T}^{ij}_{n+1} = \textbf{T}^{ij}_{n} + \mathbf{\Delta}^{ij}, \end{equation} where $n$ is the current iteration number. 
\item The CCSD residuals are recomputed using the new amplitudes, and this process is repeated from step 1 until convergence is reached. \end{enumerate} Similarly, the state-averaged PNO simulation of EOM-CCSD is performed by modifying the Davidson solver: \begin{enumerate} \item The residuals produced by the Davidson algorithm are transformed into the OSV and PNO bases: \begin{equation} \bar{\textbf{R}}^{i}_{(k)} = \textbf{U}^{i \dagger} \textbf{R}^{i}_{(k)}, \end{equation} \begin{equation} \bar{\textbf{R}}^{ij}_{(k)} = \textbf{U}^{ij \dagger} \textbf{R}^{ij}_{(k)} \textbf{U}^{ij}. \end{equation} \item A preconditioner is applied to the residuals in the OSV and PNO spaces: \begin{equation} \bar{B}^{i}_{a_{i}(k)} = \frac{\bar{R}^{i}_{a_{i}(k)}} { \omega_{(k)} + f_{i}^{i} - \bar{f}_{a_{i}}^{a_{i}} }, \end{equation} \begin{equation} \bar{B}^{ij}_{a_{ij}b_{ij}(k)} = \frac{\bar{R}^{ij}_{a_{ij}b_{ij}(k)}} { \omega_{(k)} + f_{i}^{i} + f_{j}^{j} - \bar{f}_{a_{ij}}^{a_{ij}} - \bar{f}_{b_{ij}}^{b_{ij}} } , \end{equation} where $\omega_{(k)}$ is the eigenvalue of state $k$. \item The updated trial vectors are projected back into the canonical space: \begin{equation} \textbf{B}^{i}_{(k)} = \textbf{U}^{i} \bar{\textbf{B}}^{i}_{(k)} , \end{equation} \begin{equation} \textbf{B}^{ij}_{(k)} = \textbf{U}^{ij} \bar{\textbf{B}}^{ij}_{(k)} \textbf{U}^{ij \dagger} . \end{equation} \item The new trial vectors are added to the subspace in the next iteration of the Davidson algorithm, and the process is repeated from step 1 until convergence is reached. \end{enumerate} \section{Computational Details} \label{pno_eom_computational_detail} The canonical EOM-CCSD code was implemented and tested in the developmental version of the MPQC program.\cite{mpqc4} All computations were performed on a commodity cluster at Virginia Tech, each node of which has two Intel Xeon E5-2670 CPUs (332 GFLOPS) and 64 GB of RAM. 
MPQC was compiled using GCC 5.3.0 with Intel MPI 5.0 and the serial Intel MKL version 11.2.3. All computations launched 1 MPI process per node with 16 threads per MPI process, with the orbital block size set to 20. The state-averaged PNO-EOM-CCSD simulation code was also implemented in MPQC. To simplify the definition of the PNO and OSV truncation, we set $T_{\textmd{CutOSV}}$=$T_{\textmd{CutPNO}}/10$ for both ground and excited states. Neither domain nor weak-pair approximations were utilized. The occupied MOs were localized in all calculations via the Foster-Boys algorithm \cite{Foster1960, Boys1960}. The density-fitting (resolution-of-the-identity) approximation \cite{Feyereisen1993, Vahtras1993} and the frozen-core approximation were used in all calculations performed in this work. We used the cc-pVTZ \cite{Dunning1989} and aug-cc-pVD/TZ \cite{Kendall1992} atomic orbital basis sets in our calculations, with the corresponding auxiliary basis sets cc-pVTZ-RI and aug-cc-pVD/TZ-RI \cite{Weigend2002} for density fitting. In Section \ref{pno_eom_result}, the geometries of the methylated uracil dimer with water and the phenolate form of the anionic chromophore of the photoactive yellow protein were obtained from Ref. \citenum{Epifanovsky2013}. The structure of the 11-cis-retinal protonated Schiff base was obtained from Ref. \citenum{Dutta2016}. The structures of benzonitrile and acetamide were obtained from Ref. \citenum{Schreiber2008}. In Section \ref{section:error_analysis}, a total of 10 excited states was computed for the benchmark dataset of 28 organic molecules by Thiel and co-workers \cite{Schreiber2008}. \section{Results \& Discussion} \label{pno_eom_result} \subsection{Parallel Performance of EOM-CCSD} The new canonical EOM-CCSD code attains high efficiency and good parallel scalability, as illustrated in Figs. \ref{fig:eom:mU_h2o} and \ref{fig:eom:cis_retinal} for realistic computations (with the aug-cc-pVDZ and cc-pVTZ basis sets) on excited states of the methylated uracil dimer with water and the 11-cis-retinal protonated Schiff base, respectively. 
\begin{figure} \centering \includegraphics[width=0.7\linewidth]{mU_h2o_eom_time} \caption{Parallel performance of EOM-CCSD for 4 states of the methylated uracil dimer with water (39 atoms) with the aug-cc-pVDZ (645 basis functions) and cc-pVTZ (882 basis functions) basis sets.} \label{fig:eom:mU_h2o} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{cis_retinal_eom_time} \caption{Parallel performance of EOM-CCSD for 4 states of the 11-cis-retinal protonated Schiff base (51 atoms) with the aug-cc-pVDZ (753 basis functions) and cc-pVTZ (1050 basis functions) basis sets.} \label{fig:eom:cis_retinal} \end{figure} The data in Fig. \ref{fig:eom:mU_h2o} correspond to a $\sim$5-fold speedup from 16 to 128 nodes with the aug-cc-pVDZ basis and a $\sim$3-fold speedup from 32 to 128 nodes with the cc-pVTZ basis. The data in Fig. \ref{fig:eom:cis_retinal} correspond to a $\sim$5.5-fold speedup from 16 to 128 nodes with the aug-cc-pVDZ basis and a $\sim$1.6-fold speedup from 64 to 128 nodes with the cc-pVTZ basis. The demonstrated strong scaling is not as impressive as that of the ground-state CCSD program,\cite{Peng2016} but additional improvements are planned. The performance of our code is already sufficient to treat multiple states of systems with 50-100 atoms and 1000-1500 basis functions. \subsection{Accuracy of State-Averaged PNOs} To quantify the performance of state-averaged PNOs we computed the errors in excitation energies, relative to the canonical EOM-CCSD values, introduced by the truncation of PNOs (and the corresponding truncation of OSVs). Table \ref{table:eom:sa_pno} lists the PNO truncation errors for benzonitrile in the cc-pVTZ basis for a fixed value of the $T_{\textmd{CutPNO}}$ parameter, as a function of the number of computed states. 
\begin{table}[!h] \caption{Truncation errors in the excitation energies (eV) of benzonitrile (cc-pVTZ, V=283\textsuperscript{\emph{a}}) as a function of the total number of states at $T_{\textmd{CutPNO}}$=$10^{-8}$} \label{table:eom:sa_pno} \begin{center} \begin{tabular}{lrrrrrrr} \toprule nStates & 1 & 2 & 4 & 6 & 8 & 10 & 20 \\ ESnPNO & 63 & 79 & 91 & 95 & 101 & 104 & 133 \\ \midrule $S_1$ & 0.0072 & 0.0016 & 0.0014 & 0.0003 & -0.0001 & -0.0001 & -0.0011 \\ $S_2$ & & 0.0043 & 0.0023 & 0.0008 & 0.0005 & 0.0004 & -0.0005 \\ $S_3$ & & & 0.0235 & 0.0039 & 0.0019 & 0.0019 & 0.0005 \\ $S_4$ & & & 0.0263 & 0.0022 & 0.0022 & 0.0022 & 0.0009 \\ $S_5$ & & & & 0.0161 & 0.0173 & 0.0171 & 0.0040 \\ $S_6$ & & & & 0.0069 & 0.0049 & 0.0042 & 0.0004 \\ $S_7$ & & & & & 0.0876 & 0.0235 & 0.0050 \\ $S_8$ & & & & & 0.1834 & 0.0557 & 0.0012 \\ $S_9$ & & & & & & 0.0201 & -0.0001 \\ $S_{10}$ & & & & & & 0.0058 & 0.0044 \\ MAE \textsuperscript{\emph{b}} & 0.0072 & 0.0030 & 0.0134 & 0.0050 & 0.0372 & 0.0131 & 0.0018 \\ MAX \textsuperscript{\emph{c}} & 0.0072 & 0.0043 & 0.0263 & 0.0161 & 0.1834 & 0.0557 & 0.0050 \\ \bottomrule \end{tabular} \end{center} \begin{tablenotes} \item[1] \textsuperscript{\emph{a}} Total number of unoccupied orbitals \item[2] \textsuperscript{\emph{b}} Mean absolute error \item[3] \textsuperscript{\emph{c}} Max absolute error \end{tablenotes} \end{table} As expected, the average number of excited-state PNOs (ESnPNO) increases with the total number of states. However, the rate of increase is rather modest: raising the number of states from 1 to 20 increases the number of PNOs only by a factor of $\sim$2. Clearly, the total number of state-averaged PNOs grows with the number of states far more slowly than the linear growth of the total number of state-specific PNOs used by H\"attig and co-workers. 
This is not entirely surprising: since the low-energy states of a molecule share, to zeroth order, many occupied orbitals, the correlation effects will be largely similar among the states. Clearly, the use of state-averaged PNOs should offer substantial savings in the cost of the integral transformation. The errors in excitation energies also decrease as the total number of states increases, because of the concomitant increase in the number of PNOs. On average our approach to PNO construction is rather accurate: the mean absolute errors are below 0.02 eV for all cases except nStates=8, which has a mean absolute error of 0.037 eV. These errors are small relative to the average accuracy of the EOM-CCSD model, even for states with single-excitation character. Note that the mean absolute errors do not decrease smoothly as nStates increases. This is correlated with sporadic increases in the maximum absolute errors as the number of states is increased, such as for states 3 and 4 at nStates=4 and for states 7 and 8 at nStates=8; these errors are significantly reduced when nStates is increased to 6 and 10, respectively. This indicates that the highest excited states in the computed manifold sometimes have more significant errors with state-averaged PNOs, as can be observed from the nStates=4 and nStates=8 data. The reason for this behavior is that the composition of the $N$ lowest EOM-CCSD states may not be similar to that of CIS, either due to pure root flipping or, more generally, due to nonperturbative effects of dynamical correlation on the excited-state character and ordering. Analysis of the excited states in this example suggests that CIS states 3, 4, 5, and 6 become EOM-CCSD states 5, 6, 3, and 4. Therefore an accurate description of EOM-CCSD states 3 and 4 requires including pair densities from CIS(D) states 5 and 6. 
This is not a serious issue: the standard way to increase the probability that the $N$ lowest-energy target states have been captured in an excited-state computation is to compute $M > N$ states. Hence, a slightly larger error in a few of the highest computed states matters little, since the number of {\em computed} states typically exceeds the number of {\em target} states. Lastly, note that the $T_{\textmd{CutPNO}}$ threshold is kept constant in Table \ref{table:eom:sa_pno}. Therefore the averaged errors decrease as the number of states increases, at the cost of increasing the average number of state-averaged PNOs per pair. Clearly, if we wanted to keep the average error per state constant we could loosen the $T_{\textmd{CutPNO}}$ threshold as the number of states is increased. This would further alleviate the modest increase of the total number of PNOs with the number of states. The dependence of the error on $T_{\textmd{CutPNO}}$ is examined next. Tables \ref{table:eom:pypb_gs}, \ref{table:eom:pypb_es} and \ref{table:eom:pypb_gses} illustrate the correlation between the $T_{\textmd{CutPNO}}$ parameter and the errors in the excitation energies of the 4 lowest singlet excited states of the phenolate form of the anionic chromophore of the photoactive yellow protein (PYPb). Since $T_{\textmd{CutPNO}}$ affects the EOM-CCSD excitation energies through both the ground-state ($\hat{T}$) and excited-state ($\hat{R}$) operators, we examined its effects on the ground-state cluster operators only (Table \ref{table:eom:pypb_gs}), on the excited-state operators only (Table \ref{table:eom:pypb_es}), and on both (Table \ref{table:eom:pypb_gses}). 
\begin{table}[!h] \caption{Truncation error in excitation energy (eV) of PYPb (aug-cc-pVDZ, V=296 \textsuperscript{\emph{a}}) by truncating only the ground-state PNOs} \label{table:eom:pypb_gs} \begin{center} \begin{tabular}{lrrrrr} \toprule $T_{\textmd{CutPNO}}$ & $10^{-6}$& $10^{-7}$ & $10^{-8}$ & $10^{-9}$ & $10^{-10}$ \\ GSnPNO & 13 & 25 & 45 & 74 & 110 \\ \midrule $S_1$ & -0.0872 & -0.0289 & -0.0092 & -0.0030 & -0.0011 \\ $S_2$ & -0.0892 & -0.0301 & -0.0098 & -0.0033 & -0.0012 \\ $S_3$ & -0.0855 & -0.0287 & -0.0093 & -0.0031 & -0.0012 \\ $S_4$ & -0.0875 & -0.0296 & -0.0098 & -0.0033 & -0.0013 \\ \bottomrule \end{tabular} \end{center} \begin{tablenotes} \item[1] \textsuperscript{\emph{a}} Total number of unoccupied orbitals \end{tablenotes} \end{table} \begin{table}[!h] \caption{Truncation error in excitation energy (eV) of PYPb (aug-cc-pVDZ, V=296 \textsuperscript{\emph{a}}) by truncating only the excited-state PNOs} \label{table:eom:pypb_es} \begin{center} \begin{tabular}{lrrrrr} \toprule $T_{\textmd{CutPNO}}$ & $10^{-6}$& $10^{-7}$ & $10^{-8}$ & $10^{-9}$ & $10^{-10}$ \\ ESnPNO & 6 & 15 & 32 & 58 & 94 \\ \midrule $S_1$ & 0.1323 & 0.0564 & 0.0211 & 0.0053 & 0.0015 \\ $S_2$ & 0.1215 & 0.0525 & 0.0197 & 0.0047 & 0.0011 \\ $S_3$ & 0.1354 & 0.0635 & 0.0256 & 0.0078 & 0.0024 \\ $S_4$ & 0.1296 & 0.0611 & 0.0261 & 0.0092 & 0.0035 \\ \bottomrule \end{tabular} \end{center} \begin{tablenotes} \item[1] \textsuperscript{\emph{a}} Total number of unoccupied orbitals \end{tablenotes} \end{table} \begin{table}[!h] \caption{Truncation error in excitation energy (eV) of PYPb (aug-cc-pVDZ, V=296 \textsuperscript{\emph{a}}) by truncating both the ground- and excited-state PNOs} \label{table:eom:pypb_gses} \begin{center} \begin{tabular}{lrrrrr} \toprule $T_{\textmd{CutPNO}}$ & $10^{-6}$& $10^{-7}$ & $10^{-8}$ & $10^{-9}$ & $10^{-10}$ \\ GSnPNO & 13 & 25 & 45 & 74 & 110 \\ ESnPNO & 6 & 15 & 32 & 58 & 94 \\ \midrule $S_1$ & 0.0457 & 0.0276 & 0.0119 & 0.0022 & 0.0003 \\ $S_2$ & 
0.0332 & 0.0226 & 0.0098 & 0.0013 & -0.0002 \\ $S_3$ & 0.0500 & 0.0348 & 0.0163 & 0.0046 & 0.0012 \\ $S_4$ & 0.0422 & 0.0315 & 0.0163 & 0.0058 & 0.0022 \\ \bottomrule \end{tabular} \end{center} \begin{tablenotes} \item[1] \textsuperscript{\emph{a}} Total number of unoccupied orbitals \end{tablenotes} \end{table} As expected (see Table \ref{table:eom:pypb_gs}), truncating only the ground-state PNOs lowers the excitation energies (the errors are all negative), since the ground-state energy is raised by the decrease in the amount of correlation energy recovered. Similarly, truncating only the excited-state PNOs raises the excitation energies, since the excited-state energies are now higher as a result of recovering less of the correlation energy (Table \ref{table:eom:pypb_es}). When both the ground- and excited-state PNOs are truncated, the two sources of error have opposite signs and partially cancel, leading to smaller errors in the excitation energies, as can be seen in Table \ref{table:eom:pypb_gses}. However, this error cancellation may lead to occasional non-monotonic convergence. \subsection{Error Analysis} \label{section:error_analysis} To further test the performance of state-averaged PNOs, we used the PNO-EOM-CCSD method to compute the excitation energies of the lowest six singlet excited states of the 28 organic molecules in the benchmark dataset of Thiel {\em et al.} \cite{Schreiber2008}. The variation of statistical measures of the errors with the $T_{\textmd{CutPNO}}$ parameter is presented in Fig. \ref{fig:eom:benchmark_tz_ee} and Fig. \ref{fig:eom:benchmark_atz_ee} for the cc-pVTZ and aug-cc-pVTZ basis sets, respectively. The corresponding average numbers of ground-state and excited-state PNOs are shown in Fig. \ref{fig:eom:benchmark_tz_pno} and Fig. \ref{fig:eom:benchmark_atz_pno}, respectively. 
\begin{figure} \centering \includegraphics[width=0.7\linewidth]{benchmark_tz_ee} \caption{Mean absolute (MAE) and maximum (MAX) PNO truncation errors (in eV) of PNO-EOM-CCSD/cc-pVTZ excitation energies for the 28-molecule benchmark set.} \label{fig:eom:benchmark_tz_ee} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{benchmark_atz_ee} \caption{Mean absolute (MAE) and maximum (MAX) PNO truncation errors (in eV) of PNO-EOM-CCSD/aug-cc-pVTZ excitation energies for the 28-molecule benchmark set.} \label{fig:eom:benchmark_atz_ee} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{benchmark_tz_pno} \caption{ Convergence of the average number of PNOs per pair per molecule in the ground state (GSnPNO) and excited states (ESnPNO) for PNO-EOM-CCSD/cc-pVTZ. } \label{fig:eom:benchmark_tz_pno} \end{figure} \begin{figure} \centering \includegraphics[width=0.7\linewidth]{benchmark_atz_pno} \caption{ Convergence of the average number of PNOs per pair per molecule in the ground state (GSnPNO) and excited states (ESnPNO) for PNO-EOM-CCSD/aug-cc-pVTZ. } \label{fig:eom:benchmark_atz_pno} \end{figure} The average excitation-energy errors are already smaller than 0.1 eV with $T_{\textmd{CutPNO}}$=$10^{-6}$, drop below 0.02 eV with $T_{\textmd{CutPNO}}$=$10^{-7}$, and continue to decrease monotonically with tighter $T_{\textmd{CutPNO}}$ for both basis sets. The maximum errors also decrease monotonically, but require a tighter truncation, $T_{\textmd{CutPNO}}$=$\{10^{-7},10^{-8}\}$ for the \{cc-pVTZ, aug-cc-pVTZ\} basis sets, to fall below 0.1 eV. Hence, a threshold of $10^{-7}$ is suitable for general applications, while a threshold of $10^{-8}$ should be sufficient for high-accuracy applications. With these thresholds, the average numbers of ground-/excited-state PNOs are $\sim$50/70 and $\sim$100/120 for the cc-pVTZ and aug-cc-pVTZ basis sets, respectively, a significant reduction relative to the total number of virtual orbitals. 
For all values of $T_{\textmd{CutPNO}}$ the number of excited-state PNOs is only 20\%-30\% higher than the corresponding number of ground-state PNOs. \subsection{Rydberg and Charge Transfer States} To study the accuracy of state-averaged PNO-EOM-CCSD on excited states with Rydberg and charge-transfer character we selected two prototypical examples: states $S_1$ and $S_4$ of acetamide (Table \ref{table:eom:acetamide}) and states $S_1$ and $S_2$ of the ethylene-tetrafluoroethylene ($\ce{C2H4}-\ce{C2F4}$) model\cite{Dreuw2003} (Table \ref{table:eom:c2h4_c2f4}). The latter model was also used to test other PNO-based excited-state methods, by H\"attig and Helmich \cite{Helmich2013} and by Dutta {\em et al.} \cite{Dutta2016}. \begin{table}[!h] \caption{Truncation errors (eV) of the aug-cc-pVTZ \textsuperscript{\emph{a}} PNO-EOM-CCSD excitation energies of the four lowest singlet states of acetamide. States $S_1$ and $S_4$ have strong Rydberg character. Excited-state PNOs were averaged over the lowest 10 states.} \begin{center} \begin{tabular}{lrrrrr} \toprule $T_{\textmd{CutPNO}}$ & $10^{-6}$& $10^{-7}$ & $10^{-8}$ & $10^{-9}$ & $10^{-10}$ \\ ESnPNO & 32 & 69 & 125 & 191 & 245 \\ \midrule $S_1$ & 0.0699 & 0.0052 & -0.0017 & -0.0009 & -0.0002 \\ $S_2$ & 0.0038 & -0.0058 & -0.0032 & -0.0012 & -0.0005 \\ $S_3$ & 0.0185 & -0.0001 & -0.0028 & -0.0011 & -0.0003 \\ $S_4$ & 0.0464 & 0.0012 & -0.0023 & -0.0004 & 0.0000 \\ \bottomrule \end{tabular} \label{table:eom:acetamide} \end{center} \begin{tablenotes} \item[1] \textsuperscript{\emph{a}} Total number of unoccupied orbitals = 283. \end{tablenotes} \end{table} The truncation errors for the two Rydberg states ($S_1$ and $S_4$) were found to be somewhat larger than the errors for the valence states ($S_2$ and $S_3$) at $T_{\textmd{CutPNO}}$=$10^{-6}$, but they are comparable at $T_{\textmd{CutPNO}}$=$10^{-7}$ or tighter. 
Overall, no significant differences in the performance of excited-state PNOs are observed between Rydberg and non-Rydberg states. \begin{table}[!h] \caption{Truncation errors (eV) of the aug-cc-pVDZ \textsuperscript{\emph{a}} PNO-EOM-CCSD excitation energies of the four lowest singlet states of the \ce{C2H4}-\ce{C2F4} dimer separated by 10 a.u. (Ref. \citenum{Dreuw2003}). States $S_1$ and $S_2$ have charge-transfer character. Excited-state PNOs were averaged over the lowest 10 states.} \begin{center} \begin{tabular}{lrrrrr} \toprule $T_{\textmd{CutPNO}}$ & $10^{-6}$& $10^{-7}$ & $10^{-8}$ & $10^{-9}$ & $10^{-10}$ \\ ESnPNO & 8 & 19 & 37 & 63 & 93 \\ \midrule $S_1$ & 0.1330 & 0.0321 & 0.0087 & 0.0022 & 0.0006 \\ $S_2$ & 0.2059 & 0.0513 & 0.0100 & 0.0014 & 0.0001 \\ $S_3$ & 0.0153 & 0.0013 & 0.0004 & 0.0003 & 0.0001 \\ $S_4$ & 0.0164 & 0.0005 & -0.0001 & -0.0000 & 0.0000 \\ \bottomrule \end{tabular} \label{table:eom:c2h4_c2f4} \end{center} \begin{tablenotes} \item[1] \textsuperscript{\emph{a}} Total number of unoccupied orbitals = 188. \end{tablenotes} \end{table} The truncation errors for the two charge-transfer states were found to be substantially larger than those for the valence states at all truncation thresholds. $T_{\textmd{CutPNO}}$=$10^{-7}$ is required to reduce the errors below 0.1 eV, and $T_{\textmd{CutPNO}}$=$10^{-8}$ is sufficient to reduce the errors to 0.01 eV for charge-transfer excitations. This is in agreement with the findings of H\"attig and Helmich \cite{Helmich2011}, who pointed out that more PNOs are required to reach the same accuracy for charge-transfer excitations. They attributed this to the use of the semicanonical CIS(D) amplitudes in constructing the excited-state PNOs (Eq. \eqref{eq:cis_d}): with localized occupied orbitals the off-diagonal matrix elements of the Fock operator are substantial and cannot be neglected. Nevertheless, the performance of semicanonical PNOs is still acceptable. 
\section{Conclusions} We proposed a state-averaged PNO ansatz for the efficient and simple treatment of manifolds of excited states in the context of reduced-scaling excited-state many-body methods. We evaluated the performance of the state-averaged PNO ansatz in the context of the PNO-EOM-CCSD method for the prediction of excitation energies. The PNO-EOM-CCSD implementation is based on a new massively parallel canonical implementation of EOM-CCSD in the MPQC program. The state-averaged PNO-EOM-CCSD approach was tested on the first six excited states of 28 organic molecules, yielding an average truncation error below 0.020 eV at $T_{\textmd{CutPNO}}$=$10^{-7}$ for both the cc-pVTZ and aug-cc-pVTZ basis sets. With this truncation threshold, the number of state-averaged PNOs is reduced, relative to the full virtual space, by more than 70\% for cc-pVTZ and 80\% for aug-cc-pVTZ. Overall, the state-averaged PNOs provide excellent accuracy for low-lying valence and Rydberg states, but more PNOs are required to achieve the same accuracy for charge-transfer states. These results are sufficiently encouraging to warrant the development of a production-level PNO-EOM-CCSD code based on the state-averaged PNO definitions introduced here.
\section{Introduction} Let $G$ be a permutation group acting on a finite set $\Omega$ of size $n$. A subset $\Sigma$ of $\Omega$ is called a {\it base} for $G$ if the pointwise stabilizer of $\Sigma$ in $G$ is trivial. The minimal size of a base for $G$ on $\Omega$ is denoted by $b_{\Omega}(G)$ or by $b(G)$ in case $\Omega$ is clear from the context. The minimal base size of a primitive permutation group has been much investigated. Already in the nineteenth century Bochert \cite{Bochert} showed that $b(G) \leq n/2$ for a primitive permutation group $G$ of degree $n$ not containing $\mathrm{Alt}(n)$. This bound was substantially improved by Babai to $b(G) < 4 \sqrt{n} \log n$, for uniprimitive groups $G$, in \cite{BabaiAnnals}, and to the estimate $b(G) < 2^{c \sqrt{\log n}}$ for a universal constant $c$, for doubly transitive groups $G$ not containing $\mathrm{Alt}(n)$, in \cite{BabaiInvent}. (Here and throughout the paper the base of the logarithms is $2$ unless otherwise stated.) The latter bound was improved by Pyber \cite{PyberDoubly} to $b(G) < c {(\log n)}^{2}$ where $c$ is a universal constant. These estimates are elementary in the sense that their proofs do not require the Classification of Finite Simple Groups (CFSG). Using CFSG, Liebeck \cite{Liebeck84} classified all primitive permutation groups $G$ of degree $n$ with $b(G) \geq 9 \log n$. It is easy to see that any permutation group $G$ of degree $n$ satisfies $|G| < n^{b(G)}$, and hence $b(G) > \log |G| / \log n$. A well-known question of Pyber \cite[Page 207]{pyber}, going back to 1993, asks whether there exists a universal constant $c$ such that $b(G) < c (\log |G| / \log n)$ for all {\it primitive} groups $G$. 
This question generalizes other conjectures in the area: for example, the Cameron-Kantor conjecture, which asserts that every almost simple primitive group in a non-standard action has base size bounded by a universal constant $C$; and Babai's conjecture, that there is a function $f:\N\go \N$ such that any primitive group that has no alternating or classical composition factor of degree or dimension greater than $d$, has base size less than $f(d)$. The Cameron-Kantor conjecture was proved in \cite{LS99} (and in a strong form with $C=7$ in \cite{Burness}, \cite{BurnessLiebeckShalev}). Babai's conjecture was proved in \cite{GSS} with $f$ a quadratic function (improved to a linear function in \cite{LS99}). Despite a great deal of attention, Pyber's conjecture remained open until very recently, when it was proved in \cite{DHM}. It is shown in \cite{DHM} that there exists a universal constant $c>0$ such that for every primitive permutation group $G$ of degree $n$ we have \[ b(G) < 45 (\log |G| / \log n) + c. \] To obtain a more explicit, usable bound, one would like to reduce the multiplicative constant 45 in the above, and also to estimate the constant $c$. In this paper we achieve this aim. Our main result is the following. \begin{thm} \label{mainresult} Let $G$ be a primitive permutation group of degree $n$. Then the minimal base size $b(G)$ satisfies \[ b(G) \le 2 \frac{\log |G|}{\log n} + 24. \] \end{thm} The multiplicative constant 2 in Theorem \ref{mainresult} is best possible, as is shown by the following. \begin{prop} \label{acc}\leavevmode \begin{itemize} \item[{\rm (i)}] For every positive integer $k$ there exists a sequence of finite primitive permutation groups $G_n$ of degrees $n$ such that as $n \to \infty$, \[ (b(G_n) \log n) / \log |G_n| \to 2k/(k+1). 
\] \item[{\rm (ii)}] There is an infinite sequence of primitive permutation groups $H_n$ of degrees $n$ such that $b(H_n) = \lfloor 2 (\log |H_n|/\log n) \rceil - 2$ for all $n$ and $b(H_{n})$ is unbounded. \end{itemize} \end{prop} A corollary of Theorem \ref{mainresult} and its proof is the following. \begin{cor} \label{maincorollary} Let $G$ be a primitive permutation group of degree $n$ not containing $\mathrm{Alt}(n)$. Then $G$ has a base of size at most $\max\{\sqrt{n} , \ 25\}$. \end{cor} Theorem \ref{mainresult} is proved for almost simple groups in the next two sections (see Theorems \ref{generalalternating}, \ref{generalclassical} and \ref{almostsimple}): alternating and symmetric groups are handled in \S2, and classical groups in \S3. The remaining non-affine primitive groups are covered in \S4 (see Theorem \ref{nonaffine}), and affine groups in \S5 (Theorem \ref{generalaffine}). Proposition \ref{acc} follows from Proposition \ref{2k/(k+1)} and Proposition \ref{sp}. Finally, Corollary \ref{maincorollary} is proved in Section \ref{SecCor}. \section{Alternating and symmetric groups} In this section we consider the minimal base sizes of alternating and symmetric groups in primitive actions. Here is the main result. \begin{thm} \label{generalalternating} Let $G$ be a primitive permutation group of degree $n$ with socle isomorphic to $\mathrm{Alt}(m)$ for some integer $m \geq 5$. Then $$b(G) \leq 2 \frac{\log |G|}{\log n} + 16.$$ \end{thm} In the proof of Theorem \ref{generalalternating}, we may assume that $19 \leq b(G) \leq \log|G|$. In particular $m \geq 7$ and $G = \mathrm{Alt}(m)$ or $\mathrm{Sym}(m)$. Let $\O$ be a set of size $n$ permuted by $G$, let $\a \in \O$ and let $H = G_\a$, a maximal subgroup of $G$. 
There are three possibilities to consider, according to the action of $H$ on the underlying set $\{1,\ldots,m\}$: \begin{itemize} \item[(1)] $H$ is intransitive: here $H = (\mathrm{Sym}(k) \times \mathrm{Sym}(m-k))\cap G$ for some $k\le m/2$; \item[(2)] $H$ is transitive and imprimitive: here $H = (\mathrm{Sym}(b) \wr \mathrm{Sym}(a))\cap G$, where $m=ab$; \item[(3)] $H$ is primitive on $\{1,\ldots,m\}$. \end{itemize} In case (1), the action of $G$ on $\O$ is the action on $k$-element subsets of $\{1,\ldots,m\}$, and in case (2) the action is on partitions into $a$ parts of size $b$. These actions are considered in Sections \ref{Section2.1} and \ref{Section2.2}, and the proof of Theorem \ref{generalalternating} is completed in Section \ref{Section2.3}. \subsection{Action on subsets} \label{Section2.1} Here we prove Theorem \ref{generalalternating} in the case when the action is on subsets (see Proposition \ref{maink}). Let $\mathrm{Sym}(m)$ act on the set $\Omega(m,k)$ of all $k$-element subsets of the set $\{ 1, \ldots , m \}$, where $k\le m/2$. Set $n = |\Omega(m,k)| = \binom{m}{k}$. Let $b(m,k)$ denote the minimal size of a base for $\mathrm{Sym}(m)$ acting on $\Omega(m,k)$. For convenience set $t = m/k$. A detailed study of the function $b(m,k)$ was carried out in \cite{H12}. Here are the main results from that paper that we need. \begin{thm} {\rm (\cite[Thm. 3.2, Cor. 4.3]{H12})} \label{precise} \begin{itemize} \item[{\rm (i)}] We have $b(m,k)\leq \left\lceil\log_{\lceil t\rceil}(m)\right\rceil \left(\lceil t\rceil-1\right).$ \item[{\rm (ii)}] If $k^{2} \leq m$, then $$b(m,k) = \Big\lceil \frac{2m-2}{k+1} \Big\rceil < \frac{2m}{k+1} + 1 = \frac{2k}{k+1} t + 1.$$ \end{itemize} \end{thm} We shall need the following estimates for $\ln |\mathrm{Sym}(m)| / \ln |\Omega(m,k)|$.
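As a sanity check on Theorem \ref{precise}(ii), the value $b(m,k)$ can be recomputed by exhaustive search for tiny parameters. A Python sketch (the function name is ours; the search is exponential in $\binom{m}{k}$, so only very small $m$ are feasible):

```python
from itertools import combinations, permutations

def base_size_on_subsets(m, k):
    # Brute-force minimal base of Sym(m) acting on k-subsets of {0,...,m-1}.
    points = range(m)
    subsets = [frozenset(c) for c in combinations(points, k)]
    group = list(permutations(points))
    identity = tuple(points)
    for size in range(1, len(subsets) + 1):
        for sigma in combinations(subsets, size):
            stab = [g for g in group
                    if all(frozenset(g[x] for x in s) == s for s in sigma)]
            if stab == [identity]:
                return size
    return None

# Theorem precise(ii): b(m,k) = ceil((2m-2)/(k+1)) whenever k**2 <= m.
computed = {(m, k): base_size_on_subsets(m, k) for (m, k) in [(4, 2), (5, 2)]}
```

For $(m,k)=(4,2)$ and $(5,2)$ the brute-force values are $2$ and $3$, which agree with $\lceil (2m-2)/(k+1) \rceil$.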
\begin{lem} \label{binom} We have $$\Big( \frac{t}{\ln(t)+1} \Big) (\ln m - 1) < \frac{\ln |\mathrm{Sym}(m)|}{\ln |\Omega(m,k)|} < \Big( \frac{t}{\ln(t)} \Big) \ln m.$$ \end{lem} \begin{proof} By the inequalities $${(m/k)}^{k} < \binom{m}{k} < {(me/k)}^{k} \hbox{ and } {(m/e)}^{m} < m! < m^{m},$$ we have $$\frac{m (\ln m - 1)}{k (\ln (m/k) + 1)} < \frac{\ln |\mathrm{Sym}(m)|}{\ln |\Omega(m,k)|} < \frac{m \ln m}{k \ln (m/k)} = \frac{m/k}{\ln(m/k)} \ln m.$$ From this the lemma follows. \end{proof} The next result establishes the conclusion of Theorem \ref{generalalternating} under the assumption that $k^{2} \leq m$. \begin{lem} \label{klarge} Assume that $k^{2} \leq m$. Then $$b(m,k) < 2 \frac{\ln |\mathrm{Sym}(m)|}{\ln |\Omega(m,k)|} + 4.$$ \end{lem} \begin{proof} Assume first that $k \geq 8 > e^{2}$. By Theorem \ref{precise}(ii) and Lemma \ref{binom}, we have $$\frac{b(m,k) \ln n}{\ln |\mathrm{Sym}(m)|} < \Big( \frac{2t+1}{t} \Big) \Big( \frac{\ln(t)+1}{\ln(m)-1} \Big).$$ Since $k \geq 8 > e^{2}$, it follows that $\frac{\ln(t)+1}{\ln(m)-1} < 1$. By this and Lemma \ref{binom}, $$b(m,k) < 2 \frac{\ln |\mathrm{Sym}(m)|}{\ln n} + \Big( \frac{\ln (m)}{\ln(m)-1} \Big) \Big( \frac{\ln(t) +1}{\ln(t)} \Big).$$ It is easy to see that the second term is less than $4$, giving the conclusion in this case ($k \geq 8 >e^2$). Hence we may assume that $k \leq 7$. A GAP \cite{GAP} computation shows that the bound in the conclusion of the lemma holds for $5 \leq m \leq 148 < e^{5}$. Thus assume also that $m \geq 149 > e^{5}$. If $2 \leq k \leq 7$ then Theorem \ref{precise} gives $b(m,k) < \frac{2k}{k+1}t + 1$, and so by Lemma \ref{binom}, $$\frac{b(m,k) \ln n}{\ln |\mathrm{Sym}(m)|} < \Big(\frac{2k}{k+1} + \frac{k}{m}\Big) \Big(\frac{\ln m - \ln k + 1}{\ln m -1}\Big).$$ This is less than $2$ for $m \geq 149 > e^{5}$. \end{proof} Here is the main result of this subsection. 
\begin{prop} \label{maink} We have $$b(m,k) \leq 2 \frac{\ln |\mathrm{Sym}(m)|}{\ln |\Omega(m,k)|} + 16.$$ \end{prop} \begin{proof} By Lemma \ref{klarge}, we may assume that $k^{2} > m$, which is equivalent to saying that $t^{2} < m$. Define $r$ to be the integer $r \geq 2$ with $t^{r} < m \leq t^{r+1}$. Then by Theorem \ref{precise}(i), we have $b(m,k) \leq (r+1)t$. By Lemma \ref{binom}, this gives $$\frac{b(m,k) \ln n}{\ln m!} < \frac{(r+1)(\ln t + 1)}{r \ln t - 1}.$$ A GAP \cite{GAP} computation shows that the right hand side is less than $2$ provided that $r = 2$ and $t \geq 149 > e^{5}$, or $r = 3$ and $t \geq 20$, or $r \geq 4$ and $t \geq 11$. If $r = 3$ and $t \leq 20 < e^3$, then $4t - 2 (\frac{3 \ln t - 1}{\ln t + 1} ) t \leq 16$, which gives the conclusion (using Lemma \ref{binom}). Similarly, if $r \geq 4$ and $t < 11$, then $(r+1)t - 2 (\frac{r \ln t - 1}{\ln t + 1} ) t \leq 11$, giving the conclusion. This leaves the case where $r=2$ and $t \leq 148 < e^5$. We first distinguish eleven different cases according to some possible ranges of values of $\ln t$. If $\ln t$ falls in any of the intervals $[\epsilon,\epsilon + 0.2]$ where $\epsilon = 2.8 + 0.2 \ell$ and $\ell$ is a non-negative integer at most $10$, then $3t - 2 (\frac{2 \ln t - 1}{\ln t + 1} ) t \leq 16$. Thus we may assume that $t < e^{2.8}$. But then $m \leq t^{3} < e^{8.4} < 4500$. By a GAP \cite{GAP} calculation, we see that if $5 \leq m \leq 4500$, then $3t - 2 (\frac{2 \ln t - 1}{\ln t + 1} ) t \leq 11$. This completes the proof. \end{proof} The final result of this subsection gives the first part of Proposition \ref{acc}. \begin{prop} \label{2k/(k+1)} Fix a positive integer $k$. Then as $m \to \infty$, $$\frac{b(m,k) \log|\Omega(m,k)|}{\log|\mathrm{Sym}(m)|} \to 2k/(k+1).$$ \end{prop} \begin{proof} Assume that $m \geq k^{2}$. Then, by Theorem \ref{precise}(ii), $b(m,k)=\left\lceil\frac{2m-2}{k+1}\right\rceil$, and hence $b(m,k)/m \to \frac{2}{k+1}$ as $m\to \infty$.
Also $(m \ln|\Omega(m,k)| / \ln|\mathrm{Sym}(m)|) \to k$ by Lemma \ref{binom}. The result follows. \end{proof} \subsection{Action on partitions} \label{Section2.2} Now consider the minimal base size $f(a,b)$ of the group $\mathrm{Sym}(m)$ acting on the set $\Omega$ of all partitions of $\{ 1, \ldots, m \}$ into $a$ parts each of size $b$, where $m = ab$ and $a$, $b \geq 2$. In this case $n = |\Omega| = m!/({b!}^{a}a!)$. Bases for this action were studied in \cite{BCN}, where the following was proved. \begin{thm}\label{bcnprop} {\rm (\cite{BCN})} Suppose $b\ge 3$. Then one of the following holds: \begin{itemize} \item[{\rm (i)}] $a\ge b$ and $f(a,b)\le 6$; \item[{\rm (ii)}] $a<b$ and $f(a,b) \leq \log_{a}(b) + 4$. \end{itemize} \end{thm} We shall need the following bound. \begin{lem} Let $a,b$ be integers with $2\le a<b$. Then $$\frac{\ln b}{\ln a} - 1 < \frac{\ln((ab)!)}{\ln \Big(\frac{(ab)!}{{(b!)}^{a}a!}\Big)}.$$ \end{lem} \begin{proof} Write $g(a,b) = \ln((ab)!)\,/\,\ln \Big(\frac{(ab)!}{{(b!)}^{a}a!}\Big)$. Then using the bounds $$\sqrt{2 \pi} \cdot {\ell}^{1/2} {\Big(\frac{\ell}{e}\Big)}^{\ell} < \ell ! < e \cdot {\ell}^{1/2} {\Big(\frac{\ell}{e}\Big)}^{\ell}$$ which hold for all positive integers $\ell$, we have \[ \begin{array}{rl} g(a,b) >& \frac{ab (\ln (ab)-1)}{\ln((ab)!) - a \ln(b!) - \ln(a!)} \\ >& \frac{ab(\ln(ab)-1)}{ \ln(e/\sqrt{2\pi}) + ab(\ln a) + \frac{1}{2}\ln(ab) - a \ln(\sqrt{2\pi}) - \frac{1}{2}a \ln b - \frac{1}{2}\ln a - a \ln a + a}\\ =& \frac{\ln(ab)-1}{\ln a + \frac{1}{b}(1 - \ln a - \frac{1}{2}\ln(2 \pi)) + \ln(e/\sqrt{2\pi})/(ab) + \frac{1}{2b} ( (\ln (b))/a - \ln b ) }\\ \geq & \frac{\ln(ab)-1}{\ln a + \frac{1}{b}(1 - \ln a - \frac{1}{2}\ln(2 \pi) - \frac{\ln b}{4}) + \ln(e/\sqrt{2\pi})/(ab) } \\ \geq & \frac{\ln(ab)-1}{\ln a + \frac{1}{b}(1 - \ln 2 - \frac{1}{2}\ln(2 \pi) - \frac{\ln 3}{4} + \ln(e/\sqrt{2\pi})/2) } \\ > & \frac{\ln(ab)-1}{\ln a - (0.8)/b} > \frac{\ln b}{\ln a} - 1.
\end{array} \] \end{proof} Here is the main result of this subsection. \begin{prop}\label{partition} With the above notation, we have $f(a,b) \leq \frac{\ln |\mathrm{Sym}(m)|}{\ln n} + 5$. \end{prop} \begin{proof} If $b\ge 3$, this follows immediately from Theorem \ref{bcnprop}. And for $b=2$, Remark 1.6(ii) of \cite{BGS} gives $f(a,2) \leq 3$. \end{proof} \subsection{Proof of Theorem \ref{generalalternating}} \label{Section2.3} Let $G = \mathrm{Alt}(m)$ or $\mathrm{Sym}(m)$ act primitively on a set $\O$, and let $H$ be a point-stabilizer in $G$. The cases where $\O$ is a set of $k$-subsets or partitions of $\{1,\ldots ,m\}$ have been dealt with in Propositions \ref{maink} and \ref{partition}. Hence by the remarks at the beginning of the section, we may assume that $H$ is primitive on $\{1,\ldots ,m\}$. In this case, it is proved in \cite[Cor. 2]{BGS} that $b_{\Omega}(G) \leq 5$. This completes the proof of Theorem \ref{generalalternating}. \section{Classical groups} \label{Section3} In this section we study base sizes of primitive actions of classical groups. Our main result is the following. \begin{thm} \label{generalclassical} Let $G$ be an almost simple primitive permutation group of degree $n$ whose socle is a classical simple group. Then $b(G) \leq 2 (\log|G| / \log n) + 16$. \end{thm} We shall divide the proof of this theorem into several subcases. First we give a definition, taken from \cite{LS99}. Let $G$ be an almost simple group with socle $G_0$, a classical group with natural module $V$, a vector space of dimension $d$ over a field $\F_q$ of characteristic $p$. 
We call a maximal subgroup $M$ of $G$ a {\it subspace subgroup} if it is reducible on $V$, or is an orthogonal group on $V$ embedded in a symplectic group with $p=2$; more specifically, $M$ is a subspace subgroup if one of the following holds: \begin{itemize} \item[(1)] $M=G_U$ for some proper nonzero subspace $U$ of $V$, where $U$ is totally singular, non-degenerate, or, if $G$ is orthogonal and $p=2$, a nonsingular 1-space ($U$ is any subspace if $G_0=PSL(V)$); \item[(2)] $G_0 = PSL(V)$, $G$ contains a graph automorphism of $G_0$, and $M\cap G_0 = (G_0)_{U,W}$ where $U,W$ are proper nonzero subspaces of $V$, $\dim V = \dim U+\dim W$ and either $U\subseteq W$ or $V = U\oplus W$; \item[(3)] $G_0 = Sp_{2m}(q)$, $p=2$ and $M\cap G_0 = O^{\pm}_{2m}(q)$. \end{itemize} Note that in (3), if we regard $G_0$ as the isomorphic orthogonal group $O_{2m+1}(q)$, then $M\cap G_0 = O_{2m}^\pm(q)$ is the stabilizer of a hyperplane of the natural module of dimension $2m+1$. If $M$ is a subspace subgroup, we call the action of $G$ on the coset space $G/M$ a {\it subspace action}. Bases for non-subspace actions of classical groups were studied in detail in \cite{Burness}, so our main task is to prove Theorem \ref{generalclassical} for subspace actions. First we require the following general bound. \begin{prop} \label{prop:ClassicalBound_nperk} Let $G$ be as above, and suppose $M$ is as in $(1)$, so that the coset space $X = G/M$ is a $G$-orbit of $k$-dimensional subspaces of $V$, for some $k$. Assume also that $k \le d/2$. Then \[ \frac{\log|G|}{\log|X|} \ge \frac{d}{tk}-1, \] where $t=1$ if $G_0 = PSL(V)$, and $t=2$ otherwise. \end{prop} \begin{proof} Observe that $|G| > q^{(d^2/t)-d}$, while \[ |X|\le \binom{d}{k}_q:=\frac{(q^d-1)(q^d-q)\cdots (q^d-q^{k-1})} {(q^k-1)(q^k-q)\cdots (q^k-q^{k-1})}\leq \left(\frac{q^d}{q^{k-1}}\right)^k= q^{dk-k^2+k}. \] Hence \[\frac{\log|G|}{\log|X|}\geq \frac{(d^2/t)-d}{dk-k^2+k}\geq \frac{d}{tk}-1. 
\] \end{proof} \subsection{Action on an orbit of subspaces} \label{classicalorbit} In this subsection we prove Theorem \ref{generalclassical} for subspace actions of classical simple groups as in case (1) in the list above. This is the main part of the proof of the theorem. \begin{thm} \label{classicalmain} Let $G$ be a simple classical group on $V$, a vector space of dimension $d$ over $\F_q$. Let $X$ be a $G$-orbit of $k$-dimensional subspaces of $V$ with $k \leq d/2$, on which $G$ acts primitively. Then $$b_X(G)\leq \frac{d}k+11.$$ \end{thm} \begin{proof} In every subcase, we will define a base $\mB\subset X$ of $G$ in a number of steps. We do this by starting with $\mB=\emptyset$ and at each step adding some subspaces to $\mB$. Throughout the proof, $G_{(\mB)}$ denotes the pointwise stabilizer of $\mB$ -- that is, the set of group elements that fix all the subspaces in $\mB$. \vspace{2mm} \noindent \emph{Action on the set of all $k$-dimensional subspaces.} First, let us assume that $X$ is the set of all $k$-dimensional subspaces. Let $d=ak+r$ for $a\geq 2$ and $0\leq r<k$. Take any direct sum decomposition $V=V_1\oplus \ldots\oplus V_a\oplus U$ with $\dim V_i=k$ for $1\leq i\leq a$, and put $V_1,\ldots,V_a$ into $\mB$. Fix a basis $B_i=\{x_1^{(i)},\ldots, x_k^{(i)}\}$ of $V_i$ for each $i$ and define $W_1=\langle \sum_{i=1}^a x_s^{(i)}\,|\,1\leq s\leq k\rangle\in X$ and put $W_1$ into $\mB$. Then the matrix form of the restriction of a $g\in G_{(\mB)}$ to $V_1\oplus\ldots\oplus V_a$ is a block diagonal matrix with equal blocks (with respect to the basis $B_1\cup\ldots\cup B_a$). Define $M_g=[g_{V_1}]_{B_1}=[g_{V_2}]_{B_2}=\ldots=[g_{V_a}]_{B_a}$ and let $\{\gamma,\delta\}\subset SL(V_2)$ be a generating set of $SL(V_2)$ and $C=[\gamma]_{B_2},\,D=[\delta]_{B_2}$ be their matrix forms.
Then $g\in G_{(\mB)}$ fixes the subspaces \[ W_2=\langle x_s^{(1)}+\gamma(x_s^{(2)})\,|\,1\leq s\leq k\rangle\in X,\quad W_3=\langle x_s^{(1)}+\delta(x_s^{(2)})\,|\,1\leq s\leq k\rangle\in X \] if and only if $M_g$ commutes with both $C$ and $D$. Thus, putting $W_2$ and $W_3$ into $\mB$, it follows that $G_{(\mB)}$ acts as scalars on $V_1\oplus\ldots\oplus V_a$. Finally, if $r>0$ then let $\{f_{1},\ldots,f_r\}$ be a basis of $U$ and define \[ W_4=\langle f_1,\ldots,f_r,x_{r+1}^{(1)},\ldots,x_{k}^{(1)}\rangle,\qquad W_5=\langle f_1+x_1^{(2)},\ldots,f_r+x_r^{(2)},x_{r+1}^{(2)},\ldots,x_{k}^{(2)} \rangle. \] Adding $W_4$ and $W_5$ to $\mB$ it is easy to see that $G_{(\mB)}$ contains only scalar transformations. Thus, $b_X(G)\leq a+5\leq \frac{d}k+5$ for this case. \vspace{2mm} \noindent \emph{Action on an orbit of non-degenerate subspaces.} Now we turn to the case when $G$ is a group fixing some non-degenerate form $[\,,\,]$ on $V$ and $X$ is a $G$-orbit of non-degenerate subspaces. In the special case $d=2k$, we also assume that the Witt index of elements of $X$ is no more than the Witt index of elements of $X^\perp$ (this is only interesting in the orthogonal case, when $k$ is even and $V$ has Witt index $k-1$). This will guarantee that the sums defining $u_i$ and $v_i$ below will have at least two terms. Let $d=ak+r$ with $1\leq r\leq k$ and take any orthogonal decomposition $V=V_1\oplus \ldots\oplus V_a\oplus V_{a+1}$ with $V_1,\ldots,V_a\in X$. Put $V_1,\ldots,V_a$ into $\mB$. Then any $g\in G_{(\mB)}$ also fixes $V_{a+1}=(\sum_{i=1}^a V_i)^\perp$. Let $l$ and $m\leq 2$ denote the Witt index and the Witt defect of the subspaces in $X$, so $k=2l+m$. First, let us assume that $l\geq 1$. 
Then for every $1\leq s\leq a$, the subspace $V_s$ is a direct sum of orthogonal subspaces $V_s=V_s^{(h)}\oplus V_s^{(m)}$, where each $V_s^{(h)}$ contains a basis $B_s=\{x_1^{(s)},\ldots,x_l^{(s)},y_1^{(s)},\ldots,y_l^{(s)}\}$ with $[x_i^{(s)},x_j^{(s)}]=[y_i^{(s)},y_j^{(s)}]=0,\,[x_i^{(s)},y_j^{(s)}]=\delta_{ij}$ for all $i,j$ and $V_s^{(m)}$ has dimension and Witt defect $m$. (For orthogonal groups of characteristic 2, we also have $Q(x_i^{(s)})=Q(y_i^{(s)})=0$ for all $i$, where $Q$ is the underlying quadratic form.) Furthermore, let $l'$ be the minimum of $l$ and the Witt index of $V_{a+1}$ and $m'=\dim (V_{a+1})-2l'$. Similarly to the above, choose an orthogonal decomposition $V_{a+1}=V_{a+1}^{(h)}\oplus V_{a+1}^{(m)}$ along with a basis $B_{a+1}=\{x_1^{(a+1)},\ldots,x_{l'}^{(a+1)}, y_1^{(a+1)},\ldots,y_{l'}^{(a+1)}\}$ of $V_{a+1}^{(h)}$ satisfying $[x_i^{(a+1)},x_j^{(a+1)}]=[y_i^{(a+1)},y_j^{(a+1)}]=0,\, [x_i^{(a+1)},y_j^{(a+1)}]=\delta_{ij}$ for $1\leq i,j\leq l'$, and define $x_i^{(a+1)}=y_i^{(a+1)}=0$ for $l'<i\leq l$. For $1\leq i\leq l$, define \[ u_i=\sum_{s=1}^{a+1} x_i^{(s)},\, v_i=\sum_{s=1}^{(a+1)} y_i^{(s)}. \] We define the subspaces \begin{align*} W_1^{(h)}&=\langle u_1,\ldots,u_l,y_1^{(1)},\ldots,y_l^{(1)}\rangle,\quad &W_2^{(h)}&=\langle x_1^{(1)},\ldots,x_l^{(1)},v_1,\ldots,v_l\rangle,\\ W_3^{(h)}&=\langle u_1,\ldots,u_l,y_1^{(2)},\ldots,y_l^{(2)}\rangle,\quad &W_4^{(h)}&=\langle x_1^{(2)},\ldots,x_l^{(2)},v_1,\ldots,v_l\rangle. \end{align*} Then $W_t:=W_t^{(h)}\oplus V_1^{(m)}\in X$ for each $1\leq t\leq 4$. Adding $W_1,W_2,W_3,W_4$ to $\mB$, we see that any $g\in G_{(\mB)}$ fixes each $V_s^{(h)}$ and, moreover, the matrix form of each restriction $g_{V_s^{(h)}}$ satisfies \[ \left[g_{V_1^{(h)}}\right]_{B_1}= \left[g_{V_2^{(h)}}\right]_{B_2}=\ldots= \left[g_{V_a^{(h)}}\right]_{B_a}=\begin{pmatrix}A_g&0\\0&B_g\end{pmatrix} \] for some $A_g,B_g\in GL(l,q)$. 
The use of the $x_i^{(a+1)},y_i^{(a+1)}$ as summands in the $u_i$ and $v_i$ also implies that \[ \left[g_{V_{a+1}^{(h)}}\right]_{B_{a+1}}= \begin{pmatrix}A_g'&0\\0&B_g'\end{pmatrix}, \] where $A_g'$ and $B_g'$ are left upper $l'\times l'$ submatrices of $A_g$ and $B_g$, respectively. Adding also the subspace $W_5=W_5^{(h)}\oplus V_1^{(m)}$ with $W_5^{(h)}:=\langle x_i^{(1)},y_i^{(1)}+x_i^{(2)}\,|\,1\leq i\leq l\rangle$ we can also guarantee that $A_g=B_g$ holds for any $g\in G_{(\mB)}$. Let $B_2^{(x)}=\{ x_1^{(2)},\ldots,x_l^{(2)}\}$ and $V_2^{(x)}$ be the subspace spanned by $B_2^{(x)}$. Choose $\varphi,\psi\in \textrm{End}(V_2^{(x)})$ generating $\textrm{End}(V_2^{(x)})$ as an algebra. Define \[ W_6^{(h)}=\langle x_i^{(1)}+\varphi(x_i^{(2)}), y_i^{(1)}\,|\,1\leq i\leq l\rangle,\quad W_7^{(h)}=\langle x_i^{(1)}+\psi(x_i^{(2)}), y_i^{(1)}\,|\,1\leq i\leq l\rangle. \] and let $W_6:=W_6^{(h)}\oplus V_1^{(m)},\ W_7:=W_7^{(h)}\oplus V_1^{(m)}$. Adding $W_6,W_7$ to $\mB$, we see that $g_{V_2^{(x)}}$ commutes with both $\varphi$ and $\psi$ for any $g\in G_{(\mB)}$. Thus $g_{V_2^{(x)}}$ is a scalar transformation. Therefore, any $g\in G_{(\mB)}$ acts as a scalar on $V_1^{(h)}\oplus\ldots\oplus V_{a+1}^{(h)}$. For every $1\leq s\leq a+1$, choose other orthogonal decompositions $V_s=V_s^{(h)'}\oplus V_s^{(m)'}$ such that each $V_s^{(h)'}$ is isometric to $V_s^{(h)}$ and $V_s^{(h)}+V_s^{(h)'}=V_s$. Applying similar constructions as for $W_1,\ldots,W_4$ before, by adding $4$ further subspaces to $\mB$ we can synchronize the action of any $g\in G_{(\mB)}$ on each $V_s^{(h)'}$ with its action on each $V_t^{(h)}$. Thus, now each $g\in G_{(\mB)}$ acts as a scalar on the whole vector space $V_1\oplus\ldots\oplus V_{a+1}$. Thus, $b_X(G)\leq a+7+4\leq \frac{d}{k}+11$ in this case. Now, let us assume that $l=0$, so $k=m\leq 2$. The case $k=1$ is trivial. The case $k=m=2$ implies that $V$ is orthogonal. We can also assume that $a\geq 3$, since otherwise $d\leq 6$. 
Then each $V_s$ has a basis $x^{(s)},y^{(s)}$ with $Q(x^{(s)})=1,\, Q(y^{(s)})=\alpha,\,[x^{(s)},y^{(s)}]=1$, where $\alpha\in\F_q$ is such that the polynomial $t^2+t+\alpha$ is irreducible over $\F_q$. Additionally, choose an arbitrary spanning set $\{ x^{(a+1)},y^{(a+1)} \}$ of $V_{a+1}$. Since $\{Q(z)\,|\,z\in V_s\}=\F_q$, we can define \[u_1=\sum_{s=2}^{a+1} x^{(s)}+z^{(1)},\quad v_1=\sum_{s=2}^{a+1} y^{(s)}+w^{(1)},\quad u_2=\sum_{s=1}^{a-1} x^{(s)}+z^{(a)},\quad v_2=\sum_{s=1}^{a-1}y^{(s)}+w^{(a)} \] with $z^{(1)},w^{(1)}\in V_1,\,z^{(a)},w^{(a)}\in V_a$ such that $Q(u_1)=Q(u_2)=1,\,Q(v_1)=Q(v_2)=\alpha$. Now, let \begin{gather*} W_1=\langle u_1 ,y^{(2)}\rangle,\quad W_2=\langle u_1 ,y^{(3)}\rangle,\quad W_3=\langle x^{(2)},v_1 \rangle,\quad W_4=\langle x^{(3)},v_1 \rangle,\\ W_5=\langle u_2 ,y^{(1)}\rangle,\quad W_6=\langle u_2 ,y^{(2)}\rangle,\quad W_7=\langle x^{(1)},v_2 \rangle,\quad W_8=\langle x^{(2)},v_2 \rangle. \end{gather*} Adding each $W_i$ to $\mB$ we see that the restriction of any $g\in G_{(\mB)}$ to the subspaces $V_1,\ldots,V_a$ has matrix form \[ \Big[g_{V_1}\Big]_{\{x^{(1)},y^{(1)}\}}=\ldots= \Big[g_{V_a}\Big]_{\{x^{(a)},y^{(a)}\}}= \begin{pmatrix}c&0\\0&d\end{pmatrix} \textrm{ for some }c,d\in \F_q. \] Using that $Q(g(x^{(s)}))=1,\,Q(g(y^{(s)}))=\alpha,\,[g(x^{(s)}),g(y^{(s)})]=1$ it follows that $c=d=\pm 1$, so any $g\in G_{(\mB)}$ acts on $V_1\oplus\ldots\oplus V_a$ as a scalar transformation. The use of $x^{(a+1)},y^{(a+1)}$ guarantees that $g\in G_{(\mB)}$ is a scalar on the whole of $V$. Thus, $b_X(G)\leq \frac{d}k+8$ for this case. \vspace{2mm} \noindent \emph{Action on an orbit of totally singular subspaces.} From now on, let $X$ be the set of $k$-dimensional totally singular subspaces of $V$. Again, we can assume that $k\geq 2$. Let $l$ be the Witt index and let $m\leq 2$ be the Witt defect of $V$, so $k\leq l$ (since otherwise $X=\emptyset$) and $d =2l+m$.
Let $l=ak+r$ for $0\leq r<k$ and denote $w(s)=k$ for $1\leq s\leq a$ and $w(a+1)=r$. Take an orthogonal decomposition $V=V_1\oplus\ldots\oplus V_a\oplus V_{a+1}\oplus U$ such that $V_s$ has dimension $2w(s)$ and Witt index $w(s)$ for each $1\leq s\leq a+1$. For every $1\leq s\leq a+1$ let $B_s=\{x_1^{(s)},\ldots,x_{w(s)}^{(s)},y_1^{(s)},\ldots,y_{w(s)}^{(s)}\}$ be a basis of $V_s$ such that $V_s^{(x)}=\langle x_1^{(s)},\ldots,x_{w(s)}^{(s)}\rangle, \ V_s^{(y)}=\langle y_1^{(s)},\ldots,y_{w(s)}^{(s)}\rangle$ are $w(s)$-dimensional singular subspaces, and $[x_i^{(s)},y_j^{(s)}]=\delta_{ij}$ for every $1\leq i,j\leq w(s)$. Furthermore, define $x_i^{(a+1)}=y_i^{(a+1)}=0$ for $r<i\leq k$. Finally, take the additional $k$-dimensional singular subspaces \[ \begin{array}{l} V_{a+1}^{(x)'}=\langle x_1^{(a+1)},\ldots,x_r^{(a+1)}, x_{r+1}^{(1)},\ldots,x_k^{(1)}\rangle,\\ V_{a+1}^{(y)'}=\langle y_1^{(a+1)},\ldots,y_r^{(a+1)}, y_{r+1}^{(1)},\ldots,y_k^{(1)}\rangle. \end{array} \] Let $u_i=\sum_{s=1}^{a+1} x_i^{(s)},\, v_i=\sum_{s=1}^{a+1} y_i^{(s)}$ for $1\leq i\leq k$ and define \[ W_1=\langle u_1,\ldots,u_k\rangle,\qquad W_2=\langle v_1,\ldots,v_k\rangle. \] First, add each of the subspaces $V_1^{(x)},V_1^{(y)},\ldots,V_a^{(x)},V_a^{(y)}, V_{a+1}^{(x)'},V_{a+1}^{(y)'},W_1,W_2$ to $\mB$. Then the subspaces $V_s$ for each $1\leq s\leq a+1$ are fixed by any $g\in G_{(\mB)}$ and the restrictions $g_{V_s}$ have matrix form \begin{gather*} \Big[g_{V_1}\Big]_{B_1}=\ldots=\Big[g_{V_a}\Big]_{B_a}= \begin{pmatrix}A_{g}&0\\0&(A_{g})^{-T}\end{pmatrix},\\ \Big[g_{V_{a+1}}\Big]_{B_{a+1}}= \begin{pmatrix}A'_{g}&0\\0&(A'_{g})^{-T}\end{pmatrix}, \end{gather*} where $A_g\in GL(k,q)$ and $A'_g$ is the left upper $r\times r$ submatrix of $A_g$.
Next, we define additional $k$-dimensional singular subspaces of the form \begin{align*} W^{(x)}(C)&=\Big\langle \sum_{s=1}^{a+1}\Big(x_j^{(s)}+ \sum_{i=1}^k c_{ij}y_i^{(s)}\Big)\,\Big| \,1\leq j\leq k\Big\rangle,\\ W^{(y)}(C)&=\Big\langle \sum_{s=1}^{a+1}\Big(y_j^{(s)}+ \sum_{i=1}^k c_{ij}x_i^{(s)}\Big)\,\Big| \,1\leq j\leq k\Big\rangle, \end{align*} where $C=(c_{ij})\in M(k,q)$. The subspaces $W^{(x)}(C)$ and $W^{(y)}(C)$ are singular if the matrix $C$ is symmetric (when $V$ is a symplectic space) or anti-symmetric (when $V$ is an orthogonal or a unitary space). Furthermore, $g\in G_{(\mB)}$ fixes $W^{(x)}(C)$ (resp. $W^{(y)}(C)$) if and only if $A_g^T C=CA_g^{-1}$ (resp. $A_g C=CA_g^{-T})$ holds. First, let us assume that $V$ is a symplectic space and choose symmetric matrices $C,D\in M(k,q)$ which generate the full matrix algebra $M(k,q)$ (as an algebra). Adding $W^{(y)}(I), W^{(y)}(C), W^{(y)}(D)$ to $\mB$ we see that any $g\in G_{(\mB)}$ satisfies $A_{g}=A_g^{-T}$, and, therefore, $A_{g}C=CA_g,\, A_{g}D=DA_g$. It follows that $A_g$ is a scalar matrix for any $g\in G_{(\mB)}$. Thus, $g$ acts as a scalar on the whole $V=V_1\oplus\ldots\oplus V_a\oplus V_{a+1}$. Now, let $V$ be an orthogonal or unitary space and choose antisymmetric matrices $C=E_{12}-E_{21},\, D=\sum_{i=2}^{k-1}(E_{i,i+1}-E_{i+1,i})$. (Here $\{E_{ij}\,|\,1\leq i,j\leq k\}$ denotes the usual basis of the full matrix algebra $M(k,q)$.) Add the subspaces $W^{(x)}(C),W^{(y)}(C),W^{(x)}(D),W^{(y)}(D)$ to $\mB$ and let $g\in G_{(\mB)}$. Then we have $A_g C A_g^T=A_g^T CA_g=C$ and $A_g D A_g^T=A_g^T DA_g=D$. Using the implication \[ AXA^T=X,\,A^TYA=Y \Rightarrow A XY= A XA^TYA=XYA, \] we see that $A_g$ commutes with every product $P$ of $C$'s and $D$'s with an even number of terms. Similarly, $A_g PA_g^T=A_g^TPA_g=P$ holds for every product $P$ of $C$'s and $D$'s with an odd number of terms. In particular, $A_g$ commutes with $CD=E_{13},\,DC=E_{31},\,CD^2C=E_{11}$ etc.
Continuing this way, we see that $A_g=\lambda\cdot I$ is a scalar matrix. The equation $A_g C A_g^T=C$ also shows that $\lambda^2=1$ and so $A_g=A_g^{-T}=\pm I$. Thus, any $g\in G_{(\mB)}$ is a scalar transformation on $V_1\oplus\ldots\oplus V_{a+1}$. If $U\neq 0$, we can choose a $k$-dimensional singular subspace $V_{a+2}^{(x)}\leq V_1\oplus U$ satisfying $V_1+V_{a+2}^{(x)}=V_1+U$. Let $x_1^{(a+2)},\ldots,x_k^{(a+2)}$ be any basis of $V_{a+2}^{(x)}$. Adding the subspaces $V_{a+2}^{(x)},\ \langle x_1^{(a+2)}+y_1^{(1)},\ldots,x_k^{(a+2)}+y_k^{(1)}\rangle$ to $\mB$ gives the result. So, $b_X(G)\leq 2a+10\leq \frac{d}k+10$. The above argument works if $X$ is the set of all totally singular subspaces, which is indeed a $G$-orbit in most cases. The only exception is when $V$ is an orthogonal space, $d=2k$, and $G = \O^+(V)$, so we assume this from now on. Then two totally singular $k$-dimensional subspaces $V_1,V_2$ are in the same $G$-orbit if and only if $\dim (V_1\cap V_2)\equiv k\pmod 2$. Since the full orthogonal group $O(V)$ interchanges the two $G$-orbits, it does not matter which orbit we choose. Note that in the above construction the subspaces $V_1^{(x)}, V_1^{(y)}, W^{(x)}(C),W^{(y)}(C),W^{(x)}(D),W^{(y)}(D)$ are in the same $G$-orbit provided that $k$ is even (the further subspaces defined in the proof are not needed here). So, $b_X(G)\leq 6$ in this case. Now, let $k$ be odd, and choose an orthogonal decomposition $V=\langle x,y\rangle \oplus U$, where $x,y$ is a hyperbolic pair. Then $\dim U=d-2=2(k-1)$, so the above construction works for a $G_U$-orbit of ($k-1$)-dimensional totally singular subspaces of $U$, since $k-1$ is even. That is, there are $6$ totally singular ($k-1$)-dimensional subspaces $U_1,\ldots,U_6$ of $U$, which form a base for the action of $G_U$ on $U$. By construction, \[ U_1=\langle x_1,\ldots,x_{k-1}\rangle,\quad U_2=\langle y_1,\ldots,y_{k-1}\rangle \] with $[x_i,y_j]=\delta_{ij}$ for $1\leq i,j \leq k-1$.
Define the subspaces $V_s=\langle x\rangle\oplus U_s$ for $1\leq s\leq 6$. Furthermore, let \[ W_1=\langle y,y_1,x_2,\ldots,x_{k-1}\rangle,\quad W_2=\langle y,x_1,y_2,\ldots,y_{k-1}\rangle. \] Then all of $V_1,\ldots,V_6,W_1,W_2$ are totally singular $k$-dimensional subspaces with pairwise odd-dimensional intersections, so they are in the same $G$-orbit $X$. Adding $V_1,\ldots, V_6$, $W_1$, $W_2$ to $\mB$ we see that any $g\in G_{(\mB)}$ fixes the subspaces $\langle x\rangle=V_1\cap V_2,\, \langle y\rangle=W_1\cap W_2$ and $U=(V_1+V_2)\cap (W_1+W_2)$. Furthermore, $g\in G_{(\mB)}$ also fixes $U_s=V_s\cap U$ for each $s$, so $g_U$ is a scalar transformation by the definition of the $U_s$. Adding also the subspace \[ W_3=\langle x+x_1,y-y_1,x_2,\ldots,x_{k-1}\rangle \in X \] to $\mB$, we get that any $g\in G_{(\mB)}$ is a scalar transformation on the whole of $V$. Hence $b_X(G)\leq 9$ in this case. \end{proof} \begin{rem} By a more detailed argument, Burness, Guralnick and Saxl were able to calculate the exact base size for a classical group over an algebraically closed field acting on an orbit of subspaces of its natural module \cite[Section 4]{BGS2}. While part of their constructions could be translated to the finite case, we had to give new constructions for other cases. (This is especially true for the orthogonal case, since, in contrast to the finite case discussed above, in any dimension there is just one type of non-degenerate orthogonal space over an algebraically closed field.) \end{rem} \subsection{Action on pairs of subspaces} \label{transpose} In this subsection we handle the subspace actions arising from case (2) in the list after the statement of Theorem \ref{generalclassical}. \begin{prop}\label{pairs} Let $G = PSL(V) = PSL_d(q)$, and let $M$ be the stabilizer in $G$ of a pair $U,W$ of nonzero subspaces, where $\dim U = k < d/2$, $\dim W = d-k$, and either $U\subseteq W$ or $V = U\oplus W$. Let $X$ be the coset space $G/M$. 
Then \[ b_{X}(G) \leq \frac{d}{k} + 11 \leq 2\frac{\log |G|}{\log |X|} + 12. \] \end{prop} \begin{proof} Let $X_k$ be the set of all $k$-dimensional subspaces of $V$. A straightforward computation shows that $|X| < |X_k|^2$. Clearly $b_X(G) \le b_{X_k}(G)$. Now the result follows from Theorem \ref{classicalmain} and Proposition \ref{prop:ClassicalBound_nperk}. \end{proof} \subsection{Proof of Theorem \ref{generalclassical}} \label{Sec3.3} Let $G$ be an almost simple group with socle $G_0$, a classical group on $V$, a vector space of dimension $d$ over $\F_q$. Suppose $G$ acts faithfully and primitively on a set $\O$. If the action of $G$ on $\O$ is not a subspace action, then $b(G) \le 5$ by \cite{Burness}. Hence we may assume that the action is a subspace action, so that one of the cases (1), (2), (3) listed after the statement of Theorem \ref{generalclassical} holds. In case (1), $\O = U^G$ is an orbit of $G$ on $k$-dimensional subspaces, for some $k$, and we can assume that $k\le d/2$ (by replacing $U$ with $U^\perp$ if necessary, in the case where $G_0 \ne PSL(V)$, and by considering the equivalent action of $G$ on $(d-k)$-spaces, when $G_0 = PSL(V)$). Now Theorem \ref{classicalmain} and Proposition \ref{prop:ClassicalBound_nperk} give \[ b(G_0) \le \frac{d}{k}+11 \le 2 \frac{\log |G|}{\log |\O|} + 13. \] Hence we can choose a set $\mB$ of at most $\frac{d}{k}+11$ points of $\O$ such that $G_{(\mB)} \cap G_0 = 1$, so that $G_{(\mB)}$ is isomorphic to a subgroup of $G/G_0$. This is a soluble group possessing a normal series of length at most 3 with cyclic factor groups. Since the base size of a cyclic linear group is 1, by \cite{SZ}, it follows that $b(G) \le 2 \frac{\log |G|}{\log |\O|} + 16$, as required. Now consider case (2): here $G_0 = PSL(V)$ and $\O = \{U,W\}^G$ where $U,W$ are subspaces of dimensions $k,\,d-k$ and either $U\subseteq W$ or $V = U\oplus W$. 
In the latter case, if $k = d/2$ then $G_0$ has an element interchanging $U$ and $W$, and $(G,\O)$ is not a subspace action (it is a ${\mathcal C}_2$-action in the terminology of \cite{Burness}). Hence we may assume that $k<d/2$. Now Proposition \ref{pairs} implies that $b(G_0) \le 2 \frac{\log |G|}{\log |\O|} + 12$, and this yields the result as above. Finally, consider case (3): here $G_0 = Sp_{2m}(q)$, $p=2$ and $M\cap G_0 = O_{2m}^\pm(q)$, where $M$ is a point-stabilizer in $G$. Regarding $G_0$ as the isomorphic orthogonal group $O_{2m+1}(q)$, the set $\O$ is an orbit of $G_0$ on hyperplanes of the natural module $V_{2m+1}(q)$. Hence $b_{\O}(G_0) \le 2m+1$, which is less than $2 \frac{\log |G|}{\log |\O|} + 3$, and the conclusion follows again. This completes the proof of Theorem \ref{generalclassical}. \section{Non-affine primitive permutation groups} In this section we prove the main Theorem \ref{mainresult} for primitive groups which are not of affine type. \begin{thm} \label{nonaffine} Let $G$ be a primitive permutation group of degree $n$. Assume that $G$ is not of affine type. Then $b(G) \leq 2 (\log |G| / \log n) + 24$. \end{thm} According to the O'Nan-Scott theorem (see for example \cite{LPS}), non-affine primitive groups are of the following types: almost simple, diagonal type, product type, and twisted wreath type. We shall deal with these types separately in the following subsections. \subsection{Almost simple groups} For this case we prove \begin{thm} \label{almostsimple} Let $G$ be an almost simple primitive permutation group of degree $n$. Then $b(G) \leq 2 (\log |G| / \log n) + 16$. \end{thm} \begin{proof} Theorems \ref{generalalternating} and \ref{generalclassical} give the result when the socle of $G$ is an alternating or classical group. For the remaining cases, the socle of $G$ is a group of exceptional Lie type or a sporadic group. In these cases we have $b(G) \leq 7$ by \cite{BurnessLiebeckShalev} and \cite{BurnessObrienWilson}. 
\end{proof} \subsection{Diagonal type groups} \label{Sec4.2} Work of Fawcett \cite{Fawcett} (and also Gluck, Seress, Shalev \cite[Remark 4.3]{GSS}) implies that, in the diagonal type case, we have \begin{equation} \label{equat} b(G) \leq (\log |G| / \log n) + 3. \end{equation} \subsection{Product type groups} Bases for primitive groups of product type were studied by Burness and Seress in \cite{BS}. We will use their notation. Let $\Omega = \Gamma^k$ for some set $\Gamma$ and integer $k \geq 2$. There exists a primitive group $H \leq \mathrm{Sym}(\Gamma)$ of almost simple type or of diagonal type such that the following holds. Let $T$ be the socle of $H$, and let $P \leq \mathrm{Sym}(k)$ be the (transitive) permutation group induced by the action of $G$ on the set of the $k$ direct factors of $\mathrm{Soc}(G) = T^k$. We have $T^{k} \leq G \leq H \wr P$. We recall two definitions. A {\it distinguishing partition} for a finite group $X$ acting on a finite set $\Sigma$ is a coloring of the points of $\Sigma$ in such a way that every element of $X$ fixing this coloring is contained in the kernel of the action of $X$ on $\Sigma$. The minimal number of parts (or colors) of a distinguishing partition is called the {\it distinguishing number} of $X$ and is denoted by $d(X)$. Let $d(P)$ be the distinguishing number of the transitive permutation group $P$. By \cite[Theorem 1.2]{DHM}, we have $d(P) \leq 48 \sqrt[k]{|P|}$. \subsubsection{The case when $H$ is almost simple} \label{Sec4.3.1} Assume that $H \leq \mathrm{Sym}(\Gamma)$ is an almost simple group with socle $T$. Here we follow not only \cite{BS} but also \cite[\S4]{DHM}. However, we avoid the use of the bound $|\mathrm{Out}(T)| \leq {|T|}^{\alpha}$, since it is too costly for our purposes. Instead we use the estimate $|\mathrm{Out}(T)| \leq |\Gamma|$ found in \cite[Lemma 2.7]{AG}. Thus $|G| \geq {|T|}^{k}|P| \geq ({|H|}^{k}|P|)/{|\Gamma|}^{k}$. This gives $\log({|H|}^{k}|P|)/\log |\Omega| \leq (\log |G| / \log |\Omega|) + 1$.
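In more detail, since $|\Omega| = |\Gamma|^{k}$, taking logarithms in the inequality $|G| \geq ({|H|}^{k}|P|)/{|\Gamma|}^{k}$ gives \[ \log ({|H|}^{k}|P|) \leq \log |G| + k \log |\Gamma| = \log |G| + \log |\Omega|, \] and dividing by $\log |\Omega|$ yields the stated estimate.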
By using the idea of \cite[Lemma 3.8]{BS} combined with Lemma 2.1 of \cite{DHM}, we see that \begin{equation} \label{e11} b(G) < \frac{\log d(P)}{\log |\Gamma|} + 1 + b(H) < \frac{\log |P|}{\log |\Omega|} + b(H) + 4, \end{equation} since $|\Gamma| \geq 5$. By Theorem \ref{almostsimple}, this gives $$b(G) < \frac{\log |P|}{\log |\Omega|} + 2 \frac{\log |H|}{\log |\Gamma|} + 20 < 2 \frac{\log ({|H|}^{k}|P|)}{\log |\Omega|} + 20 \leq 2 \frac{\log |G|}{\log |\Omega|} + 22.$$ \subsubsection{The case when $H$ is of diagonal type} \label{Sec4.3.2} Now assume that $H$ is of diagonal type. Here $\mathrm{Soc} (H) = T = S^{\ell}$, where $S$ is a non-abelian simple group and $\ell \geq 2$. We have $S^{\ell} \leq H \leq S^{\ell}.(\mathrm{Out}(S) \times Q)$ where $Q \leq \mathrm{Sym} (\ell)$ is the permutation group induced by the conjugation action of $H$ on the $\ell$ factors of $S^{\ell}$. The set $\Gamma$ can be thought of as the set of right cosets in $H$ of the subgroup $H_{0} = (D \times Q) \cap H$ where $D$ denotes the diagonal subgroup of ${\mathrm{Aut}(S)}^{\ell}$. In particular, $|\Gamma| = {|S|}^{\ell-1}$. By (\ref{equat}), $b(H) \leq (\log|H|/\log|\Gamma|) + 3 \leq 8$, provided that $\ell \leq |S|$. Thus, in view of (\ref{e11}), we may assume that $\ell \geq 3$. Let $\mathcal{C}$ be the complete set of representatives of the right cosets in $H$ of the subgroup $H_{0}$ consisting of those elements of $S^{\ell}$ whose first coordinate is $1$. Let $s_1$ and $s_2$ be elements of $S$ that together generate $S$. Let $\gamma_{0}$, $\gamma_{1}$, $\gamma_{2} \in \mathcal{C}$ be those elements for which every coordinate of $\gamma_{0}$ is $1$, all but the first coordinate of $\gamma_{1}$ are $s_{1}$, and all but the first coordinate of $\gamma_{2}$ are $s_{2}$. Consider the pointwise stabilizer $Q_0$ of $\{ H_{0}\gamma_{0}, H_{0}\gamma_{1}, H_{0}\gamma_{2} \}$. This group $Q_0$ is contained in the stabilizer $H_0$ of $H_{0}\gamma_{0}$, since $H_{0}\gamma_{0} = H_{0}$ and the stabilizer in $H$ of this coset under right multiplication is $H_{0}$ itself.
For any element $h_{0} \in H_{0}$ and any index $i$ in $\{1, 2\}$, we have $H_{0}\gamma_{i}h_{0} = H_{0}\gamma$ for some $\gamma \in \mathcal{C}$ with either exactly $1$ or exactly $\ell -1 > 1$ entries equal to $1$. Moreover, if $h_{0}$ is in $Q_{0}$, then the first case must hold. Since the only automorphism of $S$ fixing both $s_{1}$ and $s_{2}$ is the identity, we see that $Q_{0}$ is a subgroup of $Q$ leaving $\mathcal C$ invariant. Therefore, whenever $q_0\in Q_0$ and $\gamma\in \mathcal C$, we have $(H_0\gamma)q_0=H_0\gamma^{q_0}$ with $\gamma^{q_0}\in \mathcal C$. Let $\omega_{0}$, $\omega_{1}$, $\omega_{2}$ be the elements of $\Omega$ all of whose $k$ coordinates are $H_{0} \gamma_{0}$, $H_{0} \gamma_{1}$, $H_{0} \gamma_{2}$, respectively. By the previous paragraph and the fact that $G \leq H \wr P$, the pointwise stabilizer in $G$ of $\{ \omega_{0}, \omega_{1}, \omega_{2} \}$ is a permutation group $R$ permuting the $k \ell$ coordinates of the vectors in $S^{k \ell}$. More precisely, if the coordinates are labelled by the integers $1, \ldots , k \ell$, then $R$ is a permutation group on $\{ 1, \ldots, k \ell \}$ such that $\{ j \ell + 1 \mid j \in \{0, \ldots , k-1 \} \}$ is $R$-invariant. Since $R$ is a subgroup of a transitive group on $k \ell$ points which has order at most $|G|$, we see, by \cite[Theorem 1.2]{DHM}, that $d(R) \leq 48 \sqrt[k \ell]{|G|}$. Consider a distinguishing partition $\mathcal{P}$ with $d(R)$ colors for the action of $R$ on $\{ 1, \ldots, k \ell \}$. Define a new coloring of the $R$-invariant subset $\{ 1, \ldots, k \ell \} \setminus \{ j \ell + 1 \mid j \in \{0, \ldots , k-1 \} \}$ using no more than $d(R)^{2}$ colors in the following way. For any integers $j$ and $u$ with $0 \leq j \leq k-1$ and $1 < u \leq \ell$, color $j \ell + u$ with the color $(\alpha, \beta)$ where $\alpha$ is the color of $j \ell + 1$ in $\mathcal{P}$ and $\beta$ is the color of $j \ell + u$ in $\mathcal{P}$.
Clearly, no non-identity element of $R$ preserves this new coloring. For $\ell \geq 3$, we see, by Lemma 2.1 of \cite{DHM}, that $G$ has a base $B$ containing $\{ \omega_{0}, \omega_{1}, \omega_{2} \}$ such that $$b(G) \leq |B| \leq 2 \frac{\log d(R)}{\log |S|} + 4 < 2 \frac{\log |G|}{k \ell \log |S|} + 6 \leq 2 \frac{\log |G|}{\log n} + 6.$$ \subsection{Twisted wreath product type groups} \label{Sec4.4} This type was treated by Burness and Seress in \cite[Section 4]{BS}. We follow their discussion. By the previous subsection we know that if $L$ is a primitive permutation group of product type acting on a set $\Omega$, then we have $b(L) \leq 2 (\log |L| / \log |\Omega|) + 22$. Let $G$ be a primitive permutation group of twisted wreath product type acting on the set $\Omega$. Then $G$ contains a regular normal subgroup $T^k$ isomorphic to the direct product of $k$ copies of a non-abelian simple group $T$. We may write $G = T^{k} P$ where $P$ is a transitive permutation group acting on $k$ points. As explained in \cite[Section 3.6]{P}, we may embed $G$ in a group of product type $L$ which is of the form ${(T^{2})}^{k}.P$. Thus $b(G) \leq b(L) \leq 2 (\log |L| / \log |\Omega|) + 22 = 2 (\log |G| / \log |\Omega|) + 24$. \vspace{4mm} This completes the proof of Theorem \ref{nonaffine}. \section{Affine primitive permutation groups} The main result of this section is \begin{thm} \label{generalaffine} Let $G$ be an affine primitive permutation group of degree $n$. Then $b(G) \leq 2 (\log |G| / \log n) + 16$. \end{thm} Let $G$ be an affine primitive permutation group of degree $n$ with a point-stabilizer $H$. Then $G = VH \le AGL(V)$, where $V$ is a finite vector space of order $n = p^k$ ($p$ prime), and the stabilizer $H$ of the zero vector is an irreducible subgroup of $GL(V)$. Since $b(G) = b(H)+1$, Theorem \ref{generalaffine} follows immediately from \begin{thm} \label{ez} Let $H$ be a subgroup of $GL(V)$ acting irreducibly on the finite vector space $V$.
Then $b_V(H) \leq 2 (\log |H| / \log |V|) + 17$. \end{thm} In the above theorems, the multiplicative constant $2$ is best possible, as shown by the following example (which completes the proof of Proposition \ref{acc}). \begin{prop} \label{sp} Let $V$ be a $d$-dimensional ($d$ even) non-degenerate symplectic space over the finite field $\FF q$ and let $H= Sp(V)$ with its natural action on $V$. \begin{itemize} \item[{\rm (i)}] Then $b_V(H)=d$. \item[{\rm (ii)}] If $G = VH \le AGL(V)$ is the corresponding affine primitive permutation group, then for sufficiently large values of $q$, we have \[ b(G) = \lfloor 2 (\log |G|/\log n) \rceil - 2. \] \end{itemize} \end{prop} \begin{proof} (i) Clearly, any basis of $V$ (as a vector space) is also a base for $H$, so $b_V(H)\leq d$. For the equality, let $\{b_1,\ldots,b_l\}\subseteq V$ be any set of vectors with $l\leq d-1$. Then there is a subspace $U\leq V$ containing $\{b_1,\ldots,b_l\}$ with $\dim U=d-1$. Hence it is enough to show that for every such subspace $U$, there exists a non-identity $g\in H$ that acts trivially on $U$. Let $U\leq V$ be a subspace of dimension $d-1$ and let $[\ ,\ ]$ denote the non-degenerate symplectic bilinear form on $V$ preserved by $H$. Then the restriction of $[\ ,\ ]$ to $U$ is degenerate: there exists $0\neq x\in U$ such that $\langle x\rangle=U^\perp$. Let $y\in V\setminus U$ be arbitrary. We claim that the map \[ A:cy+u\mapsto c(y+x)+u\ (c\in\FF q,\,u\in U) \] is a non-identity element of $H$ that acts trivially on $U$. To see this, let $c,d\in \FF q$ and $u,v\in U$. Then we have $[A(cy+u),A(dy+v)]=[cy+u+cx,dy+v+dx]=[cy+u,dy+v]$, using that $x\in U^{\perp}$ and $[x,x]=0$, proving the claim. (ii) This follows from a simple computation using the order formula for $|Sp(V)|$: writing $d=2m$, we have $|G|=n\,|Sp(V)|$ with $n=q^{d}$ and $|Sp(V)|=q^{m^2}\prod_{i=1}^{m}(q^{2i}-1)$, so $2 (\log |G|/\log n)$ tends to $d+3$ from below as $q\to\infty$, while $b(G)=b_V(H)+1=d+1$ by part (i). \end{proof} It remains to prove Theorem \ref{ez}. We do this in the following two subsections. \subsection{Primitive linear groups} In this subsection we prove Theorem \ref{ez} in the case where $H\le GL(V)$ acts primitively on $V$ as a linear group.
In fact we prove the following stronger bound for this case. \begin{thm}\label{main} Let $V$ be a finite vector space, and let $H \le GL(V)$ be an irreducible, primitive linear group on $V$. Then one of the following holds: \begin{itemize} \item[{\rm (i)}] $b(H) \leq 15$; \item[{\rm (ii)}] $b(H) \le 2\,\frac{\log |H|}{\log |V|} + 9$. \end{itemize} \end{thm} A version of Theorem \ref{main} was proved in \cite{LSbase, LSbase2} with a much worse multiplicative constant, and unspecified constants in place of the constants 15 and 9. The proof of Theorem \ref{main} will be along the lines of that proof. However, in order to make our constants explicit (and small), we need to improve several of the results in \cite{LSbase, LSbase2}. For a field $\F_q$ and a positive integer $m$, by the {\it natural} module for the symmetric or alternating group $\mathrm{Sym}(m)$ or $\mathrm{Alt}(m)$ over $\F_q$, we mean the fully deleted permutation module of dimension $m' = m-\d$, where $\d \in \{1,2\}$. The first result is a version of Proposition 2.2 of \cite{LSbase} with an explicit constant: \begin{prop} \label{quasi} Let $V = V_d(q)$ $(q=p^e)$ and $G \leq GL(V)$, and suppose that $E(G)$ is quasisimple and absolutely irreducible on $V$. Then one of the following holds: \begin{itemize} \item[{\rm (i)}] $E(G) = \mathrm{Alt}(m)$ and $V$ is the natural $\mathrm{Alt}(m)$-module over $\F_q$, of dimension $d = m-\d$ ($\d \in \{1,2\}$); \item[{\rm (ii)}] $E(G) = Cl_d(q_0)$, a classical group with natural module of dimension $d$ over a subfield $\F_{q_0}$ of $\F_q$; \item[{\rm (iii)}] $b(G)\le 6$. \end{itemize} \end{prop} \begin{proof} This is proved in \cite{LL}. \end{proof} The next result is an explicit version of \cite[Proposition 3.6]{LSbase}. \begin{prop}\label{fitt} Let $V = V_d(q)$ $(q=p^e)$ and $G \leq GL(V)$, and suppose that the Fitting subgroup $F(G)$ is absolutely irreducible on $V$. Then $b(G) \le 13$. 
\end{prop} \begin{proof} We begin by arguing exactly as in the proof of \cite[3.6]{LSbase} that $F=F(G)$ can be taken to be the central product of an extraspecial group $s^{1+2m}$ and the group $Z = \F_q^*$ of scalars, where $s$ is a prime and $d = s^m$. We can also assume that $G = N_{GL(V)}(F)$, so that $G/F$ is isomorphic to either $Sp_{2m}(s)$ or $O_{2m}^\pm (2)$, with $s=2$ in the latter case. Moreover, $q\equiv 1\hbox{ mod }s$, and also $q\equiv 1\hbox{ mod }4$ if $s=2$ and $G/F \cong Sp_{2m}(2)$. Define $F^0 = F.Z(G/F)$, an extension of $F$ by a group of order at most 2. By \cite{Se}, there are three vectors $v_1,v_2,v_3 \in V$ such that $F^0_{v_1v_2v_3}=1$. So if we let $J = G_{v_1v_2v_3}$, then $J\cap F^0=1$ and $J \cong JF^0/F^0$ is isomorphic to a subgroup of $PSp_{2m}(s)$ or $O_{2m}^\pm(2)$. Obviously $b(G) \le 3+b(J)$. Assume for a contradiction that \[ b(J) > 10. \] Then clearly $d = \dim V > 10$, and $V^{10} = \bigcup_{h\in J\setminus 1}C_{V^{10}}(h)$. Hence \begin{equation}\label{triv} |V|^{10} \le \sum_{h\in J\setminus 1} |C_V(h)|^{10}. \end{equation} For $h\in J\setminus 1$, Theorem 4.1 of \cite{GS} shows that there are $2m+1$ conjugates of $h$ that generate $G$ modulo $F$, and hence there are $2m+2$ conjugates of $h$ generating $G$. It follows that \[ \dim C_V(h) \le \left(1-\frac{1}{2m+2}\right)\dim V. \] Hence (\ref{triv}) gives $|V|^{10/(2m+2)} \le |J|$, and so as $|V| = q^d = q^{s^m}$, we have \begin{equation}\label{bd} q^{10s^m/(2m+2)} \le |J| \le \left\{\begin{array}{l} |PSp_{2m}(s)|,\,s \hbox{ odd}, q\equiv 1\hbox{ mod }s \\ |Sp_{2m}(2)|,\,s=2,\, q\equiv 1\hbox{ mod }4 \\ |O^\pm_{2m}(2)|,\,s=2,\, q \hbox{ odd}. \end{array} \right. \end{equation} Straightforward computation shows that the only possible values satisfying (\ref{bd}) are $s=2$, $q=3$ and $m=4$ or 5. For $m=5$ we have $G/F \cong O_{10}^\pm (2)$, and in the above argument, \cite[4.1]{GS} shows that we may replace $2m+2$ by $2m+1$ in (\ref{bd}), yielding a contradiction. 
And if $m=4$ then $d = 2^4=16$ and it is very easy to argue directly that $b(G) \le 13$, as required. \end{proof} The next result is an improvement of Lemma 3.7 of \cite{LSbase}. \begin{prop}\label{sub} {\rm (i)} Let $\F_{q_0}$ be a subfield of $\F_q$, let $q = q_0^r$, and let $M = \F_q^*\,GL_d(q_0) \le GL_d(q) = GL(V)$, where $\F_q^*$ denotes the group of scalars. Then for the action of $M$ on $V$ we have \[ b(M) \le \frac{d}{r}+2. \] {\rm (ii)} Let $q=p^r$ with $p$ prime, and let $M = \F_q^*\, \mathrm{Sym}(m) \le GL_{m'}(q) = GL(V)$, where $V$ is the natural module for $\mathrm{Sym}(m)$ over $\F_q$, of dimension $m' = m-\d$ ($\d\in\{1,2\}$). Then \[ b(M) \le \frac{\log_pm}{r}+4. \] \end{prop} \begin{proof} (i) Let $\l_1,\ldots ,\l_r$ be an $\F_{q_0}$-basis for $\F_q$, and let $e_1,\ldots,e_d$ be the standard basis for $V = \F_q^d$ (that is, $e_i = (0,\ldots ,1,\ldots ,0)$ where the 1 is in the $i^{th}$ coordinate). Write $d=kr+l$ with $k,l \in \Z$ and $0\le l<r$, and define \[ \begin{array}{l} v_i = \sum_{j=(i-1)r+1}^{ir} \l_je_j\;\;(1\le i\le k), \\ v_{k+1} = \sum_{j=kr+1}^d \l_je_j. \end{array} \] If we write $M_0 = GL_d(q_0)$, it is easy to see that $(M_0)_{v_1\ldots v_{k+1}} = 1$. Hence if $J = M_{v_1\ldots v_{k+1}}$ then $J \cong JM_0/M_0$ is cyclic, and so by \cite[3.1]{SZ}, $J$ has a base of size 1. Thus $b(M) \le k+2 \le \frac{d}{r}+2$. (ii) This follows directly from the proof of \cite[3.7(ii)]{LSbase}. \end{proof} As in \cite{LSbase}, for $H\le GL(V)$ define $b^*(H)$ to be the minimal size of a set $B$ of vectors such that any element of $H$ that fixes every 1-space $\la v\ra$ with $v\in B$ is necessarily a scalar multiple of the identity. We call such a set $B$ a {\it strong} base for $H$. By \cite[3.1]{LSbase}, \[ b(H) \le b^*(H) \le b(H)+1. \] Next we give an improvement of Lemma 3.3(iii) of \cite{LSbase}. \begin{lem}\label{33} Let $V_1,V_2$ be vector spaces over $\F_q$ with $\dim V_i = n_i$ and $n_1 \leq n_2$, and let $H_i \leq GL(V_i)$ for $i=1,2$.
Denote by $H_1 \otimes H_2$ the image of $H_1 \times H_2$ acting in the natural way on the tensor product $V_1 \otimes V_2$. If $n_1 \le b^*(H_2)$, then \[ b(H_1 \otimes H_2) \le \frac{b^*(H_2)}{n_1}+3. \] \end{lem} \begin{proof} We follow the proof of \cite[Lemma 3.3(iii)]{LSbase}. Let $b = b^*(H_2)$. Assume $n_1\le b$, and let $y_1, \ldots ,y_b$ be a linearly independent strong base for $H_2$ in $V_2$. Let $x_1,\ldots ,x_{n_1}$ be a basis of $V_1$. Write $b = rn_1+s$ with $r,s$ integers and $0 \leq s < n_1$. For $1 \leq i \leq r$ define \[ v_i = \sum_{k=1}^{n_1} x_k \otimes y_{(i-1)n_1+k}, \;\;W_i = \la x_k \otimes y_{(i-1)n_1+k} : 1\le k\le n_1\ra, \] and set $v_{r+1} = \sum_{k=1}^s x_k \otimes y_{rn_1+k}$, $W_{r+1} = \la x_k \otimes y_{rn_1+k} : 1\le k\le s\ra$. Consider the stabilizer $L=(H_1 \otimes H_2)_{v_1\dots v_{r+1}}$. By Lemma 3.3(i) of \cite{LSbase}, $L$ stabilizes $V_1\otimes W_i$ for all $1\le i\le r+1$. Next choose $C,D \in SL_{n_1}(q)$ generating $SL_{n_1}(q)$, and for each $i$, define $\g_i = C\otimes 1, \d_i=D\otimes 1 \in GL(V_1 \otimes W_i)$. Let \[ v = \sum_{i=1}^{r+1}v_i\g_i,\; w = \sum_{i=1}^{r+1}v_i\d_i. \] At this point the argument at the end of the proof of \cite[3.3(iii)]{LSbase} shows that $L_{vw}=1$. Hence $ b(H_1 \otimes H_2) \le r+3 \le \frac{b}{n_1}+3$, as required. \end{proof} The next result is Theorem 1 of \cite{LSbase2}, with an explicit constant $C = 14$. The proof is identical to that in \cite{LSbase2}, but using Propositions \ref{quasi} and \ref{fitt} at the end to justify that $C=14$ works. \begin{prop}\label{corr} Let $V = V_d(q)$, and let $H$ be a subgroup of $\G L(V)$ such that $H$ acts primitively on $V$ and $H^0:=H\cap GL(V)$ is absolutely irreducible on $V$. Suppose that $b^*(H^0) > 14$.
Then \[ H^0 \le H_0 \otimes \bigotimes_{i=1}^s \mathrm{Sym}(m_i) \otimes \bigotimes_{i=1}^t {\rm Cl}_{d_i}(q_i), \] where $s+t\ge 1$ and the following hold: \begin{itemize} \item[(i)] $H_0\le GL_{d_0}(q)$ with $b^*(H_0)\le 14$ \item[(ii)] each factor $\mathrm{Sym}(m_i) < GL_{m_i'}(q)$ and ${\rm Cl}_{d_i}(q_i) \le GL_{d_i}(q)$ is acting on the natural module over $\F_q$, where $m_i' = m_i-\d_i$, $\d_i\in \{1,2\}$ \item[(iii)] $d = d_0\cdot \prod_1^s m_i' \cdot \prod_1^t d_i$ \item[(iv)] $F^*(H^0)$ contains $\prod_1^s \mathrm{Alt}(m_i) \cdot \prod_1^t {\rm Cl}_{d_i}(q_i)^{(\infty)}$. \end{itemize} \end{prop} The next result is an improvement of \cite[Proposition 2]{LSbase2}. \begin{prop} \label{basest} Let $H,H^0$ be as in Proposition $\ref{corr}$, with $b^*(H^0)>14$. Take $m_s' = {\rm max}(m_i':1\le i\le s)$ and $d_t = {\rm max}(d_i:1\le i\le t)$ (define these to be $0$ if $s=0$ or $t=0$, respectively). \begin{itemize} \item[(i)] Suppose $t \ge 1$ and $m_s' \le d_t$, and let $q = q_t^r$. Then $d<d_t^2$, $d_t \ge 14$, and \[ b(H^0) \le b(GL_{d/d_t}(q) \otimes GL_{d_t}(q_t)) \le \frac{d_t^2}{dr}+5. \] \item[(ii)] Suppose $s \ge 1$ and $m_s' > d_t$, and let $q = p^r$. Then $d<(m_s')^2$, $m_s' \ge 14$, and \[ b(H^0) \le b(GL_{d/m_s'}(q) \otimes \mathrm{Sym}(m_s)) \le \frac{m_s\log_p m_s}{dr}+6. \] \end{itemize} \end{prop} \begin{proof} We follow the proof of \cite[Proposition 2]{LSbase2}, but as the constants are different we give a few details. We proceed by induction on $s+t$. For the base case $s+t=1$, we have $H^0 \le H_0\otimes M$, where $M = {\rm Cl}_{d_1}(q_1)$ or $\mathrm{Sym}(m_1)$. Consider the first case, and write $q=q_1^r$. Proposition \ref{sub} gives $b(M) \le \frac{d_1}{r}+2$, hence also $b^*(M) \le \frac{d_1}{r}+3$. If $d_0=1$ the conclusion in (i) is immediate, so assume $d_0\ge 2$. As in the proof of \cite[Proposition 2]{LSbase2}, we see that $d_0\le d_1$. 
Then Lemma \ref{33} gives \[ \begin{array}{ll} b(H^0) & \le \frac{b^*(M)}{d_0}+3 \\ & \le \frac{d_1}{rd_0}+\frac{3}{d_0}+3 \\ & < \frac{d_1}{rd_0}+5, \end{array} \] so that (i) holds (note that $d_1\ge 14$ by \cite[3.3(ii)]{LSbase}). Similarly (ii) holds when $M = \mathrm{Sym}(m_1)$. Now assume $s+t\ge 2$. Let $m$ be the maximum of $d_t$ and $m_s'$, and write $M$ for the corresponding group ${\rm Cl}_{d_t}(q_t)$ or $\mathrm{Sym}(m_s)$. Note that $m\ge 14$, since otherwise \cite[3.3(ii)]{LSbase} implies that $b^*(H^0) \le 14$. Let $N$ be the tensor product of $H_0$ and the other factors ${\rm Cl}_{d_i}(q_i)$, $\mathrm{Sym}(m_i)$, so that $H^0 \le N\otimes M$. If $b^*(N)\le 14$ the conclusion follows as in the $s+t=1$ case, so assume $b^*(N)>14$. Let $m'$ be the largest among the dimensions $d_i,m_i'$ omitting $m$, and write $N_1$ for the corresponding group ${\rm Cl}_{d_i}(q_i)$ or $\mathrm{Sym}(m_i)$. Consider the case where $N_1 = {\rm Cl}_{d_i}(q_i)$. Let $q = q_i^u$. By induction we have \[ b^*(N) \le b(N)+1 \le \frac{d_i^2m}{du}+6 \le \frac{d_i}{u}+6. \] Suppose $d\ge m^2$. Then $b^*(N)\ge m$ by \cite[3.3(iv)]{LSbase}, so Lemma \ref{33} implies that \[ b(H^0) \le \frac{b^*(N)}{m}+3 \le \frac{d_i}{um} + \frac{6}{m}+3. \] Since $m\ge d_i$ and $m>14$, this yields $b(H^0) < 5$, a contradiction. Hence $d<m^2$ in this case. Now the conclusion of the proposition follows by the argument given for the $s+t=1$ case. Finally, consider the case where $N_1 = \mathrm{Sym}(m_i)$. Let $q=p^r$. By induction, \[ b^*(N) \le b(N)+1 \le \frac{(m_i\log_pm_i)\cdot m}{dr}+7 \le \frac{\log_pm_i}{r}+8. \] Now the argument of the previous paragraph gives the conclusion. \end{proof} \vspace{4mm} \noindent {\bf Proof of Theorem \ref{main}} We are now in a position to prove Theorem \ref{main}. We begin just as in the proof of \cite[Corollary 3]{LSbase2}. Suppose $H\le GL(V)$ acts primitively and irreducibly on a finite vector space $V$ defined over a field of size $q_{0}$. 
Choose $q=q_0^r$ maximal such that $H \le \G L_d(q) \le GL_{dr}(q_0)$. Write $H^0 = H\cap GL_d(q)$ and $V = V_d(q)$. By \cite[12.1]{TAMS}, $H^0$ is absolutely irreducible on $V$. If $b^*(H^0) \le 14$ then $b(H) \le 15$, and the conclusion of Theorem \ref{main} holds. So assume now that $b^*(H^0) > 14$. Then $H^0$ is given by Proposition \ref{corr}, and (i) or (ii) of Proposition \ref{basest} holds. Consider case (i) of Proposition \ref{basest}. Write $m=d_t$ and $q = q_t^r$. Then $d<m^2$, $Cl_m(q_t) \triangleleft H^0$, $m\ge 14$, and \begin{equation}\label{last} b(H^0) \le \frac{m^2}{dr}+5. \end{equation} From the order formulae for classical groups, we see that \[ \log_{q_t}(|H^0|) \ge \log_{q_t} |\O_m^\pm (q_t)| \ge \frac{1}{2}m(m-1)-1. \] Hence \[ \frac{\log |H|}{\log |V|} \ge \frac{\frac{1}{2}m(m-1)-1}{rd}, \] and so (\ref{last}) gives \[ b(H^0) \le 2\,\frac{\log |H|}{\log |V|} + 5 + \frac{m+2}{rd} < 2\,\frac{\log |H|}{\log |V|} + 7. \] This completes the proof in case (i) of Proposition \ref{basest}. To conclude, consider case (ii) of Proposition \ref{basest}. Write $m=m_s$, $m'=m_s'$ and $q = p^r$. Then $d<m^2$, $\mathrm{Alt}(m) \triangleleft H^0$, $m' \ge 14$, and \begin{equation}\label{symeq} b(H^0) \le \frac{m\log_p m}{dr}+6. \end{equation} Now $|H^0| \ge \frac{1}{2}m! > \frac{1}{2}\left(\frac{m}{e}\right)^m$, and so \[ \frac{\log |H|}{\log |V|} \ge \frac{m(\log_pm-\log_pe)-1}{rd}. \] Hence (\ref{symeq}) gives \[ b(H^0) \le \frac{\log |H|}{\log |V|} + 6 + \frac{m\log_2e+3}{rd} < \frac{\log |H|}{\log |V|} + 8, \] giving the conclusion of Theorem \ref{main} (actually, a stronger version, with the constant 2 replaced by 1). This completes the proof of Theorem \ref{main}. \subsection{Imprimitive linear groups} It remains to prove Theorem \ref{ez} in the case where the irreducible linear group $H \le GL(V)$ acts imprimitively on $V$.
Assume that $H$ preserves the direct sum decomposition $V = V_{1} \oplus \cdots \oplus V_{t}$ where $V_{1}, \ldots , V_{t}$ are subspaces of $V$, and $t>1$ is chosen to be maximal. Let $H_{1}$ be the stabilizer of $V_{1}$ in $H$. Let $K_{1}$ denote the linear group induced by $H_{1}$ on $V_{1}$, and let $b(K_{1})$ be the minimal base size of its action on $V_{1}$. By the choice of $t$, the action of $K_1$ on $V_1$ is primitive, so by Theorem \ref{main}, we have $b(K_{1}) \leq 15$ or $$b(K_{1}) \leq 2 (\log |K_{1}| / \log |V_{1}|) + 9.$$ If $b(K_{1}) \leq 15$, then, by \cite[Theorem 3.4]{DHM} and its proof, we have $$b(H) \leq \log |H| / \log |V| + b(K_{1}) + 1 + (\log 48 / \log (2^{b(K_{1})})) \leq \log |H| /\log |V| + 17.$$ So assume now that $b(K_{1}) \geq 16$ and $b(K_{1}) \leq 2 (\log |K_{1}| / \log |V_{1}|) + 9$. In that case our proof closely follows the arguments of \cite{DHM}, but in order to be able to prove Theorem \ref{ez} we need to give more precise estimates for the constants appearing there. In the proof we will freely use the concepts and notation of \cite{DHM}. A main step in proving \cite[Theorem 3.17]{DHM} was to reduce the problem to the case of linear groups which do not preserve any tensor product decomposition of the vector space (possibly over a proper field extension of the base field). In order for the reduction argument to work, a generalisation of the problem was needed. Let us view $H$ not just as a subgroup of $GL(V)$ but also as an abstract group. We will define certain maps $X:H \rightarrow GL(V)$. For this let $T_{V}$ denote the group \[T_V=\{g\in GL(V)\,|\,g(V_i)=V_i \textrm{ and }g|_{V_i}\in Z(GL(V_i)) \ \ \forall 1\leq i\leq t\}\simeq (\FF q^\times)^t\] where $\FF q$ is the field of definition for $V$. According to \cite[Definition 3.5]{DHM}, we say that a map $X:H \rightarrow GL(V)$ is a $\pmod {T_{V}}$-representation of $H$ if the following two properties hold: (1) $X(g)$ normalizes $T_{V}$ for every $g\in H$; and (2) $X(gh)T_{V} = X(g)X(h)T_{V}$ for every $g,h\in H$.
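As a basic example, suppose that $H \leq GL(V)$ itself permutes the subspaces $V_{1}, \ldots , V_{t}$ among themselves. Then the identity map $X(g) = g$ is a $\pmod {T_{V}}$-representation of $H$: property (1) holds because conjugation by $g$ permutes the scalar groups $Z(GL(V_i))$ along with the subspaces $V_i$, so $g$ normalizes $T_{V}$; and property (2) holds since $X(gh) = gh = X(g)X(h)$, even before passing to cosets of $T_{V}$.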
We will always consider $\pmod {T_{V}}$-representations $X$ of $H$ such that the group $X(H)T_{V}$ acts transitively on the set of factors $\Pi = \{ V_{1}, \ldots , V_{t} \}$ of the above direct sum decomposition of $V$. Let $b_{X}(H)$ denote the minimal base size of the group $X(H)T_{V}$. In the rest of this subsection we will show that if $X$ is an imprimitive irreducible linear representation of $H$ preserving the decomposition $V = V_{1} \oplus \cdots \oplus V_{t}$, then $b_X(H) \leq 2 (\log |H| / \log |V|) + 17$, or $b(H) \leq 2 (\log |H| / \log |V|) + 17$. Theorem \ref{ez} will follow by specializing to the case when $X$ is the identity. If the representation of $H$ is alternating-induced in the sense of \cite[Section 3.2]{DHM}, then $b(H) \leq 2 (\log |H| / \log |V|) + 17$, by \cite[Theorem 3.9]{DHM}. By \cite[Section 3.4]{DHM} (and especially by \cite[Corollary 3.15]{DHM}), it is sufficient to establish the proposed bound in the claim for $b(H)$ or for $b_{X}(H)$ in case $X$ is a (not necessarily linear) $\pmod{T_{V}}$-representation of $H$ which is classical-induced satisfying the multiplicity-free condition, in the sense of \cite[Section 3.3]{DHM}. Let $X$ be such a representation of $H$. Let $N$ be the kernel of the action of $H$ on $\Pi$. Let $\fX : H \go N_{GL(V(p))}(T_V)/T_V$ be the homomorphism defined by $\fX(h):=X(h)T_V/T_V$ where $V(p)$ denotes the vector space $V$ defined over the prime field (of size $p$) of $\FF q$ and where $h \in H$. In \cite[Section 3.3]{DHM} a bound is given for $b_{X}(H)$. By the argument after \cite[Theorem 3.11]{DHM} (from the second to the fifth paragraphs), we get $b_{X}(H) < (\log |H| / \log |V|) +12$ when $\fX(N) = 1$. So assume that $\fX(N) \not= 1$. We use the notation (and a minor modification of the argument) of the paragraph following \cite[Theorem 3.11]{DHM}. In this case $\mathrm{Soc}(\fX (N))$ is a subdirect product of isomorphic simple classical groups $S_{1}, \ldots , S_t$. 
The linking factor, denoted by $r$, is at most $2$. We have $|N| \geq {|S_{1}|}^{t/r}$. Since $b(K_{1}) \geq 16$, the center of $K_{1}$ has size less than ${|V_{1}|}^{1/16}$ and $|\mathrm{Out}(S_{1})| \leq {|V_{1}|}^{6/16}$ (the latter by \cite[Lemma 7.8]{GMP}). Thus $|S_{1}| \geq |K_{1}|/{|V_{1}|}^{1/2}$, which implies $|N| \geq {|K_{1}|}^{t/r}/{|V|}^{1/2}$. Assume that $r=1$. By Theorem \ref{main}, we have \[ b(K_{1})\leq 2 (\log |K_{1}| / \log |V_{1}|) + 9 = 2 (\log ({|K_{1}|}^{t}) / \log |V|) + 9 \leq 2 (\log |N| / \log |V|) + 10. \] By \cite[Theorem 3.4]{DHM} and its proof, we then get $b(H) \leq 2 (\log |H| / \log |V|) + 12$ since $\log |V_{1}| \geq 16$. Now assume that $r = 2$, so $t=2k$ for some integer $k$. By changing the order (if necessary) of the summands in the direct sum $V=\oplus_{i=1}^t V_i$, we can assume that the sets $\Delta_i:=\{V_{2i-1},V_{2i}\}$, for all $i$ with $1\leq i\leq k$, form a system of $H$-blocks with $S_{\Delta_i} := \mathrm{Soc}(\fX_{\Delta_i}(N))$ a full diagonal subgroup of $S_{2i-1}\times S_{2i}$. Here $\fX_{\Delta_{i}}$ is defined as follows. Put $V_{\Delta_{i}}:= V_{2i-1} \oplus V_{2i}$ and let $T_{V_{\Delta_{i}}}$ be defined analogously to $T_{V}$. Let $X_{i} : N_{H}(\Delta_{i}) \rightarrow GL(V_{\Delta_{i}})$ be the $\pmod {T_{V_{\Delta_{i}}}}$-representation of $N_{H}(\Delta_{i})$ obtained naturally from $X$ by restricting first from $H$ to $N_{H}(\Delta_{i})$ and then from $V$ to $V_{\Delta_{i}}$. Now define $\fX_{\Delta_{i}}$ to be the homomorphism given by $\fX_{\Delta_{i}}(h):= X_{i}(h) T_{V_{\Delta_{i}}} / T_{V_{\Delta_{i}}}$ for $h \in N_{H}(\Delta_{i})$. The multiplicity-free condition (and the definition of $\Delta_{i}$) guarantees that there are no functions $\varphi_i:V_{2i}\to V_{2i-1}$ and $\lambda_i:N_H(\Delta_i) \to \FF q^\times$ such that $\varphi_i$ is a semilinear invertible map and $$X_{2i-1}(g)=\lambda_i(g)\cdot \varphi_{i} X_{2i}(g) \varphi_i^{-1}$$ for every $g\in C_H(\Delta_i)$.
By using \cite[Theorem 2.1.4]{KL}, it follows that $S_1\simeq PSL(V_1)$. Thus, $|N|\geq (q^{(\dim V_1)^2-\dim V_1})^{t/2}/{|V|}^{1/2}$, which means that $$b_{X_{\Delta_1}}(N_H(\Delta_1))\leq \dim V_1\leq 2(\log |N|/\log|V|) +2.$$ Now we can apply \cite[Theorem 3.4]{DHM} to the $H$-invariant direct sum decomposition $V = \oplus_{i=1}^{k} V_{\Delta_i}$ to deduce that $b_X(H)\leq 2(\log |H|/\log |V|) +3+\log 48 < 2(\log |H|/\log |V|) +9$. This completes the proof of Theorem \ref{ez}. \section{Proof of Corollary \ref{maincorollary}} \label{SecCor} In this section we will prove Corollary \ref{maincorollary}. Let $G$ be a primitive permutation group of degree $n$. For later use, we recall the definition of {\it standard} actions of almost simple primitive groups: these occur for groups with socle an alternating group $\mathrm{Alt}(m)$ or a classical group. In the former case they are actions on an orbit of subsets or partitions of $\{1,\ldots,m\}$; and in the latter, they are subspace actions (as defined in Section 3). Assume first that $G$ is $\mathrm{Sym}(m)$ or $\mathrm{Alt}(m)$ for some integer $m \geq 5$. We consider standard actions of $G$. If the action of $G$ is on a set of partitions of the underlying set of size $m$, then $b(G) \leq \log n + 4$ by Theorem \ref{bcnprop}. The right hand side is less than $\sqrt{n}$ for $n \geq 256$, and $b(G) < 12$ otherwise. Now assume that $G$ acts on $k$-element subsets of $\{ 1, \ldots ,m \}$ for some integer $k$ with $2 \leq k \leq m/2$ and $n = \binom{m}{k}$. Assume also that $b(G) \geq 26$. Suppose first that $k^{2} \leq m$. Then, by Theorem \ref{precise}, $b(G) \leq \frac{2m}{k+1} + 1$. Since $b(G) \geq 26$, we have $m \geq 38$ and $n \geq 625$. For $k =2$ and $n \geq 625$ we get $b(G) \leq \frac{2m}{3} + 1 < \sqrt{n}$. For $k \geq 3$ and $n \geq 625$ we find that $m^{2} \leq \binom{m}{k} = n$ and so $b(G) < \sqrt{n}$. Now suppose that $k^{2} > m$. Then $k \geq 3$ since $m \geq 5$.
By Theorem \ref{precise}, $b(G) \leq \left\lceil\log_{\lceil t\rceil}(m)\right\rceil \left(\lceil t\rceil-1\right)$ where $t = m/k$. Since $b(G) \geq 26$, this forces $m \geq 18$. In particular $k \geq 5$ and $n \geq 625$. Thus assume that $m \geq 18$, $k \geq 5$, and $n \geq 625$. Let $k \geq 8$. By Theorem \ref{precise} and the assumption $k^{2} > m$, we have $b(G) \leq (\log m + 1)t < (2 \log k + 1)t$. Since $t \geq 2$, it follows that $(2 \log k + 1)t < t^{k/2} = {(\frac{m}{k})}^{k/2} < \sqrt{\binom{m}{k}}$. If $5 \leq k \leq 7$, then $b(G) \leq 6 (\log m + 1)$, by Theorem \ref{precise} (recall that $m<k^2$). The right hand side is less than $\sqrt{\binom{m}{k}}$ provided that $m \geq 18$ and $5 \leq k \leq m/2$. Next assume that $G$ is a group as in Section \ref{Sec4.3.1}. Let us use the notation and results of that section. By (\ref{e11}), we have $b(G) < \log k + 1 + n^{1/k}$. For $k \geq 3$ this is less than $\sqrt{n}$ since $n\geq 5^k \geq 125$. Now let $k = 2$. Then $G$ is a subgroup of $\mathrm{Sym}(t) \wr \mathrm{Sym}(2)$ with $n = t^{2}$. In this case $b(G) \leq t$, that is, $b(G) \leq \sqrt{n}$. At this point, by \cite[Theorem 1.1]{Mar}, we may assume that $|G| \leq n^{1 + \log n}$ (and $n \geq 26$). If $G$ is as in Section \ref{Sec4.3.2} or as in Section \ref{Sec4.4}, then $n > 2500$, and so by Theorem \ref{mainresult}, $$b(G) \leq 2 \frac{\log |G|}{\log n} + 24 \leq 2 \log n + 26 < \sqrt{n}.$$ If $G$ is as in Section \ref{Sec4.2}, we use Fawcett's \cite{Fawcett} bound $b(G) \leq (\log |G|/\log n) + 3$ to obtain $b(G) \leq \log n + 4 < \sqrt{n}$ provided $n \not= 60$. If $n = 60$, then $|G| \leq 4 n^{2}$, and so $b(G) \leq 5$. Assume that $G$ is almost simple. If the action of $G$ is non-standard, then $b(G) \leq 7$ by \cite{Burness} and \cite{BurnessLiebeckShalev}. Thus assume that the action of $G$ is standard. In particular the socle of $G$ is a simple classical group or an alternating group. 
The case when the socle is an alternating group was treated before. Let $G$ be an almost simple group with socle a classical group with natural module a vector space of dimension $d$ over some finite field. Since $|G| \leq n^{1 + \log n}$ and $b(G) \leq 2 (\log |G| / \log n) + 16$, the latter by Theorem \ref{generalclassical}, we see that $b(G) < \sqrt{n}$ for $n \geq 1600$. Thus assume that $n < 1600$. By Theorem \ref{classicalmain} and Section \ref{Sec3.3}, we have $b(G) \leq d + 14$. By \cite[Table 5.2.A]{KL}, we find that $d$ must be at most $11$, and so $b(G) \leq 25$. Finally, assume that $G$ is of affine type with $n \geq 4$. Put $n = p^{d}$ where $p$ is a prime and $d$ is an integer. Then $b(G) \leq 1 + d$. This is at most $p^{d/2}$ unless $p = 2$ and $2 \leq d \leq 5$. In particular $b(G) \leq 6$ and $n \leq 32$. This completes the proof of Corollary \ref{maincorollary}.
\section{Introduction} \subsection{Background} Multi-target tracking with one or more sensors plays a significant role in many surveillance and robotics applications. A tracking algorithm provides higher-level systems with the ability to make real-time decisions based on an accurate picture of the surrounding environment. Within intelligent transportation systems (ITS), it can be used for pedestrian detection at intersections \cite{meissner2012real}, for self-driving cars \cite{osep2017combined}, and for traffic surveillance \cite{Roy2011} \cite{alldieck2016context} \cite{yang2016multiple} \cite{jodoin2016tracking}. Multi-target tracking also has a myriad of other applications ranging from general security systems to tracking cells in microscopy images \cite{Liang2013}. There are many different sensor modalities that can be used for these applications; the most common are video, radar, and LiDAR. As a motivating example, consider a vision system that tracks all traffic participants at an urban intersection. The real-time tracking data can be used for adaptive traffic signal control to optimize the flow of traffic at that intersection. However, urban traffic intersections pose numerous challenges for multi-target tracking. Heavy traffic occupying multiple lanes and unpredictable pedestrian motion make for a cluttered scene with heavy occlusion, false alarms, and missed detections. Variability in the appearance of targets caused by poor lighting and weather conditions is especially problematic for visual tracking. On the other hand, new technologies such as vehicle-to-infrastructure (V2I) communication enable vehicles to transmit information directly to traffic intersections, augmenting the data collected by traffic cameras and other sensors \cite{djahel2015toward}. However, tall buildings, trees, and other vehicles can increase GPS signal interference, a phenomenon known as multipath, which can corrupt the data \cite{Emami:2017:TVE:3132340.3132356}.
Identifying and filtering out the effects of multipath is still an ongoing area of research \cite{cheng2016detecting}. \par Prior to the proliferation of vision-based tracking, tracking methods primarily relied on kinematic data. A sensible intuition is that combining kinematic information with the learned representations of high-dimensional sensor data will improve tracking performance. The aim of this survey is to review the algorithms used in data-driven multi-target tracking and discuss recently proposed extensions. We believe that considering tracking from the perspective of an assignment problem is a good way to abstract away a lot of application-specific details and unify the many different approaches. \subsection{Assignment Problems in Multi-Target Tracking} \begin{figure*}[t!] \begin{subfigure}[t]{0.2\textwidth} \centering \includegraphics[scale=0.3]{Fig2A-2.png} \caption{Linear Assignment \label{fig:LAP_1}} \end{subfigure}% ~ \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[scale=0.3]{Fig2B-2.png} \caption{Linear Assignment \label{fig:LAP_2}} \end{subfigure}% ~ \begin{subfigure}[t]{0.33\textwidth} \centering \includegraphics[scale=0.3]{Fig2C-2.png} \caption{Multidimensional Assignment \label{fig:MDAP}} \end{subfigure} \caption{A visual depiction of data association. a) In online tracking, new sensor detections are matched to existing tracks at each time step by solving a linear assignment problem. The assignment hypotheses are the colored, dashed arrows. Each arrow is annotated with the cost $c^{ij}$ of associating track $i$ with detection $j$. b) The optimal linear assignment. Notice how the assignment partitions the set of existing tracks and detections. c) In batch, or offline single-sensor tracking, multiple sets of detections within a sliding window are associated all at once with a set of existing tracks. Here, the sliding window size $T$ is 2 and the optimal assignment is shown. 
The images are taken from a random video in the MOT Challenge dataset \cite{milan2016mot16}. \label{fig:DA}} \end{figure*} At the core of multi-target tracking lie the data association and track-to-track association problems. The goal of data association is to identify a correspondence between a collection of new sensor measurements and preexisting tracks (Figure \ref{fig:DA}). New measurements can be generated by previously undetected targets, so care must be taken not to erroneously assign one of these measurements to a preexisting track. Likewise, the measurements that stem from clutter within the surveillance region must be identified to avoid false alarms. When there are multiple sensors, there is also the additional problem of track-to-track association. This problem seeks to find a correspondence between tracks of the same target that were generated by different sensors (Figure \ref{fig:t2ta}). This is a necessary step before track-to-track fusion; once the optimal assignment of the multi-sensor tracks has been found, all of the sensor tracks associated with the same underlying target can be combined to produce the final estimate of that target's state. The sensors might be homogeneous or heterogeneous; in the latter case, the problem becomes even harder as the sensors could produce vastly different types of data. Note that in this work, we use \textit{detections} and \textit{measurements} interchangeably; similarly, we use \textit{targets} and \textit{objects} interchangeably. We will attempt to be as consistent as possible with our usage while also adhering to the norms of the different tracking communities when appropriate. For example, in vision-based tracking, the term \textit{detections} is typically used instead of \textit{measurements}. \par Broadly speaking, algorithms for solving these two association tasks can be classified as either \textit{single-scan} or \textit{multi-scan}.
A single-scan algorithm only uses track information from the most recent time step, whereas multi-scan algorithms use track information from multiple previous or future time steps. Generally, multi-scan methods are preferable in situations where the objects of interest are closely spaced and there are many false alarms and missed detections. However, delaying the association to leverage future information negatively affects the real-time capabilities of the tracker. The accuracy and precision of the tracks produced by multi-scan methods are usually superior, and they produce fewer track ID switches, track breaks, and missed targets \cite{poore2006some}. Naturally, multi-scan methods are more computationally expensive and difficult to implement than their single-scan counterparts. \begin{figure*}[t!] \centering \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.2]{Fig3A.png} \caption{\label{fig:MS-T2TA-a}} \end{subfigure}% ~ \begin{subfigure}[t]{0.5\textwidth} \centering \includegraphics[scale=0.2]{Fig3b.png} \caption{\label{fig:MS-T2TA-b}} \end{subfigure} \caption{ a) There are three different sensors (circles, triangles, and diamonds) covering the surveillance region, each maintaining two tracks. In this scenario, the sliding window is of size $T = 1$. The dashed arrows in (a) depict the assignment hypothesis corresponding to associating one of the tracks from the circle sensor with tracks from the triangle and diamond sensors. For visual clarity, we do not show the other arrows that originate from each track in (a). b) The best track-to-track association hypothesis; the two groups of tracks associated together are indicated by the solid and dashed lines. The solution effectively partitions each sensor's track lists.\label{fig:t2ta}} \end{figure*} \par A common way to formulate these association tasks is as an assignment problem.
See Table \ref{taxonomy of ass problems} for a categorization of the various association tasks mapped onto assignment problems. The simplest version is the 2D assignment problem, also known as bipartite matching or linear assignment (LAP), which seeks to match $m$ workers, e.g., tracks, to $n$ jobs, e.g., sensor measurements. This combinatorial optimization problem constrains the space of solutions so that each track is assigned to exactly one measurement; leftover measurements may be assigned to a "dummy track" (i.e., declared false alarms), while a track that receives no real measurement corresponds to a missed detection. The multidimensional assignment problem (MDAP), the higher-dimensional extension used for track-to-track association, stipulates that each track from each sensor be assigned exactly once. For multidimensional data association, constraints ensure that each sensor measurement at each time step is assigned to a track exactly once. Unfortunately, the MDAP is NP-hard for dimensions $\geq 3$, whereas there exist many polynomial-time algorithms for the LAP. We will formulate these problems more rigorously in Section \ref{sec: problem form}. The algorithms presented in this survey are for solving the various MDAPs encountered in multi-target tracking, and are generally applicable (with modification) to both data association and track-to-track association. \par It has been suggested that non-kinematic data obtained from sensors can be incorporated into association algorithms to improve performance \cite{bar2007track} \cite{osbome2011track} \cite{mori2014performance} \cite{chong2006metrics}. For example, a classifier can be used to prevent two sensor tracks with different target class labels from being associated, which reduces the number of potential assignments. Appearance information has been used extensively in the computer vision community to improve the performance of data association; see \cite{li2013survey} for an in-depth survey.
We will be discussing data-driven approaches for discovering features to augment association algorithms. Additionally, we will survey optimization algorithms for finding the solution to an MDAP. \begin{table}[t] \caption{Taxonomy of assignment problems in multi-target tracking. LAP := linear assignment problem and \\MDAP := multidimensional assignment problem.} \label{taxonomy of ass problems} \centering \begin{tabular}{lll} \toprule & \textbf{Data Association} & \textbf{Track-to-Track Association} \\ \midrule \textbf{Single-Scan} & LAP (1-2 sensors), MDAP ($\geq 3$ sensors) & LAP (2 sensors), MDAP ($\geq 3$ sensors) \\ \textbf{Multi-Scan} & MDAP ($\geq 1$ sensors) & MDAP ($\geq 2$ sensors) \\ \bottomrule \end{tabular} \end{table} \subsection{Comparison with Related Surveys} There are several surveys related to ours, and we wish to highlight the relationship between the contributions of these surveys and those of our own. Both \cite{poore1994multidimensional} and \cite{poore2006some} provide a detailed treatment of how assignment problems are useful for multi-target tracking. They only go so far as to frame assignment problems in the context of multi-target tracking. There are a number of excellent general surveys on multi-target tracking \cite{luo2014multiple} \cite{yilmaz2006object}; however, the scope of these studies is limited to vision-based tracking and the focus is on all aspects of a tracking solution, whereas our focus is specifically on the data association and track-to-track association problems.
The survey on appearance matching in camera-based multi-target tracking \cite{li2013survey} discusses machine learning methods for improving data association, but it does not cover the recent advances in deep learning that have become ubiquitous in the computer vision tracking community. Finally, \cite{foggia2014graph} surveys recent advances in applying machine learning techniques to graph matching, but the connection to multi-sensor multi-target tracking is not mentioned. \subsection{Roadmap} The presentation of the techniques for solving MDAPs is split into two parts. The first part is focused on the optimization problem of finding an assignment for data association and track-to-track association, and the second part is concerned with methods for learning the assignment costs from data. Hence, the rest of the survey will be organized in the following manner. In Section \ref{sec: problem form}, the various assignment problems in multi-target tracking are carefully defined for the reader. Section \ref{section:opt} begins with a presentation of techniques for solving MDAPs in multi-target tracking that were proposed early on but still remain relevant today, followed by an examination of machine learning-based approaches that are now more prominent. In Section \ref{sec: learning ass costs}, we present multiple methods for learning assignment costs in single-sensor and multi-sensor tracking from data. Section \ref{sec: benchmarks} provides a brief overview of available datasets for single-sensor and multi-sensor multi-target tracking, with emphasis placed on ITS applications. Finally, Section \ref{sec: conclusions} concludes with a discussion on future research directions. For a visual representation of the organization of the survey, see Figure \ref{survey tree}. 
\begin{figure} \centering \begin{tikzpicture}[every node/.style = {shape=rectangle, rounded corners, draw, align=center}, edge from parent/.style={draw,-latex}] \tikzset{level 2/.style={sibling distance=-22pt}} \tikzset{level 3/.style={sibling distance=-24pt}} \tiny \Tree [.{Machine Learning for Assignment Problems\\in Multi-Target Tracking} [.{Problem Formulation} [. {Assignment\\Costs} [.\node[yshift=2em]{Single-sensor}; \node[yshift=2em]{Pre-Deep-Learning}; {Post-Deep Learning} ] {Multi-sensor} ] [.{Optimization} \node[yshift=2.5em]{Greedy Randomized Search}; {Lagrangian Relaxation} \node[yshift=-2em]{Network Optimization}; \node[yshift=-5em]{Conditional Random Fields}; \node[yshift=-3em]{Belief Propagation}; \node[yshift=-1em]{Deep Neural Networks}; \node[yshift=1em]{MCMC}; ] ] ] \end{tikzpicture} \caption{Organization of the survey. We begin by surveying a wide variety of optimization techniques, followed by learning algorithms for assignment costs. \label{survey tree}} \end{figure} \section{Problem Formulation} \label{sec: problem form} We will first formally introduce the linear assignment problem (LAP) for single-sensor data association and track-to-track association with two sensors. Following this, we will examine the various MDAP formulations. \subsection{Linear Assignment} Consider a scenario where there are $m$ existing tracks and $n$ new sensor measurements at time $k$, $k = 1,...,T$. We assume that there is a matrix $C_k \in \mathbb{R}^{m \times n}$, with entries $c^{ij}_k$ representing the cost of assigning measurement $j$ to track $i$ at time $k$ (Figures \ref{fig:LAP_1} and \ref{fig:LAP_2}). The goal is to find the optimal assignment of measurements to tracks so that the total assignment cost is minimized.
Using binary decision variables $x^{ij} \in \{0, 1\}$ to represent an assignment of a measurement to a track, we end up with a 0-1 integer program \begin{equation} \min_{x \in X} \sum_{i=1}^m \sum_{j=1}^n c_k^{ij} x^{ij} \end{equation} with constraints \begin{equation} \begin{aligned} \sum_{i=1}^{m} x^{ij} &= 1, \hspace{3ex} j = 1, ..., n\\ \sum_{j=1}^{n} x^{ij} &= 1, \hspace{3ex} i = 1, ..., m \end{aligned} \end{equation} where $x \in X$ is a binary assignment matrix. There are $m + n$ constraints forcing the rows and columns of $X$ to sum to 1. Note that $C_k$ is not required to be a square matrix. To capture the fact that some sensor measurements will either be false alarms or missed detections, a dummy track is added to the set of existing tracks, so that $C_k$ is now an $(m+1) \times n$ matrix. The entries in the $(m+1)$\textsuperscript{th} row represent the costs of classifying measurements as false alarms. Missed detections are usually handled by forming validation gates around the $m$ tracks (see \cite{blackman1999design}, Section 6.3). These gates can be used to determine, with some degree of confidence, whether any of the new measurements might have originated from a track. The canonical approach is to use elliptical gates, which are typically computed from the covariance estimates provided by a Kalman Filter. In video-based tracking, a similar tactic is to suppress object detections with low confidence values. \par Even though the number of feasible assignments grows factorially with the numbers of tracks and measurements, many polynomial-time algorithms exist for finding the globally optimal assignment matrix. Most famous is the $O(n^3)$ Hungarian, or Kuhn-Munkres, algorithm \cite{NAV:NAV3800020109} \cite{munkres1957algorithms}. Another popular method is the Auction algorithm, introduced by Bertsekas in \cite{bertsekas1992auction}. These algorithms are fast and are easy to integrate into real-time multi-target tracking solutions.
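To make the 0-1 integer program above concrete, the following sketch finds the optimal assignment by exhaustive enumeration. This is purely illustrative (the function name and cost matrix are our own); in practice one would use an $O(n^3)$ Hungarian-algorithm implementation such as SciPy's \texttt{linear\_sum\_assignment}.

```python
from itertools import permutations

def brute_force_lap(cost):
    # Enumerate every one-to-one assignment of tracks (rows) to
    # measurements (columns) and keep the cheapest. O(n!) time, so this
    # only illustrates the problem; the Hungarian algorithm finds the
    # same optimum in O(n^3).
    n = len(cost)
    best_total, best = float("inf"), None
    for perm in permutations(range(n)):  # perm[i] = measurement for track i
        total = sum(cost[i][perm[i]] for i in range(n))
        if total < best_total:
            best_total, best = total, perm
    return best_total, best

# Three existing tracks, three new measurements (illustrative costs).
C = [[4, 1, 3],
     [2, 0, 5],
     [3, 2, 2]]
total, assignment = brute_force_lap(C)  # total == 5, assignment == (1, 0, 2)
```

Here track 0 is matched to measurement 1, track 1 to measurement 0, and track 2 to measurement 2, which is exactly the binary assignment matrix the integer program would return.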
However, by only considering the previous time step when assigning measurements or tracks, we are making a Markovian assumption about the information needed to find the optimal assignment. In situations with heavy clutter, frequent false alarms, missed detections, and occlusion, the performance of these algorithms deteriorates significantly. Indeed, it may be beneficial to instead use a sliding window of previous and future track states to construct assignment costs that model the relationship between tracks and new sensor measurements more accurately. Instead of updating the assignment within the sliding window at each time step, an alternative approach is to simply delay making a decision within the sliding window. In the sequel, we describe how this affects the formulation of the assignment problem. As indicated in Table \ref{taxonomy of ass problems}, the single-scan track-to-track association problem with two sensors is also an LAP, where $m$ and $n$ represent the numbers of tracks maintained by each sensor. Similar methods for handling false alarms and missed detections in data association can be used for track-to-track association with uneven sensor track lists, i.e., $m \neq n$. If the assignment costs are known, an optimal track assignment can be found in polynomial time using one of the previously mentioned algorithms.
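The elliptical gating mentioned earlier can be sketched as follows. This is a minimal 2-D version under assumed names: a measurement passes the gate when its squared Mahalanobis distance from the predicted measurement falls below a chi-square threshold (9.21 is the 99% quantile for two degrees of freedom), and the innovation covariance would come from a Kalman Filter in practice.

```python
def in_elliptical_gate(pred, meas, S, gate=9.21):
    # Accept a 2-D measurement when its squared Mahalanobis distance
    # from the predicted measurement, d^2 = v' S^{-1} v with innovation
    # v, is below the gate threshold (9.21 ~ 99% chi-square, 2 dof).
    vx, vy = meas[0] - pred[0], meas[1] - pred[1]
    (a, b), (c, d) = S  # invert the 2x2 innovation covariance by hand
    det = a * d - b * c
    d2 = (vx * (d * vx - b * vy) + vy * (-c * vx + a * vy)) / det
    return d2 <= gate

# With unit covariance, an offset of (1, 1) gives d^2 = 2 (inside the
# gate), while an offset of (4, 0) gives d^2 = 16 (outside).
near = in_elliptical_gate((0.0, 0.0), (1.0, 1.0), [[1.0, 0.0], [0.0, 1.0]])
far = in_elliptical_gate((0.0, 0.0), (4.0, 0.0), [[1.0, 0.0], [0.0, 1.0]])
```

Measurements that fail every track's gate are the candidates for false alarms or new targets, which shrinks the cost matrix passed to the assignment solver.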
This scenario is the one most commonly encountered, especially in video-based tracking. Let the number of scans, or the temporal sliding window size, be given by $T$. Since the objective is to associate new sensor measurements with a set of existing tracks, the resulting MDAP has $T + 1$ dimensions (Figure \ref{fig:MDAP}). When $T \geq 2$, the assignment problem is NP-hard \cite{kammerdiner2008}. \par Let the set of noisy measurements at time $k$ be referred to as \textit{scan} $k$ and be represented by $Z_k = \{z_k^{i}\}$, where $z_k^{i}$ is the $i$\textsuperscript{th} measurement of scan $k$, $i = 1, ..., M_k$. $M_k$ is the number of measurements in each scan, i.e., $\lvert Z_k \rvert = M_k$. The main assumption we are making is that each object is responsible for at most one measurement within each scan. We let $Z^T = \{Z_1, ..., Z_T\}$ represent the collection of all measurements in the sliding window of size $T$. Let $\Gamma$ be the set of all possible partitions of the set $Z^T$. We seek an optimal partitioning $\gamma^* \in \Gamma$, also called a hypothesis, of $Z^T$ into tracks. Note that a track is just an ordered set of measurements $\{z_1^{i}, z_2^{i}, ..., z_T^{i} \}$; one measurement from each scan is attributed to each track. Hence, a partition $\gamma$ represents a valid collection of tracks that adhere to the MDAP constraints. Now, we define $\gamma^j$ to be the $j$\textsuperscript{th} track in $\gamma$. Following this, we can define a cost for each track $\gamma^j$ in a partition as $c_{i_1, i_2, ..., i_T}$, where the indices $i_1, i_2, ..., i_T$ indicate which measurements from each scan belong to this particular track. This represents the cost of track $j$ being assigned measurement $i_1$ from scan 1, measurement $i_2$ from scan 2, and so on. Crucially, the multidimensional constraints prevent measurements from being assigned to two different tracks and ensure that each measurement is matched to a track.
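For intuition, the space of feasible partitions just described can be enumerated exhaustively on toy instances: fixing the measurement order of scan 1 and permuting every later scan yields exactly the partitions satisfying the one-measurement-per-scan-per-track constraints. This sketch assumes equal scan sizes, no dummy measurements, and an arbitrary illustrative cost function of our own choosing.

```python
from itertools import permutations, product

def brute_force_mdap(track_cost, T, M):
    # Enumerate all feasible partitions of T scans, each with M
    # measurements, into M tracks: fix scan 1 in natural order and try
    # every permutation of scans 2..T. O(M!^(T-1)) candidates, which
    # illustrates the search space that makes the MDAP NP-hard.
    best_total, best_tracks = float("inf"), None
    for perms in product(permutations(range(M)), repeat=T - 1):
        tracks = [tuple([i] + [p[i] for p in perms]) for i in range(M)]
        total = sum(track_cost(t) for t in tracks)
        if total < best_total:
            best_total, best_tracks = total, tracks
    return best_total, best_tracks

# Toy cost: penalize a track whose measurement index jumps between scans.
jump_cost = lambda track: sum(abs(a - b) for a, b in zip(track, track[1:]))
total, tracks = brute_force_mdap(jump_cost, T=3, M=2)
# total == 0; tracks == [(0, 0, 0), (1, 1, 1)]
```

Under this toy cost, the optimal partition keeps each track on the same measurement index across all three scans, and every measurement appears in exactly one track, as the constraints require.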
If we use binary variables $\rho_{i_1, i_2, ..., i_T} \in \{0, 1\}$ to indicate if a track is present in a partition, then we can represent the MDAP objective as \begin{equation} \label{SSMDAP} \min_{\gamma \in \Gamma} \sum_{i_1 = 1}^{M_1} \ldots \sum_{i_T = 1}^{M_T} c_{i_1, i_2, ..., i_T}\rho_{i_1, i_2, ..., i_T} \end{equation} with constraints \begin{equation} \label{SSMDAP constraints} \begin{aligned} \sum_{i_2 = 1}^{M_2} \ldots \sum_{i_T = 1}^{M_T} \rho_{i_1, i_2, ..., i_T} &= 1; \hspace{1cm} i_1 = 1, ..., M_1 \\ \sum_{i_1 = 1}^{M_1} \sum_{i_3 = 1}^{M_3} \ldots \sum_{i_T = 1}^{M_T} \rho_{i_1, i_2, ..., i_T} &= 1; \hspace{1cm} i_2 = 1, ..., M_2 \\ &\vdots \hspace{2cm} \vdots\\ \sum_{i_1 = 1}^{M_1} \ldots \sum_{i_{T-1} = 1}^{M_{T-1}} \rho_{i_1, i_2, ..., i_T} &= 1; \hspace{1cm} i_T = 1, ..., M_T \\ \end{aligned} \end{equation} The solution $\rho$ to this MDAP is the multidimensional extension of the binary assignment matrix. Put simply, one may view $\rho$ as a multidimensional array with binary entries such that the sum along each dimension is 1. Similarly to the LAP, we can augment each scan by including a $z_k^0$ dummy measurement in the set of detections at time $k$ to address false alarms. This is useful for identifying track birth and track death as well, but care should be taken when defining the cost for assigning measurements as false alarms or missed detections to avoid high numbers of false positives and false negatives. \par It is common to solve for an approximate solution within a fixed-sized sliding window $T$, then shift the sliding window forward in time by $t < T$ so that the new sliding window overlaps with the old region. This allows tracks to be linked over time, and it provides a compromise between "offline" tracking, when $T$ is set to the length of an entire sequence of measurements, and "online" tracking, when $T = 1$. \par The other form of the MDAP we are interested in is multi-sensor association with $S \geq 3$ sensors.
This scenario is common in centralized tracking systems, where sensors that are distributed around a surveillance region report raw measurements to a central node \cite{sorokin2009mathematical} \cite{boginski2011sensors}. When each sensor sends its local tracks to a central node for track association and fusion, an MDAP must be solved. In this case, the dimensionality of the MDAP is equal to $S$, and hence the problem is NP-hard. The main difference between this problem and the previous data association problem is that it deals solely with tracks, as opposed to new sensor measurements from all sensors. Multi-scan track-to-track association with two sensors is also an MDAP, as is multi-scan multi-sensor data association (Table \ref{taxonomy of ass problems}), but we omit these cases for brevity in our formulation, and because they can be defined quite similarly to what is presented next. \par Following \cite{deb1997generalized}, in this scenario there are $S \geq 3$ sensors, each maintaining a set of local tracks and using a sliding window of size $T \geq 1$. We define $X_k^s = \{x_k^{i,s}\}$, $s = 1, ..., S$, to represent the set of track state estimates produced by sensor $s$ at time $k$. We have $i = 1, ..., N_s$, where $N_s$ is the number of tracks being maintained by sensor $s$ and $x_k^{i,s}$ is interpreted as the $i$\textsuperscript{th} track of sensor $s$ at scan $k$. Then, for each sensor, we have $X^{T,s} = \{X_1^s, ..., X_T^s\}$, which represents the collection of track state estimates within the sliding window. We seek an optimal partitioning $\gamma^* \in \Gamma$ of $X^{T} = \{X^{T,1}, ..., X^{T,S}\}$ of tracks over all scans and sensors that minimizes the total assignment cost, and we can define a partial assignment hypothesis in a partition $\gamma$ as $\gamma^l = \{ \{x_1^{j,1}, x_1^{j,2}, ..., x_1^{j,S}\}, ..., \{ x_T^{j,1}, x_T^{j,2}, ..., x_T^{j,S}\}\}$.
In words, this states that the $j$\textsuperscript{th} track of sensor 1 from scan 1, the $j$\textsuperscript{th} track of sensor 2 from scan 1, and so on, all correspond to the same underlying track $l$ in scan 1. Likewise, this interpretation extends to all subsequent scans. As a quick example, suppose that there are 3 sensors each maintaining 3 tracks, and that $T = 1$. Then a potential hypothesis $\gamma$, or assignment, is $\{ \{x^{1,1}, x^{2,2}, x^{1,3}\}, \{x^{2,1}, x^{1,2}, x^{2,3}\}, \{x^{3,1}, x^{3,2}, x^{3,3}\} \}$. This hypothesis makes the claim that track 1 from sensor 1, track 2 from sensor 2, and track 1 from sensor 3 were all generated by "true" track 1. The assignments for the other two tracks can be identified similarly. Note that the number of "true" targets in the surveillance region must either be known \textit{a priori} or estimated. Considering the simplest case of $T = 1$, we can write the cost for a partial hypothesis as $c_{i_1, i_2, ..., i_S}$. Increasing $T$ to include more than one scan corresponds to adding extra dimensions to the problem. We can use binary variables as before, $\rho_{i_1, i_2, ..., i_S} \in \{0, 1\}$, to indicate whether a particular partial hypothesis is present in $\gamma$.
The MDAP can then be written as \begin{equation} \label{MSMDAP} \min_{\gamma \in \Gamma} \sum_{i_1 = 1}^{N_1} \ldots \sum_{i_S = 1}^{N_S} c_{i_1, i_2, ..., i_S}\rho_{i_1, i_2, ..., i_S} \end{equation} with constraints \begin{equation} \label{MSMDAP constraints} \begin{aligned} \sum_{i_2 = 1}^{N_2} \ldots \sum_{i_S = 1}^{N_S} \rho_{i_1, i_2, ..., i_S} &= 1; \hspace{1cm} i_1 = 1, ..., N_1 \\ \sum_{i_1 = 1}^{N_1} \sum_{i_3 = 1}^{N_3} \ldots \sum_{i_S = 1}^{N_S} \rho_{i_1, i_2, ..., i_S} &= 1; \hspace{1cm} i_2 = 1, ..., N_2 \\ &\vdots \hspace{2cm} \vdots\\ \sum_{i_1 = 1}^{N_1} \ldots \sum_{i_{S-1} = 1}^{N_{S-1}} \rho_{i_1, i_2, ..., i_S} &= 1; \hspace{1cm} i_S = 1, ..., N_S \\ \end{aligned} \end{equation} As with the multi-scan data association problem, the solution takes the form of a multidimensional binary array. As before, the number of potential assignment hypotheses in an MDAP can be reduced with gating. Even with gating, solving an MDAP exactly for real-time tracking is generally infeasible. An analysis of the number of local minima in MDAPs with random costs shows that it grows exponentially with the number of dimensions \cite{grundel2007number}. Notably, the MDAP is closely related to other NP-hard combinatorial optimization problems, such as Maximum-Weight Independent Set and Set Packing \cite{collins2012multitarget}. In the next subsection, we will show how the costs can be interpreted as probabilities; this will help motivate the use of approximate inference techniques for finding \textit{maximum a posteriori} (MAP) solutions to MDAPs. However, we will begin our discussion of optimization approaches in Section \ref{section:opt} with techniques that do not require any assumptions about the nature of the cost function. \subsection{Assignment Costs} The assignment cost function has a major impact on tracking performance.
In this subsection, we will introduce various perspectives towards defining assignment costs, specifically highlighting probabilistic approaches. \subsubsection{Kinematic Costs} In situations where sensor measurements only consist of noisy estimates of kinematic data from targets (e.g., position and speed), a probabilistic framework can be used to recover the unobservable state of the targets. The most common approach is to handle the uncertainty in the sensor measurements and target kinematics with a stochastic Bayesian recursive filter; see \cite{mahler2007statistical} for a comprehensive overview. The Kalman Filter--probably the most popular filter of this flavor--provides the means for updating the posterior distribution over the target state given the measurement likelihood, i.e., $P(x_k | z_{1:k}) \propto P(z_k | x_k) P(x_k | z_{1:k-1})$, where $z_{1:k}$ denotes the measurements received up to time $k$. We are using the same notation as before, such that $x_k$ represents the target state at time $k$ and $z_k$ is the measurement at time $k$. One of the reasons for the popularity of the Kalman Filter is that by assuming that all distributions of interest are Gaussian, the posterior update can be computed in closed form. Now, recall that a partial association hypothesis $\gamma^j$ for the multi-scan single-sensor data association problem assigns $T$ measurements to a single track within the sliding window of length $T$. The canonical cost function for data association is to minimize the following negative log-likelihood ratio: \begin{equation} \label{eq:NLL} c_{i_1, i_2, ..., i_T} = -\log \frac{ P(\gamma^j | z_1^{i}, z_2^{i}, ..., z_T^{i})}{P(\gamma^0 | z_1^{i}, z_2^{i}, ..., z_T^{i}) }, \hspace{1em} (\gamma^j, \gamma^0) \in \gamma. \end{equation} The partial hypothesis $\gamma^j$ represents the $j$\textsuperscript{th} track of the hypothesis $\gamma$, and $\gamma^0$ represents a dummy track where all measurements attributed to it are considered false alarms.
Assuming the sensor detects each target with probability 1 and a uniform prior over all assignment hypotheses, the likelihood that the j\textsuperscript{th} track generated the assigned measurements is \begin{equation} P(\gamma^j | z_1^{i}, z_2^{i}, ..., z_T^{i}) \propto P( z_1^{i}, z_2^{i}, ..., z_T^{i} | \gamma^j). \end{equation} Assuming independence of the measurements and track states between time steps, we can decompose the likelihood that the measurements originated from track $\gamma^j$ as \begin{equation} \label{measurement likelihood} P( z_1^{i}, z_2^{i}, ..., z_T^{i} | \gamma^j) = \prod_{k=1}^{T} P(z_k^{i} | x_k) P(x_k | j). \end{equation} In the Kalman Filter and its extensions, the right-hand side has an attractive closed-form representation as a Mahalanobis distance between the predicted and observed measurements, scaled in each dimension of the measurement space by the state and measurement covariances. This can be derived by plugging Equation \ref{measurement likelihood} into the negative log-likelihood ratio in Equation \ref{eq:NLL}. \par In track-to-track association, the conventional cost function associated with a partial hypothesis is the likelihood that the tracks from multiple sensors were all generated by the same "true" target. When $S = 2$, the simplest approach is to consider the random variable $\triangle_{12} = x^1 - x^2$, which is the difference between the track state estimates from sensor 1 and sensor 2. When the track state estimates are Gaussian random variables, $\triangle_{12}$ is also Gaussian. The cost function becomes the likelihood that $\triangle_{12}$ has zero mean and covariance given by $\Sigma = \Sigma_1 + \Sigma_2 - \Sigma_{12} - \Sigma_{21}$ \cite{bar2004multisensor}. The first two terms of the covariance are the uncertainties of the track state estimates, and the last two terms are the cross-covariances.
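This $S = 2$ cost can be written directly as a squared Mahalanobis distance (a minimal sketch, assuming Gaussian estimates and $\Sigma_{21} = \Sigma_{12}^{\intercal}$; the helper name is ours):

```python
import numpy as np

def track_to_track_cost(x1, x2, S1, S2, S12=None):
    """Sketch of the S = 2 track-to-track cost: the negative
    log-likelihood (up to a constant) that delta = x1 - x2 is
    zero-mean Gaussian with covariance S1 + S2 - S12 - S21,
    i.e. a squared Mahalanobis distance."""
    d = x1 - x2
    S12 = np.zeros_like(S1) if S12 is None else S12
    Sigma = S1 + S2 - S12 - S12.T      # assumes S21 = S12^T
    return float(d @ np.linalg.solve(Sigma, d))
```

Identical estimates give zero cost, and the cost grows with the covariance-weighted separation of the two estimates.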
A straightforward way to extend to the $S \geq 3$ case is to use star-shaped costs $\triangle_{1S} = \sum_{i=2}^{S} \triangle_{1i}$ \cite{walteros2014integer}. For the Gaussian case, the cost can also be written in closed form as a sum of squared Mahalanobis distances between the track state estimates \cite{kaplan2008assignment} \cite{deb1997generalized}: \begin{equation} \label{sum-pairwise} c_{i_1, i_2, ..., i_S} = \sum_{j=2}^{S} \triangle_{1j}^{\intercal}\Sigma_{1j}^{-1}\triangle_{1j} \end{equation} In the Bayesian setting, minimizing Equations \ref{eq:NLL} and \ref{sum-pairwise} is analogous to finding the MAP assignment hypothesis; this will be covered in more detail in Section \ref{section:opt}. \subsubsection{Feature-augmented Costs} It is often the case in multi-target tracking that sensors generate high-dimensional observations of the surveillance region from which target information must be extracted. The most obvious example of this is the image data generated by a video surveillance system. This data, when featurized, can be used to augment or replace the kinematic costs mentioned in the previous subsection. The goal of doing this is to improve the association accuracy, and ultimately the overall tracking performance. \par Due to the high-dimensionality of the raw measurements, almost all such methods attempt to \textit{learn} a pairwise cost between measurements or tracks using features extracted from the data. This pairwise cost can represent the association probability of the two objects, or simply some notion of similarity, e.g., a distance. The problem of learning assignment costs for data association or track-to-track association can be formulated as a machine learning problem in many ways; the goal of Section \ref{sec: learning ass costs} is to highlight the approaches that have proven most useful.
For example, one technique is to use metric learning to transform the high-dimensional sensor measurements into a lower-dimensional geometric space where a Euclidean distance can be used as the assignment cost function. Learning pairwise costs from data is heavily used in the multi-target tracking computer vision community, partially due to the ease with which features can be extracted from images \cite{li2013survey}. \par There are multiple ways to incorporate learned pairwise costs into a MDAP solver. One common approach is as follows. The probability of association for a pair of measurements or tracks $\Lambda_i$ and $\Lambda_j$ can be written as a joint pdf \cite{osbome2011track}; assuming independence of the kinematic (K) and non-kinematic (NK) components of this probabilistic cost function, the resulting negative log-likelihood pairwise cost is: \begin{equation} \begin{aligned} \label{eq:feature-aug} c_{ij} &= -\log P(\Lambda_i , \Lambda_j) \\ &= -\log \big( P_{\text{K}}(\Lambda_i , \Lambda_j) P_{\text{NK}}(\Lambda_i , \Lambda_j) \big) \\ &= -\log P_{\text{K}}(\Lambda_i , \Lambda_j) - \log P_{\text{NK}}(\Lambda_i , \Lambda_j) \end{aligned} \end{equation} Usually, $P_{\text{NK}}(\Lambda_i , \Lambda_j)$ is parameterized by weights $\theta$ and is a function of the features extracted from the sensor data. For example, this probability could be represented as a neural network that outputs a similarity score between 0 and 1. The kinematic component of this pairwise cost, $P_{\text{K}}(\Lambda_i , \Lambda_j)$, could be adapted from Equation \ref{eq:NLL}. \section{Optimization} \label{section:opt} In this section, we will review recent work on a variety of optimization algorithms for solving MDAPs in real-time multi-target tracking systems. Our focus will be on approaches with a machine learning flavor, e.g., approximate inference techniques and deep neural networks, as well as the probabilistic modeling aspects of the problem.
We will start by briefly covering non-probabilistic methods that are useful for contrasting with what is currently popular. The techniques discussed in this section are quite general, and in most cases can be used for both the data association and track-to-track association problems with proper modification. It is important to note that certain modeling assumptions, such as how the assignment cost function is defined, can cause a tracker to make errors regardless of how strong the optimization approach is. \subsection{Greedy Randomized Search} Heuristically searching through the space of valid solutions within a time limit is an attractive way of ensuring both real-time performance and that a good local optimum will be discovered. A search procedure for a MDAP takes as input a problem instance in the form of Equation \ref{SSMDAP} or Equation \ref{MSMDAP} and constructs a valid solution $\gamma$ by adding each legal partial assignment incrementally. The most well-known method, the Greedy Randomized Adaptive Search Procedure (GRASP), was originally introduced in \cite{murphey1997greedy} for multi-sensor multi-target tracking. The idea behind GRASP is to randomly select each partial assignment from lists of greedily chosen candidates to form a solution $\gamma$. Then, a local random search is conducted to attempt to improve this solution. This procedure is repeated until the allotted time runs out or a maximum number of iterations is reached, at which point the best solution that was discovered is returned. GRASP algorithms also use gating techniques to help reduce the search space, and conduct the local searches by permuting a small number of entries within some of the assignments. A parallel implementation of GRASP is described in \cite{oliveira2004randomized}.
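The construct-then-improve loop can be sketched as follows (a minimal sketch on a 2-D assignment problem for brevity; the parameter names and the swap-based local search are our choices, not a faithful reproduction of \cite{murphey1997greedy}):

```python
import random

def grasp_assignment(cost, iters=200, rcl_size=3, seed=0):
    """GRASP sketch: repeatedly build a solution by picking each
    assignment at random from a restricted candidate list of greedily
    ranked choices, then try to improve it by swapping pairs of
    entries, keeping the best solution found overall."""
    rng = random.Random(seed)
    n = len(cost)
    value = lambda sol: sum(cost[i][sol[i]] for i in range(n))
    best, best_val = None, float("inf")
    for _ in range(iters):
        # Greedy randomized construction.
        free, sol = set(range(n)), [None] * n
        for i in range(n):
            ranked = sorted(free, key=lambda j: cost[i][j])
            sol[i] = rng.choice(ranked[:rcl_size])
            free.remove(sol[i])
        # Local search: accept improving pairwise swaps.
        improved = True
        while improved:
            improved = False
            for a in range(n):
                for b in range(a + 1, n):
                    swapped = sol[:]
                    swapped[a], swapped[b] = swapped[b], swapped[a]
                    if value(swapped) < value(sol):
                        sol, improved = swapped, True
        if value(sol) < best_val:
            best, best_val = sol, value(sol)
    return best, best_val
```

The randomized candidate list is what distinguishes GRASP from plain greedy construction: it diversifies the starting points that the local search then refines.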
In \cite{popp2001m} it is suggested that GRASP produces suboptimal solutions of a quality that is not acceptable for real-time performance; however, experiments on modern parallel computer architectures are needed to verify this claim. \par Other greedy search algorithms have been proposed in \cite{perea2011greedy} and \cite{shapero2016adaptive}, based on the semi-greedy track selection (SGTS) algorithm introduced in \cite{caponi2004polynomial}. SGTS-based algorithms first perform the usual greedy assignment algorithm step of sorting potential tracks by track score. Then, they generate a list of candidate hypotheses and return the locally optimal result. This process is repeated iteratively in such a way that the candidate hypotheses generated best represent the solution space. In \cite{perea2011greedy}, an extension for the K-best case is also provided, which enables the use of SGTS-esque algorithms for multiple hypothesis tracking (MHT) \cite{blackman1999design}. The construction of SGTS and its extensions are such that they can provide a solution that is within a guaranteed factor of the optimal solution \cite{perea2011greedy}. \par The main strengths of search algorithms appear to be their simplicity and the extent to which they are embarrassingly parallel. Although these methods are quite general, the advent of more sophisticated techniques that can leverage problem-specific information, and of the hardware necessary to run them in real-time, has most likely contributed to the lack of continued research on GRASP algorithms in the academic tracking community. For a survey of research on GRASP for optimization, see \cite{Resende2016}. \subsection{Lagrangian Relaxation} The multidimensional binary constraints \ref{SSMDAP constraints} and \ref{MSMDAP constraints} pose a significant challenge; a standard technique is to relax the constraints so that a polynomial-time algorithm can be used to find an acceptable sub-optimal solution.
The existence of $O(n^3)$ algorithms \cite{NAV:NAV3800020109} \cite{munkres1957algorithms} \cite{bertsekas1992auction} for the LAP suggests that if the constraints can be relaxed, a reasonably good solution to the MDAP should be obtainable within an acceptable amount of time. Indeed, Lagrangian relaxation \cite{boyd2004convex} algorithms for association in multi-target tracking, proposed in \cite{deb1993multisensor} and \cite{deb1997generalized}, involve iteratively producing increasingly better solutions to the MDAP by successively solving relaxed LAPs and re-enforcing the constraints. A set of Lagrange multipliers for the N-dimensional case, $\mathbf{u} = [u_3, u_4, ..., u_N]$, is introduced to incorporate the relaxed set of constraints into the cost function. Since there are potentially multiple constraints that are not being enforced at each iteration, the obtained solution is an optimistic lower bound on the actual optimal solution, referred to as the dual solution. When the constraints are reapplied, a valid solution is obtained that is an upper bound on the optimal solution, referred to as the primal solution. The idea is then to update $\mathbf{u}$ using subgradient methods (see Appendix A of \cite{deb1997generalized}) and to repeat the procedure until the \textit{duality gap}, the difference between the primal and dual solutions, is below a threshold. To formulate this algorithm for real-time applications, it can also be set to terminate after a maximum number of iterations. \par A parallel implementation of this method for the K-best case was developed in \cite{popp1998adaptive} and \cite{popp2001m}, which enables efficient implementations of MHT algorithms.
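The relaxation loop can be sketched for a 3-D axial problem (a sketch in the spirit of \cite{deb1997generalized}, not a faithful reproduction: the step-size rule and the LAP-based primal recovery are common choices of ours, and SciPy's $O(n^3)$ LAP solver plays the role of the 2-D subproblem):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def lagrangian_relaxation_3d(cost, iters=100):
    """Lagrangian relaxation sketch for a 3-D axial assignment problem.
    The third set of constraints is dualized with multipliers u, so the
    relaxed problem reduces to a 2-D LAP (a lower bound).  A feasible
    primal solution (an upper bound) is recovered by re-assigning the
    third index with a second LAP; u is updated by a subgradient step."""
    n = cost.shape[0]
    u = np.zeros(n)
    best_primal, best_dual = np.inf, -np.inf
    for _ in range(iters):
        # Relaxed problem: for each (i, j), pick the cheapest k.
        reduced = cost - u[None, None, :]
        k_star = reduced.argmin(axis=2)
        d = reduced.min(axis=2)
        rows, cols = linear_sum_assignment(d)
        dual = d[rows, cols].sum() + u.sum()       # lower bound
        best_dual = max(best_dual, dual)
        # Primal recovery: match the n chosen (i, j) pairs to the n k's.
        pair_cost = cost[rows, cols, :]
        prows, ks = linear_sum_assignment(pair_cost)
        best_primal = min(best_primal, pair_cost[prows, ks].sum())
        if best_primal - best_dual < 1e-9:
            break
        # Subgradient: 1 minus how often each k was used when relaxed.
        counts = np.bincount(k_star[rows, cols], minlength=n)
        g = 1.0 - counts
        u = u + ((best_primal - dual) / max(g @ g, 1.0)) * g
    return best_primal, best_dual
```

The returned pair is exactly the primal/dual bracket described above; the duality gap between the two values measures how far the recovered feasible solution can be from optimal.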
A variation on this approach using dual decomposition is proposed in \cite{lau2011multidimensional} where the original MDAP is separated into subproblems that contain copies of the original variables; a constraint is introduced via Lagrangian relaxation that requires copies of the same variable to share the same value. In experiments evaluating the performance of the dual decomposition method on a generic tracking problem with six closely spaced targets, it performed comparably with the Lagrangian relaxation algorithm from \cite{deb1997generalized}. \par Lagrangian relaxation has also been used to convert Equation \ref{SSMDAP} into a global network flow problem in \cite{butt2013multi}. The motivation behind this approach is a desire to incorporate higher-order motion smoothness constraints, beyond what is possible when only pairwise costs are considered in multi-scan problems. The minimum-cost network flow problem that results from the relaxation can be solved in polynomial time; updates to the Lagrange multipliers enforcing the constraints are handled by subgradient methods. In the next subsection, we go into more detail on network optimization, one of the leading approaches to solving multi-target tracking association problems. \subsection{Network Optimization} \label{section:graph-cut} \begin{figure} \includegraphics[scale=0.35]{network-flow-2.png} \caption{A network flow graph for multi-scan data association (three scans depicted). The black arcs represent enter/exit edges for a potential track. The red arcs are measurement/observation edges, and the blue arcs are transition edges between measurements. Reproduced from \cite{zhang2008global} with permission.
\label{network-flow}} \end{figure} A popular approach to solving MDAPs for data association (Equation \ref{SSMDAP}) in the computer vision tracking community is to transform the problem into finding a minimum-cost network flow \cite{jiang2007linear} \cite{zhang2008global} \cite{pirsiavash2011globally} \cite{berclaz2011multiple} \cite{wang2015learning} \cite{wang2017tracklet} \cite{wu2012coupling} \cite{butt2013multi} \cite{schulter2017deep} \cite{choi2015near}. In the corresponding network, detections at each discrete time step generally become the nodes of the graph, and a complete flow path represents a target track, or trajectory. The amount of flow sent from the source node to the sink node corresponds to the number of targets being tracked, and the total cost of the flow on the network corresponds to the negative log-likelihood of the association hypothesis. The globally optimal solution to a minimum-cost network flow problem can be found in polynomial time, e.g., with the push-relabel algorithm. \par Another benefit of using minimum-cost network flow is that the graph can be constructed to significantly reduce the potential number of association hypotheses by limiting transition edges between nodes with a spatio-temporal nearness criterion, similar to gating. Furthermore, occlusion can be explicitly modeled by adding nodes to the graph corresponding to the case where a target is partially or fully occluded by another target for some amount of time. A sliding window approach can be used for real-time performance, rather than using the complete history of previous detections. To help illuminate the mapping from Equation \ref{SSMDAP} to a network flow problem, we adapt the following equations from \cite{zhang2008global}, rewritten using the notation from Section \ref{sec: problem form}. \par Recall that we defined a data association hypothesis $\gamma$ as a partitioning of the set of all available measurements $Z^T$.
Then, a MAP formulation of the MDAP for data association is given by \begin{equation} \label{map network objective} \begin{aligned} \gamma^* = & \argmax_{\gamma \in \Gamma} {\color{red} P(Z^T | \gamma)} \, {\color{blue} \prod_{\mathcal{T}_m \in \gamma} P(\mathcal{T}_m)} \\ & \textrm{s.t. } {\color{violet} \mathcal{T}_m \cap \mathcal{T}_n = \emptyset, \forall m \neq n} \end{aligned} \end{equation} where the \textcolor{blue}{product over tracks} in the objective reflects an assumption of track motion independence, and the potentially prohibitive \textcolor{violet}{constraint} guarantees that no two tracks ever intersect. It is possible to derive the \textcolor{red}{measurement likelihood} using Equation \ref{measurement likelihood}; in \cite{zhang2008global}, it is factored as $P(Z^T | \gamma) = \prod_z P(\{z \in Z^T\} | \gamma) $, where each term in this product is a Bernoulli distribution with parameter $\beta$ encoding the probability of false alarm and missed detection. The track probabilities $P(\mathcal{T}_m)$ are modeled as Markov chains to capture track initialization, termination, and state transition probabilities. A network flow graph can now be defined as a graph with source $s$ and sink $t$ as follows. For every measurement $z_k^{i} \in Z^T$, create two nodes $u_r, v_r$, create an arc $(u_r, v_r)$ with cost $c(u_r, v_r)$ and flow $f(u_r, v_r)$, an arc $(s, u_r)$ with cost $c(s, u_r)$ and flow $f(s, u_r)$, and an arc $(v_r, t)$ with cost $c(v_r, t)$ and flow $f(v_r, t)$. For every transition $P(z_{k+1}^{i}|z_k^{i}) \neq 0$, create an arc $(v_r, u_s)$ with cost $c(v_r, u_s)$ and flow $f(v_r, u_s)$. An example of such a graph is given in Figure \ref{network-flow}.
The flows $f$ are indicator functions defined by \begin{equation} \begin{aligned} f(s,u_r) &= \begin{cases} 1 & \text{if } \exists \mathcal{T}_m \in \mathcal{T}, \mathcal{T}_m \text{ starts from } u_r \\ 0 & \text{otherwise} \\ \end{cases} \\ f(v_r, t) &= \begin{cases} 1 & \text{if } \exists \mathcal{T}_m \in \mathcal{T}, \mathcal{T}_m \text{ ends at } v_r \\ 0 & \text{otherwise} \\ \end{cases} \\ f(u_r, v_r) &= \begin{cases} 1 & \text{if } \exists \mathcal{T}_m \in \mathcal{T}, z_k^{i} \in \mathcal{T}_m \\ 0 & \text{otherwise} \\ \end{cases} \\ f(v_r, u_s) &= \begin{cases} 1 & \text{if } \exists \mathcal{T}_m \in \mathcal{T}, z_{k+1}^{i} \text{ comes after } z_k^{i} \text{ in } \mathcal{T}_m \\ 0 & \text{otherwise} \\ \end{cases} \end{aligned} \end{equation} and the costs are defined as \begin{equation} \label{eq:link-costs} \begin{aligned} &c(s, u_r) = -\log P_{\text{start}}(z_k^{i}) & &c(v_r, t) = -\log P_{\text{end}}(z_k^{i}) \\ &c(u_r, v_r) = \log \frac{\beta_r}{1 - \beta_r} & &c(v_r, u_s) = -\log P_{\text{link}}(z_{k+1}^{i}|z_k^{i}) \end{aligned} \end{equation} and can be derived by taking the logarithm of Equation \ref{map network objective}; see Section 3.2 in \cite{zhang2008global} for more details. The minimum cost flow through the network corresponds to the assignment $\gamma^*$ with the maximum log-likelihood. \par Quite a few variations on this model have been proposed in the literature. One is described in \cite{jiang2007linear}, in which a subgraph is created for each track in the surveillance region and occlusion is modeled by adding special nodes to the graphs. A linear programming relaxation with a sliding-window heuristic then enables approximate global solutions to be found in real-time. A limitation of this approach is the requirement of knowing \textit{a priori} the number of tracks in the surveillance region, as well as the poor worst-case complexity of the simplex method. 
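The construction above can be exercised on a toy two-frame instance (a sketch: the node names, integer costs, and two-target demand are all hypothetical, and integer weights stand in for scaled negative log-likelihoods since \texttt{networkx}'s network simplex solver expects integral data):

```python
import networkx as nx

def track_flow_graph(obs_costs, link_costs, start_cost, end_cost, n_targets):
    """Toy min-cost-flow graph in the style of Figure network-flow:
    each measurement r becomes an observation arc (u_r, v_r), and each
    unit of flow from source to sink traces out one track."""
    G = nx.DiGraph()
    G.add_node("s", demand=-n_targets)   # source supplies n_targets units
    G.add_node("t", demand=n_targets)    # sink absorbs them
    for r, c in obs_costs.items():
        G.add_edge(("u", r), ("v", r), capacity=1, weight=c)      # observation arc
        G.add_edge("s", ("u", r), capacity=1, weight=start_cost)  # track start
        G.add_edge(("v", r), "t", capacity=1, weight=end_cost)    # track end
    for (r, q), c in link_costs.items():
        G.add_edge(("v", r), ("u", q), capacity=1, weight=c)      # transition arc
    return G

# Hypothetical instance: measurements a, b at time 1 and c, d at time 2;
# linking a-c and b-d is much cheaper than the crossing links.
obs = {"a": -10, "b": -10, "c": -10, "d": -10}
links = {("a", "c"): 1, ("a", "d"): 8, ("b", "c"): 8, ("b", "d"): 1}
G = track_flow_graph(obs, links, start_cost=2, end_cost=2, n_targets=2)
flow = nx.min_cost_flow(G)
```

The minimum-cost flow routes one unit through a-c and one through b-d, i.e., the cheapest (maximum-likelihood) pair of tracks.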
The method from \cite{pirsiavash2011globally} further optimizes the approach introduced in \cite{zhang2008global} to reduce the run-time complexity. A comparable approach to this one, from \cite{berclaz2011multiple}, formulates the problem as finding the K-shortest paths through the flow graph. In \cite{collins2012multitarget}, it is argued that the popular network flow model over-relies on appearance modeling and pairwise costs; the authors offer a variation on the network flow approach that uses a more general cost function. In Section \ref{sec: learning ass costs}, we will go into the details of recent works that propose a variety of machine learning techniques to obtain the link costs (Equation \ref{eq:link-costs}) in network flow graphs. Network optimization techniques offer a good trade-off between complexity, ease of implementation, and performance. \subsection{Conditional Random Fields} \label{section:crf} Probabilistic graphical models provide us with a powerful set of tools for modeling spatio-temporal relationships amongst sensor measurements in data association and amongst tracks in track-to-track association. Indeed, conditional random fields (CRFs), a class of Markov random fields \cite{Lafferty:2001:CRF:645530.655813}, have been used extensively for solving MDAPs in visual tracking \cite{milan2016multi} \cite{yang2012online} \cite{yang2011learning} \cite{le2016long} \cite{choi2015near} \cite{osep2017combined}. A CRF is an undirected graphical model, often used for structured prediction tasks, that can represent a conditional probability distribution between sets of random variables. CRFs are well-known for their ability to exploit grid-like structure in the underlying graphical model. We define a CRF over a graph $G = (V, E)$ with nodes $x_{v \in V} \in X$ such that each node emits a label $y \in Y$. For simplicity of notation, we refer to nodes as $x$ and omit the subscript.
The labels take on values from a discrete set, e.g., $\{0, 1\}$; in the context of multi-target tracking, a realization of labels $\mathbf{y}$ usually corresponds to an assignment hypothesis. A key theorem concerning random fields, the Hammersley-Clifford theorem, states that the probability distribution being modeled can be written in terms of the cliques $c$ of the graph \cite{hammersley1971markov}. For example, in chain-structured graphs, each pair of nodes and corresponding edge is a clique. \par CRFs, like the network flow models discussed in the previous subsection, are essentially a tool for modeling probabilistic relationships between a collection of random variables, and hence still require a separate optimization process for handling training and inference (such as the graph cut algorithm \cite{boykov2004experimental} or message passing algorithms). We will focus on presenting how the data association problem is mapped onto a CRF and direct the reader to other sources such as \cite{boykov2004experimental} for details on how to do approximate inference for these models. One of the benefits of using graphical models is that we have the flexibility to construct our graph using either sensor measurements, tracklets (measurements that are partially associated to form a "sub"-track), or full tracks. Tracklets are a common choice for CRFs since they give an attractive hierarchical quality to the tracking solution; low-level measurements are first associated into tracklets via, e.g., the Hungarian algorithm, and then stitched together into full tracks via a CRF. By working at a higher level of abstraction, the original MDAP constraints \ref{SSMDAP constraints} and \ref{MSMDAP constraints} are reformulated; all that is needed at the higher level is to ensure that each tracklet is associated to one and only one track. This can also reduce processing time, aiding real-time operation.
\par Each clique $c$ in the graph has a clique potential $\psi_c$ associated with it; usually, the clique potentials are written as the product of unary terms $\psi_s$ and pairwise terms $\psi_{s,t}$. It is common to assume a log-linear representation for the potentials, i.e., $\psi_c = \exp (w_c^\intercal \phi (x, y_c))$. Note that the implied normalization term in Equation \ref{crf} can be omitted when solving for the maximum-likelihood labeling $\mathbf{y}$ for a particular set of observations $\mathbf{x}$. \begin{equation} \begin{aligned} \label{crf} P(\mathbf{y}|\mathbf{x}, w) &\propto \prod_c \psi_c(y_c | \mathbf{x}, w) \\ & \propto \prod_{s \in V} \psi_s (y_s | \mathbf{x}, w) \prod_{s,t \in E} \psi_{s,t} (y_s, y_t | \mathbf{x}, w) \end{aligned} \end{equation} Features $\phi$ must be provided (or can be extracted from data with supervised or unsupervised learning) and weights $w$ are learned from data. The observations $\mathbf{x}$ can be either sensor measurements (for data association) or sensor-level tracks (for track-to-track association). The Markov property of CRFs can be interpreted in the context of multi-target tracking as assuming that the assignment of the observations to tracks within a particular spatio-temporal section of the surveillance region is independent of how they are assigned to tracks elsewhere---conditional on all observations. This adds an aspect of local optimality and, in a way, embeds similar assumptions as a gating heuristic. A solution to Equation \ref{crf}, i.e., the maximum-likelihood set of labels $\mathbf{y}$, can be used as a solution to the corresponding MDAP. \par As is common with CRFs, the problem of solving for the most likely assignment hypothesis is cast as energy minimization. The objective to minimize is an energy function, computed by summing over the clique potentials; each potential is interpreted as contributing to the energy of the assignment hypothesis. 
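On a toy graph, the MAP labeling of Equation \ref{crf} can be recovered by brute force, which makes explicit why the normalizer can be ignored (a sketch with hypothetical log-potential tables standing in for the learned terms; real systems use graph cuts or message passing instead of enumeration):

```python
import itertools
import math

def crf_map(nodes, edges, unary, pairwise, labels=(0, 1)):
    """Brute-force MAP labeling for a tiny CRF with log-linear
    potentials: unary[s][y] and pairwise[(s, t)][(ys, yt)] are
    log-potentials, and the normalizer is constant in y, so the
    unnormalized score can be maximized directly."""
    best, best_score = None, -math.inf
    for assignment in itertools.product(labels, repeat=len(nodes)):
        y = dict(zip(nodes, assignment))
        score = sum(unary[s][y[s]] for s in nodes)
        score += sum(pairwise[(s, t)][(y[s], y[t])] for (s, t) in edges)
        if score > best_score:
            best, best_score = y, score
    return best
```

In a tracking CRF, each node would be a candidate tracklet link and the returned labeling would be read off as the assignment hypothesis.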
Each clique consists of a set of vertices and edges, where each vertex is a pair of tracklets that could potentially be linked together. The corresponding label for each vertex takes values from the set $\{0, 1\}$ and indicates whether the pair of tracklets is to be linked or not. The energy term for each clique is decomposed into the sum of a unary term for the vertices and a pairwise term for the edges. The features used to construct these terms include appearance, motion, and occlusion information, among others. In \cite{yang2011learning}, the weights $w$ are learned with the RankBoost algorithm. Other techniques for learning the parameters of a CRF that maximize the log-likelihood of the training data include iterative scaling algorithms \cite{Lafferty:2001:CRF:645530.655813} and gradient-based techniques. In Section \ref{sec: learning ass costs}, we will examine the problem of learning weights for assignment costs in more detail. CRF and network optimization-based trackers are by nature global optimizers, and must be run with a temporal sliding-window to get near real-time performance. For example, in \cite{yang2012online} extensions to the generic CRF formulation are presented that enable it to run in real-time. \par A CRF formulation, Near Online Multi-Target Tracking (NOMT), is proposed in \cite{choi2015near} that also builds its graph of track hypotheses using tracklets. The novelty of this work is in the use of an affinity measure between detections called the Aggregated Local Flow Descriptor, and in the specific form of the unary and pairwise terms in the energy function of the CRF. Inference in the CRF is sped up by first analyzing the structure of the graphical model so that independent subgraphs can be solved in parallel. \par Other variations on the approaches above have been seen as well.
In \cite{milan2016multi}, the energy term of a CRF is augmented with a continuous component to jointly solve the discrete data association and continuous trajectory estimation problems. A factor graph is embedded in the CRF in \cite{heili2014exploiting} to add more structure and help model pairwise associations explicitly. To summarize, applying CRFs to a specific multi-target tracking problem involves defining how the graphical model will be constructed from the sensor data, specifying an objective function, selecting or learning features for the terms within the objective function, training the model to learn the weights, and then performing approximate inference to extract the predicted assignment hypothesis. In the next subsection, we will investigate how factor graphs, the belief propagation inference algorithm, and its variants can be used to solve the MDAP. \subsection{Belief Propagation} \label{section:belief-prop} In this subsection, we highlight recent work that formulates the association problems as MAP inference and uses belief propagation (BP) or one of its variants to obtain a solution. Chen et al. \cite{chen2006data} \cite{chena2009efficient} showed the effectiveness of BP at finding the MAP assignment hypothesis for the single and multi-sensor data association problems. BP is a general message-passing algorithm that can carry out exact inference on tree-structured graphs and approximate inference on graphs with cycles, or "loopy" graphs.
The types of graphs under consideration are once again Markov random fields, albeit more general ones than the ones discussed in the previous subsections. Indeed, BP can be used on graphs that model joint distributions $P(\mathbf{x}) = P(x_1, x_2, ..., x_N)$ that can be factorized into a product of clique potentials. As before, the clique potentials are assumed to be factorizable into pairwise terms. Therefore, for cliques $c$, we have \begin{equation} \label{bp-1} \begin{aligned} P(\mathbf{x}) &\propto \prod_{c} \psi_c(x_c) \\ & \propto \prod_{s \in V} \psi_s (x_s) \prod_{s,t \in E} \psi_{s,t} (x_s, x_t) \end{aligned} \end{equation} It is common to use factor graphs to explicitly encode dependencies between variables. A factor graph decomposes a joint distribution into a product of several local functions $f_j(X_j)$, where each $X_j$ is some subset of $\{x_1, x_2, ..., x_N\}$. The graph is bipartite and has nodes $x$ (i.e., discrete random variables) and factors (i.e., dependencies) $f \in \mathcal{F}$, and edges between the nodes and factors. For example, the graph of $g(x_1, x_2, x_3) = f_A(x_1) f_B(x_2, x_3) f_C(x_1, x_3)$ has factors $f_A, f_B$, and $f_C$ and nodes $x_1, x_2, x_3$. The joint distribution for a factor graph can be written similarly to Equation \ref{bp-1} as \begin{equation} P(\mathbf{x}) \propto \prod_{s \in V} \psi_s (x_s) \prod_{f \in \mathcal{F}} \psi_f (x_{\eta_f}) \end{equation} where $\eta_f$ represents the set of nodes $x$ that are connected to factor $f$. \par Parallel message-passing algorithms, such as BP, operate by having each node of the graph iteratively send messages to its neighbors simultaneously. We define the message sent from a node $x_t$ to a neighbor $x_s \in \mathcal{N}(t)$ as $\mu_{t \rightarrow s}(x_s)$; note that it is a function of the receiving node's variable. In a factor graph, the set of neighbors $\mathcal{N}(s)$ of a node $x_s$ is its set of corresponding factors.
The max-product algorithm is useful for finding the MAP configuration ${x^*} = \{{x^*}_s | s \in V\}$ which corresponds to the best assignment hypothesis $\gamma^*$. In this algorithm, messages are computed recursively in general pairwise Markov random fields by \begin{equation} \label{pairwise-LBP} \mu_{t \rightarrow s} (x_s) = \max_{x_{t}} \bigg \{ \psi_t(x_{t}) \psi_{s,t} (x_s, x_{t}) \prod_{\xi \in \mathcal{N}(t) \setminus s } \mu_{\xi \rightarrow t} (x_{t}) \bigg \} \end{equation} and at convergence, each $x^*_s$ can be calculated by \begin{equation} x^*_s = \argmax_{x_s \in X} \bigg \{ \psi_s(x_s) \prod_{\xi \in \mathcal{N}(s)} \mu_{\xi \rightarrow s} (x_s) \bigg \} \end{equation} where $\mathcal{N}(s)$ is the neighborhood set of $s$. As indicated in \cite{chen2006data}, these updates are not guaranteed to converge for graphs with cycles, and even if they do, they may not compute the exact MAP configuration. A proof of convergence for a specific loopy belief propagation (LBP) formulation for data association is presented in \cite{williams2010convergence}. LBP simply applies the BP updates repeatedly until the messages all converge; interestingly, LBP has been shown to perform favorably in practice for association tasks \cite{williams2010data} \cite{williams2014approximate} \cite{meyer2017scalable}. An improvement over the max-product algorithm for LBP is tree-reweighted max-product \cite{wainwright2002map}. This algorithm is used for data association in \cite{chen2006data} to output a provably optimal MAP configuration or acknowledge failure. The key idea of the tree-reweighted max-product algorithm is to represent the original problem as a combination of tree-structured problems that share a common optimum \cite{chen2006data}. \par To illustrate the use of BP for solving MDAPs, we will present the graphical model formulation from \cite{zhu2007graphical} for multi-sensor multi-target track-to-track association.
The structure of the graphical model is decided on-the-fly by producing sets of independent association clusters consisting of multi-sensor tracks that could plausibly be associated with each other. This is accomplished by computing elliptical gates around each track and clustering together all such tracks whose gates overlap; in \cite{zhu2007graphical}, the gates are computed from purely kinematic information. The nodes of the graph are the track state estimates for $T = 1$ and $S \geq 3$ sensors (Section \ref{sec: problem form}), $\big \{ x^{i,j} | x^{i,j} \in X^1 = \{X^{1,1}, X^{1,2}, ..., X^{1,S}\} \big \}$, where each $x^{i,j}$ is the $i$\textsuperscript{th} track state estimate from sensor $j$, $i = 1, ..., N_j$ and $j = 1, ..., S$. Edges only exist between nodes from different sensors when their elliptic gates overlap. A random variable $Y^{i,j}$ corresponding to each node $x^{i,j}$ is defined as a vector of $S-1$ dimensions and stores the indexes of the tracks from the other sensors associated with the $i$\textsuperscript{th} track from sensor $j$. The node potentials are defined as $\psi_{x^{i,j}} (Y^{i,j}) = \exp(\rho)$ where $\rho$ is the sum of pair-wise costs, given by Equation \ref{sum-pairwise}. 
Using the notation $Y^{i,j}_k$ to denote the entry of the $(S-1)$-dimensional vector $Y^{i,j}$ corresponding to sensor $k$ (the index of the associated local track from sensor $k$), the edge potentials can be defined to ensure that each track from each sensor is associated once and only once by \begin{equation} \begin{aligned} \psi_{x^{l,m} \rightarrow x^{n, o}}(Y^{l,m}_o = p, Y^{n,o}_m = q) &= \begin{cases} 0 & p = n, q \neq l \\ 0 & p \neq n, q = l \\ 1 & \text{otherwise} \\ \end{cases} \\ \end{aligned} \end{equation} If $w^{u,v}$ is the Mahalanobis distance between two tracks $u, v$, then messages between nodes can be initialized as \begin{equation} \begin{aligned} \mu_{x^{l,m} \rightarrow x^{n,o}} (Y^{n,o}_m = q) &= \begin{cases} \exp(-w^{u = (l,m); v = (n,o)}) & \text{ if } q = l \\ 1 & \text{otherwise} \\ \end{cases} \\ \end{aligned} \end{equation} Repeated application of Equation \ref{pairwise-LBP} until the $Y^{i,j}$ converge then produces the MAP solution to the MDAP. \par Examples of factor graphs for the data association MDAP can be found in \cite{williams2010data}, and examples of pairwise Markov random field formulations of the data association MDAP are in \cite{chen2006data} and \cite{chena2009efficient}. An extension to \cite{williams2010data} for an unknown number of targets and multiple sensors is presented in \cite{meyer2016tracking} and applied to a multistatic sonar network in \cite{meyer2017scalable}. As shown in \cite{williams2010data}, a \textit{hybrid} factor graph that encodes the constraints (that each measurement be associated to at most one target and each target give rise to at most one measurement) with two different sets of constraint variables exhibited the strongest performance. A useful overview of graph techniques for the data association problem, including BP, is \cite{chong2012graph}. See \cite{choi2015near} for an example of how BP can be used as a general inference technique for MAP inference on a network flow graph. 
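The case analysis in the edge potential above amounts to a reciprocity check between the two tracks' association claims, which can be captured in a few lines (a hypothetical helper for illustration):

```python
def edge_potential(p, q, n, l):
    # Mutual-exclusion edge potential between tracks (l, m) and (n, o):
    # p is track (l, m)'s claimed partner index from sensor o, and q is
    # track (n, o)'s claimed partner index from sensor m.  The potential
    # is zero exactly when one track claims the other but the claim is
    # not reciprocated; mutual claims and mutual non-claims are allowed.
    if (p == n) != (q == l):
        return 0
    return 1
```

For instance, with $n = 3$ and $l = 7$, a mutual claim ($p = 3$, $q = 7$) and a mutual non-claim ($p = 2$, $q = 9$) both score 1, while one-sided claims score 0.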
\subsection{Markov Chain Monte Carlo} A principled approach to sampling from a complex, potentially high-dimensional distribution is Markov Chain Monte Carlo (MCMC). MCMC methods construct a Markov chain on the state space $\mathcal{X}$ whose stationary distribution $\pi^*$ is the target distribution. Decorrelated samples drawn from the chain can be used for approximate inference, i.e., integrating with respect to $\pi^*$. This is useful in the context of assignment problems for multi-target tracking when the goal is to estimate a posterior distribution over assignment hypotheses, from which a MAP hypothesis can be extracted. The Metropolis-Hastings algorithm has been used extensively for data association in single and multi-sensor scenarios \cite{benfold2011stable} \cite{pasula1999tracking} \cite{oh2004markov} \cite{fagot2016improving}. Recently, a Gibbs sampler was derived for efficient implementations of the Labeled Multi-Bernoulli filter, which jointly addresses the data association and state estimation problems for single and multi-sensor scenarios \cite{reuter2017fast} \cite{vo2017efficient}. We omit detailed descriptions of the Metropolis-Hastings and Gibbs sampling algorithms, and instead refer the reader to the explanations in \cite{vo2017efficient} and \cite{oh2004markov}. \par MCMC is applied to the MDAP for data association (referred to as MCMCDA) and track-to-track association by designating the state space of the Markov chain to be all feasible assignment hypotheses and the stationary distribution of the Markov chain to be the posterior $P(\gamma | Z^T)$ or $P(\gamma | X^{T})$. 
The posterior over assignment hypotheses for the data association problem, from which a MAP hypothesis $\gamma^*$ is obtained, is \cite{oh2004markov}: \begin{align} \label{MCMCDA} P(\gamma | Z^T) &\propto P(Z^T | \gamma ) \prod_{t=1}^T p_z^{z_t}(1 - p_z)^{c_t} p_d^{d_t} (1 - p_d)^{g_t} \lambda_b^{a_t} \lambda_f^{f_t} \\ \gamma^* &= \argmax_{\gamma} P(\gamma | Z^T) \end{align} Here, we define the termination probability as $p_z$ and the detection probability as $p_d$. The number of targets at time $t - 1$ is $e_{t-1}$, the number of targets that terminate at time $t$ is $z_t$, and $c_t = e_{t-1} - z_t$ is the number of targets from time $t - 1$ that have not terminated at time $t$. We set $a_t$ as the number of new targets at time $t$, $d_t$ as the number of actual target detections at time $t$, and $g_t = c_t + a_t - d_t$ as the number of undetected targets. Finally, let $f_t = n_t - d_t$ be the number of false alarms, $\lambda_b$ be the birth rate of new objects, and $\lambda_f$ be the false alarm rate. Note that for the general case of an unknown number of targets, the multi-scan MCMCDA will find an approximate solution of unknown quality at best; a bound on the quality of the approximation for the single-scan, fixed-target MCMCDA is provided in \cite{oh2004markov}. \par A Metropolis-Hastings algorithm for Equation \ref{MCMCDA} is described in \cite{oh2004markov} as follows. The proposal distribution $q$ is associated with five types of moves, for a total of eight: a birth/death move pair, a split/merge move pair, an extension/reduction move pair, a track update move, and a track switch move. A proposed move from $\gamma$ to $\gamma'$ is accepted with probability \begin{equation} A(\gamma, \gamma') = \min \bigg (1, \frac{\pi(\gamma') q(\gamma', \gamma) }{\pi (\gamma) q(\gamma, \gamma')} \bigg ) \end{equation} Assuming a uniform (hence symmetric) proposal distribution $q$, the proposal distribution terms in the numerator and denominator cancel. 
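As a toy illustration of this acceptance rule, the following pure-Python sketch runs Metropolis-Hastings over a finite hypothesis space with a uniform proposal (the states and unnormalized weights are invented for illustration):

```python
import random

def metropolis_hastings(pi, states, n_steps=20000, seed=0):
    # Generic Metropolis-Hastings over a finite hypothesis space with a
    # uniform (symmetric) proposal, so the q terms in the acceptance
    # ratio cancel.  pi maps each state to an unnormalized probability,
    # mirroring pi(gamma) proportional to P(gamma | Z^T).
    rng = random.Random(seed)
    x = states[0]
    counts = {s: 0 for s in states}
    for _ in range(n_steps):
        x_new = rng.choice(states)                  # propose gamma'
        if rng.random() < min(1.0, pi[x_new] / pi[x]):
            x = x_new                               # accept with A(gamma, gamma')
        counts[x] += 1
    return {s: c / n_steps for s, c in counts.items()}

# Three toy hypotheses with unnormalized posterior weights 1 : 2 : 7.
freqs = metropolis_hastings({'g1': 1.0, 'g2': 2.0, 'g3': 7.0},
                            ['g1', 'g2', 'g3'])
```

The empirical visit frequencies approach the normalized target weights $0.1$, $0.2$, and $0.7$, since only ratios of the unnormalized $\pi$ enter the acceptance probability.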
The stationary distribution $\pi(\gamma)$ is $P(\gamma | Z^T)$ from Equation \ref{MCMCDA}. Implementation details and descriptions of each type of move can be found in Section V-A of \cite{oh2004markov}. Extensions to this algorithm have been proposed in \cite{benfold2011stable}, which adds a sliding-window flavor and reduces the number of move types to three. Since the application in \cite{benfold2011stable} is visual tracking, appearance information is fused with kinematic information to help improve performance. \cite{fagot2016improving} uses sparse representations of detections and kinematic information to define an energy objective that MCMCDA approximately minimizes. They deviate from prior work by allowing moves not only forward in time but also backward, to explore the solution space more efficiently. The use of a sliding window is once again crucial, enabling a trade-off between solution quality and run-time. \subsection{Deep Learning} \begin{figure} \includegraphics[scale=0.35]{online-mtt-rnns.png} \caption{An LSTM cell designed for multi-scan single-sensor data association (right). The input at each time step is the matrix of pairwise distances $C_{t+1}$, along with the previous hidden state $h_t$ and cell state $c_t$. The output $A^{i}_{t+1}$ of the data association cell is a vector of assignment probabilities for each target and all available measurements, obtained by a log-softmax operation, and is subsequently fed into the state estimation recurrent network (left). The LSTM's nonlinearities and memory are believed to provide the means for learning efficient solutions to the data association problem. Reproduced from \cite{milan2017online} with permission. \label{fig:mtt-with-rnns}} \end{figure} Neural networks have a rich history of being used to solve combinatorial optimization problems. 
One of the most influential papers in this line of research, by Hopfield and Tank \cite{hopfield1985neural}, describes how to use Hopfield nets to approximately solve instances of the Traveling Salesman Problem (TSP). Despite the controversy associated with their results \cite{smith1999neural}, this work inspired many others to pursue these ideas. This has led to the present day, where research on the use of deep neural networks to solve problems like the TSP has started to pick up speed. \subsubsection{Deep Reinforcement Learning} The assignment problems in multi-target tracking are, at their core, combinatorial problems. Naturally, the question of whether deep neural networks are useful for finding near-optimal solutions to the LAP or MDAP is of significant interest. A preliminary answer to this question, in recent work by \cite{milan2017data}, suggests the affirmative; they used a recurrent neural network to solve a small MDAP in a simulated multi-target tracking scenario. Impressively, they were also able to get good performance on a quadratic assignment problem that involved finding point correspondences between pairs of images. In \cite{milan2017data} it was also suggested that using a problem-specific objective function for training neural networks in a supervised manner, as opposed to using, e.g., a regression loss \cite{vinyals2015pointer}, is preferable. One of the key challenges that supervised learning approaches face in this domain is obtaining labeled ground-truth samples, since generating optimal solutions to NP-hard combinatorial optimization problems can be time-consuming or even impossible. To address this, \cite{bello2016neural} and \cite{dai2017learning} use reinforcement learning to avoid the requirement of labeled data. 
The main difficulties here are deciding how to represent the data for efficient learning and enforcing the original constraints of the problem during training, e.g., Equations \ref{SSMDAP constraints} and \ref{MSMDAP constraints}. Naively searching in the space of assignment hypotheses forces a reinforcement learning agent to select an action from an action space of size $n!$. Furthermore, if the agent's policy is parameterized by a deep neural network, as is the case in deep reinforcement learning \cite{arulkumaran2017deep}, the output of the policy network (if searching directly in the space of valid solutions) is a permutation matrix; more formally, an extreme point of the Birkhoff polytope \cite{linderman2017reparameterizing}. This has been known to be quite difficult to do with neural networks \cite{gee1994polyhedral}. An alternative to this is the approach in \cite{dai2017learning}, where a Deep Q-Network augmented with a graph-embedding layer is used to greedily construct valid solutions to graph combinatorial optimization problems. Principled approaches for doing inference over permutations have been proposed in \cite{gold1996softmax}, \cite{linderman2017reparameterizing}, and \cite{latentPerms2018} based on annealing a temperature-controlled parameter to produce a discrete permutation matrix from a continuous doubly-stochastic matrix. However, this technique has yet to be extended to the reinforcement learning setting. We note that reinforcement learning has already been applied successfully to multi-target tracking by \cite{xiang2015learning}, where a policy is learned to control track initialization, maintenance, and removal. \subsubsection{Deep Learning on Graphs and Sets} Featurization of the assignment hypothesis graph (e.g., Figure \ref{fig:t2ta}) seems useful for a deep learning-based approach. 
Graph-embedding techniques can potentially enable deep neural networks to handle missed detections and false alarms by providing the means to model missing edges in the assignment hypothesis graph. However, learning useful \textit{inductive} representations of graph-structured data is still an open problem in machine learning; see \cite{pmlr-v70-gilmer17a} and \cite{NIPS2017_6703} for recent progress on this. Notably, the deep reinforcement learning algorithm from \cite{dai2017learning} makes use of a powerful graph embedding technique called \texttt{structure2vec}, proposed in their earlier work \cite{dai2016discriminative}. Also, see \cite{bronstein2017geometric} for a general discussion on applying deep learning to graphs, including the recently proposed Graph Convolutional Networks (GCNs) \cite{kipf2016semi}. In particular, it is observed that the transductive nature of the graph embeddings learned by GCNs prohibits them from generalizing to graphs with different structure at test time, rendering this approach unusable for multi-target tracking. The review papers by Goyal et al. \cite{goyal2017graph} and Hamilton et al. \cite{DBLP:journals/corr/abs-1709-05584} are also useful for learning more about recent efforts at embedding graphs into a feature space. When applying deep neural networks to combinatorial optimization problems where the solution space consists of permutations or subsets of the input, Vinyals et al. \cite{vinyals2015pointer} \cite{vinyals2015order} proposed Pointer Networks, which leverage attention mechanisms and the powerful seq2seq architecture to greedily construct valid solutions. In \cite{Rezatofighi2017ICCV}, a deep learning architecture inspired by the theory of Random Finite Sets \cite{mahler2007statistical} is proposed to predict outputs that are structured as sets by simultaneously predicting the cardinality of the set. 
They report promising results on pedestrian detection benchmarks, only slightly below the state of the art. \subsubsection{End-to-End Multi-target Tracking} As is common with deep learning research, some have already gone further to ask whether multi-target tracking can be solved in an entirely end-to-end fashion \cite{ondruska2016deeptrack}. In other words, given noisy measurements of the environment, the objective is for a deep learning system to directly output the filtered tracks, combining the association problem with state estimation. An investigation by \cite{ondruska2016deeptrack} revealed that a recurrent-convolutional neural network (RCNN) is able to learn to track multiple targets from raw inputs in a synthetic problem without access to labeled training data. Crucially, rather than maximizing the likelihood of the next state of the system at each time step, as would be natural for standard Bayesian recursive filtering, they modified the cost function to maximize the likelihood at some time $t + n$ in the future to force the network to learn a model of the system dynamics. More recently, they extended this work for use with raw LiDAR data collected by an autonomous vehicle \cite{DequaireIJJ2017}. In short, they showed that their system is able to predict an unoccluded version of the occupancy grid derived from the sensor's field-of-view. Recently, \cite{fang2017recurrent} proposed Recurrent Autoregressive Networks (RAN), an approach to online multi-target tracking that seeks to incorporate internal and external memory components into a deep learning framework to help handle occlusion and appearance changes. Notably, they are able to show that RAN indeed makes use of its external memory to maintain tracks while the targets are occluded. 
The appearance and motion features in the external memory and the hidden state of the recurrent network used for the internal memory are combined to produce association scores for data association with the Hungarian algorithm. See \cite{sadeghian2017tracking} for a closely related prior work that also explores the use of recurrent networks. Instead of pursuing the monolithic end-to-end approach, \cite{milan2017online} represents the state estimation and data association problems separately in their deep learning architecture, arguing that doing so provides the means to separately train and debug these components. They design a Long Short-Term Memory (LSTM) cell specifically for solving the MDAP in data association (Figure \ref{fig:mtt-with-rnns}). Despite not using any visual features, their approach achieves reasonable performance relative to other similar systems on the MOT Challenge 2015 dataset \cite{leal2015motchallenge}. \par Research on applying deep learning to the LAP and MDAP in multi-target tracking is still in its infancy; based on the flurry of recent work on this problem, it is likely that we will see significant progress in the near future. However, using data-driven solutions raises the question of whether such a system could generalize to any environment it may be deployed in. An interesting research direction for addressing this is zero-shot learning \cite{Xian_2017_CVPR}. Next, we will tackle the other major machine learning task in multi-target tracking---learning the assignment costs. \section{Learning Assignment Costs} \label{sec: learning ass costs} Framing the problem of learning an assignment cost function for data association or track-to-track association is deeply intertwined with the choice of sensor(s). This section will mainly consist of recent work on this problem from the computer vision community, where machine learning is most heavily used. 
One reason for this is the large amount of annotated datasets that are freely available. We divide the presentation of techniques into pre- and post-deep learning to provide a comprehensive perspective and to emphasize the shift to deep learning-based approaches in recent years. Following this, we will conclude the section by highlighting recent research from the multi-sensor data fusion community on representation learning. \subsection{Learning Assignment Costs in Multi-Target Tracking, Pre-Deep Learning} Data-driven approaches to multi-target tracking are becoming popular due to learning algorithms that can take advantage of the increased availability of high-quality datasets. In essence, the goal of data-driven multi-target tracking is to use labeled datasets to train a model to output association costs at test time, where the cost might look similar to Equation \ref{eq:feature-aug}. These learned functions are then used in the optimization frameworks introduced in Section \ref{section:opt}. It is common to use discriminative models for learning appearance affinity; these models attempt to learn a conditional distribution $P(Y|X)$, where $Y$ could be a categorical random variable for classification or real-valued for regression. In visual tracking, discriminative models are used to predict an association likelihood based on appearance information. A simple example would be a neural network or Support Vector Machine (SVM) trained on a dataset of pairs of detections to output a score between 0 and 1. 
The score corresponds to the model's confidence about whether a pair of detections was generated by the same object. Another learning paradigm that has been used in conjunction with discriminative models for this task is metric learning. In this setting, a distance metric between measurements or tracks, typically in the form of a parameterized Mahalanobis distance, is learned from training data. We discuss a variety of machine learning techniques in this subsection to provide a brief historical context to frame our presentation of deep learning-based methods in the next subsection. We provide Table \ref{taxonomy of features}, which summarizes the various visual features used for learning association costs with the methods mentioned in this subsection. \subsubsection{Discriminative models} Boosting is one of the most powerful techniques in supervised learning and is a natural choice for learning discriminative models that approximate the true association costs. The general idea behind boosting is to produce a series of \textit{weak} learners that are combined to form a single \textit{strong} learner. The HybridBoost algorithm introduced in \cite{li2009learning}, one of the first applications of data-driven learning to multi-target tracking, is used to learn the link costs for a network flow graph (Equation \ref{eq:link-costs}). The data association problem is decomposed into a hierarchy of association problems where the tracklet lengths successively increase \cite{huang2008robust}; furthermore, it is cast as a joint ranking and classification problem. The cost function is learned so that it can rank correct associations higher than incorrect ones, as well as reject some associations entirely (i.e., a binary classification to determine reasonable associations). Hence, HybridBoost is a combination of RankBoost and AdaBoost \cite{freund1995desicion}. Their HybridBoost model is trained offline with videos paired with ground-truth trajectories. 
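To make the weak-to-strong idea concrete, here is a toy AdaBoost with one-dimensional threshold stumps (pure Python; the data and features are invented for illustration, and HybridBoost's ranking objective is not reproduced):

```python
import math

def adaboost(xs, ys, n_rounds=10):
    # Toy AdaBoost with threshold stumps on 1-D features; ys in {-1, +1}.
    n = len(xs)
    w = [1.0 / n] * n                       # example weights
    learners = []                           # (threshold, polarity, alpha)
    for _ in range(n_rounds):
        best = None
        for thr in xs:                      # candidate stump thresholds
            for pol in (1, -1):
                preds = [pol if x > thr else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, thr, pol, preds)
        err, thr, pol, preds = best
        err = max(err, 1e-12)               # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        learners.append((thr, pol, alpha))
        # Reweight: misclassified examples gain weight, then normalize.
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        z = sum(w)
        w = [wi / z for wi in w]
    return learners

def predict(learners, x):
    # Strong learner: sign of the alpha-weighted vote of the stumps.
    s = sum(a * (p if x > t else -p) for t, p, a in learners)
    return 1 if s > 0 else -1

# Toy separable data: negatives below 3, positives at 3 and above.
xs = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0]
ys = [-1, -1, -1, 1, 1, 1]
learners = adaboost(xs, ys)
```

On this separable toy set the first stump already classifies everything correctly; the combined vote does as well.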
In \cite{kuo2010multi}, a slightly different approach is taken; a hierarchical decomposition in the same vein as \cite{huang2008robust} and \cite{li2009learning} is used, but each stage of the hierarchy is linked by applying the Hungarian algorithm. The cost matrix for the Hungarian algorithm is learned online with AdaBoost. Online learning of the discriminative model within the sliding window is an attractive notion, since variations in appearance at test time can cause difficulty for systems that are trained offline. However, this comes at the cost of potentially sacrificing real-time capabilities; on a task involving tracking 2--8 pedestrians at a time, this tracker runs at about 4 fps. Other appearance models based on boosting include \cite{yang2011learning} and \cite{yang2012online}, where the RankBoost algorithm is used with CRFs. The online-learned discriminative appearance model from \cite{kuo2010multi} is adopted in \cite{yang2012online}. In an extension to \cite{kuo2010multi}, ideas from person re-identification are embedded into the system to improve the appearance model \cite{kuo2011does}. The features used to construct the parameterized learners for the boosting algorithms mentioned here are summarized in Table \ref{taxonomy of features}. \par In an effort to improve upon boosting for online learning of appearance models, \cite{bae2014robust} proposed the use of incremental linear discriminant analysis (ILDA). They showed that ILDA outperforms boosting in their experiments in terms of identification accuracy and computational efficiency, partially because ILDA simply requires updating a single LDA projection matrix for distinguishing amongst the appearances of multiple objects. However, this approach makes the assumption that the featurized appearances of the tracked objects can be projected into a vector space where they are linearly separable. 
The assignment cost they used was \begin{equation} \label{eq:ASM} c_{ij} = \Lambda(x_i, x_j) = \textcolor{red}{\Lambda^{A}(x_i, x_j)} \textcolor{blue}{\Lambda^{S}(x_i, x_j)} \textcolor{violet}{\Lambda^M(x_i, x_j)} \end{equation} for \textcolor{red}{appearance}, \textcolor{blue}{shape}, and \textcolor{violet}{motion} (kinematics) affinities. This form of the cost is similar to Equation \ref{eq:feature-aug} and is fairly common. The appearance affinity is the score computed by ILDA; the shape and motion affinities are not learned from data, and details about them can be found in \cite{bae2014robust}. In this work, tracks are incrementally stitched together from tracklets by repeated application of the Hungarian algorithm. Another alternative to boosting methods, which is especially useful for learning the parameters of association cost functions embedded within complex graphical models, is the structured SVM \cite{Kim2013} \cite{wang2015learning} \cite{wang2017learning} \cite{choi2015near}. This approach, however, typically limits the cost functions to a linear parameterization. \subsubsection{Metric Learning} \label{sec:metric-learning} A different approach to addressing the problems of variability in object appearance and representation learning is target-specific metric learning. Here, we define metric learning as the problem of learning a distance $d_\mathbf{A}(x,y) = \sqrt{(x - y)^{\intercal}\mathbf{A}(x - y) }$ parameterized by a positive semi-definite (PSD) matrix $\mathbf{A}$. An intuitive way of thinking about this is that the data points $x$, which might be featurized representations of tracked objects, are being mapped to $\mathbf{A}^{1/2}x$, where a Euclidean distance metric can be applied to the rescaled data \cite{xing2003distance}. This is then cast as a constrained optimization problem to ensure that the solution $\mathbf{A}$ is valid, i.e., $\mathbf{A} \succeq 0$. 
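This rescaling interpretation can be verified in a few lines (a diagonal $\mathbf{A}$ and toy points, chosen for illustration):

```python
import math

def mahalanobis(x, y, A):
    # d_A(x, y) = sqrt((x - y)^T A (x - y)) for a PSD matrix A.
    d = [a - b for a, b in zip(x, y)]
    Ad = [sum(A[i][j] * d[j] for j in range(len(d))) for i in range(len(d))]
    return math.sqrt(sum(di * v for di, v in zip(d, Ad)))

def euclid(u, v):
    # Plain Euclidean distance.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

# With A = diag(4, 1), A^(1/2) = diag(2, 1): d_A equals the Euclidean
# distance between the rescaled points A^(1/2) x and A^(1/2) y.
A = [[4.0, 0.0], [0.0, 1.0]]
x, y = [1.0, 3.0], [0.0, 1.0]
rescale = lambda v: [2.0 * v[0], 1.0 * v[1]]
```

For these points both computations give $\sqrt{8}$, illustrating that learning $\mathbf{A}$ is equivalent to learning a linear rescaling of the feature space.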
An early attempt at applying metric learning in multi-target tracking was \cite{wang2010discriminative}, where the problem of learning a discriminative model for appearance matching given image patches is combined with motion estimation and jointly optimized with gradient descent. Their formulation requires running the optimization at each time step for all pairs of objects in the scene with a set of training samples that gets incrementally updated. A more efficient use of metric learning for multi-target tracking is learning link costs in a network flow graph \cite{wang2014tracklet} \cite{wang2017tracklet}. Here, a regularized version of the aforementioned constrained optimization problem is applied to learn a distance between feature vectors for an appearance affinity model. The intention is to learn a metric that returns a smaller distance for feature vectors within the same tracklet in the graph than for feature vectors that belong to different tracklets. The negative log-likelihood assignment cost for the network links is defined similarly to Equation \ref{eq:ASM}. \par We will revisit metric learning when we discuss learning representations of multi-sensor data in Section \ref{sec:multi-sensor}. The topic of the next subsection transitions over to the use of deep learning for learning assignment costs. \begin{table}[t] \caption{Features used for data-driven learning of assignment costs from a representative set of works.} \label{taxonomy of features} \centering \begin{tabular}{p{1.5cm}|p{3cm}|p{8.5cm}} \toprule \textbf{Related Work} & \textbf{Method} & \textbf{Summary of Features Used} \\ \midrule \cite{li2009learning} & HybridBoost & tracklet lengths, no. of detections in the tracklets, color histograms, frame gap between tracklets, no. of frames occluded, no. 
of frames with missed detections, entry and exit proximity, motion smoothness\\ \hline \cite{kuo2010multi}, \cite{kuo2011does}, \cite{yang2012online} & AdaBoost & color histograms, covariance matrices, HOG \\ \hline \cite{yang2011learning} & RankBoost & tracklet lengths, no. of detections in the tracklets, color histograms, frame gap between tracklets, no. of frames occluded, no. of frames with missed detections, entry and exit proximity, motion smoothness \\ \hline \cite{bae2014robust} & ILDA & templates from HSV color channel and tracklet ID \\ \hline \cite{wang2015learning} \cite{wang2017learning} & Structured SVM & Off-the-shelf detector confidence (e.g., from DPM \cite{felzenszwalb2010object}), consecutive bounding box IOU, geometric relationships between all pairs of objects \\ \hline \cite{wang2014tracklet} \cite{wang2017tracklet} & Metric learning & RGB, YCbCr, and HSV color histograms, HOG, two texture features extracted with Schmid and Gabor filters\\ \bottomrule \end{tabular} \end{table} \subsection{Learning Assignment Costs in Multi-Target Tracking, Post-Deep Learning} Tracking-by-detection has solidified its position as the primary tracking paradigm for visual tracking, especially now that convolutional neural networks (CNNs) are widely used for learning assignment costs. CNNs are a special class of deep neural network that can learn hierarchical features which are translation invariant and robust to slight deformations. For object detection and recognition, augmenting the training set by varying orientation, scale, and color can help to further increase robustness. CNNs learn remarkably rich representations directly from raw images. Another reason why deep learning is an attractive option for multi-target tracking is that it is straightforward to take a CNN that has been pre-trained on massive datasets and re-purpose it for new tasks by re-training only a few of the layers. 
In this subsection, we will cover recent research that leverages CNN-based neural network architectures to learn deep discriminative assignment costs. \par One of the first uses of deep learning in multi-target tracking is running image patches of detected objects obtained with, e.g., the DPM \cite{felzenszwalb2010object}, through a CNN to extract features. The CNNs are usually pre-trained on the ImageNet and PASCAL visual object classification (VOC) datasets. In one instance, the features extracted from the CNN were used to train a multi-output regularized least-squares classifier \cite{kim2015multiple}. Here, a 4096-dimensional feature vector is first extracted from the CNN for each detection box, followed by an application of PCA to reduce the dimensionality to 256. The classifier is used to compute a log-likelihood cost for a track hypothesis given a set of sensor detections. This paper was unique in that it showed how the classic MHT algorithm, which performs MAP inference by updating sets of track hypothesis trees in real-time, compares favorably with the modern approaches described in Section \ref{section:opt} when augmented with learned assignment costs. In fact, at the time of publishing, this approach outperformed the second-best tracker on the 2D MOT 2015 Challenge \cite{leal2015motchallenge} by 7\% in multiple object tracking accuracy (MOTA). \par \begin{figure} \includegraphics[scale=0.18]{siamese.png} \caption{The basic architecture of a siamese network. The weights of the convolutional layers are shared between the two arms of the network. A contrastive loss can be used to train the network to predict the similarity of the two input images.\label{siamese}} \end{figure} A variation on the standard CNN architecture that has seen extensive use in multi-target tracking is the siamese network. 
As nicely summarized in \cite{Leal-Taixe_2016_CVPR_Workshops}, a siamese network processes two inputs simultaneously using multiple layers with shared weights (Figure \ref{siamese}). These networks can be used for a variety of tasks that involve comparing two image patches; this seems intuitively useful for the task of learning assignment costs, where we are interested in predicting the association likelihood for two inputs. Indeed, \cite{Leal-Taixe_2016_CVPR_Workshops} proposed a technique where two image patches are stacked, along with their optical flow information, and fed as input into a siamese network. A separate network learns contextual features that encode relative geometry and position variations between the two inputs; the final layers of these two networks are extracted and combined with a gradient-boosting classifier to produce a match prediction score. Tracks are obtained by solving a network flow problem (Section \ref{section:graph-cut}) using Linear Programming. Siamese networks are also used in \cite{wang2016joint} to learn an embedding of two detections into a metric space where their affinity can be easily discriminated. In this work, all parameters between the two arms of the CNN are shared; the features produced by the last layers are used as input to a metric learning loss. A multi-task loss function for incorporating temporal constraints is combined with the regularized metric learning loss (Section \ref{sec:metric-learning}) to jointly optimize the weights of the deep model with stochastic gradient descent. They use an online learning algorithm to address the issue of changing object appearance throughout a trajectory, but the deep networks are pre-trained with auxiliary data. The learned affinity model is used with the softassign algorithm \cite{gold1996softmax} to solve a LAP to find an optimal pairing of tracklets. 
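One common training objective for such shared-weight arms, mentioned in the caption of Figure \ref{siamese}, is the contrastive loss; a minimal sketch of its standard form follows (the margin value is chosen for illustration):

```python
def contrastive_loss(d, same, margin=1.0):
    # d: distance between the embeddings produced by the two
    # shared-weight arms of the siamese network.
    # same: 1 if the two patches show the same object, else 0.
    # Matching pairs are pulled together (quadratic in d); non-matching
    # pairs are pushed apart until they are at least `margin` away.
    if same:
        return d * d
    return max(0.0, margin - d) ** 2
```

The loss is zero for a matching pair at distance 0 and for a non-matching pair already beyond the margin, so only "hard" pairs contribute gradients.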
For the task of underwater multi-target tracking, siamese networks were shown to improve performance as well \cite{rahul2017siamese}. Instead of only considering pairs of images with siamese networks, the Quad-CNN \cite{son2017multi} aims to learn more sophisticated representations for metric learning by considering quadruplets of images. A bounding box regression loss and a multi-task ranking loss that considers appearance and temporal similarities between four images are used to jointly optimize a Quad-CNN end-to-end. The authors propose a minimax label propagation algorithm that makes use of the trained Quad-CNN for data association in a sliding window. \par The confidence-based robust online tracking approach from \cite{bae2014robust} has been extended by adding a discriminative deep appearance model in \cite{bae2017confidence}. Similarly to the siamese network approach, they pass two image patches through a CNN to automatically featurize them. Then the features from the last CNN layer are used to compute a distance with the squared L2 norm; this distance is used to define a regularized energy function such that the lowest possible energy is assigned to the optimal assignment hypothesis. The deep network is once again pre-trained on a large dataset, and online transfer learning is leveraged to update a small number of the higher layers in the network to adapt to changing object appearances. In particular, when the average affinity score computed by the network falls below a threshold at runtime, training samples are collected and a pass of online transfer learning is carried out to adapt the network. To help reduce the run-time overhead introduced by online learning, the authors suggest using a parallelized implementation and performing the high-confidence and low-confidence tracklet associations once every 10 time steps, as opposed to every time step. 
An efficient online algorithm for updating appearance models is described in \cite{yang2017hybrid}; here, the problem is cast as learning a bilinear similarity function between two feature vectors with constrained convex optimization. The feature vectors are aggregated from the last convolutional layer of a CNN pre-trained on ImageNet and fine-tuned on the PASCAL VOC dataset. \par Rather than formulate the data association problem for multi-person tracking as an MDAP, \cite{tang2016multi} defines it as a minimum-cost graph multi-cut problem. The key differences here with previously discussed optimization approaches are that multiple detections at a time step can be attributed to the same person; also, it is easy to allow edges to connect across multiple time steps in this graph to handle occlusion. The edge costs are learned with logistic regression, with features obtained from the DeepMatching \cite{weinzaepfel2013deepflow} algorithm. DeepMatching uses a CNN that has been trained to produce dense correspondences between image patches, and was notably used in the DeepFlow \cite{weinzaepfel2013deepflow} algorithm for learning to do large displacement optical flow. It is also used in the multi-person tracking system in \cite{henschel2017improvements} to compute temporal affinities between input features. Related to this is recent work on examining the interplay between semantic segmentation and multi-target tracking \cite{milan2015joint} \cite{tian2016duality} \cite{bullinger2017instance}. Indeed, \cite{bullinger2017instance} uses a CNN to segment images, and then computes the optical flow between segmented object pairs in consecutive images to define an association cost matrix for the LAP. \par The network optimization approach from \cite{zhang2008global} is revisited once again in \cite{schulter2017deep}, where the parameters of the unary and pairwise link costs are learned end-to-end with a deep neural network.
The original linear program is converted into the following bi-level optimization problem \begin{equation} \begin{aligned} & \argmin_{\Theta} \mathcal{L}(x^{gt}, x^*) \\ & \textrm{ s.t. } x^* = \argmin_{x} c(f, \Theta)^{\intercal} x \\ & \mathbf{A}x \leq \mathbf{b}, \mathbf{C}x = 0 \end{aligned} \end{equation} where $\Theta$ are the parameters, $f$ is the input data, $x^{gt}$ are the ground truth network flow solutions, $x \in \mathbb{R}^M$ are the $M$ concatenated flow variables, $\mathbf{A} = [\mathbf{I}, -\mathbf{I}]^{\intercal} \in \mathbb{R}^{2M \times M}$ and $\mathbf{b} = [\mathbf{1}, \mathbf{0}]^{\intercal} \in \mathbb{R}^{2M}$ encode the box constraints $0 \leq x \leq 1$, and $\mathbf{C} \in \mathbb{R}^{2K \times M}$ encodes the flow conservation constraints. The inner optimization problem is smoothed so that it is easily solvable with an off-the-shelf convex solver, and the outer optimization problem is then solved with gradient descent. The outer problem needs ground truth network flow labels $x^{gt}$ during training; this is handled by manually annotating bounding boxes in sequences of frames. At test time, inference is performed in a sliding window. \par A noticeable trend is the gradual drift away from developing novel optimization algorithms that attempt to solve the MDAP within a sliding window. Rather, recent solutions are relying more on powerful discriminative techniques, such as using features from pre-trained CNNs, and combining this with efficient LAP solvers. Advances in object detection such as Faster R-CNN \cite{ren2015faster} have almost single-handedly improved the performance of multi-target trackers. To offer some insight into the widely popular approach of using pre-trained CNNs for generating detections and learning assignment costs, we present visualizations of CNN activations using the Gradient-weighted Class Activation Mapping technique \cite{Selvaraju_2017_ICCV} in Figure \ref{dl-viz}.
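The constraint structure of the flow LP is easy to check directly. The sketch below (a toy instance, not the smoothed solver of \cite{schulter2017deep}; the exact stacking order of the identity blocks varies by convention) verifies the box constraints $0 \leq x \leq 1$ together with flow conservation at each detection node.

```python
def feasible(x, node_in, node_out):
    """Feasibility check for the network-flow LP: box constraints
    0 <= x_e <= 1 (the rows of A = [I, -I]^T), plus flow conservation
    (C x = 0): flow into each detection node equals flow out of it."""
    if not all(0.0 <= xe <= 1.0 for xe in x):
        return False
    return all(abs(sum(x[e] for e in ins) - sum(x[e] for e in outs)) < 1e-9
               for ins, outs in zip(node_in, node_out))

# Toy instance: 3 flow variables; edge 0 enters the only detection node,
# edges 1 and 2 leave it.
ok = feasible([1.0, 0.5, 0.5], [[0]], [[1, 2]])   # conserved flow
bad = feasible([1.0, 1.0, 1.0], [[0]], [[1, 2]])  # 1 unit in, 2 units out
```

Fractional values such as 0.5 are permitted by the LP relaxation; the learned costs are what drive the minimizer toward integral trajectories.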
\begin{figure} \begin{center}$ \begin{array}{cccc} \includegraphics[scale=0.33]{sedan.jpeg} & \includegraphics[scale=0.33]{truck.jpeg} & \includegraphics[scale=0.33]{occlusion.jpeg} & \includegraphics[scale=0.33]{night-time.jpeg} \\ \includegraphics[scale=0.33]{sedan-good-heatmap.jpg} & \includegraphics[scale=0.33]{truck-good-heatmap.jpg} & \includegraphics[scale=0.33]{occlusion-heatmap.jpg} & \includegraphics[scale=0.33]{night-time-bad-heatmap.jpg} \\ \includegraphics[scale=0.33]{sedan-good-guided.jpg} & \includegraphics[scale=0.33]{truck-good-guided.jpg} & \includegraphics[scale=0.33]{occlusion-guided.jpg} & \includegraphics[scale=0.33]{night-time-bad-guided.jpg} \end{array}$ \end{center} \caption{Visualizations of "important" regions for making predictions about the class label with the VGG16 network, generated with Grad-CAM \cite{Selvaraju_2017_ICCV} and pre-trained VGG16 weights \cite{Simonyan14c}. The first two images on the left were correctly labeled as containing vehicles, and it can be seen that the CNN leverages key features such as the car body shape, tires, and windshield to come to this conclusion. The CNN was not able to correctly classify the vehicles in the two images on the right. Heavy occlusion and illumination changes at night can still confuse a CNN that hasn't been trained for these situations. The images were taken with a traffic camera by the authors. \label{dl-viz}} \end{figure} \subsection{Multi-Sensor Representation Learning} \label{sec:multi-sensor} To wrap-up our discussion of learning assignment costs for multi-target tracking, we will introduce the basic ideas behind representation learning for multi-sensor data and discuss recent progress in this area. Our presentation will focus on applications to multi-target tracking, as well as related tasks such as multi-sensor classification. 
There are many theoretical and engineering challenges to multi-sensor multi-target tracking, and we hope that this subsection helps generate more discussion on this topic. As stated in Equation \ref{MSMDAP}, for the multi-sensor multi-target track-to-track association problem, we are interested in learning a cost $c_{i_1, i_2, ..., i_{N_s}}$ for an assignment of tracks $i_1, i_2, ..., i_{N_s}$, where each track originates from one of $N_s$ sensors. This problem is challenging for a number of reasons; for example, beyond the practical issues of temporally aligning the data, the raw data from each sensor may live in vastly different geometric spaces. Unfortunately, defining a measure of similarity between these spaces is usually non-trivial. Of course, the simple work-around is to independently map each sensor's raw data to the desired low-dimensional representation needed for tracking (e.g., $[x, y, \dot{x}, \dot{y}]^{\intercal} \in \mathbb{R}^4$ for position and speed), and then to define a cost function for this representation (e.g., Equation \ref{sum-pairwise}). We find this unsatisfying; indeed, we would instead like to learn a joint representation of the multi-sensor data that outperforms the aforementioned simplistic approach in terms of tracking performance. Consider the example of a video camera, radar, and LiDAR observing one lane at an urban traffic intersection. In this scenario, we can assume that the surveillance regions of each sensor are overlapping. Then, one approach to multi-sensor representation learning would be to map the time-aligned images, point clouds, and radar ranges and azimuths to a single geometric space such that a cost function defined on this space assigns low costs to measurements that are generated by the same vehicle.
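The simplistic baseline described above can be sketched in a few lines: each sensor independently produces a common-space state $[x, y, \dot{x}, \dot{y}]$, and the association cost is a sum of pairwise distances. The sensor estimates below are hypothetical numbers for illustration.

```python
import itertools
import math

def pairwise_sum_cost(tracks):
    """Sum-of-pairwise-Euclidean-distances association cost over one track
    per sensor, after each sensor's data has been independently mapped to
    the common state space [x, y, vx, vy]."""
    return sum(math.dist(a, b) for a, b in itertools.combinations(tracks, 2))

# Hypothetical common-space estimates from three co-located sensors
# observing (presumably) the same vehicle.
camera = [1.0, 2.0, 0.5, 0.0]
radar = [1.1, 2.0, 0.5, 0.0]
lidar = [1.0, 2.1, 0.5, 0.0]
cost = pairwise_sum_cost([camera, radar, lidar])
```

A learned joint representation would replace the independent per-sensor mappings, so that the space itself is optimized to make same-vehicle tuples cheap.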
A connection can be made here with the siamese networks mentioned before where two images are processed by twin pathways in a CNN and mapped to a single vector space, from which a similarity score can be produced. In the remainder of this section, we will discuss different research directions that formalize these ideas for multi-sensor multi-target tracking as well as the related task of multi-sensor classification. \par In many multi-sensor multi-target tracking scenarios, there is a network of sensors that are streaming high-dimensional data to a central processing unit for high-level fusion. When the surveillance regions are overlapping, the sensors might be tracking one or more targets from multiple perspectives; however, the raw data streams may live in vastly different geometric spaces when the network consists of heterogeneous sensors. Taking intuitions from manifold learning and dimensionality reduction, \cite{davenport2010joint} introduces the idea of a \textit{joint} manifold that captures a low-dimensional representation of the related data streams. The authors propose a distributed data fusion procedure that uses random projections to efficiently map the data streams to $K$-dimensional component manifolds, which are then linearly combined. They present an application to tracking in which they record themselves moving a coffee mug along an ``R''-shaped trajectory on a planar surface with 4 cameras. They are able to learn a 2D joint manifold of the data generated by the 4 cameras that visually re-creates the ``R''-shaped path in the plane. An interesting research direction is therefore to extend the theory of joint manifolds to augment multi-sensor multi-target tracking algorithms. For an extensive discussion on dimensionality reduction for multi-sensor fusion, we direct the reader to the following thesis \cite{schclar2012multi}.
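The random-projection building block used in that fusion procedure is simple to sketch: a random Gaussian matrix, scaled by $1/\sqrt{k}$, maps each sensor's high-dimensional stream to a $k$-dimensional component. This is a generic Johnson–Lindenstrauss-style sketch under our own toy data, not the exact procedure of \cite{davenport2010joint}.

```python
import random

def random_projection(points, k, rng):
    """Project d-dimensional points to k dimensions with a random Gaussian
    matrix scaled by 1/sqrt(k) -- a cheap, data-independent map that
    approximately preserves pairwise distances with high probability."""
    d = len(points[0])
    R = [[rng.gauss(0.0, 1.0) / k ** 0.5 for _ in range(d)] for _ in range(k)]
    return [[sum(r[j] * p[j] for j in range(d)) for r in R] for p in points]

rng = random.Random(0)
points = [[float(i == j) for j in range(50)] for i in range(4)]  # 4 points in R^50
low = random_projection(points, 8, rng)
```

In the joint-manifold setting, each sensor applies such a projection locally, and only the low-dimensional components are transmitted and combined centrally.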
\par A different perspective on fusion in heterogeneous multi-sensor networks is taken by \cite{zhang2011heterogeneous} \cite{zhang2013multi}. In particular, Heterogeneous Multi-Metric Learning for classification \cite{zhang2011heterogeneous} involves learning $S$ projection matrices for a classification task, where $S$ is the number of sensors and the target metric space is one where training samples are encouraged to have the same labels as their $k$-nearest neighbors. Likewise, training samples with different labels are pushed away from each other in the learned space to help optimize the classification performance. To learn the projection matrices, the algorithm takes in a training set of multi-sensor data points and alternates between gradient descent steps over a hinge loss and projection steps onto the positive semi-definite cone to maintain the metric properties for the $S$ matrices. They later strengthen their results in \cite{zhang2013multi} by suggesting the use of the kernel trick to learn the $S$ projection matrices in a Reproducing Kernel Hilbert Space. Related to this work is that of \cite{bronstein2010data}, which introduces cross-modality similarity-sensitive hashing. Here, boosting is used to learn two maps that take data from two different geometric spaces and project them onto a single space. The motivation behind using boosting is that a Hamming distance metric on this learned space can be defined as a weighted sum of weak binary classifiers. \par In conclusion, we can see that there have been multiple algorithms proposed for multi-sensor representation learning, but they have yet to be fully integrated into multi-sensor multi-target tracking. The works we have described do not represent a comprehensive overview of the subject of multi-sensor fusion, but they suggest many interesting ideas for learning assignment costs. 
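The cross-modality hashing idea from \cite{bronstein2010data} can be sketched as two modality-specific projections into a shared Hamming space, with each bit playing the role of a weak binary classifier. The projections below are hypothetical hand-picked matrices, standing in for maps learned by boosting.

```python
def hash_code(x, P):
    """Modality-specific map into a shared Hamming space: project x with P
    (one P per modality, possibly of different input dimension), then
    binarize each coordinate by sign."""
    return [1 if sum(p * v for p, v in zip(row, x)) > 0 else 0 for row in P]

def hamming(c1, c2):
    """Number of disagreeing bits; in the boosted setting this is a
    weighted sum of weak classifiers, here unweighted for simplicity."""
    return sum(a != b for a, b in zip(c1, c2))

# Hypothetical projections for two modalities of different dimension.
P_img = [[1.0, -1.0, 0.0], [0.0, 1.0, 1.0]]
P_txt = [[1.0, 0.0], [0.0, 1.0]]
d_match = hamming(hash_code([2.0, 1.0, 0.5], P_img), hash_code([1.0, 1.0], P_txt))
d_mismatch = hamming(hash_code([2.0, 1.0, 0.5], P_img), hash_code([-1.0, -1.0], P_txt))
```

Once both modalities land in the same binary space, cross-sensor affinities reduce to cheap Hamming-distance lookups, which is attractive for large sensor networks.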
An important aspect of multi-sensor fusion that was not discussed is embedding robustness to temporal misalignment amongst sensors directly in the data fusion algorithm. Without specialized hardware, it can be difficult to precisely align data coming from multiple heterogeneous sensors, which in turn can have a drastic impact on the performance of track-to-track association. \section{Benchmarks} \label{sec: benchmarks} In this section, we will briefly review the multi-target tracking benchmarks; for a focused examination, we refer readers to the recent surveys \cite{leal2017tracking} \cite{luo2014multiple}. Following this, we discuss benchmarks pertaining specifically to multi-target tracking in ITS applications. \par Perhaps the most popular vision-based multi-object tracking benchmark as of late is the MOT challenge. The MOT15 challenge was first released in 2014 and consists of 22 video sequences of pedestrians. Since then, the MOT16 \cite{milan2016mot16} and MOT17 challenges have been released, with each release also improving upon the annotation protocol and ground truth quality of its predecessor. These datasets are particularly useful when proposing general improvements to multi-target tracking algorithms, since carefully evaluated results from many of the state-of-the-art trackers are available for comparison. The MOT datasets are particularly challenging because scenes are filmed from both static and moving vantage points, the density of the crowds of pedestrians is varied, and the appearances of pedestrians change drastically between sequences. Previously, the PETS \cite{ellis2010pets2010}, TUD Stadtmitte \cite{andriluka2010monocular}, and ETH Pedestrian \cite{eth_biwi_00534} datasets were widely used as benchmarks. These offer a wide variety of multi-view, indoor, and outdoor scenes, and are still useful for training and testing, despite being less frequently used to assess state-of-the-art performance as of late.
The KITTI benchmark \cite{Geiger2012CVPR} is focused on challenges for autonomous driving in urban environments, and contains many tasks beyond multi-target tracking such as odometry, lane estimation, and orientation estimation. \par Traffic surveillance is an application of multi-target tracking that is in desperate need of more high-quality single and multi-sensor datasets. Unfortunately, mounting sensors in areas of heavy traffic flow and collecting and cleaning the data is not an easy task, and collaboration with industry and government entities is crucial. On the other hand, there already exist plenty of datasets for pedestrian tracking, which is also important to ITS applications. Tracking vehicles is useful at traffic intersections as this information can be used for applications such as adaptive traffic signal control and collision detection; this is the area where high-quality datasets are most needed. Tracking both vehicles and pedestrians from the vantage point of an autonomous vehicle is still a challenge as well. The UA-DETRAC benchmark \cite{DETRAC:CoRR:WenDCLCQLYL15} is an excellent large-scale traffic surveillance benchmark that was recently proposed. It consists of 10 hours of video that was recorded at 24 different locations in China, and contains over 8,250 vehicles that were manually annotated. The dataset comes with some reference implementations of popular trackers, an evaluation tool, and detections. Another useful dataset for video-based traffic surveillance research is UrbanTracker \cite{jodoin2014urban}, which comes with sequences from 4 different intersections, as well as an annotation tool and a metrics tool. For multi-sensor traffic surveillance, the Ko-PER intersection dataset \cite{strigel2014ko} offers 6 sequences collected with multiple cameras and laser scanners; however, only 2 sequences currently have ground-truth labels.
Due to the difficulty of collecting, synchronizing, and labeling data across multiple sensors, datasets such as this one are hard to find and extremely valuable. The KITTI tracking dataset also contains synchronized camera and laser scans, but it is slightly less useful for traffic surveillance since it is recorded from the perspective of an autonomous vehicle. Another video surveillance dataset that is of interest is GRAM Road-Traffic Monitoring \cite{guerrero2013iwinac}, which contains 3 sequences recorded under different conditions and with different visual platforms. The benefits of benchmarking across multiple datasets are apparent; in real-world scenarios, traffic surveillance systems will need to generalize to all manner of environments. \par Recently, realistic urban driving simulators have become available to advance research in autonomous vehicles \cite{Dosovitskiy17}. These simulators are typically built on top of game engines and have the ability to generate sensor data. A promising future direction may be leveraging these tools for research on single and multi-sensor multi-target tracking systems, especially if the research seeks to explore augmenting the tracker with vehicle-to-infrastructure communication. \section{Conclusions} \label{sec: conclusions} In this survey we argued that considering multi-target tracking as an assignment problem helps to conceptualize the large variety of existing solution techniques. We presented details for the most popular machine learning methods that address the MDAP underlying many single and multi-sensor multi-target tracking problems. The material was presented by distinguishing between optimization methods for finding the MAP assignment and learning algorithms for the assignment costs, and included a discussion on recent progress in applying deep learning to these tasks. Indeed, the latter is one of the most promising research directions that the field is taking.
However, due to the current limited theoretical understanding of deep learning, careful consideration is required before it is deployed in real-world scenarios. The study of some of the failure modes of deep learning (e.g., the susceptibility of deep neural networks to adversarial inputs, and their poor interpretability) as well as a detailed understanding of its generalization capabilities is still a work in progress. Another interesting research direction that was discussed is the development of solutions for end-to-end multi-target tracking; in particular, data-driven multi-target tracking systems that bundle the series of complex sub-problems into a single, monolithic solution. The fact that deep learning has already been successful in other areas such as machine translation and speech recognition is further evidence that this is a research direction that should be pursued. A large number of other open challenges were also highlighted in this survey, such as handling occlusion, changes in target appearance, and balancing the use of multiple scans of measurements with real-time performance. We used the application to ITS to help motivate many of these, as these problems involve tracking both vehicles and humans in a variety of environmental settings. \begin{acks} This work is supported by the \grantsponsor{1}{National Science Foundation}{} under grant \grantnum{1}{1446813} and the \grantsponsor{2}{Florida DOT}{} under grant \grantnum{2}{BDV31-977-45}. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. P.M. Pardalos' research is supported by the Paul and Heidi Brown preeminent professorship at ISE, University of Florida. \end{acks} \bibliographystyle{ACM-Reference-Format}
\section{Introduction} The model of population protocols was introduced by Angluin et al.~\cite{DBLP:journals/dc/AngluinADFP06}. A population consists of a large collection of indistinguishable agents that rely on very limited communication and computation power. Agents interact in a pairwise, sequential manner, governed by a scheduler. Rules of the protocol determine how agents update their states, and the update depends only on their states at the moment of interaction. The population protocol framework is particularly well-adapted to applications in interacting particle systems, which includes modeling behavior of biological agents and the programming of chemical systems. Specifically, the population protocol framework is equivalent to the formalism of fixed-volume Chemical Reaction Networks (CRNs,~\cite{DBLP:journals/dc/ChenCDS17}), and may be used directly for programming in frameworks such as DNA strand computing. From a computational perspective, we consider the standard input/output population protocol framework in which input is encoded in the form of the starting states of the population of agents, and output is decoded from the population states after a certain time.\footnote{Most simply, one can consider the state of an agent as a $2$-tuple, in which the first entry encodes its ``working state'' and the second its current output.} A basic set of tasks is that of computing \emph{predicates} on the number of agents initially occupying particular input states, so that the outputs of all agents eventually agree on the value of the predicate.\footnote{We use predicates only as a particularly representative example of tasks which can be resolved, noting that some natural tasks of distributed computing, e.g., leader election, are not representable as computational predicates, since not all agents are expected to converge to the same output value.} The most prevalent scheduler model, also considered in this paper, uses a probabilistic scheduler, where pairs of
interacting agents are selected uniformly at random. In this model, one is interested in the \emph{convergence time}, being the number of interactions $t$ until all agents agree on the same output values, which subsequently never change for any agent. Usually, the measure $t/n$ called \emph{parallel time} is considered, which corresponds to the notion of the number of \emph{rounds} of parallel interactions. In this paper we look at the interplay between convergence time and the number of states of the agent's machine. The original formulation assumes that the agents are automata with a constant number of states~\cite{DBLP:journals/dc/AngluinADFP06}. Since then, this assumption has been frequently relaxed in the literature, making the number of states slightly dependent on the population size $n$. Such a relaxation has allowed for significant progress in the field, and specifically for designs of rapidly converging protocols for basic tasks. These results include majority computation in $O(\log^2 n)$ parallel time by the protocol of Alistarh et al.~\cite{DBLP:conf/soda/AlistarhAG18} using $O(\log n)$ states, and leader election in $O(\log^2 n)$ parallel time with the protocol of Gąsieniec and Stachowiak~\cite{DBLP:conf/soda/GasieniecS18} using $O(\log \log n)$ states, under a fair random scheduler. Both these tasks can also be solved using finite-state agents, but the best known solutions to date required super-linear parallel time. We remark that in all relevant applications of population protocols related to modeling and design of complex systems, agents are naturally described as finite-state machines. In this light, results in the relaxed model with a super-constant number of states have a more limited explanatory power for real-world phenomena, and they are perhaps more pertinently seen as lower-bounding the population size $n$ for which a $k$-state protocol still operates correctly, for fixed $k$.
This is sometimes sufficient to explain processes at reasonably large scales, but not in the macroscopic limit $n \to +\infty$. The model relaxation, up to a state space of polylogarithmic size, also does not seem particularly useful in terms of enlarging the set of tasks which can be resolved in the model.\footnote{Cf.~e.g.~\cite{DBLP:journals/tcs/ChatzigiannakisMNPS11} for an equivalence of the languages of predicates computable using finite state protocols and protocols with less than a polylogarithmic number of states, under some important additional assumptions.} This paper shows that \emph{it is possible to achieve fast computation in the finite-state model} of population protocols. Previously, results of this type were only known in the model variant with a unique pre-distinguished leader agent, which directly allows for a form of centralized control~\cite{DBLP:journals/dc/AngluinAE08a}. Here, we provide a completely leaderless approach to bypass the lack of synchronicity of the system, and consequently the lack of a common time measure for agents. This is achieved by creating a hierarchy of so-called \emph{phase-clocks}, running at carefully tuned rates, which synchronize agents sufficiently to allow a form of imperative centralized program to be deployed on the system. \subsection{Overview of Our Results} We provide a general framework for programming in what we call \emph{a sequential code} for population protocols, and describe a way of compiling such code into a set of rules ready for execution in the protocol environment. The language itself involves finite-depth loops of prescribed length (lasting a logarithmic number of rounds, w.h.p.) and branching instructions. It is meant, on the one hand, to have sufficient expressive power and, on the other, to bound the expected execution time of any finite-state protocol expressed in it, with the precise trade-off depending on the adopted variant of compilation.
\paragraph{Results for w.h.p.\ correct protocols.} The basic compilation scheme provides a simple way of achieving protocols which give results which are correct w.h.p., using $O(1)$ states and converging in expected polylogarithmic parallel time. As a representative example, we express in the designed language protocols for two basic tasks: leader election and majority. \emph{Leader election} is a problem in which all agents initially hold the same input state, and the goal is for the population to reach a configuration in which exactly one agent holds a distinguished output, labeled as the leader. \emph{Majority} (in its generalized comparison version) is a problem in which some subset of agents holds initial state $A$, a disjoint subset of the population (of different cardinality) holds initial state $B$, and the population should converge to an identical boolean output for all agents, depending only on which of the initial sets is the larger one. As a direct consequence of the formulation of our protocols within the framework (with no need for further analysis), we obtain the first constant-state protocols for leader election and majority, converging w.h.p.\ in polylogarithmic expected time to a result which is itself correct w.h.p., under a fair scheduler (see Section~\ref{le} and Section~\ref{maj}). We remark that for majority, the high probability of correctness holds regardless of the gap between the sizes of the compared sets. We remark that in the framework, the precise convergence time of the protocols can be given as $O(\log^{c+1} n)$ rounds, where $c$ is the depth of nested ``repeat'' loops in the formulation of the protocol. In the adopted implementations it is thus $O(\log^2 n)$ rounds for leader election and $O(\log^3 n)$ rounds for majority. Next, the obtained solution to leader election allows us to exploit the leader-driven programming framework of~\cite{DBLP:journals/dc/AngluinAE08a}, and to combine it with our framework (see Section~\ref{general-semi-linear}).
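To illustrate the majority problem statement and the parallel-time measure $t/n$ under the uniform random scheduler, the following sketch simulates the well-known three-state approximate-majority dynamics. This is emphatically not the constant-state protocol constructed in this paper, merely a classical reference protocol whose w.h.p.\ correctness requires a large initial gap.

```python
import random

def approximate_majority(n_a, n_b, rng, max_steps=500000):
    """Classic 3-state approximate-majority dynamics: states A, B and
    blank 'b', with rules A+B -> A+b, A+b -> A+A, B+b -> B+B (the
    initiator converts the responder; blank initiators do nothing in this
    ordered-pair sketch). Returns the consensus value and the parallel
    time t/n needed to reach it."""
    pop = ['A'] * n_a + ['B'] * n_b
    n = len(pop)
    count = {'A': n_a, 'B': n_b, 'b': 0}
    for t in range(1, max_steps + 1):
        i, j = rng.sample(range(n), 2)  # uniform random pair of agents
        a, b = pop[i], pop[j]
        if a == 'b' or a == b:
            continue
        new = a if b == 'b' else 'b'  # blank adopts a's opinion; A meets B -> blank
        count[b] -= 1
        count[new] += 1
        pop[j] = new
        if count[a] == n:
            return a, t / n  # consensus: all agents output the same value
    raise RuntimeError("no consensus within the step budget")

rng = random.Random(2)
winner, rounds = approximate_majority(150, 50, rng)  # clear A-majority, n = 200
```

With a linear-size initial gap this converges to the initial majority opinion in $O(\log n)$ parallel time w.h.p.; the point of the protocols in this paper is to retain fast convergence and high-probability correctness without any assumption on the gap.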
We apply protocols generated with (compiled within) this framework as a blackbox, composed with the solution to leader election. This allows us to compute any semi-linear predicate (i.e., one which can be resolved using population protocols with $O(1)$ states~\cite{DBLP:journals/dc/AngluinADFP06}) using a protocol converging in polylogarithmic time ($O(\log^5 n)$ rounds). Such predicates are much more general than the majority problem, including threshold estimation of the size of a set in relation to the entire population, as well as computation of modulo remainders. The task of \emph{plurality consensus} (cf.~\cite{DBLP:conf/soda/BecchettiCNPS15,DBLP:conf/icalp/BerenbrinkFGK16,DBLP:conf/podc/GhaffariP16a}), in which the goal is to identify the largest of $l$ input sets, can also be expressed using semi-linear predicates. A solution to plurality consensus is in fact obtained with a straightforward adaptation of our protocol for majority, with the same convergence time.\footnote{We remark that the number of states in the solution to plurality consensus will depend on $l$, and after some optimization can be bounded as $O(l^2)$. We leave as open the question if this dependence is optimal in the studied setting.} \paragraph{Results for always-correct protocols.} All the designed protocols can be made correct with certainty, converging in $O(n^\varepsilon)$ time, where $\varepsilon>0$ is an arbitrarily chosen parameter, influencing the number of states of the agent. Alternatively, polylogarithmic running time can be preserved at the cost of using $O(\log \log n)$ states per agent. \paragraph{Techniques, proof outline, and originality.} The idea of programming protocols using clock hierarchies goes back to the leader-based framework of~\cite{DBLP:journals/dc/AngluinAE08a}. 
With respect to~\cite{DBLP:journals/dc/AngluinAE08a}, the main new element is the operation of the phase-clock hierarchy itself, which is different and more involved in terms of its (leaderless) design. It also provides stronger synchronization guarantees than those required in~\cite{DBLP:journals/dc/AngluinAE08a}, ensuring that all agents are executing the same line of code at the same time, w.h.p. Our clock hierarchy design itself relies on several building blocks. We first revisit the analysis of a self-stabilizing 7-state oscillatory protocol of the authors~\cite{DBLP:journals/corr/DudekK17}, and extend this oscillating mechanism to obtain a modulo-3 phase clock, operating correctly in the presence of the correct concentration of agents in the clock's designated \emph{control state $X$} in the population. We then show how to extend the modulo-3 phase clock into a modulo-$m$ phase-clock, for $m$ being an arbitrarily large constant (cf. Section~\ref{sec:clocks}). When the clock is operating in the correct conditions and has stabilized to its intended behavior, this allows all agents to agree as to the modulo-$m$ phase indicated by the clock (up to a difference of at most $1$), w.h.p. The construction of the hierarchy now relies on a mechanism to drive the clocks. The internals of the base modulo-$m$ clock progress following the asynchronous random scheduler of the system. The phase of the clock progresses every $\Theta(\log n)$ rounds w.h.p. when the control state $X$ is represented by the right number $^\#\!\! X$ of agents in the population, contained in the range $^\#\!\! X \in [1,n^{1-\varepsilon}]$, where $\varepsilon>0$ is an arbitrarily fixed design parameter of the clock (see Theorem~\ref{th:variantofclock}).
For subsequent clocks in the hierarchy, we use the same control state $X$, but slow down the execution of rules of the $(i+1)$-st clock, using the $i$-th clock so that $\Theta(n)$ rules of the $(i+1)$-st clock are executed during a single period of the $i$-th clock. In this way, the period of the $i$-th clock in the hierarchy is $\Theta(\log n)$ rounds, for $i=1,2,3,\ldots$. The number of clocks placed in the hierarchy depends on the depth of nested loops in the executed program. To the best of our knowledge, this is the first systematic approach to decentralized clock composition in the literature.\footnote{For a simpler ad-hoc composition of two non-self-stabilizing clocks with different designs, see e.g.~\cite{DBLP:conf/soda/GasieniecS18,GSU18}.} We remark on two essential points in the construction. First of all, compositions of clocks (or, for that matter, most other population protocols) are notoriously hard to analyse rigorously. Here, we achieve this by using each clock to emulate a slower scheduler for the next clock in the population, keeping the clock protocol otherwise independent. We use to our advantage the fact that all of the composed protocols have a finite number of states, which allows us in some crucial parts to rely on the continuous approximation of the protocol, corresponding to the limit $n\to +\infty$ (mean-field approximation). For a $k$-state protocol with $k$ fixed, we then identify the state of the population with a point in the phase space $[0,1]^k$, with each coordinate corresponding to the proportion of the number of agents holding the given state in the population. The evolution of states can then be approximated using the continuous dynamics (set of ordinary differential equations) corresponding to the continuous limit of the protocol.
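A minimal example of such a mean-field approximation: for the one-way epidemic rule (informed, susceptible) $\to$ (informed, informed), the informed fraction $x$ obeys $\dot{x} = x(1-x)$ in the limit $n \to +\infty$, since an (informed, susceptible) pair is drawn with probability proportional to $x(1-x)$. The sketch below (a generic Euler integration, not the dynamics of the clock protocols analyzed in the paper) integrates this ODE and matches the closed-form logistic solution.

```python
def mean_field_epidemic(x0, t_end, dt=1e-3):
    """Forward-Euler integration of the mean-field (n -> infinity) limit of
    the one-way epidemic protocol: dx/dt = x(1 - x), a single coordinate
    of the phase space [0,1]^k tracked over parallel time."""
    x, t = x0, 0.0
    while t < t_end:
        x += dt * x * (1.0 - x)
        t += dt
    return x

# Starting from a 1% informed fraction, the closed-form logistic solution
# predicts x(5) = x0*e^5 / (1 - x0 + x0*e^5), roughly 0.6.
x5 = mean_field_epidemic(0.01, 5.0)
```

Away from fixed points of the ODE, the stochastic protocol stays within $\tilde{O}(n^{-1/2})$ of this deterministic trajectory w.h.p., which is the sense in which the continuous dynamics license conclusions about the finite-$n$ system.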
The scope of applicability of this type of approximation is by now well understood for population protocols (cf.~e.g.~\cite{DBLP:conf/icalp/CzyzowiczGKKSU15,DBLP:conf/mfcs/BournezFK12,DBLP:journals/corr/DudekK17}). Informally, over time scales of the length analyzed in this construction (parallel time polynomial in $n$), the behavior of the system in parts of the phase space which are separated from its singularities and fixed points by a distance of at least $n^{-0.5+\Omega(1)}$ can be analyzed using standard concentration bounds (cf.~e.g.~\cite[Lemma~1 and subsequent applications of Azuma's inequality]{DBLP:journals/corr/DudekK17}). The clock protocols we use do not have to operate all the time in such an area of the state space, but once they start correct operation, they would need to cross such an area in order to display subsequent incorrect behavior, which makes such behavior a low-probability event. The analysis obtained through continuous approximation is robust and holds over different types of random schedulers. It generalizes naturally from an asynchronous scheduler to a parallel scheduler, which activates a random matching in the population in every step. This is crucial, since we do not know how to simulate a slowed version of the asynchronous scheduler to propagate successive clocks, but we succeed in using clocks to emulate an (almost perfect) parallel matching scheduler in the population, working at a rate of one activation of the scheduler per period of the clock, thus giving the required slowdown factor of $\Theta(\log n)$ in the construction of the hierarchy (see Section~\ref{hierarchia} for details). The second crucial aspect concerns maintaining the correct concentration of agents in state $X$, which controls the clocks in the hierarchy. We start by remarking on the reason for our choice of the clock mechanism. 
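A parallel scheduler of this kind is easy to state in code; the sketch below (our own illustration, with a hypothetical one-way infection rule standing in for a clock's rules) activates a uniformly random matching in each step:

```python
import random

def parallel_matching_step(pop, rule, rng):
    # one activation of the parallel scheduler: partition the agents
    # into a uniformly random matching and apply `rule` to every pair
    order = list(range(len(pop)))
    rng.shuffle(order)
    for k in range(0, len(order) - 1, 2):
        i, j = order[k], order[k + 1]
        pop[i], pop[j] = rule(pop[i], pop[j])

def infect(u, v):
    # hypothetical rule: an informed agent ('I') informs its partner
    return ('I', 'I') if 'I' in (u, v) else (u, v)

rng = random.Random(0)
pop = ['I'] + ['S'] * 63
counts = []
for _ in range(30):
    parallel_matching_step(pop, infect, rng)
    counts.append(pop.count('I'))
```

Under this scheduler every agent takes part in at most one interaction per step; this is the property exploited when one period of a clock is used to emulate a single activation of the scheduler for the next clock.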
It is relatively easy~\cite{DBLP:journals/dc/AngluinAE08a} to design phase clocks which run in a setting with a unique leader ($^\#\!\! X=1$); however (as suggested before), the need to solve the leader election problem is perhaps the main source of hardness in our setting. One might also consider using {\em junta-driven} phase clocks, designed by G\k{a}sieniec and Stachowiak~\cite{DBLP:conf/soda/GasieniecS18}. This clock mechanism operates correctly in the range of parameters $^\#\!\! X \in [1,n^{1-\delta}]$, which is the same requirement as in our clock mechanism, and also uses $O(1)$ states. However, the clock from~\cite{DBLP:conf/soda/GasieniecS18} will find itself stuck in a central area of the phase space if the clock is initialized when $^\#\!\! X=\Theta(n)$. When the value of $^\#\!\! X$ is reduced, it will eventually start operating correctly with a period of $\Theta(\log n)$ rounds, but this will only happen after an exponentially long time, in expectation.\footnote{This is precisely the reason why the solution to leader election from~\cite{DBLP:conf/soda/GasieniecS18} uses $\Theta(\log \log n)$ states per agent, and not $O(1)$ states.} This problem is completely alleviated in our work by building on the self-stabilizing oscillator design from~\cite{DBLP:journals/corr/DudekK17}, which leaves the central area of its state space in $O(\log n)$ rounds of the (sequential or parallel) scheduler driving the clock, in expectation.\footnote{The downside is that the clock we use is a little harder to manipulate, hence the solution to leader election and subsequent tasks becomes more involved in the layer of clock synchronization.} The only remaining difficulty is that of designing a process which adapts the number of agents in the controlling state $^\#\!\! X$ so that it is in the correct range $^\#\!\! 
X \in [1,n^{1-\varepsilon}]$ for a sufficiently long time to allow the clock hierarchy to organize itself and operate for a polylogarithmic number of rounds, i.e., for $\Omega(\log n)$ periods of the outermost clock. This is achieved by running a separate building block: a dynamical process which starts with all agents in a state representing $X$, and reduces $^\#\!\! X$ over time. We denote the expected time from which $^\#\!\! X \leq n^{1-\varepsilon}$ is satisfied indefinitely in the protocol as $\cal T$; then, the time of convergence of protocols formulated in the framework to a w.h.p. correct result will be given as $\mathcal{T} + O(\mathrm{poly} \log n)$. We consider two distinct ways of reducing $^\#\!\! X$. For use in the always-correct framework, we consider such an auxiliary protocol with the simple rule of the form: \RULE{X}{X}{X}{\neg X}, which eliminates an agent from state $X$ whenever two agents in such a state meet. This guarantees that $^\#\!\! X$ is non-increasing over time, that $^\#\!\! X \geq 1$ is always satisfied, and that $^\#\!\! X \leq n^{1-\varepsilon}$ holds after ${\cal T} = O(n^{\varepsilon})$ parallel time, w.h.p. (see Proposition~\ref{simpleelimination}). For use in the faster w.h.p.\ framework, we use a slightly more involved $k$-level process which eliminates $X$ more quickly, and achieves ${\cal T} = O(\log^k n)$, where $k$ is an arbitrarily chosen integer in the protocol design (see Theorem~\ref{k-stanowa-dynamika} for details). This approach will, however, result in an eventual disappearance of $X$ ($^\#\!\! X = 0$ from some time step onwards); nevertheless, $^\#\!\! X > 0$ continues to hold for $\Omega (\log^k n)$ rounds after time $\cal T$, w.h.p. This is long enough for the protocol to successfully complete (w.h.p.), if $k$ is chosen suitably large with respect to the depth of the program formulation. The designed execution framework comes with a number of guarantees which allow for the analysis of protocols formulated in it. 
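The behavior of this elimination rule is simple to simulate; the sketch below (our own illustration of \RULE{X}{X}{X}{\neg X} under a uniformly random asynchronous scheduler) tracks $^\#\!\! X$ over time:

```python
import random

def eliminate(n, rounds, seed=0):
    # rule X, X -> X, not-X: when two agents in state X meet,
    # one of them leaves state X
    rng = random.Random(seed)
    is_x = [True] * n              # all agents start in state X
    history = []
    for _ in range(rounds * n):    # one parallel round = n interactions
        i, j = rng.sample(range(n), 2)
        if is_x[i] and is_x[j]:
            is_x[j] = False
        history.append(sum(is_x))
    return history

hist = eliminate(n=200, rounds=50)
```

By construction $^\#\!\! X$ never increases and never drops below $1$ (the last agent in state $X$ has no partner to eliminate it); the count decays roughly inversely with elapsed parallel time, consistent with $^\#\!\! X \leq n^{1-\varepsilon}$ being reached after about $O(n^{\varepsilon})$ parallel rounds.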
In Section~\ref{le}, we put forward an appropriate leader election protocol and provide the corresponding analysis, whereas in Section~\ref{maj} we describe and analyze a protocol for majority. The more general case of semi-linear predicates, which extends the protocol for leader election, is handled in Section~\ref{general-semi-linear}. In general, the w.h.p. versions of the protocols expressed in the programming language are relatively straightforward, and often constitute a simplification of previous designs with a super-constant number of states (e.g., the majority protocol mimics the solution known from~\cite{DBLP:conf/soda/AlistarhAG18}). Providing an always-correct variant of the protocols is a bit more involved, since some of the guarantees of correctness given by the primitives of the programming framework only hold w.h.p. (cf.~Theorem~\ref{th:guarantees}). A solution which is always correct is achieved by carefully combining a w.h.p. solution in the framework with a slower, deterministically correct solution running in parallel. This process of protocol combination is based on a notion of \emph{threads}, which are coupled (i.e., informally, executed asynchronously in parallel) in the framework. Threads may share some variables, and the specific interaction between the fast (main) thread and the deterministic thread is chosen in a separate (ad hoc) manner for each of the designed protocols, to allow for a proof of correctness of the composed protocol. \paragraph{Relation to impossibility results.} The deterministically correct protocols which we present do not stand in contradiction to existing lower bounds on majority and leader election from Doty and Soloveichik~\cite{DBLP:conf/wdag/DotyS15}, Alistarh et al.~\cite{DBLP:conf/soda/AlistarhAEGR17} and Alistarh et al.~\cite{DBLP:conf/soda/AlistarhAG18}. 
Those impossibility results were derived (and stated) under an assumption of \emph{stable} computation --- to state it briefly, that once the protocol has reached a state that is a valid output, it remains in this state indefinitely, under any sequence of interactions (see~\cite{DBLP:conf/wdag/DotyS15} for a formal definition of stability). This seemingly safe assumption is in fact prohibitive, as it rules out, e.g., the approach of using a fast protocol to quickly compute an output that is correct with high probability, combined with a slow and always correct protocol (running in the ``background'') to make sure that the computation is eventually always correct. This is precisely the way we proceed to solve both leader election and majority in our framework. When speaking of convergence time, we must emphasize that detecting whether a population protocol has converged is not possible locally, in any model. Indeed, for most basic tasks, such as majority or leader election, no agent can decide whether convergence has already been achieved or whether the outputs of some agents will subsequently change, due to the properties of the random scheduler, which may isolate some subset of agents for an arbitrarily long time with positive probability, regardless of the applied protocol. \paragraph{Extensions of results.} We leave as open the question of whether always-correct finite-state protocols for the problems considered in this work converge in polylogarithmic time. In particular, we do not know if a solution exists to the following relaxed variant of the leader election problem: obtaining a finite-state protocol which creates in polylogarithmic time a junta of size $^\#\!\! X = O(n^{1-\varepsilon})$, while guaranteeing that the junta remains non-empty at (almost) all times if the dynamics are run forever. 
A solution to this problem appears to be a necessary basic building block for controlling all phase clock designs known to us, including the one considered in this work. We also remark that in our solutions, after convergence to a correct output, the agents are still allowed to update their states in the part which is not used for encoding output. This is the case for our protocols, which continue running at least for some time after convergence. The time after which state changes in the population cease to happen, i.e., after which the protocol becomes silent, is $O(\mathrm{poly}\log n)$ for the w.h.p.\ schemes we present, whereas the always-correct schemes as presented in this paper never become silent. The latter solution can be modified to become silent after polynomial time (informally: once the deterministic thread has terminated with the same result as the main thread, for all agents), but the time after which the protocol becomes silent is still significantly longer than its $O(n^{\varepsilon})$ convergence time. Finally, we note that all our results are phrased in the randomized model of population protocols, in which agents have access to a constant number of fair coin tosses in each interaction, which they can use to select the transition rule in a given interaction. Phrasing the protocols to enforce deterministic operation is possible by simulating coin tosses from the randomness of the fair scheduler, using the so-called synthetic coin technique~\cite{DBLP:conf/soda/AlistarhAEGR17}. \subsection{Other Related Work} \paragraph{Population Protocols.} The population protocol model captures the way in which the complex behavior of systems (biological, chemical, sensor networks, etc.) emerges from the underlying local pairwise interactions of agents. The original work of Angluin et al.~\cite{DBLP:journals/dc/AngluinADFP06, DBLP:journals/dc/AngluinAE08a} was motivated by applications in sensor mobility. 
Despite the limited computational capabilities of individual sensors, population protocols permit the computation of two important classes of predicates: threshold predicates, which decide if a weighted sum of the types appearing in the population exceeds a certain value, and modulo remainders of similar weighted sums. More precisely, the family of predicates computable in the finite-state population protocol model under the assumption of stability has been characterized as that of semi-linear predicates, or equivalently predicates expressible in first-order Presburger arithmetic~\cite{DBLP:journals/dc/AngluinADFP06}. \paragraph{Majority and Leader Election.} The most common problems considered in the context of population protocols include \emph{majority} and \emph{leader election}. The majority problem is a special form of consensus~\cite{DBLP:conf/fct/Fischer83}, in which the final configuration has to reflect the color of the larger fraction of a population initially colored with two colors.\footnote{The variant considered in this work is more general, since we allow some agents to be initially uncolored.} The majority problem was first posed in the context of population protocols in~\cite{DBLP:journals/dc/AngluinADFP06}, and later a 3-state protocol for \emph{approximate} majority was given in~\cite{DBLP:journals/dc/AngluinAE08}, which converges in $O(\log n)$ time, but requires an initial population gap of $\Omega(\sqrt{n \log n})$. Draief and Vojnovi\'c \cite{DBLP:journals/siamco/DraiefV12} and later Mertzios et al. \cite{DBLP:conf/icalp/MertziosNRS14} considered a 4-state protocol for exact majority, however with a prohibitive polynomial convergence time ($O(n \log n)$ expected parallel time). Alistarh et al. 
\cite{DBLP:conf/podc/AlistarhGV15} were the first to provide a protocol for exact majority with polylogarithmic parallel time, however the number of states there can be polynomial if the initial gap is small enough ($O(n)$ states, $O(\log^2 n)$ time w.h.p.). Further studies on time-space trade-offs can be found in Alistarh et al.~\cite{DBLP:conf/soda/AlistarhAEGR17} and Bilke et al.~\cite{DBLP:conf/podc/BilkeCER17}, culminating with Alistarh et al.~\cite{DBLP:conf/soda/AlistarhAG18} showing a protocol with $O(\log n)$ states and $O(\log^2 n)$ expected time to solve exact majority. In the leader election problem, in the final configuration a unique agent must converge to a {\em leader state} and every other agent has to stabilize in a {\em follower} state. A series of papers~\cite{DBLP:conf/icalp/AlistarhG15,DBLP:conf/soda/AlistarhAEGR17,DBLP:conf/podc/BilkeCER17} culminated with Alistarh et al.~\cite{DBLP:conf/soda/AlistarhAG18} achieving an $O(\log n)$-state protocol electing a leader in $O(\log^2 n)$ expected time, improved by Berenbrink et al.~\cite{DBLP:conf/soda/BerenbrinkKKO18} to $O(\log^2 n)$ time with high probability. In a breakthrough paper, G\k{a}sieniec and Stachowiak~\cite{DBLP:conf/soda/GasieniecS18} reduced the number of states exponentially to $O(\log \log n)$, and later G\k{a}sieniec et al.~\cite{GSU18} achieved the same number of states but improved the time to $O(\log n \log \log n)$. \paragraph{Progress on Phase Clocks.} Unsurprisingly, the more efficient protocols in or around the population protocol framework~\cite{DBLP:conf/soda/BoczkowskiKN17,DBLP:conf/soda/GasieniecS18,DBLP:conf/soda/AlistarhAG18,GSU18} have focused on ways to allow some form of synchronization to appear in the system, in the form of a \emph{phase clock} or closely related construct. This line of work includes, in particular, {\em leader-less} phase clocks applied by Alistarh et al. 
in \cite{DBLP:conf/soda/AlistarhAG18} and {\em junta-driven} phase clocks used by G\k{a}sieniec and Stachowiak~\cite{DBLP:conf/soda/GasieniecS18}. \subsection{Preliminaries and Notation} Population protocols are expressed in the form of a set of rules, describing the state transitions of a pair of interacting agents. When designing protocols with $O(1)$ states, we use the convention that the state space of an agent is the Cartesian product of a certain number of boolean flags known as \emph{state variables}, which may be set or unset for each agent (we use the symbol $\ensuremath{\textit{on}}$ to denote the truth value and $\ensuremath{\textit{off}}$ to denote the false value). A rule can then be conveniently described through bit-masks, i.e., by specifying a set of four boolean formulas $\Sigma_1, \Sigma_2, \Sigma_3, \Sigma_4$ on the state variables, written as follows: \RULE{\Sigma_1}{\Sigma_2}{\Sigma_3}{\Sigma_4}. Such a rule may be activated when the pair of interacting agents satisfy formulas $\Sigma_1$ and $\Sigma_2$ on their state variables, respectively. The execution of the rule corresponds to a minimal update of the states of the agents so that formulas $\Sigma_3$ and $\Sigma_4$ are satisfied by the states of the respective agents after the update. The special symbol $(.)$ is used to denote the empty boolean formula, which matches any agent. By convention, boolean variables associated with agents will be denoted by capital letters of the alphabet, such as $A$. The set of agents in the population for which $A$ is set will be denoted by $\mathcal{A}$. The number of agents in the population is denoted by $n$. We apply a convention in which the scheduler picks exactly one rule uniformly at random from the set of rules of the protocol, and executes it for the interacting agent pair if the pair matches it. Protocols designed in this framework can be translated into frameworks in which all matching rules are executed systematically, e.g., in a top-down manner. 
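One possible concrete reading of this bit-mask convention (the dictionary-based encoding below is our own illustration, not part of the framework) represents each agent as a map from state variables to booleans, a formula as the set of variables it constrains, and rule execution as a minimal update:

```python
def matches(state, formula):
    # a bit-mask formula maps variable names to required truth values;
    # variables not mentioned are unconstrained ((.) is the empty formula {})
    return all(state[v] == want for v, want in formula.items())

def execute(rule, s1, s2):
    # apply a rule (Sigma1, Sigma2, Sigma3, Sigma4) to two agent states:
    # if both preconditions match, minimally update the states so that
    # the corresponding postconditions hold
    pre1, pre2, post1, post2 = rule
    if matches(s1, pre1) and matches(s2, pre2):
        s1.update(post1)
        s2.update(post2)

# the elimination rule X, X -> X, not-X in this encoding
rule = ({'X': True}, {'X': True}, {'X': True}, {'X': False})
a, b = {'X': True}, {'X': True}
execute(rule, a, b)         # pair matches: b leaves state X
c, d = {'X': True}, {'X': False}
execute(rule, c, d)         # pair does not match: states unchanged
```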
Our constructions rely on the idea of composing multiple protocols. In the simplest setting, this is obtained by defining each protocol with its own ruleset, and putting the rulesets of the different protocols together into one. We call protocols which have been composed like this \emph{threads}. (To ensure fairness of time division between the threads, mainly to aid the reader's intuition, we will assume that each protocol in each thread is written down with the same number of rules; this can be enforced by creating a constant number of copies of the respective rules, up to the least common multiple of the numbers of rules of the respective threads.) We speak of \emph{composing protocol $P_2$ on top of protocol $P_1$} if the ruleset of protocol $P_2$ does not affect the values of boolean variables used in protocol $P_1$. Intuitively, the (asymptotic) execution of a protocol is not affected by composing a constant number of other protocols on top of it. \section{Programming Framework for Protocol Formulation} \subsection{Language Specification} Our simple language for writing imperative code is based on the following constructs and assumptions. The code of the protocol is a collection of threads, sharing the same pool of boolean state variables. Variables are defined and initialized at protocol startup, either as $X \gets \ensuremath{\textit{on}}$ or $X \gets \ensuremath{\textit{off}}$. The code of each thread is a finite-depth branching program with loops. The only available control instructions are the following: \begin{itemize} \item \IFNONEMPTY{$condition$}\ [block] \key{else}:\ [block] --- the branching instruction, where $condition$ is a boolean expression on local state variables. \item \REPEAT\ [block] --- the outermost control loop of a thread. \item \REPEAT[c \ln n]\ [block] --- possibly nested loops within a thread, where $c$ is an explicitly stated positive integer. 
\end{itemize} These constructs are intended to have an intuitive interpretation for the reader (which indeed holds in suitable circumstances w.h.p., as shown later). The intuition for the branching instruction is that it conditions on the existence of at least one agent in the population for which the given boolean formula on local state variables evaluates to true. The only primitive instructions are the following: \begin{itemize} \item \ASYNC[c \ln n]\ [ruleset], followed by a set of primitive rules. \item The assignment instruction \GETS{X}{condition}, where $X$ is a variable and $condition$ is a boolean condition on local variables. \end{itemize} Intuitively, the assignment instruction is intended to update the states of all agents in the population, setting $X$ for an agent if and only if $condition$ is true for it, while the \ASYNC\ instruction is intended to run the provided set of rules on the population for the specified number of parallel rounds (allowing for nesting of population protocols). An execution of a rule from a specified ruleset or of a local variable assignment corresponding to the given assignment instruction for an agent in the population is called an \emph{operation}. \subsection{Outline of Compilation and Execution Model} An informal description of the compilation process of the code is as follows. A certain (constant) number of threads are specified through code by the protocol designer. The remaining threads (also a constant number, dependent on the loop depth of the code formulation) are added internally to allow for clock operation and synchronization. For a program with $C_T$ threads, where $C_T$ is a constant, interacting agents pick a rule corresponding to the current step of one of the $C_T$ threads, choosing the thread u.a.r.\ (i.e., with probability $1/C_T$), independently of other interactions. 
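The rule-count equalization mentioned earlier (padding every thread's ruleset with copies up to the least common multiple of their sizes) can be sketched as follows; combined with a uniformly random rule choice, each thread is then selected with probability exactly $1/C_T$:

```python
from math import lcm

def equalize(rulesets):
    # pad each thread's ruleset with verbatim copies of its rules so that
    # all rulesets reach the least common multiple of their sizes
    target = lcm(*(len(r) for r in rulesets))
    return [r * (target // len(r)) for r in rulesets]

# two hypothetical threads with 2 and 3 rules, respectively
threads = equalize([['r1', 'r2'], ['s1', 's2', 's3']])
```

After padding, both rulesets have 6 rules, so a scheduler picking one of the 12 rules uniformly at random lands in either thread with probability $1/2$, while the relative frequencies of rules within each thread are unchanged.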
For each thread, each agent runs its own control loop, which describes a finite state automaton. The automaton proceeds through alternating phases of ruleset execution, followed by synchronization. The execution of a ruleset at depth $a$ in the loop structure of the code is synchronized by the $a$-th outermost clock of the system. We formalize this description in Section~\ref{sec:compilation}. \subsection{Compiled Population Protocols: Guarantees on Behavior and Convergence Time} We will require that the compilation framework produces protocols which satisfy two constraints throughout their execution, known as the \emph{guaranteed behavior}. \begin{definition}[Guaranteed behavior] The programming framework admits the \emph{guaranteed behavior property} if the following conditions are jointly fulfilled for any protocol compiled in it: \begin{itemize} \item At any time, for any agent, a state variable may only be modified in the way given by some primitive operation appearing in the code, i.e., by a rule in a ruleset or by an assignment operation. \item Suppose that in some execution of the protocol, a state variable (or, more generally, a boolean formula on state variables) $S$ satisfies $\mathcal{S}(t) = \emptyset$ for all $t > t_0$. If at some time $t_1 > t_0$ an agent is executing an operation not contained in a branch of the form ``\IFNONEMPTY{S}'', then this agent will never execute any operation contained in this branch in the future. \end{itemize} \end{definition} In addition, we expect some further properties from the compiled protocol, which are to be met w.h.p. To describe them, we now introduce the following notation for an execution of the compiled protocol. 
\begin{definition}[Synchronized iterations] We say a protocol is \emph{synchronized} at a given moment of time if, in each thread, all agents either have their instruction pointer pointing to the same instruction location (all agents are active) or are in the process of entering or leaving the block containing this instruction. An \emph{iteration} is a time interval defined by looking at one agent (fixed arbitrarily at the start of the protocol) and considering the period of time from one time when this agent activates the first instruction of the outermost loop, to the next such time. Finally, an iteration is said to be \emph{synchronized} if the protocol is synchronized at all moments of time during this iteration, all agents follow the same execution path, and moreover all instructions or rulesets contained within each internal \key{repeat} or \key{execute} instruction on the execution path of the program are executed (with all agents active) at least for the specified number of rounds. \end{definition} We note that, in particular, for any \ASYNC[c \ln n]\ [ruleset] instruction on the execution path, in any synchronized iteration there must exist a period of $c\ln n$ rounds (or $c n \ln n$ steps) during which the program emulates an execution of a simple protocol consisting of the set of rules $ruleset$ only, under a \emph{fair} random scheduler. Note that, directly before and after this period, the given ruleset may also possibly be run for some further time for an arbitrary subset of agents, thus emulating the behavior of some \emph{unfair} scheduler on which no promises are made. Our expectation that the programmed code does what it says it should is met in some synchronized iterations, which we call \emph{good} iterations. \begin{definition}[Good iterations] An iteration is said to be \emph{good} if it is synchronized and additionally all executions of the assignment and \IFNONEMPTY{} instructions performed within the iteration reach their usual (w.h.p.) 
outcome for all agents, where: \begin{itemize} \item The expected outcome of a \GETS{X}{\Sigma} operation is that for each agent, the value of the state variable $X$ is set to the value of the boolean formula $\Sigma$ given on its local state variables. \item The expected outcome of a \IFNONEMPTY{\Sigma} operation is that instructions contained in the ``\key{if}:'' block are subsequently executed if and only if the boolean formula $\Sigma$ on local state variables evaluates to true for at least one agent in the population, and that instructions contained in the (optional) ``\key{else}:'' block are otherwise executed. \end{itemize} \end{definition} We are now ready to state the main theorem on the properties of the framework. \begin{theorem}\label{th:guarantees} Any program expressed in the provided programming language can be compiled into a population protocol with $O(1)$ states, such that: \begin{itemize} \item[(i)] The guaranteed behavior constraints are always met; \item[(ii)] Subject to a choice made at the time of compilation, one of the following claims holds: \begin{itemize} \item[(a)] Starting from the initialization of the protocol, after an initialization phase taking some number of rounds $O(\mathrm{poly}\log n)$, each of the $\Omega(\log n)$ iterations which follow directly afterwards is good and takes at most $O(\mathrm{poly}\log n)$ time; or \item[(b)] Starting from an arbitrary moment of time, after an initialization phase taking some number of rounds $O(n^{\varepsilon})$, where $\varepsilon>0$ is an arbitrarily chosen constant influencing the number of states of the compiled protocol, each of the $\Omega(\log n)$ iterations which follow directly afterwards is good and takes at most $O(\mathrm{poly}\log n)$ time. \end{itemize} \end{itemize} \end{theorem} The claim of the theorem follows from the compilation mechanism described in Section~\ref{sec:compilation} and the construction of the phase clock hierarchy described in Section~\ref{sec:hierarchy}. 
\section{Warmup: Programming with High Probability of Correctness} Using the framework is easiest when the goal is to achieve w.h.p.\ correctness of the result. In this scenario, it will, in most cases, be sufficient to create one Main thread only. The execution of the code can be seen as follows: for some time, the provided rulesets will be executed in no particular order. Then, w.h.p., some number of iterations of the outermost loop of the code will be executed as designed, i.e., respecting all rules of sequential programming, conditions, and loop limits. At some point during such correct execution of the program, after a sufficient number of iterations are completed w.h.p., it may slow down (or stop) without warning. Thus, the following use of the framework is safe and recommended: the Main thread of the code should be given as ``\REPEAT\ [Program]'', where ``Program'' is a piece of code which is required to solve the problem correctly in a sequential setting, subject to two additional constraints: (1) ``Program'' does not modify any of the states of the input of the protocol (if any), and resets any other variables it may use to a valid initialization if it detects their initialization to be invalid; (2) if the output variables computed in one execution of ``Program'' were a valid answer to the problem, then the next execution of ``Program'' in the next iteration of the outer ``\REPEAT'' loop should not alter the state of output variables for any agent. In the above, constraint (1) eliminates the problem with the initial uncontrolled phase of the execution before its correct operation starts, and (2) handles the issue of the program stopping at an unpredictable moment (e.g., when writing output variables). To illustrate this approach, we provide the programs \fun{Majority} and \fun{LeaderElection}, which describe protocols solving the respective problems correctly w.h.p. 
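The coin-flip halving at the heart of \fun{LeaderElection} (given in the next subsection) can be previewed with a small simulation. The code below is our own iteration-level abstraction: every leader flips a fair coin; if at least one leader flips heads, exactly the heads survive; if no agent is a leader, all agents become leaders; and if all leaders flip tails, the leader set is left unchanged for that iteration (matching the recurrence used in the proof):

```python
import random

def leader_election(n, iters, seed=1):
    # iteration-level abstraction of the LeaderElection coin-flip halving
    rng = random.Random(seed)
    leaders = n                    # L is initialized to 'on' for everyone
    for _ in range(iters):
        if leaders == 0:
            leaders = n            # empty leader set: everyone becomes a leader
            continue
        heads = sum(1 for _ in range(leaders) if rng.random() < 0.5)
        if heads > 0:
            leaders = heads        # L <- D: only the heads survive
        # all tails: leader set unchanged in this iteration
    return leaders

survivors = leader_election(n=1 << 10, iters=400)
```

With $n = 1024$, the leader count shrinks by a constant factor per iteration in expectation ($\mathbb{E}[\ell'] \le \frac34 \ell$ for $\ell \ge 2$), so a few hundred iterations leave a single leader, in line with the $O(\log n)$ bound obtained via multiplicative drift.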
\subsection{Leader Election Protocol (w.h.p.)} \label{le} \begin{theorem} \label{lewhp} Let $T = \Omega(\log n)$ be fixed arbitrarily. At the end of any $T$-th successive good iteration, \fun{LeaderElection} has elected a unique agent with a set boolean state variable $L$, w.h.p. \end{theorem} \begin{figure}[!ht] \begin{protocol} \PROTOCOL{LeaderElection} \VAR{\OUTPUT{\INIT{L}{\ensuremath{\textit{on}}}}} \THREAD{Main}{L} \LOCAL{\INIT{D}{\ensuremath{\textit{off}}}, \INIT{F}{\ensuremath{\textit{on}}}} \REPEAT \IFNONEMPTY{L} \GETS{F}{\{\ensuremath{\textit{on}}, \ensuremath{\textit{off}}\} $chosen uniformly at random$} \GETS{D}{L \wedge F} \IFNONEMPTY{D} \GETS{L}{D} \key{else}: \GETS{L}{\ensuremath{\textit{on}}} \end{protocol} \end{figure} \begin{proof} Consider the first good iteration of the Main thread. Assume $\cal L$ is nonempty. Denote by $\ell_i$ the size of $\cal L$ at the start of the $i$-th good iteration after the first. We have $\mathbb{E}[\ell_{i+1} | \ell_i] = \ell_i/2 + 2^{-\ell_i} \cdot \ell_i$, so for $\ell_i \ge 2$ it follows that $\mathbb{E}[\ell_{i+1} | \ell_i] \le \frac34 \ell_i$. By the multiplicative drift theorem~\cite{DBLP:journals/algorithmica/DoerrJW12}, it follows that for some $t = c \log n$ with a large enough constant $c$, $\ell_t = 1$ with high probability. We then also have $\ell_T = \ell_t = 1$ for any $T \geq t$. If $\cal L$ is empty at the beginning of the first good iteration, then in the next iteration we have $|{\cal L}| = n$, and subsequently the same analysis applies. \end{proof} It follows that the convergence time of \fun{LeaderElection} is $O(\log^2 n)$ parallel rounds w.h.p. under the compilation scheme from Theorem~\ref{th:guarantees}(ii)(a), as each iteration of the protocol with no nested loops can be realized in $O(\log n)$ parallel rounds. \subsection{Majority Protocol (w.h.p.)} \label{maj} \begin{theorem} \label{th:majority_runtime} Let $T = \Omega(\log n)$ be fixed arbitrarily. 
At the end of any $T$-th successive good iteration, \fun{Majority} has computed in the boolean state variable $Y_A$ a correct answer to the majority problem on sets $\cal A$ and $\cal B$, w.h.p. \end{theorem} \begin{figure}[!ht] \begin{protocol} \PROTOCOL{Majority} \VAR{\OUTPUT{Y_A}, \INPUT{A, B}} \THREAD{Main}{Y_A, \READONLY{A,B}} \LOCAL{\INIT{A^*}{\ensuremath{\textit{off}}}, \INIT{B^*}{\ensuremath{\textit{off}}}, \INIT{K}{\ensuremath{\textit{off}}}} \REPEAT \GETS{A^*}{A} \GETS{B^*}{B} \REPEAT[c \ln n] \ASYNC[c \ln n] \RULE{A^*}{B^*}{\neg A^*}{\neg B^*} \GETS{K}{\ensuremath{\textit{off}}} \ASYNC[c \ln n] \RULE{A^* \wedge \neg K}{\neg A^* \wedge \neg B^*}{A^* \wedge K}{A^* \wedge K} \RULE{B^* \wedge \neg K}{\neg A^* \wedge \neg B^*}{B^* \wedge K}{B^* \wedge K} \IFNONEMPTY{A^*} \GETS{Y_A}{\ensuremath{\textit{on}}} \IFNONEMPTY{B^*} \GETS{Y_A}{\ensuremath{\textit{off}}} \end{protocol} \end{figure} \begin{proof} We follow the steps of \cite{DBLP:conf/soda/AlistarhAG18}, adapting the proof to our setting. Suppose w.l.o.g.\ that initially $|\set A| < |\set B|$. The cancellation rule in line 11 preserves the difference $|\set A^*| - |\set B^*|$ as an invariant. Consider the first good iteration. All of the following properties hold with high probability, provided that $c$ is a large enough constant. ${A}^*$ and ${B}^*$ are initialized so that $|\mathcal{A}^*| < |\mathcal{B}^*|$. Consider a single loop-iteration of the loop in line 9 (that is, a single pass through lines 10--15). Denote by $a_i$ the size of $\mathcal{A}^*$ at the start of the $i$-th loop-iteration, by $a'_i$ the size of $\mathcal{A}^*$ in the $i$-th loop-iteration at line 12, and denote $b_i$ and $b'_i$ in the same manner with respect to $\mathcal{B}^*$. 
We observe the following: if for some $i$, in the $i$-th loop-iteration we have $b'_i \ge \frac16n$, then in that loop-iteration, in lines 10--12 $|\mathcal{B}^*|$ is always at least $\frac16n$, and any $x \in \mathcal{A}^*$ has probability at least $1 - (1 - \frac{1}{6n})^{c n \ln n} \geq 1 - n^{-c/6}$ of triggering the rule from line 11. As a consequence, $a'_{i}=0$. Now, assume that for some $i$ we have $a'_i,b'_i \le \frac16n$. In such a case $a_{i+1} \le 2a'_i$ and $b_{i+1} \le 2b'_i$, and there are always at least $\frac13n$ nodes that do not belong to $\mathcal{A}^*$ or $\mathcal{B}^*$. Additionally, any node that at line 12 belonged to $\mathcal{A}^*$ has probability at least $\frac{1}{3n}$ at each step of triggering the rule from line 14. Thus with high probability, it triggers it exactly once. The same reasoning holds for $\mathcal{B}^*$, and as a consequence, $a_{i+1} = 2a'_i$ and $b_{i+1} = 2b'_i$. It now follows that for some $t = O(\log n)$, it holds that $a'_t= 0$. Otherwise, we would have that for any $i \le t$, we have $b'_i \le \frac16n$. Thus $b'_i - a'_i = b_i - a_i = 2^{i-1} (b_1 - a_1)$, and $b'_t - a'_t > n$, a contradiction. It follows that after leaving the loop in lines 9--15, $\mathcal{A}^* = \emptyset$ and $\mathcal{B}^* \not= \emptyset$, thus $\set Y_A = \emptyset$ holds. \end{proof} It follows that the convergence time of \fun{Majority} is $O(\log^3 n)$ parallel rounds w.h.p. under the compilation scheme from Theorem~\ref{th:guarantees}(ii)(a), since each iteration of the protocol with a single nested loop can be realized in $O(\log^2 n)$ parallel rounds. \section{Compilation Process of the Sequential Code}\label{sec:compilation} The proposed language for expressing sequential code is first precompiled into a subset of the language with a simple tree grammar.
All leaf nodes take the form of \ASYNC[c \ln n]{\ [ruleset]} instructions with provided rulesets, and all internal nodes take the form of \REPEAT[c \ln n]{\ [ChildNode1, ChildNode2, \ldots]} instructions (with the exception of the root node, which is typically given by a \REPEAT{} loop). (If different values of $c$ were specified for loop counts in the description of the code, w.l.o.g.\ we understand $c$ to be the maximum among them, and use this value of $c$ throughout the code.) We describe below the process of elimination of the other language constructs. We start by precompiling all assignment operations, replacing them with simple rulesets; we then evaluate all conditions and eliminate the branching structure from the program. \paragraph{Assignments.} A naive but correct way of implementing an assignment operation \GETS{X}{\Sigma}, for some boolean formula $\Sigma$ on local state variables, is through an insertion of code in the form of loops shown in Fig.~\ref{fig:assignment}, using an auxiliary internal trigger state variable $K_{(\ensuremath{^\#\!\!\!\!}\ )}$ assigned to the line number $\#$ in which the instruction is placed. The assignment operation ensures that in any circumstances, if the value of $X$ changes, then $X$ may become set only when $\Sigma$ is set and $X$ may become unset only when $\Sigma$ is unset. Moreover, under correct operation, the trigger $K_{(\ensuremath{^\#\!\!\!\!}\ )}$ will be set for each agent at the beginning of the second loop w.h.p., and unset while performing the assignment of $X$ for each agent; thus, the assignment will be performed at most once on each agent, and exactly once w.h.p. (where the success probability may be made arbitrarily high through a careful choice of $c$).
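As an illustration only (not part of the compiled protocol), the arm-then-fire behavior of the two trigger loops can be sketched in Python; the function name, the constant $c$, and the abstraction of interactions to single-agent activations are ours:

```python
import math
import random

def assign(n, sigma, c=6):
    """Toy simulation of the two-loop assignment 'X <- Sigma':
    the first loop arms the trigger K for every activated agent;
    the second loop lets each agent with a set K perform the
    assignment once, unsetting K.  `sigma` is a per-agent boolean
    predicate standing in for the formula Sigma."""
    K = [False] * n
    X = [None] * n
    writes = [0] * n                    # how often each agent wrote X
    steps = int(c * n * math.log(n))
    for _ in range(steps):              # first loop: arm the trigger
        K[random.randrange(n)] = True
    for _ in range(steps):              # second loop: fire at most once
        u = random.randrange(n)
        if K[u]:
            X[u] = sigma(u)
            K[u] = False
            writes[u] += 1
    return X, writes
```

With $c n \ln n$ activations per loop, each agent is armed and then fires exactly once, up to probability $n^{-\Omega(c)}$, mirroring the w.h.p.\ guarantee above.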
\begin{figure}[h] \begin{protocol} \ASYNC[c \ln n] \RULE{\neg K_{(\ensuremath{^\#\!\!\!\!} )}}{.}{K_{(\ensuremath{^\#\!\!\!\!} )}}{.} \ASYNC[c \ln n] \RULE{\Sigma \wedge K_{(\ensuremath{^\#\!\!\!\!} )}}{.}{X \wedge \neg K_{(\ensuremath{^\#\!\!\!\!} )}}{.} \RULE{\neg\Sigma \wedge K_{(\ensuremath{^\#\!\!\!\!} )}}{.}{\neg X \wedge \neg K_{(\ensuremath{^\#\!\!\!\!} )}}{.} \end{protocol} \vspace{-5mm} \caption{Precompilation of instruction ``\GETS{X}{\Sigma}'' placed in line number $\#$.}\label{fig:assignment} \end{figure} \paragraph{Conditions and branching.} A conditional statement following instruction ``\IFNONEMPTY{X}'' has its condition evaluated following an insertion of code presented in Fig.~\ref{fig:ifnonempty}, using an auxiliary flag variable $Z_{(\ensuremath{^\#\!\!\!\!}\ )}$ assigned to the line number $\#$ in which the instruction is placed. In the first line of the code from Fig.~\ref{fig:ifnonempty}, flag $Z_{(\ensuremath{^\#\!\!\!\!}\ )}$ is unset (w.h.p.) for all agents (and never becomes set for any agent). In the loop which follows, an epidemic process is triggered with source $X$ on flags $Z_{(\ensuremath{^\#\!\!\!\!}\ )}$. In this phase, a flag $Z_{(\ensuremath{^\#\!\!\!\!}\ )}$ may become set for an agent only if it is already set at the same time for at least one other agent in the population, or if set $\mathcal X$ is nonempty, i.e., if $\mathcal{Z}_{(\ensuremath{^\#\!\!\!\!}\ )} = \emptyset$ at some time, then $\mathcal{Z}_{(\ensuremath{^\#\!\!\!\!}\ )}$ may become nonempty only if $\mathcal X$ is nonempty at some point. Moreover, if set $\mathcal X$ is nonempty throughout the loop, then at the end of the loop, flag $Z_{(\ensuremath{^\#\!\!\!\!}\ )}$ will be set for all agents w.h.p. We note the behavior of the code below when set $\mathcal X$ becomes permanently empty from some time $t_0$ onwards.
Then, at the end of the first correct execution of the code evaluating ``\IFNONEMPTY{X}'', we will have flag $Z_{(\ensuremath{^\#\!\!\!\!}\ )} = \ensuremath{\textit{off}}$ for all agents. Suppose this happens at time $t_1 > t_0$. Then, for all $t > t_1$, no flag $Z_{(\ensuremath{^\#\!\!\!\!}\ )}$ will ever be set again, regardless of the correctness of the execution of the protocol. This will ensure the desired property of correctness of operation. \begin{figure}[h] \begin{protocol} \VAR{\OUTPUT{Z_{(\ensuremath{^\#\!\!\!\!} )}}} \GETS{Z_{(\ensuremath{^\#\!\!\!\!} )}}{\ensuremath{\textit{off}}} \ASYNC[c \ln n] \RULE{X}{.}{Z_{(\ensuremath{^\#\!\!\!\!} )}}{Z_{(\ensuremath{^\#\!\!\!\!} )}} \RULE{Z_{(\ensuremath{^\#\!\!\!\!} )}}{.}{Z_{(\ensuremath{^\#\!\!\!\!} )}}{Z_{(\ensuremath{^\#\!\!\!\!} )}} \end{protocol} \vspace{-5mm} \caption{Precompilation of instruction ``\IFNONEMPTY{X}'' placed in line number $\#$. }\label{fig:ifnonempty} \end{figure} After running the condition evaluation, intuitively, we require that the agent executes operations within the block of the corresponding ``\key{if}:'' branch only if $Z_{(\ensuremath{^\#\!\!\!\!}\ )} = \ensuremath{\textit{on}}$, and executes operations within the block of the ``\key{else}:'' branch only if $Z_{(\ensuremath{^\#\!\!\!\!}\ )} = \ensuremath{\textit{off}}$. The precompiled form of the code does \emph{not}, however, have branching structure. We perform a standard operation of compacting rulesets of the ``\key{if}:'' and ``\key{else}:'' blocks into a single ruleset, adding a requirement on $Z_{(\ensuremath{^\#\!\!\!\!}\ )}$ or $\neg Z_{(\ensuremath{^\#\!\!\!\!}\ )}$, respectively, to the conditions required for triggering rules from the respective block. Formally, this is done in a bottom-up manner, and when compacting the ``\key{if}:'' and ``\key{else}:'' blocks corresponding to a condition in line $\#$, we assume that all branching instructions contained within these two blocks of code have already been eliminated.
Thus, the only instructions which remain in these blocks of code are assumed to already have the required structure of a tree program with \ASYNC[c \ln n]{\ [ruleset]} instructions at leaf nodes and \REPEAT[c \ln n]{\ [ChildNode1, ChildNode2, \ldots]} instructions at internal nodes. We denote the trees corresponding to the respective blocks $T_{if (\ensuremath{^\#\!\!\!\!}\ )}$ and $T_{else (\ensuremath{^\#\!\!\!\!}\ )}$. We construct a single tree $T_{(\ensuremath{^\#\!\!\!\!}\ )}$ out of them as follows. First, we augment each of the trees until they are isomorphic, by inserting artificial repeat loops and nil instructions (empty rulesets) in the artificially created leaves. The tree $T_{(\ensuremath{^\#\!\!\!\!}\ )}$ is set to be isomorphic to each of these augmented trees. There is now a one-to-one matching between the leaves of the trees $T_{if (\ensuremath{^\#\!\!\!\!}\ )}$ and $T_{else (\ensuremath{^\#\!\!\!\!}\ )}$. Now, for the $i$-th leaf of $T_{(\ensuremath{^\#\!\!\!\!}\ )}$ we define a ruleset $R^i$ by taking a union of modified rules from rulesets in the $i$-th leaf of $T_{if (\ensuremath{^\#\!\!\!\!}\ )}$ (ruleset $R^i_{if}$) and from $T_{else (\ensuremath{^\#\!\!\!\!}\ )}$ (ruleset $R^i_{else}$) as follows: \begin{itemize} \item For each rule of the form \RULE{\Sigma_1}{\Sigma_2}{\Sigma_3}{\Sigma_4} in ruleset $R^i_{if}$, we put in ruleset $R^i$ the rule \RULE{Z_{(\ensuremath{^\#\!\!\!\!}\ )} \wedge \Sigma_1}{Z_{(\ensuremath{^\#\!\!\!\!}\ )} \wedge \Sigma_2}{\Sigma_3}{\Sigma_4}. \item For each rule of the form \RULE{\Sigma_1}{\Sigma_2}{\Sigma_3}{\Sigma_4} in ruleset $R^i_{else}$, we put in ruleset $R^i$ the rule \RULE{\neg Z_{(\ensuremath{^\#\!\!\!\!}\ )} \wedge \Sigma_1}{\neg Z_{(\ensuremath{^\#\!\!\!\!}\ )} \wedge \Sigma_2}{\Sigma_3}{\Sigma_4}. \end{itemize} The above is repeated until all branching elements have been eliminated from the code.
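The guarding step for one matched pair of leaves can be sketched as follows (an illustration in Python, not part of the compiler; the rule representation as 4-tuples of literal sets and the function name are ours):

```python
def compact(rules_if, rules_else, z="Z#"):
    """Compact one matched pair of leaf rulesets: guard every 'if'-rule
    by the flag z and every 'else'-rule by its negation, on both
    condition fields.  A rule is (cond1, cond2, eff1, eff2), each a
    frozenset of literal strings."""
    guarded = []
    for c1, c2, e1, e2 in rules_if:
        guarded.append((c1 | {z}, c2 | {z}, e1, e2))
    for c1, c2, e1, e2 in rules_else:
        guarded.append((c1 | {"!" + z}, c2 | {"!" + z}, e1, e2))
    return guarded
```

The union of the two guarded rulesets is a single ruleset whose rules from the two original blocks can never fire for the same agent, since the guards on $Z$ are mutually exclusive.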
\paragraph{Precompilation result.} At the end of the process, we obtain a tree $T$ with (loop) depth $l_{\max}$ and (loop) width $w_{\max}$. Each internal node represents a loop, and each leaf node a ruleset, which must be repeated for at least $c \ln n$ rounds. Without loss of generality, we assume that all internal nodes have the same number of children $w_{\max}$; assuming this corresponds to padding the tree $T$ into a complete ($w_{\max}$)-ary tree of depth $l_{\max}$, inserting artificial repeat loops and nil instructions (empty rulesets) in the artificially created leaves. The following section describes how to convert the obtained structure into specific rules triggered in a way synchronized by a hierarchy of phase clocks (see Subsection~\ref{sec:compiledrules} for the compilation mechanism itself). \section{Construction of the Phase-Clock Hierarchy}\label{sec:hierarchy} \subsection{Clocks with Arbitrary Constant Period}\label{sec:clocks} In what follows, lengths of time intervals are expressed in parallel rounds. A \emph{clock} is a protocol with states ($C_{0}, \ldots, C_{m-1}$), for some positive integer $m$ called its \emph{module}, such that in the course of execution each agent performs a sequence of state transitions, moving it from some state $C_i$, $i \in \{0,\ldots, m-1\}$, to state $C_{(i+1) \bmod m}$. The time between two successive moments when an agent enters state $C_0$ is called its \emph{clock cycle}.
We say that a clock is \emph{operating correctly at tick length} $r \geq \ln n$ during a time interval $I$ if we can find an ordered sequence of subintervals $I_i^j \subseteq I$, $i \in \{0,\ldots, m-1\}$, $j \in \{1,\ldots, j_{\max}\}$ for some integer $j_{\max}$, called \emph{ticks}, such that: \begin{itemize} \item $\max I_{i}^j < \min I_{i+1}^j$ for all indices $j$ and $i < m-1$, and $\max I_{m-1}^j < \min I_{0}^{j+1}$ for all indices $j < j_{\max}$; \item Throughout each tick $I_i^j$, all agents in the population are in the same state $C_i$; \item All ticks have length $|I_i^j| \in [a r, b r]$, where $a > 0$ can be fixed arbitrarily, and $b > a$ is a protocol-specific constant depending on the choice of $a$; \item Adjacent ticks are not too far apart: the set of intervals between ticks $I \setminus \bigcup_{i,j} I_i^j$ does not contain an interval of length more than $b r$. \end{itemize} (We remark that the tick length of a correctly operating clock asymptotically determines the cycle length of all of the agents.) All algorithms in this paper are self-contained, except that we reuse clock routines known from the literature. We are making use of clock protocols which, after a brief initialization phase, operate correctly over long intervals of time (i.e., over a sufficiently large polylogarithmic or polynomial number of rounds), w.h.p. A clock protocol $C'$ with a module $m' \geq 3$ (where in what follows we will use a clock with $m'=3$) and states ($C'_{0}, \ldots, C'_{m'-1}$), operating with ticks of length $|I_i^j| \in [a' r, b' r]$, can be used to provide a clock with some longer module $m > m'$ and states ($C_{0}, \ldots, C_{m-1}$).
We do this by composing clock protocol $C'$ with the following set of rules: \RULE{C'_0}{C'_0}{C'_0 \wedge T}{C'_0} \RULE{C'_1 \wedge T \wedge C_i}{C'_1}{C'_1 \wedge \neg T \wedge C_{(i+1) \bmod m} \wedge \neg C_i}{C'_1},\quad for all $i \in \{0, \ldots, m-1\}$ \RULE{C'_2 \wedge C_i}{C'_2 \wedge C_j}{C'_2 \wedge C_i}{C'_2 \wedge C_i \wedge \neg C_j},\quad for all $j < i$, $i,j \in \{0, \ldots, m-1\}$. We observe that if clock $C'$ operates correctly with constants $a', b'$ during a time interval $I$ of given length, $|I| = O(\mathrm{poly}(n))$, where $a'$ is chosen to be sufficiently large, then clock $C$ operates correctly during this time interval with constants $a = a' (m'-1)$ and $b = c b' m'$, w.h.p., for some choice of constant $c$ depending on the required probability of correctness. Indeed, note that the first two rules of the composition activate a trigger $T$, which is activated for each agent at most once per cycle of clock $C'$, and, w.h.p., is activated exactly once for every agent in each cycle of clock $C'$ (because the length of each tick of clock $C'$ is at least $a' \ln n$, for a sufficiently large choice of constant $a'$). An agent with a set trigger $T$ in a given clock cycle advances its state for clock $C$ within the first $O(\ln n)$ rounds of tick $C'_1$ of the current clock cycle, following the second rule. The third rule ensures that during tick $C'_2$, all agents w.h.p. agree on the same state of clock $C$, chosen as the maximum of all states of $C$ over all agents in the population. Note that once agreement on some state $C_i$ is reached during a given cycle of clock $C'$, then such agreement will be retained throughout all future ticks $C'_2$ of clock $C'$ during interval $I$ w.h.p., with the state of clock $C$ in the population advancing from some $C_i$ to $C_{(i+1) \bmod m}$ in each cycle of clock $C'$. Thus, clock $C$ is ``slower'' by a factor of $m'$ with respect to clock $C'$.
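The arm-advance pattern of the first two rules can be illustrated by an idealized, deterministic single-agent trace (a sketch in Python; in the protocol itself the advance and the agreement happen via interactions during ticks $C'_1$ and $C'_2$, and the function name is ours):

```python
def derive_slow_clock(base_states, m):
    """Idealized single-agent view of the module extension:
    the trigger T is armed in base state C'_0 and consumed in C'_1,
    advancing the derived modulo-m clock once per base-clock cycle."""
    c, t_flag, out = 0, False, []
    for s in base_states:
        if s == 0:                 # tick C'_0: arm the trigger
            t_flag = True
        elif s == 1 and t_flag:    # tick C'_1: consume it, advance C
            c = (c + 1) % m
            t_flag = False
        out.append(c)
    return out
```

Over ten base cycles of the modulo-3 clock, the derived modulo-5 clock advances exactly once per cycle, i.e.\ it runs slower by a factor of $m'=3$ ticks per advance, as claimed.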
In our construction, we will use as the base clock the simple clock $C'$ with $m'=3$ described in Section~\ref{sec:baseclock}, and use it to generate a clock $C$ with a significantly larger module $m$, where $m = O(w_{\max})$ will depend on the sequential code being executed. \subsection{Design of the Base Clock}\label{sec:baseclock} We first proceed to describe the design of the base modulo-3 phase clock protocol, which meets the requirements for the clock hierarchy laid out in Section~\ref{sec:clocks}. All other protocols are then composed with this clock. We base the clock on the design of the oscillator protocol $P_o$ described in~\cite{DBLP:journals/corr/DudekK17}. We recall briefly its basic properties. Protocol $P_o$ uses $7$ states: six states of the oscillator, called $A_i^+$ and $A_i^{++}$, for~$i\in\{1,2,3\}$, and an optional control (source) state, denoted by $X$. Each agent of the oscillator holds one of the three \emph{species} $A_i := A_{i}^{+} \vee A_{i}^{++}$, for $i\in\{1,2,3\}$. (The protocol $P_o$ is itself inspired by the so-called rock-paper-scissors oscillator dynamics, which is defined by the simple predator-prey rule ``\RULE{A_i}{A_{(i-1)\bmod 3}}{A_i}{A_i}'', for $i=1,2,3$; in protocol $P_o$, this rule works with slightly different probabilities for the states $A_i^+$ and $A_i^{++}$ within species $A_i$.) We denote $a_{\min}= \min_{i=1,2,3}|\mathcal{A}_i|$. The control state converts any encountered agent of some species $A_i$ to an agent of a uniformly random species. The theorem below is obtained by carefully retracing the arguments in~\cite{DBLP:journals/corr/DudekK17}[Sections 6.4 and 7.1]\footnote{We remark that the proof of claim~(i) of the Theorem essentially follows from \cite{DBLP:journals/corr/DudekK17}[Section 6.4; Lemmas 1--8], but the analysis therein was conducted for $|\mathcal X|=0$.
Assuming $|\mathcal X| \leq n^{1-\varepsilon}$ instead adds additional terms to the considered potentials without changing the proof structure}. \begin{theorem}[variant of analysis in \cite{DBLP:journals/corr/DudekK17}] \label{th:variantofclock} Fix $0 < \varepsilon < 1/2$. If $|\mathcal{X}| \in [1, n^{1-\varepsilon}]$ at all time steps starting from some round $t_0$, then regardless of the configuration at time $t_0$, the following properties hold: \begin{itemize} \item[(i)] the system reaches a configuration with $a_{\min} < n^{1- \varepsilon/2}$ at some time $t_1 > t_0$, where $t_1 - t_0= O(\log n)$ rounds in expectation. \item[(ii)] for any moment of time $t \in [t_1, t_1 +e^{\Omega(n)}]$ we have $a_{\min} < n^{1-\varepsilon/3}$ w.h.p., and for each species index $i=1,2,3$, there exists a time step in the interval of rounds $[t, t+O(\log n)]$ when at least $n - O(n^{1-3\varepsilon})$ agents belong to species $A_i$. Moreover, if $A_i$ is at the current time step the species in the population held by at least $n - O(n^{1-3\varepsilon})$ agents, then the next species with this property will be $A_{i+1}$, w.h.p. \end{itemize} The above properties hold under the assumption of an asynchronous fair scheduler or a random-matching fair synchronous scheduler.\qed \end{theorem} Condition (ii) in the above theorem characterizes the oscillatory behavior of the protocol once set $\mathcal{X}$ is small enough (but non-empty), after a short period of convergence given by condition (i). We remark that condition (i) holds because $\mathcal {X}$ is not too large, whereas condition (ii) requires $\mathcal X$ to be both nonempty and not too large.
Under these assumptions, each species then regularly switches from being the smallest (held by at most $O(n^{c'})$ agents, where $c' := 1 - \varepsilon/3$) to being the largest (held by all but $o(n)$ agents), with the largest species alternating in the cyclic order $\ldots \to A_1 \to A_2 \to A_3 \to A_1 \ldots $, with each cycle and each phase in it taking $\Theta(\log n)$ steps. This oscillatory effect provides the most important components of a clock, but does not yet implement the separation between ticks required in Section~\ref{sec:clocks}. We are now ready to build a phase clock. Consider protocol $P_o$ composed with the following ruleset on the states $C'_s$, where $s\in \{0,\ldots, 3k-1\}$, and $k = \Omega(1/(1-c'))$ is a sufficiently large constant: \RULE{C'_{s}}{A_{i+1}}{C'_{(s+1)\bmod 3k} \wedge \neg C'_{s}}{A_{i+1}} \RULE{C'_{s}}{\neg A_{i+1}}{C'_{k\cdot i}\wedge \neg C'_{s}}{\neg A_{i+1}} for $i = \lfloor s/k \rfloor \in\{0,1,2\}$. We call such a protocol a \emph{clock controlled by an external signal} $\mathcal{X}$. We add auxiliary states $C_i := \bigvee_{r \in\{1,\ldots, k\}}C'_{(k(i-1)+r) \bmod 3k}$, for $i=1, 2, 3$. The boolean formulas $C_1, C_2, C_3$ (which can also be made explicitly into state variables) now provide the required interface to a $3$-state clock. We denote this composition of protocols as $C_o$. \begin{theorem} Assume protocol $C_o$ is initialized so that $a_{\min} < n/10$ and that there are times $t_0 < t_1$ such that for any $t \in [t_0,t_1]$ we have $0 < |\mathcal{X}| < n^c$ for some constant $c<1$. Then $C_o$ is a clock operating correctly during the time interval $[t_0,t_1]$, w.h.p. \end{theorem} \begin{proof} The above rules move the state of each agent around a cycle of length $3k$.
An agent in state $C'_{(ki) \bmod 3k}$ (which may be interpreted as ``believing'' $A_{i+1}$ to be a minority species at the current time) waits in states of the form $C'_{(ki+r) \bmod 3k}$, for $r < k$, until in $k$ consecutive meetings it has met agents from species $A_{i+1}$ only. This implies, w.h.p., that species $A_{i+1}$ is represented by a sufficiently large number of agents, $\omega(n^{c'})$, and the state moves to $C'_{(k(i+1)) \bmod 3k}$, where the appearance of $k$ consecutive meetings with agents from species $A_{i+2}$ is awaited, etc. The rate of traversal of this cycle corresponds to the rate of oscillatory behavior of protocol $P_o$. Moreover, after one cycle of the oscillator $P_o$, all agents in the population become synchronized as to the values of their states $C'_s$ (up to a possible shift of $\varepsilon \ln n$ rounds, w.h.p., where $\varepsilon$ can be made arbitrarily small). \end{proof} It remains to discuss how to ensure that protocol $P_o$ displays its oscillatory behavior through appropriate initialization. \paragraph{Controlling $|\mathcal X|$ for always-correct protocols.} We ensure that the number of agents with set control state $X$ satisfies the required condition $0 < |\mathcal{X}| < n^{1-\varepsilon}$, for any $\varepsilon > 0$, quickly and perpetually after the initialization of the protocol. \begin{proposition} \label{simpleelimination} There is a protocol using $O(1)$ states with one marked state $X$ such that $|\mathcal{X}|>0$ is always guaranteed, and in $O(n^{\varepsilon})$ time $|\mathcal{X}| < n^{1-\varepsilon}$, for any $\varepsilon > 0$, w.h.p. \end{proposition} \begin{proof} This can be achieved within $O(n^\varepsilon)$ rounds, w.h.p., by initializing $X \gets \ensuremath{\textit{on}}$ for all agents, and applying the following rule (in composition with all other rules) \RULE{X}{X}{\neg X}{X} which eventually unsets $X$ for all but one agent.
A folklore computation following from a concentration analysis around the governing equation $\mathbb{E}\frac{d |\mathcal{X}|}{dt} = - (|\mathcal{X}|/n)^2$ (with time measured in single interactions), gives the required bound on the number of rounds after which the bound $|\mathcal{X}| < n^{1-\varepsilon}$ is satisfied. Indeed, consider $T_j$, the number of interactions it takes for $|\mathcal{X}|$ to drop from $n/2^{j}$ to $n/2^{j+1}$, call it phase $j$. Since each interaction decreases $|\mathcal{X}|$ with probability between $(1/2^{j+1})^2$ and $(1/2^{j})^2$, we have that with very high probability $T_j = \Theta(n \cdot 2^j)$ for $j$ such that $n/2^j \ge n^{1-\varepsilon}$. By observing that $\sum_{j=0}^{\varepsilon\log_2 n} T_j = \Theta(T_{\varepsilon\log_2 n}) = \Theta(n \cdot n^{\varepsilon})$, we have that the desired behavior is achieved after $O(n^{\varepsilon})$ rounds. \end{proof} We remark that achieving the bound $|\mathcal{X}| < n^{1-\varepsilon}$ is possible more quickly by using a super-constant number of auxiliary states. This task is known as \emph{junta-election}: \begin{proposition}[cf. \cite{DBLP:conf/soda/GasieniecS18} Lem.\ 4.2,\ 4.5, Thm.\ 4.1] There is a protocol using $O(\log \log n)$ states with one marked state $X$ such that $|\mathcal{X}| > 0$ is always guaranteed, and in $O(\log n)$ time, $|\mathcal{X}| < n^{1-\varepsilon}$, for any $\varepsilon > 0$, w.h.p. \end{proposition} \paragraph{Controlling $|\mathcal X|$ for w.h.p. protocols.} When w.h.p.\ results are required, it is possible to adapt protocol $P_o$ to perform $\Theta(\mathrm{poly} \log n)$ good oscillations before stopping, w.h.p., while using only $O(1)$ states. In fact, to achieve the correct number of oscillations, it suffices to construct a ``signal'' following the asymptotic relation $|\mathcal{X}| \sim \exp(-t^{1/k})$ over time $t$, for some constant $k$.
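The target signal shape can be checked against the mean-field equations that the construction below is meant to realize; the following is a numerical sketch only (exponential-Euler discretization, default parameters, and the function name are ours, with $k=2$):

```python
import math

def mean_field(n=1e6, k=2, t_max=2e5, dt=1.0):
    """Integrate the mean-field ODEs  dZ/dt = -Z*(Z/n)**(k+1)  and
    dX/dt = -X*(Z/n)**k  with multiplicative (exponential-Euler) steps,
    starting from Z = X = n.  Analytically, Z ~ n*((k+1)t)**(-1/(k+1))
    and X ~ n*exp(-Theta(t**(1/(k+1))))."""
    z = x = float(n)
    for _ in range(int(t_max / dt)):
        f = z / n
        z *= math.exp(-f ** (k + 1) * dt)   # slow polynomial decay of Z
        x *= math.exp(-f ** k * dt)         # stretched-exponential decay of X
    return z, x
```

For $k=2$ and $t = 2\cdot 10^5$ the integration leaves $|\mathcal Z|$ around $n(3t)^{-1/3} \approx 10^4$ while $|\mathcal X|$ has dropped far below one agent, matching the intended sub-exponential signal shape.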
\begin{proposition} \label{k-stanowa-dynamika} There is a protocol using $O(1)$ states with one marked state $X$ such that in $O(\mathrm{poly} \log n)$ time $|\mathcal{X}| < n^{1-\varepsilon}$, for any choice of $\varepsilon>0$, w.h.p. \end{proposition} \begin{proof} We observe that the same can be achieved by using a higher-order variant of the previously analyzed process: for some constant $k\ge 1$, we initialize $Z \gets \ensuremath{\textit{on}}$ and additional state flags $Z_1,\ldots,Z_{k} \gets \ensuremath{\textit{off}}$, and apply the following rules. \RULE{.}{\neg Z}{\neg Z_1 \wedge \ldots \wedge \neg Z_k}{.} \RULE{Z \wedge \neg Z_1 \wedge \ldots \wedge \neg Z_k}{Z}{Z_1}{.} \RULE{Z_i}{Z}{\neg Z_i \wedge Z_{i+1}}{.} for any $i<k$ \RULE{Z_k}{Z}{\neg Z \wedge \neg Z_k}{.} This protocol implements dynamics of the form $\mathbb{E} \frac{d |\mathcal{Z}|}{dt} = - |\mathcal{Z}| \cdot (|\mathcal{Z}|/n)^{k+1}$, which solves to $|\mathcal{Z}| = \Theta( n \cdot t^{-1/(k+1)})$. We now use $Z$ to create a signal $X$, initially $X \gets \ensuremath{\textit{on}}$, using additional states $X_1, \ldots, X_{k-1} \gets \ensuremath{\textit{off}}$. \RULE{.}{\neg Z}{\neg X_1 \wedge \ldots \wedge \neg X_{k-1}}{.} \RULE{X \wedge \neg X_1 \wedge \ldots \wedge \neg X_{k-1}}{Z}{X_1}{.} \RULE{X_i}{Z}{\neg X_i \wedge X_{i+1}}{.} for any $i<k-1$ \RULE{X_{k-1}}{Z}{\neg X \wedge \neg X_{k-1}}{.} This implements a dynamics of the form $\mathbb{E} \frac{d |\mathcal{X}|}{dt} = - |\mathcal{X}| \cdot (|\mathcal{Z}|/n)^{k} = - \Theta(|\mathcal{X}| \cdot t^{-k/(k+1)}) $, which solves to $|\mathcal{X}| = n \cdot e^{-\Theta(t^{1/(k+1)})}$. The fact that the required dynamics is in fact realized with high probability by the given protocol follows from a mean-field analysis on the continuous process and standard concentration bounds (notably, taking into account~\cite{DBLP:journals/corr/DudekK17}[Lemma~1] and using Azuma's inequality).
\end{proof} \subsection{Hierarchy of Clocks with Logarithmically Slowed Rate} \label{hierarchia} In what follows, we will be using a base clock $C^{(j)}$, where $j \geq 1$, with rate $r^{(j)}$ and a sufficiently long module $m$, where we assume $4 \mid m$ for technical reasons, to simulate a slowed-down version of other protocols. (For clock $C^{(1)}$, we put $r^{(1)} = \alpha \ln n$ for some large constant $\alpha$ depending on the sequential code being executed.) The simulation of a protocol $P$ proceeds as follows. All agents have state variables of clock $C^{(j)}$ and two copies of state variables of protocol $P$, which we call the \emph{current} copy and a \emph{new} copy. Then, in composition with the rules of clock $C^{(j)}$, the following rules are executed for all $i \in \{ 0, \ldots, m/4-1\}$: \begin{enumerate} \item When two agents meet in state $C^{(j)}_{4i}$, both having a set trigger $S$, then they simulate the interaction of protocol $P$ on the current copy of the state variables of protocol $P$, storing the new values of their state variables to the respective new copies, and unsetting trigger $S$ in the same interaction. \item When two agents meet in state $C^{(j)}_{4i+2}$, they assign the new copy of their state variables for protocol $P$ to the current copy, and set trigger $S$ in the same interaction. \end{enumerate} We first observe that when clock $C^{(j)}$ is operating correctly, rules of the form 1 and 2 above are separated in time by the odd clock ticks of clock $C^{(j)}$. Thus, w.h.p. the operation of the simulation of protocol $P$ can be more abstractly viewed as a computation of a random linear-size matching $M$\footnote{Matching $M$ is created incrementally, with the first agents to interact after entering state $C^{(j)}_{4i}$ being more likely to join it.
However, for the applied clocks which are based on missing species detection, the probability that an agent enters state $C^{(j)}_{4i}$ at a given time is almost the same for all agents, differing for any two agents by at most $O(n^{-k})$, where constant $k$ can be made arbitrarily large by a choice of constant $a$ of the clock. It follows by an elementary analysis that the probability that $M$ is any fixed matching of given size is within a factor of $(1\pm O(n^{-k+2}))$ from that in the uniform distribution on the set of all matchings of the same size. This polynomially small non-uniformity becomes negligible in protocol analysis.} on the population in between the $(4i-1)$-st and $(4i+1)$-st clock ticks of the current clock cycle, and then an execution of the corresponding rules of protocol $P$ along the chosen matching $M$, \emph{deferred} to some time between the $(4i+1)$-st and $(4i+3)$-rd clock ticks of the current clock cycle. We have thus effectively (i.e., conditioned on the non-occurrence of events occurring with probability $O(n^{-c})$ where $c$ can be made an arbitrarily large constant) implemented a fair execution of protocol $P$ under a random-matching scheduler, but slowed by a factor of $\Theta(r^{(j)})$ with respect to the natural execution rate of such a scheduler. Whereas we are unaware of any generic method of showing the equivalence between a random-sequential and random-matching scheduler (and consider designing such a method to be a question of significant interest), the analysis of the vast majority of protocols from the literature carries over between the two scheduler models without significant changes. This is the case, in particular, for clock protocols, such as our base oscillator protocol from~\cite{DBLP:journals/corr/DudekK17}, and all the asymptotic results on the performance of such clock protocols remain intact under the random-matching scheduler model. 
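The compute-then-commit structure of rules 1 and 2 under a random-matching scheduler can be sketched as follows (an illustrative Python simulation, not the protocol itself; the function name and the epidemic example rule are ours):

```python
import random

def matched_round(states, rule):
    """One simulated slowed round: agents are paired by a random
    matching (phase C_{4i}), rule outputs are written to fresh copies,
    and the copies are committed afterwards (phase C_{4i+2}).
    With an odd population, one agent stays unmatched this round."""
    idx = list(range(len(states)))
    random.shuffle(idx)
    new = list(states)                       # the 'new' copies
    for u, v in zip(idx[::2], idx[1::2]):
        new[u], new[v] = rule(states[u], states[v])
    return new                               # commit the new copies
```

Running a symmetric one-way epidemic rule, `lambda a, b: (a or b, a or b)`, from a single infected agent spreads the infection to all $n$ agents in $O(\log n)$ matched rounds w.h.p.; under a matching the infected set can at most double per round, so at least $\log_2 n$ rounds are always needed.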
By considering protocol $P$ to be a copy of clock $C^{(1)}$, we obtain a new clock $C^{(j+1)}$ whose rate is $r^{(j+1)} = \Theta(r^{(1)} r^{(j)})$, with $r^{(j+1)} \geq r^{(1)} r^{(j)}$. Recalling that $r^{(1)} = \alpha \ln n$, we obtain in particular, for all constant $j$: $r^{(j+1)} \geq \alpha r^{(j)} \ln n$ and $r^{(j)} = \Theta((\alpha \ln n)^j)$. This provides us with a hierarchy of clocks, each ticking at polylogarithmic rate, with clock $C^{(j)}$ performing at least $\alpha \ln n - O(1)$ cycles during one cycle of clock $C^{(j+1)}$. We choose the hierarchy of clocks to have constant height, more precisely, $j \leq l_{\max}$, where we recall that $l_{\max}$ is the maximum loop depth of the executed sequential code. By applying union bounds, we can thus also ensure that all clocks in the hierarchy are operating correctly, w.h.p. The details of the construction provide one more property of the clock hierarchy which simplifies subsequent analysis: while all clocks are operating correctly and at least one agent in the population is in state $C^{(j)}_0$ (or, more generally, in a state $C^{(j)}_i$ with index $i$ divisible by $4$), no update to the current copy of state of any of the clocks $C^{(j')}$, for $j' > j$, is being performed by any agent. We can thus compose the executed protocol with an auxiliary protocol, which at the beginning of a clock cycle for clock $C^{(j)}$, stores for each agent in variable $C^{*(j+1)}_i$ a local copy of the state variable $C^{(j+1)}_i$: \RULE{C^{(j)}_0 \wedge C^{(j+1)}_i}{C^{(j)}_0}{C^{(j)}_0 \wedge C^{*(j+1)}_i}{C^{(j)}_0} \RULE{C^{(j)}_0 \wedge \neg C^{(j+1)}_i}{C^{(j)}_0}{C^{(j)}_0 \wedge \neg C^{*(j+1)}_i}{C^{(j)}_0} \noindent for all $j>0$. Note that not all agents will have the same values of $C^{*(j+1)}_i$, since the simulated clock $C^{(j+1)}$ might have been ``frozen'' in the process of updating its state while it was being copied.
In this case, at most two states $C^{*(j+1)}_i$, $C^{*(j+1)}_{(i+1) \bmod m}$ may be set for different agents of the population. We fix this using a standard consensus rule applied strictly later (i.e., after a full tick of clock $C^{(j)}$), defaulting to the larger of the two values: \RULE{C^{(j)}_2 \wedge C^{*(j+1)}_i}{C^{(j)}_2 \wedge C^{*(j+1)}_k}{C^{(j)}_2 \wedge C^{*(j+1)}_i}{C^{(j)}_2 \wedge C^{*(j+1)}_i \wedge \neg C^{*(j+1)}_k}, for $i > k$. Thus, from state $C^{(j)}_4$ until the end of the current cycle for clock $C^{(j)}$, each agent now holds stored (and does not update) the same value $C^{*(j+1)}_i$. We remark that when $C^{*(j+1)}_i$ is set, then always either $C^{(j+1)}_i$ is set or $C^{(j+1)}_{(i+1) \bmod m}$ is set for each agent (i.e., the locally stored state of the higher-level clock is at most one state behind the real state of that clock, for any agent), since the cycle length of clock $C^{(j)}$ is shorter than that of clock $C^{(j+1)}$ by a factor of (much) more than $2$. As a direct corollary, we make the following crucial observation. \begin{proposition} Let $\sigma = (\sigma_{l_{\max}}, \ldots, \sigma_1)$ be an arbitrary vector with $\sigma_j \in \{4,\ldots, m-2\}$, for all $j \in \{1, \ldots, l_{\max}\}$. Suppose for some agent $C^{(1)}_{\sigma_1} \wedge \bigwedge_{j>1} C^{*(j)}_{\sigma_j}$ is set. Then, $\bigwedge_{j>1} C^{*(j)}_{\sigma_j}$ is also set for all agents in the population. \qed \end{proposition} We now relate the period $m$ of clock $C^{(1)}$ to the compiled code, by setting $m := 4 w_{\max} + 2$. In relation to the above Proposition, we define the \emph{time path} $\tau = (\tau_{l_{\max}}, \ldots, \tau_1)$ of an agent as the unique vector such that $\tau_j \in \{1, \ldots, w_{\max}\}$ for all $j$, and $C^{(1)}_{4 \tau_1} \wedge \bigwedge_{j>1} C^{*(j)}_{4\tau_j} =: \Pi_{\tau}$ is set for this agent; if no such vector exists, we put $\tau = \perp$. The sequence of time paths observed in the program will completely determine its execution.
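Ignoring the randomized tick counts, the sequence of time paths behaves like a mixed-radix counter over $\{1, \ldots, w_{\max}\}^{l_{\max}}$: level $1$ advances fastest, and level $j$ advances once per full cycle of level $j-1$. A simplified Python sketch of this idealized sequence (the function name is ours):

```python
def time_paths(w_max, depth, count):
    """Generate the first `count` idealized time paths
    (tau_depth, ..., tau_1): tau_1 advances fastest, and tau_j
    advances by one whenever tau_{j-1} wraps from w_max back to 1."""
    tau = [1] * depth          # tau[0] = tau_depth, tau[-1] = tau_1
    out = []
    for _ in range(count):
        out.append(tuple(tau))
        for j in range(depth - 1, -1, -1):   # carry from the fast end
            tau[j] += 1
            if tau[j] <= w_max:
                break
            tau[j] = 1
    return out
```

For instance, with $w_{\max}=2$ and $l_{\max}=3$, the sequence starts $(1,1,1), (1,1,2), (1,2,1), (1,2,2), (2,1,1), \ldots$, mirroring the nested-loop structure of Fig.~\ref{fig:loopstructure}.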
We first formalize the recursively defined properties of the clock hierarchy in iterative form, expressed by the following claim (which follows from a standard de-recursivization of the analysis). \begin{proposition}\label{pro:exec} The time paths of all agents satisfy the following properties w.h.p. during any time interval $I$ of at most polynomial length during which all clocks are operating correctly: \begin{itemize} \item At any moment of time, all time paths of agents which are not equal to $\perp$ are identical and equal to the same value $\tau$. \item We call a time step $t$ in which all agents hold the same time path $\tau_t \neq \perp$ (i.e., no agent holds a time path of $\perp$ during step $t$) a \emph{good step}, and denote by $I_G \subseteq I$ the subset of good steps in interval $I$. Then, the sequence of time paths observed over good steps, $(\tau_t)_{t \in I_G}$, is a valid output of \emph{some} execution of the non-deterministic program with the pseudocode given in Fig.~\ref{fig:loopstructure}. \item In the time interval between two good steps, the time path held by any agent is first the time path from the preceding good step, then $\perp$, and then the time path from the succeeding good step.
\qed \end{itemize} \end{proposition} \begin{figure} \begin{protocol} GoodStepCount := 0 \textbf{for} $\tau_{l_{\max}} := 1$ \textbf{to} $+\infty$ \textbf{do}: \textbf{for} $\tau_{l_{\max}-1} := 1$ \textbf{to} $w_{\max} \cdot$ RandInt[$\gamma \ln n, \delta \ln n$] \textbf{do}: \ldots \textbf{for} $\tau_{2} := 1$ \textbf{to} $w_{\max} \cdot$ RandInt[$\gamma \ln n, \delta \ln n$] \textbf{do}: \textbf{for} $\tau_{1} := 1$ \textbf{to} $w_{\max} \cdot$ RandInt[$\gamma \ln n, \delta \ln n$] \textbf{do}: \textbf{for} RulesetLoop$ := 1$ \textbf{to} RandInt[$\gamma n \ln n, \delta n \ln n$] \textbf{do}: \textbf{print} $(\tau_{l_{\max}} \bmod m, \tau_{l_{\max}-1} \bmod m, \ldots, \tau_{2} \bmod m, \tau_{1} \bmod m)$ GoodStepCount := GoodStepCount + 1 \textbf{if} GoodStepCount $\geq |I| / (\delta \ln n)^{l_{\max}}$: \textbf{if} RandInt[0,1] = 1: \textbf{stop}. \end{protocol} \setcounter{figure}{0} \caption{The sequence of time paths observed in the population at time steps of interval $I$ when all time paths of all agents are identical is w.h.p. a valid output of \emph{some} execution of the non-deterministic program with the pseudocode given above. $\gamma$ and $\delta$, with $0 < \gamma < \delta$, are constants, such that $\gamma$ can be chosen to be arbitrarily large and $\delta$ depends on the choice of $\gamma$. The routine RandInt returns an integer from the given interval, sampling each integer with (arbitrary) positive probability.}\label{fig:loopstructure} \end{figure} We remark that in Fig.~\ref{fig:loopstructure}, it is guaranteed that termination will only occur after at least $|I| / (\delta \ln n)^{l_{\max}}$ good steps. This is because the interval between adjacent good steps does not exceed the time the outermost clock $C^{(l_{\max})}$ takes to traverse $6$ states, which is less than the length of its clock cycle.
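The nested-loop structure of Fig.~\ref{fig:loopstructure} can be mimicked with a small Python generator (an abstraction for illustration: the randomized RandInt loop bounds are replaced by fixed bounds, and the infinite outer loop is truncated):

```python
from itertools import product

def time_paths(l_max: int, w_max: int, outer_iters: int):
    # Yield time paths (tau_{l_max}, ..., tau_1) in nested-loop order;
    # the innermost index tau_1 (the last tuple entry) varies fastest.
    for outer in range(1, outer_iters + 1):
        for rest in product(range(1, w_max + 1), repeat=l_max - 1):
            yield (outer,) + rest

paths = list(time_paths(l_max=3, w_max=2, outer_iters=1))
# paths enumerates (1,1,1), (1,1,2), (1,2,1), (1,2,2), in that order.
```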
\subsection{Deploying the Clock Hierarchy: Rules of the Compiled Protocol}\label{sec:compiledrules} We design the rules of the compiled program, to be composed with the protocol running the clock hierarchy, so that agents select the current ruleset to be executed based solely on their time path. Note that there exists a one-to-one correspondence between time paths (different from $\perp$, which can be seen as an idle step) and leaves of tree $T$ identified by paths taken from the root of tree $T$ to the leaf; we will assume both time paths and leaves are indexed by vectors $\tau = (\tau_{l_{\max}}, \ldots, \tau_{1})$, with $\tau_j \in \{1,\ldots, w_{\max}\}$. By convention, level $l_{\max}$ represents the outermost loop level of the code, and level $1$ the innermost. We proceed to identify leaves inductively: having fixed $(\tau_1, \ldots, \tau_{j-1})$, a value of $\tau_j$ corresponds to picking the $\tau_j$-th child (in top-down order of instructions) in the $j$-th-outermost loop of the code reachable along the path $(\tau_1, \ldots, \tau_{j-1})$. We denote by $R_{\tau}$ the ruleset associated with leaf $\tau$ (formally, leaf $\tau$ will contain the instruction \ASYNC[c \ln n] $R_\tau$). \paragraph{Compilation procedure.} The entire process of compilation of the precompiled code tree is given as follows: for each $\tau = (\tau_{l_{\max}}, \ldots, \tau_{1})$, with $\tau_j \in \{1,\ldots, w_{\max}\}$, and for each rule \RULE{\Sigma_1}{\Sigma_2}{\Sigma_3}{\Sigma_4} $\in R_{\tau}$, we add to the final protocol the rule: \RULE{\Pi_\tau \wedge \Sigma_1}{\Pi_\tau \wedge \Sigma_2}{\Sigma_3}{\Sigma_4} where we recall: $\Pi_\tau = C^{(1)}_{4 \tau_1} \wedge \bigwedge_{j>1} C^{*(j)}_{4\tau_j}$. \paragraph{Note on execution.} The execution of the rules over time for any agent is governed by Proposition~\ref{pro:exec}.
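The compilation step itself is a purely mechanical guarding of rules; a minimal sketch (states represented as plain strings, a simplification of the actual rule format):

```python
def compile_ruleset(tau, ruleset):
    # Guard every rule (S1, S2, S3, S4) of R_tau with the time-path
    # filter Pi_tau, so that the initiator/responder preconditions only
    # match while the population holds time path tau.
    guard = "Pi_" + "_".join(map(str, tau))
    return [(f"{guard}&{s1}", f"{guard}&{s2}", s3, s4)
            for (s1, s2, s3, s4) in ruleset]

compiled = compile_ruleset((2, 1), [("A", "B", "notA", "notB")])
```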
All interactions executing rules associated with any time path are completed before the execution of rules associated with the next time path in the ordering from Fig.~\ref{fig:loopstructure} starts. Moreover, each time path appears in all agents for at least $\delta n \ln n$ successive good steps, hence each ruleset is guaranteed to be repeated for a sufficiently long time to satisfy the constraints of the loops under a uniform scheduler on the population (choosing $\delta \geq c$). In other words, during good steps (which last sufficiently long), the applied filter $\Pi_\tau$ for the current time path $\tau$ corresponds precisely to the execution of the original ruleset $R_\tau$. (Note that directly before and after the good steps for time path $\tau$, ruleset $R_\tau$ may be executed by some subset of the agents in the population; this is tolerated behavior as per the specification of the programming language.) Finally, the order in which time paths appear (cf.~Fig.~\ref{fig:loopstructure}) is precisely the order of leaves in a correct sequential execution of rules in the tree. The number of rounds taken to perform a complete iteration of the outermost infinite ``for'' loop (i.e., to iterate through all possible time paths) is at most polylogarithmic in $n$, given as $O((\delta\log n)^{l_{\max} + 1})$. This value asymptotically determines the required time the clocks need to operate correctly before stopping to achieve the intended operation of the program. \section{Programming Always-Correct Protocols} \subsection{Exact Leader Election Protocol} \label{leexact} This Section is devoted to the analysis of protocol \fun{LeaderElectionExact}, which is a version of protocol \fun{LeaderElection} modified to achieve correct computation with certainty.
\begin{figure} \begin{protocol} \PROTOCOL{LeaderElectionExact} \VAR{\OUTPUT{\INIT{L}{\ensuremath{\textit{on}}}}, \INIT{R}{\ensuremath{\textit{on}}}, \INIT{F}{\ensuremath{\textit{on}}}} \THREAD{Main}{L, \READONLY{R, F}} \LOCAL{\INIT{D}{\ensuremath{\textit{off}}}} \REPEAT \IFNONEMPTY{L} \GETS{D}{L \wedge F} \IFNONEMPTY{D} \GETS{L}{L \wedge D} \key{else}: \GETS{L}{R} \THREAD{FilteredCoin}{F} \LOCAL{\INIT{I}{\ensuremath{\textit{on}}}, \INIT{S}{\ensuremath{\textit{on}}}} \ASYNC \RULE{I}{I}{\neg I \wedge S}{\neg I \wedge \neg S} \RULE{I}{\neg I}{\neg I}{\neg I} \RULE{S}{\neg S}{S \wedge F}{S \wedge F} \RULE{\neg S}{S}{\neg S \wedge F}{\neg S \wedge F} \RULE{F}{.}{\neg F}{.} \THREAD{ReduceSets}{R,L} \ASYNC \RULE{R}{R \wedge \neg L}{R}{\neg R \wedge \neg L} \RULE{R \wedge L}{R \wedge L}{R \wedge L}{\neg R \wedge \neg L} \end{protocol} \end{figure} \begin{theorem} \fun{LeaderElectionExact} eventually elects a leader. For a protocol compilation following Theorem~\ref{th:guarantees}(ii)(b), a correct result with certainty is reached within $O(\mathrm{poly}(n))$ steps, w.h.p., starting from any reachable state of the protocol. \end{theorem} \begin{proof} Eventually (w.h.p. after at most polynomial time), $\set F = \emptyset$ by thread \fun{FilteredCoin}, and set $\set F$ never changes again. Eventually (w.h.p. after at most polynomial time), $|\set R| = 1$ by thread \fun{ReduceSets}, and set $\set R$ never changes again (e.g., $\set R = \{a\}$, for some agent $a$ in the population). At the end of the next good iteration after the above, in thread \fun{Main}, we have $\set D = \emptyset$ and set $\set D$ never changes again. At the end of the next good iteration after the above, we have $\set L = \set R = \{a\}$. Any successive update to $L$ is of the form \GETS{L}{R}. It follows that $\set L = \{a\}$ subsequently always holds. 
\end{proof} \begin{theorem} \fun{LeaderElectionExact} elects a leader in $O(\log^2 n)$ parallel rounds w.h.p., counting from the end of the initialization phase from Theorem~\ref{th:guarantees}(ii)(b). \end{theorem} \begin{proof} We remark that, following the above analysis, the correct result with certainty is reached in expected polynomial time. Thus, to asymptotically bound the expected time until the correct result is reached, it suffices to show that the Main thread reaches a correct result w.h.p. after at most a poly-logarithmic number of good iterations of its outer loop. The bound on the expected time until a correct result is reached will then follow from the promise of the framework. First, we observe that $|\mathcal{R}| \ge 1$ always holds, by the rules of thread \fun{ReduceSets}. We now analyze thread \fun{FilteredCoin}. Set $\mathcal{I}$ starts at size $n$ and eventually becomes the empty set (with high probability in $O(\log n)$ rounds). Consider the first $n/2$ nodes eliminated from $\mathcal{I}$. While $|\mathcal{I}| \ge n/2$, the rule from line 17 is at least as likely to trigger as the rule from line 18. Thus, with very high probability there are at least $n/8$ triggers of line 17, adding $n/8$ elements to $\mathcal{S}$. On the other hand, line 17 was triggered at most $n/2$ times. By lines 19 and 20, $\mathcal{S}$ increases by one element and decreases by one element with the same probability, from which we have that for the next $\Omega(n)$ rounds, $\frac{n}{16} \le |\mathcal{S}| \le \frac{15}{16}n$. Denote by $f_i$ the size of set $F$ in step $i$. We have $\mathbb{E}[ f_{i+1} - f_{i} | f_{i}] \le \frac25 (1-\frac{f_i}{n}) \cdot 2 - \frac15 \frac{f_i}{n} = \frac{4}{5} - \frac{f_i}{n}$, where the bound follows from upper-bounding the probability of rules 19 and 20 successfully adding a node to $F$.
Similarly, $\mathbb{E}[ f_{i+1} - f_{i} | f_{i} ] \ge \frac{2}{5} \cdot \frac{15}{16} \cdot \frac{1}{16} \cdot 2 \cdot (1-\frac{f_i}{n}) - \frac15\frac{f_i}{n} = \frac{3}{64} - \frac{79}{320}\frac{f_i}{n}$. Thus the fixed point is between $\frac{15}{79}n$ and $\frac{4}{5}n$. Thus, by Azuma's inequality, with very high probability $\frac{15}{158}n \le |F| \le \frac{9}{10}n$, starting from round $O(\log n)$, for at least $\Omega(n)$ rounds. Moreover, for any agent $x$, by the above analysis, in each interaction there is a $\Theta(1/n)$ probability of $x \in \mathcal{F}$ or $x \not\in \mathcal{F}$ being overwritten by an application of one of the rules from lines 19--21, which translates to high probability over any $\Omega(\log n)$ rounds. Thus, if we consider $\mathcal{F}_0, \mathcal{F}_1, \mathcal{F}_2, \ldots$ to be the snapshots of set $\mathcal{F}$ in consecutive rounds, then for $i,j = \Omega(\log n)$ such that $i - j \ge t = c \log n$, the sets $\mathcal{F}_i$ and $\mathcal{F}_j$ are independent, conditioned on an event which holds with high probability $1 - 1/n^{\Omega(c)}$. We consider now the first good iteration. If $\mathcal{L}$ is empty, it will be set to $\mathcal{R}$ in the next iteration, and $\mathcal{R}$ is nonempty. Thus w.l.o.g., assume $\mathcal{L}$ is non-empty. Denote by $\ell_i$ the size of $\mathcal{L}$ in the $i$-th iteration after this one. Denote by $\frac{1}{11} \le p_i \le \frac{10}{11}$ the probability of success of the synthetic coin $\mathcal{F}$ in the $i$-th iteration. We have $\mathbb{E}[\ell_{i+1} | \ell_i] \le \ell_i \cdot p_i + (1-p_i)^{\ell_i} \cdot \ell_i$, so for $\ell_i \ge 2$ we have $\mathbb{E}[\ell_{i+1} | \ell_i] \le \frac{111}{121} \ell_i$. It follows that for some $t = c \log n$, for a large enough constant $c$, $\ell_t = 1$ with high probability. We have that $O(c \log n)$ uncorrelated sets $\mathcal{F}$ are enough to narrow down $\mathcal{L}$ to size 1.
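For completeness, the contraction constant $\frac{111}{121}$ can be verified directly: in the worst case $\ell_i = 2$, the function $p \mapsto p + (1-p)^2$ is convex, so over $p \in [\frac{1}{11}, \frac{10}{11}]$ it is maximized at the endpoints, where

```latex
\[
  \frac{1}{11} + \Bigl(\frac{10}{11}\Bigr)^2
  \;=\; \frac{11}{121} + \frac{100}{121}
  \;=\; \frac{111}{121}
  \;=\; \frac{10}{11} + \Bigl(\frac{1}{11}\Bigr)^2 .
\]
```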
Thus, in $O(\log^2 n)$ rounds this happens with high probability, as every $\mathcal{F}_{i \cdot c \log n}$, for $i \ge t$ with $t$ a large enough constant, is uncorrelated with high probability, and all the additional $\mathcal{F}_{j}$ for $i c \log n < j < (i+1) c \log n$ can only speed up the process. To complete the proof, we observe that line 26 in thread \fun{ReduceSets} only speeds up the process of narrowing down $\mathcal{L}$, while never reducing $\mathcal{L}$ to the empty set. Additionally, if at some point $|\mathcal{L}| = 1$, then in $O(n)$ rounds in expectation, $|\mathcal{R}| = 1$ by line 25, and after that $\mathcal{R}$ remains constant. \end{proof} \subsection{Exact Majority Protocol} \label{majexact} This Section is devoted to the analysis of protocol \fun{MajorityExact}, provided as pseudocode, a version of protocol \fun{Majority} adapted to achieve correctness with certainty. \begin{figure} \begin{protocol} \PROTOCOL{MajorityExact} \VAR{\OUTPUT{Y_A}, \INPUT{A, B}} \THREAD{Main}{Y_A, {A, B}} \LOCAL{\INIT{A^*}{\ensuremath{\textit{off}}}, \INIT{B^*}{\ensuremath{\textit{off}}}, \INIT{K}{\ensuremath{\textit{off}}}} \REPEAT \ASYNC[c \ln n] \RULE{A}{B}{\neg A}{\neg B} \GETS{A^*}{A} \GETS{B^*}{B} \REPEAT[c \ln n] \ASYNC[c \ln n] \RULE{A^*}{B^*}{\neg A^*}{\neg B^*} \GETS{K}{\ensuremath{\textit{off}}} \ASYNC[c \ln n] \RULE{A^* \wedge \neg K}{\neg A^* \wedge \neg B^*}{A^* \wedge K}{A^* \wedge K} \RULE{B^* \wedge \neg K}{\neg A^* \wedge \neg B^*}{B^* \wedge K}{B^* \wedge K} \IFNONEMPTY{A^*} \GETS{Y_A}{\ensuremath{\textit{on}}} \IFNONEMPTY{B^*} \GETS{Y_A}{\ensuremath{\textit{off}}} \end{protocol} \end{figure} \begin{theorem} Protocol \fun{MajorityExact} eventually correctly computes majority. The result is obtained in $O(\log^3 n)$ parallel rounds w.h.p., counting from the end of the initialization phase from Theorem~\ref{th:guarantees}(ii)(b). \end{theorem} \begin{proof} Suppose w.l.o.g.\ that initially $|\set A| < |\set B|$.
Eventually (w.h.p. after polynomial time), $|\set A| = 0$ and set $\set A$ never changes again. At the end of the next good iteration after the above, we have $\set A^* = \emptyset$ and set $\set A^*$ never changes again. At the end of the next good iteration after the above, we have $\set Y_A = \emptyset$ and $\set Z_{(18)} = \emptyset$. Set $\set Z_{(18)}$ never changes again. Any successive valuation of $Z_{(18)}$ in the compilation of line ``$\IFNONEMPTY{A^*}$'' always returns false, hence the branch of the program containing the line \GETS{Y_A}{\ensuremath{\textit{on}}} is never entered. It follows that any successive update to $Y_A$ is of the form \GETS{Y_A}{\ensuremath{\textit{off}}}, and so $\set Y_A = \emptyset$ subsequently always holds. The case of $|\set A| > |\set B|$ is analyzed analogously, reaching $\set Y_A$ equal to the entire population. For a protocol compilation following Theorem~\ref{th:guarantees}(ii)(b), a correct result with certainty is reached within $O(\mathrm{poly}(n))$ steps, w.h.p., starting from any reachable state of the protocol. The expected time it takes for the protocol to compute majority follows from an analysis identical to the one done in Theorem~\ref{th:majority_runtime}. \end{proof} \subsection{Computing Arbitrary Semi-linear Predicates} \label{general-semi-linear} Generalizing the approach used in the exact solution to majority, we can also solve arbitrary semi-linear predicates. For a fixed semi-linear predicate\footnote{For simplicity and w.l.o.g., we consider binary-valued predicates, only.} $\Pi$ on a finite set of input states $A_1, A_2, \ldots$, we achieve this by combining two other population protocols for computing $\Pi$ as black boxes.
The first of these protocols, given in~\cite{DBLP:journals/dc/AngluinAE08a}, has the property that for a population with a distinguished leader state and a given unique leader ($|\mathcal{L}|=1$), after $O(\log^5 n)$ rounds of computation, it writes the value of $\Pi(\mathcal{A}_1,\mathcal{A}_2,\ldots)$ to the output of all agents, w.h.p. of correctness. We call it the \emph{fast} blackbox. Moreover, we use the generic technique for exactly and stably computing, in expected polynomial time, the value of $\Pi(\mathcal{A}_1,\mathcal{A}_2,\ldots)$ and writing it to the output of all agents~\cite{DBLP:journals/dc/AngluinADFP06}. This is called the \emph{slow} blackbox. The slow blackbox uses output states $(P_D^1, P_D^0)$, at most one of which is set for each agent, and stable computation means that if state $P_D^i$ is set for all agents, $i\in \{0,1\}$, then $i$ represents the true value of $\Pi(\mathcal{A}_1,\mathcal{A}_2,\ldots)$. The fast blackbox and slow blackbox are put together in separate threads with the threads of protocol \fun{LeaderElectionExact}, to achieve protocol \fun{SemilinearPredicateExact}. Note that the execution of the ruleset over $\geq (c \ln n)^5$ rounds should be read as $5$ nested loops over $\geq (c \ln n)$ rounds. \begin{figure}[!!!ht] \begin{protocol} \PROTOCOL{SemilinearPredicateExact} \{for predicate $\Pi$\} \VAR{\OUTPUT{\INIT{P}{\ensuremath{\textit{on}}}}, \INPUT{A_1, A_2,\ldots}, \INIT{L}{\ensuremath{\textit{on}}}, \INIT{P_D}{\ensuremath{\textit{on}}}} \{Import all threads of protocol LeaderElection on variable $L$\} \THREAD{SemiLinear}{P,\READONLY{L,P_D^0,P_D^1}} \LOCAL{P^*} \REPEAT \{Reset all state variables of fast blackbox to initial settings using \GETS \} \ASYNC[(c \ln n)^5] $\triangleright$ \{Fast blackbox: w.h.p.
compute $\Pi(\mathcal{A}_1,\mathcal{A}_2,\ldots)$ as $P^*$, using $\cal L$ as leader(s)\} \IFNONEMPTY{P^*} \IFNONEMPTY{\neg P_D^0} \IFNONEMPTY{\neg P} \GETS{P}{\ensuremath{\textit{on}}} \IFNONEMPTY{\neg P^*} \IFNONEMPTY{\neg P_D^1} \IFNONEMPTY{P} \GETS{P}{\ensuremath{\textit{off}}} \THREAD{SemiLinearSlow}{P_D^0,P_D^1} \ASYNC $\triangleright$ \{Slow blackbox: deterministically compute $\Pi(\mathcal{A}_1,\mathcal{A}_2,\ldots)$ as $(P_D^0,P_D^1)$\} \end{protocol} \end{figure} \begin{theorem} Protocol \fun{SemilinearPredicateExact} eventually correctly computes $\Pi(\mathcal{A}_1,\mathcal{A}_2,\ldots)$. The result is obtained in $O(\log^5 n)$ parallel rounds w.h.p., counting from the end of the initialization phase from Theorem~\ref{th:guarantees}(ii)(b). \end{theorem} \begin{proof} Eventually (w.h.p. after polynomial time), we have $P_D^i$ and $\neg P_D^{1-i}$ correctly and permanently set for all agents, where $i\in \{0,1\}$ is given as $i = \Pi(\mathcal{A}_1,\mathcal{A}_2,\ldots)$, by the properties of the slow blackbox. By the analysis of protocol \fun{LeaderElection}, eventually (w.h.p. after polynomial time), we have that $|\mathcal{L}| =1$. At the end of the next good iteration after both of the above conditions are reached, in thread \fun{SemiLinear}, we either have that (1) $P$ and $\neg P_D^0$ are set for all agents, or (2) $\neg P$ and $P_D^0$ are set for all agents (in the first two cases we have converged to $P = \Pi(\mathcal{A}_1,\mathcal{A}_2,\ldots)$ and no further update to $P$ will be performed), or (3) we have $P\neq \Pi(\mathcal{A}_1,\mathcal{A}_2,\ldots)$ for some agent. In the last case, by the properties of the fast blackbox, at the end of the good iteration, we have $P^*=\Pi(\mathcal{A}_1,\mathcal{A}_2,\ldots)$ for all agents, w.h.p. Once this event holds, we will update $P$ accordingly, and no further updates to $P$ will ever be performed by the protocol.
For a protocol compilation following Theorem~\ref{th:guarantees}(ii)(b), a correct result with certainty is reached within $O(\mathrm{poly}(n))$ rounds, w.h.p., starting from any reachable state of the protocol. Finally, we note that w.h.p., the correct value of $P = \Pi(\mathcal{A}_1,\mathcal{A}_2,\ldots)$ will be set within $O(\log^5 n)$ rounds from the time when the leader election protocol has converged, and will never change again, w.h.p., in the polynomial number of steps until the convergence of the slow blackbox. While the start of the blackbox is not perfectly synchronized and the agents' start times differ w.h.p. by $O(\log n)$, a careful inspection of the protocol of \cite{DBLP:journals/dc/AngluinAE08a} reveals that this is not an issue, at the cost of slowing down the computation by an additional $O(\log n)$ time. After the convergence of the slow blackbox, the result will never change again, as analyzed above. Overall, it follows that the protocol converges within $O(\log^5 n)$ rounds from the end of the initialization phase. \end{proof} Since the length of the initialization phase of the protocol is $O(n^\varepsilon)$ for the compilation scheme of Theorem~\ref{th:guarantees}(ii)(b), we obtain that the \fun{SemilinearPredicateExact} protocol provides an exact result in $O(n^\varepsilon)$ rounds in expectation. When using the compilation scheme of Theorem~\ref{th:guarantees}(ii)(a), we obtain a convergence time of $O(\log^5 n)$ rounds, but only w.h.p. of correctness of the result. \bibliographystyle{alpha}
\section{Introduction} Hidden Markov models (HMMs) are widely used in machine learning when the data samples are time \emph{dependent}, for example in speech recognition, language processing, and video analysis. The graphical model of a HMM is shown in Figure~\ref{fig:hmm}. HMM models a (time-dependent) sequence of data $\{Y_t\}_{t=0}^T$ as indirect observations of an underlying Markov chain $\{X_t\}_{t=0}^T$ which is not available to us. Homogeneous HMMs are parsimonious models, in the sense that they are fully characterized by the transition probability $\Pr[X_{t+1}|X_{t}]$ and the emission probability $\Pr[Y_t|X_t]$ even though the size of the given data $\{Y_t\}_{t=0}^T$ can be very large. Consider a homogeneous HMM such that: \begin{itemize}[noitemsep] \item a latent variable $X_t$ can take $K$ possible outcomes $x_1,...,x_K$; \item an ambient variable $Y_t$ can take $N$ possible outcomes $y_1,...,y_N$. \end{itemize} Recall that \cite{rabiner1986introduction,Ghahramani2001}: \begin{itemize}[noitemsep] \item Given both $\{X_t\}_{t=0}^T$ and $\{Y_t\}_{t=0}^T$, the complete joint probability factors, and we can easily estimate the transition probability $\Pr[X_{t+1}|X_{t}]$ and the emission probability $\Pr[Y_t|X_t]$. \item Given only $\{Y_t\}_{t=0}^T$, but assuming we know the underlying transition and emission probabilities, we can calculate the observation likelihood using the forward algorithm, estimate the most likely hidden sequence using the Viterbi algorithm, and compute the posterior probability of the hidden states using the forward-backward algorithm. \end{itemize} The most natural problem setting, however, is when neither the hidden state sequence nor the underlying probabilities are known to us---we only have access to a sequence of observations, and our job is to reveal the HMM structure, characterized by the transition matrix $\Pr[X_{t+1}|X_{t}]$ and the emission probability $\Pr[Y_t|X_t]$ from the set of observations $\{Y_t\}_{t=0}^T$. 
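To make the second bullet above concrete, here is a minimal forward-algorithm sketch in pure Python (the two-state parameters are illustrative toy values, not taken from any experiment):

```python
def forward_likelihood(obs, pi, A, B):
    # Forward algorithm: Pr[Y_0, ..., Y_T] for an HMM with initial
    # distribution pi, transitions A[i][j] = Pr[X_{t+1}=j | X_t=i],
    # and emissions B[i][y] = Pr[Y_t=y | X_t=i].
    K = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(K)]
    for y in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(K)) * B[j][y]
                 for j in range(K)]
    return sum(alpha)

pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.2, 0.8]]
B = [[0.7, 0.3], [0.1, 0.9]]
lik = forward_likelihood([0, 1, 0], pi, A, B)
```

As a quick sanity check on the recursion, the likelihoods of all observation sequences of a fixed length sum to one.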
\begin{figure}[t] \centering \input{hmm} \caption{The graphical model of a HMM.} \label{fig:hmm} \end{figure} \subsection{Related work}\label{sec:related} The traditional way of learning a HMM from $\{Y_t\}_{t=0}^T$ is via expectation-maximization (EM) \cite{rabiner1986introduction}, in which the expectation step is performed by calling the forward-backward algorithm. This specific instance of EM is also called the Baum-Welch algorithm \cite{baum1970maximization,Ghahramani2001}. However, the complexity of Baum-Welch is prohibitive when $T$ is relatively large---the complexity of the forward-backward algorithm is $\mathcal{O}(K^2T)$, but EM converges slowly, so the forward-backward algorithm must be called many times. This is a critical issue, because a HMM can only be learned with high accuracy when the number of observation samples $T$ is large enough. One way of designing scalable algorithms for learning HMMs is to work with sufficient statistics---a summary of the given observation sequence, whose size does not grow with~$T$. Throughout this paper we assume that the HMM process is stationary (time-invariant), which is true almost surely if the underlying Markov process is ergodic and the process has been going on for a reasonable amount of time. With $T$ large enough, we can accurately estimate the co-occurrence probability between two consecutive emissions $\Pr[Y_t,Y_{t+1}]$. 
According to the graphical model shown in Figure~\ref{fig:hmm}, it is easy to see that given the value of $X_t$, $Y_t$ is conditionally independent of all the other variables, leading to the factorization \begin{align}\label{eq:hmm_fac2} \Pr[Y_t,Y_{t+1}] = \sum_{k,j=1}^{K}&\Pr[Y_t|X_t=x_k]\Pr[Y_{t+1}|X_{t+1}=x_j]\Pr[X_t=x_k,X_{t+1}=x_j]. \end{align} Let $\boldsymbol{\varOmega}\in\mathbf{R}^{N\times N}$, $\boldsymbol{M}\in\mathbf{R}^{N\times K}$, and $\boldsymbol{\varTheta}\in\mathbf{R}^{K\times K}$, with their elements defined as \begin{align*} \varOmega_{n\ell} &= \Pr[Y_t=y_n,Y_{t+1}=y_\ell],\\ M_{nk} &= \Pr[Y_t=y_n|X_t=x_k],\\ \varTheta_{kj} &= \Pr[X_t=x_k,X_{t+1}=x_j]. \end{align*} Then, equation \eqref{eq:hmm_fac2} can be written compactly as \begin{align}\label{eq:MTM} \boldsymbol{\varOmega} = \boldsymbol{M}\boldsymbol{\varTheta}\boldsymbol{M}^{\!\top\!}. \end{align} Noticing that \eqref{eq:MTM} is a nonnegative matrix tri-factorization with a number of inconsequential constraints for $\boldsymbol{M}$ and $\boldsymbol{\varTheta}$ to properly represent probabilities, \citet{Vanluyten2008,Lakshminarayanan2010,Cybenko2011} proposed using nonnegative matrix factorization (NMF) to estimate the HMM probabilities. However, NMF-based methods have a serious shortcoming in this context: the tri-factorization~\eqref{eq:MTM} is in general not unique, because it is fairly easy to find a nonsingular matrix $\boldsymbol{Q}$ such that both $\boldsymbol{M}\boldsymbol{Q}\geq0$ and $\boldsymbol{Q}^{-1}\boldsymbol{\varTheta}\boldsymbol{Q}^{-{\!\top\!}}\geq0$, and then $\widetilde{\boldsymbol{M}}=\boldsymbol{M}\boldsymbol{Q}$ and $\widetilde{\boldsymbol{\varTheta}}=\boldsymbol{Q}^{-1}\boldsymbol{\varTheta}\boldsymbol{Q}^{-{\!\top\!}}$ are equally good solutions in terms of reconstructing the co-occurrence matrix $\boldsymbol{\varOmega}$.
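The factorization \eqref{eq:MTM} is easy to sanity-check numerically; the following pure-Python sketch (toy $N=3$, $K=2$ values chosen only for illustration) forms $\boldsymbol{\varOmega} = \boldsymbol{M}\boldsymbol{\varTheta}\boldsymbol{M}^{\top}$ entrywise:

```python
def cooccurrence(M, Theta):
    # Omega[n][l] = sum_{k,j} M[n][k] * Theta[k][j] * M[l][j],
    # i.e., Omega = M * Theta * M^T.
    N, K = len(M), len(M[0])
    return [[sum(M[n][k] * Theta[k][j] * M[l][j]
                 for k in range(K) for j in range(K))
             for l in range(N)] for n in range(N)]

M = [[0.6, 0.1], [0.3, 0.3], [0.1, 0.6]]   # emission matrix, columns sum to 1
Theta = [[0.4, 0.1], [0.1, 0.4]]           # joint pmf of (X_t, X_{t+1})
Omega = cooccurrence(M, Theta)
total = sum(sum(row) for row in Omega)     # a joint pmf must sum to 1
```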
When we use $(\boldsymbol{M},\boldsymbol{\varTheta})$ and $(\widetilde{\boldsymbol{M}},\widetilde{\boldsymbol{\varTheta}})$ to perform HMM inference, such as estimating hidden states or predicting new emissions, the two models often yield completely different results, unless $\boldsymbol{Q}$ is a permutation matrix. A number of works propose to use \emph{tensor} methods to overcome the identifiability issue. Instead of working with the pairwise co-occurrence probabilities, they start by estimating the joint probabilities of three consecutive observations $\Pr[Y_{t-1},Y_t,Y_{t+1}]$. Noticing that these three random variables are conditionally independent given $X_t$, the triple-occurrence probability factors into \begin{align*} \Pr[Y_{t-1},Y_t,Y_{t+1}] = \sum_{k=1}^{K}\Pr[X_t=x_k]\Pr[Y_{t-1}|X_t=x_k]\Pr[Y_{t}|X_t=x_k]\Pr[Y_{t+1}|X_t=x_k], \end{align*} which admits a tensor \emph{canonical polyadic decomposition} (CPD) model \cite{Hsu2009,Anandkumar2012,Anandkumar2014}. Assuming $K\leq N$, the CPD is essentially unique if two of the three factor matrices have full column rank, and the other one is not rank one \cite{harshman1970foundations}; in the context of HMMs, this is equivalent to assuming $\boldsymbol{M}$ and $\boldsymbol{\varTheta}$ both have linearly independent columns, which is a relatively mild condition. The CPD is known to be unique under much more relaxed conditions \cite{sidiropoulos2017tensor}, but in order to uniquely retrieve the transition probability using the relationship \[ \Pr[Y_{t+1}|X_{t}] = \sum_{j=1}^{K} \Pr[Y_{t+1}|X_{t+1}\!=\!x_j]\Pr[X_{t+1}\!=\!x_j|X_t], \] $K\leq N$ is actually the best we can achieve using triple-occurrences without making further assumptions. 
\footnote{In the supplementary material, we prove that if the emission probability is \emph{generic} and the transition probability is \emph{sparse}, the HMM can be uniquely identified from triple-occurrence probability for \mbox{$K<N^2/16$} using the latest tensor identifiability result \cite{chiantini2012generic}.} A~salient feature in this case is that if the triple-occurrence probability $\Pr[Y_{t-1},Y_t,Y_{t+1}]$ is exactly given (meaning the rank of the triple-occurrence tensor is indeed smaller than $N$), the CPD can be efficiently calculated using generalized eigendecomposition and related algebraic methods \cite{sanchez1990tensorial,leurgans1993decomposition,DeLathauwer2004a}. These methods do not work well, however, when the low-rank tensor is perturbed; e.g., due to insufficient mixing / sample averaging of the triple occurrence probabilities. It is also possible to handle cases where $K>N$. The key observation is that, given $X_t$, $Y_t$ is conditionally independent of $Y_{t-1},...,Y_{t-\tau}$ and $Y_{t+1},...,Y_{t+\tau}$. Then, grouping $Y_{t-1},...,Y_{t-\tau}$ into a single categorical variable taking $N^\tau$ possible outcomes, and $Y_{t+1},...,Y_{t+\tau}$ into another one, we can construct a much bigger tensor of size $N^\tau\times N^\tau\times N$, and then uniquely identify the underlying HMM structure with $K\gg N$ as long as certain linear independence requirements are satisfied for the conditional distribution of the \emph{grouped} variables \cite{Allman2009,bhaskara14a,Huang2016,Sharan2017}. It is intuitively clear that for fixed $N$, we need a much larger realization length $T$ in order to accurately estimate $(2\tau+1)$-occurrence probabilities as $\tau$ grows, which is the price we need to pay for learning a HMM with a larger number of hidden states. 
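The grouping trick amounts to a base-$N$ encoding of an observation window into a single categorical outcome; a minimal sketch (symbol alphabet $\{0,\dots,N-1\}$, illustrative only):

```python
def group_index(window, N):
    # Encode a window (Y_{t+1}, ..., Y_{t+tau}) over {0, ..., N-1} as one
    # categorical outcome in {0, ..., N^tau - 1} (base-N positional code).
    idx = 0
    for y in window:
        idx = idx * N + y
    return idx

# tau = 3 consecutive symbols over N = 4 outcomes -> one of 4^3 = 64 outcomes.
i = group_index([2, 0, 3], N=4)
```

The encoding is a bijection on windows of a fixed length, so the grouped variable loses no information about the window.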
\subsection{This paper} The focus of this paper is on cases where $K \leq N$, and $T$ is large enough to obtain an accurate estimate of $\Pr[Y_t,Y_{t+1}]$, but not large enough to accurately estimate triple or higher-order occurrence probabilities. We {\em prove} that it is actually possible to recover the latent structure of an HMM only from pairwise co-occurrence probabilities $\Pr[Y_t,Y_{t+1}]$, provided that the underlying emission probability $\Pr[Y_t|X_t]$ is \emph{sufficiently scattered}. Compared to the existing NMF-based HMM learning approaches, our formulation employs a different (determinant-based) criterion to ensure identifiability of the HMM parameters. Our matrix factorization approach resolves cases that cannot be handled by tensor methods, namely when $T$ is insufficient to estimate third-order probabilities, under an additional condition that is quite mild: the emission probability matrix $\boldsymbol{M}$ must be \emph{sufficiently scattered}, rather than simply full column-rank. We apply our method to hidden topic Markov modeling (HTMM) \cite{gruber2007hidden}, in which case the number of hidden states (topics) is indeed much smaller than the number of ambient states (words). HTMM goes beyond the simple and widely used bag-of-words model by assuming that the (ordered) words in a document are emitted from a hidden topic sequence that evolves according to a Markov model. We show improved performance on real data when using this simple and intuitive model to take word ordering into account when learning topics, which also benefits from our identifiability-guaranteed matrix factorization method. As an illustrative example, we showcase the inferred topic of each word in a news article (with stop words removed) in Figure~\ref{fig:topic}, taken from the Reuters21578 data set obtained at \cite{reuters21578}. As we can see, HTMM produces much more consistent and smooth inferred topics than those obtained from a bag-of-words model (cf.
supplementary material for details). This result agrees with human understanding. \begin{figure}[t] \begin{framed} \teight{china} \tfive{daily} \tfour{vermin eat} \tseven{pct} \tone{grain stocks survey provinces} \tseven{and} \tone{cities showed} \tfour{vermin consume} \tseven{and pct} \teight{china} \tone{grain stocks} \teight{china} \tfive{daily} \tseven{each} \tone{year} \tseven{mln} \teight{tonnes} \tseven{pct} \teight{china} \tfive{fruit} \ttwo{output} \tone{left} \tseven{rot and mln} \teight{tonnes} \tseven{pct} \tfour{vegetables} \tfive{paper} \teight{blamed} \tseven{waste inadequate} \tfive{storage} \tseven{and} \tone{bad} \tfour{preservation} \tseven{methods} \tone{government} \tseven{had launched} \tfive{national} \tone{programme reduce} \tseven{waste calling for} \tone{improved} \tfive{technology storage} \tseven{and} \tfour{preservation} \tseven{and} \tone{greater production} \tfour{additives} \tfive{paper} \tseven{gave details} \bigskip \tfour{china} \ttwo{daily} \tfour{vermin eat} \teight{pct} \tfour{grain} \ttwo{stocks} \tfour{survey provinces and cities showed vermin consume and pct} \teight{china} \tfour{grain stocks} \ttwo{china} \tfour{daily} \tfour{each} \tseven{year} \tfour{mln} \teight{tonnes} \tfour{pct} \teight{china} \tfour{fruit output} \ttwo{left} \tfour{rot and mln} \teight{tonnes} \tfour{pct} \teight{vegetables} \tfour{paper blamed waste} \tseven{inadequate} \tfour{storage} \ttwo{and} \tfour{bad preservation methods government had launched national programme} \teight{reduce} \tfour{waste} \tseven{calling} \tfour{for improved} \teight{technology} \tfour{storage} \ttwo{and} \tfour{preservation and greater production} \ttwo{additives} \tfour{paper gave details} \end{framed} \caption{Inferred topics of the words shown in different colors, obtained by probabilistic latent semantic analysis (top) and hidden topic Markov model (bottom).} \label{fig:topic} \end{figure} \section{Second-order vs. 
Third-order Learning} We start by arguing that for the same observation data $\{Y_t\}_{t=0}^T$, the estimate of the pairwise co-occurrence probability $\Pr[Y_t,Y_{t+1}]$ is always more accurate than that of the triple co-occurrence probability $\Pr[Y_{t-1},Y_t,Y_{t+1}]$. Let us first explicitly describe the estimator we use for these probabilities. For each observation $Y_t$, we define a coordinate vector $\boldsymbol{\psi}_t\in\mathbf{R}^N$, with $\boldsymbol{\psi}_t=\bm{e}_n$ if $Y_t=y_n$. The natural estimator for the pairwise co-occurrence probability matrix $\boldsymbol{\varOmega}$ is \begin{equation}\label{eq:omega2} \widehat{\boldsymbol{\varOmega}} = \frac{1}{T}\sum_{t=0}^{T-1}\boldsymbol{\psi}_t\boldsymbol{\psi}_{t+1}^{\!\top\!}, \end{equation} and similarly for the triple co-occurrence probability $\underline{\boldsymbol{\varOmega}_3}$ \begin{equation}\label{eq:omega3} \widehat{\underline{\boldsymbol{\varOmega}_3}} = \frac{1}{T-1}\sum_{t=1}^{T-1}\boldsymbol{\psi}_{t-1}\circ\boldsymbol{\psi}_t\circ\boldsymbol{\psi}_{t+1}, \end{equation} where $\circ$ denotes the vector outer product.\footnote{In some literature $\circ$ is written as the Kronecker product $\otimes$. Strictly speaking, the Kronecker product of three vectors is a very long vector, not a three-way array. For this reason, we chose to use $\circ$ instead of $\otimes$.} The first observation is that both $\widehat{\boldsymbol{\varOmega}}$ and $\widehat{\underline{\boldsymbol{\varOmega}_3}}$ are unbiased estimators: obviously $\E(\boldsymbol{\psi}_t\boldsymbol{\psi}_{t+1}^{\!\top\!})=\boldsymbol{\varOmega}$ and likewise for the triple occurrences, and taking their averages does not change the expectation. However, the individual terms in the summation are not independent of each other, making it hard to determine how fast the estimates converge to their expectation.
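For concreteness, the estimators \eqref{eq:omega2} and \eqref{eq:omega3} can be computed in a few lines; the following Python sketch (the dense-array representation and the function name are our own illustrative choices) builds both from a symbol sequence:

```python
import numpy as np

def cooccurrence_estimates(y, N):
    """Empirical estimates of Pr[Y_t, Y_{t+1}] and Pr[Y_{t-1}, Y_t, Y_{t+1}].

    y : sequence of T+1 observed symbols, each in {0, ..., N-1}.
    Returns the N x N matrix of Eq. (omega2) and the N x N x N tensor of Eq. (omega3).
    """
    y = np.asarray(y)
    T = len(y) - 1
    Omega2 = np.zeros((N, N))
    for t in range(T):                       # average of psi_t psi_{t+1}^T
        Omega2[y[t], y[t + 1]] += 1.0 / T
    Omega3 = np.zeros((N, N, N))
    for t in range(1, T):                    # average of psi_{t-1} o psi_t o psi_{t+1}
        Omega3[y[t - 1], y[t], y[t + 1]] += 1.0 / (T - 1)
    return Omega2, Omega3
```

Both outputs sum to one by construction, since each of the $T$ (respectively $T-1$) outer products contributes exactly one unit of mass.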
The state-of-the-art concentration result for HMMs \cite{kontorovich2006measure} states that for any 1-Lipschitz function $f$ \[ \Pr[|f(\{Y_t\})-\E f(\{Y_t\})|>\epsilon] \leq 2\exp\left(-T\epsilon^2/c\right), \] where $c$ is a constant that only depends on the specific HMM structure but not on the function $f$ (cf. \cite{kontorovich2006measure} for details). Taking $f$ as any entry in $\widehat{\boldsymbol{\varOmega}}$ or $\widehat{\underline{\boldsymbol{\varOmega}_3}}$, we can check that it is indeed 1-Lipschitz, meaning that as $T$ goes to infinity, both estimators converge to their expectation with negligible fluctuations. We now prove that for a given set of observations $\{Y_t\}_{t=0}^T$, $\widehat{\boldsymbol{\varOmega}}$ is always more accurate than $\widehat{\underline{\boldsymbol{\varOmega}_3}}$. Since both of them represent probabilities, we use two common metrics to measure the differences between the estimators and their expectations: the Kullback-Leibler divergence $D_\text{KL}(\cdot)$ and the total-variation difference $D_\text{TV}(\cdot)$. \begin{proposition}\label{ppst:2>3} Let $\widehat{\boldsymbol{\varOmega}}$ and $\widehat{\underline{\boldsymbol{\varOmega}_3}}$ be obtained from the same set of observations $\{Y_t\}_{t=0}^T$. Then \begin{align*} D_\textup{KL}(\widehat{\boldsymbol{\varOmega}}\|\boldsymbol{\varOmega}) &\leq D_\textup{KL}(\widehat{\underline{\boldsymbol{\varOmega}_3}}\|\underline{\boldsymbol{\varOmega}_3}) \qquad\text{and}\\ D_\textup{TV}(\widehat{\boldsymbol{\varOmega}}\|\boldsymbol{\varOmega}) &\leq D_\textup{TV}(\widehat{\underline{\boldsymbol{\varOmega}_3}}\|\underline{\boldsymbol{\varOmega}_3}). \end{align*} \end{proposition} The proof of Proposition~\ref{ppst:2>3} is relegated to the supplementary material.
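While the proof is relegated to the supplementary material, one way to build intuition for why such a result is plausible is that $\widehat{\boldsymbol{\varOmega}}$ is, up to edge effects, a marginal of $\widehat{\underline{\boldsymbol{\varOmega}_3}}$, and both divergences can only shrink under marginalization. A minimal numerical check of the total-variation half of this observation (with arbitrary distributions of our own choosing, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two arbitrary distributions over triples (Y_{t-1}, Y_t, Y_{t+1}).
P = rng.random((4, 4, 4)); P /= P.sum()
Q = rng.random((4, 4, 4)); Q /= Q.sum()

tv_triple = 0.5 * np.abs(P - Q).sum()
# Marginalize out Y_{t-1} to get distributions over the pair (Y_t, Y_{t+1}).
tv_pair = 0.5 * np.abs(P.sum(axis=0) - Q.sum(axis=0)).sum()

assert tv_pair <= tv_triple + 1e-12   # total variation contracts under marginalization
```

The inequality holds for any pair of distributions, by the triangle inequality applied to the sums inside each marginal cell.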
\section{Identifiability of HMMs from Pairwise Co-occurrence Probabilities} The arguments made in the previous section motivate going back to matrix factorization methods for learning an HMM when the realization length $T$ is not large enough to obtain accurate estimates of triple co-occurrence probabilities. As we have explained in \S\ref{sec:related}, the co-occurrence probability matrix $\boldsymbol{\varOmega}$ admits a nonnegative matrix tri-factorization model \eqref{eq:MTM}. There are a number of additional equality constraints. Columns of $\boldsymbol{M}$ represent conditional distributions, so $\boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{M}=\boldsymbol{\mathit{1}}^{\!\top\!}$. Matrix $\boldsymbol{\varTheta}$ represents the joint distribution between two consecutive Markovian variables, therefore $\boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{\varTheta}\boldsymbol{\mathit{1}}=1$. Furthermore, we have that $\boldsymbol{\varTheta}\boldsymbol{\mathit{1}}$ and $\boldsymbol{\varTheta}^{\!\top\!}\boldsymbol{\mathit{1}}$ represent $\Pr[X_t]$ and $\Pr[X_{t+1}]$ respectively, and since we assume that the Markov chain is stationary, they are the same, i.e., $\boldsymbol{\varTheta}\boldsymbol{\mathit{1}}=\boldsymbol{\varTheta}^{\!\top\!}\boldsymbol{\mathit{1}}$. Notice that this does not imply that $\boldsymbol{\varTheta}$ is symmetric, and in fact it is often not symmetric. \citet{huang2016nips} considered a factorization model similar to \eqref{eq:MTM} in a different context, and showed that identifiability can be achieved under a reasonable assumption called \emph{sufficiently scattered}, defined as follows. \begin{definition}[sufficiently scattered]\label{def:suf_scat} Let $\cone(\boldsymbol{M}^{\!\top\!})^*$ denote the polyhedral cone $\{\boldsymbol{x}:\boldsymbol{M}\boldsymbol{x}\geq 0\}$, and $\mathcal{C}$ denote the elliptical cone $\{\boldsymbol{x}:\|\boldsymbol{x}\|\leq\boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{x}\}$.
Matrix $\boldsymbol{M}$ is called \textbf{sufficiently scattered} if it satisfies: (i) $\cone(\boldsymbol{M}^{\!\top\!})^*\subseteq\mathcal{C}$, and (ii) $\cone({\boldsymbol{M}}^{\!\top\!})^\ast\cap{\rm bd}\mathcal{C}=\{\lambda {\bm e}_k:\lambda\geq 0,k=1,...,K\}$, where ${\rm bd}\mathcal{C}$ denotes the boundary of $\mathcal{C}$, $\{\boldsymbol{x}:\|\boldsymbol{x}\|=\boldsymbol{\mathit{1}}^{{\!\top\!}}\boldsymbol{x}\}$. \end{definition} The sufficiently scattered condition was first proposed in \cite{huang2014tsp} to establish uniqueness conditions for the widely used \emph{nonnegative matrix factorization} (NMF). For the NMF model $\boldsymbol{\varOmega}=\bm{WH}^{\!\top\!}$, if both $\bm{W}$ and $\bm{H}$ are sufficiently scattered, then the nonnegative decomposition is unique up to column permutation and scaling. Follow-up work strengthened and extended the identifiability results based on this geometry-inspired condition \cite{fu2015bss,huang2016nips,fu2017spl}. A similar tri-factorization model was considered in \cite{huang2016nips} in the context of bag-of-words topic modeling, and it was shown that among all feasible solutions of~\eqref{eq:MTM}, if we find one that minimizes $|\det\boldsymbol{\varTheta}|$, then it recovers the ground-truth latent factors $\boldsymbol{M}$ and $\boldsymbol{\varTheta}$, assuming the ground-truth $\boldsymbol{M}$ is sufficiently scattered.
In our present context, we therefore propose the following problem formulation: \begin{subequations}\label{prob:main} \begin{align} \minimize_{\boldsymbol{\varTheta},\boldsymbol{M}}~~~ & |\det\boldsymbol{\varTheta}| \\ \textrm{subject to}~~~ & \boldsymbol{\varOmega}=\boldsymbol{M}\boldsymbol{\varTheta}\boldsymbol{M}^{\!\top\!}, \label{eq:Omega}\\ & \boldsymbol{\varTheta}\geq0, \boldsymbol{\varTheta}\boldsymbol{\mathit{1}}=\boldsymbol{\varTheta}^{\!\top\!}\boldsymbol{\mathit{1}}, \boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{\varTheta}\boldsymbol{\mathit{1}}=1, \label{eq:Theta}\\ & \boldsymbol{M}\geq0, \boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{M}=\boldsymbol{\mathit{1}}^{\!\top\!}. \end{align} \end{subequations} Regarding Problem~\eqref{prob:main}, we have the following identifiability result. \begin{theorem}\label{thm:unique} \cite{huang2016nips} Suppose $\boldsymbol{\varOmega}$ is constructed as $\boldsymbol{\varOmega}=\boldsymbol{M}_\natural\boldsymbol{\varTheta}_\natural\boldsymbol{M}_\natural^{\!\top\!}$, where $\boldsymbol{M}_\natural$ and $\boldsymbol{\varTheta}_\natural$ satisfy the constraints in~\eqref{prob:main}, and in addition (i) $\rank(\boldsymbol{\varTheta}_\natural)=K$ and (ii) $\boldsymbol{M}_\natural$ is \textup{\bf sufficiently scattered}. Let $(\boldsymbol{M}_\star,\boldsymbol{\varTheta}_\star)$ be an optimal solution of~\eqref{prob:main}. Then there must exist a permutation matrix $\boldsymbol{\varPi}\in\mathbf{R}^{K\times K}$ such that \[ \boldsymbol{M}_\natural = \boldsymbol{M}_\star\boldsymbol{\varPi}, \qquad \boldsymbol{\varTheta}_\natural = \boldsymbol{\varPi}^{\!\top\!}\boldsymbol{\varTheta}_\star\boldsymbol{\varPi}. \] \end{theorem} One may notice that in \cite{huang2016nips} there are no constraints on the core matrix $\boldsymbol{\varTheta}$ like the ones we impose in~\eqref{eq:Theta}.
In terms of identifiability, it is easy to see that if the ground-truth $\boldsymbol{\varTheta}_\natural$ satisfies~\eqref{eq:Theta}, solving~\eqref{prob:main} even without~\eqref{eq:Theta} will produce a solution $\boldsymbol{\varTheta}_\star$ that satisfies~\eqref{eq:Theta}, thanks to uniqueness. In practice, when we are given a less accurate $\boldsymbol{\varOmega}$, such ``redundant'' information helps us reduce the estimation error, but that goes beyond identifiability considerations. \begin{figure}[t] \centering \begin{subfigure}[t]{.3\linewidth} \centering \resizebox{.8\linewidth}{!}{\input{separable}} \caption{Separable}\label{fig:geo-separable} \end{subfigure} \begin{subfigure}[t]{.3\linewidth} \centering \resizebox{.8\linewidth}{!}{\input{scattered}} \caption{Sufficiently scattered}\label{fig:geo-scattered} \end{subfigure} \begin{subfigure}[t]{.3\linewidth} \centering \resizebox{.8\linewidth}{!}{\input{not-id-able}} \caption{Not identifiable}\label{fig:geo-no_id} \end{subfigure} \caption{A geometric illustration of the sufficiently scattered condition (middle), a special case that is separable (left), and a case that is not identifiable (right).} \label{fig:geo} \end{figure} For the proof of Theorem~\ref{thm:unique} we refer the reader to \cite{huang2016nips}. Here we provide some insights on this geometry-inspired sufficiently scattered condition, and discuss why it is a reasonable (and thus practical) assumption. The notation $\cone(\boldsymbol{M}^{\!\top\!})^*=\{\boldsymbol{x}:\boldsymbol{M}\boldsymbol{x}\geq0\}$ comes from the convention in convex analysis that it is the \emph{dual cone} of the conical hull of the row vectors of $\boldsymbol{M}$, i.e., $\cone(\boldsymbol{M}^{\!\top\!})=\{\boldsymbol{M}^{\!\top\!}\bm{\alpha}:\bm{\alpha}\geq0\}$. Similarly, we can derive that the dual cone of $\mathcal{C}$ is $\mathcal{C}^*=\{\boldsymbol{x}:\|\boldsymbol{x}\|\leq\boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{x}/\sqrt{K-1}\}$.
A useful property of the dual cone is that for two convex cones $\mathcal{A}$ and $\mathcal{B}$, $\mathcal{A}\subseteq\mathcal{B}$ iff $\mathcal{B}^*\subseteq\mathcal{A}^*$. Therefore, the first requirement of sufficiently scattered in Definition~\ref{def:suf_scat} equivalently means \[ \mathcal{C}^*\subseteq\cone(\boldsymbol{M}^{\!\top\!}). \] We give a geometric illustration of the sufficiently scattered condition in Figure~\ref{fig:geo-scattered} for $K=3$, where we focus on the 2-dimensional plane $\boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{x}=1$. The intersection between this plane and the nonnegative orthant is the probability simplex, which is the triangle in Figure~\ref{fig:geo-scattered}. The outer and inner circles represent the intersections of $\mathcal{C}$ and $\mathcal{C}^*$ with this plane, respectively. The rows of $\boldsymbol{M}$ are scaled to sum up to one, and they are represented by black dots in Figure~\ref{fig:geo-scattered}. Their conical hull is represented by the shaded region. The polygon with dashed lines represents the dual of $\cone(\boldsymbol{M}^{\!\top\!})$, which is indeed a subset of $\mathcal{C}$, and touches the boundary of $\mathcal{C}$ only at the coordinate vectors. Figure~\ref{fig:geo-separable} shows a special case of sufficiently scattered called \emph{separability}, which first appeared in \cite{donoho2004does}, also in the context of establishing uniqueness of NMF. In this case, all the coordinate vectors appear among the rows of $\boldsymbol{M}$, therefore $\cone(\boldsymbol{M}^{\!\top\!})$ equals the nonnegative orthant. It makes sense that this condition makes the identification problem easier, but it is also a very restrictive assumption. The sufficiently scattered condition, on the other hand, only requires that the shaded region contain the inner circle, as shown in Figure~\ref{fig:geo-scattered}.
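As a quick numerical sanity check of this duality (a sketch of our own, using rejection sampling), every point of $\mathcal{C}^*$ should have a nonnegative inner product with every point of $\mathcal{C}$:

```python
import numpy as np

rng = np.random.default_rng(3)
K = 4

def sample_cone(c):
    """Rejection-sample a point x satisfying ||x|| <= c * 1'x."""
    while True:
        g = rng.standard_normal(K)
        if np.linalg.norm(g) <= c * g.sum():
            return g

for _ in range(200):
    x = sample_cone(1.0 / np.sqrt(K - 1))   # x in C^* = {x : ||x|| <= 1'x / sqrt(K-1)}
    y = sample_cone(1.0)                    # y in C   = {x : ||x|| <= 1'x}
    assert x @ y >= -1e-9                   # dual-cone property: x'y >= 0
```

Decomposing $\boldsymbol{x}$ and $\boldsymbol{y}$ along the all-ones direction shows the bound is tight: the worst-case inner product is exactly zero.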
Intuitively this requires that the rows of $\boldsymbol{M}$ be ``well scattered'' in the probability simplex, but not to the extent of ``separability''. Separability-based HMM identification has been considered in \cite{Barlier2015,Glaude2015}. However, the way they construct second-order statistics is very different from ours. Figure~\ref{fig:geo-no_id} shows a case where $\boldsymbol{M}$ is not sufficiently scattered, and it also happens to be a case where $\boldsymbol{M}$ is not identifiable. As we can see, the elliptical cone $\mathcal{C}^*$ is tangent to all the facets of the nonnegative orthant. As a result, for $\boldsymbol{M}$ to be sufficiently scattered, it is necessary that enough rows of $\boldsymbol{M}$ lie on the boundary of the nonnegative orthant, i.e., $\boldsymbol{M}$ is relatively sparse. Specifically, if $\boldsymbol{M}$ is sufficiently scattered, then each column of $\boldsymbol{M}$ contains at least $K-1$ zeros \cite{huang2014tsp}. This is a very important insight, as exactly checking whether a matrix is sufficiently scattered may be computationally hard. In the present paper we further show the following result. \begin{proposition}\label{ppst:volume} The ratio between the volume of the ball obtained by intersecting the hyperplane $\boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{x}=1$ with $\mathcal{C}^*$ and the volume of the probability simplex is \begin{equation}\label{eq:ratio} \frac{1}{\sqrt{\pi K}}\left(\frac{4\pi}{K(K-1)}\right)^{\frac{K-1}{2}} \Gamma\left(\frac{K}{2}\right). \end{equation} \end{proposition} The proof is given in the supplementary material. As $K$ grows larger, the volume ratio \eqref{eq:ratio} goes to zero at a super-exponential decay rate. This implies that the volume of the inner ball quickly becomes negligible compared to the volume of the probability simplex as $K$ becomes moderately large.
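Equation~\eqref{eq:ratio} is straightforward to evaluate numerically. In the sketch below (our own, using log-gamma for numerical stability), the formula returns $1$ for $K=2$, where the inner ball coincides with the simplex, and $\pi/(3\sqrt{3})\approx0.605$ for $K=3$, the area ratio of the inscribed circle of an equilateral triangle:

```python
import math

def volume_ratio(K):
    """Evaluate Eq. (ratio) via logarithms to avoid overflow in Gamma(K/2)."""
    log_r = (-0.5 * math.log(math.pi * K)
             + 0.5 * (K - 1) * math.log(4.0 * math.pi / (K * (K - 1)))
             + math.lgamma(K / 2.0))
    return math.exp(log_r)

for K in (2, 3, 5, 10, 20, 50):
    print(K, volume_ratio(K))   # decays super-exponentially in K
```

Already for $K$ in the tens the ratio is astronomically small, which is why the sufficiently scattered condition is so much milder than it may first appear.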
The take-home point is that, for a practical choice of $K$, say $K\geq10$, as long as $\boldsymbol{M}$ satisfies that each column contains at least $K$ zeros, and the positions of the zeros appear relatively random, it is very likely that it is sufficiently scattered, and thus can be uniquely recovered via solving~\eqref{prob:main}. \section{Algorithm} Our identifiability analysis based on the sufficiently scattered condition poses an interesting non-convex optimization problem~\eqref{prob:main}. First of all, the given co-occurrence probability $\boldsymbol{\varOmega}$ may not be exact, therefore it may not be a good idea to impose~\eqref{eq:Omega} as a hard constraint. For algorithm design, we propose the following modification to problem~\eqref{prob:main}. \begin{align}\label{prob:alg} \minimize_{\boldsymbol{\varTheta},\boldsymbol{M}}~~~ & \sum_{n,\ell=1}^{N}-\varOmega_{n\ell}\log\!\!\sum_{k,j=1}^{K}\!\!M_{nk}\varTheta_{kj}M_{\ell j} + \lambda|\det\boldsymbol{\varTheta}| \nonumber\\ \textrm{subject to}~~~ & \boldsymbol{M}\geq0, \boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{M}=\boldsymbol{\mathit{1}}^{\!\top\!}, \\ & \boldsymbol{\varTheta}\geq0, \boldsymbol{\varTheta}\boldsymbol{\mathit{1}}=\boldsymbol{\varTheta}^{\!\top\!}\boldsymbol{\mathit{1}}, \boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{\varTheta}\boldsymbol{\mathit{1}}=1. \nonumber \end{align} In the loss function of~\eqref{prob:alg}, the first term is the Kullback-Leibler divergence between the empirical probability $\boldsymbol{\varOmega}$ and the parameterized version $\boldsymbol{M}\boldsymbol{\varTheta}\boldsymbol{M}^{\!\top\!}$ (ignoring a constant), and the second term is our identifiability-driven regularization. We need to tune the parameter $\lambda$ to yield good estimation results. Intuitively, however, $\lambda$ should take a relatively small value.
Suppose $\boldsymbol{\varOmega}$ is sufficiently accurate; then the priority is to minimize the difference between $\boldsymbol{\varOmega}$ and $\boldsymbol{M}\boldsymbol{\varTheta}\boldsymbol{M}^{\!\top\!}$; when there exist equally good fits, the second term comes into play and helps us pick out a solution that is \emph{sufficiently scattered}. Since the constraints of~\eqref{prob:alg} are all convex but the loss function is not, we design an iterative algorithm to solve~\eqref{prob:alg} using successive convex approximation. At iteration $r$, with current iterates $\boldsymbol{\varTheta}^r$ and $\boldsymbol{M}^r$, we define \begin{align}\label{eq:posterior} \varPi_{n\ell kj}^r = M_{nk}^r\varTheta_{kj}^rM_{\ell j}^r \bigg/ \sum_{\kappa,\iota=1}^{K}M_{n\kappa}^r\varTheta_{\kappa\iota}^rM_{\ell\iota}^r. \end{align} Obviously, $\varPi_{n\ell kj}^r\geq0$ and $\sum_{k,j=1}^{K}\varPi_{n\ell kj}^r=1$, which defines a probability distribution for fixed $n$ and $\ell$. Using Jensen's inequality~\cite{jensen1906fonctions}, we have that \begin{align}\label{eq:upper1} -\varOmega_{n\ell}\log\sum_{k,j=1}^{K}M_{nk}\varTheta_{kj}M_{\ell j} \leq \sum_{k,j=1}^{K}-\varOmega_{n\ell}\varPi_{n\ell kj}^r \left(\log M_{nk} + \log\varTheta_{kj} + \log M_{\ell j} - \log\varPi_{n\ell kj}^r\right), \end{align} which defines a convex and locally tight upper bound for the first term in the loss function of \eqref{prob:alg}.
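The following Python sketch (with random $\boldsymbol{M}^r$, $\boldsymbol{\varTheta}^r$, and $\boldsymbol{\varOmega}$ of our own choosing) verifies numerically that \eqref{eq:posterior} defines a probability distribution over $(k,j)$ and that \eqref{eq:upper1} holds with equality at the expansion point, i.e., the bound is locally tight:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K = 6, 3
M = rng.random((N, K)); M /= M.sum(axis=0)        # column-stochastic emission M^r
Theta = rng.random((K, K)); Theta /= Theta.sum()  # joint distribution Theta^r
Omega = rng.random((N, N)); Omega /= Omega.sum()  # stand-in for the empirical co-occurrence

P = M @ Theta @ M.T
# Eq. (posterior): Pi[n, l, k, j] = M[n,k] Theta[k,j] M[l,j] / P[n,l]
Pi = np.einsum('nk,kj,lj->nlkj', M, Theta, M) / P[:, :, None, None]
assert np.allclose(Pi.sum(axis=(2, 3)), 1.0)      # a distribution for each (n, l)

# Eq. (upper1) holds with equality at (M^r, Theta^r):
lhs = -(Omega * np.log(P)).sum()
rhs = -(Omega[:, :, None, None] * Pi
        * (np.log(M)[:, None, :, None]            # log M[n,k]
           + np.log(Theta)[None, None, :, :]      # log Theta[k,j]
           + np.log(M)[None, :, None, :]          # log M[l,j]
           - np.log(Pi))).sum()
assert np.isclose(lhs, rhs)
```

Tightness follows because $\log(M_{nk}\varTheta_{kj}M_{\ell j}) - \log\varPi^r_{n\ell kj}$ collapses to $\log P_{n\ell}$ at the expansion point.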
Regarding the second term in the loss of~\eqref{prob:alg}, we propose to simply take the linear approximation \begin{align}\label{eq:approx2} |\!\det\!\boldsymbol{\varTheta}| \approx |\!\det\!\boldsymbol{\varTheta}^r| + |\!\det\!\boldsymbol{\varTheta}^r|\tr\!\left( (\boldsymbol{\varTheta}^r)^{\!-\!1\!}(\boldsymbol{\varTheta}\!-\!\boldsymbol{\varTheta}^r) \right) \end{align} Combining~\eqref{eq:upper1} and~\eqref{eq:approx2}, our successive convex approximation algorithm tries to solve the following convex problem at iteration $r$: \begin{align}\label{prob:iter} \minimize_{\boldsymbol{\varTheta},\boldsymbol{M}}~~ & \sum_{n,\ell=1}^{N}\sum_{k,j=1}^{K} -\varOmega_{n\ell}\varPi_{n\ell kj}^r \left(\log M_{nk} + \log M_{\ell j} + \log\varTheta_{kj} \right) + \lambda\sum_{k,j=1}^{K}\varXi_{kj}^r\varTheta_{kj} \\ \textrm{subject to}~~ & \boldsymbol{M}\geq0, \boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{M}=\boldsymbol{\mathit{1}}^{\!\top\!}, \nonumber \\ & \boldsymbol{\varTheta}\geq0, \boldsymbol{\varTheta}\boldsymbol{\mathit{1}}=\boldsymbol{\varTheta}^{\!\top\!}\boldsymbol{\mathit{1}}, \boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{\varTheta}\boldsymbol{\mathit{1}}=1, \nonumber \end{align} where we define $\bm{\varXi}^r=|\det\boldsymbol{\varTheta}^r|(\boldsymbol{\varTheta}^r)^{-{\!\top\!}}$. Problem~\eqref{prob:iter} decouples with respect to $\boldsymbol{M}$ and $\boldsymbol{\varTheta}$, so we can work out their updates individually. The update of $\boldsymbol{M}$ admits a simple closed form solution, which can be derived via checking the KKT conditions. We denote the dual variable corresponding to $\boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{M}=\boldsymbol{\mathit{1}}^{\!\top\!}$ as $\boldsymbol{\mu}\in\mathbf{R}^K$. 
Setting the gradient of the Lagrangian with respect to $M_{nk}$ equal to zero, we have \[ M_{nk} = \sum_{\ell=1}^N\sum_{j=1}^{K}\left(\varOmega_{n\ell}\varPi_{n\ell kj}^r+\varOmega_{\ell n}\varPi_{\ell njk}^r\right) \bigg/ \mu_k \] and $\boldsymbol{\mu}$ should be chosen so that the constraint $\boldsymbol{\mathit{1}}^{\!\top\!}\boldsymbol{M}=\boldsymbol{\mathit{1}}^{\!\top\!}$ is satisfied, which amounts to a simple re-scaling. The update of $\boldsymbol{\varTheta}$ does not admit a closed-form expression, but it can still be computed very efficiently. Since the nonnegativity constraint is implicitly enforced by the individual $\log$ functions in the loss function, we propose to solve it using Newton's method with equality constraints \citep[\S10.2]{boyd2004convex}. Although Newton's method requires solving a linear system of equations with $K^2$ variables in each iteration, there is special structure we can exploit to reduce the per-iteration complexity down to $\mathcal{O}(K^3)$: the Hessian of the loss function of~\eqref{prob:iter} is diagonal, and the linear equality constraints are highly structured; using block elimination \citep[\S10.4.2]{boyd2004convex}, we ultimately only need to solve a positive definite linear system with $K$ variables. Together with the quadratic convergence rate of Newton's method, the complexity of updating $\boldsymbol{\varTheta}$ is $\mathcal{O}(K^3\log\log\frac{1}{\varepsilon})$, where $\varepsilon$ is the desired accuracy of the $\boldsymbol{\varTheta}$ update. Since a naive implementation of Newton's method would cost $\mathcal{O}(K^6\log\log\frac{1}{\varepsilon})$, the difference is significant for moderately large $K$. The complete implementation of this tailored Newton's method, \textsc{ThetaUpdate}, together with its detailed derivation, can be found in the supplementary material.
\begin{algorithm}[t] \caption{Proposed Algorithm}\label{alg:hmm_id} \begin{algorithmic}[1] \REQUIRE $\lambda>0$ \STATE initialize $\boldsymbol{M}$ using \cite{huang2016nips} \STATE initialize $\boldsymbol{\varTheta}\leftarrow \frac{1}{K(K+1)}(\boldsymbol{I}+\boldsymbol{\mathit{1}}\!\boldsymbol{\mathit{1}}^{\!\top\!})$ \REPEAT \STATE $\widetilde{\boldsymbol{\varOmega}}\leftarrow\boldsymbol{\varOmega}\big/\boldsymbol{M}\boldsymbol{\varTheta}\boldsymbol{M}^{{\!\top\!}}$ \COMMENT{element-wise division} \STATE $\widetilde{\boldsymbol{M}}\leftarrow \boldsymbol{M} \ast \left( \widetilde{\boldsymbol{\varOmega}}\boldsymbol{M}\boldsymbol{\varTheta}^{{\!\top\!}} + \widetilde{\boldsymbol{\varOmega}}^{\!\top\!}\boldsymbol{M}\boldsymbol{\varTheta} \right)$ \STATE $\widetilde{\boldsymbol{\varTheta}}\leftarrow\boldsymbol{M}^{{\!\top\!}}\widetilde{\boldsymbol{\varOmega}}\boldsymbol{M}$ \STATE $\widetilde{\boldsymbol{M}}\leftarrow \widetilde{\boldsymbol{M}} \diag(\boldsymbol{\mathit{1}}^{\!\top\!}\widetilde{\boldsymbol{M}})^{-1}$ \STATE $\widetilde{\boldsymbol{\varTheta}}\leftarrow$ \textsc{ThetaUpdate} \COMMENT{cf. supplementary} \STATE $(\boldsymbol{M},\boldsymbol{\varTheta})\leftarrow$ Armijo line search between $(\boldsymbol{M},\boldsymbol{\varTheta})$ and $(\widetilde{\boldsymbol{M}},\widetilde{\boldsymbol{\varTheta}})$ \UNTIL{convergence} \RETURN $\boldsymbol{M}$ and $\boldsymbol{\varTheta}$ \end{algorithmic} \end{algorithm} The entire proposed algorithm to solve Problem~\eqref{prob:alg} is summarized in Algorithm~\ref{alg:hmm_id}. Notice that there is an additional line-search step to ensure decrease of the loss function. The constraint set of \eqref{prob:alg} is convex, so the line-search step will not incur infeasibility.
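To make lines 4, 5, and 7 of Algorithm~\ref{alg:hmm_id} concrete, the following sketch (with random data of our own choosing) performs one $\boldsymbol{M}$-update through the intermediate matrix $\widetilde{\boldsymbol{\varOmega}}$ and cross-checks it against the explicit sum $\sum_{\ell,j}(\varOmega_{n\ell}\varPi_{n\ell kj}^r+\varOmega_{\ell n}\varPi_{\ell njk}^r)$ from the KKT conditions:

```python
import numpy as np

rng = np.random.default_rng(2)
N, K = 8, 3
M = rng.random((N, K)); M /= M.sum(axis=0)
Theta = rng.random((K, K)); Theta /= Theta.sum()
Omega = rng.random((N, N)); Omega /= Omega.sum()

# Lines 4-5: M-update via the intermediate matrix Omega~ (element-wise division).
Ot = Omega / (M @ Theta @ M.T)
M_new = M * (Ot @ M @ Theta.T + Ot.T @ M @ Theta)
M_new /= M_new.sum(axis=0)                        # line 7: rescale columns to sum to one

# Cross-check against sum_{l,j} (Omega_nl Pi_nlkj + Omega_ln Pi_lnjk), then rescale.
Pi = np.einsum('nk,kj,lj->nlkj', M, Theta, M) / (M @ Theta @ M.T)[:, :, None, None]
brute = np.einsum('nl,nlkj->nk', Omega, Pi) + np.einsum('ln,lnjk->nk', Omega, Pi)
brute /= brute.sum(axis=0)
assert np.allclose(M_new, brute)
```

The matrix form avoids ever materializing the $N\times N\times K\times K$ array $\varPi^r$, which is exactly the computational point made below.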
Computationally, we find that any operation that involves $\varPi_{n\ell kj}^r$ can be carried out succinctly by defining the intermediate matrix $\widetilde{\boldsymbol{\varOmega}}=\boldsymbol{\varOmega}/\boldsymbol{M}\boldsymbol{\varTheta}\boldsymbol{M}^{{\!\top\!}}$, where ``$/$'' denotes element-wise division between two matrices of the same size. The per-iteration complexity of Algorithm~\ref{alg:hmm_id} is completely dominated by the operations that involve computing with $\widetilde{\boldsymbol{\varOmega}}$, notably compared with that of \textsc{ThetaUpdate}. In terms of initialization, which is important since we are optimizing a non-convex problem, we propose to use the method by \citet{huang2016nips} to obtain an initialization for $\boldsymbol{M}$; for $\boldsymbol{\varTheta}$, it is best if we start with a feasible point (so that the Newton iterates remain feasible), and a simple choice is scaling the matrix $\boldsymbol{I}+\boldsymbol{\mathit{1}}\!\boldsymbol{\mathit{1}}^{\!\top\!}$ to sum up to one. Finally, we show that this algorithm converges to a stationary point of Problem~\eqref{prob:alg}; the proof, based on \cite{razaviyayn2013unified}, is relegated to the supplementary material. \begin{proposition} Assume \textsc{ThetaUpdate} solves Problem~\eqref{prob:iter} with respect to $\boldsymbol{\varTheta}$ exactly. Then Algorithm~\ref{alg:hmm_id} converges to a stationary point of Problem~\eqref{prob:alg}. \end{proposition} \begin{figure}[t] \centering \includegraphics[width=\linewidth]{sim_exp1} \caption{The total variation difference between the ground truth and estimated transition probability (top) and emission probability (bottom).
The total variation difference of the emission probabilities is calculated as $\frac{1}{2K}\|\boldsymbol{M}_\natural-\boldsymbol{M}_\star\|_1$, since each column of the matrices indicates a (conditional) probability, and the total variation difference is equal to one half of the $L_1$-norm; and similarly for that of the transition probabilities after rescaling the rows of $\boldsymbol{\varTheta}_\natural$ and $\boldsymbol{\varTheta}_\star$ to sum up to one. The result is averaged over 10 random problem instances.} \label{fig:sim_exp1} \end{figure} \section{Validation on Synthetic Data} In this section we validate the identifiability performance on synthetic data. In this case, the underlying transition and emission probabilities are generated synthetically, and we compare them with the estimated ones to evaluate performance. The simulations are conducted in MATLAB using the HMM toolbox, which includes functions to generate observation sequences given transition and emission probabilities, as well as an implementation of the Baum-Welch algorithm \cite{baum1970maximization}, i.e., the EM algorithm, to estimate the transition and emission probabilities using the observations. Unfortunately, even for some moderate problem sizes we considered, the streamlined MATLAB implementation of the Baum-Welch algorithm was not able to execute within a reasonable amount of time, so its performance is not included here. For the baselines, we compare with the plain NMF approach using multiplicative updates \cite{Vanluyten2008} and the tensor CPD approach \cite{Sharan2017} using simultaneous diagonalization with Tensorlab \cite{tensorlab3.0}. Since we work with empirical distributions instead of exact probabilities, the result of the simultaneous diagonalization is not going to be optimal. We therefore use it to initialize the EM algorithm for fitting a nonnegative tensor factorization with KL divergence loss \cite{shashanka2008probabilistic} for refinement.
We focus on the cases when the number of hidden states $K$ is smaller than the number of observed states $N$. As we explained in the introduction, even for this seemingly easier case, it was not previously known whether unique recovery of the HMM parameters can be guaranteed \emph{just from the pair-wise co-occurrence probability}. What is known is that the tensor CPD approach is able to guarantee identifiability given exact triple-occurrence probability. We will demonstrate in this section that it is much harder to obtain accurate triple-occurrence probabilities compared with co-occurrence probabilities. As a result, if the sufficiently scattered assumption holds for the emission probability, the estimated parameters obtained from our method are always more accurate than those obtained from tensor CPD. Fixing $N=100$ and $K=20$, the transition probabilities are synthetically generated from a random exponential matrix of size $K\times K$ followed by row-normalization; for the emission probabilities, approximately 50\% of the entries in the $N\times K$ random exponential matrices are set to zero before normalizing the columns, which has been shown to satisfy the sufficiently scattered condition with very high probability~\cite{huang2015principled}. We let the length of the HMM realization go from $10^6$ to $10^8$, and compare the estimation errors for the transition matrix and emission matrix attained by the aforementioned methods. We show the total variation distance between the ground truth probabilities $\Pr[X_{t+1}|X_t]$ and $\Pr[Y_t|X_t]$ and their estimates $\widehat{\Pr}[X_{t+1}|X_t]$ and $\widehat{\Pr}[Y_t|X_t]$ obtained by the various methods. The result is shown in Figure~\ref{fig:sim_exp1}. As we can see, the proposed method indeed works best, obtaining almost perfect recovery when the sample size reaches $10^8$. The CPD-based method does not work as well since it cannot obtain accurate estimates of the third-order statistics that it needs.
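The data-generating procedure described above can be sketched as follows (a simplified version; the function names and the non-stationary initial state are our own choices, and we assume no emission column is zeroed out entirely, which holds with overwhelming probability at this size):

```python
import numpy as np

def make_hmm(N=100, K=20, zero_frac=0.5, seed=0):
    """Synthetic HMM parameters: exponential entries, sparse emission matrix."""
    rng = np.random.default_rng(seed)
    A = rng.exponential(size=(K, K))
    A /= A.sum(axis=1, keepdims=True)            # transition Pr[X_{t+1}|X_t], row-stochastic
    M = rng.exponential(size=(N, K))
    M[rng.random((N, K)) < zero_frac] = 0.0      # ~50% zeros -> sufficiently scattered w.h.p.
    M /= M.sum(axis=0, keepdims=True)            # emission Pr[Y_t|X_t], column-stochastic
    return A, M

def sample(A, M, T, seed=1):
    """Sample a length-T observation sequence from the HMM (A, M)."""
    rng = np.random.default_rng(seed)
    K, N = A.shape[0], M.shape[0]
    x = rng.integers(K)                          # crude initial state, not the stationary one
    ys = np.empty(T, dtype=int)
    for t in range(T):
        ys[t] = rng.choice(N, p=M[:, x])         # emit an observed symbol
        x = rng.choice(K, p=A[x])                # advance the hidden chain
    return ys
```

Feeding such a sequence into the pairwise co-occurrence estimator then yields the input $\boldsymbol{\varOmega}$ for Algorithm~\ref{alg:hmm_id}.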
Initialized by CPD, EM improves upon it, but its performance is still far from that of the proposed method. NMF does not work well since it lacks identifiability in this case. \section{Application: Hidden Topic Markov Model} Analyzing text data is one of the core application domains of machine learning. There are two prevailing approaches to modeling text data. The classical bag-of-words model assumes that each word is \emph{independently} drawn from certain multinomial distributions. These distributions are different across documents, but can be efficiently summarized by a small number of \emph{topics}, again mathematically modeled as distributions over words; this task is widely known as \emph{topic modeling} \cite{hofmann2001unsupervised,blei2003latent}. However, it is obvious that the bag-of-words representation is oversimplified. The $n$-gram model, on the other hand, assumes that words are conditionally dependent up to a window length of $n$. This seems to be a much more realistic model, although the choice of $n$ is unclear, and is often dictated by memory and computational limitations in practice---since the size of the joint distribution grows exponentially with $n$. What is more, it is somewhat difficult to extract ``topics'' from this model, despite some preliminary attempts \cite{wallach2006topic,wang2007topical}. We propose to model a document as the realization of an HMM, in which the topics are hidden states emitting words, and the states evolve according to a Markov chain, hence the name \emph{hidden topic Markov model} (HTMM). For a set of documents, this means we are working with a \emph{collection} of HMMs. Similar to other topic modeling works, we assume that the topic matrix is shared among all documents, meaning all the given HMMs share the same emission probability.
For the bag-of-words model, each document has a specific topic distribution $\boldsymbol{p}_d$, whereas for our model, each document has its own \emph{topic transition probability} $\boldsymbol{\varTheta}_d$; as per our previous discussion, the row-sums and column-sums of $\boldsymbol{\varTheta}_d$ are the same, and they equal the topic probability for that specific document. The difference is the Markovian assumption on the topics rather than the over-simplifying independence assumption. We can see some immediate advantages of the HTMM. Since the Markovian assumption is only imposed on the topics, which are not exposed to us, the observations (words) are not independent of each other, which agrees with our intuition. On the other hand, we now understand that although word dependencies extend over a wide neighborhood, we only need to work with pair-wise co-occurrence probabilities, or 2-grams. This frees us from picking a window length $n$ as in the $n$-gram model, while maintaining dependencies between words well beyond a neighborhood of $n$ words. It also includes the bag-of-words assumption as a special case: if the topics of the words are indeed independent, the transition probability simply has the special form $\boldsymbol{\mathit{1}}\boldsymbol{p}_d^{\!\top\!}$. The closest work to ours is by \citet{gruber2007hidden}, whose model is also termed hidden topic Markov model. However, they make a simplifying assumption that the transition probability takes the form $(1-\epsilon)\boldsymbol{I} + \epsilon\boldsymbol{\mathit{1}}\boldsymbol{p}_d^{\!\top\!}$, meaning the topic of a word is either the same as that of the previous one, or independently drawn from $\boldsymbol{p}_d$. Both models are special cases of our general HTMM. 
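A quick numerical check of the bag-of-words special case (our own illustration, not from the paper): if the transition matrix is $\boldsymbol{\mathit{1}}\boldsymbol{p}_d^{\top}$, the joint distribution of two consecutive topics factors as $\boldsymbol{p}_d\boldsymbol{p}_d^{\top}$, i.e., the topics are independent, and both marginals recover the document topic distribution.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 5
p = rng.exponential(size=K)
p /= p.sum()                 # topic distribution of one document

trans = np.tile(p, (K, 1))   # transition matrix 1 p^T: every row equals p
joint = np.diag(p) @ trans   # joint distribution of consecutive topics

# Consecutive topics are independent, and both marginals equal p.
print(np.allclose(joint, np.outer(p, p)))   # True
print(np.allclose(joint.sum(axis=1), p))    # True
```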
In order to learn the shared topic matrix $\boldsymbol{M}$, we can use the co-occurrence statistics for the entire corpus: denote the co-occurrence statistics for the $d$-th document as $\boldsymbol{\varOmega}_d$, so that $\E\boldsymbol{\varOmega}_d = \boldsymbol{M}\boldsymbol{\varTheta}_d\boldsymbol{M}^{\!\top\!}$; consequently \[ \boldsymbol{\varOmega} = \frac{1}{\sum_{d=1}^{D}L_d}\sum_{d=1}^{D}L_d\boldsymbol{\varOmega}_d \] is an unbiased estimator for \[ \boldsymbol{M}\boldsymbol{\varTheta}\boldsymbol{M}^{\!\top\!} = \frac{1}{\sum_{d=1}^{D}L_d}\sum_{d=1}^{D}L_d\boldsymbol{M}\boldsymbol{\varTheta}_d\boldsymbol{M}^{\!\top\!}, \] where $L_d$ is the length of the $d$-th document and $\boldsymbol{\varTheta}$ is conceptually a weighted average of all the topic-transition matrices. Then we may apply Algorithm~\ref{alg:hmm_id} to learn the topic matrix. We illustrate the performance of our HTMM by comparing it to three popular bag-of-words topic modeling approaches: pLSA \cite{hofmann2001unsupervised}, LDA \cite{blei2003latent}, and FastAnchor \cite{arora2013practical}, which guarantees identifiability if every topic has a characteristic \emph{anchor word}. Our HTMM model guarantees identifiability if the topic matrix is \emph{sufficiently scattered}, which is a more relaxed condition than the anchor-word one. On the Reuters21578 data set obtained from \cite{reuters21578}, we use the raw documents to construct the word co-occurrence statistics, as well as bag-of-words representations of each document for the baseline algorithms. We use the version in which the stop-words have been removed, which makes the HTMM model more plausible, since most syntactic dependencies have been removed, leaving mainly semantic dependencies. The vocabulary size of Reuters21578 is around $200,000$, making any method relying on triple-occurrence statistics impractical to implement, which is why tensor-based methods are not compared here. 
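The length-weighted corpus aggregation above is straightforward to implement; a minimal sketch (function name `corpus_cooccurrence` is ours) assuming each per-document estimate is already an $N\times N$ array:

```python
import numpy as np

def corpus_cooccurrence(Omegas, lengths):
    """Length-weighted average of per-document co-occurrence estimates;
    an unbiased estimate of M @ Theta @ M.T in the notation above."""
    L = np.asarray(lengths, dtype=float)
    stack = np.stack(Omegas)                       # shape (D, N, N)
    return np.tensordot(L, stack, axes=1) / L.sum()
```

For example, with two documents of lengths 1 and 3, the result is the convex combination $(\boldsymbol{\varOmega}_1 + 3\boldsymbol{\varOmega}_2)/4$.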
\begin{figure}[t] \centering \includegraphics[width=0.55\linewidth]{reuters_coh} \caption{Coherence of the topics.} \label{fig:reuters_coh} \end{figure} Because of page limitations, we only show the quality of the topics learned by the various methods in terms of coherence. Simply put, a higher coherence means more meaningful topics; the concrete definition can be found in \cite{arora2013practical} and in the supplementary material. In Figure~\ref{fig:reuters_coh}, we can see that for the different numbers of topics we tried on the entire dataset, HTMM consistently produces the topics with the highest coherence. Additional evaluations can be found in the supplementary material. \section{Conclusion} We presented an algorithm for learning hidden Markov models in an unsupervised setting, i.e., using only a sequence of observations. Our approach is guaranteed to uniquely recover the ground-truth HMM structure using only pairwise co-occurrence probabilities of the observations, under the assumption that the emission probability is \emph{sufficiently scattered}. Unlike EM, the complexity of the proposed algorithm does not grow with the length of the observation sequence. Compared to tensor-based methods for HMM learning, our approach only requires reliable estimates of pairwise co-occurrence probabilities, which are easier to obtain. We applied our method to topic modeling, assuming each document is a realization of an HMM rather than a simpler bag-of-words model, and obtained improved topic coherence results. We refer the reader to the supplementary material for detailed proofs of the propositions and additional experimental results. \section*{Appendix}
\section{Technical Proofs} \label{sec:appendix} This section collects technical proofs. \subsection{Proof of Lemma \ref{lemmaInit}} \label{A:lemmaInit} Let $[\tilde U, \tilde\Sigma, \tilde V] = \text{rSVD}(\Theta^0)$ be the rank $r$ SVD of the matrix ${\Theta^0}$ and let \[ \tilde\Theta = \tilde U\tilde\Sigma(\tilde V)^{\top} = \arg\min_{{\rm rank}(\Theta) \leq r} \|\Theta - \Theta^0\|_F. \] Since $\tilde\Theta$ is the best rank $r$ approximation to ${\Theta^0}$ and $\Theta^*$ has rank at most $r$, we have \begin{equation} \|\tilde\Theta - \Theta^0\|_F \leq \|\Theta^0 - \Theta^*\|_F. \label{Init3} \end{equation} The triangle inequality gives us \begin{equation} \|\tilde\Theta - \Theta^*\|_F \leq \|\Theta^0 - \Theta^*\|_F + \|\Theta^0 - \tilde\Theta\|_F \leq 2 \|\Theta^0 - \Theta^*\|_F. \label{Init4} \end{equation} Since both $\tilde \Theta$ and $\Theta^*$ are rank $r$ matrices, according to \eqref{eq:ass_init} we have \begin{equation} \|\tilde\Theta - \Theta^*\|_F \leq 2 \|\Theta^0 - \Theta^*\|_F \leq \frac 12 \sigma_r(\Theta^*). \label{Init5} \end{equation} Then, Lemma 5.14 in \cite{Tu2016Low} gives us \begin{equation} \begin{aligned} d^2\bigg( \sbr{\begin{array}{c} \tilde U \tilde \Sigma^{\frac 12} \\ \tilde V \tilde \Sigma^{\frac 12} \end{array}}, \sbr{\begin{array}{c}U^* \\ V^* \end{array}} \bigg) &\leq \frac{2}{\sqrt 2-1} \cdot \frac{\|\tilde\Theta - \Theta^*\|_F^2}{\sigma_r(\Theta^*)} \\ &\leq \frac{2}{\sqrt 2-1} \cdot \frac{4}{\sigma_r(\Theta^*)} \cdot \frac{I_0^2}{25\xi^2} \cdot \sigma_r(\Theta^*)\\ &\leq \frac{I_0^2}{\xi^2} \end{aligned} \end{equation} where the second inequality comes from the initialization condition \eqref{eq:ass_init}. 
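Numerically, the rank-$r$ SVD initialization analyzed in this lemma amounts to the following sketch (the helper name `rank_r_init` is ours): the truncated SVD gives the best rank-$r$ approximation by the Eckart--Young theorem, and splitting $\tilde\Sigma$ evenly yields the balanced factors $\tilde U\tilde\Sigma^{1/2}$, $\tilde V\tilde\Sigma^{1/2}$ used in the proof.

```python
import numpy as np

def rank_r_init(Theta0, r):
    """Balanced factors (U~ Sigma^{1/2}, V~ Sigma^{1/2}) from the best
    rank-r approximation of Theta0, computed via truncated SVD."""
    U, s, Vt = np.linalg.svd(Theta0, full_matrices=False)
    root = np.sqrt(s[:r])
    return U[:, :r] * root, Vt[:r].T * root
```

When $\Theta^0$ itself has rank at most $r$, the product of the returned factors recovers $\Theta^0$ exactly.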
Finally, Lemma 3.3 in \cite{Li2016Stochastic} gives \begin{equation} \begin{aligned} d^2\bigg( \sbr{\begin{array}{c} U^0\\ V^0 \end{array}}, \sbr{\begin{array}{c}U^* \\ V^* \end{array}} \bigg) &\leq \xi^2 d^2\bigg( \sbr{\begin{array}{c} \tilde U \tilde \Sigma^{\frac 12} \\ \tilde V \tilde \Sigma^{\frac 12} \end{array}}, \sbr{\begin{array}{c}U^* \\ V^* \end{array}} \bigg) \leq I_0^2. \end{aligned} \end{equation} \subsection{Proof of Lemma \ref{lemma:iteration_contraction}} \label{A:lemma:iteration_contraction} For notation simplicity, let $Z = \sbr{\begin{array}{c}U \\ V \end{array}}$ denote the current iterate and let $Z^+ = \sbr{\begin{array}{c}U^+ \\ V^+ \end{array}}$ denote the next iterate. Let $S_U = \Scal(U) \cup \Scal(U^+) \cup \Scal(U^*)$ and $S_V = \Scal(V) \cup \Scal(V^+) \cup \Scal(V^*)$. With some abuse of notation, we define the index set $S_Z = S_U \cup S_V$ to represent coordinates of $Z$ corresponding to $U_{S_U}$ and $V_{S_V}$. For an index set $S$, let $\Pcal(U, S) = \sbr{\begin{array}{c}U_S \\ 0_{S^C}\end{array}}$. Let $G(U,V) = f(U,V)+g(U,V)$. Finally, let $\Delta_U = U - U^*\hat O$, $\Delta_V = V - V^*\hat O$ and $\Delta_Z = Z - Z^*\hat O$. With these notations, we can write \[ U^+ = {\rm Hard}(U - \eta \cdot \nabla G_U(U, V), s_1) = {\rm Hard}\rbr{U - \eta\cdot \Pcal\rbr{\nabla G_U(U, V),S_U}, s_1} \] and \[ V^+ = {\rm Hard}(V - \eta \cdot \nabla G_V(U, V), s_2) = {\rm Hard}\rbr{V - \eta \cdot \Pcal\rbr{\nabla G_V(U, V), S_V}, s_2}. \] Let $\hat O \in \Ocal(r)$ be such that \[ d^2(Z, Z^*) = \|U - U^*\hat O \|_F^2 + \|V - V^*\hat O \|_F^2. 
\] We have that \begin{equation} \begin{aligned} d^2(Z^+, Z^*) &= \min_{O \in \Ocal(r)} \bigg\| \sbr{ \begin{array}{c} U^+ \\ V^+ \end{array} } - \sbr{ \begin{array}{c} U^*{O} \\ V^*{O} \end{array} }\bigg\|_F^2 \\ &\leq \bigg\| \sbr{ \begin{array}{c} {\rm Hard}\rbr{U - \eta\cdot \Pcal\rbr{\nabla G_U(U, V), S_U}, s_1} \\ {\rm Hard}\rbr{V - \eta\cdot \Pcal\rbr{\nabla G_V(U, V), S_V}, s_2} \end{array} } - \sbr{ \begin{array}{c} U^*\hat{O} \\ V^*\hat{O} \end{array} }\bigg\|_F^2 \\ &\leq \rbr{1 + \frac{2}{\sqrt{c-1}}} \big\|Z - \eta \cdot \Pcal\rbr{\nabla G_Z(Z),S_Z} - Z^*\hat{O}\big\|_F^2, \end{aligned} \end{equation} where the last inequality follows from Lemma~3.3 of \cite{Li2016Stochastic}. Therefore, \begin{equation} \label{eq:dz_bound} d^2(Z^+, Z^*) \leq \rbr{1 + \frac{2}{\sqrt{c - 1}}} \big[ d^2(Z, Z^*) - 2\eta \cdot (T_1+R_1) + 2\eta^2 \cdot (T_2+R_2) \big] \end{equation} where $T_1 = \dotp{\Pcal\rbr{\nabla f_Z(Z),S_Z}}{\Delta_Z}$, $T_2 = \big\|\sbr{\nabla f_Z(Z)}_{S_Z}\big\|_F^2$, $R_1 = \dotp{\Pcal\rbr{\nabla g_Z(Z),S_Z}}{\Delta_Z}$, and $R_2 = \big\|\sbr{\nabla g_Z(Z)}_{S_Z}\big\|_F^2$. For the term $T_1$, we have \begin{equation} \begin{aligned} \label{eq:T_1} T_1 & = \bdotp{\Pcal\rbr{\nabla f(UV^\top)V, S_U}}{\Delta_U} + \bdotp{\Pcal\rbr{\nabla f(UV^\top)^\top U, S_V}}{\Delta_V} \\ & = \underbrace{ \bdotp{\sbr{\nabla f(UV^\top) - \nabla f(U^*{V^*}^\top)}_{S_U,S_V}}{\sbr{UV^\top - U^*{V^*}^\top}_{S_U,S_V}}}_{T_{11}} \\ & \quad + \underbrace{ \bdotp{\sbr{\nabla f(U^*{V^*}^\top)}_{S_U,S_V}}{\sbr{UV^\top - U^*{V^*}^\top}_{S_U,S_V}} }_{T_{12}} \\ & \quad + \underbrace{ \bdotp{\sbr{\nabla f(UV^\top)}_{S_U,S_V}}{\sbr{\Delta_U\Delta_V^\top}_{S_U,S_V}} }_{T_{13}}. 
\end{aligned} \end{equation} Theorem 2.1.11 of \cite{Nesterov2013Introductory} gives \begin{equation} \begin{aligned} \label{eq:T_11} T_{11} & \geq \frac{L \cdot\mu }{L + \mu }\cdot\Big\|{UV^\top - U^*{V^*}^\top}\Big\|_F^2 + \frac{1}{L +\mu }\cdot\Big\|\sbr{\nabla f(UV^\top) - \nabla f(U^*{V^*}^\top)}_{S_U,S_V}\Big\|_F^2. \end{aligned} \end{equation} Next, we have \begin{equation} \begin{aligned} \label{eq:T12} T_{12} & \geq - \abr{ \bdotp{\sbr{\nabla f(U^*{V^*}^\top)}_{S_U,S_V}}{\sbr{UV^\top - U^*{V^*}^\top}_{S_U,S_V}} } \\ & \stackrel{(i)}{\geq} - e_{\rm stat}\cdot \Big\|{UV^\top - U^*{V^*}^\top}\Big\|_F \\ & \stackrel{(ii)}{\geq} - \frac{1}{2}\frac{L + \mu }{L \cdot\mu }e_{\rm stat}^2 - \frac{1}{2}\frac{L \cdot\mu }{L + \mu }\cdot\Big\|{UV^\top - U^*{V^*}^\top}\Big\|_F^2, \end{aligned} \end{equation} where $(i)$ follows from the definition of the statistical error and $(ii)$ uses Young's inequality $ab \leq \frac{a^2}{2\epsilon} + \frac{\epsilon b^2}{2}$, valid for $a,b,\epsilon > 0$. Therefore, \begin{equation} \begin{aligned} \label{eq:T_11_plus_T_12} T_{11} + T_{12} & \geq \frac{1}{2}\frac{L \cdot\mu }{L + \mu }\cdot\Big\|{UV^\top - U^*{V^*}^\top}\Big\|_F^2 + \frac{1}{L +\mu }\cdot\Big\|\sbr{\nabla f(UV^\top) - \nabla f(U^*{V^*}^\top)}_{S_U,S_V}\Big\|_F^2 \\ &\qquad - \frac{1}{2}\frac{L + \mu }{L \cdot\mu }\cdot e_{\rm stat}^2. 
\end{aligned} \end{equation} Finally, for the term $T_{13}$, we have \begin{equation} \begin{aligned} T_{13} & \geq - \abr{ \bdotp{\sbr{\nabla f(UV^\top)}_{S_U,S_V}}{\sbr{\Delta_U\Delta_V^\top}_{S_U,S_V}} } \\ & \geq - \abr{ \bdotp{\sbr{\nabla f(U^*{V^*}^\top)}_{S_U,S_V}}{\sbr{\Delta_U\Delta_V^\top}_{S_U,S_V}} } - \abr{ \bdotp{\sbr{\nabla f(UV^\top) - \nabla f(U^*{V^*}^\top)}_{S_U,S_V}}{\sbr{\Delta_U\Delta_V^\top}_{S_U,S_V}} } \\ & \geq - \rbr{e_{\rm stat} + \Big\|\sbr{\nabla f(UV^\top) - \nabla f(U^*{V^*}^\top)}_{S_U,S_V}\Big\|_F}\cdot d^2(Z, Z^*), \end{aligned} \end{equation} where the last inequality follows from the definition of statistical error and the observation $\|\Delta_U\Delta_V^\top\|_F \leq \|\Delta_V\|_F\cdot \|\Delta_U\|_F \leq d^2(Z, Z^*)$. Under the assumptions, \begin{equation} d^2(Z, Z^*) \leq {\frac{4\mu_{\min}\sigma_r(\Theta^*)}{5(\mu +L )}} \end{equation} and therefore \begin{equation} \begin{aligned} \label{eq:T_13} T_{13} & \geq - \rbr{e_{\rm stat} + \Big\|\sbr{\nabla f(UV^\top) - \nabla f(U^*{V^*}^\top)}_{S_U,S_V}\Big\|_F}\cdot \sqrt{\frac{4\mu_{\min}\sigma_r(\Theta^*)}{5(\mu +L )}} \cdot d(Z, Z^*) \\ & \geq -\frac{1}{2(\mu +L )}\cdot\rbr{e_{\rm stat}^2 + \Big\|\sbr{\nabla f(UV^\top) - \nabla f(U^*{V^*}^\top)}_{S_U,S_V}\Big\|_F^2} - \frac 45 \mu_{\min} \sigma_r(\Theta^*) \cdot d^2(Z, Z^*). \end{aligned} \end{equation} Combining \eqref{eq:T_11_plus_T_12} and \eqref{eq:T_13} we have \begin{equation} \begin{aligned} \label{eq:T_1_simplified} T_{1} & \geq \underbrace{\frac{1}{2}\frac{L \cdot\mu }{L + \mu }\cdot\Big\|{UV^\top - U^*{V^*}^\top}\Big\|_F^2}_{T_{1a}} - \frac 45 \mu_{\min} \sigma_r(\Theta^*) \cdot d^2(Z, Z^*) - \frac{1}{2}\rbr{\frac{L + \mu }{L \cdot\mu }+\frac{1}{L +\mu }}\cdot e_{\rm stat}^2 \\ &\qquad + \frac{1}{2(L +\mu )}\cdot\Big\|\sbr{\nabla f(UV^\top) - \nabla f(U^*{V^*}^\top)}_{S_U,S_V}\Big\|_F^2 . 
\end{aligned} \end{equation} For the term $T_2$, we have \begin{equation} \begin{aligned} \Big\|[\nabla f(U^*{V^*}^\top)V]_{S_U}\Big\|_F &= \sup_{\|U_{S_U}\|_F=1} \tr\big(\nabla f(U^*{V^*}^\top)VU_{S_U}^\top\big) \\ &= \sup_{\|U_{S_U}\|_F=1} \langle \nabla f(U^*{V^*}^\top), U_{S_U}V^\top \rangle \\ &\leq e_{\text{stat}} \cdot \|V\|_2. \end{aligned} \end{equation} We then have \begin{equation} \begin{aligned} \Big\|[\nabla f(UV^\top)V]_{S_U}\Big\|_F^2 &= \bignorm{[\nabla f(UV^\top)V-\nabla f(U^*{V^*}^\top)V+\nabla f(U^*{V^*}^\top)V]_{S_U}}_F^2 \\ &\leq 2\Big\|[\nabla f(UV^\top)V-\nabla f(U^*{V^*}^\top)V]_{S_U}\Big\|_F^2 + 2\Big\|[\nabla f(U^*{V^*}^\top)V]_{S_U}\Big\|_F^2 \\ &\leq 2\Big\|\sbr{\nabla f(UV^\top)-\nabla f(U^*{V^*}^\top)}_{S_U, S_V}\Big\|_F^2 \cdot \|V\|_2^2 + 2e_{\text{stat}}^2 \cdot \|V\|_2^2 \\ &\leq 2\bigg( \Big\|\sbr{\nabla f(UV^\top)-\nabla f(U^*{V^*}^\top)}_{S_U, S_V}\Big\|_F^2 + e_{\text{stat}}^2 \bigg) \cdot \|Z\|_2^2, \end{aligned} \end{equation} where the first inequality follows since $\|A+B\|_F^2 \leq 2\|A\|_F^2 + 2\|B\|_F^2$, and the last inequality follows since $\max(\|U\|_{2},\|V\|_{2}) \leq \|Z\|_{2}$. Combining the results, we have \begin{equation} \label{eq:T_2} \begin{aligned} T_2 &= \Big\|[\nabla f(UV^\top)V]_{S_U}\Big\|_F^2 + \Big\|[\nabla f(UV^\top)^\top U]_{S_V}\Big\|_F^2 \\ & \leq 4\cdot \rbr{ \Big\|\sbr{\nabla f(UV^\top)-\nabla f(U^*{V^*}^\top)}_{S_U, S_V}\Big\|_F^2 + e_{\text{stat}}^2 } \cdot \|Z\|_{2}^2. \end{aligned} \end{equation} For $R_1$, Lemma B.1 of \cite{park2016finding} gives \begin{multline} \label{eq:R_1} R_1 \geq \underbrace{\frac 12 \|\nabla g\|_F^2}_{R_{11}} + \underbrace{\frac 18 \Big[\big\|UU^\top-U^*{U^*}^\top\big\|_F^2 + \big\|VV^\top-V^*{V^*}^\top\big\|_F^2 - 2\big\|UV^\top-U^*{V^*}^\top\big\|_F^2\Big]}_{R_{12}} \\ - \underbrace{\frac 12 \|\nabla g\|_2 \cdot \|\Delta Z\|_F^2}_{R_{13}}. 
\end{multline} For $R_{12}$, we have that \begin{equation} \begin{aligned} \label{eq:R_12_plus_T_1a} R_{12} + T_{1a} & = R_{12} + \frac{1}{8}\frac{L \cdot\mu }{L + \mu }\cdot 4\Big\|{UV^\top - U^*{V^*}^\top}\Big\|_F^2 \\ &\geq \mu_{\min} \Big[\big\|UU^\top-U^*{U^*}^\top\big\|_F^2 + \big\|VV^\top-V^*{V^*}^\top\big\|_F^2 + 2\big\|UV^\top-U^*{V^*}^\top\big\|_F^2\Big] \\ &= \mu_{\min} \big\|ZZ^\top - Z^*{Z^*}^\top\big\|_F^2 \\ &\geq \frac 45 \mu_{\min} \sigma_r^2(Z^*) \cdot d^2(Z,Z^*) \\ &= \frac 85 \mu_{\min} \sigma_r(\Theta^*) \cdot d^2(Z,Z^*), \end{aligned} \end{equation} where the first inequality follows from the definition of $\mu_{\min}$, the second inequality follows from Lemma 5.4 of \cite{Tu2016Low}, and the last equality follows from $\sigma_r(Z^*) = \sqrt{2\sigma_r(\Theta^*)}$. For $R_{13}$, since $\Delta Z$ satisfies \eqref{eq:bound_I_0}, we have \begin{equation} \begin{aligned} \label{eq:R_13} R_{13} & \leq \frac 12 \|\nabla g\|_2 \cdot \|\Delta Z\|_F \cdot \sqrt{ \frac{8}{5} \mu_{\min}\sigma_r(\Theta^*) }\\ &\leq \frac{2}{5} \mu_{\min}\sigma_r(\Theta^*) \cdot d^2(Z,Z^*) + \frac 14 \|\nabla g\|_F^2. \end{aligned} \end{equation} Combining \eqref{eq:T_1_simplified}, \eqref{eq:R_1}, \eqref{eq:R_12_plus_T_1a}, and \eqref{eq:R_13}, we obtain \begin{equation} \begin{aligned} \label{eq:T_1_plus_R_1} T_1 + R_1 &\geq \frac{2}{5} \mu_{\min}\sigma_r(\Theta^*) \cdot d^2(Z,Z^*) + \frac 14 \|\nabla g\|_F^2 - \frac{1}{2}\rbr{\frac{L + \mu }{L \cdot\mu }+\frac{1}{L +\mu }}\cdot e_{\rm stat}^2 \\ &\qquad + \frac{1}{2(L +\mu )}\cdot\Big\|\sbr{\nabla f(UV^\top) - \nabla f(U^*{V^*}^\top)}_{S_U,S_V}\Big\|_F^2. \end{aligned} \end{equation} For $R_2$, we have \begin{equation} \begin{aligned} \label{eq:R_2} R_2 = \|U \nabla g\|_F^2 + \|V \nabla g\|_F^2 \leq 2\|Z\|_2^2 \cdot \|\nabla g\|_F^2. 
\end{aligned} \end{equation} Combining \eqref{eq:T_2}, \eqref{eq:T_1_plus_R_1}, and \eqref{eq:R_2}, we have \begin{equation} \begin{aligned} \label{eq:d2_intermediate} d^2(Z, Z^*) - & 2\eta \cdot (T_1+R_1) + 2\eta^2 \cdot (T_2+R_2) \\ & \leq \rbr{1 - \eta\cdot \frac{2}{5} \mu_{\min}\sigma_r(\Theta^*) }\cdot d^2(Z, Z^*) \\ & \qquad + \eta\rbr{4\eta\cdot\|Z\|_2^2 - \frac{1}{2(L +\mu )}}\cdot\Big\|\sbr{\nabla f(UV^\top) - \nabla f(U^*{V^*}^\top)}_{S_U,S_V}\Big\|_F^2 \\ &\qquad + \eta\bigg(2\eta\cdot\|Z\|_2^2 - \frac 14 \bigg) \|\nabla g\|_F^2 \\ &\qquad + \eta \rbr{ \frac{L + \mu }{2\mu L }+\frac{1}{2(L +\mu )} + 4\eta\cdot\|Z\|_2^2 }\cdot e_{\rm stat}^2. \end{aligned} \end{equation} Under the choice of the step size, \begin{equation} \label{eq:choice_of_step_size} \eta \leq \frac{1}{8\|Z\|_2^2} \cdot \min\Big\{\frac{1}{2(\mu +L )}, 1\Big\}, \end{equation} the second term and third term in \eqref{eq:d2_intermediate} are non-positive and we drop them to get \begin{equation} \begin{aligned} \label{eq:d2_intermediate_simplified} d^2(Z, Z^*) - & 2\eta \cdot (T_1+R_1) + 2\eta^2 \cdot (T_2+R_2) \\ & \leq \rbr{1 - \eta\cdot \frac{2}{5} \mu_{\min}\sigma_r(\Theta^*) }\cdot d^2(Z, Z^*) + \eta\cdot\frac{L + \mu }{L \cdot\mu }\cdot e_{\rm stat}^2. \end{aligned} \end{equation} Plugging \eqref{eq:d2_intermediate_simplified} into \eqref{eq:dz_bound} we finish the proof. \subsection{Proof of Lemma \ref{lemma:step_size_constant}} \label{A:lemma:step_size_constant} Comparing \eqref{eq:eta_constant} and \eqref{eq:step_size_condition} we see that we only need to show $\|Z\|_2^2 \leq 2\|Z_0\|_2^2$. Let $O \in \Ocal(r)$ be such that \[ d^2(Z, Z^*) = \|U - U^* O \|_F^2 + \|V - V^* O \|_F^2. 
\] By the triangle inequality we have \begin{equation} \begin{aligned} \label{eq:98} \|Z\|_2 &\leq \|Z^*O\|_2 + \|Z - Z^*O\|_2 \\ &\leq \|Z^*\|_2 + \sqrt{\frac 45 \mu_{\min}\sigma_r(\Theta^*) \cdot \frac{1}{\mu +L }} \\ &\leq \|Z^*\|_2 + \sqrt{\frac 45 \cdot \frac 18 \frac{\mu L }{\mu +L } \cdot \frac 12\sigma_r^2(Z^*) \cdot \frac{1}{\mu +L }} \\ &\leq \|Z^*\|_2 + \sqrt{\frac{1}{80} \sigma_r^2(Z^*) } \\ &\leq \frac 98\|Z^*\|_2, \end{aligned} \end{equation} where the third inequality follows from the definition of $\mu_{\min}$ and $\sigma_r^2(Z^*) = 2\sigma_r(\Theta^*)$, and the fourth inequality follows from $\frac{ab}{(a+b)^2} \leq \frac 14$. Similarly, we have \begin{equation} \begin{aligned} \label{eq:78} \|Z_0\|_2 &\geq \|Z^*O\|_2 - \|Z_0 - Z^*O\|_2 \\ &\geq \|Z^*\|_2 - \sqrt{\frac{1}{80} \sigma_r^2(Z^*) } \\ &\geq \frac 78\|Z^*\|_2. \end{aligned} \end{equation} Combining \eqref{eq:98} and \eqref{eq:78} we have \begin{equation} \begin{aligned} \|Z\|_2 \leq \frac98 \cdot \frac 87 \|Z_0\|_2 \leq \sqrt{2} \|Z_0\|_2, \end{aligned} \end{equation} which completes the proof. \subsection{Proof of Lemma \ref{lemma:e_stat}} \label{A:lemma:e_stat} Let $\Omega(s, m)$ denote the collection of subsets of $\cbr{1,\ldots,m}$ of size $s$. Let $S_U \in \Omega(s_1, p)$ and $S_V \in \Omega(s_2, k)$ be fixed. With some abuse of notation, let $\Wcal(S_U) = \{U \in \RR^{p \times 2r} \mid \|U_{S_U^c}\| = 0, \|U_{S_U}\|_F = 1\}$ and $\Wcal(S_V) = \{V \in \RR^{k \times 2r} \mid \|V_{S_V^c}\| = 0, \|V_{S_V}\|_F = 1\}$. Let $\Ncal_U(\epsilon)$ and $\Ncal_V(\epsilon)$ be $\epsilon$-nets of $\Wcal(S_U)$ and $\Wcal(S_V)$, respectively. 
Using Lemma 10 and Lemma 11 of \cite{Vu2011Singular}, we know that $|\Ncal_U(\epsilon)| \leq (3\epsilon^{-1})^{2r\cdot s_1}$, $|\Ncal_V(\epsilon)| \leq (3\epsilon^{-1})^{2r\cdot s_2}$, and \begin{equation} \begin{aligned} \sup_{\substack{U\in\Wcal(S_U)\\V\in\Wcal(S_V)}} \frac 1n \tr\big(E^\top XUV^\top\big) &\leq (1-\epsilon)^{-2} \max_{\substack{U \in \Ncal_U(\epsilon) \\ V \in \Ncal_V(\epsilon)}} \frac 1n \tr\big(E^\top XUV^\top\big). \end{aligned} \end{equation} For fixed $U$ and $V$, the random variable $\tr\big(E^\top XUV^\top\big)$ is sub-Gaussian with variance proxy $\sigma^2\| X_{S_U}U_{S_U}V_{S_V}^\top \|_F^2$. This variance proxy can be bounded as \[ \sigma^2\| X_{S_U}U_{S_U}V_{S_V}^\top \|_F^2 \leq \sigma^2\cdot \max_{S_U \in \Omega(s_1, p)}\|(X^\top X)_{S_US_U}\|_2 = n \sigma^2 \bar\kappa(s_1). \] Using a tail bound for sub-Gaussian random variables, we get \[ \frac 1n \tr\big(E^\top XU_{S_U}V_{S_V}^\top\big) \leq 2\sigma \sqrt{ \frac{\bar\kappa(s_1)\log\frac1\delta}{n} } \] with probability at least $1-\delta$. To obtain an upper bound on $e_{\text{stat}}$, we apply the union bound over $\Omega(s_1, p)$, $\Omega(s_2, k)$, $\Ncal_U(\epsilon)$, and $\Ncal_V(\epsilon)$. Setting $\epsilon=\frac12$, we obtain \[ e_{\text{stat}} \leq 8\sigma \sqrt{\frac{\bar\kappa(s_1)}{n} \Big(s_1\log p + s_2\log k + 2r (s_1+s_2)\log 6 + \log \frac1\delta \Big)} \] with probability at least $1-\delta$. Taking $\delta = (p \vee k)^{-1}$ completes the proof. \section{Experiment} \label{sec:experiment} In this section we demonstrate the effectiveness of the GDT algorithm through extensive experiments\footnote{The code is available at \url{http://home.uchicago.edu/~ming93/research.html}}. Section \ref{sec:simulation} shows results on synthetic datasets, while Sections \ref{sec:real1} and \ref{sec:real2} show results on two real datasets. \subsection{Synthetic Datasets} \label{sec:simulation} We present numerical experiments on the MTL problem to support our theoretical analysis. 
Throughout this section, we generate the instances by sampling all entries of the design matrix $X$, all nonzero entries of the true signals $U^*$ and $V^*$, and all entries of the noise matrix $E$ as i.i.d. standard normal. \vspace{2mm} {\bf Linear convergence.} We first demonstrate our linear convergence result. Because it is hard to quantify linear convergence in the presence of the statistical error, we instead demonstrate linear convergence in two special cases. First, as we discussed after Corollary~\ref{stat_error_thm}, suppose there is no error term $E$ in the model \eqref{eqn:MTL_model}; then Algorithm \ref{algo:AltGD} converges linearly to the true coefficient matrix $\Theta^*$. In this case we choose $p = 100, k = 50, r = 8, s_1^* = s_2^* = 10$, and the estimation error is shown in Figure \ref{linear_no_error}. Second, as we discussed at the end of Section \ref{sec:main-result}, suppose there are no row or column sparsity constraints on $\Theta^*$; then Algorithm \ref{algo:AltGD} converges linearly to the global minimum $\hat \Theta$. In this case we are more likely to be in low dimensions, so we choose $p = 50$. The estimation error is shown in Figure \ref{linear_low}. We see that in both cases GDT exhibits a linear convergence rate. 
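For intuition, one iteration of the alternating gradient scheme with hard thresholding can be sketched as below. This is our own simplified reading, not the paper's exact implementation: we take $f(U,V)=\frac{1}{2n}\|Y-XUV^\top\|_F^2$ and assume the balancing regularizer $g(U,V)=\frac{1}{8}\|U^\top U-V^\top V\|_F^2$, with the `Hard` operator interpreted as keeping the rows of largest $\ell_2$ norm (matching the row-sparsity setting).

```python
import numpy as np

def hard_rows(A, s):
    """Keep the s rows of A with largest l2 norm; zero out the rest."""
    out = np.zeros_like(A)
    keep = np.argsort(-np.linalg.norm(A, axis=1))[:s]
    out[keep] = A[keep]
    return out

def gdt_step(U, V, X, Y, eta, s1, s2):
    """One gradient step with hard thresholding for
    f = ||Y - X U V^T||_F^2 / (2n) plus g = ||U^T U - V^T V||_F^2 / 8."""
    n = X.shape[0]
    grad_f = X.T @ (X @ U @ V.T - Y) / n   # nabla f evaluated at U V^T
    D = U.T @ U - V.T @ V                  # balancing-term residual
    U_new = hard_rows(U - eta * (grad_f @ V + 0.5 * U @ D), s1)
    V_new = hard_rows(V - eta * (grad_f.T @ U - 0.5 * V @ D), s2)
    return U_new, V_new
```

On a small noiseless instance started near the truth, iterating this step drives the estimation error of $UV^\top$ down, consistent with the linear-convergence experiment described above.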
\begin{centering} \begin{figure*}[tbp] \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=0.9\textwidth]{linear_no_error_new.eps} \caption{No error case} \label{linear_no_error} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=0.9\textwidth]{linear_low_new.eps} \caption{No sparsity case} \label{linear_low} \end{minipage} \end{figure*} \end{centering} \vspace{2mm} {\bf Estimation accuracy.} We compare our algorithm with the Double Projected Penalization (DPP) method in \cite{Ma2014Adaptive}, the thresholding SVD (TSVD) method in \cite{Ma2014Learning}, the exclusive extraction algorithm (EEA) in \cite{Chen2011Reduced}, the two methods (denoted by RCGL and JRRS) in \cite{Bunea2012Joint}, and the standard multitask learning method (MTL, with $L_{2,1}$ penalty). Here we set $n=50,p=100,k=50,r=8,s_1^* = s_2^* = 10$. The reason we choose a relatively small scale is that many of the other methods do not scale to high dimensions, as will be shown in Table \ref{runningtime}. We will show the effectiveness of our method in high dimensions later. Except for standard MTL, all the other methods need an estimate of the rank to proceed, for which we apply the rank estimator in \cite{Bunea2011Optimal}. For the methods that rely on tuning parameters, we generate an independent validation set to select the tuning parameters. We consider two coefficient matrix settings, one only row sparse and the other both row sparse and column sparse. We also consider strong-signal and weak-signal settings. The strong-signal setting is described above; for the weak-signal setting, we divide the true $\Theta^*$ by 5, resulting in a signal for which recovering the true non-zero variables becomes much more difficult. 
Table \ref{row} (strong signal, row sparse), Table \ref{rowcolumn} (strong signal, row \emph{and} column sparse), Table \ref{row_weak} (weak signal, row sparse) and Table \ref{rowcolumn_weak} (weak signal, row \emph{and} column sparse) report the mean and the standard deviation of prediction errors, estimation errors and size of selected models based on 50 replications in each setting. We can see that in all the cases GDT has the lowest estimation error and prediction error. When the signal is weak, GDT may underselect the number of nonzero rows/columns, but it still has the best performance. \vspace{2mm} {\bf Running time.} We then compare the running time of all these methods. We fix a baseline model size $n=50,p=80,k=50,r=4,s_1^* = s_2^* = 10$, and set a free parameter $\zeta$. For $\zeta = \{1,5,10,20,50,100\}$, each time we increase $n,p,s_1^*,s_2^*$ by a factor of $\zeta$ and increase $k,r$ by a factor of $\lfloor\sqrt\zeta\rfloor$ and record the running time (in seconds) of each method for a fixed tolerance level, whenever possible. If for some $\zeta$ the algorithm does not converge in 2 hours then we simply record ``$>$2h'' and no longer increase $\zeta$ for that method. Table \ref{runningtime} summarizes the results. We can see that GDT is fast even in very high dimension, while all of the other methods are computationally expensive. We note that even though GDT uses the lasso estimator in the initialization step, all the variables are used in the subsequent iterations and not only the ones selected by the lasso. In particular, the speed of the method does not come from the initialization step. 
\begin{table}[tp] \caption{Strong signal, Row sparse} \begin{center} { \begin{tabular}{cccc} \hline & Estimation error & Prediction error & $|$Row support$|$ \\ \hline {\bf GDT} & 0.0452 $\pm$ 0.0110 & 1.1060 $\pm$ 0.0248 & 10.16 $\pm$ 0.51 \\ DPP & 0.0584 $\pm$ 0.0113 & 1.1290 $\pm$ 0.0357 & 52.64 $\pm$ 15.2 \\ TSVD & 0.3169 $\pm$ 0.1351 & 2.4158 $\pm$ 0.9899 & 25.62 $\pm$ 8.03 \\ EEA & 0.3053 $\pm$ 0.0998 & 1.2349 $\pm$ 0.0362 & 84.28 $\pm$ 6.70 \\ RCGL & 0.0591 $\pm$ 0.0148 & 1.1101 $\pm$ 0.0168 & 49.60 $\pm$ 10.6 \\ JRRS & 0.0877 $\pm$ 0.0227 & 1.1857 $\pm$ 0.0214 & 12.26 $\pm$ 2.02 \\ MTL & 0.0904 $\pm$ 0.0243 & 1.1753 $\pm$ 0.0204 & 73.40 $\pm$ 2.67 \\ \hline \end{tabular} } \end{center} \label{row} \end{table}% \begin{table}[tp] \caption{Strong signal, Row sparse and column sparse} \begin{center} { \begin{tabular}{ccccc} \hline & Estimation error & Prediction error & $|$Row support$|$ & $|$Column support$|$ \\ \hline {\bf GDT} & 0.0624 $\pm$ 0.0121 & 1.0353 $\pm$ 0.0167 & 10.24 $\pm$ 0.65 & 10.24 $\pm$ 0.68\\ DPP & 0.0921 $\pm$ 0.0251 & 1.0790 $\pm$ 0.0295 & 54.10 $\pm$ 18.25 & 10.38 $\pm$ 0.60\\ TSVD & 0.3354 $\pm$ 0.1053 & 1.7600 $\pm$ 0.3415 & 28.66 $\pm$ 7.27 & 30.88 $\pm$ 8.46 \\ EEA & 0.2604 $\pm$ 0.1159 & 1.1023 $\pm$ 0.0220 & 64.44 $\pm$ 9.88 & 12.10 $\pm$ 2.69 \\ RCGL & 0.1217 $\pm$ 0.0325 & 1.1075 $\pm$ 0.0174 & 42.06 $\pm$ 7.93 & 50 $\pm$ 0 \\ JRRS & 0.1682 $\pm$ 0.0410 & 1.1612 $\pm$ 0.0174 & 13.96 $\pm$ 4.69 & 50 $\pm$ 0 \\ MTL& 0.1837 $\pm$ 0.0499 & 1.1652 $\pm$ 0.0160 & 73.50 $\pm$ 3.17 & 50 $\pm$ 0\\\hline \end{tabular} } \end{center} \label{rowcolumn} \end{table}% \begin{table}[tp] \caption{Weak signal, Row sparse} \begin{center} { \begin{tabular}{cccc} \hline & Estimation error & Prediction error & $|$Row support$|$ \\ \hline {\bf GDT} & 0.2328 $\pm$ 0.0474 & 1.1282 $\pm$ 0.0231 & 10.08 $\pm$ 0.56\\ DPP &0.2954 $\pm$ 0.0640 & 1.1624 $\pm$ 0.0315 & 47.26 $\pm$ 11.7\\ TSVD &0.5842 $\pm$ 0.1020 & 1.4271 $\pm$ 0.0903 & 30.81 $\pm$ 4.72\\ EEA & 
0.3802 $\pm$ 0.0787 & 1.1647 $\pm$ 0.0206 & 46.16 $\pm$ 8.97\\ RCGL & 0.2775 $\pm$ 0.0605 & 1.1493 $\pm$ 0.0291 & 37.92 $\pm$ 14.4\\ JRRS & 0.3600 $\pm$ 0.0752 & 1.1975 $\pm$ 0.0392 & 11.74 $\pm$ 1.35 \\ MTL& 0.3577 $\pm$ 0.0721 & 1.2140 $\pm$ 0.0418 & 69.92 $\pm$ 12.8\\ \hline \end{tabular} } \end{center} \label{row_weak} \end{table}% \begin{table}[tp] \caption{Weak signal, Row sparse and column sparse} \begin{center} { \begin{tabular}{ccccc} \hline & Estimation error & Prediction error & $|$Row support$|$ & $|$Column support$|$ \\ \hline {\bf GDT} & 0.3173 $\pm$ 0.0949 & 1.0380 $\pm$ 0.0218 & 9.56 $\pm$ 1.56 & 10.06 $\pm$ 1.21\\ DPP &0.3899 $\pm$ 0.0737 & 1.0580 $\pm$ 0.0216 & 50.66 $\pm$ 12.86 & 13.52 $\pm$ 5.02\\ TSVD &0.6310 $\pm$ 0.1074 & 1.1372 $\pm$ 0.0246 & 49.94 $\pm$ 5.53 & 43.38 $\pm$ 2.55\\ EEA & 0.6016 $\pm$ 0.0965 & 1.0874 $\pm$ 0.0197 & 30.64 $\pm$ 8.65 & 30.64 $\pm$ 8.65\\ RCGL & 0.4601 $\pm$ 0.0819 & 1.1017 $\pm$ 0.0262 & 28.9 $\pm$ 12.36 & 50 $\pm$ 0\\ JRRS & 0.5535 $\pm$ 0.0866 & 1.1164 $\pm$ 0.0262 & 12.42 $\pm$ 6.02 & 50 $\pm$ 0 \\ MTL& 0.5776 $\pm$ 0.0873 & 1.1286 $\pm$ 0.0296 & 53.0 $\pm$ 18.41 & 50 $\pm$ 0\\ \hline \end{tabular} } \end{center} \label{rowcolumn_weak} \end{table}% \begin{table}[tp] \caption{Running time comparison (in seconds)} \begin{center} { \begin{tabular}{ccccccc} \hline & $\zeta = 1$ & $\zeta = 5$ & $\zeta = 10$ & $\zeta = 20$ & $\zeta = 50$ & $\zeta = 100$ \\ \hline {\bf GDT} & 0.11 & 0.20 & 0.51 & 2.14 & 29.3 & 235.8\\ DPP & 0.19 & 0.61 & 3.18 & 17.22 & 315.4 & 2489\\ TSVD & 0.07 & 1.09 & 6.32 & 37.8 & 543 & 6075 \\ EEA & 0.50 & 35.6 & 256 & $>$2h & $>$2h & $>$2h \\ RCGL & 0.18& 1.02 & 7.15 & 36.4 & 657.4 & $>$2h\\ JRRS & 0.19 & 0.82 & 6.36 & 30.0 & 610.2 & $>$2h\\ MTL& 0.18 & 3.12 & 30.92 & 184.3 & $>$2h & $>$2h\\\hline \end{tabular} } \end{center} \label{runningtime} \end{table}% \vspace{2mm} {\bf Effectiveness in high dimension.} We finally demonstrate the effectiveness of GDT algorithm in high dimensions. 
Table \ref{row} and Table \ref{rowcolumn} are both in low dimensions because we want to compare with the other algorithms, which are slow in high dimensions, as shown in Table \ref{runningtime}. Now we run only our algorithm, choosing $p = 5000, k = 3000, r = 50, s_1^* = s_2^* = 100$. The estimation error and objective value are shown in Figure \ref{high_estimation} and Figure \ref{high_obj}, respectively. In each figure, iteration 0 corresponds to the initialization obtained by Lasso. We can see that both the estimation error and the objective value continue to decrease, which demonstrates the effectiveness and necessity of the GDT algorithm. From Figure \ref{high_estimation} we also find that early stopping can help to avoid overfitting (although not by much), especially when $n$ is small. \begin{centering} \begin{figure*}[tbp] \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=0.9\textwidth]{high_estimation_new} \caption{Estimation error} \label{high_estimation} \end{minipage} \begin{minipage}[t]{0.5\linewidth} \centering \includegraphics[width=0.9\textwidth]{high_obj_new} \caption{Objective value} \label{high_obj} \end{minipage} \end{figure*} \end{centering} \subsection{Norwegian Paper Quality Dataset} \label{sec:real1} In this section we apply GDT to the Norwegian paper quality dataset. These data were obtained from a controlled experiment carried out at a paper factory in Norway to uncover the effect of three control variables $X_1,X_2,X_3$ on the quality of the paper, which was measured by 13 response variables. Each of the control variables $X_i$ takes values in $\{-1,0,1\}$. To account for possible interactions and nonlinear effects, second-order terms were added to the set of predictors, yielding $X_1, X_2, X_3, X_1^2, X_2^2, X_3^2, X_1 \cdot X_2, X_1 \cdot X_3, X_2\cdot X_3$. 
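The second-order expansion of the three controls can be reproduced with a small helper (ours, for illustration only):

```python
import numpy as np

def expand_second_order(X):
    """Map controls (X1, X2, X3) to the nine predictors
    X1, X2, X3, X1^2, X2^2, X3^2, X1*X2, X1*X3, X2*X3."""
    X1, X2, X3 = X[:, 0], X[:, 1], X[:, 2]
    return np.column_stack([X1, X2, X3, X1**2, X2**2, X3**2,
                            X1 * X2, X1 * X3, X2 * X3])
```

For instance, the design row for controls $(-1, 0, 1)$ becomes $(-1, 0, 1, 1, 0, 1, 0, -1, 0)$.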
The dataset can be downloaded from the website of \cite{izenman2008modern}, and its structure clearly indicates that dimension reduction is possible, making it a typical application for reduced rank regression methods \cite{izenman2008modern, aldrin1996moderate, Bunea2012Joint, She2017Selective}. Based on the analysis of \cite{Bunea2011Optimal} and \cite{aldrin1996moderate}, we select the rank $\hat r = 3$; following the suggestion of \cite{Bunea2011Optimal}, we take $s_1=6$ and $s_2 = k = 13$, which means we have row sparsity only. GDT selects 6 of the original 9 predictors, with $X_1^2,X_1\cdot X_2$ and $X_2\cdot X_3$ discarded, which is consistent with the result in \cite{Bunea2011Optimal}. To compare prediction errors, we split the whole dataset at random, with 70\% for training and 30\% for testing, and repeat the process 50 times for each of the methods above. All tuning parameters are selected by cross validation, and we always center the responses in the training data (and transform the test data accordingly). The average RMSE on the test set is shown in Table \ref{RMSE}. We can see that GDT is competitive with the best method, demonstrating its effectiveness on real datasets.
\begin{table}[htp]
\caption{RMSE on paper quality dataset}
\begin{center} {\footnotesize
\begin{tabular}{ccccccc}
\hline
{\bf GDT} & DPP & TSVD & EEA & RCGL & JRRS & MTL \\\hline
1.002 & 1.012 & 1.094 & 1.161 & 1.001 & 1.013 & 1.014 \\\hline
\end{tabular} }
\end{center}
\label{RMSE}
\end{table}%
\subsection{Calcium Imaging Data}
\label{sec:real2}
Calcium imaging, a microscopy technique in neuroscience, has been gaining increasing attention \cite{haeffele2014structured}. It records fluorescent images of neurons and allows us to identify their spiking activity. To achieve this goal, \cite{pnevmatikakis2014structured} introduces a spatiotemporal model, which we briefly describe here.
A more detailed description can be found in \cite{pnevmatikakis2014structured} and \cite{Ma2014Adaptive}. Denote by $k = \ell_1 \times \ell_2$ the number of pixels we observe and by $K$ the total number of neurons; observations are made at time steps $t = 1, ..., T$. Let $S \in \mathbb{R}^{T \times K}$ record the number of spikes at each time step for each neuron; let $A \in \mathbb{R}^{K \times k}$ be the nonnegative spatial footprint of each neuron at each pixel; let $Y \in \mathbb{R}^{T \times k}$ be the observation at each time step and each pixel; and let $E \in \mathbb{R}^{T \times k}$ be the observation error. Ignoring the baseline vector for the pixels, the model in \cite{pnevmatikakis2014structured} is given by
\begin{equation}
\begin{aligned}
Y &= G^{-1}SA + E = X \Theta^* + E
\end{aligned}
\label{calcium_MTL}
\end{equation}
where $\Theta^* = SA$ is the coefficient matrix and $X = G^{-1}$ is observed, with
\begin{equation}
G = \begin{pmatrix} 1 & 0 & \dots & 0 \\ -\gamma & 1 & \ddots & \vdots \\ \vdots & \ddots & \ddots & 0 \\ 0 & \dots & -\gamma & 1 \end{pmatrix}.
\label{calcium_G}
\end{equation}
Here $\gamma$ is set to $\gamma = 1 - 1/(\text{frame rate})$, as suggested by \cite{vogelstein2010fast}. Each row of $S$ collects the activations of all the neurons at one time step, so it is natural for $S$ to be row sparse, since we would not usually observe many activations within a fixed time period; likewise, each column of $A$ collects the footprints of all the neurons at one pixel, so it is natural for $A$ to be column sparse, since we expect to see only a few neurons in a fixed area. Therefore the coefficient matrix $\Theta^* = SA$ is both row sparse and column sparse. It is also low rank, since it is the product of two rank-$K$ factors and the number of neurons $K$ is usually small.
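The bidiagonal matrix $G$ in \eqref{calcium_G} and the autoregressive structure it encodes can be sketched as follows (our own illustrative code; the function name is an assumption). Applying $X = G^{-1}$ to a single spike produces an exponentially decaying calcium trace:

```python
import numpy as np

def calcium_design(T, frame_rate):
    """Build G as displayed above: ones on the diagonal and -gamma on
    the first subdiagonal, with gamma = 1 - 1/frame_rate."""
    gamma = 1.0 - 1.0 / frame_rate
    G = np.eye(T) - gamma * np.eye(T, k=-1)
    return G, gamma

# A single spike at t = 0 yields the trace (1, gamma, gamma^2, ...).
G, gamma = calcium_design(5, frame_rate=8.64)
spike = np.zeros(5); spike[0] = 1.0
trace = np.linalg.solve(G, spike)   # = G^{-1} spike
```

Solving with $G$ rather than forming $G^{-1}$ explicitly keeps the recursion $x_t = \gamma x_{t-1} + s_t$ numerically cheap.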
We now see that this is a multi-task learning problem with a simultaneously row-sparse, column-sparse, and low-rank coefficient matrix, where $n = p = T$ and $k = \ell_1 \times \ell_2$. We consider the calcium imaging data in \cite{akerboom2012optimization}, a movie with 559 frames (acquired at approximately 8.64 frames/sec), where each frame is $135 \times 131$ pixels. This dataset is also analyzed in \cite{Ma2014Adaptive} and \cite{haeffele2014structured}. For this dataset we have $n=p=559$ and $k=135 \times 131 = 17,685$. We use $r = 50$, which is more conservative than the estimate given by \cite{Bunea2011Optimal}, and we set the row sparsity to $s_1 = 100$ and the column sparsity to $s_2 = 3000$. Figure \ref{true_signals} shows the five most significant manually labeled regions; Figure \ref{our_signals} shows the corresponding signals estimated by our GDT algorithm. We can see that they match very well, which demonstrates the effectiveness of our method.
\begin{figure}[t]
\begin{minipage}[t]{0.192\linewidth}
\centering
\includegraphics[width=\textwidth]{true1}
\end{minipage}
\begin{minipage}[t]{0.192\linewidth}
\centering
\includegraphics[width=\textwidth]{true2}
\end{minipage}
\begin{minipage}[t]{0.192\linewidth}
\centering
\includegraphics[width=\textwidth]{true3}
\end{minipage}
\begin{minipage}[t]{0.192\linewidth}
\centering
\includegraphics[width=\textwidth]{true4}
\end{minipage}
\begin{minipage}[t]{0.192\linewidth}
\centering
\includegraphics[width=\textwidth]{true5}
\end{minipage}
\caption{Manually selected top 5 labeled regions}
\label{true_signals}
\vspace{5mm}
\begin{minipage}[t]{0.192\linewidth}
\centering
\includegraphics[width=\textwidth]{Our1}
\end{minipage}
\begin{minipage}[t]{0.192\linewidth}
\centering
\includegraphics[width=\textwidth]{Our2}
\end{minipage}
\begin{minipage}[t]{0.192\linewidth}
\centering
\includegraphics[width=\textwidth]{Our3}
\end{minipage}
\begin{minipage}[t]{0.192\linewidth}
\centering
\includegraphics[width=\textwidth]{Our4}
\end{minipage}
\begin{minipage}[t]{0.192\linewidth} \centering \includegraphics[width=\textwidth]{Our5} \end{minipage} \caption{Corresponding signals estimated by our GDT algorithm} \label{our_signals} \end{figure} \section{Gradient Descent With Hard Thresholding} \label{sec:methodology} In this section, we detail our proposed algorithm, which is based on gradient descent with hard thresholding (GDT). Our focus is on developing an efficient algorithm for minimizing $f(\Theta)$ with $\Theta \in \Xi$. In statistical estimation and machine learning a common goal is to find $\Theta^*$, which is an (approximate) minimizer of $\EE[f(\Theta)]$ where the expectation is with respect to randomness in data. In many settings, the global minimizer of \eqref{eq:opt} can be shown to approximate $\Theta^*$ up to statistical error, which is problem specific. In Section~\ref{sec:theoretical}, we will show that iterates of our algorithm converge linearly to $\Theta^*$ up to a statistical error. It is worth noting that an argument similar to that in the proof of Theorem~\ref{main} can be used to establish linear convergence to the global minimizer $\hat \Theta$ in a deterministic setting. That is, suppose $(\hat U, \hat V)$ is a global minimizer of the problem \eqref{eq:opt:rep} and $\hat \Theta = \hat U \hat V^{\top}$. Then as long as the conditions in Section~\ref{sec:theoretical} hold for $\hat U, \hat V$ in place of $U^*, V^*$, we can show linear convergence to $\hat \Theta$ up to an error level defined by the gradient of the objective function at $\hat \Theta$. See the discussion after Theorem~\ref{main}. 
Our algorithm, GDT, uses a Burer-Monteiro factorization to write $\Theta = UV^{\top}$, where $U \in \R^{m_1 \times r}$ and $V \in \R^{m_2 \times r}$, and minimizes
\begin{equation}
\label{eq:opt:rep:1}
\begin{aligned}
(\hat U, \hat V) \in \arg \min_{U \in \Ucal , V \in \Vcal} f(U,V) + g(U,V),
\end{aligned}
\end{equation}
where $g(U,V)$ is the penalty function defined as
\begin{equation}
\label{eq:penalty}
g(U,V) = \frac 14 \|U^{\top}U - V^{\top}V \|_F^2.
\end{equation}
The role of the penalty is to find a balanced decomposition of $\hat \Theta$, one for which $\sigma_i(\hat U) = \sigma_i(\hat V)$, $i=1,\ldots,r$ \cite{Zhu2017Global, Zhang2017Nonconvex}. Note that the value of the penalty is equal to $0$ for a balanced solution, so we can think of the penalized objective as searching among the minimizers of \eqref{eq:opt:rep} for one that satisfies $\hat U^{\top}\hat U - \hat V^{\top}\hat V = 0$. In particular, adding the penalty function $g$ does not change the minimizer of $f$ over $\Xi$. The convergence rate of GDT depends on the condition number of $(U^*,V^*)$, the point the algorithm converges to. The penalty ensures that the iterates $U,V$ are not ill-conditioned. Gradient descent with hard-thresholding on $U$ and $V$ is used to minimize \eqref{eq:opt:rep:1}. Details of GDT are given in Algorithm~\ref{algo:AltGD}. The algorithm takes as input parameters $\eta$, the step size; $s_1$, $s_2$, the sparsity levels; $T$, the number of iterations; and a starting point $\Theta^0$. The choice of starting point $\Theta^0$ is very important as the algorithm performs a local search in its neighborhood. In Section~\ref{sec:theoretical} we will formalize how close $\Theta^0$ needs to be to $\Theta^*$, while in Section~\ref{sec:MTL} we provide a concrete way to initialize under a multi-task learning model.
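For the balancing penalty above, a direct calculation gives $\nabla_U g = U(U^\top U - V^\top V)$ and $\nabla_V g = -V(U^\top U - V^\top V)$. A minimal sketch of the penalty and its gradients (our own illustrative code, not the authors' implementation):

```python
import numpy as np

def balance_penalty(U, V):
    """g(U, V) = (1/4) * ||U^T U - V^T V||_F^2."""
    D = U.T @ U - V.T @ V
    return 0.25 * np.sum(D ** 2)

def balance_grads(U, V):
    """Gradients of g: (U D, -V D) with D = U^T U - V^T V."""
    D = U.T @ U - V.T @ V
    return U @ D, -V @ D
```

For any balanced pair, e.g. $V$ built from a Cholesky factor of $U^\top U$, the penalty and both gradients vanish, consistent with the discussion above.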
In general, we envisage finding $\Theta^0$ by solving the following optimization problem
\begin{equation}
\label{eq:meta_init}
\Theta^0 = \arg\min_{\Theta \in \RR^{m_1 \times m_2}} f(\Theta) + {\rm pen}(\Theta),
\end{equation}
where ${\rm pen}(\Theta)$ is a (simple) convex penalty term making the objective \eqref{eq:meta_init} a convex optimization problem. For example, we could use the vector $\ell_1$ norm, ${\rm pen}(\Theta) = \|\Theta\|_1$. The choice of penalty ${\rm pen}(\Theta)$ should be such that solving the optimization problem in \eqref{eq:meta_init} can be done efficiently in a high dimensional setting. In practice, if solving the convex relaxation is slow, we can start from the all-zero matrix and perform several (proximal) gradient steps to get an appropriate initialization. See, for example, \cite{Zhang2017Nonconvex}. Once an initial estimate $\Theta^0$ is obtained, we find the best rank $r$ approximation $\tilde\Theta = \tilde U \tilde \Sigma \tilde V^\top$ to $\Theta^0$ and use it to obtain the initial iterates $U^0$ and $V^0$. In each step, GDT updates $U$ and $V$ by taking a gradient step and hard-thresholding the result. The operation $\text{Hard}(U,s)$ keeps the $s$ rows of $U$ with the largest $\ell_2$ row-norms, while setting the other rows to zero. Suppose that the target statistical parameter $\Theta^*$ is in $\Xi(r^*, s_1^*,s_2^*)$. The sparsity levels $s_1^*$ and $s_2^*$, as well as the rank $r^*$, are not known in practice, but are needed in Algorithm~\ref{algo:AltGD}. For the convergence proof we require that the input parameters to the algorithm are set as $s_1 = c\cdot s_1^*$ and $s_2 = c\cdot s_2^*$ for some $c > 1$. From simulations, we observe that the estimation accuracy is not very sensitive to the choice of $s_1$ and $s_2$, as long as they are chosen greater than the true values $s_1^*$ and $s_2^*$.
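The operation $\text{Hard}(U, s)$ can be written in a few lines (a minimal sketch of ours; ties between row norms are broken arbitrarily here):

```python
import numpy as np

def hard(U, s):
    """Hard(U, s): keep the s rows of U with the largest l2 norm,
    set all other rows to zero."""
    norms = np.linalg.norm(U, axis=1)
    keep = np.argsort(norms)[-s:]   # indices of the s largest row norms
    out = np.zeros_like(U)
    out[keep] = U[keep]
    return out
```

The same operator is applied to $V$ with sparsity level $s_2$, so each iterate stays in $\Ucal(s_1) \times \Vcal(s_2)$.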
This suggests that in practice we could set $s_1$ and $s_2$ to reasonably large values whenever a reasonable guess of the sparsity level is available, as incorrectly omitting a nonzero value (a false negative) is more troublesome than including a zero value (a false positive). Alternatively, as we do in simulations, we can use a validation set or an information criterion to select these tuning parameters. For example, \cite{She2017Selective} develops the scale-free predictive information criterion to select the best sparsity parameters. The rank $r$ can be estimated as in \cite{Bunea2011Optimal}. To the best of our knowledge, GDT is the first gradient-based algorithm to deal with a nonconvex optimization problem over a parameter space that is simultaneously low rank and row and column sparse. In the following section we provide conditions on the objective function $f$ and the starting point $\Theta^0$ which guarantee linear convergence to $\Theta^*$ up to a statistical error. As an application, we consider the multi-task learning problem in Section~\ref{sec:MTL}. We show that the statistical error nearly matches the optimal minimax rate, while the algorithm achieves the best performance in terms of estimation and prediction error in simulations.
\begin{algorithm}[tb]
\caption{Gradient Descent with Hard Thresholding (GDT)}
\label{algo:AltGD}
\begin{algorithmic}[1]
\STATE {\bfseries Input:} Initial estimate $\Theta^0$
\STATE {{\bfseries Parameters:} Step size $\eta$, Rank $r$, Sparsity levels $s_1, s_2$, Total number of iterations $T$}
\STATE $(\tilde U, \tilde\Sigma, \tilde V) = $ rank $r$ SVD of $\Theta^0$
\STATE $ U^0 = \text{Hard} (\tilde U(\tilde\Sigma)^{\frac 12},s_1), V^0 = \text{Hard}( \tilde V(\tilde\Sigma)^{\frac 12}, s_2)$
\FOR{$t=1$ {\bfseries to} $T$}
\STATE $V^{t+0.5} = V^{t} - \eta\nabla_V f( U^{t},V^{t}) - \eta\nabla_V g( U^{t},V^{t})$,
\STATE $V^{t+1} = \text{Hard} (V^{t+0.5},s_2)$
\STATE $U^{t+0.5} = U^{t} - \eta\nabla_U f( U^{t},V^{t}) - \eta\nabla_U g( U^{t},V^{t})$,
\STATE $U^{t+1} = \text{Hard} (U^{t+0.5},s_1)$
\ENDFOR
\STATE {\bfseries Output:} $\Theta^{T} = U^{T}(V^{T})^{\top}$
\end{algorithmic}
\end{algorithm}
\section{Introduction}
\label{sec:introduction}
Many problems in machine learning, statistics and signal processing can be formulated as optimization problems with a smooth objective and nonconvex constraints. The objective usually measures the fit of a model, parameter, or signal to the data, while the constraints encode structural requirements on the model. Examples of nonconvex constraints include sparsity, where the parameter is assumed to have only a few non-zero coordinates \cite{Hsu2011Robust, Yu2016Statistical, She2017Selective, turlach2005simultaneous, Zhu2016Personalized}; group sparsity, where the parameter is comprised of several groups, only a few of which are non-zero \cite{Lounici2011Oracle, kim2010tree, Huang2010benefit, Chen2011Integrating}; and low-rankness, where the parameter is believed to be a linear combination of a few factors \cite{Amit2007Uncovering, Chen2011Reduced, Chen2015Fast, Gross2011Recovering, Jain2013Low}. A common approach to dealing with nonconvex constraints is via convex relaxations, which allow for the application of simple optimization algorithms and easy theoretical analysis \cite{Agarwal2012Noisy, Candes2010power, Fazel2001rank, Candes2009Exact, Koltchinskii2011Nuclear}. From a practical point of view, however, it has been observed that directly working with a nonconvex optimization problem can lead to both faster and more accurate algorithms \cite{Sun2016Guaranteed, Zhao2015Nonconvex, yu2017influence, wang2014nonconvex}. As a result, a body of literature has recently emerged that tries to characterize the good performance of these algorithms \cite{Barber2017Gradient, Zhang2017Nonconvex, Ha2017Alternating}.
In this work, we focus on the following optimization problem
\begin{equation}
\label{eq:opt}
\hat \Theta \in \arg \min_{\Theta \in \Xi} f(\Theta)
\end{equation}
where $\Xi \subset \RR^{m_1 \times m_2}$ is a nonconvex set comprising low-rank matrices that are also row and/or column sparse,
\[
\Xi = \Xi(r,s_1,s_2) = \{ \Theta \in \RR^{m_1 \times m_2} \mid {\rm rank}(\Theta) \leq r, \|\Theta\|_{2,0} \leq s_1, \|\Theta^\top\|_{2,0} \leq s_2\},
\]
where $\|\Theta\|_{2,0} = |\{ i \in [m_1] \mid \sum_{j \in [m_2]} \Theta_{ij}^2 \neq 0\}|$ is the number of non-zero rows of $\Theta$. Such an optimization problem arises in a number of applications, including sparse singular value decomposition and principal component analysis \cite{wang2014nonconvex, Ma2014Learning, Hastie2015Matrix}, sparse reduced-rank regression \cite{Bunea2012Joint, Ma2014Adaptive, Chen2011Reduced, Chen2012Sparse, Vounou2012Sparse}, and reinforcement learning \cite{calandriello2014sparse, sutton1998introduction, Lazaric2010Bayesian, wilson2007multi, Snel2012Multi}. Rather than considering convex relaxations of the optimization problem \eqref{eq:opt}, we directly work with the nonconvex formulation. Under an appropriate statistical model, the global minimizer $\hat \Theta$ approximates the ``true'' parameter $\Theta^*$ with an error level $\epsilon$. Since the optimization problem \eqref{eq:opt} is highly nonconvex, our aim is to develop an iterative algorithm that, with appropriate initialization, converges linearly to a stationary point $\check \Theta$ that is within $c\cdot\epsilon$ distance of $\hat \Theta$. In order to develop a computationally efficient algorithm, we reparametrize the $m_1 \times m_2$ matrix variable $\Theta$ as $UV^\top$ with $U \in \RR^{m_1 \times r}$ and $V \in \RR^{m_2 \times r}$, and optimize over $U$ and $V$.
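As a sanity check, the membership conditions defining $\Xi(r, s_1, s_2)$ can be verified numerically (an illustrative helper of ours; the rank test needs a numerical tolerance):

```python
import numpy as np

def in_Xi(Theta, r, s1, s2, tol=1e-10):
    """Check membership in Xi(r, s1, s2): rank at most r, at most s1
    non-zero rows, and at most s2 non-zero columns."""
    row_support = np.sum(np.linalg.norm(Theta, axis=1) > tol)
    col_support = np.sum(np.linalg.norm(Theta, axis=0) > tol)
    return (np.linalg.matrix_rank(Theta, tol) <= r
            and row_support <= s1 and col_support <= s2)
```

A rank-one matrix built from sparse factors, e.g. $\Theta = uv^\top$ with $u$ and $v$ sparse, lies in $\Xi$ with $r = 1$ and sparsity levels given by the factor supports.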
That is, we consider (with some abuse of notation) the following optimization problem
\begin{equation}
\label{eq:opt:rep}
\begin{aligned}
(\hat U, \hat V) \in \arg \min_{U \in \Ucal , V \in \Vcal} f(U,V),
\end{aligned}
\end{equation}
where
\[
\Ucal = \Ucal(s_1) = \cbr{ U \in \RR^{m_1 \times r} \mid \|U\|_{2,0} \leq s_1 } \quad\text{and}\quad \Vcal = \Vcal(s_2) = \cbr{ V \in \RR^{m_2 \times r} \mid \|V\|_{2,0} \leq s_2 }.
\]
Such a reparametrization automatically enforces the low-rank structure and allows us to develop an algorithm with low computational cost per iteration. Note that even though $\hat U$ and $\hat V$ are only unique up to scaling and a rotation by an orthogonal matrix, $\hat \Theta = \hat U\hat V^\top$ is usually unique. We make several contributions in this paper. First, we develop an efficient algorithm for minimizing \eqref{eq:opt:rep}, which uses projected gradient descent on a nonconvex set in each iteration. Under conditions on the function $f(\Theta)$ that are common in the high-dimensional literature, we establish linear convergence of the iterates to a statistically relevant solution. In particular, we require that the function $f(\Theta)$ satisfies restricted strong convexity (RSC) and restricted strong smoothness (RSS), conditions that are given in Condition ({\bf RSC/RSS}) below. Compared to the existing work on optimization over low-rank matrices with (alternating) gradient descent, we need to study a projection onto a nonconvex set in each iteration, which in our case is a hard-thresholding operation; this requires delicate analysis and novel theory. Our second contribution is in the domain of multi-task learning. Multi-task learning is a widely used learning framework in which similar tasks are considered jointly for the purpose of improving performance compared to learning the tasks separately \cite{caruana1997multitask}.
We study the setting where the number of input variables and the number of tasks can be much larger than the sample size (see \cite{Ma2014Adaptive} and references therein). Our focus is on simultaneous variable selection and dimensionality reduction. We want to identify which variables are relevant predictor variables for the different tasks and, at the same time, combine the relevant predictor variables into fewer features that can be explained as latent factors driving the variation in the multiple responses. We provide a new algorithm for this problem and improve the theoretical results established in \cite{Ma2014Adaptive}. In particular, our algorithm does not require a new independent sample in each iteration and allows for non-Gaussian errors, while at the same time achieving a nearly optimal error rate compared to the information theoretic minimax lower bound for the problem. Moreover, our prediction error is much better than the error bound proposed in \cite{Bunea2012Joint}, and matches the error bound in \cite{She2017Selective}. However, all of the existing algorithms are slow and cannot scale to high dimensions. Finally, our third contribution is in the area of reinforcement learning. We study the Multi-task Reinforcement Learning (MTRL) problem via value function approximation. In MTRL the decision maker needs to solve a sequence of Markov Decision Processes (MDPs). A common approach to reinforcement learning when the state space is large is to approximate the value function by linear basis functions (linear in some appropriate feature representation of the states) with sparse support. Thus, it is natural to assume that the resulting coefficient matrix is low rank and row sparse. Our proposed algorithm can be applied to the regression step of any MTRL algorithm (we chose Fitted $Q$-iteration (F$Q$I) for presentation purposes) to solve for the optimal policies of the MDPs.
Compared to \cite{calandriello2014sparse} which uses convex relaxation, our algorithm is much more efficient in high dimensions. \subsection{Related Work} Our work contributes to several different areas, and thus is naturally related to many existing works. We provide a brief overview of the related literature and describe how it is related to our contributions. For the sake of brevity, we do not provide an extensive review of the existing literature. {\bf Low-rank Matrix Recovery.} A large body of literature exists on recovery of low-rank matrices as they arise in a wide variety of applications throughout science and engineering, ranging from quantum tomography to signal processing and machine learning \cite{Aaronson2007learnability,Liu2009Interior,Srebro2005Maximum,Davenport2016Overview}. Recovery of a low-rank matrix can be formulated as the following optimization problem \begin{equation} \label{eq:opt:related} \hat \Theta \in \arg \min_{\Theta \in \RR^{m_1 \times m_2}} f(\Theta) \quad \text{subject to } {\rm rank}(\Theta) \leq r, \end{equation} where the objective function $f : \RR^{m_1 \times m_2} \mapsto \RR$ is convex and smooth. The problem \eqref{eq:opt:related} is highly nonconvex and NP-hard in general \cite{Fazel2001rank,Fazel2004Rank}. A lot of the progress in the literature has focused on convex relaxations where one replaces the rank constraint using the nuclear norm. See, for example, \cite{Candes2009Exact, Candes2010power, Candes2010Matrix, Recht2010Guaranteed, Cai2010singular, Recht2011simpler, Gross2011Recovering, Chandrasekaran2011Rank, Hsu2011Robust, Rohde2011Estimation, Koltchinskii2011Nuclear, Harchaoui2012Large, Negahban2011Estimation, Chen2011Integrating, Xiang2012Optimal, Negahban2012Restricted, Agarwal2012Noisy, Recht2013Parallel, Chen2015Incoherence, Chen2014Coherent, Chen2013Low, Hastie2015Matrix, Cai2015ROP:, Yan2015Simultaneous, Zhu2016Personalized, Wang2015Orthogonal} and references therein. 
However, developing efficient algorithms for solving these convex relaxations is challenging in regimes with large $m_1$ and $m_2$ \cite{Hsieh2014Nuclear}. A practical approach, widely used in large scale applications such as recommendation systems or collaborative filtering \cite{Takacs2007Major, Koren2009Matrix, Gemulla2011Large, Zhuang2013fast}, relies on solving a nonconvex optimization problem where the decision variable $\Theta$ is factored as $UV^\top$, usually referred to as the Burer-Monteiro type decomposition \cite{Burer2003nonlinear, Burer2005Local}. A stationary point of this nonconvex problem is usually found via a block coordinate descent-type algorithm, such as alternating minimization or (alternating) gradient descent. Unlike the convex relaxation approaches, the theoretical understanding of these nonconvex optimization procedures has been developed only recently \cite{Keshavan2010Matrix, Keshavan2010Matrixa, Jain2013Low, Hardt2014Understanding, Hardt2014Fast, Hardt2014Computational, Sun2016Guaranteed, Zhao2015Nonconvex, Zheng2015Convergent, Bhojanapalli2016Global, Bhojanapalli2016Dropping, Tu2016Low, Chen2015Fast, Zhu2017Globala, Zhu2017Global, Ge2016matrix, Li2017Geometry, Mei2016Landscape}. Compared to the classical nonconvex optimization theory, which only shows sublinear convergence to a local optimum, the focus of the recent literature is on establishing linear rates of convergence or showing that the objective has no spurious local minima. In addition to the methods that work on the factorized form, \cite{Jain2010Guaranteed, Lee2010ADMiRA:, Jain2015Fast, Barber2017Gradient} consider projected gradient-type methods which optimize over the matrix variable $\Theta \in \RR^{m_1 \times m_2}$. These methods involve calculating the top $r$ singular vectors of an $m_1 \times m_2$ matrix at each iteration.
When $r$ is much smaller than $m_1$ and $m_2$, they incur much higher computational cost per iteration than the methods that optimize over $U \in \RR^{m_1 \times r}$ and $V \in \RR^{m_2 \times r}$. Our work contributes to this body of literature by studying gradient descent with a projection step on a non-convex set, which requires hard-thresholding. Hard-thresholding in this context has not been considered before. Theoretically, we need a new argument to establish linear convergence to a statistically relevant point. \cite{Chen2015Fast} considered projected gradient descent in a symmetric and positive semidefinite setting with a projection on a convex set. Our work is most closely related to \cite{Zhao2015Nonconvex}, which used the notion of an inexact first-order oracle to establish its results, but did not consider the hard-thresholding step. {\bf Structured Low-rank Matrices.} Low-rank matrices with additional structure also commonly arise in different problems ranging from sparse principal component analysis (PCA) and sparse singular value decomposition to multi-task learning. In a high-dimensional setting, the classical PCA is inconsistent \cite{johnstone2009consistency}, and recent work has focused on PCA with additional sparse structure on the eigenvectors \cite{Amini2009High, Berthet2013Optimal, Birnbaum2013Minimax, Cai2013Sparse, Vu2013Minimax, Ma2013Sparse, Yuan2013Truncated}. Similar sparse structure in singular vectors arises in sparse SVD and biclustering \cite{Lee2010Biclustering, Chen2011Reduced, Ma2014Learning, Uematsu2017SOFAR:, Yang2014Sparse, Balakrishnan2012Recovering, Kolar2011Minimax, balakrishnan2011statistical}. While the above papers use the sparsity structure of the eigenvectors and singular vectors, it is also possible to have simultaneous low rank and sparse structure directly on the matrix $\Theta$.
Such a structure arises in multi-task learning, covariance estimation, graph denoising and link prediction \cite{Mei2012Encoding, Richard2012Estimation}. Additional structure on the sparsity pattern was imposed in the context of sparse rank-reduced regression, which is an instance of multi-task learning \cite{Chen2012Sparse, Bunea2012Joint, Ma2014Adaptive, Bahadori2016Scalable, She2017Selective}. Our algorithm described in Section~\ref{sec:methodology} can be applied to the above mentioned problems. In Section~\ref{sec:MTL}, we theoretically study multi-task learning in the setting of \cite{Ma2014Adaptive}. We relax conditions imposed in \cite{Ma2014Adaptive}, specifically allowing for non-Gaussian errors and not requiring independent samples at each step of the algorithm, while still achieving the near minimax rate of convergence. We provide additional discussion in Section~\ref{sec:MTL} after formally providing results for the multi-task learning setting. In Section~\ref{sec:experiment}, we further corroborate our theoretical results in extensive simulations and show that our algorithm outperforms existing methods in multi-task learning. {\bf Low-rank Plus Sparse Matrix Recovery.} At this point, it is worth mentioning another commonly encountered structure on the decision variable $\Theta$ that we do not study in the current paper. In various applications it is common to model $\Theta$ as a sum of two matrices, one of which is low-rank and the other one sparse. Applications include robust PCA, latent Gaussian graphical models, factor analysis and multi-task learning \cite{Candes2011Robust, Hsu2011Robust, Chandrasekaran2011Rank, Chen2013Low, Agarwal2012Noisy, Gu2016Low, Zhang2017Nonconvex, Xu2017Speeding, Ha2017Alternating}. While Burer-Monteiro factorization has been considered for the low-rank component in this context (see, for example, \cite{Zhang2017Nonconvex} and references therein), the low-rank component is dense as it needs to be incoherent. 
The incoherence assumption guarantees that the low-rank component is not too spiky and can be identified \cite{Candes2009Exact}. An alternative approach was taken in \cite{Ha2017Alternating}, where alternating minimization over the low-rank and sparse components with a projection on a nonconvex set was investigated. \subsection{Organization of the paper} In Section \ref{sec:methodology} we provide details for our proposed algorithm. Section \ref{sec:theoretical} states our assumptions and the theoretical result with a proof sketch. Section \ref{sec:MTL} shows applications to multi-task learning, while Section \ref{sec:experiment} presents experimental results. Section \ref{sec:appendix} provides detailed technical proofs. The conclusion is given in Section~\ref{sec:conclusion}. \subsection{Application to Multi-task Reinforcement Learning} \label{subsec:MTRL} Reinforcement learning (RL) and approximate dynamic programming (ADP) are popular algorithms that help decision makers find optimal policies for decision making problems under uncertainty that can be cast in the framework of Markov Decision Processes (MDPs) \cite{bertsekas1995neuro, sutton1998introduction}. Similar to many other approaches, when the sample size is small these algorithms may have poor performance. A possible workaround is then to simultaneously solve multiple related tasks and take advantage of their similarity and shared structure. This approach is called multi-task reinforcement learning (MTRL) and has been studied extensively \cite{Lazaric2010Bayesian, wilson2007multi, Snel2012Multi}. In this section we show how the GDT algorithm can be applied to the MTRL problem.
A Markov decision process (MDP) is represented by a 5-tuple $\mathcal M = (S, A, P, R, \gamma)$ where $S$ represents the state space (which we assume to be finite for simplicity); $A$ is a finite set of actions; $P_a(s,s') = \Pr(s_{t+1}=s' \mid s_t = s, a_t=a)$ is the Markovian transition kernel that measures the probability that action $a$ in state $s$ at time $t$ will lead to state $s'$ at time $t+1$ (we assume $P_a$ to be time homogeneous); $R(s,a)$ is the state-action reward function measuring the instantaneous reward received when taking action $a$ in state $s$; and $\gamma$ is the discount factor. The core problem of an MDP is to find a deterministic policy $\pi: S \to A$ that specifies the action to take when the decision maker is in state $s$. Define the Bellman operator
\begin{equation}
\mathcal T Q(s,a) = R(s,a) + \gamma \sum_{s'}P_a(s,s')\max_{a'} Q(s',a'),
\end{equation}
where $Q: S \times A \to \mathbb R$ is the state-action value function. The MDP can then be solved by calculating the optimal state-action value function $Q^*$, which gives the total discounted reward obtained when starting in state $s$, taking action $a$, and then following the optimal policy in subsequent time steps. Given $Q^*$, the optimal policy is recovered by the greedy policy $\pi^*(s) = \arg\max_{a \in A}Q^*(s,a)$. In MTRL the objective is to solve $k$ related tasks simultaneously, where each task $k_0 \in \{1, \ldots, k \}$ corresponds to an MDP $\mathcal M_{k_0} = (S, A, P_{k_0}, R_{k_0}, \gamma_{k_0})$. Thus, these $k$ tasks share the same state and action spaces, but each task has its own transition dynamics $P_{k_0}$, state-action reward function $R_{k_0}$, and discount factor $\gamma_{k_0}$. The decision maker's goal is to find an optimal policy for each MDP. If these MDPs do not share any information or structure, then it is straightforward to solve each of them separately.
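The Bellman operator and the greedy policy above can be sketched for a finite MDP as follows (our own illustrative code; array shapes are assumptions of this sketch):

```python
import numpy as np

def bellman(Q, R, P, gamma):
    """One application of the Bellman operator:
    (TQ)[s, a] = R[s, a] + gamma * sum_{s'} P[a, s, s'] * max_{a'} Q[s', a'].
    Shapes: Q, R are |S| x |A|; P is |A| x |S| x |S|."""
    v = Q.max(axis=1)                          # max_{a'} Q(s', a')
    return R + gamma * np.einsum('ast,t->sa', P, v)

def greedy(Q):
    """Greedy policy pi(s) = argmax_a Q(s, a)."""
    return Q.argmax(axis=1)
```

Iterating `bellman` from any initial `Q` is exact value iteration, $Q^t = \mathcal T Q^{t-1}$, the computation that F$Q$I approximates by regression when the state space is too large to enumerate.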
Here we assume the MDPs do share some structure so that the $k$ tasks can be learned together with smaller sample complexity than learning them separately. We follow the structure in \cite{calandriello2014sparse} and solve this MTRL problem by the fitted-$Q$ iteration (F$Q$I) algorithm \cite{ernst2005tree}, one of the most popular methods for ADP. In contrast to exact value iteration ($Q^{t} = \mathcal T Q^{t-1}$), F$Q$I approximates this iteration by representing $Q(s,a)$ as a linear function of features of the state-action pairs and solving a regression problem. To be more specific, we let $\varphi(s) = [\varphi_1(s), \varphi_2(s), ..., \varphi_{p_s}(s)]$ denote the feature mapping for state $s$, where $\varphi_i: S \to \mathbb R$ denotes the $i$th feature. We then extend the state-feature vector $\varphi$ to a feature vector over state-action pairs $(s,a)$ as: \begin{equation} \phi(s,a) = [\underbrace{0, 0, ..., 0}_{(a-1)\times p_s \text{ times}}, \varphi_1(s), \varphi_2(s), ..., \varphi_{p_s}(s), \underbrace{0, 0, ..., 0}_{(|A| - a)\times p_s \text{ times}}] \in \mathbb R^p, \end{equation} where $p = |A| \times p_s$. Finally, for MDP $k_0$, we represent the state-action value function $Q_{k_0}(\cdot,\cdot)$ as an $|S|\times |A|$ dimensional column vector with: \[ Q_{k_0}(s,a) = \phi(s,a)^{\top} \cdot \Theta_{k_0} \] where $\Theta_{k_0}$ is a $p \times 1$ dimensional column vector.
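The block structure of $\phi(s,a)$ can be implemented directly; a minimal sketch (with 0-indexed actions, and function names of our choosing):

```python
import numpy as np

def phi(varphi_s, a, n_actions):
    """Place the state features varphi(s) into the a-th block (0-indexed) of a
    zero vector of length p = |A| * p_s; all other blocks stay zero."""
    p_s = varphi_s.shape[0]
    out = np.zeros(n_actions * p_s)
    out[a * p_s:(a + 1) * p_s] = varphi_s
    return out

# Example: p_s = 2 state features, |A| = 3 actions, action a = 1.
phi(np.array([1.0, 2.0]), 1, 3)  # -> array([0., 0., 1., 2., 0., 0.])
```

With this representation, $Q_{k_0}(s,a) = \phi(s,a)^\top \Theta_{k_0}$ reduces to a dot product that selects the block of coefficients corresponding to action $a$.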
If $\Theta \in \mathbb{R}^{p \times k}$ represents the matrix with columns $\Theta_{k_0}$, $k_0 \in \{1,\ldots, k\}$, then we see that, given the $Q_{k_0}(s,a)$ state-action value functions, estimating the $\Theta$ matrix is just a multi-task learning problem of the form \eqref{eqn:MTL_model} with the response matrix $Y \doteq Q \in \mathbb{R}^{n\times k}$, where $n=|S|\times |A|$ denotes the ``sample size'' with rows indexed by pairs $(s,a) \in S \times A$, $X \doteq \Phi \in \mathbb{R}^{n \times p}$ represents the matrix of predictors (features) with $(s,a)^{th}$ row equal to $\phi(s,a)$, and $\Theta^*$ is the unknown matrix of ADP coefficients. Consistent with the GDT algorithm, to exploit shared sparsity and structure across the $k$ MDP tasks, we will subsequently assume that the coefficient matrix $\Theta^*$ is row sparse and low rank. Algorithm~\ref{algo:MTRL} provides details of MTRL with GDT. We assume we have access to the generative model of the $k$ MDPs so that we can sample reward $r$ and state $s'$ from $R(s,a)$ and $P_a(s,s')$. With ``design states'' $S_{k} \subseteq S, \ n_s \doteq |S_k|$ given as input, for each action $a$ and each state $s \in S_{k}$, F$Q$I first generates samples (reward $r$ and transition state $s'$) from the generative model of each MDP. These samples form a new dataset according to \begin{equation} y_{i,a,k_0}^t = r^t_{i,a,k_0} + \gamma \max_{a'} \hat Q^{t-1}_{k_0}({s'}_{i,a,k_0}^t, a'). \end{equation} Here $\hat Q^{t-1}_{k_0}$ is calculated using the coefficient matrix from the previous iteration: \begin{equation} \hat Q^{t-1}_{k_0}({s'}_{i,a,k_0}^t, a') = \phi({s'}_{i,a,k_0}^{t}, a')^{\top}\cdot \Theta^{t-1}_{k_0}. \end{equation} We then build the dataset $\mathcal D^t_{k_0} = \big\{(s_{i},a), y_{i,a,k_0}^t \big\}_{s_i\in S_k, a \in A }$ with $(s_i,a)$ as predictor and $y_{i,a,k_0}^t$ as response, and apply the GDT algorithm to the datasets $\{\mathcal D^t_{k_0}\}_{k_0=1}^k$ to obtain the estimator $\Theta^{t}$.
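The construction of a single regression target $y_{i,a,k_0}^t$ can be sketched as follows (synthetic features and coefficients; variable names are ours, and actions are 0-indexed):

```python
import numpy as np

rng = np.random.default_rng(1)
p_s, n_actions, gamma = 3, 2, 0.95
Theta_prev = rng.standard_normal(p_s * n_actions)  # Theta^{t-1}_{k0} for one task

def phi(varphi_s, a):
    # Block one-hot state-action features, as defined above.
    out = np.zeros(p_s * n_actions)
    out[a * p_s:(a + 1) * p_s] = varphi_s
    return out

def fqi_target(r, varphi_next):
    # y = r + gamma * max_{a'} phi(s', a')^T Theta^{t-1}
    q_next = max(phi(varphi_next, a) @ Theta_prev for a in range(n_actions))
    return r + gamma * q_next

y = fqi_target(1.0, rng.standard_normal(p_s))
```

Collecting such targets over all design states, actions, and tasks yields the response matrix on which GDT is run at iteration $t$.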
This completes iteration $t$, and we repeat this process until convergence. Finally, the policy $\pi_{k_0}^t$ at iteration $t$ is given by the greedy policy: $\pi_{k_0}^t(s) = \arg\max_{a \in A} \hat{Q}_{k_0}^t(s,a)$. To derive a theoretical result analogous to \cite{calandriello2014sparse}, we further assume $R(s,a) \in [0,1]$, and hence the maximum cumulative discounted reward is $Q_{\max} = 1/(1-\gamma)$. Since each task is a meaningful MDP, we do not assume sparsity on columns. Supposing $\sup_s\| \varphi(s)\|_2 \leq L$, we have the following theoretical result: \begin{theorem} \label{thm:MTRL} Suppose the linear model holds and the conditions in Section \ref{sec:theoretical} are satisfied for each $\Theta^*_a$ with rank $r$ and row sparsity $s_1^*$. Then after $T$ iterations, with probability at least $\big(1 - (p \wedge k)^{-1}\big)^{T}$ we have \begin{equation} \begin{aligned} \label{eq_thm:MTRL} \frac 1k \sum_{k_0=1}^k \Big\| Q_{k_0}^* - Q_{k_0}^{\pi_{k_0}^{T}} \Big\|_2^2 &\leq \frac{C}{(1-\gamma)^4}\bigg[ \frac 1nQ_{\max}^2L^4 \Big(r+\frac{s_1^*}{k} (r+\log p) \Big) \bigg] + \frac{4Q_{\max}^2}{(1-\gamma)^4} \bigg[ C \beta^{T} + \gamma^{T} \bigg]^2 \end{aligned} \end{equation} for some constant $C$. \end{theorem} \begin{proof} We start from the intermediate result in \cite{munos2008finite}: \begin{equation} \Big|Q^*_{k_0} - Q_{k_0}^{\pi_{k_0}^{T}} \Big| \leq \frac{2\gamma(1-\gamma^{T+1})}{(1-\gamma)^2} \bigg[ \sum_{t = 0}^{T-1} \alpha_t|\epsilon_{k_0}^t| + \alpha_{T}\big|Q_t^* - Q_t^0\big| \bigg], \end{equation} where \begin{equation} \alpha_t = \frac{(1-\gamma)\gamma^{T-t-1}}{1-\gamma^{T+1}}\text{, for $t < T$, and } \alpha_{T} = \frac{(1-\gamma)\gamma^{T}}{1-\gamma^{T+1}}. \end{equation} The error term $\epsilon_{k_0}^t(s', b)$ measures the approximation error in state $s' \in S$ and action $b \in A$.
It can be bounded by \begin{equation} \big|\epsilon_{k_0}^t(s', b)\big| = \big|\varphi(s')^{\top}\Theta^t_{k_0,b} - \varphi(s')^{\top}\Theta^*_{k_0,b}\big| \leq \big\|\varphi(s')\big\|_2\big\|\Theta^t_{k_0,b} - \Theta^*_{k_0,b} \big\|_2 \leq L \big\|\Theta^t_{k_0,b} - \Theta^*_{k_0,b} \big\|_2. \end{equation} We then have \begin{equation} \Big|Q^*_{k_0} - Q_{k_0}^{\pi_{k_0}^{T}} \Big| \leq \frac{2\gamma(1-\gamma^{T+1})}{(1-\gamma)^2} \bigg[ \sum_{t = 0}^{T-1} \alpha_t L \max_b \big\|\Theta^t_{k_0,b} - \Theta^*_{k_0,b} \big\|_2 + 2\alpha_{T}Q_{\max} \bigg]. \end{equation} Taking the average and plugging in the main result \eqref{theoremtheta} and the statistical error bound \eqref{eq:stat_error_row}, we obtain the desired result. \end{proof} \begin{algorithm}[tb] \caption{Multi-Task Reinforcement Learning with GDT} \label{algo:MTRL} \begin{algorithmic} \STATE {\bfseries Input: States $S_k = \{{s_i}\}_{i=1}^{n_s} \subseteq S$.} \STATE {\bfseries Initialize $\Theta^0 = 0$} \FOR{$t=1$ {\bfseries to} $T$} \FOR{$a=1$ {\bfseries to} $|A|$} \FOR{$k_0=1$ {\bfseries to} $k$, $i=1$ {\bfseries to} $n_s$} \STATE Generate samples $r^t_{i,a,k_0} = R_{k_0}(s_{i},a)$ and ${s'}_{i,a,k_0}^{t} \sim P_{a, k_0}( s_{i},s')$ \STATE Calculate $y_{i,a,k_0}^t = r^t_{i,a,k_0} + \gamma \max_{a'} \hat Q^{t-1}_{k_0}({s'}_{i,a,k_0}^t, a')$ \ENDFOR \ENDFOR \STATE Estimate $\Theta^t$ using GDT algorithm with $X = \left\{ X((s_i,a),\cdot) = \phi(s_i,a)^\top \right\}_{s_i \in S_k , a \in A}$ and $Y = \left\{ Y((s_i,a),k_0) = y^{t}_{i,a,k_0} \right\}_{s_i \in S_k, a \in A, k_0 \in [k]}$. \ENDFOR \STATE {\bfseries Output:} $\Theta^{T} $ \end{algorithmic} \end{algorithm} \section{Conclusion} \label{sec:conclusion} We proposed the GDT algorithm to efficiently solve optimization problems with simultaneous low-rank and row- and/or column-sparse structure on the coefficient matrix. We showed linear convergence of the GDT algorithm up to the statistical error.
As an application, for the multi-task learning problem we showed that the statistical error is near-optimal compared with the minimax rate. Experiments on multi-task learning demonstrate competitive performance and much faster running times compared to existing methods. For future extensions, it would be of interest to extend the GDT algorithm to non-linear models. Another potential direction would be to adaptively select the sparsity levels $s_1$ and $s_2$ in the hard thresholding step. \section*{Acknowledgments} This work was completed in part with resources provided by the University of Chicago Research Computing Center. We thank Zhaoran Wang and Zhuoran Yang for many useful discussions and for suggesting an application to multi-task reinforcement learning. \bibliographystyle{plain} \section{Application to Multi-task Learning} \label{sec:MTL} In this section, we apply the theory developed in Section~\ref{sec:theoretical} to two specific problems. First, in Section~\ref{sec:GDT-multi-task}, we apply the GDT algorithm to a multi-task learning problem. We show that under commonly used statistical conditions the conditions on the objective function stated in Section~\ref{sec:conditions} are satisfied with high probability. Next, in Section~\ref{subsec:MTRL} we discuss an application to the multi-task reinforcement learning problem.
\subsection{GDT for Multi-task Learning} \label{sec:GDT-multi-task} We apply the GDT algorithm to the problem of multi-task learning, which has been successfully applied in a wide range of application areas, ranging from neuroscience \citep{Vounou2012Sparse}, natural language understanding \citep{collobert2011natural}, speech recognition \citep{seltzer2013multi}, computer vision \citep{She2017Selective}, and genetics \citep{yu2017multitask} to remote sensing \citep{xue2007multi}, image classification \citep{lapin2014scalable}, spam filtering \citep{weinberger2009feature}, web search \citep{chapelle2010multi}, disease prediction \citep{zhou2013modeling}, and eQTL mapping \citep{kim2010tree}. By transferring information between related tasks it is hoped that samples will be better utilized, leading to improved generalization performance. We consider the following linear multi-task learning problem \begin{equation} \label{eqn:MTL_model} Y = X \Theta^* + E, \end{equation} where $Y \in \RR^{n\times k}$ is the response matrix, $X \in \RR^{n \times p}$ is the matrix of predictors, $\Theta^* \in \RR^{p\times k}$ is an unknown matrix of coefficients, and $E \in \RR^{n \times k}$ is an unobserved noise matrix with i.i.d.\ entries of mean zero and variance $\sigma^2$. Here $n$ denotes the sample size, $k$ is the number of responses, and $p$ is the number of predictors. There are a number of ways to capture relationships between different tasks, and the success of different methods relies on how well this relationship is captured. \cite{Evgeniou2004Regularized} studied a setting where linear predictors are close to each other. In a high-dimensional setting, with a large number of variables, it is common to assume that there are a few variables predictive of all tasks, while others are not predictive \cite{turlach2005simultaneous, Obozinski10Support, Lounici2011Oracle, kolar11union, Wang2015Distributed}.
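For intuition, data from the model \eqref{eqn:MTL_model} with a coefficient matrix that is simultaneously low rank and row/column sparse (the structure assumed in the remainder of this section) can be simulated as follows; all dimensions and the random seed are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
p, k, r, s1, s2 = 50, 30, 3, 10, 8       # s1 active rows, s2 active columns

# Theta* = U V^T with U, V supported on a few rows: rank <= r,
# nonzero only on an s1 x s2 block of entries.
U = np.zeros((p, r)); U[:s1] = rng.standard_normal((s1, r))
V = np.zeros((k, r)); V[:s2] = rng.standard_normal((s2, r))
Theta_star = U @ V.T

n, sigma = 100, 0.5
X = rng.standard_normal((n, p))
E = sigma * rng.standard_normal((n, k))  # i.i.d. mean-zero noise
Y = X @ Theta_star + E                   # the observed response matrix
```

Gaussian noise is one concrete instance of the sub-Gaussian noise assumed later in the condition {\bf (MTM)}.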
Another popular condition is to assume that the predictors lie in a shared lower dimensional subspace \cite{Ando2005framework, Amit2007Uncovering, Yuan2007Dimension, Argyriou08Convex, Wang2016Distributed}. In contemporary applications, however, it is increasingly common that both the number of predictors and the number of tasks are large compared to the sample size. For example, in a study of regulatory relationships between genome-wide measurements, where micro-RNA measurements are used to explain the gene expression levels, it is commonly assumed that a small number of micro-RNAs regulate genes participating in a few regulatory pathways \cite{Ma2014Learning}. In such a setting, it is reasonable to assume that the coefficients are both sparse and low rank. That is, one believes that the predictors can be combined into fewer latent features that drive the variation in the multiple response variables and are composed only of relevant predictor variables. Compared to a setting where either variables are selected or latent features are learned, there is much less work on simultaneous variable selection and rank reduction \cite{Bunea2011Optimal, Chen2011Reduced, Chen2012Sparse, She2017Selective}. In addition, when both $p$ and $k$ are large, it is also necessary to assume column sparsity of the matrix $\Theta^*$ to make estimation feasible \cite{Ma2014Adaptive}, a model that has been referred to as the two-way sparse reduced-rank regression model. We focus on this model here. {\bf Multi-task Model (MTM)} In the model~\eqref{eqn:MTL_model}, we assume that the true coefficient matrix $\Theta^* \in \Xi(r, s_1^*, s_2^*)$. The noise matrix $E$ has i.i.d. sub-Gaussian elements with variance proxy $\sigma^2$, which requires that each element $e_{ij}$ satisfies $\mathbb E(e_{ij}) = 0$ and that its moment generating function satisfies $\mathbb E[\exp(t e_{ij})] \leq \exp(\sigma^2t^2/2)$.
The design matrix $X$ is considered fixed with columns normalized to have mean $0$ and standard deviation $1$. Moreover, we assume $X$ satisfies the following Restricted Eigenvalue (RE) condition \cite{negahban2010unified} for some constants $\underline \kappa(s_1)$ and $\bar \kappa(s_1)$: \begin{equation} \underline \kappa(s_1) \cdot \|\theta\|_2^2 \leq \frac{1}{n} \|X\theta\|_2^2 \leq \bar \kappa(s_1) \cdot \|\theta\|_2^2 \,\,\, \text{ for all } \, \|\theta\|_0 \leq s_1. \end{equation} We will show that under the condition {\bf (MTM)}, GDT converges linearly to the true coefficient matrix $\Theta^*$ up to a region of statistical error. Compared to the previous methods for estimating jointly sparse and low rank coefficients \cite{Bunea2011Optimal, Chen2011Reduced, Chen2012Sparse, Ma2014Adaptive}, GDT is more scalable and improves estimation accuracy, as illustrated in the simulations in Section~\ref{sec:experiment}. In the context of multi-task learning with the model in \eqref{eqn:MTL_model}, we use the least squares loss. The objective function is $f(\Theta) = \frac{1}{2n} \| Y - X\Theta \|_F^2$ and we write $\Theta = UV^\top$ with $U \in \R^{p \times r}$ and $V \in \R^{k \times r}$. The constraint sets are, as before, $U \in \Ucal(s_1)$ and $V \in \Ucal(s_2)$ with $s_1 = c\cdot s_1^*, s_2 = c\cdot s_2^*$ for some $c>1$. The rank $r$ and the sparsity levels $s_1,s_2$ are tuning parameters, which can be selected using the information criterion as in \cite{She2017Selective}. In order to apply the results of Theorem~\ref{main}, we first verify the conditions in Section~\ref{sec:conditions}. The condition {\bf (RSC/RSS)} is equivalent to \begin{equation} \mu \big\|\Theta_2 - \Theta_1\big\|_F^2 \leq \Big\langle \frac 1n X^\top X(\Theta_2 - \Theta_1), \Theta_2 - \Theta_1 \Big\rangle \leq L \big\|\Theta_2 - \Theta_1\big\|_F^2, \end{equation} and it holds with $\mu = \underline \kappa(s_1)$ and $L = \bar \kappa(s_1)$.
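For this loss, one iteration of the factored gradient update with row-wise hard thresholding can be sketched as follows. This is a simplified sketch in our own notation: it omits any balancing step between $U$ and $V$ that the full Algorithm~\ref{algo:AltGD} may perform, and keeps only the gradient and thresholding parts.

```python
import numpy as np

def hard_threshold_rows(M, s):
    """Keep the s rows of M with largest Euclidean norm, zero out the rest."""
    keep = np.argsort(-np.linalg.norm(M, axis=1))[:s]
    out = np.zeros_like(M)
    out[keep] = M[keep]
    return out

def gdt_step(U, V, X, Y, eta, s1, s2):
    # Gradient step on f(U, V) = ||Y - X U V^T||_F^2 / (2n),
    # followed by hard thresholding to s1 rows of U and s2 rows of V.
    n = X.shape[0]
    resid = X @ U @ V.T - Y
    U_new = hard_threshold_rows(U - eta * (X.T @ resid @ V) / n, s1)
    V_new = hard_threshold_rows(V - eta * (resid.T @ X @ U) / n, s2)
    return U_new, V_new
```

Each step costs a constant number of matrix products of the factored sizes, which is what makes the iteration scalable relative to working with the full $p \times k$ matrix.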
Next, we discuss how to initialize GDT in the context of multi-task learning. Under the structural conditions on $\Theta^*$ in the condition {\bf (MTM)}, there are a number of ways to obtain an initial estimator ${\Theta}^0$. For example, we can use row and column screening \cite{fan08sis}, the group lasso \cite{Yuan2006Model}, and the lasso \cite{tibshirani96regression}, among other procedures. Here and in the simulations we use the lasso estimator, which takes the form \[ {\Theta}^0 = \arg\min_{\Theta \in \RR^{p \times k}} \frac{1}{2n} \| Y - X\Theta \|_F^2 + \lambda \|\Theta\|_1. \] The benefit of this approach is that it is scalable to the high-dimensional setting and trivially parallelizable, since each column of ${\Theta}^0$ can be estimated separately. The requirement of the initialization condition {\bf (I)} is effectively a requirement on the sample size. Under the condition {\bf (MTM)}, a result of \cite{negahban2010unified} shows that these conditions are satisfied for $n \geq s_1^*s_2^*\log p\log k$. \vspace{2mm} We then characterize the statistical error $e_{\text{stat}}$ under the condition {\bf (MTM)}. \begin{lemma} \label{lemma:e_stat} Under the condition {\bf (MTM)}, with probability at least $1 - (p \vee k)^{-1}$ we have \begin{equation} e_{\text{stat}} \leq C\sigma \sqrt{\frac{(s_1^*+s_2^*)\big(r + \log(p \vee k)\big)}{n}} \end{equation} for some constant $C$. \end{lemma} The proof of Lemma \ref{lemma:e_stat} is given in Section \ref{A:lemma:e_stat}. \vspace{2mm} With these conditions, we have the following result on GDT when applied to the multi-task learning model in \eqref{eqn:MTL_model}. \begin{corollary} Suppose that the condition {\bf (MTM)} is satisfied.
Then for all \begin{equation} T \geq C \log\bigg[\frac {n}{(s_1^*+s_2^*) \big(r+\log (p\vee k)\big)}\bigg], \end{equation} with probability at least $1 - (p \vee k)^{-1}$, we have \begin{equation} \begin{aligned} \label{eq:stat_error} \|\Theta^{T} - \Theta^*\|_F \leq C \sigma \sqrt{\frac{(s_1^*+s_2^*) \big(r+\log (p\vee k)\big)}{n}} \end{aligned} \end{equation} for some constant $C$. \label{stat_error_thm} \end{corollary} Each iteration of the algorithm requires computing the gradient step with time complexity $r(n+r)(p+k)$. Note that if there is no error term $E$ in the model \eqref{eqn:MTL_model}, then Algorithm \ref{algo:AltGD} converges linearly to the true coefficient matrix $\Theta^*$, since $e_{\text{stat}} = 0$ in that case. The error rate in Corollary~\ref{stat_error_thm} matches the error rate of the algorithm proposed in \cite{Ma2014Adaptive}. However, our algorithm does not require a new independent sample in each iteration and allows for non-Gaussian errors. Compared to the minimax rate \begin{equation} \label{eq:minimiax_rate} \sigma \sqrt{\frac 1n \bigg[(s_1^*+s_2^*)r + s_1^*\log\frac{ep}{s_1^*} + s_2^*\log\frac{ek}{s_2^*} \bigg]} \end{equation} established in \cite{Ma2014Adaptive}, both our algorithm and that of \cite{Ma2014Adaptive} attain this rate up to a multiplicative log factor. To the best of our knowledge, achieving the minimax rate \eqref{eq:minimiax_rate} with a computationally scalable procedure is still an open problem. Note, however, that when $r$ is comparable to $\log(p\vee k)$ the rates match up to a constant multiplier. Therefore, for large enough $T$, the GDT algorithm attains a near-optimal rate.
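To make the comparison with the minimax rate concrete, both rates can be evaluated numerically (the dimensions below are illustrative; constants are dropped):

```python
import math

def gdt_rate(n, p, k, r, s1, s2, sigma=1.0):
    # Rate of the corollary (up to constants): sigma * sqrt((s1+s2)(r + log(p v k)) / n)
    return sigma * math.sqrt((s1 + s2) * (r + math.log(max(p, k))) / n)

def minimax_rate(n, p, k, r, s1, s2, sigma=1.0):
    # sigma * sqrt([(s1+s2) r + s1 log(e p / s1) + s2 log(e k / s2)] / n)
    return sigma * math.sqrt(((s1 + s2) * r
                              + s1 * math.log(math.e * p / s1)
                              + s2 * math.log(math.e * k / s2)) / n)

# Example (illustrative dimensions): the two rates differ only by a log factor.
r_gdt = gdt_rate(1000, 500, 100, 3, 10, 8)
r_mm = minimax_rate(1000, 500, 100, 3, 10, 8)
```

For these dimensions the two expressions are within a small constant of each other, consistent with the discussion above.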
In case we do not consider column sparsity, that is, when $s_2^* = k$, Corollary~\ref{stat_error_thm} gives the error rate \begin{equation} \label{eq:stat_error_row} \|\Theta^{T} - \Theta^*\|_F \leq C\sigma \sqrt{\frac{kr + s_1^*\big(r+\log p\big)}{n}} \end{equation} and the prediction error \begin{equation} \label{eq:prediction_GDT} \|X\Theta^{T} - X\Theta^*\|_F^2 \leq C\sigma^2\Big(kr + s_1^*\big(r+\log p\big)\Big). \end{equation} Compared to the prediction error bound $kr + s_1^* r \log{\frac ps}$ proved in \cite{Bunea2012Joint}, the GDT error is much smaller, with $r+\log p$ in place of $r\log p$. Moreover, the GDT error matches the prediction error $(k+s_1^*-r)r + s_1^*\log{p}$ established in \cite{She2017Selective}, as long as $k \geq Cr$, which is typically satisfied. \section{Theoretical Result} \label{sec:theoretical} In this section, we formalize the conditions and state the main result on the linear convergence of our algorithm. We begin in Section~\ref{sec:conditions} by stating the conditions on the objective function $f$ and the initialization that are needed for our analysis. In Section~\ref{sec:main-result}, we state Theorem~\ref{main}, which guarantees, under these conditions, linear convergence to a statistically useful point. The proof outline is given in Section~\ref{sec:proof}. In Section~\ref{sec:MTL} to follow, we derive results for multi-task learning as corollaries of our main result. \subsection{Regularity Conditions} \label{sec:conditions} We start by stating mild conditions on the objective function $f$, which have been used in the literature on high-dimensional estimation and nonconvex optimization and hold with high probability for a number of statistical models of interest \cite{Zhao2015Nonconvex, Zhang2017Nonconvex, Ha2017Alternating}. Note that all the conditions depend on the choice of $s_1$ and $s_2$ (or equivalently, on $c$).
For $\Theta^* \in \Xi(r^*, s_1^*, s_2^*) $, let $\Theta^* = U_{\Theta^*}\Sigma_{\Theta^*}V_{\Theta^*}^\top$ be its singular value decomposition. Let $U^* = U_{\Theta^*}\Sigma_{\Theta^*}^{1/2}$ and $V^* = V_{\Theta^*}\Sigma_{\Theta^*}^{1/2}$ be the balanced decomposition of $\Theta^* = U^*V^{*\top}$. Note that the decomposition is not unique, as $\Theta^* = (U^*O)(V^*O)^{\top}$ for any orthogonal matrix $O \in \Ocal(r)$. Let $\sigma_1(\Theta^*) = \sigma_{\max}(\Theta^*)$ and $\sigma_r(\Theta^*) = \sigma_{\min}(\Theta^*)$ denote the maximum and minimum nonzero singular values of $\Theta^*$. The first condition is Restricted Strong Convexity and Smoothness on $f$. {\bf Restricted Strong Convexity and Smoothness (RSC/RSS).} There exist universal constants $\mu$ and $L$ such that \begin{equation} \label{eq:def_strongly_convex} \frac{\mu }{2} \|\Theta_2 - \Theta_1\|_F^2 \leq f(\Theta_2) - f(\Theta_1) - \langle \nabla f(\Theta_1), \Theta_2 - \Theta_1 \rangle \leq \frac{L }{2} \|\Theta_2 - \Theta_1\|_F^2 \end{equation} for all $\Theta_1, \Theta_2 \in \Xi(2r, \tilde s_1, \tilde s_2)$, where $\tilde s_1 = (2c+1)s_1^*$ and $\tilde s_2 = (2c+1)s_2^*$. \vspace{2mm} The next condition is on the initial estimate $\Theta^0$. It quantifies how close the initial estimator needs to be to $\Theta^*$ so that the iterates of GDT converge to a statistically useful solution. \vspace{2mm} {\bf Initialization (I).} Define $\mu_{\min} = \frac 18 \min\{1, \frac{\mu L }{\mu +L }\}$ and \begin{equation} \label{eq:def_I_0} I_0 = \frac 45 \mu_{\min}\sigma_r(\Theta^*) \cdot \min\Big\{ \frac{1}{\mu +L }, 2 \Big\}. \end{equation} We require \begin{equation} \label{eq:ass_init} \|\Theta^0 - \Theta^*\|_F \leq \frac 15 \min\Big\{ \sigma_r(\Theta^*), \frac{I_0}{\xi} \sqrt{\sigma_r(\Theta^*)} \Big\}, \end{equation} where $\xi^2 = 1 + \frac{2}{\sqrt{c-1}}$. We note that, in general, \eqref{eq:ass_init} defines a ball of constant radius around $\Theta^*$ into which the initial estimator needs to fall.
In particular, when considering statistical learning problems, the initial estimator is allowed to be inconsistent as the sample size increases. \vspace{2mm} Next, we define the notion of the statistical error, \begin{equation} \label{eq:def_stat_error} e_{\text{stat}} = \sup_{\substack{\Delta \in \Xi(2r, \tilde s_1, \tilde s_2) \\ \|\Delta\|_F \leq 1}} \langle \nabla f(\Theta^*), \Delta \rangle. \end{equation} Note that the statistical error quantifies how large the gradient of the objective evaluated at the true parameter $\Theta^*$ can be in the directions of simultaneously low-rank and sparse matrices. It implicitly depends on the choice of $c$, and, as we will see later, there is a trade-off in balancing the statistical error and the convergence rate of GDT: as $c$ increases, the statistical error gets larger, but a smaller step size can be chosen while still guaranteeing convergence. \vspace{2mm} With these two conditions, we are ready to state the choice of the step size in Algorithm~\ref{algo:AltGD}. \vspace{3mm} {\bf Step Size Selection.} We choose the step size $\eta$ to satisfy \begin{equation} \label{eq:eta_constant} \eta \leq \frac{1}{16\|Z_0\|_2^2} \cdot \min\Big\{\frac{1}{2(\mu +L )}, 1\Big\}. \end{equation} Furthermore, we require $\eta$ and $c$ to satisfy \begin{equation} \begin{aligned} \label{eq:def_beta} \beta = \xi^2 \rbr{1 - \eta\cdot \frac{2}{5} \mu_{\min}\sigma_r(\Theta^*) } < 1, \end{aligned} \end{equation} and \begin{equation} \begin{aligned} \label{eq:ass_estat} e_{\rm stat}^2 \leq \frac{1-\beta}{\xi^2\eta} \cdot \frac{L \mu }{L + \mu }\cdot I_0^2. \end{aligned} \end{equation} The condition that the step size $\eta$ satisfies \eqref{eq:eta_constant} is typical in the literature on convex optimization of strongly convex and smooth functions. Under \eqref{eq:def_beta} we will be able to show contraction after one iteration and progress towards $\Theta^*$.
The second term in \eqref{eq:def_beta} is always smaller than $1$, while the first term $\xi^2$ is slightly larger than $1$ and is the price we pay for the hard thresholding step. In order to show linear convergence we need to balance the choice of $\eta$ and $\xi^2$ to ensure that $\beta < 1$. From \eqref{eq:def_beta}, we see that if we select a small step size $\eta$, then we need to have a small $\xi^2$, which means a large $c$. Intuitively, if $\eta$ is too small, it may be impossible to change the row and column support in each iteration. In this case we have to keep many active rows and columns to make sure we do not miss the true signal. This leads to large $s_1$ and $s_2$, or equivalently to a large $c$. However, the statistical error \eqref{eq:def_stat_error} increases with $c$; this is the trade-off in the selection of $\eta$ and $c$. Finally, \eqref{eq:ass_estat} guarantees that the iterates do not run outside of the initial ball given in \eqref{eq:ass_init}. If \eqref{eq:ass_estat} is violated, then the initial point $\Theta^0$ is already a good enough estimate of $\Theta^*$. Therefore, this requirement is not restrictive. In practice, we found that the selection of $\eta$ and $c$ is not restrictive and convergence is guaranteed for a wide range of their values. \subsection{Main Result} \label{sec:main-result} Our main result establishes linear convergence of the GDT iterates to $\Theta^*$ up to the statistical error. Since the factorization of $\Theta^*$ is not unique, we measure the subspace distance of the iterates $({U}^t, {V}^t)$ to the balanced decomposition $ U^*(V^*)^\top = \Theta^*$. {\bf Subspace distance.} Let $Z^* = \sbr{\begin{array}{c}U^* \\ V^* \end{array}}$ where $\Theta^* = U^*{V^*}^\top$ and $\sigma_i(U^*) = \sigma_i(V^*)$ for each $i = 1, ..., r$.
Define the subspace distance between $Z = \sbr{\begin{array}{c}U \\ V \end{array}}$ and $Z^* = \sbr{\begin{array}{c}U^* \\ V^* \end{array}}$ as \begin{equation} d^2(Z,Z^*) = \min_{O \in \Ocal(r)} \cbr{\|U - U^*O \|_F^2 + \|V - V^*O \|_F^2}. \end{equation} With this, we are ready to state our main result. \begin{theorem} \label{main} Suppose the conditions {\bf (RSC/RSS)} and {\bf (I)} are satisfied and the step size $\eta$ satisfies \eqref{eq:eta_constant} - \eqref{eq:ass_estat}. Then after $T$ iterations of GDT (Algorithm \ref{algo:AltGD}), we have \begin{equation} \begin{aligned} d^2(Z^{T}, Z^*) \leq \beta^T \cdot d^2(Z^0, Z^*) + \frac{\xi^2\eta}{1-\beta}\cdot\frac{L + \mu }{L \cdot\mu }\cdot e_{\rm stat}^2 . \label{theoremUV} \end{aligned} \end{equation} Furthermore, for $\Theta^{T} = U^{T}(V^{T})^{\top}$ we have \begin{equation} \begin{aligned} \|\Theta^{T} - \Theta^*\|_F^2 \leq 4\sigma_1(\Theta^*) \cdot \Big[ \beta^T \cdot d^2(Z^0, Z^*) + \frac{\xi^2\eta}{1-\beta}\cdot\frac{L + \mu }{L \cdot\mu }\cdot e_{\rm stat}^2 \Big]. \label{theoremtheta} \end{aligned} \end{equation} \end{theorem} The proof sketch of Theorem~\ref{main} is given in the following section. Conceptually, Theorem~\ref{main} provides a minimal set of conditions for convergence of GDT. The first terms in equations \eqref{theoremUV} and \eqref{theoremtheta} correspond to the optimization error, whereas the second terms correspond to the statistical error. These bounds show that the distance between the iterates and $\Theta^*$ drops exponentially fast up to the statistical limit $e_{\text{stat}}$, which is problem specific. In statistical learning problems, it commonly depends on the sample size and the signal-to-noise ratio of the problem. Theorem~\ref{main} provides convergence in a statistical setting to the ``true'' parameter $\Theta^*$.
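The subspace distance above can be evaluated in closed form, since the minimization over $\Ocal(r)$ is an orthogonal Procrustes problem; a numerical sketch (function name is ours):

```python
import numpy as np

def subspace_distance_sq(U, V, U_star, V_star):
    """d^2(Z, Z*) = min_{O orthogonal} ||U - U* O||_F^2 + ||V - V* O||_F^2.
    Stacking Z = [U; V], the optimal O is A B^T, where A diag(s) B^T is the
    SVD of Z*^T Z (the orthogonal Procrustes solution)."""
    Z = np.vstack([U, V])
    Z_star = np.vstack([U_star, V_star])
    A, _, Bt = np.linalg.svd(Z_star.T @ Z)
    O = A @ Bt
    return np.linalg.norm(Z - Z_star @ O, "fro") ** 2
```

In particular, the distance is zero whenever $(U, V) = (U^*O_0, V^*O_0)$ for some orthogonal $O_0$, reflecting the rotational non-uniqueness of the factorization.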
However, as mentioned in Section~\ref{sec:methodology}, Algorithm~\ref{algo:AltGD} and Theorem~\ref{main} can also be used to establish linear convergence to a global minimizer in a deterministic setting. Suppose $(\hat U, \hat V) \in \arg\min_{U \in \Ucal, V \in \Vcal}\{f(U,V)\}$ is a global minimizer and $\hat \Theta = \hat U \hat V^{\top}$. Furthermore, assume that the conditions in Section \ref{sec:conditions} are satisfied with $\hat \Theta$ in place of $\Theta^*$. Then the iterates $\{\Theta^t\}$ obtained by GDT converge linearly to the global minimum $\hat \Theta$ up to the error $\hat e_{\text{stat}}$, defined similarly to \eqref{eq:def_stat_error} with $\hat \Theta$ in place of $\Theta^*$. This error comes from sparsity and hard thresholding. In particular, suppose there are no row or column sparsity constraints in the optimization problem \eqref{eq:opt:rep}, so that there are no hard-thresholding steps in Algorithm~\ref{algo:AltGD}. Then $\hat e_{\text{stat}} = 0$, so that the iterates $\{\Theta^t\}$ converge linearly to $\hat \Theta$, recovering the result of \cite{Zhao2015Nonconvex}. \subsection{Proof Sketch of Theorem \ref{main}} \label{sec:proof} In this section we sketch the proof of our main result. The proof combines three lemmas. The first quantifies the accuracy of the initialization step. The second quantifies the improvement in accuracy from one step of GDT. The third lemma shows that the step size assumed in Theorem~\ref{main} satisfies the conditions of the second lemma. Detailed proofs of these lemmas are relegated to Section~\ref{sec:appendix}. \vspace{2mm} Our first lemma quantifies the accuracy of the initialization step. \begin{lemma} \label{lemmaInit} Suppose that the input to GDT, $\Theta^0$, satisfies the initialization condition \eqref{eq:ass_init}.
Then the initial iterates $ U^0$ and $V^0$ obtained in lines $3$ and $4$ of Algorithm~\ref{algo:AltGD} satisfy \begin{equation} \label{eq:bound_I_0} d(Z^0, Z^*) \leq I_0, \end{equation} where $Z^0 = \sbr{\begin{array}{c}U^0 \\ V^0 \end{array}}$ and $I_0$ is defined in \eqref{eq:def_I_0}. \end{lemma} The proof of Lemma~\ref{lemmaInit} is given in Section~\ref{A:lemmaInit}. \begin{lemma} \label{lemma:iteration_contraction} Suppose the conditions {\bf (RSC/RSS)}, {\bf (I)} are satisfied. Assume that the point $Z = \sbr{\begin{array}{c}U \\ V \end{array}}$ satisfies $d(Z, Z^*) \leq I_0$. Let $(U^+, V^+)$ denote the next iterate obtained with Algorithm~\ref{algo:AltGD} with the step size $\eta$ satisfying \begin{equation} \label{eq:step_size_condition} \eta \leq \frac{1}{8\|Z\|_2^2} \cdot \min\Big\{\frac{1}{2(\mu +L )}, 1\Big\}. \end{equation} Then we have \begin{equation} \label{eq:lemma_contraction_result} d^2(Z^+, Z^*) \leq \xi^2 \bigg[ \Big(1 - \eta\cdot \frac{2}{5} \mu_{\min}\sigma_r(\Theta^*) \Big)\cdot d^2(Z, Z^*) + \eta\cdot\frac{L + \mu }{L \cdot\mu }\cdot e_{\rm stat}^2 \bigg], \end{equation} where $\xi^2 = 1 + \frac{2}{\sqrt{c - 1}}$. \end{lemma} The proof of Lemma~\ref{lemma:iteration_contraction} is given in Section~\ref{A:lemma:iteration_contraction}. \begin{lemma} \label{lemma:step_size_constant} Suppose $Z = \sbr{\begin{array}{c}U \\ V \end{array}}$ satisfies $d(Z, Z^*) \leq I_0$. We have that the choice of step size \eqref{eq:eta_constant} in Theorem \ref{main} satisfies the condition \eqref{eq:step_size_condition} in Lemma \ref{lemma:iteration_contraction}. \end{lemma} The proof of Lemma~\ref{lemma:step_size_constant} is given in Section~\ref{A:lemma:step_size_constant}. \vspace{4mm} Combining the three results above, we can complete the proof of Theorem~\ref{main}. 
Starting from an initialization $\Theta^0$ satisfying the initialization condition \eqref{eq:ass_init}, Lemma~\ref{lemmaInit} ensures that \eqref{eq:bound_I_0} is satisfied for $Z^0$, and Lemma \ref{lemma:step_size_constant} ensures that the choice of step size \eqref{eq:eta_constant} satisfies the step size condition \eqref{eq:step_size_condition} in Lemma \ref{lemma:iteration_contraction}. We can then apply Lemma \ref{lemma:iteration_contraction} to obtain the next iterate $Z^1 = Z^+$, which satisfies \eqref{eq:lemma_contraction_result}. Using the condition on the statistical error \eqref{eq:ass_estat}, the initialization condition \eqref{eq:ass_init}, and a simple calculation, we can verify that $Z^1$ satisfies $d(Z^1, Z^*) \leq I_0$. Therefore we can apply Lemma \ref{lemma:iteration_contraction} and Lemma \ref{lemma:step_size_constant} repeatedly to obtain \begin{equation} d^2(Z^{t+1}, Z^*) \leq \beta \cdot d^2(Z^t, Z^*) + \xi^2\eta\cdot\frac{L + \mu }{L \cdot\mu }\cdot e_{\rm stat}^2, \end{equation} for each $t = 0, 1, ..., T-1$. We then have \begin{equation} d^2(Z^{T}, Z^*) \leq \beta^T \cdot d^2(Z^0, Z^*) + \frac{\xi^2\eta}{1-\beta}\cdot\frac{L + \mu }{L \cdot\mu }\cdot e_{\rm stat}^2 . \end{equation} Finally, for $\Theta^{T} = U^{T}(V^{T})^{\top}$, let $O^T \in \Ocal(r)$ be such that \[ d^2(Z^T, Z^*) = \|U^T - U^* O^T \|_F^2 + \|V^T - V^* O^T \|_F^2. \] We have \begin{equation} \begin{aligned} \|\Theta^{T} - \Theta^*\|_F^2 &= \| U^{T}(V^{T})^{\top} - U^{*}O^T (V^{*}O^T)^{\top} \|^2_F\\ &\leq \Big[ \| U^{T}\|_2\|V^{T}-V^{*}O^T\|_F + \|V^{*}\|_2\| U^{T} - U^{*}O^T \|_F \Big]^2 \\ &\leq 2\| U^{T}\|_2^2\|V^{T}-V^{*}O^T\|_F^2 + 2\|V^{*}\|_2^2\| U^{T} - U^{*}O^T \|_F^2 \\ &\leq 2\|Z^*\|_2^2 \cdot d^2(Z^T, Z^*) \\ &\leq 4\sigma_1(\Theta^*) \cdot \Big[ \beta^T \cdot d^2(Z^0, Z^*) + \frac{\xi^2\eta}{1-\beta}\cdot\frac{L + \mu }{L \cdot\mu }\cdot e_{\rm stat}^2 \Big], \end{aligned} \end{equation} which shows linear convergence up to the statistical error.
\section{Introduction} \input{tex/Problem.tex} \input{tex/TechnicalApproach.tex} \input{tex/Results.tex} \input{tex/Conclusion.tex} \appendices \input{tex/Appendix.tex} \section*{Acknowledgment} Pedro P. V. Tecchio would like to thank CNPq - Brazil for its support. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \section{The Barycentric Coordinates Approach} \label{sec:TheBarycentricApproach} \subsection{Cayley-Menger bi-determinants} Cayley-Menger determinants provide a relation between the Euclidean distances between points in space and the signed volume of the simplex formed by those points. \cite{KhanDiloc} and \cite{DiaoECHO} use the absolute value of these signed volumes to compute the barycentric coordinates of nodes in sensor networks from their noiseless range measures. Similarly to \cite{ThomasTrilateration}, we introduce the concept of Cayley-Menger bi-determinants. While Cayley-Menger determinants operate on a single set of points, bi-determinants operate on two sets, providing a relation between the product of the volumes of the two sets and the Euclidean distances between points of different sets. We define both types of Cayley-Menger determinants following Blumenthal's work in \cite{blumenthal1970theory}. Let two sets of $n+1$ points, $\mathcal{X} = \{{\bf x}_{0}, \hdots, {\bf x}_{n}\}$ and $\mathcal{Y} = \{{\bf y}_{0}, \hdots, {\bf y}_{n}\}$, be defined by their Cartesian coordinates, such that ${\bf x_i} = [x_{1i}, \hdots, x_{ni}]^T \in \mathbb{R}^{n \times 1}$ and ${\bf y_i} = [y_{1i}, \hdots, y_{ni}]^T \in \mathbb{R}^{n \times 1}$ for $0 \leq i \leq n$.
Then, the Cayley-Menger bi-determinant of $\mathcal{X}$ and $\mathcal{Y}$ is defined as follows: \begin{equation} \begin{array}{l} D({\bf x}_0, \hdots, {\bf x}_{n};{\bf y}_0, \hdots, {\bf y}_{n}) = \\ 2\left(-\frac{1}{2}\right)^{n+1} \left| \begin{smallmatrix} 0 & 1 & 1 & \hdots & 1 \\ 1 & d({\bf x}_0, {\bf y}_0)^2 & d({\bf x}_0, {\bf y}_1)^2 & \hdots & d({\bf x}_{0}, {\bf y}_{n})^2\\ 1 & d({\bf x}_1, {\bf y}_0)^2 & d({\bf x}_1, {\bf y}_1)^2 & \hdots & d({\bf x}_{1}, {\bf y}_{n})^2\\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & d({\bf x}_{n}, {\bf y}_0)^2 & d({\bf x}_{n}, {\bf y}_1)^2 & \hdots & d({\bf x}_{n}, {\bf y}_{n})^2\\ \end{smallmatrix} \right|, \end{array} \label{eq:CayleyMengerBiDet} \end{equation} where $d({\bf x}_i, {\bf y}_j)^2 = ||{\bf x}_i - {\bf y}_j||^2 = ({\bf x}_i-{\bf y}_j)^T({\bf x}_i-{\bf y}_j)$ for all $0 \leq i,j \leq n$. The Cayley-Menger determinant of $\mathcal{X}$ can be defined from the bi-determinant as follows: \begin{equation} D({\bf x}_0, \hdots, {\bf x}_n) = D({\bf x}_0, \hdots, {\bf x}_n;{\bf x}_0, \hdots, {\bf x}_n). \label{eq:CayleyMengerDet} \end{equation} As one can infer from equation \eqref{eq:CayleyMengerDet}, the Cayley-Menger determinant is a special case of the more general bi-determinant. Next, we formally state the relationship between the signed volumes of sets of points in $n$-dimensional Euclidean space and their Cayley-Menger bi-determinant. \begin{proposition} \label{prop:CayleyMengerAndVolume} The Cayley-Menger bi-determinant of two sets of $n+1$ points, $\mathcal{X} = \{{\bf x}_{0}, \hdots, {\bf x}_{n}\}$ and $\mathcal{Y} = \{{\bf y}_{0}, \hdots, {\bf y}_{n}\}$, in $\mathbb{R}^n$ is related to the product of the signed volumes of the two sets by \begin{equation} D({\bf x}_0, \hdots, {\bf x}_n; {\bf y}_0, \hdots, {\bf y}_n) = (n!)^2\text{Vol}(\mathcal{X}) \text{ }\text{Vol}(\mathcal{Y}). \end{equation} \begin{proof} See Appendix \ref{app:CayleyMengerBidet}.
\end{proof} \end{proposition} It is important to notice that determinants are alternating forms, so the order in which their elements are arranged is essential to their correct computation. The following sections show how we leverage Proposition \ref{prop:CayleyMengerAndVolume} to compute barycentric coordinates of points inside and outside the convex-hull of their $n+1$ neighbors, which allows us to obtain results similar to, but more concise than, those of \cite{DiaoECHO}. \subsection{Computing Barycentric Coordinates} \label{sec:ComputingBarycentricCoordinates} Barycentric coordinates are usually defined using the concepts of points, affine spaces and affine frames. An affine space $\mathbf{X}$ is defined by a collection of points, a vector space and a function relating the two. An affine frame is a set of points in an affine space with origin ${\bf x}_0$, $\{{\bf x}_i\}_{i = 0,1,...,n}$, such that the vectors $\{\overrightarrow{{\bf x}_0{\bf x}_1},\overrightarrow{{\bf x}_0{\bf x}_2},\cdots,\overrightarrow{{\bf x}_0{\bf x}_n}\}$ are linearly independent, i.e., they form a basis for the embedded vector space $\overrightarrow{\mathbf{X}}$. By taking a field $\mathbf{K}$ such as $\mathbb{R}$, one can define barycentric coordinates as follows: \begin{proposition}[{\cite[Prop.3.6.2]{berger2009geometry}}] Let $\{{\bf x}_i\}_{i = 0,1,...,n}$ be a frame for an affine space $\mathbf{X}$. For any point ${\bf x} \in \mathbf{X}$ there exist $\lambda_i \in \mathbf{K}$, $0 \leq i \leq n$, such that $\sum_i \lambda_i = 1$ and ${\bf x} = \sum_i \lambda_i {\bf x}_i$. The scalars $\lambda_i$ are uniquely defined by this property and are called the barycentric coordinates of ${\bf x}$ in the frame $\{{\bf x}_i\}_{i = 0,1,...,n}$. \label{prop:AffineFrame} \end{proposition} From Proposition \ref{prop:AffineFrame}, one can infer that in order to localize the unknown nodes in the network, one needs to know the locations of $n+1$ nodes in an $n$-dimensional space.
Moreover, the $n+1$ anchor nodes must form an affine frame for the inherent affine space. Suppose that our set of $n+1$ anchor node locations, represented by their Cartesian coordinates $\mathcal{X}_a = \{\mathbf{x}_{a_0}, \mathbf{x}_{a_1}, \cdots, \mathbf{x}_{a_n}\}$, forms a frame for an affine space $\mathbf{X}$. We can compute the barycentric coordinates ${\bf \lambda}$ of any unknown node ${\bf x} \in \mathbf{X}$ by solving the following linear system, where $ X = [\mathbf{x}_{a_0} \mathbf{x}_{a_1} \cdots \mathbf{x}_{a_n}]$, \begin{equation} \begin{array}{llcl} {\bf x} = \sum_i \lambda_i {\bf x_{a_i}}, & \sum_i \lambda_i = 1 & \Leftrightarrow & \begin{bmatrix} X \\ {\bf 1}^T \end{bmatrix} {\bf \lambda} = \begin{bmatrix} {\bf x} \\ 1 \end{bmatrix}. \end{array} \label{eq:BarycentricCoordinatesDefinition} \end{equation} Because our set of anchor nodes forms a frame in an affine space, the solution is unique by Proposition \ref{prop:AffineFrame}. Hence, $V_X = \begin{bmatrix} X \\ {\bf 1}^T \end{bmatrix} $ has non-zero determinant. Using Cramer's rule, we can compute each barycentric coordinate, $\lambda_i$, as follows \begin{equation} \lambda_i = \frac{ \begin{vmatrix} X_i \\ {\bf 1}^T \end{vmatrix}}{ \begin{vmatrix} X \\ {\bf 1}^T \end{vmatrix} } \end{equation} where $X_i$ is the matrix obtained from $X$ by replacing its $i^{th}$ column with ${\bf x}$. The volumes of the sets of points $\mathcal{X}_a$ and $\mathcal{X}_{a_i}$, where the $i^{th}$ point is replaced by ${\bf x}$, are given by \begin{equation} \begin{array}{lclclcl} \text{Vol}(\mathcal{X}_a) &=& \frac{1}{n!} \begin{vmatrix} X \\ {\bf 1}^T \end{vmatrix}, & & \text{Vol}(\mathcal{X}_{a_i}) &=& \frac{1}{n!} \begin{vmatrix} X_i \\ {\bf 1}^T \end{vmatrix} \end{array}.
\end{equation} Therefore, each barycentric coordinate, $\lambda_i$, can be computed in terms of these volumes by \begin{equation} \lambda_i = \frac{ \begin{vmatrix} X_i \\ {\bf 1}^T \end{vmatrix}}{ \begin{vmatrix} X \\ {\bf 1}^T \end{vmatrix} } = \frac{(n!)\text{Vol}(\mathcal{X}_{a_i})}{(n!)\text{Vol}(\mathcal{X}_a)}. \end{equation} Multiplying both the numerator and the denominator by $(n!)\text{Vol}(\mathcal{X}_a)$ does not change the value, but allows us to conclude that the barycentric coordinates $\lambda_i$, for all $0\leq i\leq n$, can be computed using Cayley-Menger bi-determinants and determinants. \begin{equation} \lambda_i = \frac{D({\bf x_0,\hdots,x_n};{\bf x_0, \hdots, x_{i-1},x, x_{i+1}, \hdots, x_n})}{D({\bf x_0,\hdots,x_n})} \label{eq:BarycentricCoordinates} \end{equation} We emphasize that equation \eqref{eq:BarycentricCoordinates} provides a method to compute the barycentric coordinates of any point in an affine space based on any set of points that form an affine frame for that space. Moreover, if one maintains the ordering of points throughout all determinants, the sign of each barycentric coordinate will be correctly computed. Notice that the sign of each barycentric coordinate is associated with the position of the point relative to the ones forming the affine frame used in its computation. Any point strictly inside the convex-hull of the affine frame will have strictly positive coordinates; otherwise, it will have at least one zero or negative coordinate. Since the coordinates sum to one, they cannot all be negative. Moreover, the order in which the different signs appear depends on the partial ordering of the $n+1$ hyper-planes generated on the $n$-dimensional space. It is important to notice that if we consider each node as a point in an affine space, then allowing nodes to be outside of the convex-hull of their neighbors is equivalent to having zero or negative barycentric coordinates.
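The ratio in equation \eqref{eq:BarycentricCoordinates}, and the sign behavior just described, can be illustrated with a minimal NumPy sketch. This is our own illustrative implementation for a triangle in $\mathbb{R}^2$, not code from the paper:

```python
import numpy as np

def cayley_menger_bidet(X, Y):
    """Cayley-Menger bi-determinant of two sets of n+1 points in R^n.

    X, Y: (n+1, n) arrays whose rows are the points, in a fixed order."""
    n1 = X.shape[0]                      # n + 1 points
    M = np.ones((n1 + 1, n1 + 1))
    M[0, 0] = 0.0
    # Squared distances d(x_i, y_j)^2 between points of the two sets.
    M[1:, 1:] = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)
    return 2.0 * (-0.5) ** n1 * np.linalg.det(M)

def barycentric(x, frame):
    """Barycentric coordinates of x w.r.t. an affine frame, as the ratio of a
    bi-determinant (i-th frame point replaced by x) to the determinant."""
    den = cayley_menger_bidet(frame, frame)   # Cayley-Menger determinant
    lam = np.empty(len(frame))
    for i in range(len(frame)):
        Y = frame.copy()
        Y[i] = x                              # replace the i-th frame point by x
        lam[i] = cayley_menger_bidet(frame, Y) / den
    return lam

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
# Centroid lies strictly inside the hull: all coordinates positive (1/3 each).
inside = barycentric(np.array([1 / 3, 1 / 3]), tri)
# A point outside the hull gets a negative coordinate; the sum is still one.
outside = barycentric(np.array([1.0, 1.0]), tri)   # ~[-1, 1, 1]
```

With consistent point ordering, the signs come out directly from the determinant ratio, without the absolute values used in \cite{KhanDiloc} and \cite{DiaoECHO}.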
This constitutes the main difference between the DILOC and ECHO algorithms, \cite{KhanDiloc} and \cite{DiaoECHO} respectively. Fig. \ref{fig:GeneralizedBaryCoord} shows an example in three-dimensional space of the $2^{n+1}-1$ possible regions where the barycentric coordinate signs are strictly positive or negative. The convex-hull of the set of $n+1$ points forming the affine frame is depicted with all of its edges drawn in black. \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{fig/baryCoord3D_2-eps-converted-to.pdf} \caption{Regions of different barycentric coordinate signs.} \label{fig:GeneralizedBaryCoord} \end{figure} \subsection{Generalizing Barycentric Coordinates} We developed equation \eqref{eq:BarycentricCoordinates} by using the anchor nodes as an affine frame, but any set of $n+1$ nodes that form an affine frame for the same space can be used to compute different barycentric coordinates of the same node of interest, provided that one has all required range measurements between nodes. This concept was utilized in \cite{DiaoECHO} to create the Generalized Barycentric Coordinates, which we extend next to the $n$-dimensional case. Let $\mathcal{N}_l$ be the index set of neighbors of node $l$, defined as follows \begin{equation} \small{ \mathcal{N}_l = \left\{j \in \{1,2,\hdots,m \} \setminus \{l\} |\hspace{.5mm} ||\mathbf{x}_l-\mathbf{x}_j||_2 \leq \min{(r_l,r_j)}\right\}.} \label{eq:Neighbors} \end{equation} Let $\mathcal{I}_{l}$ be a family of sets of $n+1$ indexes given by the combinations without repetition of members of $\mathcal{N}_l$ which are also neighbors of one another.
\begin{equation} \mathcal{I}_{l} = \{ \mathcal{V}_{J} \in \mathcal{N}_l^{\times n+1} | \mathcal{J} = \{\mathcal{V}_{J}, \mathcal{E}_{J}\}, \mathcal{E}_{J} \subset \mathcal{E} \text{ and } \mathcal{J} \in \mathbb{K}_{n+1} \}, \label{eq:FamilySets} \end{equation} where $\mathcal{J}$ is a subgraph of our network graph $\mathcal{G}$ and $\mathbb{K}_{n+1}$ is the set of complete graphs with $n+1$ vertices. Therefore, the cardinality of $\mathcal{I}_{l}$ satisfies $|\mathcal{I}_{l}| \leq \begin{pmatrix} |\mathcal{N}_{l}| \\ n+1 \end{pmatrix} $. For each node $l$, we define sets of $n+1$ points in $\mathbb{R}^{n}$, $\mathcal{X}_{l_i}$, $1 \leq i \leq |\mathcal{I}_{l}|$, such that $$\mathcal{X}_{l_i} = \{\mathbf{x}_{\mathcal{I}_{l_{i1}}}, \mathbf{x}_{\mathcal{I}_{l_{i2}}}, \cdots, \mathbf{x}_{\mathcal{I}_{l_{i n+1}}}\},$$ and sets of $n+1$ points in the same space, $\mathcal{Y}_{l_{ij}}$, $1 \leq j \leq n+1$, such that $$\mathcal{Y}_{l_{ij}} = \{\mathbf{x}_{\mathcal{I}_{l_{i1}}}, \mathbf{x}_{\mathcal{I}_{l_{i2}}}, \cdots, \mathbf{x}_{\mathcal{I}_{l_{i j-1}}}, \mathbf{x}_{l}, \mathbf{x}_{\mathcal{I}_{l_{i j+1}}}, \cdots, \mathbf{x}_{\mathcal{I}_{l_{i n+1}}}\}.$$ Using the previously defined sets with the correct indexes applied to equation \eqref{eq:BarycentricCoordinates}, one can compute all possible barycentric coordinates for each node $l$. \begin{proposition} \label{prop:BarycentricCoordinates} For each node $l$ and each set $\mathcal{I}_{l_i}$, if $D(\mathcal{X}_{l_i}) \neq 0$, then the barycentric coordinates of node $l$, $\lambda_l$, with respect to its neighbors identified by the set $\mathcal{I}_{l_i}$ are \begin{equation} [\lambda_{l_{ij}}]_k = \left\{ \begin{array}{ll} \dfrac{D(\mathcal{X}_{l_i};\mathcal{Y}_{l_{ij}})}{D(\mathcal{X}_{l_i})}, & \text{if } k = \mathcal{I}_{l_{ij}}\\ 0, & \text{otherwise}\\ \end{array} \right.. \end{equation} \begin{proof} Follows from equation \eqref{eq:BarycentricCoordinates} by taking the appropriate nodes and their indexes.
\end{proof} \end{proposition} If one arranges all computed barycentric coordinates for each node $l$ as previously specified in matrix form, with one row per combination, the resulting matrix has $|\mathcal{I}_{l}|$ rows and $m$ columns. One possible way to utilize the result in Proposition \ref{prop:BarycentricCoordinates} is to concatenate all these matrices, generating an overdetermined linear system. Another way is to compute the Generalized Barycentric Coordinates proposed by Diao et al. in \cite{DiaoECHO}. These generalized coordinates are computed by averaging all possible barycentric coordinates for each node $l$, as stated in Proposition \ref{prop:GeneralizedBarycentricCoordinates}. \begin{proposition}[\cite{DiaoECHO}] \label{prop:GeneralizedBarycentricCoordinates} For each node $l$, its generalized barycentric coordinate, $\boldsymbol{\lambda}_l \in \mathbb{R}^{m \times 1}$, can be computed as follows \begin{equation} [\boldsymbol{\lambda}_l]_j = \frac{1}{|\mathcal{I}_l|} \sum_{i = 1}^{|\mathcal{I}_l|}{\lambda_{l_{ij}}} \text{ , } 1 \leq j \leq m. \label{eq:GeneralizedBaryCoord} \end{equation} \end{proposition} One may wonder about the possibility of using a smaller subset of barycentric coordinates to compute a modified set of generalized coordinates. Is there a trade-off between accuracy and efficiency related to the number of subsets used? We seek to provide some insights via numerical simulations in Section \ref{sec:Evaluation}. As the generalized barycentric coordinates in equation \eqref{eq:GeneralizedBaryCoord} are computed through averaging, the property of summing to one is preserved. Moreover, one can use all $m$ generalized barycentric coordinates to compute the unknown node coordinates by simply solving a linear system, as shown in the next section.
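The averaging in equation \eqref{eq:GeneralizedBaryCoord}, and the fact that it preserves the sum-to-one property, can be sketched with made-up per-subset rows. The numbers below are purely illustrative placeholders, not coordinates from any real network:

```python
import numpy as np

# Toy per-subset barycentric rows for one node l in a network of m = 5 nodes.
# Each row holds the coordinates of node l w.r.t. one subset of n+1 = 3
# neighbors, padded with zeros at non-member indices (values are made up).
rows = np.array([
    [0.2, 0.5, 0.3, 0.0, 0.0],   # subset {1, 2, 3}
    [0.0, 0.6, 0.0, 0.7, -0.3],  # subset {2, 4, 5}: node outside this hull,
                                 # hence one negative entry
    [0.4, 0.0, 0.1, 0.5, 0.0],   # subset {1, 3, 4}
])
assert np.allclose(rows.sum(axis=1), 1.0)   # each row sums to one

# Eq. (GeneralizedBaryCoord): entrywise average over the |I_l| subsets.
lam_gen = rows.mean(axis=0)
# Averaging vectors that each sum to one yields a vector that sums to one,
# which is the property used later when forming G X = X.
assert np.isclose(lam_gen.sum(), 1.0)
```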
\subsection{Localizing the unknown nodes} Finally, if one leverages the theory presented in the previous sections in an $n$-dimensional noiseless range-only static sensor network, one can utilize the existing range measurements to compute generalized barycentric coordinates for each network node using Cayley-Menger bi-determinants, as stated in Propositions \ref{prop:BarycentricCoordinates} and \ref{prop:GeneralizedBarycentricCoordinates}. \begin{theorem} The unknown node locations, $X_u$, of the problem stated in Section \ref{sec:problem} can be computed by solving the following linear system, whenever $(I - D)^{-1}$ exists: \begin{equation} (I_{q\times q} - D_{q\times q}) \cdot X_{u_{q\times n}} = C_{q\times p} \cdot X_{a_{p\times n}}, \label{eq:ErrorFreeSolution} \end{equation} where the matrices $C \in \mathbb{R}^{q \times p}$ and $D \in \mathbb{R}^{q \times q}$ are given in equations \eqref{eq:LinearSystem} and \eqref{eq:LinearSystemBlockMatrices} and $X_a$ contains the given location coordinates of the anchor nodes. \begin{proof} We know from Proposition \ref{prop:AffineFrame} that any unknown node can be localized, using barycentric coordinates, in relation to neighboring nodes that form a frame in an affine space. Propositions \ref{prop:BarycentricCoordinates} and \ref{prop:GeneralizedBarycentricCoordinates} show us how to compute these barycentric coordinates. Thus, one can compute the coordinates of each node as the linear combination of this node's generalized barycentric coordinates with the coordinates of all other nodes. Arrange these linear combinations in matrix form as $G \in \mathbb{R}^{m\times m}$, with $G^T = [\boldsymbol{\lambda}_1, \boldsymbol{\lambda}_2, \hdots, \boldsymbol{\lambda}_m]$, and collect all node Cartesian coordinates as $X \in \mathbb{R}^{m\times n}$, with $X^T = [\mathbf{x}_1, \mathbf{x}_2, \hdots, \mathbf{x}_m]$. Then it is true that \begin{equation} G \cdot X = X.
\label{eq:LinearSystem} \end{equation} Next, one can permute the rows of $X$ to isolate anchor and unknown nodes, so that $X = [X_a; X_u]$. Applying the same permutation to the rows and columns of $G$, we can write \begin{equation} \begin{bmatrix} A_{p\times p} & B_{p\times q}\\ C_{q\times p} & D_{q\times q} \end{bmatrix} \begin{bmatrix} X_{a_{p\times n}}\\ X_{u_{q\times n}} \end{bmatrix} = \begin{bmatrix} X_{a_{p\times n}}\\ X_{u_{q\times n}} \end{bmatrix}, \label{eq:LinearSystemBlockMatrices} \end{equation} where $p$ is the number of anchors, $n+1 \leq p < m$, and $q$ is the number of unknowns, $q = m - p$. Since the anchor node coordinates are known, $X_a$ is given, and one can use equation \eqref{eq:ErrorFreeSolution} to find the unknown node coordinates $X_u$ exactly if and only if the matrix $I - D$ is invertible. \end{proof} \label{theo:ProblemSolution} \end{theorem} The linear system in Theorem \ref{theo:ProblemSolution} has a unique solution if and only if $(I_{q\times q} - D_{q\times q})^{-1}$ exists. In \cite{DiaoECHO}, Diao et al. provide necessary and sufficient conditions for its existence based on the number of disjoint paths from unknown nodes to anchor nodes in the two-dimensional case. As Theorem \ref{theo:ProblemSolution} generalizes their problem to the $n$-dimensional case, we can state their result in Corollary \ref{cor:ECHO2DTheorem} with the necessary reference changes. \begin{corollary}[Adapted from \cite{DiaoECHO}, Theorem 1] A sensor network in $\mathbb{R}^2$ with generic configuration $[X_a^T X_u^T]$ is localizable using the barycentric coordinate representation by solving \eqref{eq:ErrorFreeSolution}, i.e. the matrix $I - D$ is nonsingular, if and only if every node to be localized has at least three disjoint paths from the set of anchor nodes in $\mathcal{G}_{\hat{G}}$.
\label{cor:ECHO2DTheorem} \end{corollary} Corollary \ref{cor:ECHO2DTheorem} refers to the undirected graph $\mathcal{G}_{\hat{G}} = (\mathcal{V}_{\hat{G}},\mathcal{E}_{\hat{G}})$, constructed from our initial network representation graph $\mathcal{G}$. The two graphs share the same set of vertices, $\mathcal{V}_{\hat{G}} = \mathcal{V}$, but possibly different edge sets. One may create an adjacency matrix from the matrix $G$ defined in equation \eqref{eq:LinearSystem} by treating the elements of the latter as edge weights of a directed graph. Since edges in our network exist if and only if both nodes in a pair are able to communicate and measure their inter-node distance, and there are no edges from a node to itself, this adjacency matrix is symmetric. Lastly, Diao et al., in \cite{DiaoECHO}, further simplify this graph by modifying our original matrix $G$ to $\hat{G}$, given in equation \eqref{eq:MatrixAECHO}. Therefore, an edge $(i,j)$ exists in the edge set $\mathcal{E}_{\Hat{G}}$ of $\mathcal{G}_{\hat{G}}$ if and only if $[\hat{G}]_{ij} \neq 0$. Notice that $\hat{G}$ is constructed using block matrices from matrix $G$ in \eqref{eq:LinearSystemBlockMatrices} for the 2D case specifically. \begin{equation} \hat{G} = \begin{bmatrix} I_{3\times 3} & 0_{3\times q} \\ C_{q\times 3} & D_{q\times q} \end{bmatrix} \label{eq:MatrixAECHO} \end{equation} We hypothesize, without proof, that a result similar to Corollary \ref{cor:ECHO2DTheorem} holds in the $n$-dimensional case: $I - D$ would be nonsingular if and only if every node to be localized has at least $n+1$ disjoint paths from the set of anchor nodes in the respective undirected graph $\mathcal{G}_{\hat{G}}$.
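Solving equation \eqref{eq:ErrorFreeSolution} can be sketched on a toy 2D instance. For illustration only, the barycentric rows below are built directly from the known true positions (so $G X = X$ holds by construction); in the actual algorithm they would come from range measurements via the Cayley-Menger bi-determinants:

```python
import numpy as np

# Toy 2D instance (synthetic, for illustration): p = 3 anchors, q = 2 unknowns.
p, q = 3, 2
X_a = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # anchor coordinates
X_u_true = np.array([[0.3, 0.3], [0.2, 0.5]])          # ground truth, n = 2

# Exact barycentric rows of the unknowns w.r.t. the anchors, from the linear
# system [X_a^T; 1^T] lam = [x; 1] (cf. eq. (BarycentricCoordinatesDefinition)).
A = np.vstack([X_a.T, np.ones(p)])                               # (n+1) x p
lam = np.linalg.solve(A, np.vstack([X_u_true.T, np.ones(q)])).T  # q x p

C = lam                 # unknown-to-anchor block of G
D = np.zeros((q, q))    # unknown-to-unknown block; zero in this toy case, but
                        # in general it couples the unknowns to one another

# Theorem: (I - D) X_u = C X_a, solvable whenever I - D is nonsingular.
X_u = np.linalg.solve(np.eye(q) - D, C @ X_a)   # recovers X_u_true here
```

When $I - D$ is singular, i.e. when the disjoint-path condition fails, `np.linalg.solve` would raise, mirroring the non-localizable case.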
\subsection{Distributed algorithm} It is important to notice that, similarly to what happens in \cite{DiaoECHO}, the linear system matrices associated with Theorem \ref{theo:ProblemSolution} may have eigenvalues with modulus greater than one, which can cause convergence problems if one tries to solve these systems in an iterative or distributed form. Because of that, Diao et al. provided a distributed method based on the Richardson iteration \cite{Richardson307}. This method can also be applied to the generic $n$-dimensional case presented here. In the following section, based on numerical simulations, we provide comparisons with the standard DILOC algorithm \cite{KhanDiloc} and Matlab's MDS implementation, as well as our algorithm's complexity analysis and experimental run times. \section{Conclusion} \label{sec:conclusion} This work's main contribution is a more concise algorithm for network node localization based on barycentric coordinates in $n$-dimensional Euclidean spaces using noiseless range measurements, where unknown nodes are not necessarily confined to the convex-hull of the anchor nodes and their neighbors. Future work will center on extending this result to solve network node localization with noisy range measurements, as well as on improving its computational efficiency. \section{Introduction} \label{sec:intro} \IEEEPARstart{L}{ocalization} problems are fundamental to a multitude of applications and everyday scenarios, including the deployment of robots and drones, autonomous vehicular systems and static sensor networks. In the latter case, the location of each sensor is usually necessary to correctly analyze, interpret and correlate all measurements being made. In general, localization problems in static sensor networks have a single objective: based on available measurements and known information about the surrounding space, provide the locations of one or more sensor nodes. Multiple methodologies have been proposed.
Some utilize range and bearing measurements between sensors \cite{NikolayJointEstimation}, while others utilize only bearings \cite{ShamesBearingOnly} or only ranges \cite{ThomasTrilateration}, \cite{KhanDiloc} and \cite{DiaoECHO}. These methods can also be classified by how the underlying algorithm computes the location of each sensor in the network. If all sensor nodes in the network send their information to one node responsible for computing their locations, the method is called centralized. If each sensor node is responsible for computing its own location by exchanging information with its local neighbors, it is called distributed. In \cite{PryanthaN} and \cite{DiaoECHO}, localization methods are also classified as either sequential or concurrent. Sequential methods start from sensor nodes with known locations, called {\it anchors}, and compute the locations of other nodes individually or in groups. We call these latter nodes {\it unknown} throughout this paper. Concurrent methods start with an initial estimate of all node locations and finish when the location estimates converge. At each iteration, nodes update their locations by exchanging information and using inter-node distance measurements with their neighbors. Trilateration methods \cite{ThomasTrilateration} are usual examples of sequential methods, while optimization-like approaches such as MDS (Multidimensional Scaling) \cite{FrancoMDS}, \cite{KruskalMDS} are examples of concurrent methods. Trilateration is one of the most straightforward approaches to solve the range-only localization problem. It involves solving sets of non-linear equations, in which the measured distances between unknown nodes and anchor nodes must equal the Euclidean norm of the difference of their Cartesian coordinates.
In the two-dimensional case, a generic example involves three anchor nodes $\{i,j,k\}$ and one unknown node $\{l\}$, such that the trilateration equations become \begin{equation} \left\{ \begin{array}{lcl} d(\mathbf{x}_l,\mathbf{x}_i) &=& ||\mathbf{x}_l - \mathbf{x}_i||_2\\ d(\mathbf{x}_l,\mathbf{x}_j) &=& ||\mathbf{x}_l - \mathbf{x}_j||_2\\ d(\mathbf{x}_l,\mathbf{x}_k) &=& ||\mathbf{x}_l - \mathbf{x}_k||_2 \end{array} \right. \end{equation} In \cite{ThomasTrilateration}, Thomas et al. propose a modification of the trilateration method using barycentric coordinates to localize a robot in two-dimensional space, knowing the locations of three points and the distances from those points to the robot. Although they propose using Cayley-Menger bi-determinants and determinants, also defined in \cite{blumenthal1970theory}, to compute all necessary coordinates, they emphasize a more geometrical view of the problem. Relationships between barycentric coordinates, Cayley-Menger bi-determinants and geometrical notions such as angles, dot and cross products of vectors in two- and three-dimensional spaces are demonstrated. In \cite{KhanDiloc}, Khan et al. proposed the Distributed Iterative LOCalization (DILOC) algorithm, which can be classified as a range-only distributed concurrent method. This algorithm is defined in the $n$-dimensional Euclidean space, $\mathbb{R}^n$, and requires the following main assumptions: 1) there are $n+1$ anchor nodes; 2) all unknown nodes are placed inside the convex-hull of the anchor nodes; 3) for each unknown node, there is at least one subset of $n+1$ of its neighbors such that the former lies in the convex-hull of the latter. If the sensor network satisfies these assumptions, DILOC can be initialized at each unknown node by computing its barycentric coordinates in relation to one of the subsets of neighbors given by assumption 3).
Then, an iterative process starts from location estimates for each unknown node and the true locations of the anchor nodes. At each subsequent step, neighboring nodes exchange their location estimates; unknown nodes update their location estimates with the convex combination of their neighbors' estimates weighted by their own barycentric coordinates, while anchor nodes maintain their assigned locations. This iterative process is proven to converge to the true solution for all nodes in \cite{KhanDiloc} by reorganizing all barycentric coordinates in matrix form, one row for each network node. The resulting matrix is right stochastic, which allows one to view the entire iterative process as an absorbing Markov chain that converges to the desired result. In \cite{DiaoECHO}, Diao et al. propose modifications to DILOC's algorithm relaxing its second and third assumptions above, which they call the Extended Computation scHeme of cOordinate (ECHO). Thus, ECHO enables arbitrary locations for anchor nodes in the network and allows the utilization of subsets of neighbors in which a node does not strictly lie inside the convex-hull of its neighbors. When the latter condition occurs, some barycentric coordinates will be negative or zero, though the barycentric coordinates of a node can never all be simultaneously negative. In order to compute such coordinates, Diao et al. present a series of algorithms based on geometrical properties of the two-dimensional case. They also propose the concept of generalized barycentric coordinates, defined as the average of all possible barycentric coordinates of a node. Similarly to DILOC, these coordinates can be arranged in matrix form, as a linear system in terms of anchor and unknown node locations. However, ECHO's matrix may not be right stochastic, so one cannot relate it to Markov chain theory. Moreover, this linear system may have unstable eigenvalues, which can cause problems for iterative and distributed solution methods. In spite of that, Diao et al.
show in \cite{DiaoECHO} a feasible distributed algorithm, as well as a proof that, in the two-dimensional case, ``an entire sensor network is localizable if and only if every sensor node has at least three disjoint paths to the anchor nodes in the graph associated with the barycentric coordinate representation.'' Unfortunately, ECHO is only defined for two-dimensional Euclidean spaces, which precludes its applicability to scenarios involving 3D ad hoc networks, such as the deployment of robots and drones. We propose a generalization of ECHO to $n$-dimensional Euclidean spaces, as well as a more concise way to compute barycentric coordinates in any number of dimensions and for any node arrangement. Our algorithm was influenced by ideas presented in \cite{ThomasTrilateration}, but, as with ECHO, an increase in network connectivity incurs an increase in computational expense to compute the generalized barycentric coordinates from all possible subsets of $n+1$ neighbors of each node. Our main contributions to the static sensor network node localization problem using barycentric coordinates are: \begin{enumerate} \item Arbitrary anchor node placement among unknown nodes; \item Arbitrary $n$-dimensional Euclidean spaces allowed; \item Extension of generalized barycentric coordinates using Cayley-Menger bi-determinants. \end{enumerate} The solution we provide is centralized, but a method similar to the one proposed in \cite{DiaoECHO} can also be applied to compute all sensor node locations in a distributed form. In the upcoming sections, we formally state the localization problem in sensor networks while providing the necessary concepts of graph theory. We define Cayley-Menger bi-determinants and determinants, and show how to use them to compute barycentric coordinates of nodes in $n$-dimensional Euclidean space, as well as their generalization, similarly to \cite{DiaoECHO}.
Finally, we show how to use these generalized barycentric coordinates to compute all unknown node coordinates. \section{Problem Formulation} \label{sec:problem} \begin{figure}[!b] \centering \includegraphics[width=0.35\textwidth]{fig/nodeNetworkExample.pdf} \caption{Wireless node network example.} \label{fig:ProblemNetworkExample} \end{figure} A static sensor network can be modeled as a graph $\mathcal{G} = \{\mathcal{V},\mathcal{E}\}$, with vertex set $\mathcal{V}$ and edge set $\mathcal{E}$. The vertex set contains unique labels for each sensor node in the network. Without loss of generality, we assume that labels are ordered and taken from the set of non-zero natural numbers, $\mathcal{V} \subset \mathbb{N}_{\neq 0}$. Let the location of node $i \in \mathcal{V}$ be specified by Cartesian coordinates ${\bf x}_i \in \mathbb{R}^{n}$. If the number of nodes in the network is $m = |\mathcal{V}|$, then the set of node locations is given by $\mathcal{X} = \{\mathbf{x}_1,\mathbf{x}_2,\hdots,\mathbf{x}_m\}$. In general, edges exist whenever two nodes are able to communicate with one another. However, in this work, an edge exists whenever two nodes are able to communicate and measure their relative distance. Considering that each node $i \in \mathcal{V}$ has an inter-node measuring and communication range given by ${\it r}_i \in \mathbb{R}_{>0}$, an edge between a node pair $(i,j) \in \mathcal{V} \times \mathcal{V}$ exists if and only if its edge weight, $d({\bf x}_i,{\bf x}_j) = || {\bf x}_i - {\bf x}_j ||_2$, satisfies $d({\bf x}_i,{\bf x}_j) \leq \min ({\it r}_i, {\it r}_j)$. Since $d({\bf x}_i,{\bf x}_j) = d({\bf x}_j,{\bf x}_i)$, the generated graph is undirected. An example of such a network in 3D is shown in Fig. \ref{fig:ProblemNetworkExample}. In this example, all 64 nodes are arranged in a cubic lattice with a lattice constant of 1 unit. Inter-node measuring and communication ranges are 2 units, ${\it r}_i = 2$, $1 \leq i \leq m $. Fig.
\ref{fig:ProblemNetworkExample} also shows the node degree\footnote{In graph theory, node degree refers to the number of edges a node has to other nodes in the network, which are its direct neighbors.} of each vertex in this network example. Node degree will influence the localizability of the network, as will be shown later. One can infer that a subset of nodes with known coordinates is required to compute all other node coordinates, that is, one needs location references. The subset of anchor node coordinates is specified as $\mathcal{X}_a \subset \mathcal{X}$ while the subset of unknown node coordinates is $\mathcal{X}_u \subset \mathcal{X}$, so that $\mathcal{X}_a \bigcup \mathcal{X}_u = \mathcal{X}$ and $\mathcal{X}_a \bigcap \mathcal{X}_u = \emptyset$. Then, the range-only node localization problem in static sensor networks consists of estimating each unknown node coordinate using only range measurements between nodes. \begin{problem*} Given $\mathcal{X}_a \subset \mathcal{X}$ and $d({\bf x}_i,{\bf x}_j)$ for all $(i,j) \in \mathcal{E}$, estimate $\mathcal{X}_u \subset \mathcal{X}$. \end{problem*} Next, while making the required extensions to the $n$-dimensional Euclidean space, we present the necessary concepts related to Cayley-Menger bi-determinants, barycentric coordinates and their generalization, as well as the methodology to compute unknown node locations from true anchor node locations and range measurements. \section{Evaluation} \label{sec:Evaluation} \subsection{Comparison with other algorithms} In Fig. \ref{fig:NodeCoord}, we provide simulations of our proposed algorithm, the standard DILOC algorithm~\cite{KhanDiloc}, and Matlab's MDS implementation, $mdscale(.)$. The static sensor network in these examples was formed from a $6 \times 6 \times 6$ unit-spaced regular lattice in which independent Gaussian noise with zero mean and unit variance was added to each node coordinate.
The maximum range threshold for each node $i$ was set to $r_i = 3$ units. As the resulting number of edges is high, they were not drawn in Fig. \ref{fig:NodeCoord}. Instead, we provide a histogram of node degrees for this network in Fig. \ref{fig:ourNodeDegree}. Lastly, anchor nodes were chosen such that there were some nodes inside and outside their convex-hull. \begin{figure}[!b] \centering \includegraphics[width=0.45\textwidth]{fig/HistogramNodeDegree-eps-converted-to.pdf} \caption{Node degree histogram for the 215 localized nodes in the static sensor network with maximum range radius of 3 units for each node.} \label{fig:ourNodeDegree} \end{figure} \begin{figure}[!b] \centering \subfloat[Our computed node coordinates.]{\includegraphics[width=0.4\textwidth]{fig/OurNodeCoordinates-eps-converted-to.pdf}% \label{fig:ourNodeCoord}} \vfil \subfloat[DILOC's computed node coordinates.]{\includegraphics[width=0.4\textwidth]{fig/DilocNodeCoordinates-eps-converted-to.pdf}% \label{fig:dilocNodeCoord}} \vfil \subfloat[MDS's computed node coordinates with the least stress.]{\includegraphics[width=0.4\textwidth]{fig/MDSCoordinatesBestSolution-eps-converted-to.pdf}% \label{fig:mdsNodeCoord}} \caption{Static sensor network localization example with 216 nodes. Given the high number of edges in this network, they are not drawn. Instead, we provide a histogram of its node degrees in Fig. \ref{fig:ourNodeDegree}. In these plots, the black frame represents the edges of the anchors' convex-hull; black circles and squares are true node coordinates outside and inside the convex-hull, respectively. Red dots are the coordinates computed by each method, with the exception of one node which was not correctly localized by any algorithm.
The latter has its true coordinates marked by a black star, while red stars mark its MDS-computed values for which the stress criterion was inferior to 0.01.} \label{fig:NodeCoord} \end{figure} From the 216 nodes in our example, 215 were correctly localized by the proposed method, as shown in Fig. \ref{fig:ourNodeCoord}, in which black circles and squares represent their true node coordinates, while red dots represent computed coordinates. The unlocalized node has its true coordinates represented by the sole black star. As one can expect, DILOC's standard algorithm is able to localize all nodes inside the convex-hull of the anchor nodes, as shown in Fig. \ref{fig:dilocNodeCoord}. However, in order to accomplish that, 6 of the 15 unknown nodes inside the convex-hull had to use measurements greater than our proposed maximum range of 3 units. If one limits DILOC's range to 3 units, some nodes would not be localized, as these nodes would not have the necessary number of neighbors to compute their barycentric coordinates. Matlab's MDS implementation, $mdscale(.)$, allows one to set a range of parameters. In order to utilize the same set of range measurements as the one used by our proposed algorithm for this network example, we set the starting condition to random and the maximum number of iterations to 1000. We also chose the stress criterion to be squared and normalized with the sum of 4th powers of the dissimilarities. It is also possible to choose the number of replications used by the algorithm, i.e., the number of random re-initializations used to compute results, where the one which provides the least stress is chosen as the final answer. In order to compare algorithms, we simulated the MDS algorithm 1000 times with one replication each time. A histogram of MDS stress values is shown in Fig. \ref{fig:MDSStressHistogram}. This histogram was generated using the square root binning method.
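For reference, the stress criterion just described (squared stress normalized by the sum of 4th powers of the dissimilarities) can be sketched as follows; the element-wise pairing of measured dissimilarities $\delta_{ij}$ with configuration distances $d_{ij}$ over a common edge ordering is an assumption of this illustration:

```python
import math

def sstress(dissimilarities, distances):
    """Squared stress, normalized by the sum of 4th powers of the
    dissimilarities: sqrt( sum (d^2 - delta^2)^2 / sum delta^4 ).
    Both arguments are flat sequences over the same edge ordering."""
    num = sum((d * d - s * s) ** 2 for s, d in zip(dissimilarities, distances))
    den = sum(s ** 4 for s in dissimilarities)
    return math.sqrt(num / den)
```

A configuration that reproduces the dissimilarities exactly yields zero stress, which is why the replication with the least stress is retained.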
\begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{fig/MDSStressHistogram-eps-converted-to.pdf} \caption{MDS stress value histogram for 1000 simulations with one replication each.} \label{fig:MDSStressHistogram} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{fig/MDSCoordinatesMostCommomStressInterval-eps-converted-to.pdf} \caption{Static sensor network localization example with 216 nodes using all MDS computed node coordinates which have a stress value in the most frequent interval from Fig. \ref{fig:MDSStressHistogram}. Given the high number of edges in this network, they are not drawn. Instead, we provide a histogram of its node degrees in Fig. \ref{fig:ourNodeDegree}. The black frame shows the edges of the anchors' convex-hull. Black circles and squares are true node coordinates outside and inside the convex-hull, respectively. Red dots are computed coordinates, with the exception of one node which was not localized by any method. This unlocalized node has its true coordinates marked by a black star and its computed ones marked by red stars.} \label{fig:MDSNodeCoordMostCommonStress} \end{figure} Fig. \ref{fig:mdsNodeCoord} shows all MDS computed coordinates which have a stress value inside the lowest valued bin interval from the histogram shown in Fig. \ref{fig:MDSStressHistogram}. In this case, except for one node with computed coordinates marked by red stars, all others were correctly localized. Notice that each red star represents one possible solution in which the MDS stress criterion was less than 0.01, which shows that MDS solutions are not necessarily unique. Moreover, according to the stress value histogram, this best case scenario happened in less than 10\% of all simulations. Lastly, the computed node coordinates for the most frequent stress value interval are shown in Fig. \ref{fig:MDSNodeCoordMostCommonStress}.
In order to provide a visual representation of the best and worst computed node coordinates throughout all 1000 simulations of Matlab's MDS, we use \cite{Johnson} to obtain the error ellipsoids shown in Fig. \ref{fig:MDSErrorEllipsoid}, which are centered at the average node coordinates. These ellipsoids have axis lengths proportional to the magnitudes of the covariance matrix eigenvalues and orientations given by its eigenvectors. Their volumes are computed so that 90\% of all computed node coordinates lie inside them. These results demonstrate that the MDS algorithm may provide less than desirable results most of the time. \begin{figure}[!t] \centering \includegraphics[width=0.45\textwidth]{fig/MDSCoordinatesErrorEllipsoidsMinMax-eps-converted-to.pdf} \caption{Error ellipsoids for the nodes with computed coordinate covariance matrices that have the smallest and largest maximum eigenvalue. These ellipsoids were obtained so that they encompass 90\% of all 1000 MDS computed coordinates for these nodes, which are also shown as dots in different colors.} \label{fig:MDSErrorEllipsoid} \end{figure} It is interesting to notice that one node, represented by a black star in Fig. \ref{fig:NodeCoord}, was not localized by any method. Besides being outside the anchors' convex-hull, it has fewer than $n+1$ neighbors given the defined maximum measurement range. \subsection{Algorithm complexity and execution times} \label{sec:ComputationComplexity} An implementation of our algorithm will have the worst case scenario for its computational complexity whenever its network graph $\mathcal{G}$ is complete, i.e., all nodes are interconnected. In that case, the computation of all family sets, $\mathcal{I}_l$ from equation \eqref{eq:FamilySets}, has complexity $O(m^{n+2})$, where $m$ is the number of nodes and $n$ is the dimension of each node coordinate. Moreover, there will be $\binom{m}{n+1}$ sets of $n+1$ neighbors for each node.
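The growth of the number of neighbor subsets can be made concrete with a small tally (a hypothetical helper, not part of our implementation): each node of degree $|\mathcal{N}_l|$ contributes $\binom{|\mathcal{N}_l|}{n+1}$ candidate subsets, which is $O(N^{n+1})$ in the maximum degree $N$.

```python
from math import comb

def subset_workload(degrees, n):
    """Number of (n+1)-element neighbor subsets each node would have
    to examine, plus the total over the network; `degrees` lists the
    node degrees |N_l| and `n` is the spatial dimension."""
    per_node = [comb(deg, n + 1) for deg in degrees]
    return per_node, sum(per_node)
```

In the complete-graph worst case every degree equals $m-1$, which recovers the $\binom{m}{n+1}$-per-node figure quoted above (up to the usual off-by-one in whether the node itself is counted).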
Therefore, in order to compute matrix $G$ from equation \eqref{eq:LinearSystem}, one needs to perform on the order of $O(m^{n+3}n^{3})$ operations, considering that determinants of a square matrix of dimension $k$ have computational complexity $O(k^3)$. If one uses Gaussian Elimination or LU-factorization to solve the linear system given by equation \eqref{eq:ErrorFreeSolution}, one needs to perform $O((m-(n+1))^3)$ operations. It is usually the case that $m \gg n$, so the complexity becomes $O(m^3)$. We emphasize that the worst case scenario, in which we have a complete graph, is not common. Thus, one may define our algorithm's computational complexity considering the maximum node degree, $N = \max{(\{|\mathcal{N}_l|\}_{l = 1,\cdots,m})}$, where $\mathcal{N}_l$ is specified in equation \eqref{eq:Neighbors}. Then, the construction of matrix $G$ has complexity $O(m^2 n^3 N^{n+1})$ and the construction of all families of neighbor sets has complexity $O(mN^{n+1})$. Therefore, the complexity is reduced for $m \gg N$. As previously defined, our algorithm has a computational cost highly dependent on the number of inter-node range measurements, which is given by each node degree. Fig. \ref{fig:ourNodeDegree} shows a histogram of node degrees for the sensor network in Fig. \ref{fig:NodeCoord}. In this case, the maximum degree is 86 and the minimum is 5, thus much smaller than the worst case scenario. Notice that the initial process of finding all possible neighbor node combinations for each node can be done only once, at initialization. If nodes remain static or move inside the range of their neighbors, no future modifications are necessary; if not, one may update each node as needed. A similar argument was given in \cite{KhanDiloc}.
\begin{figure}[!t] \centering \subfloat[Proportion of correctly localized networks.]{\includegraphics[width=0.4\textwidth]{fig/ProportionOfNetworksLocalized-eps-converted-to.pdf}% \label{fig:BatchProportionCorrectlyLocalized}} \vfil \subfloat[Average execution time.]{\includegraphics[width=0.4\textwidth]{fig/AverageExecutionTime-eps-converted-to.pdf}% \label{fig:BatchAverageExecutionTime}} \vfil \subfloat[Average reciprocal condition number.]{\includegraphics[width=0.4\textwidth]{fig/ConditionNumber-eps-converted-to.pdf}% \label{fig:BatchAverageReciprocalCN}} \caption{Simulations of our modified algorithm using 3D networks with 50 to 500 nodes in 50 node increments. For each network size, we create 100 sets of random node coordinates with Gaussian distribution $N(0,5I_3)$. For each instance, we select 5 sets of anchor nodes randomly. The distance threshold is 5 units. $\{$Red continuous, blue dashed, black dot-dashed$\}$ lines are obtained utilizing at most $\{1,50,200\}$ subsets of neighbor nodes per pair of neighboring nodes, or all possible subsets, respectively.} \label{fig:BatchRandomNetworks} \end{figure} In order to provide an idea of our algorithm's run time and compare it to Matlab's MDS implementation, we executed 100 simulations of the former and 1000 of the latter. Each MDS simulation was done with one replication only. The average run time for the first part of our algorithm, which constructs all possible families of neighbor sets, is 140.13 seconds with a standard deviation of 2.71 seconds. The second part, which constructs and solves the linear system, takes on average 49.15 seconds with a standard deviation of 1.32 seconds. The average execution time for one run of Matlab's MDS implementation with only one replication was 2.80 seconds with a standard deviation of 0.71 seconds.
Although our run times are 68 times larger than MDS' single replication rounds, one will usually need to compute a few rounds of MDS in order to obtain sufficiently small stresses, as can be inferred from Fig. \ref{fig:MDSStressHistogram}. In those simulations, solutions similar to or better than the one presented in Fig. \ref{fig:MDSNodeCoordMostCommonStress} occurred in approximately 25\% of all MDS runs. Therefore, our run times should be around 17 times larger than MDS'. Despite the lack of optimization of our Matlab implementation, it is clear that the problem of finding better neighboring node subsets, such that the localization problem can be solved as fast as needed while preserving the same flexibility in anchor node placement, is an interesting one. In this direction, we experiment with our algorithm by choosing a smaller number of neighbor subsets for each node. Our original method to form all neighbor subsets is a Depth First Search algorithm. We modified it so that it stops after finding a specified number of subsets, or after finding all possible subsets, whichever occurs first. In the former case, we could have utilized the depth first methodology without modification; but, as we previously hypothesized, the existence of a minimum number of disjoint paths from each unknown node to the anchors seems necessary for the correct solution of our localization problem. Therefore, we start our search with only one round of Breadth First Search; later, for each previously computed branch, we use Depth First Search to find a specified number of subsets. This approach guarantees that we have at least one feasible subset involving each neighbor node. In order to test this algorithm modification, we simulate 3D networks with 50 to 500 nodes in 50 node increments. For each network size, we generate 100 sets of random node coordinates using the Gaussian distribution $N(0,5I_3)$.
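A minimal sketch of this capped search, under our own interpretation of the seeding and fill phases (the actual implementation operates on search-tree branches rather than a flat enumeration), is:

```python
from itertools import combinations

def limited_subsets(neighbors, n, cap):
    """Capped subset search: first guarantee that every neighbor
    appears in at least one retained (n+1)-subset (the breadth-first
    seeding round), then keep collecting further subsets (the
    depth-first fill) until `cap` subsets are kept or all
    combinations are exhausted. The seeding pass may exceed the cap
    to preserve the per-neighbor coverage guarantee."""
    neighbors = sorted(neighbors)
    pool = list(combinations(neighbors, n + 1))
    kept, covered = [], set()
    # Seeding pass: one subset per still-uncovered neighbor.
    for sub in pool:
        if any(v not in covered for v in sub):
            kept.append(sub)
            covered.update(sub)
        if covered == set(neighbors):
            break
    # Fill pass: add remaining subsets up to the cap.
    for sub in pool:
        if len(kept) >= cap:
            break
        if sub not in kept:
            kept.append(sub)
    return kept
```

The coverage guarantee mirrors the property stated above: at least one feasible subset involves each neighbor node, even when `cap` is small.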
For each instance, we randomly choose 5 different sets of anchor nodes in order to compute the unknown node coordinates. Lastly, the distance threshold chosen for all instances was equal to 5 units, as this value was approximately equal to the average inter-node distance between all random network nodes generated. After each random network is created with its designated number of nodes, we check for nodes with fewer than $n+1$ neighbors and nodes for which there are no usable neighbor subsets, i.e., neighbors that do not form a complete subgraph of $n+1$ nodes or whose nodes form a region with zero volume. All nodes that satisfy these conditions are excluded from the initial network, which is re-checked until no further nodes are excluded. We emphasize that this simple procedure does \textit{not} guarantee that all unknown nodes have $n+1$ disjoint paths to anchors. One can infer from Fig. \ref{fig:BatchProportionCorrectlyLocalized} that our algorithm was able to correctly find the unknown node coordinates in more than 75\% of all random networks tested, utilizing the simple node connectivity check explained above. Moreover, increasing the number of neighbor subsets per node had a positive effect on the smaller networks. We believe that this effect is reduced in the larger simulated networks due to their increased node density per volume. Fig. \ref{fig:BatchAverageExecutionTime} coupled with Fig. \ref{fig:BatchProportionCorrectlyLocalized} shows that our experiment achieved a satisfactory result, as the total execution times decreased from around 200 seconds to around 10 seconds for networks with fewer than 250 nodes. Furthermore, even for networks with 500 nodes, our algorithm's execution time became less than 40 seconds on average, with a success rate between 77\% and 80\%. Lastly, we show the average reciprocal condition number of each network's $I-D$ matrix, as given in Theorem \ref{theo:ProblemSolution}, in Fig. \ref{fig:BatchAverageReciprocalCN}.
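The iterative degree check described above can be sketched as follows (an illustrative helper with our own names; it implements only the degree part of the check, not the complete-subgraph or zero-volume tests):

```python
def prune_underconnected(nodes, edges, n):
    """Iteratively remove nodes with fewer than n+1 neighbors and
    re-check until the node set is stable. As noted in the text,
    this does NOT guarantee n+1 disjoint paths to the anchors."""
    nodes = set(nodes)
    changed = True
    while changed:
        changed = False
        degree = {v: 0 for v in nodes}
        for i, j in edges:
            if i in nodes and j in nodes:
                degree[i] += 1
                degree[j] += 1
        weak = {v for v in nodes if degree[v] < n + 1}
        if weak:
            nodes -= weak
            changed = True
    return nodes
```

Note that removing one under-connected node can push a neighbor below the threshold, which is why the loop runs until no further nodes are excluded.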
This shows that increasing the number of neighbor subsets per node, as well as the node density, provides a worse conditioned system in general, making it more susceptible to perturbations. \subsection{Towards real world deployment} Real world measurements are subject to many types of interference, be it random, like noise, or failures resulting in data loss, among others. While we do not provide a method to deal with such interferences, we believe that methodologies similar to the one employed in \cite{DILAND} may provide a solution. The linear system matrix $G$ utilized in Theorem \ref{theo:ProblemSolution} is constructed utilizing the available range measurements. Moreover, this matrix construction contains all the non-linearities inherent in this localization problem. Therefore, measurement noise will affect the value of each element of this matrix in a non-linear form, making the explicit computation of each element's probability density function complex. In order to circumvent these complications, which also affect their DILOC algorithm in \cite{KhanDiloc}, Khan et al. propose a new methodology in \cite{DILAND}. Their approach utilizes an averaging process over all received measurements in time. Thus, if all measurement noises have zero mean, their average will converge to the noiseless value after sufficient time has passed. So, each new batch of measurements is averaged with all previous ones before being used to compute the necessary barycentric coordinates. A similar process is also applied to each barycentric coordinate. At each step, the newly computed barycentric coordinate is averaged with its previous existing value through the utilization of converging weights. In \cite{DILAND}, it is proved that this process converges to the true barycentric coordinates given a sufficient number of iterations in their algorithm based on DILOC.
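The time-averaging idea can be sketched with an incremental mean update (our illustration of the measurement-averaging step only; the converging weights applied to the barycentric coordinates in \cite{DILAND} are omitted):

```python
class RunningAverage:
    """Incremental average of repeated noisy range measurements:
    with zero-mean noise, the estimate converges to the noiseless
    distance as more measurements arrive."""

    def __init__(self):
        self.count = 0
        self.value = 0.0

    def update(self, measurement):
        # Equivalent to averaging the new batch with all previous ones.
        self.count += 1
        self.value += (measurement - self.value) / self.count
        return self.value
```

Each node would keep one such accumulator per measured neighbor distance and feed the averaged values into the construction of $G$.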
Despite the differences between our proposed algorithm and DILOC, we experimented, in simulation, with applying this averaging process to our algorithm. Some simulations would converge to correct solutions, while others would not. We believe that instabilities associated with the eigenvalues of the averaged $G$ matrix, the one containing the barycentric coordinates, as previously mentioned in relation to iterative linear methods, are responsible for the low reliability of this simple experiment. A better investigation of the underlying process is certainly needed. \section{Cayley-Menger Bi-determinant proof} \label{app:CayleyMengerBidet} Following the approach given by \cite{blumenthal1970theory}, we begin by defining matrices $X = [{\bf x}_0,{\bf x}_1,\cdots,{\bf x}_n] \in \mathbb{R}^{n\times (n+1)}$ and $Y = [{\bf y}_0,{\bf y}_1,\cdots,{\bf y}_n] \in \mathbb{R}^{n\times (n+1)}$, as well as the augmented matrices $$ V_X = \begin{bmatrix} X \\ \mathbf{1}^T \end{bmatrix} \quad \text{and} \quad V_Y = \begin{bmatrix} Y \\ \mathbf{1}^T \end{bmatrix}, $$ where $\mathbf{1}$ is the column vector of all ones with the appropriate dimension. The signed volume of the sets of points $\mathcal{X}$ and $\mathcal{Y}$, specified by the coordinates given by the columns of matrices $X$ and $Y$, can be found through the determinants of $V_X$ and $V_Y$, respectively: \begin{equation} \begin{vmatrix} V_X \end{vmatrix} = (n!) \text{Vol}(\mathcal{X}) \label{eq:detAxVolSX} \end{equation} \begin{equation} \begin{vmatrix} V_Y \end{vmatrix} = (n!) \text{Vol}(\mathcal{Y}) \label{eq:detAyVolSY} \end{equation} Next, we can apply a sequence of operations without modifying the value of the determinants of $V_X$ and $V_Y$.
\begin{equation} \begin{array}{lclcl} \begin{vmatrix} V_X \end{vmatrix} & = & \begin{vmatrix} X \\ \mathbf{1}^T \end{vmatrix} & = & \begin{vmatrix} X & \mathbf{0} \\ \mathbf{1}^T & 0 \\ \mathbf{0}^T & 1 \end{vmatrix} \vspace{2mm}\\ \begin{vmatrix} V_Y \end{vmatrix} & = & \begin{vmatrix} Y \\ \mathbf{1}^T \end{vmatrix} & = & \begin{vmatrix} Y & \mathbf{0} \\ \mathbf{1}^T & 0 \\ \mathbf{0}^T & 1 \end{vmatrix} \end{array} \end{equation} For the determinant of $V_X$, we take its transpose and interchange its last two columns and last two rows, obtaining: \begin{equation} \begin{array}{lcl} \begin{vmatrix} V_X^T \end{vmatrix} & = & \begin{vmatrix} x_{1 0} & x_{2 0} & \hdots & x_{n 0} & 0 & 1\\ x_{1 1} & x_{2 1} & \hdots & x_{n 1} & 0 & 1\\ \vdots & \vdots & \ddots & \vdots & \vdots & \vdots\\ x_{1 n-1} & x_{2 n-1} & \hdots & x_{n n-1} & 0 & 1\\ 0 & 0 & \hdots & 0 & 1 & 0\\ x_{1 n} & x_{2 n} & \hdots & x_{n n} & 0 & 1\\ \end{vmatrix} \vspace{2mm}\\ \end{array} \end{equation} Multiplying this determinant by the determinant of $V_Y$ gives \begin{equation} \begin{vmatrix} V_X^T V_Y \end{vmatrix} = \left| \begin{smallmatrix} {\bf x}_0^T{\bf y}_0 & {\bf x}_0^T{\bf y}_1 & \hdots & {\bf x}_0^T{\bf y}_n & 1\\ {\bf x}_1^T{\bf y}_0 & {\bf x}_1^T{\bf y}_1 & \hdots & {\bf x}_1^T{\bf y}_n & 1\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ {\bf x}_{n-1}^T{\bf y}_0 & {\bf x}_{n-1}^T{\bf y}_1 & \hdots & {\bf x}_{n-1}^T{\bf y}_n & 1\\ 1 & 1 & \hdots & 1 & 0\\ {\bf x}_n^T{\bf y}_0 & {\bf x}_n^T{\bf y}_1 & \hdots & {\bf x}_n^T{\bf y}_n & 1\\ \end{smallmatrix} \right| \end{equation} Interchanging the last two rows returns: \begin{equation} \begin{array}{lcl} \begin{vmatrix} V_X^T V_Y \end{vmatrix} \vspace{2mm} & = & - \begin{vmatrix} {\bf x}_0^T{\bf y}_0 & {\bf x}_0^T{\bf y}_1 & \hdots & {\bf x}_0^T{\bf y}_n & 1\\ {\bf x}_1^T{\bf y}_0 & {\bf x}_1^T{\bf y}_1 & \hdots & {\bf x}_1^T{\bf y}_n & 1\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ {\bf x}_n^T{\bf y}_0 & {\bf x}_n^T{\bf y}_1 & \hdots &
{\bf x}_n^T{\bf y}_n & 1\\ 1 & 1 & \hdots & 1 & 0\\ \end{vmatrix} \vspace{2mm}\\ & = & - \begin{vmatrix} X^TY & \mathbf{1} \\ \mathbf{1}^T & 0 \end{vmatrix} \vspace{2mm} \\ & = & - \begin{vmatrix} M \end{vmatrix} \end{array} \label{eq:detMdetAxTAy} \end{equation} Let $C$ be the matrix inside the Cayley-Menger bi-determinant. Using the fact that $d({\bf x}_i, {\bf y}_j)^2 = ||{\bf x}_i - {\bf y}_j||^2 = ({\bf x}_i - {\bf y}_j)^T({\bf x}_i - {\bf y}_j)$, we write \begin{equation} \begin{vmatrix} C \end{vmatrix} = \left| \begin{smallmatrix} 0 & 1 & \hdots & 1 \vspace{1mm}\\ 1 & {\bf x}_0^T{\bf x}_0 + {\bf y}_0^T{\bf y}_0 - 2{\bf x}_0^T{\bf y}_0 & \hdots & {\bf x}_0^T{\bf x}_0 + {\bf y}_n^T{\bf y}_n - 2{\bf x}_0^T{\bf y}_n \vspace{1mm}\\ 1 & {\bf x}_1^T{\bf x}_1 + {\bf y}_0^T{\bf y}_0 - 2{\bf x}_1^T{\bf y}_0 & \hdots & {\bf x}_1^T{\bf x}_1 + {\bf y}_n^T{\bf y}_n - 2{\bf x}_1^T{\bf y}_n\\ \vdots & \vdots & \ddots & \vdots\\ 1 & {\bf x}_n^T{\bf x}_n + {\bf y}_0^T{\bf y}_0 - 2{\bf x}_n^T{\bf y}_0 & \hdots & {\bf x}_n^T{\bf x}_n + {\bf y}_n^T{\bf y}_n - 2{\bf x}_n^T{\bf y}_n \vspace{1mm}\\ \end{smallmatrix} \right| \end{equation} Applying row and column operations $$ Row_i \leftarrow Row_i - {\bf x_{i-2}}^T{\bf x_{i-2}} Row_1 $$ $$ Col_j \leftarrow Col_j - {\bf y_{j-2}}^T{\bf y_{j-2}} Col_1 $$ for $ 2 \leq i, j \leq n+2$, and then multiplying the first row by $-2$ while compensating with a prefactor of $-\frac{1}{2}$, results in: \begin{equation} \begin{array}{llll} \begin{vmatrix} C \end{vmatrix} &=& -\frac{1}{2} & \begin{vmatrix} 0 & -2 & \hdots & -2 \vspace{1mm}\\ 1 & -2{\bf x}_0^T{\bf y}_0 & \hdots & -2{\bf x}_0^T{\bf y}_n \vspace{1mm}\\ 1 & -2{\bf x}_1^T{\bf y}_0 & \hdots & -2{\bf x}_1^T{\bf y}_n \vspace{1mm}\\ \vdots & \vdots & \ddots & \vdots \vspace{1mm}\\ 1 & -2{\bf x}_n^T{\bf y}_0 & \hdots & -2{\bf x}_n^T{\bf y}_n \vspace{1mm}\\ \end{vmatrix} \vspace{1mm}\\ \end{array} \end{equation} Extracting the repeated scalars and noticing that an even number of permutations is required, one can use equation \eqref{eq:detMdetAxTAy}, so that \begin{equation}
\begin{array}{lcl} \begin{vmatrix} C \end{vmatrix} & = & -\frac{(-2)^{n+1}}{2} \begin{vmatrix} M \end{vmatrix} \vspace{2mm}\\ & = & (-1)^{n+1}2^{n} \begin{vmatrix} V_X^T V_Y \end{vmatrix} \vspace{2mm}\\ \end{array} \end{equation} Or, as we defined before, \begin{equation} \begin{array}{lcl} \begin{vmatrix} V_X^T V_Y \end{vmatrix} & = & 2 \left(-\frac{1}{2}\right)^{n+1} \begin{vmatrix} C \end{vmatrix} \vspace{2mm}\\ & = & D({\bf x}_0, \hdots, {\bf x}_n; {\bf y}_0, \hdots, {\bf y}_n) \end{array} \end{equation} Now, using \eqref{eq:detAxVolSX} and \eqref{eq:detAyVolSY}, we can see that \begin{equation} D({\bf x}_0, \hdots, {\bf x}_n; {\bf y}_0, \hdots, {\bf y}_n) = (n!)^2 \text{Vol}(\mathcal{X}) \text{ } \text{Vol}(\mathcal{Y}). \end{equation} By taking the sets $\mathcal{Y} = \mathcal{X}$ one finds that \begin{equation} \begin{array}{lcl} D({\bf x}_0, \hdots, {\bf x}_n) & = & D({\bf x}_0, \hdots, {\bf x}_n; {\bf x}_0, \hdots, {\bf x}_n)\\ & = & (n!)^2 \text{Vol}(\mathcal{X})^2. \end{array} \end{equation} Therefore, the Cayley-Menger Bi-determinant is proportional to the product of the signed volumes of the sets of points as previously defined.
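The identity just derived can be checked numerically: the sketch below (illustrative helper names) forms the bordered matrix $C$ of squared distances and evaluates $D = 2\left(-\frac{1}{2}\right)^{n+1}|C|$ with exact rational arithmetic.

```python
from fractions import Fraction

def det(m):
    """Laplace-expansion determinant (fine for the small matrices here)."""
    if len(m) == 1:
        return m[0][0]
    total = 0
    for j, pivot in enumerate(m[0]):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * pivot * det(minor)
    return total

def bidet(xs, ys):
    """Cayley-Menger bi-determinant D(x_0..x_n; y_0..y_n) of two sets
    of n+1 points in R^n, via the bordered matrix C whose interior
    entries are the squared distances d(x_i, y_j)^2."""
    n1 = len(xs)  # n + 1 points per set
    sq = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q))
    C = [[0] + [1] * n1] + [[1] + [sq(x, y) for y in ys] for x in xs]
    return 2 * Fraction(-1, 2) ** n1 * det(C)
```

For the right triangles $\{(0,0),(1,0),(0,1)\}$ and $\{(0,0),(2,0),(0,2)\}$ this yields $D = 4 = (2!)^2 \cdot \frac{1}{2} \cdot 2$, and taking both sets equal to the unit triangle yields $D = 1 = (2!)^2 \left(\frac{1}{2}\right)^2$, matching the volume identities above.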
\section{Introduction} \label{Sec:Intro} Magnetic domain walls (DWs) are of great interest for spintronics.~\cite{HOF-15} The motion of DWs in magnetic nanowires has attracted particular attention because of potential applications in data storage and logic devices.~\cite{Allwood:Science_2005,Parkin:Science_2008,Xu-08} Magnetic DWs can be driven by a magnetic field, an electric current,~\cite{YAM-04,Li:PRL_2004,Tatara:SciRep_2008,EMO-13,RYU-13} propagating spin waves,~\cite{HAN-09,Yan:PRL_2011,HIN-11} or an electric field.~\cite{Lahtinen:SciRep_2012,FRA-15} On the other hand, strong DW pinning at specific locations of a ferromagnetic film offers attractive prospects for magnonics,~\cite{KRA-14} where they can be used as spin-wave nanochannels~\cite{GAR-15,WAG-16,TRU-16} or monochromatic spin-wave sources.~\cite{VanDeWiele:SciRep_2016,Voto2017:SciRep} In unpatterned films, DW pinning requires a lateral modulation of magnetic anisotropy. Here, the anisotropy boundaries pin the magnetic DWs and an external magnetic field tailors their spin structure instead of moving them. Deterministic switching between wide and narrow magnetic DWs by a magnetic field has been demonstrated~\cite{Franke:PRB_2012} and the energetics of different DW types can drastically alter the magnetization reversal process.~\cite{YOU-13,CAS-15} For spin-wave emission, a pinned magnetic DW needs to be driven into oscillation by high-frequency actuation. Spin-transfer torques from an ac spin-polarized current can be used to achieve this.~\cite{VanDeWiele:SciRep_2016} Magnetic anisotropy boundaries themselves can also act as local spin-wave sources in a microwave magnetic field.~\cite{HAM-17} Thus, even if all magnetic DWs are erased by an external bias field, spin waves are still emitted from anisotropy boundaries. In this case, dissimilar magnetization precessions in neighboring domains trigger the excitation of spin waves. 
Regular modulations of magnetic anisotropy can be induced by magnetoelectric coupling between a ferromagnetic film and a ferroelectric layer. In some material systems, the ferroelectric domain pattern is completely transferred to the ferromagnet. Full ferroelectric-ferromagnetic domain correlations have been demonstrated in bilayers where the ferromagnetic film is exchange-coupled to the canted magnetization of a single-phase multiferroic film~\cite{CHU-08,Lebeugle:PRL_2009,Heron:PRL_2011,YOU-13} or strain-coupled to the ferroelastic domains of a ferroelectric crystal.~\cite{Lahtinen:AdvMat_2011,Chopdekar:PRB_2012,STR-13,Franke:PRL_2014} In both material systems, a local uniaxial magnetic anisotropy is induced in the ferromagnetic film. The in-plane axis of uniaxial magnetic anisotropy rotates from one domain to the other. Since ferroelectric domain boundaries in multiferroic bilayers are only a few nanometers wide, the magnetic anisotropy boundaries are nearly abrupt. Magnetic domain walls are strongly pinned by such sharp rotations of magnetic anisotropy. Besides multiferroic heterostructures, modulations of uniaxial magnetic anisotropy can also be realized by local ion irradiation~\cite{TRU-16,TRU-14} and thermally-assisted scanning probe lithography.~\cite{ALB-16} In most of the cited examples, the uniaxial magnetic anisotropy axis rotates by 90$^\circ$. In thin ferromagnetic films and zero magnetic field, the anisotropy boundaries thus pin 90$^\circ$ magnetic DWs of the N\'{e}el type. In this paper, we provide a theoretical description of a magnetic DW that is pinned by a 90$^\circ$ uniaxial magnetic anisotropy boundary. To describe the static and dynamic properties of the DW, we use a 1D model with continuous spatial variables. The model allows us to accurately calculate the static deformation of the DW profile in a perpendicular magnetic field. Dynamic excitations of the DW are modeled by the inclusion of a spin-transfer torque from an electric current.
Application of a spin-polarized current moves the DW center away from the anisotropy boundary and tilts the DW magnetization out of the film plane. Next, we describe the dynamics of a pinned magnetic DW by using the center and tilting angle of the DW as collective coordinates. We derive an expression for the DW resonance frequency and calculate how it varies as a function of magnetic anisotropy strength and applied magnetic field. For consistency, we compare our analytical results with numerical simulations based on a 1D Heisenberg model and the Landau-Lifshitz-Gilbert (LLG) equation as well as with micromagnetic simulations. The paper is organized as follows. In Sec.~\ref{Sec:Model} we introduce the DW models. In Sec.~\ref{Sec:DomainWall} we provide results for the DW profile in zero and non-zero magnetic field. Sec.~\ref{Sec:Dynam} studies the effect of an applied electric current. First, we develop a model for current-induced DW oscillations in zero magnetic field. Then, a model describing the simultaneous action of a magnetic bias field and a spin-polarized current is presented. An expression for the DW resonance frequency is derived and numerically studied. Finally, we discuss our results in Sec.~\ref{Sec:Conclusions}. \section{Model} \label{Sec:Model} \begin{figure}[htp!] \includegraphics[width=.9\columnwidth]{./fig1.eps} \caption{Collective coordinates of a head-to-tail 90$^\circ$ magnetic DW. The red arrows point in the direction of local magnetization. The left (L) and right (R) parts of the magnetic layer differ in the direction of uniaxial magnetic anisotropy axis. The red solid line indicates the abrupt anisotropy boundary and the black dashed line marks the DW center. The displacement of the DW center from the anisotropy boundary is given by coordinate $q$ while the DW tilting angle from the film plane is given by $\psi$.
The black arrow indicates the direction of applied magnetic field ($H_{\rm app}$).} \label{Fig:collective} \end{figure} In our models, we orient the ferromagnetic film in the $y-z$ plane (see Fig.~\ref{Fig:collective}). The DW magnetization changes along the $y$-axis and we assume translation symmetry along the $z$-axis. The magnetic anisotropy boundary is located at $y = 0$ and the angle between the uniaxial anisotropy axis in the left (L) and right (R) domains is set to 90$^\circ$. The unit vector along the anisotropy axis in the domains is expressed as $\hat{\bm e}_u = (0, \sin\xi_u, \cos\xi_u)$, with $\xi_u$ differing in the domains \begin{equation} \xi_u = \begin{dcases} \pi/4\,, & \text{in the left domain ($y<0$) }\,, \\ 3\pi/4\,, & \text{in the right domain ($y>0$)}\,. \end{dcases} \label{Eq:xi_u} \end{equation} In this section, we introduce two models that describe the static and dynamic properties of a pinned magnetic DW. For analytical calculations, we exploit a continuous 1D model in which the magnetization direction varies smoothly across the DW. Numerical simulations are performed using a discrete 1D Heisenberg model. Here, we consider a finite chain of magnetic moments along the $y$-axis. Relations between the two models are explained. \subsection{Continuous model} In the continuous limit, the volume energy density can be written as the sum of exchange energy density (${w_{\rm ex}}$), shape anisotropy density (${w_{\perp}}$), Zeeman energy density (${w_{\rm Z}}$), and uniaxial anisotropy energy density (${w_u}$) \begin{equation} w = {w_{\rm ex}} + {w_{\perp}} + {w_{\rm Z}} + {w_u}\,. 
\end{equation} Generally, the terms take the forms \begin{subequations} \begin{align} {w_{\rm ex}} &= A\, \left[({\bm \nabla} m_x)^2 + ({\bm \nabla} m_y)^2 + ({\bm \nabla} m_z)^2 \right]\,, \\ {w_{\perp}} &= K_{\perp}\, \left({\bm m} \cdot \hat{\bm e}_x \right)^2\,, \\ {w_{\rm Z}} &= -\mu_0\, {M_{\rm s}}\, {\bm m} \cdot {{\bm H}_{\rm app}}\,, \\ {w_u} &= -K_u\, \left( {\bm m} \cdot \hat{\bm e}_u \right)^2\,, \end{align} \label{Eqs:w_def} \end{subequations} where ${\bm m} = {\bm M} / {M_{\rm s}} = (m_x, m_y, m_z)$ is the unit vector along the magnetization direction, ${M_{\rm s}}$ is the saturation magnetization, $A$ is the exchange stiffness parameter, $K_{\perp}$ is the perpendicular anisotropy, and $K_u$ is the uniaxial in-plane anisotropy. In our calculations, we always assume $K_{\perp}$ and $K_u$ to be positive. \subsection{Heisenberg model} To complement our analytical results, we perform simulations using a discrete Heisenberg model. In this model, the continuously varying parameter ${\bm m}(y)$ is replaced by ${\hat{\bm s}}_n = {\bm m}(y_n)$, with $y_n$ indicating the position of the $n$-th spin of a 1D chain. The positional variable can be expressed as $y_n = (n-1)\, a$, where $n \in \{1, 2, \dots, N\}$ and $a$ is the distance between two adjacent magnetic moments. The Heisenberg Hamiltonian of a 1D chain of magnetic moments is given by~\cite{Wieser:PRB_2010} \begin{equation} \begin{split} {\cal H} = &-J \sum_n {\hat{\bm s}}_n \cdot {\hat{\bm s}}_{n+1} - \mu_0 \mu_{\rm S}\, {{\bm H}_{\rm app}} \cdot \sum_n {\hat{\bm s}}_n + \\ &{D_{\perp}} \sum_n \left( {\hat{\bm s}}_n \cdot \hat{\bm e}_x \right)^2 - {D_u} \sum_n \left( {\hat{\bm s}}_n \cdot \hat{\bm e}_u \right)^2\,, \end{split} \label{Eq:Hamiltonian} \end{equation} where $J$ is the exchange coupling parameter, ${{\bm H}_{\rm app}}$ is the applied magnetic field, ${D_{\perp}} > 0$ is the perpendicular magnetic anisotropy, and ${D_u} > 0$ is the uniaxial anisotropy in the film plane. 
The parameters in Eq.~\ref{Eq:Hamiltonian} are related to those of the continuous model: $J = 2a\, A$, ${D_{\perp}} = a^3\, K_{\perp}$, and ${D_u} = a^3\, K_u$, where $a$ is the cell size. Moreover, if we define $\mu_{\rm S} = \muB S$, where $\muB$ is the Bohr magneton and $S$ is the spin per unit cell, then the saturation magnetization of one cell is given by ${M_{\rm s}} = \mu_{\rm S} / a^3$. \subsection{Spin dynamics} We will now describe the dynamics of magnetization that is generated by an effective torque. The torques that we consider are caused by an effective magnetic field or a spin-polarized current. The time variation of the magnetization vector is described by the Landau-Lifshitz-Gilbert (LLG) equation. In the continuous limit, it reads \begin{equation} \der{}{{\bm m}}{t} - \alpha\, {\bm m} \times \der{}{{\bm m}}{t} = {\bm \Omega}\,, \label{Eq:LLG} \end{equation} where $t$ is time, $\alpha$ is the Gilbert damping parameter, and ${\bm \Omega}$ is the total torque acting on ${\bm m}$. ${\bm \Omega}$ can be written as \begin{equation} {\bm \Omega} = -\mu_0\, \gamma\, {\bm m} \times {{\bm H}_{\rm eff}} + {\bm \tau}\,, \label{Eq:Gamma} \end{equation} where $\gamma = |{\gamma_{\rm g}}| > 0$ is the gyromagnetic ratio. ${\bm \Omega}$ consists of two terms, one describing the torque that is induced by the effective magnetic field (${{\bm H}_{\rm eff}}$) and another representing the current-induced spin-transfer torque (${\bm \tau}$). The effective magnetic field is given by the functional derivative of the volume energy density ($w$) \begin{equation} {{\bm H}_{\rm eff}} = -\frac{1}{\mu_0\,{M_{\rm s}}} \fder{w}{{\bm m}}\,, \label{Eq:Heff} \end{equation} where $\mu_0$ is the vacuum permeability. 
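The discrete-continuum parameter mapping given above ($J = 2a\,A$, ${D_u} = a^3\,K_u$, ${M_{\rm s}} = \mu_{\rm S}/a^3$) is easy to verify numerically; the sketch below evaluates it with the simulation values quoted later for Fig.~\ref{Fig:DW_prof} (the variable names are ours):

```python
# Discrete Heisenberg parameters from the continuous ones
# (J = 2aA, D_u = a^3 K_u, mu_S = M_s a^3), evaluated with the
# simulation values used for Fig. 3.
a = 0.5e-9           # spacing between magnetic moments (m)
A = 2.1e-11          # exchange stiffness (J/m)
K_u = 2.5e4          # uniaxial in-plane anisotropy (J/m^3)
M_s = 1.5e6          # saturation magnetization (A/m)

J = 2 * a * A        # exchange coupling of the spin chain (J)
D_u = a**3 * K_u     # discrete uniaxial anisotropy (J)
mu_S = M_s * a**3    # magnetic moment per cell (A m^2)

print(J, D_u, mu_S)  # ~2.1e-20 J, ~3.1e-24 J, ~1.9e-22 A m^2
```

The resulting energies per cell (tens of zeptojoules for the exchange, four orders of magnitude less for the anisotropy) set the scale of the Hamiltonian of Eq.~\ref{Eq:Hamiltonian}.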
The spin-transfer torque acting on the magnetization is given by\cite{Li:PRL_2004,Li:PRB_2004} \begin{equation} {\bm \tau} = -u\, \left[ ({\bm j} \cdot {\bm \nabla}) - \beta\, {\bm m} \times ({\bm j} \cdot {\bm \nabla}) \right] {\bm m}\,, \label{Eq:torque} \end{equation} where ${\bm j}$ is a unit vector along the current direction and $\beta$ is the spin-torque nonadiabaticity. The parameter $u$ is given by \begin{equation} u = \frac{\muB I P}{e {M_{\rm s}}}\,, \label{Eq:u} \end{equation} where $I$ is the charge current density, $P$ is the spin polarization of the current, and $e$ is the electron charge. If we consider a current along the $y$-axis, we can write \begin{equation} {\bm \tau} = -u\, \left( \pder{}{y} - \beta\, {\bm m} \times \pder{}{y} \right) {\bm m}. \label{Eq:torque1} \end{equation} \subsubsection{Spherical coordinates} \begin{figure} \centering \includegraphics[width=.9\columnwidth]{./fig2.eps} \caption{Spherical coordinate system used in the analytical model.} \label{Fig:spherical} \end{figure} For the sake of simplicity, we now express the LLG equation in local spherical coordinates $(\theta, \phi)$, as schematically shown in Fig.~\ref{Fig:spherical}. In this coordinate system, the magnetization vector can be written as ${\bm m} = (\cos\phi \sin\theta, \sin\phi \sin\theta, \cos\theta)$. Moreover, we define two local base vectors perpendicular to ${\bm m}$. \begin{subequations} \label{Eq:loc_base_vectors} \begin{align} \hat{\bm e}_\phi &= \left( \hat{\bm e}_z \times {\bm m} \right) / \sin\theta\,, \\ \hat{\bm e}_\theta &= \hat{\bm e}_\phi \times {\bm m}\,, \end{align} \end{subequations} where $\hat{\bm e}_z = (0, 0, 1)$. 
Consequently, the LLG equation (Eq.~\ref{Eq:LLG}) takes the form~\cite{Thiaville:EPL2005,Thomas:Nat2006} \begin{subequations} \begin{align} \der{}{\theta}{t} = &-\frac{\gamma}{{M_{\rm s}}} \frac{1}{\sin\theta} \fder{w}{\phi} - \alpha\, \sin\theta\, \der{}{\phi}{t} \notag \\ &- u\, \left( \pder{\theta}{y} + \beta\, \sin\theta\, \pder{\phi}{y} \right)\,, \\ \sin\theta\, \der{}{\phi}{t} = &\quad \frac{\gamma}{{M_{\rm s}}} \fder{w}{\theta} + \alpha\, \der{}{\theta}{t} \notag \\ &- u\, \left( \sin\theta\, \pder{\phi}{y} - \beta\, \pder{\theta}{y} \right)\,. \end{align} \label{Eqs:LLG_spher} \end{subequations} The overall torque acting on ${\bm m}$ can be split as \begin{equation} {\bm \Omega} = \Omega_\theta\, \hat{\bm e}_\theta + \Omega_\phi\, \hat{\bm e}_\phi\,, \end{equation} where $\Omega_\theta = {\bm \Omega} \cdot \hat{\bm e}_\theta$ and $\Omega_\phi = {\bm \Omega} \cdot \hat{\bm e}_\phi$. Finally, the different energy density terms can be expressed as \begin{subequations} \begin{align} {w_{\rm ex}} &= A\, \left[ \left( \pder{\theta}{y} \right)^2 + \left( \pder{\phi}{y} \right)^2 \sin^2\theta \right]\,, \\ {w_{\perp}} &= K_{\perp}\, \cos^2\!\phi\, \sin^2\!\theta\,, \\ {w_{\rm Z}} &= -\mu_0\, {M_{\rm s}}\, H_{\rm app}\, \sin\!\phi\, \sin\!\theta\,, \\ {w_u} &= -\frac{K_u}{2}\, \left(\sin\phi \sin\theta \pm \cos\theta \right)^2\,. \end{align} \label{Eqs:w_def_spher} \end{subequations} Here, we took into account that the magnetization varies only along the $y$ direction and that the magnetic bias field is oriented along $y$ as well (${{\bm H}_{\rm app}} = H_{\rm app}\, \hat{\bm e}_y$). In the expression for ${w_u}$, we included the abrupt 90$^\circ$ rotation of the uniaxial magnetic anisotropy; the factor $1/2$ arises from $\sin^2\!\xi_u = \cos^2\!\xi_u = 1/2$. The upper sign relates to the left domain ($y < 0$) and the lower sign applies to the right domain ($y > 0$). \subsubsection{Heisenberg model} In the discrete Heisenberg model, we replace ${\bm m}$ by ${\hat{\bm s}}_n$ in the LLG equation. 
The effective magnetic field encountered by spin ${\hat{\bm s}}_n$ is given by ${{\bm H}_{\rm eff}}_n = -(\mu_{\rm S} \mu_0)^{-1} (\partial{\cal H}/\partial{\hat{\bm s}}_n)$. For the discrete variable we use $\partial{\hat{\bm s}} / \partial y = ({\hat{\bm s}}_{n+1} - {\hat{\bm s}}_{n-1}) / (2\,a)$ at the $n$-th site of the 1D chain. This gives a discretized expression for the current-induced spin-transfer torque \begin{equation} {\bm \tau}_n = -\frac{u}{a} \left[ \Delta{\hat{\bm s}}_n - \beta\, {\hat{\bm s}}_n \times \Delta {\hat{\bm s}}_n \right]\,, \label{Eq:stt_disc} \end{equation} where $\Delta {\hat{\bm s}}_n = ({\hat{\bm s}}_{n+1} - {\hat{\bm s}}_{n-1}) / 2$. \section{90$^\circ$ domain wall} \label{Sec:DomainWall} \subsection{Equilibrium DW} We will now inspect the DW profile in equilibrium, i.e., when no external magnetic field and no electric current are applied. If we assume that the magnetization rotates in the film plane ($\phi = \pi/2$), $\Omega_\theta = 0$ in both domains and \begin{equation} \Omega_\phi = \frac{\gamma}{{M_{\rm s}}} \left\{ -2\, A \pderr{2}{\theta(y)}{y} + K_u \sin\left[2 \left(\theta(y) - \xi_u\right)\right] \right\}\,. \label{Eq:omp} \end{equation} In equilibrium, $\Omega_\theta = \Omega_\phi = 0$, which gives~\cite{Tatara:SciRep_2008} \begin{equation} \pderr{2}{\theta'}{y} = \frac{1}{\lambda^2} \sin\theta' \cos\theta'\,. \label{Eq:dif_eq} \end{equation} Here, we defined $\theta'(y) = \theta(y) - \xi_u$ and the DW width parameter \begin{equation} \lambda = \sqrt{\frac{A}{K_u}}\,. \label{Eq:lambda} \end{equation} For a head-to-tail 90$^\circ$ DW one needs to impose the boundary conditions $\theta' \to 0$ for $y \to \pm\infty$. Moreover, $\theta = \pi/2$ at the anisotropy boundary ($y = 0$). 
Using these conditions, we obtain a static solution for the DW profile \begin{equation} \theta(y) = \begin{dcases} \frac{\pi}{4} + 2\, \arctan\left[ \left( \sqrt{2} - 1 \right) \exp\left(y/\lambda \right)\right]\,, \\ \qquad \text{if } y < 0\,, \\ \frac{3 \pi}{4} - 2\, \arctan\left[ \left( \sqrt{2} - 1 \right) \exp\left(-y/\lambda \right)\right]\,, \\ \qquad \text{if } y > 0\,. \end{dcases} \label{Eq:prof_eq} \end{equation} \begin{figure}[!tp] \centering \includegraphics[width=.9\columnwidth]{fig3.eps} \caption{Domain wall profiles obtained from Heisenberg model simulations for different values of applied magnetic field $\mu_0 H_{\rm app}$. In the simulations, $N = 2000$, $a = 0.5\, {\rm nm}$, ${M_{\rm s}} = 1.5 \times 10^6\, {\rm A/m}$, $A = 2.1 \times 10^{-11}\, {\rm J/m}$, $K_u = 2.5 \times 10^4\, {\rm J/m}^3$, and ${D_{\perp}} = 0.1 {D_u}$. The open circles indicate zero-field solutions of the analytic model (Eq.~\ref{Eq:prof_eq}).} \label{Fig:DW_prof} \end{figure} This expression is exact when dipolar interactions are negligible. For a head-to-tail 90$^\circ$ DW this is an accurate approximation because its profile is determined by the competing strengths of exchange coupling and uniaxial magnetic anisotropy.~\cite{Franke:PRL_2014} Figure~\ref{Fig:DW_prof} demonstrates that the analytical solution agrees well with zero-field Heisenberg model simulations, which likewise neglect dipolar interactions (see solid orange curve and open circles). When a magnetic field is applied along an unpinned 180$^\circ$ magnetic DW, it moves to minimize the Zeeman energy. On the other hand, when the field is oriented perpendicular to the same DW, its internal spin structure and, thereby, its dynamic properties change.~\cite{Sobolev1994:JAP,Sobolev1995:JMMM} Next, we will analyze how the application of a magnetic field normal to the DW plane alters the profile of a pinned 90$^\circ$ DW. 
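The boundary values of Eq.~\ref{Eq:prof_eq} can be checked directly; a minimal sketch (with $\lambda$ set to 1, function names ours):

```python
import math

# Sanity checks of the equilibrium profile of Eq. prof_eq (lambda = 1):
# both branches give theta = pi/2 at the anisotropy boundary, since
# arctan(sqrt(2) - 1) = pi/8, and theta tends to the domain angles
# pi/4 and 3*pi/4 far from the boundary.
def theta_eq(y, lam=1.0):
    c = math.sqrt(2) - 1
    if y < 0:
        return math.pi/4 + 2*math.atan(c*math.exp(y/lam))
    return 3*math.pi/4 - 2*math.atan(c*math.exp(-y/lam))

print(theta_eq(0.0) / math.pi)    # ~0.5: theta = pi/2 at y = 0
print(theta_eq(-20.0) / math.pi)  # ~0.25: left-domain angle pi/4
print(theta_eq(20.0) / math.pi)   # ~0.75: right-domain angle 3*pi/4
```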
\subsection{Effect of magnetic field} \label{SSec:mag_field} When an in-plane magnetic field is applied perpendicular to the head-to-tail 90$^\circ$ DW, i.e., along the $y$-axis, the Zeeman energy is the same in both domains. Therefore, the DW will not leave its equilibrium position on top of the anisotropy boundary. Instead, the magnetization vectors in both domains gradually rotate towards each other in a magnetic field. This coherent reduction of the DW angle depends on the strength of the uniaxial magnetic anisotropy. The torque that acts on the magnetization in an external magnetic field $H_{\rm app}$ is given by \begin{equation} \begin{split} \Omega_\phi = \frac{\gamma}{{M_{\rm s}}} \biggl\{ &-2 A\, \pderr{2}{\theta}{y} + K_u\, \sin\left[2 (\theta - \xi_u)\right] \\ &- \mu_0 {M_{\rm s}}\, H_{\rm app}\, \cos\theta\; \biggr\}\,. \end{split} \label{Eq:omp_happ} \end{equation} For $H_{\rm app} > 0$, the magnetization angle $\theta$ in the left domain increases by an angle $\zeta$, $\theta_{\rm L} = \pi/4 + \zeta$, while in the right domain $\theta_{\rm R} = 3\pi/4 - \zeta$. Consequently, the magnetization rotation between neighboring domains ($\Delta$) is reduced by $2\zeta$; $\Delta = \pi/2 - 2\zeta$. Figure~\ref{Fig:DW_prof} shows how the DW profile evolves as a function of applied magnetic field. Deep inside the domains, where $\partial\theta/\partial{y} = 0$, Eq.~\ref{Eq:omp_happ} can be used to derive an expression for $\zeta$ \begin{equation} \frac{K_u}{\mu_0\, {M_{\rm s}}} \sin\left(2\, \zeta\right) = H_{\rm app} \cos\left( \zeta + \frac{\pi}{4} \right)\,. \label{Eq:zeta} \end{equation} This equation can be solved numerically for arbitrary values of $H_{\rm app}$. 
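The numerical solution of Eq.~\ref{Eq:zeta} is straightforward because the residual is negative at $\zeta = 0$ and positive at $\zeta = \pi/4$ for any $H_{\rm app} > 0$, so a bisection bracket always contains the root. A minimal sketch using the parameter values of Fig.~\ref{Fig:DW_prof} (function names ours):

```python
import math

# Bisection solver for Eq. zeta, using the Fig. 3 parameter values.
# The residual is negative at zeta = 0 and positive at zeta = pi/4
# for any H_app > 0, so the bracket always contains the root.
MU0 = 4e-7 * math.pi

def solve_zeta(H_app, K_u=2.5e4, M_s=1.5e6, tol=1e-12):
    f = lambda z: (K_u / (MU0 * M_s)) * math.sin(2*z) \
        - H_app * math.cos(z + math.pi/4)
    lo, hi = 0.0, math.pi/4
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return 0.5 * (lo + hi)

zeta = solve_zeta(25e-3 / MU0)   # mu0 H_app = 25 mT
print(f"zeta = {zeta:.3f} rad")  # ~0.35 rad for these parameters
```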
Once the angle $\zeta$ is known, one can use the following ansatz for the DW profile in an applied magnetic field \begin{equation} \theta_{\zeta}(y) = \begin{dcases} \frac{\pi}{4} + \zeta + 2\, \arctan\left[ C\, \exp\left(y/\lambda' \right)\right]\,, \\ \qquad \text{if } y < 0\,, \\ \frac{3 \pi}{4} - \zeta - 2\, \arctan\left[ C\, \exp\left(-y/\lambda' \right)\right]\,, \\ \qquad \text{if } y > 0\,, \end{dcases} \label{Eq:prof_zeta} \end{equation} where $C$ can be extracted from the boundary condition at $y=0$ \begin{equation} C = \tan\left( \frac{\pi}{8} - \frac{\zeta}{2} \right)\,. \label{Eq:A_zeta} \end{equation} Moreover, $\lambda'$ in Eq.~\ref{Eq:prof_zeta} is the DW width, which differs from the zero-field DW width, $\lambda$, as defined by Eq.~\ref{Eq:lambda}. \begin{figure}[tp] \centering \includegraphics[width=.9\columnwidth]{fig4.eps} \caption{(a) Angle $\zeta$ and (b) $p = \lambda'/\lambda$ as a function of applied magnetic field, $H_{\rm app}$, for various values of $K_u$. The other parameters in the calculations are the same as in Fig.~\ref{Fig:DW_prof}.} \label{Fig:dw_happ} \end{figure} Figure~\ref{Fig:dw_happ} shows the parameter $\zeta$ and the ratio $p = \lambda'/\lambda$ as a function of magnetic field for different values of $K_u$. While the values of $\zeta$ are directly obtained from Eq.~\ref{Eq:zeta}, the dependence of $\lambda'$ follows from Heisenberg model simulations. Here, the LLG equation is used to simulate the relaxation of the discrete magnetic moments in a magnetic field. Once the static state is reached, the parameters $\zeta$ and $\lambda'$ are extracted by fitting the spatial magnetization profile to Eq.~\ref{Eq:prof_zeta}. Equation~\ref{Eq:zeta} and the discrete Heisenberg model give very similar results for $\zeta$. For large perpendicular magnetic fields, $\zeta$ approaches a maximum of $\pi/4$. This value corresponds to full magnetization saturation along the direction of the applied magnetic field. 
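Note that with $C$ from Eq.~\ref{Eq:A_zeta} the ansatz of Eq.~\ref{Eq:prof_zeta} is automatically continuous, with $\theta_\zeta = \pi/2$ at the anisotropy boundary for any $\zeta$, since $\pi/4 + \zeta + 2\arctan(C) = \pi/2$ identically. A short numerical check:

```python
import math

# With C = tan(pi/8 - zeta/2) from Eq. A_zeta, both branches of the
# ansatz of Eq. prof_zeta meet at theta = pi/2 on the boundary:
# pi/4 + zeta + 2 arctan(C) = pi/2 for any zeta in [0, pi/4).
for zeta in (0.0, 0.1, 0.3, 0.6):
    C = math.tan(math.pi/8 - zeta/2)
    left = math.pi/4 + zeta + 2*math.atan(C)     # y -> 0 from the left
    right = 3*math.pi/4 - zeta - 2*math.atan(C)  # y -> 0 from the right
    print(f"{zeta:.1f}: {left:.6f} {right:.6f}") # both pi/2 ~ 1.570796
```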
As a result of the diminishing magnetization rotation between domains ($\Delta$), the DW width ($\lambda'$) decreases with increasing field strength (Fig.~\ref{Fig:dw_happ}(b)). The predicted tunability of the width and internal spin structure of a pinned DW might be exploited for active manipulation of spin waves. Previously, it has been found that dynamic stray fields in DWs reduce the transmission of propagating spin waves if the DW width becomes smaller than the spin-wave wavelength.~\cite{MAC-10,WAN-APL-13} Reprogramming of the DW spin structure by an external field at a fixed location of a ferromagnetic film could thus impose controllable changes to the amplitude or phase of passing spin waves, which is an essential feature of magnonic logic devices.~\cite{CHU-15} \section{Current-induced domain wall dynamics} \label{Sec:Dynam} \subsection{Zero magnetic field} \begin{figure}[!htp] \centering \includegraphics[width=.9\columnwidth]{fig5.eps} \caption{Head-to-tail DW profiles under the influence of an electric current for (a) in-plane, $S_z$, and (b) out-of-plane, $S_x$, spin coordinates as obtained from Heisenberg model simulations. The inset of (a) shows the displacement of the DW center from the anisotropy boundary at $y=0$. In the calculations $\alpha = 0.15$, $P=0.5$, $\beta=0.4$, and $|I| = 10^{12}\,{\rm A}/{\rm m}^2$. The other parameters are the same as in Fig.~\ref{Fig:DW_prof}.} \label{Fig:DW_curr} \end{figure} We now discuss the influence of an electric current on the DW profile and its dynamic properties. We first focus on a pinned head-to-tail 90$^\circ$ DW in zero magnetic field. To examine the action of the spin-transfer torque, we perform numerical simulations of the Heisenberg model with a constant spin current density. In these simulations, we evolve the magnetization using the LLG equation until a stationary state is reached. 
Results for a current density of $I = \pm 10^{12}\, {\rm A}/{\rm m}^2$, which is comparable to values used in experiments,~\cite{YAM-04,Klaui2005:PRL} are shown in Fig.~\ref{Fig:DW_curr}. While the electric current does not substantially modify the in-plane DW profile, it shifts the DW center away from the magnetic anisotropy boundary (see inset in Fig.~\ref{Fig:DW_curr}(a)). Moreover, the DW magnetization tilts out of the film plane under the action of an electric current (Fig.~\ref{Fig:DW_curr}(b)). The direction of DW displacement and the sign of DW tilt depend on the direction of the electric current. The magnitudes of both effects are determined by the absolute value $|I|$. Our results are consistent with current-induced magnetization dynamics of 180$^{\circ}$ DWs.~\cite{Li:PRL_2004,Li:PRB_2004} Importantly, the results of Fig.~\ref{Fig:DW_curr} allow us to assume that the profile of a pinned 90$^{\circ}$ DW does not change under the influence of an electric current. Hence, we can describe the DW dynamics by two collective coordinates, namely, the position of the DW center ($q(t)$) and the DW tilt angle ($\psi(t)$), as illustrated in Fig.~\ref{Fig:collective}. In agreement with Eq.~\ref{Eq:prof_eq}, we use the following ansatz for the in-plane DW profile \begin{equation} \tilde{\theta}(y,t) = \begin{dcases} \frac{\pi}{4} + 2\, \arctan\left[ \left( \sqrt{2} - 1 \right) \exp\left(\frac{y-q(t)}{\lambda} \right) \right]\,, \\ \qquad \text{if } y < q(t)\,, \\ \frac{3 \pi}{4} - 2\, \arctan\left[ \left( \sqrt{2} - 1 \right) \exp\left(-\frac{y-q(t)}{\lambda} \right) \right]\,, \\ \qquad \text{if } y > q(t)\,. \end{dcases} \label{Eq:theta_ansatz} \end{equation} The out-of-plane DW profile needs to satisfy vanishing magnetization tilt and spin-transfer torque inside the domains, i.e., far away from the anisotropy boundary. To account for this, we use \begin{equation} \tilde{\phi}(y,t) = \frac{\pi}{2} - \psi(t)\, \cos\left[2\, \tilde{\theta}(y,t) \right]\,. 
\label{Eq:phi_ansatz} \end{equation} Here, the DW tilting angle $\psi$ corresponds to the maximum out-of-plane magnetization angle. We note that this simple ansatz does not fully reproduce the numerical simulations of Fig.~\ref{Fig:DW_curr}(b). In Eq.~\ref{Eq:phi_ansatz}, $\phi$ decays more quickly with $y$ than in the Heisenberg model. Despite this discrepancy, we will demonstrate that the approximation is valid for calculations of the tilting angle and resonance frequency in the limit of small DW displacements. We obtain dynamic equations for the collective DW coordinates, $q(t)$ and $\psi(t)$, by using $\delta{w}/\delta{\theta}$ and $\delta{w}/\delta{\phi}$ from Eqs.~\ref{Eqs:LLG_spher} and defining the differential areal energy density \begin{equation} {\rm d}\varepsilon = \int_{-\infty}^{\infty} {\rm d}y \left[ \left(\fder{w}{\theta}\right)\delta\theta + \left(\fder{w}{\phi}\right)\delta\phi \right]\,. \label{Eq:deps} \end{equation} Inserting Eq.~\ref{Eq:theta_ansatz} and Eq.~\ref{Eq:phi_ansatz} into Eq.~\ref{Eq:deps} and integrating along $y$ gives an equation of motion for the collective coordinates \begin{equation} \der{}{}{t} \begin{pmatrix} q \\ \psi \end{pmatrix} = \bar{\bm M}\, \begin{pmatrix} \partial \varepsilon / \partial q \\ \partial \varepsilon / \partial \psi \end{pmatrix} + \begin{pmatrix} a_u \\ b_u \end{pmatrix}\, u\,, \label{Eq:motion} \end{equation} where \begin{equation} \renewcommand*{\arraystretch}{1.9} \bar{\bm M} = -\frac{\gamma}{{M_{\rm s}}}\, \frac{3}{2 \sqrt{2}} \begin{pmatrix} \frac{5\sqrt{2}+1}{5}\, \alpha\, \lambda & -1 \\ 1 & \frac{3}{2} \frac{\sqrt{2} - 1}{\lambda}\, \alpha \end{pmatrix}\,, \label{Eq:matM} \end{equation} and \begin{subequations} \begin{align} a_u &= 1 + \alpha \beta\,, \\ b_u &= -\frac{3 (\sqrt{2} - 1)}{2}\, \frac{\alpha - \beta}{\lambda}\,. \end{align} \label{Eqs:curr_amps} \end{subequations} Equation~\ref{Eq:motion} can be linearized. 
This gives \begin{equation} \der{}{}{t} \begin{pmatrix} q \\ \psi \end{pmatrix} = \bar{\bm D}\, \begin{pmatrix} q\\ \psi \end{pmatrix} + \begin{pmatrix} a_u \\ b_u \end{pmatrix}\, u\,, \label{Eq:motion_lin} \end{equation} where $\bar{\bm D}$ is the dynamic matrix \begin{equation} \bar{\bm D} = \bar{\bm M} \cdot \begin{pmatrix} \partial^2 \varepsilon / \partial q^2 & \partial^2 \varepsilon / \partial q \partial \psi \\ \partial^2 \varepsilon / \partial \psi \partial q & \partial^2 \varepsilon / \partial \psi^2 \end{pmatrix}_{\!\rm eq}\,, \label{Eq:dynam_approx} \end{equation} where the subscript ${\rm eq}$ indicates that the second derivatives of the areal energy density are evaluated numerically in the equilibrium magnetic configuration, i.e., $q = 0$ and $\psi = 0$.~\cite{Voto2017:SciRep,Bazaliy2004:PRB} Let us now discuss the validity and applicability of the linearized 1D model. Figure~\ref{Fig:model_compar} compares the stationary values of $q$ and $\psi$ under a constant electric current as a function of $K_u$. As $K_u$ increases, both the DW displacement and the DW tilting angle decrease because of stronger pinning at the anisotropy boundary. As a result, the linearized 1D model is more accurate for large values of $K_u$. This is confirmed by Figs.~\ref{Fig:model_compar}(a) and (b), where the predictions of the 1D model approach the numerical simulations when the anisotropy is strong. In addition, if we fit the displaced DW profile with Eq.~\ref{Eq:theta_ansatz}, we obtain a DW width that is comparable to Eq.~\ref{Eq:lambda} in the whole anisotropy range (Fig.~\ref{Fig:model_compar}(c)). Based on these results, we conclude that our linearized 1D model describes current-induced DW dynamics in the approximation of small DW displacements. The calculated DW displacement for a current density $I = 10^{12}\, {\rm A/m^{2}}$ is of the order of 1~nm. This distance compares well to micromagnetic simulations in Ref.~\citenum{VanDeWiele:SciRep_2016}. 
In the same study it was shown that DW oscillations of this amplitude, driven by an ac spin-polarized current, turn the pinned 90$^{\circ}$ DW into a tunable source of propagating spin waves. \begin{figure}[!tbp] \centering \includegraphics[width=.9\columnwidth]{fig6.eps} \caption{Comparison of the linearized analytical model and Heisenberg model simulations for $\alpha = 0.15$, $P=0.5$, $\beta=0.4$, and $I = 10^{12}\, {\rm A}/{\rm m}^2$. The other parameters are the same as in Fig.~\ref{Fig:DW_prof}. (a) DW displacement, (b) DW tilting angle, (c) DW width. In (c), the DW width obtained from Heisenberg model simulations is compared to Eq.~\ref{Eq:lambda}.} \label{Fig:model_compar} \end{figure} Now, we discuss the effect of an ac electric current in our model. Since the direction of DW displacement depends on the direction of the current, an ac electric current induces DW oscillations around the equilibrium position. For potential applications in magnonics, the DW resonance frequency ($\omega_{\rm r}$) is a key parameter.~\cite{Saitoh2004:Nat} To calculate $\omega_{\rm r}$, we use the linearized equations of motion (Eq.~\ref{Eq:motion_lin}). For an ac electric current with frequency $\omega$, we write $I(t) = I_0\, e^{-i\, \omega t}$. Moreover, we assume that the solutions of the linearized equation of motion have the same form, $q(t) = C_q\, e^{-i\, \omega t}$ and $\psi(t) = C_\psi\, e^{-i\, \omega t}$, where $C_q$ and $C_\psi$ are constants. With this ansatz, we find that Eq.~\ref{Eq:motion_lin} has a resonant solution at the DW resonance frequency \begin{equation} \omega_{\rm r} = \frac{3}{2 \sqrt{2}}\, \frac{\gamma}{{M_{\rm s}}} \sqrt{\pderr{2}{\varepsilon}{q}\pderr{2}{\varepsilon}{\psi} - \left( \ppder{\varepsilon}{q}{\psi} \right)^2}\,, \label{Eq:om_res} \end{equation} where, as in Eq.~\ref{Eq:dynam_approx}, the second derivatives of the areal energy density ($\varepsilon$) are evaluated numerically in the equilibrium magnetic configuration. 
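The origin of Eq.~\ref{Eq:om_res} can be illustrated in the undamped limit $\alpha = 0$: the antisymmetric part of $\bar{\bm M}$ then makes the linearized system purely oscillatory, with angular frequency $(3/(2\sqrt{2}))(\gamma/{M_{\rm s}})\sqrt{\det}$ of the Hessian of $\varepsilon$. The sketch below checks this with placeholder Hessian entries (not values from the paper) and $\gamma/{M_{\rm s}}$ scaled to 1:

```python
import math

# Cross-check of Eq. om_res in the undamped limit (alpha = 0). The
# Hessian entries are placeholders and gamma/M_s is set to 1.
c = 3 / (2 * math.sqrt(2))
H11, H12, H22 = 2.0, 0.3, 1.0        # hypothetical d^2(eps) values

omega_r = c * math.sqrt(H11 * H22 - H12**2)   # Eq. om_res

# Undamped dynamic matrix D = M0 . Hess with M0 = -c [[0, -1], [1, 0]]:
D = [[c * H12, c * H22],
     [-c * H11, -c * H12]]
trace = D[0][0] + D[1][1]                     # 0: purely oscillatory
det = D[0][0] * D[1][1] - D[0][1] * D[1][0]   # eigenvalues +/- i sqrt(det)
print(trace, math.sqrt(det), omega_r)         # sqrt(det) equals omega_r
```

The traceless dynamic matrix has imaginary eigenvalues $\pm i\,\omega_{\rm r}$, i.e., undamped oscillations of $q$ and $\psi$ at the resonance frequency.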
The solid line in Fig.~\ref{Fig:res_freq}(a) shows the dependence of $f_{\rm res} = \omega_{\rm r} / (2\pi)$ on $K_u$ in the absence of a magnetic field. We find that $\omega_{\rm r} \sim K_u^{1/2}$. In addition, the potential stiffness, $\kappa$, which is defined by $\partial\varepsilon/\partial{q} = \kappa\, q$, can be approximated as $\kappa = K_u/\lambda$, and since $\lambda \sim K_u^{-1/2}$ this gives $\kappa \sim K_u^{3/2}$. Finally, the DW mass $m_{\rm DW} = \kappa / \omega_{\rm r}^2$,~\cite{Dorig1948:ZNat,Saitoh2004:Nat} which can be used as an indicator for the operation speed of DW devices, varies as $m_{\rm DW} \sim K_u^{3/2}/K_u = K_u^{1/2}$. \subsection{Simultaneous effect of magnetic field and electric current} In Sec.~\ref{SSec:mag_field} we showed that an in-plane magnetic field along the $y$-axis reduces the magnetization rotation between domains ($\Delta$) and the DW width ($\lambda'$). This might also modify the DW resonance frequency. By combining the expression for the zero-field resonance frequency (Eq.~\ref{Eq:om_res}) and the ansatzes for the DW profile (Eqs.~\ref{Eq:prof_zeta} and~\ref{Eq:phi_ansatz}), we derive dynamic equations for the collective coordinates in non-zero magnetic fields. The equation of motion has the same form as Eq.~\ref{Eq:motion} with $\bar{\bm M}$ replaced by \begin{equation} \renewcommand*{\arraystretch}{1.9} \bar{\bm M}(\zeta) = \frac{\gamma}{{M_{\rm s}}} \begin{pmatrix} \alpha \lambda'\, f(\zeta)^{-1} & g(\zeta)^{-1} \\ -g(\zeta)^{-1} & \alpha\, h(\zeta)^{-1} / \lambda' \end{pmatrix}\,, \label{Eq:matM_happ} \end{equation} and \begin{subequations} \begin{align} a_u &= 1 + \alpha \beta\,, \\ b_u &= \frac{\alpha - \beta}{\lambda'}\, \frac{f(\zeta)}{g(\zeta)}\,. 
\end{align} \label{Eqs:curr_amps_happ} \end{subequations} Here, the three functions that vary with $\zeta$ are given by \begin{subequations} \begin{align} f(\zeta) = &\sqrt{2}\, \left( \sin\zeta + \cos\zeta - \sqrt{2} \right)\,, \\ g(\zeta) = &\frac{2 \sqrt{2}}{3}\, \frac{\sin\zeta + \cos\zeta - \sqrt{2}\sin(2\zeta)}{\cos(2\zeta)}\,, \\ h(\zeta) = &\frac{\sqrt{2}}{15} \frac{1}{\cos^2(2\zeta)}\, \biggl[ 5\, (\sin\zeta + \cos\zeta)\; - \notag \\ & 2\sqrt{2}\, \sin(2\zeta) + 7\, [\sin(3\zeta) - \cos(3\zeta)] - 10\sqrt{2}\; \biggr]\,. \end{align} \label{Eqs:abc} \end{subequations} After linearization, we obtain an expression for the DW resonance frequency as a function of magnetic field \begin{equation} \omega_{\rm r}(\zeta) = \frac{\gamma}{{M_{\rm s}}}\, g(\zeta)^{-1}\, \sqrt{\pderr{2}{\varepsilon}{q}\pderr{2}{\varepsilon}{\psi} - \left( \ppder{\varepsilon}{q}{\psi} \right)^2}\,. \label{Eq:om_res_happ} \end{equation} Here, we applied the approximate relation $f(\zeta) h(\zeta) \simeq g^{2}(\zeta)$. The DW resonance frequency depends on the magnetic field through the prefactor $g(\zeta)^{-1}$. For zero applied field, $g(0)^{-1} = 3 / (2\sqrt{2})$, which recovers Eq.~\ref{Eq:om_res}. $g(\zeta)^{-1}$ increases with $\zeta$ and diverges for $\zeta \to \pi/4$, i.e., when the DW is erased by the applied magnetic field. Figure~\ref{Fig:res_freq}(b) shows the field dependence of $f_{\rm res}(\zeta) = \omega_{\rm r}(\zeta)/(2\pi)$ for several values of $K_u$. The resonance frequency increases as a function of $H_{\rm app}$. This effect relates to the reduction of the DW width at nonzero $H_{\rm app}$ (see Fig.~\ref{Fig:dw_happ}(b)). For narrow DWs, the stiffness of the pinning potential increases, causing an upshift of $f_{\rm res}$. Our calculations indicate nearly linear tuning of $f_{\rm res}$ by several GHz in modest magnetic fields. This ability to actively alter $f_{\rm res}$ could be used to tailor the frequency and wavelength of spin waves that are emitted from an oscillating DW. 
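The limiting behavior of the prefactor $g(\zeta)^{-1}$ in Eq.~\ref{Eq:om_res_happ} is easy to verify numerically; a minimal sketch:

```python
import math

# Behavior of the prefactor 1/g(zeta) in Eq. om_res_happ: it equals
# 3/(2 sqrt 2) at zero field and diverges as zeta -> pi/4, where the
# DW is erased by the field.
def g(z):
    return (2 * math.sqrt(2) / 3) * (math.sin(z) + math.cos(z)
           - math.sqrt(2) * math.sin(2*z)) / math.cos(2*z)

print(1 / g(0.0))    # 3/(2 sqrt 2) ~ 1.06
print(1 / g(0.5))    # already about 3x larger
print(1 / g(0.78))   # diverging near pi/4 ~ 0.785
```

Both the numerator and the denominator of $g(\zeta)$ vanish at $\zeta = \pi/4$, but the numerator vanishes quadratically while the denominator vanishes linearly, so $g \to 0$ and $g^{-1}$ diverges.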
\subsection{Comparison with micromagnetic simulations} \begin{figure}[!tbp] \centering \includegraphics[width=.9\columnwidth]{fig7.eps} \caption{(a) DW resonance frequency calculated for $H_{\rm app} = 0$. The line is calculated using the 1D analytical model and the open diamonds are obtained from micromagnetic simulations. (b) Field dependence of the DW resonance frequency calculated using the 1D model (lines) for various values of $K_u$ and extracted from micromagnetic simulations (solid symbols) using $K_u = 1.0\times 10^4\, {\rm J}/{\rm m}^3$ (pentagons) and $5.0\times 10^4\, {\rm J}/{\rm m}^3$ (triangles).} \label{Fig:res_freq} \end{figure} In the previous sections, we derived a 1D analytical model for a magnetic DW that is pinned by a 90$^\circ$ uniaxial anisotropy boundary. Results from this model for the DW profile, DW displacement, and DW tilting angle were compared to numerical simulations based on the 1D Heisenberg model. Although the 1D Heisenberg model goes beyond a simple linear approximation and the assumption of a rigid DW profile, it might deviate from reality because of its reduced dimensionality and lack of long-range dipolar interactions.~\cite{Beach2008:JMMM,Vandermeulen2018:JMMM} Therefore, we will now compare our model results to micromagnetic simulations and assess the model's relevance for the interpretation of experimental data. The simulations were performed using the MuMax3 software~\cite{mumax3} with periodic boundary conditions in the $y$--$z$ plane. Modulations of the uniaxial magnetic anisotropy were included by an abrupt rotation of the magnetic easy axis at the cell boundary of two 10-$\mu$m-wide stripe domains. The film thickness was set to 5 nm and the structure was discretized into $2.44 \times 4.88 \times 5\, {\rm nm}^3$ cells. We estimated the resonance frequency of the pinned DW by applying a ${\rm sinc}$-function-type current pulse in the $y$-direction with a cut-off frequency of $40\, {\rm GHz}$. 
After this, the $z$-component of the magnetization was recorded one cell away from the anisotropy boundary. The eigenfrequency of the DW was extracted by performing a Fourier transformation on these data. The simulated profile and width of the pinned DW in zero and non-zero magnetic field agree well with results from our 1D model. The main effect of dipolar interactions, which are included in the micromagnetic simulations but omitted in the 1D model, is an enlargement of the DW tails. We also find good correspondence between the simulated and calculated values of the DW resonance frequency. Figure~\ref{Fig:res_freq}(a) shows a comparison for different values of $K_u$ and zero magnetic field. At large magnetic field, the results start to deviate, as shown in Fig.~\ref{Fig:res_freq}(b). Under these conditions, the 1D model overestimates the DW resonance frequency. One of the reasons is a gradual decrease of the magnetization rotation between domains ($\Delta$). This effect lowers the spin-transfer torque efficiency and thereby the displacement of the DW. Another factor relates to a distortion of the DW during magnetization dynamics. The dependence of both effects on the applied magnetic field is illustrated in Fig.~\ref{Fig:dw_excit}. The figure shows micromagnetic simulations of the displacement and deformation of the DW during current-induced DW oscillations. The applied magnetic field in (a) and (b) is $\mu_0H_{\rm app} = 25$~mT and $\mu_0H_{\rm app} = 400$~mT, respectively. The solid black lines represent DW profiles for zero electric current and the other lines depict snapshots of dynamic DW deformations. In a small magnetic field, the spin-transfer torque displaces the DW without significantly changing its profile. Because of the smaller spin-transfer torque efficiency, the DW displacement diminishes upon an increase of the magnetic field strength. At the same time, deformations of the DW profile become more pronounced. 
Because our 1D analytical model assumes a rigid DW, it overestimates the DW resonance frequency for large magnetic field. \begin{figure}[htp!] \centering \includegraphics[width=.9\columnwidth]{./fig8.eps} \caption{Micromagnetic simulations of the DW profile during current-induced magnetization dynamics. Panels (a) and (b) show results for different magnetic bias fields along the $y$-axis. The solid black lines depict DW profiles in equilibrium (zero current). The dashed red and dotted blue lines show snapshots of the displaced and distorted DW during current-driven oscillations. The anisotropy constant in the simulations is $K_u = 10^5\, {\rm J}/{\rm m}^3$.} \label{Fig:dw_excit} \end{figure} \section{Conclusions} \label{Sec:Conclusions} In summary, we studied the static and dynamic properties of a magnetic DW that is pinned by a 90$^\circ$ uniaxial anisotropy boundary using an analytical model with continuous spatial coordinates and a discrete Heisenberg model. First, we derived a formula for the profile of an equilibrium head-to-tail DW. To account for the abrupt rotation of magnetic anisotropy, we split the expression for the in-plane magnetization profile into two parts (Eq.~\ref{Eq:prof_eq}). Consequently, calculations for the two domains were done separately. We note that the following ansatz can be used to simplify the model \begin{equation} \theta_{{\rm u},{\rm approx}}(y) = \frac{\pi}{4} + \arctan\left[ \exp \left( \frac{y}{\lambda} \right) \right]\,. \label{Eq:prof_approx} \end{equation} Here, $\lambda$ is given by Eq.~\ref{Eq:lambda}. Equation~\ref{Eq:prof_approx} does not satisfy Eq.~\ref{Eq:dif_eq}, but its similar shape could be sufficient for practical purposes. After assessing the equilibrium state, we analyzed how the DW profile deforms in a magnetic field. Besides an obvious reduction of the magnetization rotation between domains, we observed a gradual decrease of the DW width in a perpendicular magnetic field. 
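The quality of the simplified ansatz of Eq.~\ref{Eq:prof_approx} can be quantified by comparing it with the exact two-branch profile of Eq.~\ref{Eq:prof_eq}; a short sketch (with $\lambda = 1$, function names ours):

```python
import math

# Deviation of the single-branch approximation of Eq. prof_approx from
# the exact two-branch profile of Eq. prof_eq (lambda = 1). Both pass
# through pi/2 at y = 0 and share the asymptotes pi/4 and 3*pi/4.
def theta_exact(y):
    c = math.sqrt(2) - 1
    if y < 0:
        return math.pi/4 + 2*math.atan(c*math.exp(y))
    return 3*math.pi/4 - 2*math.atan(c*math.exp(-y))

def theta_approx(y):
    return math.pi/4 + math.atan(math.exp(y))

ys = [i / 100 for i in range(-800, 801)]
err = max(abs(theta_exact(y) - theta_approx(y)) for y in ys)
print(f"max deviation = {err:.4f} rad")   # a few degrees at most
```

The maximum deviation stays well below the total 90$^\circ$ rotation across the wall, which supports using Eq.~\ref{Eq:prof_approx} for rough, practical estimates.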
Next, we used the Landau-Lifshitz-Gilbert equation to explore current-induced dynamics of a pinned DW. For a small electric current and zero magnetic field, we found that the DW is slightly displaced from the anisotropy boundary without significantly changing its in-plane magnetization profile. Additionally, the spin-transfer torque tilts the DW magnetization out of the film plane. Using an ansatz for the DW profile, we derived linear equations of motion for collective DW coordinates and demonstrated that the calculated values of DW displacement and DW tilting angle are in good agreement with Heisenberg model simulations. We also derived expressions for the DW resonance frequency in zero and non-zero magnetic fields. Our results indicate that an ac electric current can drive the domain wall into resonance. Moreover, the model predicts active tuning of the DW eigen frequency by a magnetic bias field. Finally, we showed that our model calculations are in good agreement with micromagnetic simulations up to modest magnetic fields. Beyond this range, breakdown of the rigid-DW approximation causes an overestimation of the DW resonance frequency. Spin waves are emitted from a pinned DW if an ac spin-polarized current or another activation mechanism forces it to oscillate. To exploit DW pinning at an anisotropy boundary in programmable magnonic devices, one needs to understand the basic static and dynamic properties of pinned DWs and learn how to control them. The models provided here describe active tuning of the DW resonance condition by means of an external magnetic field. \section*{Acknowledgement} This work was supported by the European Regional Development Fund in the IT4Innovations national supercomputing center - path to exascale project (project number CZ.02.1.01/0.0/0.0/16\_013/0001791 within the Operational Programme Research, Development and Education) and the European Research Council (grant number ERC-2012-StG 307502-E-CONTROL). 
PB thanks the Czech Science Foundation for support (grant number 18-07172S) and S.J.H. acknowledges support from the V\"ais\"al\"a Foundation. The micromagnetic simulations were performed using computational resources provided by the Aalto Science-IT project.
\section{Acknowledgements} We acknowledge helpful discussions with Prof Martin Kamp, University of Wuerzburg, and Anjani Kumar Tiwari. SM gratefully acknowledges financial support for the project `XIIP243: Optics in nanostructures', funded by the DAE, Government of India, and the Swarnajayanti Fellowship from the DST, Government of India. SM dedicates this work to Late Prof Narendra Kumar, who pioneered the research on the synergy of optical amplification and Anderson localization\cite{pradhan94}. \section{References}
\section{Introduction} The increasing understanding of complexity has influenced many fields of research. The role of coupling, interaction, self-organization, hierarchies, etc.\ in complex systems has led to a better understanding of natural and man-made systems and processes. Without any doubt this concerns also biosystems, including biophysics, biochemistry, medical physics, genetics, etc. One of the fascinating problems of biophysics is the propagation of signals in nerve fibres and networks. The basic element of this process is the propagation of an electrical signal in a single axon. An important step in understanding this process was related to the studies of \citet{Hodgkin1945}, who derived a mathematical model of the action potential (AP) based on the ionic hypothesis. The celebrated Hodgkin-Huxley (HH) model describes an electrical signal in a fibre that has a typical asymmetric shape and is strongly supported by ion currents through the fibre wall. Later several simplified models were proposed for the description of this process, like the widely used FitzHugh-Nagumo (FHN) model \citep{Nagumo1962}. Instead of the sodium and potassium ion currents governed by specific kinetic equations in the HH model, the FHN model uses just one unspecified ion current which is able to reproduce the main properties of an AP. Paying full credit to the HH model, one has to admit that the existence of more than 20 parameters in the model makes its practical usage difficult. However, the process is more complicated than a single wave. Hodgkin himself stated that ``in thinking about the physical basis of the action potential perhaps the most important thing to do at the present moment is to consider whether there are any unexplained observations which have been neglected in an attempt to make the experiments fit into a tidy pattern'' \citep{Hodgkin1964a}. Indeed, the structure of a nerve fibre is complicated \citep{Debanne2011}. 
Even in the first approximation it must be described as a tube surrounded by extracellular fluid, with a wall made of a biomembrane and filled with the intracellular fluid – the axoplasm. The AP is an electrical signal in the axoplasm. The biomembrane is made of the lipid bilayer consisting of amphiphilic molecules with hydrophobic tails directed toward the membrane centre. The bilayer also has embedded proteins which are responsible for forming the ion transport channels (gates) through the membrane. Consequently, in order to get a full description of a process in a nerve fibre, in addition to the propagation of an AP, the accompanying processes in the surrounding biomembrane and in the axoplasm must be understood. In this paper a mathematical model describing coupled processes in a nerve fibre is presented. The resulting ensemble of waves is a clear sign of the complexity of the process, when the single constituents form a whole. In Section 2 the physical description of general signal propagation in nerve fibres is revisited in order to formulate the background. Section 3 is devoted to the analysis of coupling mechanisms and known models. A novel model based on governing equations and coupling forces is presented in Section 4. Attention is paid to assumptions and to the differences from and similarities with known models. In Section 5 the results of the numerical simulation are presented. The final Section 6 involves conclusions and ideas for further theoretical and experimental studies. In the Appendix, the numerical scheme used for simulations is described. \section{Physical description of signal propagation} \paragraph{Early descriptions} Electrophysiology of nerves is strongly influenced by explanations given by Nernst in the beginning of the 20th century (see the overview by \citet{Faraci2013}), who described the movement of ions in nerve fibres. 
Even before the Hodgkin and Huxley studies, \citet{Wilke1912} and \citet{Cole1939} had noted the complicated nature of signals in nerve fibres. \citet{Kaufmann1989} has analyzed the possible coupling effects: “electrical action potentials are inseparable from force, displacement, temperature, entropy and other membrane variables”. Indeed, nowadays several experiments have proved the existence of phenomena accompanying the propagation of an AP. This has been summed up by the following statement: “... to frame a theory that incorporates all observed phenomena in one coherent and predictive theory of nerve signal propagation” \citep{Andersen2009}. \paragraph{Experimental evidence} The early experiments of \citet{Hill1936} and \citet{Hodgkin1945,Hodgkin1964a} demonstrated the formation of an AP in dependence on ion currents. In addition, the heat production associated with an AP was measured \citep{Abbott1958}. All this basic knowledge was summarized in \citep{Hodgkin1964a,Katz1966}. Later a lot of attention was focused also on the mechanical effects accompanying the AP. The swelling of a nerve fibre has been demonstrated \citep{Iwasa1980,Tasaki1989} and the pressure waves in the axoplasm analyzed \citep{Terakawa1985}. This means that the main components of the accompanying effects -- mechanical waves in the fibre wall and in the intracellular axoplasm, generated due to the coupling with electrical signals -- have been experimentally measured. The observable transverse displacement of the biomembrane has been measured to be about 1--2~nm. The overviews of these studies have summarized the findings \citep{Tasaki1988,Kaufmann1989}. Recent experimental studies have given more information about the dependence of those effects on physiological parameters \citep{Gonzalez-Perez2016}. A similar coupling of electrical signals and mechanical pulses has also been measured in excitable plant cells \citep{Fillafer2017}. 
\paragraph{Present understanding} Axon physiology is nowadays very well documented \citep{Clay2005,Debanne2011}. The axon walls are biomembranes, which are specific lipid bilayers with many embedded cellular and molecular components that regulate the forces and transmission between the membrane and the ion channels \citep{Mueller2014}. Biomembranes are important not only for nerve fibres but also because they are structures characteristic of all living cells. These layers between living cells and the surrounding environment can be treated as deformable structures and they are able to carry mechanical waves. Consequently, the methods from the theory of continua (microstructured media) can be applied for deriving the governing equations of deformation. The mechanical energy of a biomembrane has been proposed as a quadratic function, named after Helfrich, which describes the lipid bilayer as a homogeneous elastic body, usually two-dimensional \citep{Helfrich1973}. This approach has since been modified \citep{Lomholt2006,Deseri2008} and is also able to describe inhomogeneities of the biomembranes \citep{Bitbol2011}. These inhomogeneities are related to channels for ion transport \citep{Mueller2014}. An important proposal to account for physical nonlinearities makes it possible to model localized waves in a biomembrane \citep{Heimburg2005}. The corresponding mathematical model is a Boussinesq-type equation \citep{Heimburg2005,Engelbrecht2015}. The electrical signal in an axon propagates in the intracellular axoplasm, which is actually a gel consisting of 87\% water held together by the cytoskeleton \citep{Gilbert1975}. The axoplasm is also able to carry pressure waves \citep{Terakawa1985,Rvachev2010}. All the mechanisms, structural properties and models described briefly above reflect the specific features of signal propagation in nerve fibres. 
It is a challenge to build up a coupled model of all observable effects into one, much in the spirit of A.~Toffler – “...we often forget to put the pieces back together again” \citep{Toffler1984}. Here it means that an AP should be coupled to the mechanical deformation in the biomembrane and the pressure changes in the axoplasm. \section{Modelling of mechanisms of coupling} \paragraph{Open questions and difficulties} Although there is a general agreement that all the dynamical changes in nerve fibres during the propagation of an AP are coupled, the mechanisms of coupling are not satisfactorily described. This concerns the coupling between three main processes: the AP, the waves in the biomembrane (LW, TW), and the pressure waves (PW) in the axoplasm. Taken as single processes, their physical and mathematical descriptions are well known. Since the AP is supported by ion currents, the transport through ion channels is also well studied \citep{Heimburg2010,Howells2012,Mueller2014}. It must be stressed that the ion channels may be influenced not only by electrical factors (voltage-gated), like in the HH model, but may also be mechanically sensitive \citep{Mueller2014}. The opening of an ion channel also means a deformation of the lipid bilayer and, since this is a time-dependent event, it produces a mechanical wave in the bilayer. This process has a localized character and the crucial problem is to understand the electromechanical transduction mechanisms – the transduction of electrical energy to mechanical (AP to waves in the biomembrane) and vice versa. However, it is not yet clear whether electrostriction or piezoelectricity is the main mechanism of the transduction \citep{Gross1983}. 
Electrostriction is related to electric-field-induced deformation in dielectrics, and the produced stress is proportional to the square of the imposed electric field \citep{Gross1983}; it seems to be the better candidate for coupling because, based on known data, piezoelectricity leads to unrealistic values of deformation. Here, as suggested by \citet{Gross1983}, studies on molecular mechanisms in biomembranes should clarify the effects. Later, based on experiments, it has been stated that the mechanical changes in the biomembrane are proportional to the voltage changes \citep{Gonzalez-Perez2016}. However, it has also been argued that the mechanical effects in the biomembrane accompanying the AP could be caused by water movement associated with sodium influx through ion channels \citep{Kim2007}. It is clear that the extracellular and intracellular molecular structure of a fibre has a great impact on the processes. The heat production accompanying the AP has been noted in several studies \citep{Howarth1968,Heimburg2007,Gonzalez-Perez2016} and this process is seemingly responsible for phase changes in the lipid bilayer. That is why the absence of thermodynamical considerations in the HH model has been criticized in comparison with the adiabatic theories \citep{Heimburg2005,Gonzalez-Perez2016}. An important question is related to the velocities of all the processes. Experimental studies have demonstrated that the estimates for the velocities of single waves can be significantly different. The velocities of nerve pulses depend on the diameter of the fibres (but also on temperature, ion concentration, myelin thickness, etc.) and for human nerves lie in the interval from ca 2~m/s for nerves with a small diameter up to 100~m/s in bigger nerves \citep{Debanne2011}. The classical results of the HH model give an estimate of about 20~m/s for the non-myelinated squid axon \citep{Hodgkin1964a}. In myelinated nerves, the velocities are larger \citep{Heimburg2005}. 
The estimates for localized mechanical waves in biomembranes indicate velocities of about 170~m/s \citep{Heimburg2005}. In excitable plant cells, the velocities of electrical and mechanical waves have been shown to be synchronized \citep{Fillafer2017}, but are considerably slower (less than 10~m/s). The pressure waves in the axoplasm can be analyzed like pulses in flexible tubes; the velocities are then dependent on the viscosity of the intracellular fluid and the diameter, but also on temperature \citep{Rvachev2010}. These theoretical estimates cover a wide interval, from small velocities of several m/s up to velocities around 90~m/s. For modelling the pressure waves it is possible to use either the Navier-Stokes model or the direct analogy to waves in tubes \citep{Engelbrecht2018}. One possible starting point is the two-dimensional (2D) model of pressure waves in an elastic cylindrical tube \citep{Lin1956} \begin{align} \label{LinMorganPressureWaves} &\bar{p}_{tt}=c_f^2(\bar{p}_{xx}+\bar{p}_{rr}+\bar{p}_r/r),\\ &\rho u_{tt}+\bar{p}_x=0,\\ &\rho w_{tt}+\bar{p}_r=0, \end{align} where $\bar{p}$ is the pressure, $x$ and $r$ are the longitudinal and radial coordinates, respectively; $u$, $w$ are the longitudinal and radial displacements, respectively, and $c_f$ is the velocity of sound in the fluid. Here and further, independent variables used as indices denote differentiation. As the diameter of the axon is very small, it is assumed that the pressure is constant across its cross-section ($\bar{p}_r=0$). In this case Eq.~\eqref{LinMorganPressureWaves} reduces to \begin{equation} \label{LinMorganWaveEQ} \bar{p}_{tt}=c_f^2\bar{p}_{xx}, \end{equation} which is the classical wave equation. Although Eq.~\eqref{LinMorganWaveEQ} does not include viscosity, it is straightforward to take it into account, if needed, by adding a viscous damping term. 
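A minimal finite-difference sketch of Eq.~\eqref{LinMorganWaveEQ} with the optional viscous damping term $-\mu \bar{p}_t$ added (our own illustration, not the authors' scheme; the grid size, pulse shape and parameter values below are arbitrary choices):

```python
import numpy as np

def step_wave(p, p_prev, cf, dx, dt, mu=0.0):
    """One leapfrog step of p_tt = cf^2 p_xx - mu p_t on a periodic grid.
    Central differences in space; p_t is discretized with a backward difference."""
    lap = (np.roll(p, -1) - 2.0 * p + np.roll(p, 1)) / dx**2
    return 2.0 * p - p_prev + dt**2 * (cf**2 * lap - mu * (p - p_prev) / dt)

# Propagate a localized Gaussian pressure pulse (it splits into two travelling pulses)
x = np.linspace(0.0, 1.0, 200, endpoint=False)
dx = x[1] - x[0]
dt = 0.4 * dx                       # CFL-stable for cf = 1
p_prev = np.exp(-((x - 0.5) / 0.05) ** 2)
p = p_prev.copy()                   # zero initial velocity
for _ in range(300):
    p, p_prev = step_wave(p, p_prev, cf=1.0, dx=dx, dt=dt, mu=0.5), p
```

With $\mu = 0$ the two counter-propagating pulses keep their amplitude; with $\mu > 0$ they decay, mimicking viscous losses in the axoplasmic fluid.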
The effects of nonlinearity are also not included because the amplitude of a pressure wave is small \citep{Terakawa1985}. To sum up, in experiments more emphasis is directed towards the voltage of APs (amplitudes) rather than the velocities. It is clear that in a coupled model all the velocities should be synchronized. One must also stress that the properties of nerve fibres are strongly influenced by anaesthetics \citep{Heimburg2007}, which could influence the amplitudes and velocities of waves by changing the properties of lipid membranes, i.e., the ion transport. \paragraph{Modelling of coupling} Without any doubt, there is considerable interest in building up theories where at least the basic effects are described within one model \emph{resp} theory. In many cases the models are formulated at the physical level, determining the possible linkage of effects together with possible coupling factors. However, the approach where such a description is supported by mathematical models, like the basic models describing single effects, seems to be more promising. \citet{Hady2015} have proposed a model for coupling the electrical and mechanical signals which is based on the assumption that the potential energy is mostly stored in the surrounding biomembrane and the kinetic energy in the axoplasmic fluid, resulting in mechanical surface waves in the biomembrane. The AP is described by using the HH model and the force exerted on the biomembrane is taken proportional to the square of the voltage. The process in the axoplasmic fluid is described by the linearized Navier-Stokes equation. The profile of the calculated transverse displacement is similar to that measured by \citet{Tasaki1988}. A coupled model of electrical and mechanical signals based on a spring-damper (dashpot) system has been elaborated by \citet{Jerusalem2014}. The ion currents are again calculated using the HH model and calibrated for a guinea pig spinal cord white matter. 
This model provides a framework for damage mechanisms in neurons. For this purpose a special simulation package, Neurite, has been developed \citep{Garcia-Grajales2015}. As the governing wave equations modelling all the single processes have actually been derived, the challenge is to formulate a model based on a system of coupled governing equations. First ideas on such a model are described in \citep{Engelbrecht2016}; here this model is elaborated in more detail. \section{A model involving an ensemble of waves} In general terms, besides electrophysiology and the mechanisms in biomembranes, ideas from continuum theory are used (see also \citet{Lomholt2006}). The general concepts well known in mathematical physics are followed – the initial conditions and forcing are formulated in the variables involved in the governing equations. The starting assumptions in the modelling are the following:\\ (i) electrical signals are the carriers of information \citep{Debanne2011} and trigger all the other processes;\\ (ii) the axoplasm in a fibre can be modelled as a viscous fluid where a pressure wave is generated due to the electrical signal \citep{Terakawa1985,Rvachev2010,Hady2015};\\ (iii) the biomembrane can be deformed \citep{Gross1983,Heimburg2005} in the longitudinal as well as in the transverse direction;\\ (iv) the channels in biomembranes can be opened and closed under the influence of electrical signals as well as of mechanical input \citep{Heimburg2010,Mueller2014}.\\ The aim is to use known mathematical models (governing equations) while adding the contact forces, which need additional assumptions. The first approach, described below, is to build up as simple (robust) a system as possible in order to test the assumptions, especially on the coupling forces. 
The process is initialized by an electrical input $f(x)$ which is \begin{equation} \label{initpulse} z|_{t=0}=f(x), \end{equation} \begin{wrapfigure}{l}{0.25\textwidth} \includegraphics[width=0.25\textwidth]{WaveSchemes.eps} \caption{Schemes of the ensemble of waves. Here AP - action potential, PW - pressure wave in axoplasm, LW - longitudinal wave in the biomembrane (BM), TW - transverse wave in the BM, scales are arbitrary. Reproduced from \citet{Engelbrecht2018}.} \label{Waveschemes} \end{wrapfigure} where $z$ is an electrical pulse above the threshold level. The action potential AP is governed by a FHN-type model \citep{Nagumo1962} in the form of two coupled equations: \begin{align} \label{FHN1} &z_t=z(z-(a_1+b_1))(1-z)-j+D \, z_{xx},\\ \label{FHN2} &j_t=\varepsilon(-j+(a_2+b_2)z), \end{align} where $z$ is a scaled voltage, $j$ is the recovery current, $D$ is a coefficient, $\varepsilon$ is the time-scale difference (see \citet{Nagumo1962}) and $0<a_1+b_1<1$, $a_2+b_2>0$; $x$ and $t$ are dimensionless space and time, respectively. Here $a_1$, $a_2$ control the `electrical' activation and the added coefficients $b_1$, $b_2$ control the `mechanical' activation.\\ The pressure wave is governed by Eq.~\eqref{LinMorganWaveEQ} with a driving force \begin{equation} \label{LinMorganPressureWaveForced} \bar{p}_{tt}=c_f^2\bar{p}_{xx} - \mu \bar{p}_{t} +F_1(z,j), \end{equation} where $F_1(z,j)$ is a force from the AP and $\mu \bar{p}_t$ is an added viscous damping term. At this moment we leave open whether the changes in the voltage or in the ion current play the role of the driving force. 
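A minimal explicit-Euler sketch of Eqs.~\eqref{FHN1}-\eqref{FHN2} on their own (our own illustration with $b_1=b_2=0$, i.e.\ no mechanical feedback; the grid, time step and the rectangular supra-threshold initial pulse are illustrative choices, the parameter values are those used later in the simulations):

```python
import numpy as np

# Illustrative FHN parameters (D = 1, eps = 0.01, a1 = a2 = 0.2; b1 = b2 = 0
# switches the mechanical activation off)
D, eps, a1, a2, b1, b2 = 1.0, 0.01, 0.2, 0.2, 0.0, 0.0
n, dx, dt = 512, 1.0, 0.05

z = np.zeros(n)                 # scaled voltage
j = np.zeros(n)                 # recovery (ion) current
z[250:262] = 1.2                # narrow pulse above the threshold a1

def fhn_rhs(z, j):
    """Right-hand sides of the FHN system on a periodic grid."""
    lap = (np.roll(z, -1) - 2.0 * z + np.roll(z, 1)) / dx**2
    dz = z * (z - (a1 + b1)) * (1.0 - z) - j + D * lap
    dj = eps * (-j + (a2 + b2) * z)
    return dz, dj

for _ in range(8000):           # explicit Euler stepping up to T = 400
    dz, dj = fhn_rhs(z, j)
    z, j = z + dt * dz, j + dt * dj
```

The supra-threshold input develops into two counter-propagating AP-like pulses, each generating a recovery current $j$ whose derivatives can then serve as the coupling forces discussed below.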
In the biomembrane, the governing equation for a longitudinal wave is derived from the balance of momentum with the special ‘displacement-type’ nonlinearity and dispersive terms \citep{Heimburg2005,Engelbrecht2015}: \begin{equation} \label{HJimproved} u_{tt}=\left[\left(c_0^2+pu+qu^2\right)u_{x}\right]_{x}-h_1 u_{xxxx} + h_2 u_{xxtt} +F_2(j,\bar{p}), \end{equation} where $u=\Delta\rho_0$ is the density change of a biomembrane, $c_0$ is the velocity in the unperturbed state, $p$, $q$ are coefficients of nonlinearity, $h_1$, $h_2$ are dispersion constants and $F_2(j,\bar{p})$ is the force exerted by the processes in the axoplasm. \\ Finally, the transverse wave $w$, following the ideas from the theory of rods \citep{Porubov2003}, is governed by \begin{equation} \label{transversedispl} w=-kr \cdot u_{x}, \end{equation} where $r$ is the radius of the fibre and $k$ is a constant. In the theory of rods $k$ is the Poisson ratio. Some remarks concerning equations \eqref{initpulse}-\eqref{transversedispl} are in order. The AP is described by a simple FHN model involving only one ion current \citep{Nagumo1962}. One could certainly use the HH model with two (sodium and potassium) ion currents \citep{Hodgkin1945,Hodgkin1964a} or even a generalized model with more ion currents \citep{Courtemanche1998}, but with the aim to test the coupling forces, we start with this simpler, robust model. The limitation is that the effects of anaesthesia are oversimplified. The pressure wave could certainly also be described by a 2D Navier-Stokes model. This change must be considered with special attention because, if the transverse velocity $v_y$ is taken into account, it could modify the forces exerted on the biomembrane. 
It must also be stressed that, in the improved model describing longitudinal waves in a biomembrane \citep{Engelbrecht2015}, the second dispersive term with the coefficient $h_2$ describes the microinertia of the lipid bilayer and corresponds to the principles of the continuum theory of microstructured solids \citep{Engelbrecht2005}. As a result, we expect an ensemble of waves to be generated, which is schematically shown in Fig.~\ref{Waveschemes}; a block diagram depicting the relationships between the individual components of the proposed model is shown in Fig.~\ref{blokkd}. \section{Results of numerical simulation} \begin{wrapfigure}{r}{0.52\textwidth} \vspace{-0.7cm} \centering \includegraphics[width=0.51\textwidth]{Blokkskeem} \caption{Block diagram of the combined model for the nerve pulse propagation.} \label{blokkd} \end{wrapfigure} The most important problem in building the joint model is related to the assumptions about the coupling forces. Although various mechanisms of transduction between the fields have been analyzed \citep{Gross1983,Gonzalez-Perez2016}, there is still no widely accepted understanding of the character of this process. Here we follow the assumption that the mechanical waves are generated by two changes in the electrical pulses: either in the AP or in the ion current \citep{Engelbrecht2018}, and by the changes in pressure in the axoplasm. In more general terms this means that the dynamical processes are generated not by the values of the fields but by the changes in the fields. Consequently, we assume that \begin{align} \label{AssumptionCouplingConstants} &F_1=\eta_1z_x+\eta_2 j_t,\\ &F_2=\gamma_1\bar{p}_t+\gamma_2j_t, \end{align} where $\eta_1$, $\eta_2$, $\gamma_1$, $\gamma_2$ are suitable coefficients. Further on, the normalized values of the variables are used in the calculations ($Z$ for the AP amplitude, $J$ for the ion current, $\bar{P}$ for the pressure, $U$ for the LW amplitude) together with the dimensionless space and time coordinates $X$, $T$. 
The normalization of the independent variables is based on Eq.~\eqref{HJimproved}, where the velocity $c_0$ and the characteristic length of an axon are used \citep{Engelbrecht2015}. For example, the ion current calculated from the FHN model and the gradients $Z_X$, $J_X$ as well as the time derivatives $Z_T$, $J_T$ are shown in Fig.~\ref{ioncurrent}. In principle, the bi-polarity is evident for all the derivatives. Note that the exact nature of the coupling force terms is left open at this stage. One possible physical interpretation of the proposed terms is that the time derivatives could be interpreted as forces acting across the lipid bilayer at a fixed spatial point on the axon, while the spatial gradients could be interpreted as forces acting along the axon axis. For example, the force $F_1$ in the pressure equation could contain two terms -- $Z_X$, $J_T$. First, an action potential gradient $Z_X$ could be related to the fact that there are charged particles (ions) present inside the axon that might move along the axis of the axon in the presence of a potential gradient. Second, an ion current time derivative $J_T$ could be related to changes in pressure when the ions flow in and out of the axon through the lipid membrane during nerve pulse propagation at a fixed point (ion channel) on the axon. As noted earlier, we use a simplified model for the action potential where all the ion currents present are wrapped into one abstracted ion current, but if one were to use one of the more complex models (the HH model, for example), similar logic could be extended to any number of individual ion flows and could include parameters specific to the ion channel behaviour of these individual flows. \begin{wrapfigure}{l}{0.6\textwidth} \centering \includegraphics[width=0.51\textwidth]{Fig0_J_Jx.eps}\\ \vspace{4mm} \includegraphics[width=0.51\textwidth]{Fig0_Z_Zt_J_Jt.eps} \caption{The solutions and their derivatives of the FHN equation. 
Top panel -- action potential $Z$, recovery current $J$ and their gradients $Z_X, J_X$ in space at $T=1500$; bottom panel -- action potential $Z$, recovery current $J$ and their time derivatives $Z_T, J_T$ in time at spatial node $n=1024$.} \label{ioncurrent} \end{wrapfigure} The assumption of $j_x$ or $j_t$ (see Fig.~\ref{ioncurrent}) as a driving force has an important property – the force exerted on the biomembrane is bipolar and therefore energetically balanced. If a localized pulse-type force were used instead, the energy balance would be distorted by the continuous energy influx due to the moving signal $z(x,t)$. The next question is related to the composition of the ensemble and the relative significance of all three constituents in it: the electrical signal, the pressure wave in the axoplasm and the mechanical wave in the biomembrane. The measured pressure change in the axoplasm is extremely small \citep{Terakawa1985}. The transverse displacements of the fibre wall (biomembrane) are also small but can be measured \citep{Tasaki1988}. As the coupling mechanisms of electrical and mechanical signals are not fully understood, we shall use mathematical simulation with the goal of understanding how the coupling process can be modelled in terms of the governing equations of the single waves. Before simulating all three waves (AP, PW, LW), we proceed with simpler two-wave models: AP and LW coupled, and AP and PW coupled. All the numerical calculations are carried out by using the pseudospectral method (see Appendix). 
The system of model equations solved numerically in dimensionless form is \begin{equation} \begin{split} & Z_{T} = D Z_{XX} + Z \left( Z - \left[ a_1 + b_1 \right] - Z^2 + \left[ a_1 + b_1 \right] Z \right) - J, \\ & J_{T} = \varepsilon \left( \left[ a_2 + b_2 \right] Z - J \right),\\ & U_{TT} = c^2 U_{XX} + P U U_{XX} + Q U^2 U_{XX} + P U_{X}^{2} + 2 Q U U_{X}^{2} - H_1 U_{XXXX} + H_2 U_{XXTT} + \gamma_1 \bar{P}_T + \gamma_2 J_T,\\ & \bar{P}_{TT} = c_{f}^{2} \bar{P}_{XX} - \mu \bar{P}_T + \eta_1 Z_X + \eta_2 J_T , \end{split} \label{EQS1} \end{equation} where capital letters denoting the dependent variables are used to emphasize that we are dealing with the dimensionless case. As noted earlier, $Z$ is the action potential, $J$ is the recovery current, $a_i, b_i$ are the `electrical' and `mechanical' activation coefficients, $D, \varepsilon$ are coefficients, $U$ is the longitudinal density change in the lipid layer, $c$ is the velocity of the unperturbed state in the lipid bilayer, $P, Q$ are the nonlinear coefficients, $H_1, H_2$ are the dispersion coefficients, $\gamma_1, \gamma_2$ are the coupling coefficients for the mechanical wave, $\bar{P}$ is the pressure, $c_{f}$ is the characteristic velocity in the fluid, $\eta_1, \eta_2$ are the coupling coefficients for the pressure wave and $\mu$ is the (viscous) damping coefficient. The `mechanical' activation coefficients can be connected to the improved Heimburg-Jackson model as $b_1 = - \beta_1 U$ and $b_2 = - \beta_2 U$, where $\beta_1, \beta_2$ are the mechanical coupling coefficients, which could be different for the action potential and recovery current parts of the FHN equations. Note that in system~\eqref{EQS1} either $J_T$ or $J_X$ can be used as coupling forces; the coupling terms used for a given calculation are noted in the figure legends after the variable plotted. Localized initial conditions and periodic boundary conditions are used (see the Appendix for details on the numerical scheme). 
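The Appendix states that a pseudospectral method with periodic boundary conditions is used. The core of such a scheme is the FFT-based evaluation of the spatial derivatives of the periodic fields in system~\eqref{EQS1}; a minimal sketch (our own, with an illustrative test function) might look like:

```python
import numpy as np

def spectral_dx(f, L, order=1):
    """Pseudospectral x-derivative (of given order) of a periodic field f
    sampled on n uniform points of a domain of length L."""
    n = f.size
    k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
    return np.real(np.fft.ifft((1j * k) ** order * np.fft.fft(f)))

# Verify spectral accuracy: d/dx sin(x) = cos(x), d^2/dx^2 sin(x) = -sin(x)
L, n = 2.0 * np.pi, 128
x = np.arange(n) * L / n
err1 = np.max(np.abs(spectral_dx(np.sin(x), L) - np.cos(x)))
err2 = np.max(np.abs(spectral_dx(np.sin(x), L, order=2) + np.sin(x)))
```

Terms like $U_{XX}$, $U_{XXXX}$ or the coupling gradient $Z_X$ can then be evaluated with spectral accuracy, leaving only the time integration to a standard ODE solver.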
The coupling coefficients vary and the rest of the parameters are the same for all the depicted solutions. The common parameter values used are: $D=1, \varepsilon=0.01, a_1=0.2, a_2=0.2, c^{2}=0.16, P=-0.05, \break Q=0.02, H_1= 0.43, H_2=0.75, c_{f}^{2}=0.1, \mu=0.0025$. \begin{wrapfigure}{R}{0.52\textwidth} \centering \includegraphics[width=0.51\textwidth]{3Fig1_Z_U_Ux.eps} \caption{Action potential coupled with the mechanical wave. Solutions at $T=1500$. The coupling parameters are $\beta_1=\beta_2=0.05, \break \gamma_1=0, \gamma_2=0.002, \eta_1=\eta_2 =0$.} \label{Fig4} \end{wrapfigure} \paragraph{(i) Two-wave model I} In this case we neglect the pressure wave (PW) in the axoplasm and formulate a model including the electrical signal (AP) in the fibre and the accompanying longitudinal wave (LW) in the biomembrane. The coupled model then includes Eqs~\eqref{FHN1}, \eqref{FHN2} and Eq.~\eqref{HJimproved}. In the latter, the force $F_2(j, \bar{p})$ is taken as $F_2(j)$ only, i.e., depending only on the AP. A detailed analysis of this case is presented by \citet{Engelbrecht2018}. In terms of system~\eqref{EQS1} this means that $\gamma_1=\eta_1=\eta_2=0$. The main features are the following: \\ -- the input (the initial condition) for Eqs~\eqref{FHN1}, \eqref{FHN2} is taken as a narrow $\sech^2$-type pulse with an amplitude above the threshold; \\ -- the generated electrical pulse (AP) has a typical asymmetric form with an overshoot and generates an ion current; \\ -- the gradient (i.e., the change) of the ion current is taken as an input for the generation of the mechanical longitudinal wave (LW); \\ -- the derivative of the LW gives the profile of the TW \citep{Engelbrecht2015}. \\ Note that (i) the gradient of the ion current is energetically balanced; (ii) the velocities of the AP and LW are chosen to be synchronized. 
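The last step of this chain, recovering the transverse wave from the longitudinal one via Eq.~\eqref{transversedispl}, can be sketched as follows (our own illustration; the values of $k$, $r$ and the Gaussian LW profile are arbitrary):

```python
import numpy as np

k, r = 0.3, 1.0                        # illustrative constant and fibre radius
x = np.linspace(-20.0, 20.0, 400)
u = np.exp(-x**2 / 8.0)                # a localized LW (density change) profile
w = -k * r * np.gradient(u, x)         # TW profile from Eq. (transversedispl)

# The TW is bipolar: it changes sign where the LW peaks, and it integrates
# to ~0 over the window because u vanishes at both ends, illustrating the
# energetically balanced character noted above.
```

The same derivative relation is what gives the computed TW profiles their characteristic bipolar shape in Fig.~\ref{Fig4}.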
The simulation results in dimensionless form are shown in Fig.~\ref{Fig4}, which demonstrates the profiles of the AP, LW and TW together with the ion current. The latter has the characteristic shape measured by \citet{Iwasa1980,Tasaki1988} and \citet{Gonzalez-Perez2016}. \begin{wrapfigure}{R}{0.52\textwidth} \centering \includegraphics[width=0.51\textwidth]{3Fig2_Z_Pzx_Pjx.eps} \caption{Action potential coupled with the pressure wave (two different coupling forces considered). Solutions at $T=1500$. The coupling parameters are $\beta_1=\beta_2=0.0, \gamma_1=\gamma_2=0.0, \eta =0.002 (Z_X),\break \eta =0.02 (J_X)$. In the case of $\bar{P}[Z_X,J_X]$ the coupling parameters are $\eta_1=0.001 (Z_X), \eta_2=0.01 (J_X)$.} \label{FigAPandPW} \end{wrapfigure} \paragraph{(ii) Two-wave model II} In this case we formulate a model in terms of the electrical signal (AP) and the pressure wave (PW). The model involves Eqs~\eqref{FHN1}, \eqref{FHN2} and the governing equation \eqref{LinMorganPressureWaveForced} for the pressure. In terms of system~\eqref{EQS1} this means that $\beta_1=\beta_2=\gamma_1=\gamma_2=0$. The simulation results are shown in Fig.~\ref{FigAPandPW} and the pressure profiles for different combinations of the coupling parameters $\eta_1$ and $\eta_2$ are shown in Fig.~\ref{DifferentPWProfiles}. The pressure wave (PW) modelled by Eq.~\eqref{LinMorganPressureWaveForced} demonstrates retardation with respect to the AP and a slight overshoot \citep{Terakawa1985}. As the wave equation \eqref{LinMorganPressureWaveForced} has rather stable solutions, small changes in the coefficients $\eta_1$, $\eta_2$, which characterize the driving force $F_1$, do not lead to essential changes in the profile of the PW (see Fig.~\ref{DifferentPWProfiles}). Increasing $\eta_1$ leads to a steeper front and faster decay at the back of the profile, while the effect of $\eta_2$ is the opposite. 
\newpage \begin{wrapfigure}{r}{0.52\textwidth} \centering \includegraphics[width=0.51\textwidth]{Fig8_Z_Pzx_Pjx_var.eps} \caption{Pressure wave profiles with different coupling parameters at $T=1500$.} \label{DifferentPWProfiles} \end{wrapfigure} \paragraph{(iii) Three-wave model} In this case all three components of a signal – AP, PW, LW – are taken into account. An important question is to estimate the forms of physically plausible contact forces $F_1$ and $F_2$. From the analysis of case (i) with coupled AP and LW it is possible to conclude that the coupling force $F_2$ should be bipolar in character. The numerical simulation permits calculating the profiles with several forces depending on $Z_X$, $P_T$, $J_X$, $J_T$. The corresponding wave profiles at $T=1500$ are shown in Fig.~\ref{FigJt} for the case when time derivatives are used as coupling forces and in Fig.~\ref{ThreeCompFig} for the cases where mostly gradients are used as coupling terms. In Fig.~\ref{ThreeCompFig}: \\ (a) -- The pressure wave is generated by the action potential gradient and the mechanical wave is generated by the pressure time derivative. The coupling parameters are $\beta_1=\beta_2=0.05, \gamma_1=0.002, \gamma_2=0,\break \eta_1 =0.002, \eta_2=0$.\\ (b) -- The pressure wave is generated by the action potential gradient and the mechanical wave is generated by the pressure time derivative and the ion current gradient. The coupling parameters are\break $\beta_1=\beta_2=0.05, \gamma_1=\gamma_2=0.002, \eta_1 =0.002, \eta_2=0$.\\ (c) -- The pressure wave is generated by the ion current gradient and the mechanical wave is generated by the pressure time derivative. The coupling parameters are $\beta_1=\beta_2=0.05, \gamma_1=0.002, \gamma_2=0, \eta_1=0, \eta_2 =0.02$.\\ (d) -- The pressure wave is generated by the ion current gradient and the mechanical wave is generated by the pressure time derivative and ion current gradient. 
The coupling parameters are $\beta_1=\beta_2=0.05, \break\gamma_1=\gamma_2=0.002, \eta_1=0, \eta_2 =0.02$.\\ (e) -- The pressure wave is generated by the ion current gradient and action potential gradient while the mechanical wave is generated by the pressure time derivative and ion current gradient. The coupling parameters are $\beta_1=\beta_2=0.05, \gamma_1=\gamma_2=0.002, \eta_1 =0.001, \eta_2 =0.01$. From the viewpoint of the behaviour of the solution there is almost no qualitative difference between using a time derivative or a spatial gradient as the coupling term because, as demonstrated in Fig.~\ref{ioncurrent}, the shapes of these functions are essentially the same. In the numerical scheme used, the calculation of spatial derivatives is more convenient and numerically more accurate; for that reason, in the following analysis the focus is on the case where $J_X$ is used as one of the coupling terms. Numerically we find $J_X$ by making use of the properties of the fast Fourier transform, while for finding $J_T$ a simple backward difference scheme is used. However, if experiments demonstrate the need to use $J_T$ in the coupling forces, this can also be realized. \begin{figure}[h] \centering \includegraphics[width=0.95\textwidth]{Fig_Jt.eps} \caption{The solutions of the three wave model at $T=1500$ when using the ion current time derivative $J_T$ (left panel) and $J_T$ plus the pressure time derivative $P_T$ and the action potential gradient $Z_X$ as coupling forces (right panel). Parameters $\beta_1=\beta_2=0.05,\break \gamma_1=0, \gamma_2=0.01, \eta_1=0, \eta_2=0.01$ (left panel) and $\beta_1=\beta_2=0.05, \gamma_1=0.001, \gamma_2=0.01, \eta_1=0.001, \eta_2=0.01$ (right panel).} \label{FigJt} \end{figure} The profiles in Figs~\ref{FigJt} and~\ref{ThreeCompFig} demonstrate a typical AP with an overshoot, a pressure wave (PW) propagating behind the AP and a longitudinal wave (LW) in the biomembrane with a typical solitary wave profile. 
Feedback coupling from the LW to the AP is taken into account and its influence is most evident in Figs~\ref{ThreeCompFig}b, \ref{ThreeCompFig}e. These profiles correspond qualitatively to previous studies, from the AP \citep{Hodgkin1945,Nagumo1962} to the experimentally measured PW \citep{Terakawa1985} and LW \citep{Heimburg2005,Gonzalez-Perez2016}. The transverse wave (TW) is calculated from the LW by using expression \eqref{transversedispl} and has a bipolar shape \citep{Tasaki1988}. Note that all the profiles are dimensionless, with their maximal amplitude taken as a scaling measure. The basic assumption in all calculations is that the coupling is influenced by the changes of the field quantities, not by their values. This idea is supported by several studies \citep{Terakawa1985,Kim2007,Mueller2014,Gonzalez-Perez2016}. The initial stage of the AP forming from input \eqref{initpulse} is not analyzed because of possible fast changes, and the presented analysis takes a fully formed AP as the basic signal for the coupled waves. The profiles in Figs~\ref{FigJt} and \ref{ThreeCompFig} are qualitatively similar to the measured ones. The parameters for the simulations shown in Figs~\ref{FigJt} and \ref{ThreeCompFig} have been chosen to generate the mechanical effects slightly behind the AP. This brings up the question of the synchronization of velocities. In principle, the wave velocities in continua depend on elastic properties and density, but the group velocities (responsible for energy propagation) may differ considerably from the sound velocity due to coupling and dispersion effects. The velocity of the AP may also be affected by axonal irregularities and ion channels \citep{Debanne2011}. Note also that the velocity of the blood flow in a vessel depends on the stiffness of the vessel wall \citep{Barz2013}. 
So, we have to agree that ``the conduction velocity of mechanical impulses in nerve fibres is unknown'' \citep{Barz2013} and that further theoretical and experimental studies are needed in order to establish a joint understanding. As a proof of concept, this study has realized the idea of treating biomembrane-mediated signalling in a nerve fibre as a complex system, resulting in an ensemble of waves. It can be concluded from the profiles shown in Figs~\ref{FigJt} and~\ref{ThreeCompFig} that the influence of changes used for the coupling can be represented either by the gradients $Z_X$, $J_X$, by the time derivatives $P_T, J_T$, or, in the more general case, by some combination of the considered coupling terms. \section{Discussion} Clearly the analysis carried out above is only the first stage of modelling. The AP is calculated by a simple FHN model with only one ion current, but the HH-model includes both sodium and potassium currents. In this case the number of coefficients is certainly much higher \citep{Courtemanche1998}, but it offers many possibilities to model anaesthetics \citep{Heimburg2007}. In addition, the time shift between the sodium and potassium ion currents may give an additional possibility to specify the generation of mechanical effects. A more detailed handling of the ion currents would certainly enable accounting for the effects of individual ion flows when coupled to the equations governing the pressure and mechanical waves. One could, for example, consider the effects of ion sizes, charges and masses. Temperature effects and possible heat production \citep{Heimburg2010} are also not taken into account. It has also been discussed whether water movement across the biomembrane associated with sodium influx may affect the mechanical effects \citep{Kim2007}. For the pressure wave, as a first approximation, the wave equation with an added damping term is adequate for capturing the main effect of a disturbance propagating in a viscous environment. 
The obvious direction for improvement would be the celebrated Navier-Stokes equations, which allow accounting for compressibility, non-linearity, viscosity, etc. from first principles. \newpage \begin{wrapfigure}{R}{0.49\textwidth} \centering \includegraphics[width=0.48\textwidth]{3Fig3_Z_Pzx_Up_Ux.eps}\\ \vspace{2mm} \includegraphics[width=0.48\textwidth]{3Fig4_Z_Pzx_Upjx_Ux.eps}\\ \vspace{1.8mm} \includegraphics[width=0.48\textwidth]{3Fig5_Z_Pjx_Up_Ux.eps}\\ \vspace{1.8mm} \includegraphics[width=0.48\textwidth]{3Fig6_Z_Pjx_Upjx_Ux.eps}\\ \vspace{1.8mm} \includegraphics[width=0.48\textwidth]{3Fig7_Z_Pzxjx_Upjx_Ux.eps} \caption{The solutions of the three wave model when using the ion current gradient $J_X$ as one of the coupling forces. See text for parameter details. } \vspace{-9mm} \label{ThreeCompFig} \end{wrapfigure} \noindent Also, if an HH-like model for the AP is used, then the effects of individual ion currents on the pressure could be studied in greater detail. So there are several possibilities to improve the mathematical models. Even when using component models with a higher level of detail, the qualitative picture should remain similar. There might emerge some finer nuances that the simpler models cannot capture in full detail. However, the basic principle of being able to combine the components, through the coupling forces, into a whole which is richer than just the sum of the individual components will still hold. As noted in the Introduction, scientists are so good at breaking complex problems down into solvable simpler problems that they sometimes forget to put things back together (Toffler, 1984) -- this is one possibility of putting different aspects of such a complicated phenomenon as nerve pulse propagation back together. Previous studies \citep{Jerusalem2014,Hady2015} have pointed out several possibilities to build up models which could describe a coupled signal. 
To the best of the authors' knowledge, the model presented above is the first attempt to compose the governing differential equations of single waves into a system coupled by interaction forces, resulting in an ensemble of waves. It is, as stated above, a proof of concept in terms of mathematical physics. The model certainly needs experimental verification. Compared with the classical experiments \citep[etc.]{Terakawa1985,Tasaki1988}, there are now contemporary powerful experimental methods such as atomic force microscopy \citep{Kim2007}, optical detectors \citep{Perez-Camacho2017}, and other methods described in many studies \citep[see][etc.]{Clay2005,Scholkmann2014,Gonzalez-Perez2016}. The next decade will surely bring along exciting results in measuring the ensemble of waves. Finally, as stated by \citet{Kaufmann2018} in his analysis of paradoxes in physics: ``the origin of the nervous impulse unifies the realities'', referring to the studies of Hill-Hodgkin-Tasaki. Indeed, already \citet{Hill1936} showed the importance of ion currents, \citet{Hodgkin1964a} called for ``a tidy pattern'' and \citet{Tasaki1988} explained the non-electrical manifestations of the excitation process. It is a challenge to incorporate all observed phenomena into one theory \citep{Andersen2009}. The present paper proposes a robust mathematical explanation for the coupling of waves in nerve fibres. We followed the principle of Ockham's razor, which states simply that no more things should be used than are necessary. However, we admit that, given the complicated structure of cells, the coupling forces may have a much more complicated structure than proposed within this model. \section*{Acknowledgements} This research was supported by the European Union through the European Regional Development Fund (Estonian Programme TK 124) and by the Estonian Research Council (projects IUT 33-24, PUT 434). 
\begin{appendices} \setcounter{equation}{0} \numberwithin{equation}{section} \section{The numerical scheme} \subsection{The system of partial differential equations to be solved numerically} As noted above, the pseudospectral method (PSM) (see \cite{Fornberg1998,Salupere2009}) is used to solve the system of dimensionless model equations: \begin{equation} \begin{split} & Z_{T} = D Z_{XX} + Z \left( Z - \left[ a_1 + b_1 \right] - Z^2 + \left[ a_1 + b_1 \right] Z \right) - J, \\ & J_{T} = \varepsilon \left( \left[ a_2 + b_2 \right] Z - J \right),\\ & U_{TT} = c^2 U_{XX} + P U U_{XX} + Q U^2 U_{XX} + P U_{X}^{2} + 2 Q U U_{X}^{2} - H_1 U_{XXXX} + H_2 U_{XXTT} + \gamma_1 \bar{P}_T + \gamma_2 J_T,\\ & \bar{P}_{TT} = c_{f}^{2} \bar{P}_{XX} - \mu \bar{P}_T + \eta_1 Z_X + \eta_2 J_T. \end{split} \label{EQS} \end{equation} The notations used here are already given in the main text. The coupling coefficients are changed for the investigated cases, but the rest of the parameters are the same for all the solutions shown. 
The common parameter values are taken as $D=1, \varepsilon=0.01, a_1=0.2, a_2=0.2, c^{2}=0.16,$ $P=-0.05, Q=0.02,\break H_1= 0.43, H_2=0.75, c_{f}^{2}=0.1, \mu=0.0025.$ \subsection{Initial and boundary conditions} A $\sech^{2}$-type localized initial condition with initial amplitudes $Z_o$ and $J_o$ is applied to $Z$ and $J$ in system~\eqref{EQS}, and we make use of periodic boundary conditions for all the variables of the model equations \begin{equation} \label{algtingimus} \begin{split} & Z(X,0) = Z_{o} \sech^2 B_{o} X, \quad Z(X,T) = Z (X + 2 K m \pi,T), \quad m = 1,2,\ldots ,\\ & J(X,0) = J_{o} \sech^2 B_{o} X, \quad J(X,T) = J (X + 2 K m \pi,T), \quad m = 1,2,\ldots ,\\ & U(X,0) = 0, \quad U_T(X,0) = 0, \quad U(X,T) = U (X + 2 K m \pi,T), \quad m = 1,2,\ldots ,\\ & \bar{P}(X,0) = 0, \quad \bar{P}_T(X,0) = 0, \quad \bar{P}(X,T) = \bar{P} (X + 2 K m \pi,T), \quad m = 1,2,\ldots , \end{split} \end{equation} where $K=128$, meaning that the total length of the spatial period is $256\pi$. The amplitudes of the initial conditions are taken as $Z_o=2$, $J_o = 0.1$ and the width parameter as $B_o=1$ for both. In a nutshell -- such an initial condition is a narrow `spark' in the middle of the considered space domain with an amplitude above the threshold, resulting in the usual formation of an FHN action potential which then propagates in the positive and negative directions of the 1D space domain. In the paper only the solutions traveling to the left are shown, i.e., only half the spatial nodes from $0$ to $n/2$. For all other equations we take the initial excitation to be zero and make use of the same periodic boundary conditions. The solutions representing the mechanical and pressure waves are generated over time as a result of coupling with the action potential and ion current parts of the model system. 
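As a concrete illustration, the grid and the localized initial conditions \eqref{algtingimus} (with the pulse shifted to the middle of the spatial period, as explained below) can be set up as follows; the variable names are ours, and an overflow-safe form of $\sech$ is used instead of `1/cosh`.

```python
import numpy as np

n = 2**12                     # number of spatial grid points
K = 128                       # number of 2*pi sections; period 2*K*pi = 256*pi
X = np.arange(n) * 2.0 * np.pi * K / n   # uniform grid on [0, 2*K*pi)
Zo, Jo, Bo = 2.0, 0.1, 1.0    # amplitudes and width parameter
X0 = K * np.pi                # 'spark' centred in the middle of the domain

arg = Bo * (X - X0)
# overflow-safe sech: sech(x) = 2 e^{-|x|} / (1 + e^{-2|x|})
sech = 2.0 * np.exp(-np.abs(arg)) / (1.0 + np.exp(-2.0 * np.abs(arg)))
Z0 = Zo * sech**2             # initial condition for Z
J0 = Jo * sech**2             # initial condition for J
U0 = np.zeros(n)              # mechanical wave starts from rest
P0 = np.zeros(n)              # pressure wave starts from rest
```

The pulse peaks at exactly $Z_o$ in the middle of the domain and is numerically zero at the boundaries, so the periodic boundary conditions are satisfied to machine precision.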
It should be noted that in the present paper wave interactions are not investigated and the integration intervals in time are picked such that the modelled waves do not reach the boundaries, so the type of boundary conditions used is of low importance. Periodic boundary conditions are, however, needed for making use of the pseudospectral method. While not shown in the present paper, it should be added that the action potentials annihilate each other during interaction (as expected), but the mechanical and pressure waves can keep going through many interactions if one exploits the periodic boundary conditions to follow the interactions of the modelled wave ensembles. \subsection{The derivatives and integration} The discrete Fourier transform (DFT) based pseudospectral method (PSM) \citep[see][]{Fornberg1998,Salupere2009} is used for solving system~\eqref{EQS} numerically. The variable $Z$ can be represented in the Fourier space as \begin{equation} \label{dft} \widehat{Z}(k,T) = \mathrm{F} \left[ Z \right]= \sum^{n-1}_{j=0}{Z(j \Delta X, T) \exp{\left(-\frac{2 \pi \mathrm{i} j k}{n} \right)}}, \end{equation} where $n$ is the number of space-grid points ($n=2^{12}$ in the present paper), $\Delta X=2 \pi/n$ is the space step, $k=0,\pm1,\pm2,\ldots,\pm(n/2-1),-n/2$; $\mathrm{i}$ is the imaginary unit, $\mathrm{F}$ denotes the DFT and $\mathrm{F}^{-1}$ denotes the inverse DFT. The idea of the PSM is to approximate the space derivatives by making use of the DFT \begin{equation} \label{dft2} \frac{\partial^{m} Z}{\partial X^{m}} = \mathrm{F}^{-1}\left[(\mathrm{i} k)^{m} \mathrm{F}(Z) \right], \end{equation} therefore reducing the partial differential equation (PDE) to an ordinary differential equation (ODE), and then to use standard ODE solvers for integration with respect to time. 
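A minimal sketch of the derivative rule \eqref{dft2} in Python, using NumPy's FFT in place of the FFTW library used in the paper; the helper name and the $\sin$ test function are ours.

```python
import numpy as np

def spectral_derivative(Z, m, K=128):
    """m-th spatial derivative of a periodic array Z via Eq. (dft2):
    d^m Z / dX^m = F^{-1}[(i k)^m F(Z)] on a period of length 2*K*pi."""
    n = Z.size
    # harmonics 0, 1, ..., n/2 - 1, -n/2, ..., -1, scaled by 1/K
    k = np.fft.fftfreq(n, d=1.0 / n) / K
    return np.real(np.fft.ifft((1j * k)**m * np.fft.fft(Z)))

# sanity check against a function with a known derivative
K, n = 128, 2**12
X = np.arange(n) * 2.0 * np.pi * K / n
err = np.max(np.abs(spectral_derivative(np.sin(X), 1, K) - np.cos(X)))
```

For a band-limited function such as $\sin(X)$ the error is at machine-precision level, which is the same kind of check as the accuracy estimates reported at the end of this Appendix.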
The model~\eqref{EQS} contains a mixed derivative term, and the coupling force terms can be taken either as a space derivative, which can be found as in Eq.~\eqref{dft2}, or as a time derivative, which is not suitable for a direct PSM application and needs to be handled separately. For integration in time the model system~\eqref{EQS} is rewritten as a system of first-order ODEs after the modification to handle the mixed partial derivative term, and a standard numerical integrator is applied. In the present paper the ODEPACK FORTRAN ODE solver (see \cite{ODE}) is used through the F2PY (see \cite{F2PY}) generated Python interface. Handling of the data and initialization of the variables is done in Python by making use of the package SciPy (see \cite{SciPy}). \subsection{The handling of the mixed derivatives} Normally the PSM algorithm is intended for $ u_t = \Phi(u,u_x, u_{2x},\ldots,u_{mx})$ type equations. However, we have a mixed partial derivative term $H_2 U_{XXTT}$ in Eqs~\eqref{EQS} and as a result some modifications are needed (see \cite{lauriandrus2009,lauriandruspearu2007,Salupere2009}). Rewriting the equation for $U$ in system~\eqref{EQS} so that all partial derivatives with respect to time are on the left-hand side of the equation \begin{equation} \label{LHSofHE} U_{TT} - H_2 U_{XXTT}= c^{2} U_{XX} + P U U_{XX} + Q U^{2} U_{XX} + P \left( U_{X} \right)^2 + 2 Q U \left(U_X \right)^2 - H_1 U_{XXXX} + \gamma_1 \bar{P}_T + \gamma_2 J_T \end{equation} allows one to introduce a new variable $\Phi = U - H_2 U_{XX}.$ After that, making use of the properties of the DFT, one can express the variable $U$ and its spatial derivatives in terms of the new variable $\Phi$: \begin{equation}\label{UUXPhi} U=\mathrm{F}^{-1}\left[\frac{\mathrm{F}(\Phi)}{1+H_2 k^2}\right], \qquad \frac{\partial^m U}{\partial X^m} =\mathrm{F}^{-1}\left[\frac{(\mathrm{i} k)^m \mathrm{F}(\Phi)}{1+H_2 k^2}\right]. 
\end{equation} Finally, in system~\eqref{EQS} the equation for $U$ can be rewritten in terms of the variable $\Phi$ as \begin{equation} \label{HEtegelik} \Phi_{TT} = c^2 U_{XX} + P U U_{XX} + Q U^{2} U_{XX} + P \left( U_{X} \right)^2 + 2 Q U \left(U_X \right)^2 - H_1 U_{XXXX} + \gamma_1 \bar{P}_T + \gamma_2 J_T, \end{equation} where all partial derivatives of $U$ with respect to $X$ are calculated in terms of $\Phi$ by using expression \eqref{UUXPhi}; therefore one can apply the PSM for the numerical integration of Eq.~\eqref{HEtegelik}. The other equations in model \eqref{EQS} are already written in a form which can be solved by the standard PSM. \subsection{The time derivatives $\bar{P}_T$ and $J_T$} The time derivatives $\bar{P}_T$ and $J_T$ are found using different methods. For finding $\bar{P}_T$ it is enough to write the equation for $\bar{P}$ in system~\eqref{EQS} as two first-order ODEs (which is done anyway, as the integrator requires first-order ODEs), so that $\bar{P}_T$ can be extracted directly from \begin{equation} \begin{split} & \bar{P}_{T} = \bar{V} \\ & \bar{V}_{T} = c_{f}^{2} \bar{P}_{XX} + \eta_1 Z_X + \eta_2 J_T - \mu \bar{P}_T. \end{split} \label{EQSp} \end{equation} For finding $J_T$ a basic backward difference scheme is used: \begin{equation} J_T (n,T) = \frac{J (n,T) - J (n,(T-dT))}{T - (T - dT)} \approx \frac{\Delta J(n,T)}{d T}, \label{backdiff} \end{equation} where $J$ is the ion current from Eqs~\eqref{EQS}, $n$ is the spatial node number, $T$ is the dimensionless time and $dT$ is the integrator internal time step value (which is variable; in the present paper the integrator is allowed to take up to $10^6$ internal time steps between $\Delta T$ values to provide the desired numerical accuracy). 
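The change of variable $\Phi = U - H_2 U_{XX}$ and the recovery formula \eqref{UUXPhi} can be sketched as follows; the round-trip check on a single harmonic and the function name are ours (NumPy's FFT stands in for FFTW).

```python
import numpy as np

def U_from_Phi(Phi, m=0, H2=0.75, K=128):
    """U (m = 0) or its m-th X-derivative from Phi = U - H2 * U_XX,
    following Eq. (UUXPhi): divide the spectrum by (1 + H2 * k^2)."""
    n = Phi.size
    k = np.fft.fftfreq(n, d=1.0 / n) / K
    return np.real(np.fft.ifft((1j * k)**m * np.fft.fft(Phi) / (1.0 + H2 * k**2)))

# round-trip check: build Phi from a known U, then recover U
K, n, H2 = 128, 2**12, 0.75
X = np.arange(n) * 2.0 * np.pi * K / n
U = np.sin(X)
Phi = U - H2 * (-np.sin(X))        # analytically, U_XX = -sin(X)
err = np.max(np.abs(U_from_Phi(Phi, 0, H2, K) - U))
```

Since $1 + H_2 k^2$ is strictly positive for all harmonics, the division in Fourier space is always well defined and the recovery is exact up to round-off.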
\subsection{The technical details and numerical accuracy} As noted, the calculations are carried out with the Python package SciPy (see \cite{SciPy}), using the FFTW library (see \cite{FFTW3}) for the DFT and the F2PY (see \cite{F2PY}) generated Python interface to the ODEPACK FORTRAN code (see \cite{ODE}) for the ODE solver. The particular integrator used is `vode' with options set to nsteps$\,=10^6$, rtol$\,=10^{-11}$, atol$\,=10^{-12}$ and $\Delta T = 2$. It should be noted that typically hyperbolic functions like $\sech^2(X)$ in our initial conditions \eqref{algtingimus} are defined around zero. However, in the present paper the spatial period is taken from $0$ to $K \cdot 2\pi$, which means that the noted functions in \eqref{algtingimus} are actually shifted to the right (in the direction of the positive space axis) by $K \cdot \pi$, so the shape of $\sech^2(X)$, typically defined around zero, is in our case located in the middle of the spatial period. This is a matter of preference (in the present case the reason is to have a more convenient mapping between the values of $X$ and the indices) and the numerical results would be the same if one used a spatial period from $-K \cdot \pi$ to $K \cdot \pi$. 
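The integrator set-up can be reproduced with SciPy's `vode` interface. The toy exponential-decay ODE below merely stands in for the (much larger) semi-discretized model system, so everything except the solver options is an illustrative assumption.

```python
import numpy as np
from scipy.integrate import ode

def rhs(T, y):
    """Toy right-hand side y' = -y standing in for the model ODE system."""
    return [-y[0]]

# same solver options as in the text: 'vode', nsteps, rtol, atol
solver = ode(rhs).set_integrator('vode', nsteps=10**6, rtol=1e-11, atol=1e-12)
solver.set_initial_value([1.0], 0.0)
dT = 2.0                                  # output step Delta T, as in the paper
while solver.successful() and solver.t < 10.0:
    solver.integrate(solver.t + dT)       # internal steps are chosen adaptively
err = abs(solver.y[0] - np.exp(-solver.t))   # compare with the exact exp(-T)
```

Between successive output times the integrator is free to take up to `nsteps` internal steps of variable size, which is exactly the behaviour exploited by the backward-difference estimate of $J_T$ described above.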
The `discrete frequency function' $k$ in \eqref{dft2} is typically formulated on the interval from $-\pi$ to $\pi$; however, we use a spatial period different from $2\pi$ and also shift our space to be from $0$ to $K \cdot 2\pi$, meaning that \begin{equation} k = \left[\frac{0}{K}, \frac{1}{K}, \frac{2}{K}, \ldots, \frac{n/2 - 1}{K}, - \frac{n/2}{K}, - \frac{n/2 - 1}{K}, \ldots , - \frac{2}{K}, - \frac{1}{K} \right], \label{diskreetnesagedus} \end{equation} where $n$ is the number of spatial grid points uniformly distributed across the spatial period (the size of the Fourier spectrum is $n/2$, which is, in essence, the number of spectral harmonics used for approximating the periodic functions and their derivatives) and $K$ is the number of $2\pi$ sections in our space interval. There are a few different possibilities for handling the division by zero arising in Eq.~\eqref{backdiff} during the initialization of the ODE solver and when the numerical iteration during the integration reaches the desired accuracy, resulting in a zero-length time step. For the initialization of the numerical function an initial value of $1$ is used for $dT$. This is just a technical nuance, as during the initialization the time derivative is zero anyway since there is no change in the value of $J(n,0)$. For handling the division by zero during the integration, when the ODE solver reaches the desired accuracy, using the values of $J$ and $T$ from two steps back from the present time is computationally the most efficient. Another straightforward alternative is using a logical cycle inside the ODE solver for checking whether $dT$ would be zero, but this is computationally inefficient. In the present paper a value two steps back in time is used for calculating $J_T$ for all presented results involving $J_T$. 
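The discrete frequency vector can be written down explicitly and cross-checked against NumPy's built-in helper; following the definition of $k$ given with Eq.~\eqref{dft}, the harmonics run through $0, 1, \ldots, n/2-1, -n/2, \ldots, -1$, scaled by $1/K$.

```python
import numpy as np

n, K = 2**12, 128
# explicit FFT-ordered harmonics 0, 1, ..., n/2 - 1, -n/2, ..., -1, scaled by 1/K
k = np.concatenate((np.arange(0, n // 2), np.arange(-n // 2, 0))) / K
# the same vector via NumPy's helper
k_ref = np.fft.fftfreq(n, d=1.0 / n) / K
```

Using the library helper avoids off-by-one mistakes in the negative half of the spectrum, which is the part of the ordering that is easiest to get wrong by hand.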
The difference between the numerical solutions for $J_T$ obtained with the scheme using a value one step back plus a logical cycle checking for division by zero (falling back to the two-steps-back value only if division by zero occurs) and the scheme always using two steps back in time is only approximately $10^{-6}$, and is not worth the nearly twofold increase in the numerical integration time. The overall accuracy of the numerical solutions is approximately $10^{-7}$ for the fourth derivatives, approximately $10^{-9}$ for the second derivatives and approximately $10^{-11}$ for the time integrals. The accuracy of $J_T$ is approximately $10^{-6}$, which is adequate and very roughly of the same order of magnitude as the fourth spatial derivatives. Note that the accuracy estimates are not based on solving system~\eqref{EQS} with the presented parameters, but on using the same scheme with the same technical parameters for finding the derivatives of $\sin(x)$ and comparing these to the analytic solution. In addition, it should be noted that in the PSM spectral filtering is a common approach for increasing the stability of the scheme -- in the numerical simulations for the present paper such filtering (suppression of the higher harmonics in the Fourier spectrum) is not used, although the highest harmonic (which tends to collect the truncation errors from the finite numerical accuracy of floating point numbers in PSM schemes) is monitored as a `sanity check' of the scheme. \end{appendices}
\section{Introduction.} We give an overview of our methods, and recall the structure of the objects we will be dealing with. Let us describe how type theory leads to a parametrization of inertial classes of representations on the side of~$G = \GL_n(F)$. By the fundamental work~\cite{BKbook}, the supercuspidal irreducible representations of~$G$ are classified up to unramified twist by the maximal simple types they contain, which form a unique conjugacy class in~$G$. A maximal simple type is constructed from a character~$\theta$ of a compact open pro-$p$ subgroup of~$G$ denoted~$H^1_\theta$, called a \emph{maximal simple character} and satisfying a number of arithmetic properties that we will not recall here (see for instance the first sections of~\cite{BHeffective}). There is a two-step extension process to be applied to~$\theta$, to groups \begin{displaymath} H^1_\theta \subseteq J^1_\theta \subseteq J_\theta. \end{displaymath} In more detail, there exists a unique irreducible representation~$\eta_\theta$ of~$J^1_\theta$ containing~$\theta$, and it can be extended to~$J_\theta$. There is a distinguished set of \emph{$\beta$-extensions}\footnote{The terminology comes from the construction of simple characters from simple strata~$[\fA, \beta]$ defining the simple character~$\theta$. To emphasize that the notion does not depend on the stratum, one could follow~\cite{BHJL} in referring to these as \emph{wide extensions}. Yet another name is used for these representations in~\cite{BHeffective}.} of~$\eta_\theta$, which are all twists of each other by certain abelian characters of~$J_\theta$. The group~$J_\theta/J^1_\theta$ is non-canonically isomorphic to a general linear group over a finite field, determined uniquely by invariants attached to~$\theta$. One then fixes a $\beta$-extension~$\kappa$, inflates a supercuspidal irreducible representation~$\sigma$ of~$J_\theta/J^1_\theta$ to~$J_\theta$, and forms the tensor product $\lambda = \kappa \otimes \sigma$. 
One of the main results of~\cite{BKbook} says that the pair $(J_\theta, \lambda)$, called a \emph{maximal simple type} in~$G$, is a type for a supercuspidal Bernstein component of~$G$ (see for instance~\cite{BKtypes} for terminology regarding types and the Bernstein decomposition). To describe general Bernstein components, one uses $G$-covers and semisimple types. Maximal simple characters in different groups are organized in endo-equivalence classes (see~\cite{BHliftingI}), and a step in the construction of semisimple types is the construction of compatible $\beta$-extensions of endo-equivalent maximal simple characters. \paragraph{} The first problem we treat is whether there always exists a canonical choice of $\beta$-extension of a given maximal simple character. For instance, one could notice that precisely one of them has determinant character of order a power of~$p$. Working throughout with these \emph{$p$-primary} $\beta$-extensions, we obtain a simple description of the supercuspidal inertial classes of~$\GL_n(F)$: they are in bijection with $G$-conjugacy classes of pairs $(\theta, \sigma)$, where~$\theta$ is a maximal simple character in~$G$ and~$\sigma$ is a supercuspidal irreducible representation of~$J_\theta/J^1_\theta$. This parametrization is completely intrinsic to the group~$G$, and this very fact makes it hard to compare inertial classes across different groups, for example when dealing with noncuspidal inertial classes and their supercuspidal supports. In line with the compatibility of the local Langlands correspondence with parabolic induction, one would like the parameter of a parabolically induced inertial class to be ``the same" as that of its supercuspidal support, in some suitable sense. 
It is therefore reasonable, for instance, to attach to the inertial class of~$[\GL_{n/r}(F)^{\times r}, \pi_0^{\otimes r}]$ the $\GL_{n/r}(F)$-conjugacy class of pairs~$(\theta_0, \sigma_0)$ such that the maximal simple type~$\kappa_0 \otimes \sigma_0$ appears in~$\pi_0$, for $\kappa_0$ the $p$-primary $\beta$-extension of~$\theta_0$. This approach does indeed give a description of all Bernstein components, but leads to the question of whether the $p$-primary $\beta$-extensions of endo-equivalent maximal simple characters are compatible with each other (in the sense of section~\ref{parabolicinductionsection}). If the answer is negative, the parametrization can fail to be compatible with certain natural operations: if we classify the $\cbF_\ell$-representations via the mod~$\ell$ type theory developed in~\cite{Vignerasrepsbook} and~\cite{MStypes}, it can fail to commute with mod~$\ell$ reduction of integral representations. Other problems due to the non-canonicity of $\beta$-extensions have been observed and dealt with, for example, in~\cite{Blondelbeta} and~\cite{BHSsymplectic}. \paragraph{} Further ambiguities can arise from the level zero part. While fixing~$\kappa$ does give a unique decomposition $\lambda = \kappa \otimes \sigma$ of a maximal simple type, the representation~$\sigma$ is defined in terms of the group~$J_\theta/J^1_\theta$, which is isomorphic to a general linear group over a finite extension of~$\mbf$. This extension is not canonically defined: it depends on a choice of unramified parameter field for~$\theta$, and it has nontrivial automorphisms over~$\mbf$. Again, this can be observed when varying the group. If~$G'$ is an inner form of~$G$, its supercuspidal inertial classes afford a similar parametrization, and the notion of endo-equivalence of simple characters has been extended in~\cite{BSSV} to maximal simple characters in~$G'$. 
A special case of the invariance conjecture of~\cite{BSSV} says that if~$\fs$ is a supercuspidal Bernstein component of~$G$ containing the pair~$(\theta, \sigma)$ then the Jacquet--Langlands transfer~$\JL(\fs)$ contains a maximal simple character~$\theta'$ in the same endo-class as~$\theta$. Granted this, it is natural to ask whether one can also compare the ``level zero part'' $\sigma$ with that of~$\JL(\fs)$. Doing so requires comparing representations of $J_\theta/J^1_\theta$ and the corresponding group~$J_{\theta'}/J^1_{\theta'}$, but there seems to be no canonical isomorphism between these groups. Any comparison would thus seem to depend on an arbitrary choice. \paragraph{} We circumvent these issues by adding some structure to the parametrization and requiring that it be preserved by automorphisms of~$J_\theta/J^1_\theta$. Namely, we work with a fixed algebraic closure~$\overline{F}/F$ and we consider unramified extensions of~$F$ in~$\overline{F}$. To any endo-equivalence class $\Theta_F$ of simple characters (or \emph{endo-class}) there is attached a number of invariants, such as \begin{enumerate} \item the degree~$\deg(\Theta_F) = \delta(\Theta_F)$ of the parameter field of any simple character with endo-class~$\Theta_F$, together with its ramification index~$e(\Theta_F)$ and residue class degree~$f(\Theta_F)$. \item the unramified extension $E = F_{f(\Theta_F)}$ of~$F$ in~$\overline{F}$ of degree~$f(\Theta_F)$, with residue field~$\be$. \item the Galois group $\Gamma(\Theta_F) = \Gal(\be_{n/\delta(\Theta_F)}/ \be)$ acting on the group $X(\Theta_F)$ of characters of~$\be_{n/\delta(\Theta_F)}^\times$. \end{enumerate} If~$\theta$ is a maximal simple character in~$\GL_n(F)$ with endo-class~$\Theta_F$ then the group~$J_\theta/J^1_\theta$ is non-canonically isomorphic to $\GL_{n/\delta(\Theta_F)}(\be)$, for~$\be$ the residue field of~$E$.
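For orientation, consider the trivial endo-class, that of the trivial simple characters, which corresponds to the representations of level zero. In this case the invariants collapse: \begin{displaymath} \delta(\Theta_F) = e(\Theta_F) = f(\Theta_F) = 1, \qquad E = F, \qquad \be = \mbf, \end{displaymath} so that $\Gamma(\Theta_F) = \Gal(\mbf_n/\mbf)$ acts on the group $X(\Theta_F)$ of characters of~$\mbf_n^\times$, and~$J_\theta/J^1_\theta$ is isomorphic to~$\GL_n(\mbf)$. In this degenerate case the constructions below reduce to the classical level zero theory for~$\GL_n(F)$.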
We prove that the choice of a \emph{lift} of~$\Theta_F$ to an endo-class~$\Theta_E$ defined over~$E$, in the sense of~\cite{BHliftingI}, determines a unique conjugacy class~$\Psi(\Theta_E)$ of isomorphisms \begin{displaymath} J_\theta/J^1_\theta \to \GL_{n/\delta(\Theta_F)}(\be) \end{displaymath} under inner automorphisms of the target, whenever an unramified parameter field for~$\theta$ is fixed. We can now apply the results of~\cite{SZtypes} and~\cite{MStypes}, and attach to every $\beta$-extension~$\kappa$ of a maximal simple character with endo-class~$\Theta_F$ a functor \begin{displaymath} \bK_{\kappa}: \left ( \text{representations of~$\GL_n(F)$} \right ) \to \left ( \text{representations of~$\GL_{n/\delta(\Theta_F)}(\be)$} \right ). \end{displaymath} When applied to a supercuspidal representation~$\pi$, the functor~$\bK_\kappa$ recovers the representation~$\sigma$ such that $\lambda = \kappa \otimes \sigma$ is a maximal simple type for~$\pi$, identified with a representation of~$\GL_{n/\delta(\Theta_F)}(\be)$ via~$\Psi(\Theta_E)$. For a simple representation (that is, with inertial supercuspidal support $[\GL_{n/r}(F)^{\times r}, \pi_0^{\otimes r}]$ for some divisor~$r$ of~$n$), the supercuspidal support of~$\bK_{\kappa}(\pi)$ is a multiple of a representation that we call the \emph{level zero part} $\Lambda_{\kappa}(\pi)$ of~$\pi$. It depends on the choice of~$\kappa$ and of the lift~$\Theta_E \to \Theta_F$. The Green parametrization of supercuspidal representations of~$\GL_{n/\delta(\Theta_F)}(\be)$ (and its analogue for modular representations due to James) then identifies~$\Lambda_\kappa(\pi)$ with an element of $\Gamma(\Theta_F) \backslash X(\Theta_F)$. Notice that there is no regularity assumption here: indeed, $\be$-regular orbits correspond to supercuspidal representations.
\paragraph{} We then proceed to construct a level zero map~$\Lambda$ for the Langlands parameters of simple representations (or rather their restrictions to inertia), itself depending on the lift~$\Theta_E \to \Theta_F$ but not on~$\kappa$. For supercuspidal parameters we have the Ramification Theorem of Bushnell and Henniart, identifying the endo-class of a representation with the restriction to wild inertia of the Langlands parameter. Then we get an element of $\Gamma(\Theta_F)\backslash X(\Theta_F)$ from Clifford theory: the role played by the lift~$\Theta_E$ becomes transparent here. The general case is handled by taking direct sums, and we make the following definition. \begin{defnintro} Fix an endo-class~$\Theta_F$ and a lift~$\Theta_E \to \Theta_F$. Let~$\kappa$ be a $\beta$-extension of a maximal simple character in~$\GL_n(F)$ with endo-class~$\Theta_F$, and form the level zero maps $\Lambda_{\kappa}$ and~$\Lambda$ with respect to $\Theta_E$. We say that~$\kappa$ is a \emph{canonical $\beta$-extension} if \begin{displaymath} \Lambda_\kappa(\pi) = \Lambda(\rec(\pi)) \end{displaymath} for all simple representations~$\pi$ with endo-class~$\Theta_F$. \end{defnintro} Here, $\rec$ denotes the local Langlands correspondence. This definition is independent of~$\Theta_E$, as changing it twists both sides of the equation by the same element of~$\Gal(\be/\mbf)$. We prove the following theorem, which is the main result of this article. \begin{thmintro} Let~$\theta$ be a maximal simple character in~$\GL_n(F)$. Then \begin{enumerate} \item $\theta$ admits a unique canonical $\beta$-extension~$\kappacan$. It is the twist of the $p$-primary $\beta$-extension by~$\epsilon^1_\theta\epsilon_{\Gal}$, where~$\epsilon^1_\theta$ is the symplectic sign character of~$\theta$ and~$\epsilon_{\Gal}$ is a quadratic character which is nontrivial if and only if $p \not = 2$ and the degree of a tame parameter field of~$\Theta_F$ over~$F$ is even. 
\item if~$\theta'$ is an endo-equivalent maximal simple character in~$\GL_{an}(F)$ for some positive integer~$a$, then~$\kappacan$ is compatible with the canonical $\beta$-extension of~$\theta'$. \end{enumerate} \end{thmintro} To summarize with an example, we obtain a parametrization of the supercuspidal inertial classes of~$\GL_n(F)$ by triples $(\Theta_F, \Theta_E, [\chi])$, consisting of \begin{enumerate} \item an endo-class~$\Theta_F$ defined over~$F$, of degree~$\delta(\Theta_F)$ dividing~$n$. \item a lift $\Theta_E \to \Theta_F$ of~$\Theta_F$ to~$E = F_{f(\Theta_F)}$. \item a Galois orbit of $\be$-regular characters of~$\be_{n / \delta(\Theta_F)}^\times$ under the action of $\Gal(\be_{n / \delta(\Theta_F)} / \be)$. \end{enumerate} Here we let $\fs_G(\Theta_F, \Theta_E, [\chi])$ be the inertial class~$\fs$ with endo-class~$\Theta_F$ and such that the level zero part $\Lambda_{\kappacan}(\fs)$ equals $[\chi]$ when computed with respect to the lift $\Theta_E \to \Theta_F$. The restriction to inertia of irreducible $W_F$-representations can be described similarly, by letting $\fs_{\Gal}(\Theta_F, \Theta_E, [\chi])$ have restriction to wild inertia given by~$\Theta_F$ under the Ramification Theorem, and level zero part~$[\chi]$ when computed with respect to~$\Theta_E$. This parametrization has finite fibers, which can be described in terms of the action of~$\Gal(\be / \mbf)$ on~$[\chi]$ when varying~$\Theta_E$. The local Langlands correspondence takes the form \begin{displaymath} \rec \, \fs_G(\Theta_F, \Theta_E, [\chi]) = \fs_{\Gal}(\Theta_F, \Theta_E, [\chi]). \end{displaymath} This description of supercuspidal inertial classes in terms of a ``wild part''~$\Theta_F$ and a ``level zero part''~$[\chi]$, provided one fixes a lift~$\Theta_E$, extends by construction to simple inertial classes.
At the end of this paper, we sketch an extension of this connection between the level zero parts of types and Langlands parameters to arbitrary inertial classes of irreducible representations. There is an analogous parametrization over any algebraically closed field~$R$ of characteristic~$\ell$ different from~$p$, and it is compatible with reduction modulo~$\ell$ when dealing with integral $\cbQ_\ell$-representations. In~\cite{InertialJL}, we show that canonical $\beta$-extensions exist for inner forms of~$\GL_n(F)$, and that the Jacquet--Langlands correspondence admits an equally direct description in these terms. Notice, however, that the Ramification Theorem itself is a nonconstructive bijection, and does not describe the wild inertia representation corresponding to an endo-class (see the introduction to~\cite{BHeffective} and references therein for more on this). \paragraph{} Let us briefly describe how our work relates to the literature on the subject. The role of the lift~$\Theta_E \to \Theta_F$ is to provide a rigidification: there are many automorphisms of~$J_\theta/J^1_\theta$, and~$\Theta_E$ singles out an inner conjugacy class. The need for this seems to arise whenever one needs to compare inertial classes in different groups and wants the result to be independent of all choices in the construction, but it hasn't been treated systematically away from the supercuspidal case. The rigidification can be provided in various ways, all closely related: compare for instance the Compatibility Assumption in~\cite{SecherreStevensJL} section~9, and the role of the tame parameter field in~\cite{BHeffective}. Our method is directly inspired by the latter reference. 
The supercuspidal case of our main result is a direct consequence of the main theorems of~\cite{BHeffective}, and we deduce the general case via a technique introduced by S\'echerre and Stevens in~\cite{SecherreStevensJL}, using reduction modulo various primes to analyze the level zero parts while keeping the endo-class fixed. We deduce the compatibilities with reduction mod~$\ell$ which are required to apply this technique from work of Vign\'eras \cite{Vignerasl1}, \cite{Vignerasl2}. To our knowledge, the notion of a canonical $\beta$-extension has not appeared in the literature before. \paragraph{Acknowledgments.} This paper grew from a comment of Vincent S\'echerre and Shaun Stevens regarding~\cite{InertialJL}, namely that $p$-primary $\beta$-extensions in different groups might not be compatible. I am grateful to them for this remark. The idea that the local Langlands correspondence might be used to normalize $\beta$-extensions was suggested by Colin Bushnell. This work was supported by the Engineering and Physical Sciences Research Council [EP/L015234/1], The EPSRC Centre for Doctoral Training in Geometry and Number Theory (The London School of Geometry and Number Theory), University College London, and Imperial College London. \paragraph{Notation and conventions.} The notation for local fields will be as follows: $F$ denotes a local field, $\mbf$ the residue field of~$F$, $F_n$ the unramified extension of~$F$ of degree~$n$ in some fixed algebraic closure~$\overline{F}$, and~$\mbf_n$ the residue field of~$F_n$. The group of Teichm\"uller roots of unity in~$F$ is denoted~$\mu_F$. We write~$W_F$ for the Weil group of~$F$, $I_F$ for the inertia group and $P_F$ for the wild inertia group. We normalize the Artin map $\Art_F: F^\times \to W_F^{\mathrm{ab}}$ so that uniformizers correspond to geometric Frobenius elements.
If~$\sigma$ is a representation of~$W_F$, its twist by the unramified character of~$W_F$ sending a geometric Frobenius element to~$q^{-n}$ for~$n \in \bZ$ is denoted~$\sigma(n)$. This character for~$n = 1$ corresponds to the normalized absolute value of~$F$ under~$\Art_F$, hence we denote it by $w \mapsto |w|$. For a prime number~$\ell$, we say that an element~$g$ of a finite group is $\ell$-primary if it has order a power of~$\ell$ and $\ell$-regular if it has order coprime to~$\ell$. We write $g^{(\ell)}$ for the $\ell$-regular part of~$g$ and~$g_{(\ell)}$ for the $\ell$-primary part of~$g$. Representations of a locally profinite group like $\GL_n(F)$ or~$W_F$ are assumed to be smooth (and finite-dimensional for~$W_F$), with coefficients over an algebraically closed field~$R$ of characteristic different from~$p$, which will be specialized to~$\bC$, $\cbQ_\ell$ and~$\cbF_\ell$ in the course of the paper. Parabolic induction from a standard Levi subgroup is always along the upper-triangular parabolic, and normalized, and we write $\pi_1 \times \cdots \times \pi_n$ for the parabolic induction of $\pi_1 \otimes \cdots \otimes \pi_n$. This requires us to fix a square root of~$q$ in~$R^\times$, but changing it does not modify the inertial class of the supercuspidal support of any given irreducible representation, hence the choice will not affect any of our results which are concerned with inertial classes. \section{Representations of~$\GL_n(F)$.} In this section we recall from~\cite{MSreps} and~\cite{MStypes} the definition of the $\bK$-functor attached to a $\beta$-extension with endo-class~$\Theta_F$ and we show how a lift $\Theta_E \to \Theta_F$ allows us to write down a level zero map~$\Lambda_\kappa$ sending a simple inertial class to an element of~$\Gamma(\Theta_F) \backslash X(\Theta_F)$. We introduce compatible $\beta$-extensions and describe the behaviour of~$\Lambda_\kappa$ under parabolic induction and under mod~$\ell$ reduction for $\cbQ_\ell$-coefficients. 
Unless otherwise specified, the representations in this section have coefficients in a fixed algebraically closed field~$R$ of characteristic different from~$p$. \subsection{$\bK$-functors and blocks.} Let~$\theta$ be a maximal simple character in~$G = \GL_n(F)$ and fix a $\beta$-extension~$\kappa$ of~$\theta$ to~$J_\theta$. This defines an exact functor~$\bK_\kappa^+$ from representations of~$G$ to representations of~$J_\theta/J^1_\theta$, by $\pi \mapsto \Hom_{J^1_\theta}(\kappa, \pi)$, with~$x \in J_\theta$ acting by $f \mapsto \pi(x) \circ f \circ \kappa(x)^{-1}$. By~\cite{SecherreStevensblocks}, see also~\cite{Vignerasrepsbook}, there is a block decomposition of the category of smooth $R$-representations of~$G$ with the blocks indexed by inertial equivalence classes of supercuspidal supports in~$G$, generalizing the Bernstein decomposition over the complex numbers. We are going to study this functor in the case of simple blocks of endo-class~$\Theta_F$, that is, those whose supercuspidal support is inertially equivalent to $(\GL_{n/r}(F)^{\times r}, \pi_0^{\otimes r})$ for some positive divisor~$r$ of~$n$ and some representation~$\pi_0$ of endo-class~$\Theta_F$. We will call the set of irreducible representations in a block an \emph{inertial class} of representations. We record the behaviour of the $\bK^+_\kappa$-functor on cuspidal representations. \begin{lemma}[See~\cite{MStypes} lemma~5.3]\label{Kcuspidal} If~$\pi$ is cuspidal (possibly not supercuspidal) then~$\bK^+_\kappa(\pi) = \sigma$ if~$\pi$ contains the maximal simple type~$\kappa \otimes \sigma$, and~$\bK^+_\kappa(\pi) = 0$ otherwise. \end{lemma} In the lemma, $\bK^+_\kappa(\pi)$ and~$\sigma$ are regarded as representations of~$J_\theta/J^1_\theta$. As explained in the introduction, we'd like to relate them to representations of a group intrinsic to the base field~$F$ rather than the group~$G$. To do so, we describe a way to single out a conjugacy class of isomorphisms~$J_\theta/J^1_\theta \to \GL_{n/\delta(\Theta_F)}(\be)$.
\subsection{Lifts and rigidifications.} Choose a simple stratum $[\fA, \beta]$ defining~$\theta$, and let $\fB = \fA \cap B$ for the commutant $B = Z_A(F[\beta])$. Since~$\theta$ is maximal, $\fB$ is a maximal order in the $F[\beta]$-algebra~$B$. Recall that all maximal simple characters in~$G$ which are endo-equivalent to~$\theta$ are actually conjugate to~$\theta$, because they have conjugates which are endo-equivalent maximal simple characters defined on the same order. These intertwine by theorem~8.7 in~\cite{BHliftingI}, and there is an ``intertwining implies conjugacy'' theorem for~$\GL_n(F)$, see theorem~3.5.11 in~\cite{BKbook}. We obtain a system of $\beta$-extensions of all conjugates of~$\theta$, stable under conjugation by~$G$, by noticing that the pullback $\ad(g)^*\kappa$ is a $\beta$-extension of~$\ad(g)^*\theta$ whenever~$g \in G$. For this to be well-defined, we need to check that if~$g \in G$ normalizes~$\theta$, then it normalizes~$\kappa$; but the normalizer~$\bJ_\theta$ of~$\theta$ in~$G$ normalizes $J_\theta$, which is the unique maximal compact subgroup of~$\bJ_\theta$, and~$\theta$ and~$\kappa$ have the same $G$-intertwining (this is a defining property of $\beta$-extensions), hence the claim follows. See~\cite{BHeffective}~2.1.1 for more details. Fix an $F[\beta]$-linear isomorphism \begin{displaymath} \Phi: B \to M_{n/\delta(\Theta_F)}(F[\beta]) \end{displaymath} such that the order~$\fB$ gets mapped to~$M_{n/\delta(\Theta_F)}(\fo_{F[\beta]})$. Such a~$\Phi$ exists since~$\theta$ is maximal. The inclusion $U(\fB) \subseteq J_\theta$ induces an isomorphism $U(\fB)/U^1(\fB) \to J_\theta/J^1_\theta$, and composing its inverse with the map induced by~$\Phi$ yields an isomorphism \begin{displaymath} \overline{\Phi}: J_\theta/J^1_\theta \to U(\fB)/U^1(\fB) \to \GL_{n/\delta(\Theta_F)}(\mbf[\beta]). \end{displaymath} Here, $\mbf[\beta]$ denotes the residue field of~$F[\beta]$.
Recall that a \emph{parameter field} for~$\theta$ is by definition an $F$-subalgebra of~$A$ of the form~$F[\beta]$ for a simple stratum $[\fA, \beta]$ for which~$\theta$ is a simple character. An \emph{unramified parameter field} is a subfield of~$A$ of the form $\Fbetaur$ for a parameter field~$F[\beta]$ (the maximal unramified extension of~$F$ in~$F[\beta]$). \begin{pp}[See~\cite{BHeffective}, 2.6 Proposition] \label{urparameterfields} Let~$\theta$ be a maximal simple character in~$A^\times$ and let $E_1, E_2$ be unramified parameter fields for~$\theta$. Then \begin{enumerate} \item there exists $j \in J^1_\theta$ conjugating~$E_1$ to~$E_2$. \item if~$j \in J^1_\theta$ normalizes an unramified parameter field for~$\theta$, then it centralizes it. \end{enumerate} It follows that there exists exactly one isomorphism $E_1 \to E_2$ which can be realized by conjugation by elements of~$J^1_\theta$. \end{pp} The degree of an unramified parameter field of~$\theta$ over~$F$ equals~$f(\Theta_F)$, which is independent of the choice of~$[\fA, \beta]$ defining~$\theta$, and even of the choice of a representative~$\theta$ of~$\Theta_F$. Let $E = F_{f(\Theta_F)}$, the unramified extension of~$F$ in~$\overline{F}$ of degree~$f(\Theta_F)$. By proposition~\ref{urparameterfields}, between any two unramified parameter fields~$E_i$ for~$\theta$ there is a distinguished isomorphism $\iota_{E_1, E_2}: E_1 \to E_2$. Choose, for every unramified parameter field~$E'$ for~$\theta$, an $F$-linear isomorphism $\iota_{E'}: E \to E'$, in such a way that $\iota_{E_1, E_2}\iota_{E_1} = \iota_{E_2}$ throughout. Denote this system of isomorphisms by~$\iota$. Returning to our fixed parameter field~$F[\beta]$ for~$\theta$ and $F[\beta]$-linear isomorphism $\Phi: B \to M_{n/\delta(\Theta_F)}(F[\beta])$, the choice of~$\iota$ yields a distinguished embedding $E \to F[\beta]$, hence a distinguished isomorphism $\be \to \mbf[\beta]$.
Putting this all together, we get an isomorphism \begin{displaymath} \Psi: J_\theta/J^1_\theta \to \GL_{n/\delta(\Theta_F)}(\be). \end{displaymath} \begin{pp}\label{compatibleconjugacy} The orbit~$\Psi(\iota)$ of~$\Psi$ under the conjugation action of~$\GL_{n/\delta(\Theta_F)}(\be)$ is independent of the choice of~$[\fA, \beta]$ and~$\Phi$, and only depends on~$\theta$ and~$\iota$. \end{pp} \begin{proof} Take two maximal simple strata defining~$\theta$. By~\cite{BHeffective}~2.1.1, they are both constructed on the same order~$\fA$, hence they have the form~$[\fA, \beta_i]$. Fix $F[\beta_i]$-linear isomorphisms $\Phi_i: B_i \to M_{n/\delta(\Theta_F)}(F[\beta_i])$. We obtain isomorphisms \begin{equation}\label{compatibleconjugacyequation} J_\theta/J^1_\theta \to U(\fB_i) / U^1(\fB_i) \to \GL_{n/\delta(\Theta_F)}(\mbf[\beta_i]) \to \GL_{n/\delta(\Theta_F)}(\be), \end{equation} and it suffices to prove that they differ by an inner automorphism of the target. Observe that~(\ref{compatibleconjugacyequation}) is induced on the groups of units by an analogous sequence \begin{displaymath} \fj(\beta_i, \fA) / \fj^1(\beta_i, \fA) \to \fB_i/\fP_1(\fB_i) \to M_{n/\delta(\Theta_F)}(\mbf[\beta_i]) \to M_{n/\delta(\Theta_F)}(\be) \end{displaymath} of $\be$-linear ring isomorphisms between $\be$-algebras. The equality $\fj^1(\beta_1, \fA) = \fj^1(\beta_2, \fA)$ holds since $\fj^1(\beta_i, \fA) = J^1(\beta_i, \fA) -1$. The orders $\fj(\beta_i, \fA)$ have the same group of units, since $\fj(\beta_i, \fA)^\times = J(\beta_i, \fA)$. The quotient $\fj(\beta_i, \fA)/\fj^1(\beta_i, \fA)$ is additively generated by its group of units (as for all matrix algebras over fields), hence $\fj(\beta_1, \fA) = \fj(\beta_2, \fA)$. The $\be$-algebra structure on $\fj(\beta_i, \fA)/\fj^1(\beta_i, \fA)$ comes from the embedding $\iota_{F[\beta_i]^{\mathrm{ur}}}$ for $i = 1, 2$, and by construction these embeddings are conjugate by the action of~$J^1_\theta$.
So these two $\be$-algebra structures coincide. The claim now follows from the Skolem--Noether theorem applied to the two $\be$-linear ring isomorphisms $\fj(\beta_i, \fA) / \fj^1(\beta_i, \fA) \to M_{n/\delta(\Theta_F)}(\be)$. \end{proof} We now show how a lift~$\Theta_E \to \Theta_F$, defined as in section~9 of~\cite{BHliftingI}, gives rise to such a compatible system of isomorphisms. Let $[\fA, \beta_i]$, for~$i = 1, 2$, be simple strata in~$A$ defining~$\theta$, and let $E_i$ denote the unramified parameter field $F[\beta_i]^{\mathrm{ur}}$ of~$\theta$. Since~$\beta_i$ commutes with~$E_i$, and $E_i[\beta_i] = F[\beta_i]$ is a field with $F[\beta_i]^\times \subseteq \fK(\fA)$, the results in section~7 of~\cite{BHliftingI} apply and we can take the interior lift of~$\theta$ to a maximal simple character~$\theta_{E_i}$ of the centralizer $Z_{G}(E_i)$, which is isomorphic to a general linear group over~$E_i$ (the isomorphism being induced from an~$E_i$-linear isomorphism, hence well-defined up to inner automorphisms). Fix two compatible isomorphisms $\iota_{E_i}: E \to E_i$. We get endo-classes \begin{displaymath} \Theta_E^i = \iota_{E_i}^*\cl(\theta_{E_i}). \end{displaymath} \begin{pp}\label{sameendo-classes} The endo-classes~$\Theta_E^1$ and~$\Theta_E^2$ are equal. \end{pp} \begin{proof} Because the~$\iota_{E_i}$ are compatible, we have $\iota_{E_2} = \iota_{E_1, E_2}\iota_{E_1}$, for $\iota_{E_1, E_2}: E_1 \to E_2$ the only isomorphism induced by conjugation by elements of~$J^1_\theta$ (see proposition~\ref{urparameterfields}). The relation \begin{displaymath} \Theta_E^2 = \iota_{E_2}^*\cl(\theta_{E_2}) = \iota_{E_1}^*\iota_{E_1, E_2}^*\cl(\theta_{E_2}) \end{displaymath} holds. Assume $\iota_{E_1, E_2}$ is induced by conjugation by~$j \in J^1_\theta$. Then \begin{displaymath} \iota_{E_1, E_2}^*\cl(\theta_{E_2}) = \cl(\ad(j)^*\theta_{E_2}).
\end{displaymath} However, $J^1_\theta$ normalizes~$\theta$, hence $\ad(j)^*\theta_{E_2}$ is the $E_1$-lift of~$\ad(j)^*\theta = \theta$. But then $\ad(j)^*\theta_{E_2} = \theta_{E_1}$, and the claim follows. \end{proof} \begin{pp}\label{urparameterlift} The group $\Gal(E/F)$ acts simply transitively on the set $\Res_{E/F}^{-1}(\Theta_F)$ of $E$-lifts of~$\Theta_F$. \end{pp} \begin{proof} By~\cite{BHliftingIV} 1.5.1, $\Gal(E/F)$ acts transitively on~$\Res_{E/F}^{-1}(\Theta_F)$, which is in bijection with the set of simple components of $E \otimes_F F[\beta]$ for any parameter field $F[\beta]$ for~$\theta$. But~$E$ is $F$-isomorphic to the maximal unramified extension of~$F$ in~$F[\beta]$, hence \begin{displaymath} E \otimes_F F[\beta] \cong \prod_{\sigma: E \to F[\beta]}F[\beta] \end{displaymath} and so the fiber $\Res_{E/F}^{-1}(\Theta_F)$ has as many elements as $\Gal(E/F)$. \end{proof} If we fix a lift~$\Theta_E$ of~$\Theta_F$ to~$E$, it follows that for \emph{any} unramified parameter field~$E^{\mathrm{par}}$ for~$\theta$ we can define $\iota_{E^{\mathrm{par}}}: E \to E^{\mathrm{par}}$ to be the only $F$-linear isomorphism such that $\iota_{E^{\mathrm{par}}}^*\cl(\theta_{E^{\mathrm{par}}}) = \Theta_E$; by proposition~\ref{urparameterlift}, $\iota_{E^{\mathrm{par}}}$ is well-defined, and by proposition~\ref{sameendo-classes} this defines a compatible system of isomorphisms. From this, we deduce that~$\Theta_E$ gives rise to a conjugacy class of isomorphisms \begin{displaymath} \Psi(\Theta_E): J_\theta/J^1_\theta \to \GL_{n/\delta(\Theta_F)}(\be) \end{displaymath} for any maximal realization~$\theta$ of~$\Theta_F$ in~$G$, by setting $\Psi(\Theta_E) = \Psi(\iota)$ for the~$\iota$ just constructed.
We now define~$\bK_{\kappa}$, the $\bK$-functor associated to a $\beta$-extension~$\kappa$ of~$\theta$, by the composition of $\bK^+_{\kappa}$ and pushforward by~$\Psi(\Theta_E)$: this is an inner conjugacy class of isomorphisms, hence its action on isomorphism classes of representations is well-defined. \begin{pp}\label{sameKfunctor} If $\theta_1 = \ad(g)^*\theta_2$ are conjugate maximal simple characters in~$G$, and~$\kappa_1 = \ad(g)^*\kappa_2$ are $\beta$-extensions of the~$\theta_i$, then $\bK_{\kappa_1} = \bK_{\kappa_2}$. Conversely, if $\kappa_1, \kappa_2$ are $\beta$-extensions of~$\theta$ with $\bK_{\kappa_1} = \bK_{\kappa_2}$, then $\kappa_1 = \kappa_2$. \end{pp} \begin{proof} Let~$E_1$ be an unramified parameter field for~$\theta_1$, and let~$\iota_{E_1} : E \to E_1$ be the only $F$-linear isomorphism with $\iota_{E_1}^*\cl(\theta_{1, E_1}) = \Theta_E$. Then $g E_1 g^{-1}$ is an unramified parameter field for~$\theta_2$, and we have an isomorphism $\ad(g) \circ \iota_{E_1} : E \to g E_1 g^{-1}$. Since $\theta_1 = \ad(g)^*\theta_2$, the relation $\theta_{1, E_1} = \ad(g)^*\theta_{2, g E_1 g^{-1}}$ holds on the interior lifts. Hence $(\ad(g) \circ \iota_{E_1})^*\cl \theta_{2, g E_1 g^{-1}} = \Theta_E$ and $\ad(g) \circ \iota_{E_1}$ is the isomorphism specified by~$\Theta_E$. So conjugation by~$g$ preserves the classes~$\Psi(\Theta_E)$ of isomorphisms $J_{\theta_i}/J^1_{\theta_i} \to \GL_{n/\delta(\Theta_F)}(\be)$, and since $\bK_{\kappa_1}^+ = \ad(g)^*\bK_{\kappa_2}^+$ the first claim follows. Now assume that the~$\kappa_i$ are $\beta$-extensions of~$\theta$ and~$\bK_{\kappa_1} = \bK_{\kappa_2}$. There exists an abelian character~$\chi$ of~$J_\theta/J^1_\theta$ such that~$\chi\kappa_1 \cong \kappa_2$. By lemma~\ref{Kcuspidal} we see that $\bK_{\kappa_1}^+$ and~$\chi \bK_{\kappa_2}^+$ coincide on cuspidal representations, so that~$\chi$ is a character of~$\be^\times$ fixing all irreducible cuspidal representations of~$\GL_{n/\delta(\Theta_F)}(\be)$. 
This implies that~$\chi = 1$: we prove this in proposition~\ref{fixingcharacter}, after recalling the classification of cuspidal representations of~$\GL_n(\be)$. \end{proof} By proposition~\ref{sameKfunctor}, we can speak of the $\bK$-functor associated to a lift~$\Theta_E \to \Theta_F$ and a $\GL_n(F)$-conjugacy class of $\beta$-extensions of the maximal simple characters of endo-class~$\Theta_F$. We will often omit mention of the conjugacy class and just refer to~$\bK_\kappa$. \subsection{Parabolic induction.}\label{parabolicinductionsection} Next we relate $\bK$-functors on different groups to deal with supercuspidal supports. Let~$(n_1, \ldots, n_r)$ be a sequence of positive integers summing to~$n$, defining a Levi subgroup~$M$ of~$\GL_n(F)$. Let~$\Theta_F$ be an endo-class whose degree divides all the~$n_i$. Then to every maximal $\beta$-extension~$\kappa$ in~$\GL_n(F)$ of endo-class~$\Theta_F$ we can associate a unique sequence $(\kappa_1, \ldots, \kappa_r)$ of \emph{compatible} maximal $\beta$-extensions~$\kappa_i$ in~$\GL_{n_i}(F)$, also of endo-class~$\Theta_F$. The compatibility we need can be expressed as follows. \begin{pp}\label{parabolicinduction} Fix a lift~$\Theta_E \to \Theta_F$, and let~$\pi_i$ be an irreducible representation of~$\GL_{n_i}(F)$. Then there is a canonical isomorphism \begin{displaymath} \bK_{\kappa}(\pi_1 \times \cdots \times \pi_r) \to \bK_{\kappa_1}(\pi_1) \times \cdots \times \bK_{\kappa_r}(\pi_r) \end{displaymath} where~$\prod_{i} \GL_{n_i}(F)$ is block-diagonally embedded, and the parabolic induction at the right-hand side is for the Levi subgroup $\prod_{i} \GL_{n_i/\delta(\Theta_F)}(\be)$ of~$\GL_{n/\delta(\Theta_F)}(\be)$.
\end{pp} To see that this compatibility determines the~$(\kappa_i)$ uniquely, let the~$\pi_i$ vary amongst irreducible cuspidal representations with endo-class~$\Theta_F$, and apply uniqueness of cuspidal support in finite general linear groups together with lemma~\ref{Kcuspidal} and proposition~\ref{sameKfunctor}. Because $\bK_\kappa \cong \chi\bK_{\chi\kappa}$ for any character~$\chi$ of~$\be^\times$, we also see that if~$\kappa$ and~$(\kappa_i)$ are compatible then so are~$\chi\kappa$ and~$(\chi\kappa_i)$. The existence of compatible $\beta$-extensions is established during the construction of covers of maximal simple types, and proposition~\ref{parabolicinduction} is a consequence of proposition~5.9 of~\cite{MStypes}, although this reference does not keep track of~$\Theta_E$. So we review the construction briefly. Fix a decomposition~$F^n = F^{n_1}\oplus\cdots\oplus F^{n_r}$. Assume that~$\theta$ is defined by a stratum $[\fA_{\max}, \beta]$ such that $F[\beta]$ preserves the decomposition and~$\fA_{\max}$ conforms to the decomposition (see 5.3 in~\cite{MStypes}). We obtain an embedded block-diagonal $ \prod_i A_i \cong \prod_i M_{n_i}(F)$ in~$M_n(F)$, for which $F[\beta]$ is diagonally embedded. The commutant $B=Z_A(F[\beta])$ contains the product~$B_1\times\cdots\times B_r$, and the order~$\fA_{\max}$ yields orders~$\fA_{\max, i}$ in each of the~$A_i$, by intersecting the lattice sequence of~$\fA_{\max}$ with~$F^{n_i}$. Transfer~$\theta$ to maximal simple characters~$\theta_i$ defined on the~$\fA_{\max, i}$. We are going to see that~$\kappa$ determines a $\beta$-extension of each of these~$\theta_i$. For this, we need to fix an $F[\beta]$-linear isomorphism $\Phi : B \to M_{m}(F[\beta])$ identifying $\fA \cap B$ with $M_{m}(\fo_{F[\beta]})$, and a simple stratum~$[\Lambda, \beta]$ in~$A$ satisfying conditions~(1) and~(2) in~\cite{MStypes} section~5.3.
Transfer~$\theta$ to a simple character~$\theta_{\Lambda}$ defined on~$\Lambda$ (it won't be maximal) and let~$\kappa_\Lambda$ be the transfer of~$\kappa$ to a $\beta$-extension of~$\theta_\Lambda$. Then let~$N$ be the upper-triangular unipotent group defined by the sequence $(n_1, \ldots, n_r)$, and take the invariants of~$\kappa_\Lambda$ under $J(\beta, \Lambda) \cap N$: this is a representation of~$J(\beta, \Lambda) \cap M$ which by~\cite{SecherreStevensVI} proposition~6.6 decomposes as a tensor product of $\beta$-extensions of the~$\theta_i$. If~$n_i = n_j$ then the same reference shows that $\kappa_i \cong \kappa_j$; when studying simple inertial classes, all these~$n_i$ coincide. We will consider the functors $\bK_{\kappa_i}$ that these maximal $\beta$-extensions~$\kappa_i$ define with respect to the same lift $\Theta_E \to \Theta_F$. By construction, the isomorphism~$\Phi$ restricts to $\prod_i B_i \to \prod_i M_{m_i}(F[\beta_i])$ for some~$m_i$. The lift~$\Theta_E$ defines an $F$-linear embedding $\iota: E \to F[\beta]$, characterized by the equality $\cl(\iota^*\theta_{\Fbetaur}) = \Theta_E$. Projecting to the $i$-th factor of~$\prod_i M_{n_i}(F)$, the field~$F[\beta]$ identifies with a parameter field for~$\theta_i$. \begin{lemma}\label{compatibleinteriorlifting} The equality $\cl(\iota^*(\theta_{i, \Fbetaur})) = \Theta_E$ also holds. \end{lemma} \begin{proof} Recall that~$\theta_i$ is the transfer of~$\theta$ to~$\fA_{\max, i}$; we know by assumption that the interior lift~$\theta_{\Fbetaur}$ has endo-class~$\Theta_E$ under~$\iota$, and the content of the lemma is that the same is true for these transfers. This follows from the compatibility between interior lifts and transfer maps, for which see for instance~\cite{BSSV} theorem~6.7.
\end{proof} \begin{proof}[Proof of proposition~\ref{parabolicinduction}] By proposition~5.9 in~\cite{MStypes}, we have an isomorphism \begin{displaymath} \bK^+_{\kappa}(\pi_1 \times \cdots \times \pi_r) \to \bK^+_{\kappa_1}(\pi_1) \times \cdots \times \bK^+_{\kappa_r}(\pi_r). \end{displaymath} Here, both sides are representations of~$J_\theta/J^1_\theta$ and the parabolic induction refers to $\prod_{i} J_{\theta_i} / J^1_{\theta_i}$, identified with a Levi subgroup of~$J_\theta/J^1_\theta$. By lemma~\ref{compatibleinteriorlifting}, any isomorphism $J_\theta/J^1_{\theta} \to \GL_{n/\delta(\Theta_F)}(\be)$ in the class~$\Psi(\Theta_E)$ restricts to an isomorphism $\prod_i J_{\theta_i} / J^1_{\theta_i} \to \prod_i \GL_{n_i/\delta(\Theta_F)}(\be)$ which is in the class~$\Psi(\Theta_E)$ on each factor. The claim follows. \end{proof} \begin{rk} So far, we have implicitly assumed that the endo-class~$\Theta_F$ is nontrivial. To treat the case of level zero representations, we fix a maximal order~$\fA$ in~$M_n(F)$ with Jacobson radical~$\fP$ and identify $\fA/\fP$ with $M_n(\mbf)$ by any $\mbf$-linear isomorphism. The unit group~$\fA^\times$ is a maximal compact subgroup, and we find a canonical inner conjugacy class of isomorphisms $\fA^\times/U^1(\fA) \to \GL_n(\mbf)$. The corresponding $\bK$-functor sends a smooth representation~$\pi$ of~$G$ to the representation of~$\fA^\times \cong \GL_n(\fo_F)$ on the $U^1(\fA)$-invariants of~$\pi$. The analogue of proposition~\ref{parabolicinduction} is true, and proved in~\cite{Vignerasrepsbook} lemme~III.3.14: given irreducible representations~$\pi_i$ of~$\GL_{n_i}(F)$, one has a canonical isomorphism $\bK(\pi_1 \times \cdots \times \pi_r) \to \bK(\pi_1) \times \cdots \times \bK(\pi_r)$. \end{rk} Whenever we have a sequence $(n_1, \ldots, n_r)$ and a sequence $(\kappa_i)$ of $\beta$-extensions with the same endo-class as~$\kappa$, it makes sense to ask whether they are compatible. As remarked above, a necessary condition is that $\kappa_i \cong \kappa_j$ whenever~$n_i = n_j$.
In the simple case, in which all the~$n_i$ are equal to~$n/r$ for some positive divisor~$r$ of~$n$, every maximal $\beta$-extension~$\kappa_{n/r}$ admits a unique compatible $\beta$-extension~$\kappa_n$ (compare~\cite{MStypes} remarque~5.17). On refining the decomposition~$(n_i)$, we have the following transitivity result. \begin{pp}\label{refinement} Let~$(n_i)_{i \in I}$ be a sequence summing to~$n$. Assume that for all~$i$ we have a sequence $(m_{ij})_{j \in J_i}$ of positive integers summing to~$n_i$, such that~$\delta(\Theta_F)$ divides every~$m_{ij}$. Let~$\kappa$ be a $\beta$-extension in~$\GL_n(F)$ of endo-class~$\Theta_F$. Let~$(\kappa_i)_{i \in I}$ be a sequence of $\beta$-extensions compatible with~$\kappa$, and for all~$i$ let $(\kappa_{ij})_{j \in J_i}$ be a sequence compatible with~$\kappa_i$. Then $(\kappa_{ij})$ is compatible with~$\kappa$ for the sequence~$(m_{ij})$. \end{pp} \begin{proof} There exists a unique sequence $(\varkappa_{ij})$ of $\beta$-extensions of the~$\GL_{m_{ij}}(F)$ compatible with~$\kappa$. Fix cuspidal representations~$\rho_{ij}$ of~$\GL_{m_{ij}}(F)$, and form $\bK$-functors with respect to a fixed lift~$\Theta_E$. Then by proposition~\ref{parabolicinduction} we have isomorphisms \begin{align*} \bK_{\kappa}(\times_{i, j}\rho_{ij}) & \to \times_{i, j}\bK_{\varkappa_{ij}}(\rho_{ij}),\\ \bK_{\kappa}(\times_{i, j}\rho_{ij}) & \to \times_{i \in I} \bK_{\kappa_i}(\times_{j \in J_i} \rho_{ij}) \to \times_{i, j}\bK_{\kappa_{ij}}(\rho_{ij}). \end{align*} By proposition~\ref{Kcuspidal}, the representations~$\bK_{\kappa_{ij}}(\rho_{ij})$ and~$\bK_{\varkappa_{ij}}(\rho_{ij})$ are irreducible and cuspidal. If the~$\varkappa_{ij}$ and the~$\kappa_{ij}$ were nontrivial twists of each other then we would derive a contradiction from proposition~\ref{fixingcharacter} and the uniqueness of cuspidal support in finite general linear groups, by letting the~$\rho_{ij}$ vary.
\end{proof} \begin{rk}\label{Kinertial} A compatibility of this kind is implicit in~\cite{MStypes} remarque~5.17, so it can probably be proved directly (without the use of $\bK$-functors). \end{rk} \subsection{Level zero maps.} In this section we define the level zero part of a simple irreducible representation~$\pi$ of~$G = \GL_n(F)$. It only depends on the inertial class of~$\pi$, which for a supercuspidal representation~$\pi$ consists of the unramified twists of~$\pi$ and is determined by the conjugacy class of maximal simple types it contains. Recall that the coefficient field~$R$ is an algebraically closed field of characteristic different from~$p$. Assume that~$\pi$ is supercuspidal, and let~$\Theta_F$ be its endo-class. Fix a $\beta$-extension~$\kappa$ of a maximal simple character~$\theta$ in~$G$ with endo-class~$\Theta_F$. Let $\Theta_E \to \Theta_F$ be a lift. Then, $\pi$ contains a unique maximal simple type of the form~$(J_\theta, \kappa \otimes \sigma)$. Using the conjugacy class~$\Psi(\Theta_E)$ as in the previous section, we identify~$\sigma$ with a representation~$\bK_{\kappa}(\pi)$ of~$\GL_{n/\delta(\Theta_F)}(\be)$, which by~\cite{MSreps} lemme~6.1 and lemme~6.8 is supercuspidal if~$\pi$ is. We will refer to this representation as the \emph{level zero part} of~$\pi$. It only depends on the inertial class of~$\pi$ and the lift~$\Theta_E$. For a simple representation~$\pi$ of~$G$ with supercuspidal support inertially equivalent to~$(\GL_{n/r}(F)^{\times r}, \pi_0^{\otimes r})$, such that~$\pi_0$ has endo-class~$\Theta_F$, we define the level zero part of~$\pi$ to be the level zero part of~$\pi_0$, computed with respect to the $\beta$-extension compatible with~$\kappa$ and the same lift~$\Theta_E$. By proposition~\ref{parabolicinduction}, the supercuspidal support of every Jordan--H\"older factor of~$\bK_\kappa(\pi)$ is a multiple of the level zero part of~$\pi$.
\paragraph{} To go further in the study of the level zero part, we recall some properties of the Green parametrization of cuspidal $\cbQ_\ell$-representations of general linear groups over finite fields, and of its analogue mod~$\ell$ studied by James. Over~$\cbQ_\ell$, we have a bijection \begin{displaymath} \sigma: (\text{orbits of~$\Gal(\be_n / \be)$ on $\be$-regular characters of~$\be_n^\times$}) \to (\text{supercuspidal irreducible representations of~$\GL_n(\be)$}) \end{displaymath} characterized by a character identity on maximal elliptic tori (see~\cite{Greencharacters} or section~2 of~\cite{BHLLIII}). We recall that if $\be_n^\times$ is embedded in~$\GL_n(\be)$ via the left multiplication action on~$\be_n$, and $x \in \be_n^\times$ is a primitive element for the extension $\be_n / \be$, then \begin{displaymath} \tr \sigma[\chi](x) = (-1)^{n-1}\sum_{i=0}^{n-1}\chi(F^i x) \end{displaymath} for~$F$ the Frobenius element of~$\Gal(\be_n/\be)$. A character $\chi : \be_n^\times \to \cbQ_\ell^\times$ decomposes uniquely as a product of an $\ell$-singular part~$\chi_{(\ell)}$ and an $\ell$-regular part~$\chi^{(\ell)}$, whose orbits under~$\Gal(\be_n / \be)$ only depend on the orbit of~$\chi$. We use the mod~$\ell$ reduction map to identify the prime-to-$\ell$ roots of unity in~$\cbQ_\ell$ and~$\cbF_\ell$. Then the reduction mod~$\ell$ of~$\chi$ identifies with~$\chi^{(\ell)}$. The reduction~$\br_\ell(\sigma[\chi])$ is irreducible and cuspidal, and only depends on~$[\chi^{(\ell)}]$. We denote it by~$\sigma_\ell[\chi^{(\ell)}]$. This defines a bijection, from the orbits of~$\Gal(\be_n/\be)$ on the characters of~$(\be_n^\times)^{(\ell)}$ which have an $\be$-regular extension to~$\be_n^\times$, to the set of cuspidal irreducible representations of~$\GL_n(\be)$ over~$\cbF_\ell$. The representation~$\sigma_\ell[\chi^{(\ell)}]$ is supercuspidal if and only if~$[\chi^{(\ell)}]$ is itself $\be$-regular. 
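For instance, taking $n = 2$ in the character identity above gives, for any $x \in \be_2^\times$ with $x \notin \be$ and $\be = \bF_q$, \begin{displaymath} \tr \sigma[\chi](x) = -\left(\chi(x) + \chi(x^q)\right), \end{displaymath} since the Frobenius acts as $x \mapsto x^q$; in this case $\be$-regularity of~$\chi$ amounts to $\chi \not= \chi^q$, and the orbit $[\chi] = \{\chi, \chi^q\}$ has two elements. (This specialization is only recorded here for orientation; it is a direct instance of the displayed formula.)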
Finally, if $\chi^{(\ell)}$ is norm-inflated from an $\be$-regular $\cbF_\ell$-character~$\chi^{(\ell), \reg}$ of~$\be_{n/a}^\times$ for some positive divisor~$a$ of~$n$, then the supercuspidal support of $\br_\ell(\sigma[\chi])$ is $\sigma_\ell[\chi^{(\ell), \reg}]^{\otimes a}$ (see~\cite{Vignerasrepsbook}~III.2.8 and \cite{MStypes} th\'eor\`eme~2.36). \begin{example} Since $\bF_9^\times$ has eight elements, the only character $\bF_9^\times \to \cbF_2^\times$ is the trivial character. Hence $\GL_2(\bF_3)$ has no supercuspidal representations over~$\cbF_2$, and precisely one cuspidal irreducible representation, with supercuspidal support~$1\otimes 1$. \end{example} \begin{pp}\label{fixingcharacter} Let~$R$ be an algebraically closed field of characteristic $\ell \not = p$, let $\be = \bF_q$, and let~$\psi$ be an $R$-character of~$\be^\times$ such that~$\psi\pi \cong \pi$ for all cuspidal $R$-representations~$\pi$ of~$\GL_n(\be)$. Then~$\psi = 1$. \end{pp} \begin{proof} Assume first that~$R = \cbQ_\ell$. By theorem~1.1 of~\cite{SZcharacters}, the equality $\tr \sigma[\chi_1] = \tr \sigma[\chi_2]$ holds on primitive elements of~$\be_n / \be$ if and only if $[\chi_1] = [\chi_2]$. This implies that $\psi \otimes \sigma[\chi] \cong \sigma[\psi \chi]$. Then the claim follows because, if $\psi \not = 1$, there always exists an $\be$-regular character~$\chi$ of~$\be_n^\times$ with no $\Gal(\be_n/\be)$-conjugate of the form $\psi\chi$. Indeed, if $\chi^{q^i} = \psi\chi$ then $\chi^{(q-1)(q^i-1)} = 1$, and taking~$\chi$ to be a generator of the character group yields a contradiction if $0 < i \leq n-1$. Then the claim holds for any~$R$ of characteristic zero. For $R = \cbF_\ell$, such a~$\psi$ lifts to a character $\psi: (\be^\times)^{(\ell)} \to \cbQ_\ell^\times$ such that for all $\be$-regular $\chi: \be_n^\times \to \cbQ_\ell^\times$ we have $[\chi^{(\ell)}] = [\chi^{(\ell)}\psi]$.
By duality, we get an element $x \in (\be^\times)^{(\ell)}$ such that whenever $z \in \be_n^\times$ is $\be$-regular we have $[z^{(\ell)}x] = [z^{(\ell)}]$. Assume that $\be_n^\times$ contains an $\ell$-singular element---that is, some $\zeta \in \mu_{\ell^\infty}(\be_n)$---which is $\be$-regular. Then $\zeta^{(\ell)} = 1$ implies $[x] = [1]$, hence $x = 1$. Otherwise, there exists a proper divisor $a | n$ such that $(\be_n^\times)_{(\ell)} = (\be_a^\times)_{(\ell)}$. Let~$\tau$ be a generator of $(\be_n^\times)^{(\ell)}$: then~$\tau$ is the $\ell$-regular part of some $\be$-regular element of~$\be_n^\times$, which can be chosen to be a generator of~$\be_n^\times$. There exists a proper divisor~$b$ of~$n$ such that $(\Frob_q)^b \tau = \xi \tau$ for some $\xi \in \be^\times$, because the set of $g \in \Gal(\be_n/\be)$ with $(g\tau)\tau^{-1} \in \be^\times$ is a subgroup and by assumption it is not trivial if $x \not = 1$. Let~$w$ be the order of $\xi \in \be^\times$, which is a divisor of~$|\be^\times| = q-1$. Then $(\Frob_q)^b(\tau^w) = (\xi \tau)^w = \tau^w$, and $(\Frob_q)^b$ fixes the subgroup $w\cdot (\be_n^{\times})^{(\ell)}$, which has index at most~$w^{(\ell)}$ in~$(\be_n^{\times})^{(\ell)}$. Since $\be_n^\times \cong (\be_n^\times)^{(\ell)}\times (\be_n^\times)_{(\ell)}$, we find a bound \begin{displaymath} q^n - 1 \leq w^{(\ell)}|(\be_b^\times)^{(\ell)}||(\be_a^\times)_{(\ell)}|. \end{displaymath} Since $w | q - 1$, we have that $w^{(\ell)}|(\be_a^\times)_{(\ell)}|$ divides $|\be_a^\times|$. Then the bound yields $q^n - 1 \leq (q^a-1)(q^b-1)$ for certain proper divisors $a, b |n$. This is impossible even if both~$a$ and~$b$ coincide with the largest proper divisor~$d$ of~$n$, because \begin{displaymath} \frac{q^n - 1}{q^d - 1} = 1 + q^d + \cdots + q^{d(\frac{n}{d} -1)} > q^d -1.
\end{displaymath} The claim then holds over~$\cbF_\ell$, and follows over arbitrary~$R$ of characteristic~$\ell$ because an irreducible $\cbF_\ell$-representation of~$\GL_n(\be)$ is absolutely irreducible, and the number of irreducible representations over~$\cbF_\ell$ and~$R$ is the same (it is the number of $\ell$-regular conjugacy classes in~$\GL_n(\be)$). \end{proof} Using the mod~$\ell$ reduction map $\cbQ_\ell^\times \to \cbF_\ell^\times$, and the fact that~$H^1_\theta$ is a pro-$p$ group for every maximal simple character~$\theta$ in~$\GL_n(F)$, we identify maximal simple characters over these fields. By~\cite{MStypes} proposition~2.37, the reduction of every lattice in a $\beta$-extension~$\kappa$ of a maximal simple $\cbQ_\ell$-character~$\theta$ is a $\beta$-extension of the reduction of~$\theta$. For an endo-class~$\Theta_F$ we write $X_R(\Theta_F)$ for the group of $R$-valued characters of~$\be_{n/\delta(\Theta_F)}^\times$ (no regularity assumption) and $\Gamma(\Theta_F)$ for the Galois group~$\Gal(\be_{n/\delta(\Theta_F)}/\be)$. Then over $R = \cbQ_\ell$ a choice of $\beta$-extension~$\kappa$ determines a \emph{level zero map} \begin{displaymath} \Lambda_{\kappa, \cbQ_\ell}: (\text{simple inertial classes with endo-class~$\Theta_F$}) \to \Gamma(\Theta_F) \backslash X_{\cbQ_\ell}(\Theta_F) \end{displaymath} and via the reduction of~$\kappa$ there is a similar map over~$\cbF_\ell$, \begin{displaymath} \Lambda_{\br_\ell(\kappa), \cbF_\ell}: (\text{simple inertial classes with endo-class~$\Theta_F$}) \to \Gamma(\Theta_F) \backslash X_{\cbF_\ell}(\Theta_F). \end{displaymath} In more detail, a simple inertial class has the form $[\GL_{n/m}(F), \pi_0^{\otimes m}]$, where~$\pi_0$ is supercuspidal. If~$\pi_0$ has endo-class~$\Theta_F$ then its level zero part is an irreducible supercuspidal representation of~$\GL_{n/m\delta(\Theta_F)}(\be)$, which corresponds to an orbit of $\be$-regular characters of~$\be_{n/m\delta(\Theta_F)}^\times$.
By definition, $\Lambda_\kappa[\GL_{n/m}(F), \pi_0^{\otimes m}]$ is the inflation to~$\be_{n/\delta(\Theta_F)}^\times$ of this orbit. By the structure of blocks for~$\GL_n(F)$, both maps are bijections. They satisfy the following compatibility with respect to reduction modulo~$\ell$. \begin{lemma}\label{modlreduction} Let~$\pi$ be an integral $\cbQ_\ell$-representation which is simple of endo-class~$\Theta_F$. Then all the factors of its reduction mod~$\ell$ have the same supercuspidal support, and are simple of endo-class~$\Theta_F$. If~$\tau$ is a factor of~$\br_\ell(\pi)$, then $\Lambda_{\br_\ell(\kappa), \cbF_\ell}(\tau) = \Lambda_{\kappa, \cbQ_\ell}(\pi)^{(\ell)}$. \end{lemma} \begin{proof} The representation~$\pi$ is a subquotient of a parabolic induction $\chi_1\pi^0 \times \cdots \times \chi_m\pi^0$ for an integral supercuspidal representation~$\pi^0$ of some~$\GL_{n/m}(F)$ and unramified characters~$\chi_i$ valued in~$\overline{\bZ}_\ell^\times$. Then the Jordan--H\"older factors of~$\br_\ell(\pi)$ form a subset of those of~$\overline{\chi}_1\br_\ell(\pi^0) \times \cdots \times \overline{\chi}_m\br_\ell(\pi^0)$. So they all have the same supercuspidal support, which consists of unramified twists of a single supercuspidal representation~$\pi^0_\ell$. Since~$\br_\ell(\pi^0)$ is irreducible and cuspidal and contains the reduction of a maximal simple character contained in~$\pi^0$, the endo-class of~$\pi^0_\ell$ is~$\Theta_F$ by proposition~\ref{parabolicinduction}. By~\cite{MStypes} lemme~5.11, the equality $\br_\ell[\bK^+_{\kappa}(\pi)] = [\bK^+_{\br_\ell(\kappa)}(\br_\ell(\pi))]$ holds. Over~$\cbQ_\ell$ and~$\cbF_\ell$ alike, we compute the level zero part by taking a parameter field~$F[\beta]$ of a maximal simple character~$\theta$ with endo-class~$\Theta_F$, and an embedding $E \to \Fbetaur$ so that the pullback of the interior lift of~$\theta$ has endo-class~$\Theta_E$, and so~$\br_\ell[\bK_\kappa(\pi)] = [\bK_{\br_\ell(\kappa)}(\br_\ell(\pi))]$.
By construction and proposition~\ref{parabolicinduction}, $\Lambda_{\br_\ell(\kappa), \cbF_\ell}(\tau)$ is the inflation of the character orbit corresponding to the supercuspidal support of~$\bK_{\br_{\ell}(\kappa)}(\tau)$. Every factor of~$\bK_\kappa(\pi)$ has supercuspidal support $\bK_{\kappa_0}(\pi^0)^{\otimes m}$, where~$\kappa_0$ is compatible with~$\kappa$. Hence the reduction of every factor of~$\bK_\kappa(\pi)$ has the same supercuspidal support as~$\br_\ell(\bK_{\kappa_0}(\pi^0)^{\otimes m})$. Again, $\Lambda_{\kappa, \cbQ_\ell}(\pi)$ is the inflation of the character orbit corresponding to~$\bK_{\kappa_0}(\pi^0)$, hence, by the discussion above, the supercuspidal support of $\br_\ell(\bK_{\kappa_0}(\pi^0))$ is a multiple of $\sigma_\ell(\Lambda_{\kappa, \cbQ_\ell}(\pi)^{(\ell), \reg})$. Since~$\bK_{\br_{\ell}(\kappa)}(\tau)$ appears in the reduction of~$\bK_\kappa(\pi)$, the claim follows. \end{proof} We record a lemma on the behaviour of the level zero map under change of lift. \begin{lemma} Write~$\Lambda_\kappa^{\Theta_E}$ for the level zero map of the $\beta$-extension~$\kappa$ formed with respect to the lift $\Theta_E \to \Theta_F$. Let~$\gamma \in \Gal(E/F)$. Then $\gamma^*\Lambda_\kappa^{\gamma^*\Theta_E} = \Lambda_\kappa^{\Theta_E}$. \end{lemma} \begin{proof} Given an isomorphism $J_\theta/J^1_\theta \to \GL_{n/\delta(\Theta_F)}(\be)$ in the conjugacy class~$\Psi(\Theta_E)$, induced by an embedding $E \to F[\beta]$ in a parameter field, one sees that the isomorphism $J_\theta/J^1_\theta \to \GL_{n/\delta(\Theta_F)}(\be) \xrightarrow[]{\gamma} \GL_{n/\delta(\Theta_F)}(\be)$ is induced by~$E \xrightarrow[]{\gamma} E \to F[\beta]$, hence is in the conjugacy class~$\Psi(\gamma^*\Theta_E)$. The claim follows. \end{proof} In the rest of the paper, we will usually have a fixed lift~$\Theta_E \to \Theta_F$, and won't mention~$\Theta_E$ in the notation for~$\Lambda_\kappa$. Similarly, we won't mention the coefficient field when it is clear from the context.
\section{Langlands parameters.} \subsection{Langlands correspondence and change of fields.} We briefly review the local Langlands correspondence for~$\GL_n(F)$ over the complex numbers. The Langlands parameters for~$\GL_n(F)$ can be identified with Frobenius-semisimple Weil--Deligne representations over the complex numbers, which are pairs $(V, N)$ consisting of a semisimple smooth representation of~$W_F$ and a nilpotent monodromy operator $N: V(1) \to V$. They can be written uniquely as direct sums \begin{displaymath} V = \bigoplus_i \sigma_i \otimes \Sp(n_i) \end{displaymath} for irreducible smooth representations~$\sigma_i$ of~$W_F$. The special representation~$\Sp(n)$ has a basis $\{ e_1, \ldots, e_n\}$ such that $we_i = |w|^{i-1}e_i$ for~$w \in W_F$, and the monodromy acts as $Ne_i = e_{i+1}$ for $i \in \{1, \ldots, n-1 \}$. The local Langlands correspondence is a bijection, denoted~$\rec$, of the isomorphism classes of irreducible complex representations of~$\GL_n(F)$ onto the complex Frobenius-semisimple Weil--Deligne representations of dimension~$n$. It restricts to a bijection~$\rec^0$ from supercuspidal irreducible representations to irreducible smooth $W_F$-representations (since the kernel of~$N$ is stable under~$W_F$, these have trivial monodromy). It satisfies a number of compatibilities we shall not use directly, for which see section~1 of~\cite{HenniartRome} for instance. We will need, however, the compatibility of~$\rec$ with the Bernstein--Zelevinsky classification. Recall that a \emph{segment} of complex supercuspidal representations of~$\GL_{n}(F)$, of length~$m$, consists of a sequence \begin{displaymath} (\rho, \rho(1), \ldots, \rho(m-1)) \end{displaymath} of twists of a supercuspidal~$\rho$ by powers of the unramified character $g \mapsto |\det(g)|$. The irreducible representations of~$\GL_n(F)$ are in bijection with the multisets of segments of total length~$n$.
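To illustrate these definitions with a standard special case (spelled out here only for convenience): for $n = 2$ the representation~$\Sp(2)$ has basis $\{e_1, e_2\}$ with \begin{displaymath} we_1 = e_1, \quad we_2 = |w|e_2 \quad (w \in W_F), \qquad Ne_1 = e_2, \quad Ne_2 = 0, \end{displaymath} and, up to the choice of normalization of~$\rec$, a parameter of the form $\rec(\chi) \otimes \Sp(2)$ with $\chi$ a character of~$\GL_1(F)$ is attached to a twist of the Steinberg representation of~$\GL_2(F)$, whose multiset of segments consists of the single segment $(\chi, \chi(1))$ of length two.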
\begin{lemma}\label{Langlandssupport} Let~$\pi$ be an irreducible complex representation of~$\GL_n(F)$ corresponding to the multiset~$\{ \Delta_1, \ldots, \Delta_r\}$, where $\Delta_i = (\rho_i, \ldots, \rho_i(n_i - 1))$. Then $\rec(\pi) = \oplus_i \rec(\rho_i) \otimes \Sp(n_i)$. It follows that if~$\pi$ has supercuspidal support \begin{displaymath} [\GL_{n_1}(F) \times \cdots \times \GL_{n_r}(F), \pi_1 \otimes \cdots \otimes \pi_r] \end{displaymath} then the $W_F$-representation underlying~$\rec(\pi)$ is $\rec(\pi_1) \oplus \cdots \oplus \rec(\pi_r)$. \end{lemma} \begin{proof} This holds by the construction of~$\rec$ from~$\rec^0$, for which see~\cite{RodierBourbaki} section~4.4. \end{proof} The inertial class of an irreducible representation of~$\GL_n(F)$ is described by the restriction to inertia of its Langlands parameter, in the following sense. For a Weil--Deligne representation~$\tau$, write~$\tau|I_F$ to denote the restriction to~$I_F$ of the underlying $W_F$-representation. \begin{lemma}\label{inertialcorrespondence} Let~$\pi_1, \pi_2$ be two irreducible representations of~$\GL_n(F)$. Then $\rec(\pi_1)|_{I_F} \cong \rec(\pi_2)|_{I_F}$ if and only if~$\pi_1$ and~$\pi_2$ are inertially equivalent. \end{lemma} \begin{proof} If the~$\pi_i$ are inertially equivalent then the~$W_F$-representations underlying~$\rec(\pi_i)$ have the same restriction to~$I_F$ by lemma~\ref{Langlandssupport} and the compatibility of~$\rec$ with unramified twists. Conversely, assume that $\rec(\pi_1)|_{I_F} \cong \rec(\pi_2)|_{I_F}$ and let~$\tau_1$ occur in the supercuspidal support of~$\pi_1$. By lemma~\ref{Langlandssupport}, $\rec(\tau_1)$ is a direct summand of $\rec(\pi_1)|_{W_F}$, hence $\rec(\tau_1)|_{I_F}$ shares a constituent with~$\rec(\tau_2)|_{I_F}$ for some~$\tau_2$ in the supercuspidal support of~$\pi_2$.
Since the restriction of an irreducible $W_F$-representation to~$I_F$ is multiplicity-free and consists of a single orbit of representations under the action of~$W_F/I_F$, this implies that $\rec(\tau_1)|_{I_F} \cong \rec(\tau_2)|_{I_F}$. Hence~$\rec(\tau_1)$ and~$\rec(\tau_2)$ are unramified twists of each other, so that an unramified twist of~$\tau_1$ occurs in the supercuspidal support of~$\pi_2$. The claim follows. \end{proof} Concerning the wild part of the Langlands parameter, we recall the following result of Bushnell and Henniart. Write~$P_F^\vee$ for the set of irreducible smooth representations of~$P_F$, and~$\mathcal{E}(F)$ for the set of endo-classes of simple characters over~$F$. There is a left action of~$W_F$ on~$P_F^\vee$ by conjugation. If~$\sigma$ is an irreducible representation of~$W_F$, then let~$r_F^1(\sigma) \in W_F \backslash P_F^\vee$ be the orbit contained in the restriction $\sigma |_{P_F}$ (which needn't be multiplicity-free). \begin{thm}[See~\cite{BHeffective}, Ramification Theorem]\label{ramification} The Langlands correspondence induces a bijection \begin{displaymath} \Phi_F : W_F \backslash P_F^\vee \to \mathcal{E}(F) \end{displaymath} such that $\Phi_F(r^1_F(\sigma))$ is the endo-class of~$\rec^{-1}(\sigma)$, for any irreducible~$\sigma$. If $\gamma : F \to F$ is a topological automorphism, extended in some way to an automorphism of~$W_F$, then $\Phi_F(\gamma^*[\alpha]) = \gamma^*\Phi_F[\alpha]$ for all $[\alpha] \in W_F \backslash P_F^\vee$. \end{thm} \begin{rk} These results have been extended in the supercuspidal case to study the behaviour of the whole ramification filtration under the local Langlands correspondence, see~\cite{BHRamification}. However, we will not make use of this. \end{rk} We will need to work over~$\cbQ_\ell$ for primes~$\ell \not = p$.
For this, we can fix a ring isomorphism~$\iota_\ell: \bC \to \cbQ_\ell$ and then transfer~$\rec$ to a bijection~$\rec_\ell$ between irreducible representations of~$\GL_n(F)$ over~$\cbQ_\ell$ and Frobenius-semisimple Weil--Deligne representations of dimension~$n$ over~$\cbQ_\ell$. However, some care has to be taken since the Langlands correspondence does not commute with all automorphisms of~$\bC$: see~\cite{HenniartRome}~7.4. One way of getting around this is to fix a square root of~$q$ in~$\bC$ and~$\cbQ_\ell$ and to work with isomorphisms~$\iota_\ell$ that preserve it. In any case, any two choices of~$\iota_\ell$ define bijections~$\rec_\ell$ which differ at most by a quadratic unramified twist at any given $\cbQ_\ell$-representation of~$\GL_n(F)$. Since we'll mostly be concerned with the restriction to inertia of Weil--Deligne representations, our results will be independent of the choice of~$\iota_\ell$. For instance, the Ramification Theorem holds over~$\cbQ_\ell$: any choice of~$\iota_\ell$ induces via~$\rec_\ell$ a bijection between endo-classes for~$F$ over~$\cbQ_\ell$ and orbits of~$W_F$ on irreducible smooth $\cbQ_\ell$-representations of~$P_F$, and this bijection is independent of the choice of~$\iota_\ell$. Since~$P_F$ is a pro-$p$ group, the orbits of its irreducible smooth $\cbF_\ell$-representations under~$W_F$ are identified with those over~$\cbQ_\ell$ by choosing a lattice (which will be unique up to homothety) and reducing mod~$\ell$ (the reduction will be irreducible). Similarly, the endo-classes over~$\cbQ_\ell$ are identified with those over~$\cbF_\ell$, and the Ramification Theorem also holds over~$\cbF_\ell$. \subsection{Level zero maps.}\label{levelzeroparameters} A \emph{supercuspidal inertial type} for~$W_F$ is the restriction to inertia of an irreducible representation~$\sigma$ of~$W_F$. 
In this section, we use Clifford theory for the group~$W_F$ over the algebraically closed field~$R$ (of characteristic different from~$p$), as in~\cite{Vignerasl1} and section~1 of~\cite{BHeffective}, to define the level zero part of a supercuspidal inertial type. Let~$\sigma$ be an irreducible $R$-representation of~$W_F$ of dimension~$n$. Since~$P_F$ is a normal subgroup of~$W_F$, the restriction $\sigma|_{P_F}$ is semisimple and consists of a single~$W_F$-orbit of irreducible representations (possibly with multiplicity). Let~$\alpha$ be a representative of this $W_F$-orbit. Let~$T = T_\alpha = Z_F(\alpha)$ be the tamely ramified extension of~$F$ corresponding to the stabilizer of~$\alpha$ in~$W_F$. By~\cite{BHeffective} 1.3, there exists a unique extension~$\rho_\alpha$ of~$\alpha$ to~$I_T$ with $p$-primary determinant, and~$\rho_\alpha$ extends to~$W_T$. We denote by~$\rho(\alpha)$ an arbitrary choice of extension of~$\rho_\alpha$ to~$W_T$. As in~\cite{Vignerasl1} section~2.6, there exists a unique tamely ramified representation $\sigmatr(\alpha)$ of~$W_T$, denoted~$\tau$ in~\cite{BHeffective}, such that $\sigma \cong \Ind_T^F(\rho(\alpha) \otimes \sigmatr(\alpha))$. Pass to the $\alpha$-isotypic component~$\sigma_\alpha$ of~$\sigma$. This carries the irreducible representation $\rho(\alpha) \otimes \sigmatr(\alpha)$ of~$W_T$: to see this, notice that $\rho(\alpha) \otimes \sigmatr(\alpha)$ is an irreducible $W_T$-subspace of~$\sigma_\alpha$, and that if~$g \in W_F$ does not lie in~$W_T$ then $g(\rho(\alpha) \otimes \sigmatr(\alpha)) \cap \sigma_\alpha = 0$, hence $R[W_F](\rho(\alpha) \otimes \sigmatr(\alpha))$ would otherwise be a proper $W_F$-subspace of~$\sigma$.
The representation~$\sigmatr(\alpha)$ can be written uniquely as an induced representation $\Ind_{T_d}^T(\chi_1(\alpha))$ for some unramified extension $T_d/T$ of degree~$d>0$ and some $\Gal(T_d/T)$-orbit of $T$-regular characters~$[\chi_1(\alpha)]$ of~$T_d^\times$, where~$\chi_1(\alpha)$ is trivial on~$U^1(T_d)$ and is inflated to a character of~$W_{T_d}$ via the Artin reciprocity map \begin{displaymath} \Art_{T_d}^{-1}: W_{T_d} \to T_d^\times. \end{displaymath} One then finds that $\sigma \cong\Ind_{T_d}^F(\rho_d(\alpha) \otimes \chi_1(\alpha))$ for the restriction~$\rho_d(\alpha)$ of~$\rho(\alpha)$ to~$W_{T_d}$. Write~$\chi(\alpha) = \chi_1(\alpha)|_{\mu_{T_d}}$. Then the restriction of~$\sigma_\alpha$ to~$I_{T_d} = I_T$ is a direct sum of the twists $\rho_\alpha \otimes \xi$ for $\xi \in [\chi(\alpha)]$, hence we can recover~$[\chi(\alpha)]$ from~$\sigma$ in a direct way: take the $\alpha$-isotypic component~$\sigma_\alpha$, restrict it to~$I_{T_\alpha}$, and decompose the restriction as a direct sum of twists of~$\rho_\alpha$, which is the only irreducible extension of~$\alpha$ to~$I_{T_\alpha}$ with $p$-primary determinant character. By the Ramification Theorem~\ref{ramification}, to give the $W_F$-orbit $[\alpha]_F$ is the same as to give an endo-class~$\Theta_F = \Phi_F[\alpha]_F$. By the Tame Parameter Theorem of~\cite{BHeffective}, the field~$T$ above is isomorphic over~$F$ to a tame parameter field for~$\Theta_F$, and the degree~$\delta(\Theta_F)$ equals~$[T:F]\dim \alpha$. Since~$\sigma$ decomposes as the direct sum of its $\alpha$-isotypic components for $\alpha \in [\alpha]_F$ and the orbit $[\alpha]_F$ has~$[T:F]$ elements, and~$\rho(\alpha)$ extends~$\alpha$, we have the equality $n = [T:F](\dim \alpha)(\dim \sigmatr(\alpha))$, and $d = \dim \sigmatr(\alpha) = n/\delta(\Theta_F)$. Let's introduce the maximal unramified extension~$E = \Tur$ of~$F$ in~$T$.
This is independent of the choice of~$\alpha$, and isomorphic to the unramified parameter field of~$\Theta_F$ in~$\overline{F}$. At this stage, we have attached to~$\sigma$ an endo-class~$\Theta_F$ of degree dividing~$n = \dim(\sigma)$, and whenever we choose a representative~$\alpha$ of the orbit~$[\alpha]_F$ attached to~$\Theta_F$, we obtain a $\Gal(\be_{n/\delta(\Theta_F)}/\be)$-orbit~$[\chi(\alpha)]$ of~$\be$-regular characters of~$\be_{n/\delta(\Theta_F)}^\times$, since $\mu_T = \mu_E$, $\mu_{T_d} = \mu_{E_d}$ and~$d = n/\delta(\Theta_F)$. We now consider how this character orbit $[\chi(\alpha)]$ changes when we change representative $\alpha \in [\alpha]_F$. By our explicit description of~$[\chi(\alpha)]$ in terms of the $\alpha$-isotypic component of~$\sigma$, it follows that $g^*[\chi(g\alpha)] = [\chi(\alpha)]$, hence $[\chi(\alpha)] = [\chi(g\alpha)]$ if and only if~$g \in W_E$, otherwise the orbit changes according to the image of~$g$ in $W_F/W_E \cong \Gal(E/F)$. Indeed if $g \in W_F$, then $Z_{W_F}(g\alpha) = g Z_{W_F}(\alpha) g^{-1}$, hence $T_{g\alpha} = gT_\alpha$, where $T_\alpha$ is regarded via the Galois correspondence as embedded in~$\overline{F}$ on which~$W_F$ acts, and since $T_\alpha/E$ is totally ramified, the Teichm\"uller roots of unity in~$T_\alpha$ coincide with those in~$E$. A choice of lift~$\Theta_E$ of~$\Theta_F$ to~$E$ defines an orbit of~$W_E$ on~$[\alpha]_F$, and we are now in a similar situation as for~$\GL_n(F)$, except that we have no ambiguity coming from the $\beta$-extension: a choice of lift $\Theta_E \to \Theta_F$ defines a level zero part map $\Lambda^+_{\Theta_E}: \sigma \mapsto [\chi(\alpha)]$, for any~$\alpha$ such that~$\Theta_E = \Phi_E[\alpha]_E$. This is a character orbit $[\chi(\alpha)] \in \Gamma(\Theta_F) \backslash X_R(\Theta_F)$. \begin{lemma}\label{changelift} Let~$\gamma \in \Gal(E/F)$. Then $\gamma^*\Lambda^+_{\gamma^*\Theta_E} = \Lambda^+_{\Theta_E}$. 
\end{lemma} \begin{proof} By theorem~\ref{ramification}, if~$\Theta_E = \Phi_E[\alpha]$ then $\gamma^*\Theta_E = \Phi_E(\ad(g)^*[\alpha]) = \Phi_E[g\alpha]$ for any lift~$g \in W_F$ of~$\gamma$. We have seen that $g^*[\chi(g\alpha)] = [\chi(\alpha)]$, which implies the lemma. \end{proof} We then see that the behaviour of level zero maps under change of lifts is the same for~$\GL_n(F)$ and~$W_F$. We will now fix a lift~$\Theta_E \to \Theta_F$ and write~$\Lambda^+$ for~$\Lambda^+_{\Theta_E}$. \begin{pp}\label{sameinertialparameter} Two irreducible $W_F$-representations $\sigma_1$ and~$\sigma_2$ containing~$\alpha \in P_F^\vee$ have the same image under~$\Lambda^+$ if and only if they have isomorphic restriction to~$I_F$. \end{pp} \begin{proof} For this, consider any irreducible $W_F$-representation $\sigma = \Ind_{T_\alpha}^F(\sigma_\alpha)$, where~$\sigma_\alpha$ is the~$\alpha$-isotypic component of~$\sigma$. Then the Mackey formula for induction and restriction implies that \begin{equation}\label{Mackeyformula} \Res^{W_F}_{I_F}\sigma = \Res^{W_F}_{I_F}\Ind_{W_{T_\alpha}}^{W_F}\sigma_\alpha = \bigoplus_{\gamma \in I_F \backslash W_F / W_{T_\alpha}} \Ind_{I_{T_{\gamma\cdot\alpha}}}^{I_F}\Res^{W_{T_{\gamma\cdot\alpha}}}_{I_{T_{\gamma\cdot\alpha}}}\sigma_{\gamma \cdot \alpha}. \end{equation} The representation~$\sigma_\alpha$ is isomorphic to $\rho(\alpha) \otimes \sigmatr(\alpha)$, where~$\rho(\alpha)$ is some extension of~$\alpha$ to~$W_{T_\alpha}$ whose restriction to~$I_{T_\alpha}$ has $p$-primary determinant. The representation~$\sigmatr(\alpha)$ is by definition the induction to~$T_\alpha$ of a character orbit~$[\chi_1(\alpha)]$, which depends on the choice of~$\rho(\alpha)$, but its restriction to~$I_{T_\alpha}$ does not. 
So~$\sigma|_{I_F}$ determines the level zero part of~$\sigma$, because \begin{equation}\label{induction} \sigma_\alpha | I_{T_\alpha} \cong \rho_\alpha \otimes \bigoplus_{\xi \in [\chi(\alpha)]}\xi \end{equation} where~$\rho_\alpha$ is the unique extension of~$\alpha$ to~$I_{T_\alpha}$ with $p$-primary determinant: this shows that the action of~$I_{T_\alpha}$ on the $\alpha$-isotypic component of~$\sigma|_{I_F}$, which coincides with the $\alpha$-isotypic component of~$\sigma$, determines the level zero part of~$\sigma$ with respect to~$\alpha$. Conversely, one can construct $\sigma|_{I_F}$ if one knows that~$\Lambda^+_{\Theta_E}(\sigma) = [\chi]$. Indeed, choose a representative~$\alpha$ of the $W_E$-orbit of representations of~$P_F$ attached to~$\Theta_E$. Then the isotypic component~$\sigma_\alpha$ is isomorphic to \begin{displaymath} \rho(\alpha) \otimes \Ind_{T_{d, \alpha}}^{T_{\alpha}}\chi_1(\alpha) \end{displaymath} for some~$\rho(\alpha)$ and some extension~$\chi_1(\alpha)$ of~$\chi$ to a character of~$T_d^\times$ trivial on~$U^1(T_d)$, which are not determined by the level zero part. However, by formula~(\ref{induction}) we see that the restriction~$\sigma_\alpha | I_{T_\alpha}$ is determined by~$\Lambda^+_{\Theta_E}(\sigma)$. Similarly one computes all the $\sigma_\gamma|_{I_{T_\gamma}}$ for~$\gamma \in [\alpha]_F$, applying lemma~\ref{changelift}, and then the restriction~$\sigma | I_{F}$ is determined by formula~(\ref{Mackeyformula}). \end{proof} At this stage, we have defined in terms of a lift~$\Theta_E \to \Theta_F$ a level zero map \begin{displaymath} \Lambda^+ : (\text{supercuspidal inertial types of dimension~$n$ over~$R$ containing~$\Theta_F$}) \to \Gamma(\Theta_F) \backslash X_R(\Theta_F) \end{displaymath} with image the $\be$-regular orbits. The left-hand side consists of course of those representations whose restriction to~$P_F$ corresponds to~$\Theta_F$.
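To fix ideas, here is a purely hypothetical numerical instance of the dimension count $n = [T:F](\dim \alpha)(\dim \sigmatr(\alpha))$ above (the figures are invented for illustration only): if $n = 12$, $[T:F] = 2$ and $\dim \alpha = 3$, then \begin{displaymath} \delta(\Theta_F) = [T:F]\dim\alpha = 6, \qquad d = \dim \sigmatr(\alpha) = n/\delta(\Theta_F) = 2, \end{displaymath} so that $\Lambda^+(\sigma)$ is a $\Gal(\be_2/\be)$-orbit of $\be$-regular characters of~$\be_2^\times$.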
The Langlands parameter of a simple $\GL_n(F)$-representation~$\pi$ over~$R$ of characteristic zero has restriction to inertia isomorphic to $\sigma^{\oplus m}$ for some~$m | n$ and some supercuspidal inertial type~$\sigma$ of dimension $n/m$: $\sigma$ is the restriction to inertia of the Langlands parameter of a representation in the supercuspidal support of~$\pi$. Motivated by this, we define a \emph{simple inertial type} over~$R$ (of arbitrary characteristic) to be a multiple of a supercuspidal inertial $R$-type, and we extend the map~$\Lambda^+$ by \begin{displaymath} \Lambda^+(\sigma^{\oplus m}) = N^*(\Lambda^+ \sigma) \end{displaymath} where~$N$ is the norm for the field extension $\be_{n/\delta(\Theta_F)} / \be_{n/m\delta(\Theta_F)}$. Notice that~$n/(m\delta(\Theta_F))$ is an integer: $\sigma$ is an $n/m$-dimensional irreducible representation of~$W_F$, hence (by the computation of the dimension of~$\sigmatr$ above) its level zero part is indeed a character orbit of~$\be_{n/m\delta(\Theta_F)}^\times$. It will be convenient (because of the statement of theorem~\ref{typestheorem} to follow) to twist this level zero map by a certain automorphism of~$\be_{n/\delta(\Theta_F)}^\times$. Let~$p^r$ be the degree of any parameter field~$P$ of~$\Theta_F$ over the maximal tamely ramified extension of~$F$ it contains (the degree of the ``wildly ramified part'' of the endo-class~$\Theta_F$). Then we define \begin{displaymath} \Lambda(\tau) = \Lambda^+(\tau)^{p^{-r}} \end{displaymath} for any simple inertial type~$\tau$ for~$W_F$. This makes sense because the order of~$\be_{n/\delta(\Theta_F)}^\times$ is prime to~$p$, so raising characters to the $p$-th power is an automorphism of the character group. \subsection{Compatibilities between level zero maps.} Fix a lift~$\Theta_E$, so that every $\beta$-extension~$\kappa$ in~$\GL_n(F)$ of a maximal simple character with endo-class~$\Theta_F$ defines a level zero map~$\Lambda_{\kappa}$ on simple inertial classes with endo-class~$\Theta_F$.
As in the previous section, we also have a level zero map~$\Lambda$ on simple inertial types for~$W_F$, and the local Langlands correspondence over~$\bC$ defines a bijection \begin{equation}\label{recinertial} \rec: (\text{simple $\bC$-inertial classes with endo-class~$\Theta_F$}) \to (\text{simple $\bC$-inertial types with endo-class~$\Theta_F$}). \end{equation} We define a permutation~$\xi(\kappa)$ of the set~$\Gamma(\Theta_F) \backslash X_\bC(\Theta_F)$, depending on~$\kappa$, via \begin{displaymath} \xi(\kappa)(\Lambda_{\kappa}(\pi)) = \Lambda(\rec\,\pi) \end{displaymath} for any simple irreducible representation~$\pi$ of~$\GL_n(F)$ with endo-class~$\Theta_F$. Any isomorphism~$\iota_\ell : \bC \to \cbQ_\ell$ defines a bijection~$\rec_\ell$ analogous to~(\ref{recinertial}), identifying both sides of~(\ref{recinertial}) with their analogues over~$\cbQ_\ell$, and these identifications commute with the level zero maps through~$\iota_\ell$. The permutation~$\xi_\ell(\kappa)$ of~$\Gamma(\Theta_F) \backslash X_{\cbQ_\ell}(\Theta_F)$ is defined in the same way, using~$\rec_\ell$ and the level zero maps over~$\cbQ_\ell$. \begin{thm}\label{samelregularpart} Two elements of~$\Gamma(\Theta_F) \backslash X_\bC(\Theta_F)$ have the same $\ell$-regular part if and only if their images under $\xi(\kappa)$ have the same $\ell$-regular part. \end{thm} \begin{proof} By the discussion above, it suffices to prove the theorem over~$\cbQ_\ell$. Since~$\xi_\ell(\kappa)$ is a bijection, it suffices to prove that it preserves equality of $\ell$-regular parts. Consider two simple irreducible integral representations~$\pi_i$ with endo-class~$\Theta_F$ such that~$\Lambda_{\kappa, \cbQ_\ell}(\pi_i) = [\psi_i]$ and $[\psi_1]^{(\ell)} = [\psi_2]^{(\ell)}$. Assume that~$\psi_i$ is norm-inflated from an $\be$-regular character~$\mu_i$ of~$\be_{n/a_i\delta(\Theta_F)}^\times$.
By proposition~\ref{modlreduction}, the equality $\Lambda_{\br_\ell\kappa, \cbF_\ell}(\br_\ell\pi_i) = [\psi_i]^{(\ell)}$ holds. We can actually choose the~$\pi_i$ so that the~$\br_\ell(\pi_i)$ have the same supercuspidal support, and not just up to inertia. To see this, first choose the~$\pi_i$ so that they have supercuspidal support of the form $(\pi_i^0)^{\otimes a_i}$ for integral representations~$\pi_i^0$. By the classification of cuspidal representations in section~6 of~\cite{MSreps}, the supercuspidal support of~$\br_\ell(\pi_i^0)$ has the form $\tau_i \otimes \cdots \otimes \tau_i(m_i-1)$ for some~$\tau_i$. It follows from our assumption that $a_1m_1 = a_2 m_2$. Now consider twists of $\cbQ_\ell$-representations with supercuspidal support $\pi_i^0 \otimes \pi_i^0(m_i) \otimes \cdots \otimes \pi_i^0((a_i-1)m_i)$. Write~$\tau_i$ for the semisimple $W_F$-representation underlying~$\rec_\ell(\pi_i)$. It is a direct sum of~$a_i$ copies of some irreducible representation~$\sigma_i$. Writing $\br_\ell(\sigma)$ for the semisimplification of the mod~$\ell$ reduction of a semisimple finite-dimensional $\cbQ_\ell$-representation~$\sigma$ of~$W_F$, by 1.6 Th\'eor\`eme Principal in~\cite{Vignerasl2} we know that $\br_\ell(\tau_1) = \br_\ell(\tau_2)$. By the Ramification Theorem~\ref{ramification}, the~$\sigma_i$ contain the same irreducible representation~$\alpha$ of the wild inertia group~$P_F$. They can therefore be written as inductions of their $\alpha$-isotypic component, $\sigma_i = \Ind_T^F \rho(\alpha) \otimes \sigmatr_i(\alpha)$, and there exist integers~$d_i$ and characters~$\chi_i = \chi_i(\alpha)$ of~$T_{d_i}^\times$ such that $\sigmatr_i(\alpha) = \Ind_{T_{d_i}}^T \chi_i(\alpha)$ and $\sigma_i = \Ind_{T_{d_i}}^F\rho(\alpha) \otimes \chi_i(\alpha)$. Notice that the~$\chi_i$ may be characters of different groups, and at this stage we don't attempt to compare them with the~$\psi_i$. 
It suffices to prove that $(\chi_1|{\mu_{T_{d_1}}})^{(\ell)}$ and $(\chi_2|{\mu_{T_{d_2}}})^{(\ell)}$ are both norm-inflated from $\mu_T$-regular characters of the same~$\mu_{T_r}$ for some $r>0$, and that these characters of~$\mu_{T_r}$ are conjugate over~$T$. Now we proceed as in section~6.2.1 of~\cite{Vignerasl1}. Since the wild inertia group~$P_F$ is a pro-$p$ group, we can identify its representations over~$\cbQ_\ell$ and~$\cbF_\ell$. Then, we use that $\br_\ell(\sigma_i)$ is the semisimplification of $\Ind_{T_{d_i}}^F \rho(\alpha) \otimes \br_\ell(\chi_i)$. The character~$\xi_i = \br_\ell(\chi_i)$ need not be $\ell$-regular, and it extends to its stabilizer in~$W_T$, the Weil group of some intermediate unramified extension~$T_{r_i}$ of~$T$. Since~$\rho(\alpha)$ extends to~$W_T$, hence to~$W_{T_{r_i}}$, the induction $\Ind_{T_{d_i}}^{T_{r_i}} \rho(\alpha) \otimes \xi_i$ semisimplifies to the direct sum of~$\rho(\alpha) \otimes \widetilde{\xi}_i$ over all the extensions~$\widetilde{\xi}_i$ of~$\xi_i$ to~$T_{r_i}$. All these extensions are unramified twists of each other. By~\cite{Vignerasl1} corollaire~4.3, each induced representation $\Ind_{T_{r_i}}^F\rho(\alpha) \otimes \widetilde{\xi}_i$ is irreducible, because the stabilizer of~$\alpha$ in~$W_F$ is~$W_T$ and the stabilizer of~$\xi_i$ in~$W_T$ is~$W_{T_{r_i}}$. So $\br_\ell(\sigma_i)$ is a direct sum of unramified twists of a single irreducible representation, which can be taken to be any of the $\Ind_{T_{r_i}}^F\rho(\alpha) \otimes \widetilde{\xi}_i$. Since~$\br_\ell(\tau_1) = \br_\ell(\tau_2)$ and $\br_\ell(\tau_i)$ is a multiple of~$\br_\ell(\sigma_i)$ in the Grothendieck group, we see that $\Ind_{T_{r_1}}^F\rho(\alpha) \otimes \widetilde{\xi}_1$ and $\Ind_{T_{r_2}}^F\rho(\alpha) \otimes \widetilde{\xi}_2$ are unramified twists of each other. This implies that~$r_1 = r_2$ and the restrictions to~$\fo_{T_{r_i}}^\times$ of the~$\widetilde{\xi}_i$ are conjugate over~$T$.
But since $\xi_i = \br_\ell(\chi_i)$ this implies that $(\chi_1|{\mu_{T_{d_1}}})^{(\ell)}$ and $(\chi_2|{\mu_{T_{d_2}}})^{(\ell)}$ are conjugate over~$T$, after descending to~$\mu_{T_r}$ via the norm (here $r = r_1 = r_2$). \end{proof} There is a similar compatibility with the parametric degree of $[\chi] \in \Gamma(\Theta_F) \backslash X_\bC(\Theta_F)$, defined as the size of the orbit~$[\chi]$. \begin{pp}\label{sameparametricdegree} The map~$\xi(\kappa)$ preserves parametric degrees. \end{pp} \begin{proof} This is an immediate consequence of the definition of~$\Lambda_{\kappa}$ and~$\Lambda$, together with lemma~\ref{Langlandssupport}. \end{proof} \section{Canonical $\beta$-extensions.} By requiring that the level zero maps for the same $\Theta_E \to \Theta_F$ on~$\GL_n(F)$ and on~$W_F$ coincide, we obtain a canonical normalization for maximal $\beta$-extensions. In this section we work over the complex numbers. \begin{defn} Fix a lift~$\Theta_E \to \Theta_F$. We say that a maximal $\beta$-extension~$\kappa$ of endo-class~$\Theta_F$ is \emph{canonical} if $\Lambda_\kappa(\pi) = \Lambda(\rec(\pi))$ for all simple irreducible representations of~$\GL_n(F)$ of endo-class~$\Theta_F$. Equivalently, $\xi(\kappa)$ is the identity. \end{defn} \begin{thm}\label{canonicalwideextensions} Let~$\kappacan$ be a maximal $\beta$-extension of endo-class~$\Theta_F$ such that $\xi(\kappacan)$ fixes the $\be$-regular elements of $\Gamma(\Theta_F) \backslash X(\Theta_F)$. Then~$\xi(\kappacan) = 1$. \end{thm} \begin{proof} This is proved as lemma~9.11 in~\cite{SecherreStevensJL}. Assume that~$\alpha$ is a character of~$\be_{n/\delta(\Theta_F)}^\times$ which is not $\be$-regular: we will prove that $\xi(\kappacan)[\alpha] = [\alpha]$. Consider a simple representation~$\pi$ of~$\GL_n(F)$ with supercuspidal support~$\pi_0^{\otimes r}$ and~$\Lambda_{\kappacan}(\pi) = [\alpha]$.
Let~$a \geq 1$ be some large integer ($a \geq 7$ will suffice) and write~$\kappacans$ for the maximal $\beta$-extension in~$\GL_{an}(F)$ compatible with~$\kappacan$, and let~$\pi_a$ be a representation of~$\GL_{an}(F)$ with supercuspidal support~$\pi_0^{\otimes ar}$. Then it follows from proposition~\ref{refinement} that $\Lambda_{\kappacans}\pi_a$ is the inflation~$[\alpha^*]$ of~$\alpha$ to~$\be_{an/\delta(\Theta_F)}^\times$. By lemma~\ref{Langlandssupport} we have $\rec(\pi_a)|_{I_F} = \rec(\pi_0)|_{I_F}^{\oplus ar}$, so that if $\Lambda(\rec \, \pi) = [\mu]$ then $\Lambda(\rec \, \pi_a) = [\mu^*]$. Hence by definition we have $[\mu] = \xi(\kappacan)[\alpha]$ and $[\mu^*] = \xi(\kappacans)[\alpha^*]$, although at this stage we do not know whether $[\alpha] = [\mu]$. It follows that $\xi(\kappacans)[\alpha^*] = (\xi(\kappacan)[\alpha])^*$ and so it suffices to prove that $\xi(\kappacans)[\alpha^*] = [\alpha^*]$, because the norm is surjective in finite extensions of finite fields. Write $\be[\alpha]$ for the fixed field of the stabilizer of~$\alpha$ in~$\Gal(\be_{n/\delta(\Theta_F)} / \be)$. By lemma~8.5 and remark~8.7 in~\cite{SecherreStevensJL}, there exist an $\be$-regular character~$\beta$ of~$\be_{an/\delta(\Theta_F)}^\times$ and a prime number~$\ell \not = p$ not dividing the order of~$\be[\alpha]^\times$ such that $\alpha^*$ is the $\ell$-regular part of~$\beta$. By proposition~\ref{samelregularpart} we have that $(\xi(\kappacans)[\alpha^*])^{(\ell)} = (\xi(\kappacans)[\beta])^{(\ell)}$, and it suffices now to prove that $\xi(\kappacans)[\beta] = [\beta]$ and that $\xi(\kappacans)[\alpha^*]$ is $\ell$-regular. That $\xi(\kappacans)[\alpha^*]$ is $\ell$-regular follows by proposition~\ref{sameparametricdegree}, because it has the same parametric degree as~$[\alpha^*]$ and~$\ell$ does not divide the order of~$\be[\alpha]^\times$.
Now, we know by theorem~\ref{typestheorem} that there exists some $\beta$-extension~$\varkappa$ in~$\GL_{an}(F)$ such that $\xi(\varkappa)[\beta] = [\beta]$. So there exists some character~$\delta$ of~$\be^\times$ such that $\xi(\kappacans)[\beta] = [\delta\beta]$ for \emph{every} $\be$-regular character~$\beta$ of~$\be_{an/\delta(\Theta_F)}^\times$, because~$\varkappa$ and~$\kappacans$ are unramified twists of each other. We will prove that~$\delta$ is trivial: this implies the theorem. Fix some $\be$-regular character~$\alpha_+$ of~$\be_{n/\delta(\Theta_F)}^\times$. Because~$a$ is large enough, there exists some prime number $\ell \not = p$ not dividing the order of~$\be_{n/\delta(\Theta_F)}^\times = \be[\alpha_+]^\times$ (maybe not the same~$\ell$ as before) and some $\be$-regular character $\beta_+$ of $\be_{an/\delta(\Theta_F)}^\times$ such that~$\alpha^*_+$ is the $\ell$-regular part of~$\beta_+$. We know that $\xi(\kappacan)[\alpha_+] = [\alpha_+]$ by regularity of~$\alpha_+$ and by definition of~$\kappacan$. At the same time, $\xi(\kappacans)[\alpha^*_+] = [\delta \beta_+]^{(\ell)} = [\delta^{(\ell)}\alpha^*_+]$ (and this~$\delta$ is the same~$\delta$ as before), and since $\xi(\kappacans)[\alpha^*_+] = (\xi(\kappacan)[\alpha_+])^*$ we find that $[\alpha^*_+] = [\delta^{(\ell)} \alpha^*_+]$. It follows that we can write $\delta = \delta_{(\ell)}(\alpha^*_+)^{|\be|^{i}-1}$ for some $\ell$-primary character $\delta_{(\ell)}$ and some integer $i \in \{0, \ldots, \frac{n}{\delta(\Theta_F)} - 1 \}$. The order of~$\delta$ divides $|\be| - 1$, as it is a character of~$\be^\times$, and so $(\delta_{(\ell)})^{1-|\be|} = (\alpha^*_+)^{(|\be|^i - 1)(|\be| - 1)}$. But the order of~$\delta_{(\ell)}$ is a power of~$\ell$, and~$\ell$ is coprime to $|\be_{n/\delta(\Theta_F)}^\times| = |\be|^{n/\delta(\Theta_F)}-1$, hence to $|\be| - 1$. So~$\delta_{(\ell)} = 1$.
Finally, we can take~$\alpha_+$ to be a generator of the character group of $\be_{n/\delta(\Theta_F)}^\times$, hence we can assume that~$\alpha_+$ has order $|\be|^{n/\delta(\Theta_F)} - 1$. But the order of~$\alpha^*_+$ divides $(|\be|^i - 1)(|\be| - 1)$ by the above; and since $|\be| \geq 2$ we have $|\be|^{n/\delta(\Theta_F)} - 1 > (|\be|^i - 1)(|\be| - 1)$, hence $i = 0$ and~$\delta$ is trivial. \end{proof} The existence of~$\kappacan$ satisfying the assumptions of theorem~\ref{canonicalwideextensions} can be deduced from the Types Theorem of Bushnell and Henniart, see~\cite{BHeffective}. We will give an explicit description of~$\kappacan$ as a twist of a $p$-primary $\beta$-extension. \begin{thm}\label{typestheorem} Let~$\theta$ be a maximal character in~$\GL_n(F)$ of endo-class~$\Theta_F$. Let~$\kappa$ be the $p$-primary $\beta$-extension of~$\theta$, let $\epsilon^1_\theta$ be the symplectic sign character of~$\theta$ (see section~5 of~\cite{BHeffective}) and let~$\epsilon_{\Gal}$ be the quadratic character of~$\be^\times$ which is nontrivial if and only if $p \not = 2$ and the degree of a tame parameter field of~$\Theta_F$ over~$F$ is even. Then~$\kappacan = \epsilon_{\Gal} \epsilon^1_\theta \kappa$ has the property that \begin{displaymath} \Lambda_{\kappacan}(\pi) = \Lambda(\rec(\pi)) \end{displaymath} for all supercuspidal irreducible representations~$\pi$ of~$\GL_n(F)$ with endo-class~$\Theta_F$. \end{thm} \begin{proof} Let~$\sigma$ be an irreducible representation of~$W_F$ with~$\Lambda(\sigma) = [\chi]$, so that if we fix~$\alpha \in [\alpha]_E$ corresponding to~$\Theta_E$ then the isotypic component~$\sigma_\alpha$ of~$\sigma$ is isomorphic to $\rho(\alpha) \otimes \sigmatr(\alpha)$ for some choice of~$\rho(\alpha)$ and~$\sigmatr(\alpha)$. The Types Theorem in~\cite{BHeffective} then says that $\rec^{-1}(\sigma)$ contains an extended maximal simple type of the form \begin{displaymath} \psi \odot \lambda_{\sigmatr(\alpha)} \ltimes \nu.
\end{displaymath} Let's give definitions for these objects. First, one fixes a simple stratum~$[\fA, \beta]$ defining~$\theta$, with tame parameter field $T_\theta \subseteq F[\beta]$, and identifies~$T = T_\alpha$ with~$T_\theta$. This is done via an isomorphism $\iota: T_\alpha \to T_\theta$ in such a way that the pullback to~$T$ of the endo-class over~$T_\theta$ of the interior lift~$\theta_{T_\theta}$ coincides with the endo-class~$\Theta_T = \Phi_T(\alpha)$ corresponding to the $W_T$-orbit $[\alpha]_T$ of representations of the wild inertia group $P_F = P_T$. By construction, $\Phi_E(\alpha) = \Theta_E$, and by~\cite{BHeffective} 6.2 Proposition we have that~$\Theta_T$ is a lift of~$\Theta_E$ to~$T$. But~$\cl(\theta_{T_\theta})$ is a lift to~$T_\theta$ of the endo-class~$\cl(\theta_{T_\theta^{\mathrm{ur}}})$ of the interior lift of~$\theta$ to the unramified parameter field~$T_\theta^{\mathrm{ur}}$ contained in~$F[\beta]$. It follows that this isomorphism $\iota: T \to T_\theta$ induces the isomorphism $\iota_{T_\theta^{\mathrm{ur}}}: E \to T_\theta^{\mathrm{ur}}$ associated to~$\Theta_E$, because \begin{displaymath} (\iota_{T_\theta^{\mathrm{ur}}})^*\cl(\theta_{T_\theta^{\mathrm{ur}}}) = (\iota|_E)^*\Res_{T_\theta/T_\theta^{\mathrm{ur}}}\cl(\theta_{T_\theta}) = \Res_{T/E}(\iota^*\cl(\theta_{T_\theta})) = \Res_{T/E}\Theta_T = \Theta_E. \end{displaymath} Then put $d = n/\delta(\Theta_F) = \dim \sigmatr(\alpha)$. Take an unramified extension~$F[\beta]_d$ of~$F[\beta]$ of degree~$d$, contained in the centralizer of~$F[\beta]$ in~$M_n(F)$, such that~$F[\beta]_d^\times$ normalizes~$\theta$. Let~$K_d$ be the maximal unramified extension of~$F$ in~$F[\beta]_d$. The representation~$\nu$ is a full Heisenberg representation of~$\bJ_\theta$ over~$\theta$ in the sense of~\cite{BHeffective} 3.2 Definition, such that the trace of~$\nu$ is constant over $K_d/F$-regular elements of~$\mu_{K_d}$. 
Since this condition determines~$\nu|J_\theta$ uniquely, we find that~$\nu|J_\theta \cong \epsilon^1_\theta\kappa$. The representation~$\lambda_{\sigmatr(\alpha)}$ is constructed as follows (see~\cite{BHeffective} 3.6). Consider the characters~$[\chi_1(\alpha)]$ of~$T_d$ attached to~$\sigmatr(\alpha)$, and the restrictions~$\chi(\alpha) = \chi_1(\alpha)|\fo_{T_d}^\times$. Observe that the isomorphism $\iota: T \to T_\theta$ extends to an isomorphism $\iota: T_d \to T_{\theta, d}$ to the maximal tamely ramified extension~$T_{\theta, d}$ of~$F$ in~$F[\beta]_d$, which is a degree~$d$ unramified extension of~$T_\theta$. We get via~$\iota$ a well-defined orbit~$[\chi_1(\alpha)]$ of $T_\theta$-regular characters of~$T_{\theta, d}$ under~$\Gal(T_{\theta, d}/T_\theta)$. Inflate~$\chi_1$ to a character~$\chi_1^*$ of~$F[\beta]_d^\times$ via the norm $N_{F[\beta]_d / T_{\theta, d}}: F[\beta]_d^\times \to T_{\theta, d}^\times$, and let~$\fB$ be the intersection of~$\fA$ with the centralizer of~$F[\beta]$ in~$M_n(F)$. The restriction of~$\chi_1^*$ to~$\mu_{F[\beta]_d} = \mu_{T_d} \cong \bt^\times_{d} = \be^\times_{d} = \be^\times_{n/\delta(\Theta_F)}$ is an $\be$-regular character~$\chi^*$. Embedding~$\mu_{T_d} \cong \be_d^\times$ in $\GL_d(\be)$ as a maximal elliptic torus, we see that there exists a unique supercuspidal irreducible representation~$\widetilde{\sigma}[\chi^*]$ of $U(\fB)/U^1(\fB)$ whose trace on $\mu_{F[\beta]_d} = \mu_{T_{\theta, d}}$ is given in terms of~$\chi^*$ under the Green parametrization and the isomorphism $\iota: \mu_{T_d} \to \mu_{T_{\theta, d}}$. The representation~$\widetilde{\sigma}[\chi^*]$ is extended to~$\bJ_\theta = F[\beta]^\times U(\fB)J^1_\theta$ by letting~$J^1_\theta$ act trivially, and the extension is denoted~$\lambda_{\chi_1^*}^\bJ$. Then, by definition, \begin{displaymath} \lambda_{\sigmatr(\alpha)} \ltimes \nu = \lambda_{\chi_1^*}^\bJ \otimes \nu. 
\end{displaymath} Since $\iota: T_d \to T_{\theta, d}$ induces $\iota_{T_\theta^{\mathrm{ur}}}: E \to T_\theta^{\mathrm{ur}}$, any isomorphism in the conjugacy class $\Psi(\Theta_E): U(\fB)/U^1(\fB) \to \GL_d(\be)$ induces the isomorphism~$\iota^{-1}: \mu_{T_{\theta, d}} \to \mu_{T_d} \cong \be_{n/\delta(\Theta_F)}^\times$, up to~$\GL_d(\be)$ conjugacy and the action of~$\Gal(\be_{n/\delta(\Theta_F)}/\be)$. Then, $\widetilde{\sigma}[\chi^*]$ is isomorphic to the inflation of the representation $\sigma[\chi^*]$ of~$\GL_d(\be)$ through any isomorphism in the conjugacy class~$\Psi(\Theta_E)$. It follows that the restriction of~$\lambda_{\sigmatr(\alpha)} \ltimes \nu$ to~$J_\theta$ is a maximal simple type corresponding to the unique Bernstein component~$\fs$ with endo-class~$\Theta_F$ and~$\Lambda_{\epsilon^1_\theta\kappa}(\fs) = [\chi^{p^r}]$, for $p^r = [F[\beta] : T_\theta] = [F[\beta]_d : T_d]$. Indeed, the norm $N: F[\beta]_d^\times \to T_d^\times$ induces on the residue field the automorphism of raising to the $p^r$-th power, and the trace of~$\lambda_{\sigmatr(\alpha)}^\bJ$ on~$\mu_{F[\beta]_d}$ is given in terms of~$\chi^* = \chi^{p^r}$. The character~$\psi$ is a character of~$T^\times$ trivial on~$U^1(T)$ and corresponding to~$\epsilon_{\Gal}$ on $\mu_T$, by definition. By part~(1) of~\cite{BHeffective} 3.6 proposition, one has \begin{displaymath} \psi \odot \lambda_{\sigmatr(\alpha)}\ltimes \nu = \lambda_{\sigmatr(\alpha)} \ltimes (\psi \odot \nu) \end{displaymath} where the operation $\psi \odot \nu$ is defined in~\cite{BHeffective} (3.2.1) as given by $\psi^\bJ \otimes \nu$ for the $\theta$-flat character~$\psi^\bJ$ of~$\bJ_\theta$ attached to~$\psi$. This character is defined in~\cite{BHeffective} 3.1 Definition, and by part~(1) of~\cite{BHeffective} 3.1 Proposition we have $\psi^\bJ(x) = \psi(\det_T(x))$ for all $x \in \bJ_\theta \cap Z_G(T)$.
But then the restriction of~$\psi \odot \lambda_{\sigmatr(\alpha)}\ltimes \nu$ to~$J_\theta$ is a maximal simple type for the unique Bernstein component~$\fs'$ with endo-class~$\Theta_F$ and~$\Lambda_{\kappacan}(\fs') = [\chi^{p^r}]$. \end{proof} We can now prove that the canonical $\beta$-extensions behave well under transfer. \begin{pp} Let~$\kappacan$ be the canonical $\beta$-extension in~$\GL_n(F)$ of endo-class~$\Theta_F$. Consider a sequence~$(n_i)$ of positive integers summing to~$n$, such that~$\delta(\Theta_F)$ divides each~$n_i$. Let~$\kappa_i$ be the corresponding sequence of compatible $\beta$-extensions in the~$\GL_{n_i}(F)$. Then each~$\kappa_i$ is canonical. \end{pp} \begin{proof} Fix a lift~$\Theta_E \to \Theta_F$. By proposition~\ref{refinement}, it suffices to prove that~$\kappacan$ is compatible with~$\kappacan_0$, the canonical $\beta$-extension in~$\GL_{\delta(\Theta_F)}(F)$. Write~$\kappa_+$ for the $\beta$-extension in~$\GL_n(F)$ compatible with~$\kappacan_0$. Let~$\pi_0$ be a supercuspidal representation of~$\GL_{\delta(\Theta_F)}(F)$ with endo-class~$\Theta_F$ and $\Lambda_{\kappacan_0}(\pi_0) = [1]$. There exists a character~$\chi$ of~$\be^\times$ such that~$\chi\kappacan \cong \kappa_+$, and then $\Lambda_{\kappacan}(\pi) = \chi\Lambda_{\kappa_+}(\pi)$ for all simple representations~$\pi$ of endo-class~$\Theta_F$. Let~$\pi$ be a simple representation of~$\GL_{n}(F)$ with supercuspidal support inertially equivalent to $\pi_0^{\otimes n/ \delta(\Theta_F)}$. Then $\Lambda_{\kappa_+}(\pi)$ is inflated from $\Lambda_{\kappacan_0}(\pi_0) = \Lambda(\rec \, \pi_0)$, by compatibility, and $\Lambda_{\kappacan}(\pi) = \Lambda(\rec \, \pi)$, since~$\kappacan$ is canonical. But by construction we have that $\Lambda(\rec \, \pi)$ is inflated from $\Lambda(\rec \, \pi_0)$, hence $\Lambda_{\kappa_+}(\pi) = \Lambda_{\kappacan}(\pi) = [1]$. It follows that~$\chi = 1$ and~$\kappacan$ is compatible with~$\kappacan_0$. 
\end{proof} Finally, we mention that the connection between $\bK$-functors and level zero parts of Langlands parameters carries over to arbitrary Bernstein components of~$\GL_n(F)$. We briefly sketch how to see this. Given an inertial class of supercuspidal supports in~$\GL_n(F)$ \begin{displaymath} \fs = \left [ \prod_{i=1}^r \GL_{m_i}(F), \times_{i=1}^r \pi_i \right ] \end{displaymath} we can assume that the~$\pi_i$ are ordered according to their endo-class, so that there is a partition $I_1, \ldots, I_t$ of~$\{1, \ldots, r \}$ such that $i \in I_j$ if and only if~$\pi_i$ has endo-class~$\Theta_j$. For $1 \leq j \leq t$ write~$n_j = \sum_{i \in I_j} m_i$. In section~6 of~\cite{SecherreStevensblocks} there is constructed a functor \begin{displaymath} \bK: \left ( \text{smooth representations of~$\GL_n(F)$} \right ) \to \left ( \text{representations of~$\prod_{j=1}^t \GL_{n_j}(\be(\Theta_j))$} \right ) \end{displaymath} with the following two properties: \begin{enumerate} \item $\bK$ only depends on the choice of a maximal $\beta$-extension~$\kappa_j$ in~$\GL_{n_j}(F)$ of endo-class~$\Theta_j$. \item (see theorem~6.2 in~\cite{SecherreStevensblocks}) taking the $\beta$-extensions~$\kappa_i$ in~$\GL_{m_i}(F)$ compatible with~$\kappa_j$, for~$i \in I_j$, there is an isomorphism \begin{displaymath} \bK(\Ind_P^G(\otimes_{i=1}^r \pi_i)) \to \times_{i=1}^r \bK_i(\pi_i). \end{displaymath} \end{enumerate} The induction on the left-hand side is unnormalized, but this does not affect conclusions regarding inertial classes. If~$\pi$ is a representation with supercuspidal support in~$\fs$, we see that $\rec(\pi)|_{I_F} \cong \oplus_{i=1}^r \rec(\pi_i)|_{I_F}$ by lemma~\ref{Langlandssupport}. It follows that, if all the~$\kappa_i$ are canonical and we compute with the corresponding $\bK$-functor, then the supercuspidal support of~$\bK(\pi)$ encodes the level zero part of the Langlands parameter of~$\pi$. \bibliographystyle{amsalpha}
\section{Introduction} Among various combinations of hadrons, the antikaon ($\bar{K}$) and nucleon ($N$) form one of the most interesting pairs. The $\bar{K}$ meson is a Nambu-Goldstone boson of the spontaneous chiral symmetry breaking of quantum chromodynamics (QCD), which constrains the $\bar{K} N$ interaction to be strongly attractive in a model-independent manner. This chiral $\bar{K} N$ interaction together with its coupled channels dynamically generates the $\Lambda (1405)$ resonance~\cite{Kaiser:1995eg, Oset:1997it, Oller:2000fj, Oset:2001cn, Lutz:2001yb, Jido:2003cb}. Recently it was shown that the $\Lambda (1405)$ resonance in chiral dynamics is indeed a $\bar{K} N$ bound state~\cite{Sekihara:2014kya, Kamiya:2015aea} in terms of the compositeness~\cite{Hyodo:2011qc, Aceti:2012dd, Sekihara:2016xnq}. Because the $\bar{K} N$ interaction is attractive enough to form a bound state, the $\Lambda (1405)$ resonance, we expect that bound states of $\bar{K}$ and nuclei, called kaonic nuclei, should exist as well. Kaonic nuclei are interesting for two reasons: they are exotic, strongly interacting many-body states, and they provide a testing ground for the $\bar{K} N$ interaction and for the behavior of a strange quark at finite nuclear density. For kaonic nuclei, in particular for the simplest kaonic nucleus, \textit{i.e.}, the $\bar{K} N N$ bound state or the ``$K^{-} p p$'' state, many experimental searches and theoretical predictions have been performed, but even their existence is still controversial (see the review in Ref.~\cite{Nagae:2016cbm}). \begin{figure}[b] \centering \includegraphics[width=8.6cm]{dsdM_KNN_A.eps} \caption{$\Lambda p$ invariant-mass spectrum of the $K^{-} {}^{3}{\rm He} \to \Lambda p n$ reaction~\cite{Sekihara:2017ncl}. Our theoretical result, shown as the thick red line, is obtained in the scenario that the $\bar{K} N N$ bound state is generated~\cite{Sekihara:2016vyd}.
The experimental (E15) data and their fit are taken from Ref.~\cite{Sada:2016nkb} and shown in arbitrary units.} \label{fig:dsdM} \end{figure} In this line, the result from the J-PARC E15 experiment~\cite{Hashimoto:2014cri, Sada:2016nkb} is very promising. In the J-PARC E15 experiment, they observed the $K^{-} {}^{3} \text{He} \to \Lambda p n$ reaction with an initial kaon of momentum $1 \text{ GeV} / c$, a fast and forward neutron in the final state, and no spectator nucleon. As a result of the E15 first run, they found a peak structure near the $K^{-} p p$ threshold, shown as the black points and blue bands in Fig.~\ref{fig:dsdM}, which could be a signal of a $\bar{K} N N$ bound state. In order to understand the reaction mechanism and to investigate how this peak is constructed, we perform a theoretical analysis of this reaction. Details of the calculations are provided in Refs.~\cite{Sekihara:2016vyd, Sekihara:2016gjh, Sekihara:2017ncl}. \section{Theoretical analysis on the $K^{-} {}^{3} \text{He} \to \Lambda p n$ reaction} \begin{figure}[t] \centering \begin{tabular}{cc} \includegraphics[scale=0.16]{diag_KNN.eps} & \includegraphics[scale=0.16]{diag_FCA.eps} \\ (a) & (b) \end{tabular} \caption{(a) Feynman diagram most relevant to the three-nucleon absorption of an in-flight $K^{-}$, and (b) multiple $\bar{K}$ scattering and absorption~\cite{Sekihara:2016vyd}. In (b), the dashed lines and circles represent $\bar{K}$ and the $\bar{K} N \to \bar{K} N$ amplitude, respectively.} \label{fig:diag} \end{figure} In the J-PARC E15 experiment, they bombarded a ${}^{3} \text{He}$ target with $K^{-}$s of momentum $1 \text{ GeV} / c$ and observed the $K^{-} {}^{3} \text{He} \to \Lambda p n$ reaction with a forward neutron in the final state and no spectator nucleon. This reaction can be expressed by the diagrams in Fig.~\ref{fig:diag}. In the first step of the reaction, the $K^{-}$ kicks out a fast and forward final-state neutron and loses its energy.
The amplitude of this first collision is calculated so as to reproduce the experimental values of the cross sections of $K^{-} n \to K^{-} n$ and $K^{-} p \to \bar{K}^{0} n$. Because both the $K^{-} n \to K^{-} n$ and $K^{-} p \to \bar{K}^{0} n$ cross sections have their local or global minima when the final-state neutron goes forward, the $K^{-} {}^{3} \text{He} \to \Lambda p n$ reaction favors forward neutron emission over middle-angle emission. \begin{figure}[b] \centering \includegraphics[width=8.6cm]{dsdMdcos_kin_A.eps} \caption{Differential cross section of the $K^{-} {}^{3}{\rm He} \to \Lambda p n$ reaction, neglecting $\bar{K} N N$ dynamics.} \label{fig:kin} \end{figure} Then, the slow $\bar{K}$ after the first collision propagates and is absorbed by two nucleons from ${}^{3} \text{He}$. An important point is that this slow $\bar{K}$ can create a kinematic peak because the propagating $\bar{K}$ can go almost on its mass shell, which largely enhances the $\bar{K}$ propagator. In order to see this effect, we calculate the differential cross section of the $K^{-} {}^{3} \text{He} \to \Lambda p n$ reaction according to the Feynman diagram in Fig.~\ref{fig:diag} but neglecting the contribution from the shaded box, \textit{i.e.}, making it unity. This means that we neglect the dynamics of the slow $\bar{K}$ with the two nucleons, which may generate the $\bar{K} N N$ bound state. The result is shown in Fig.~\ref{fig:kin}. As one can see, even if we do not have $\bar{K} N N$ dynamics, we obtain a peak structure whose position is just above the $\bar{K} N N$ threshold of $2.37 \text{ GeV}$. The peak position shifts upward as the neutron angle becomes larger, which can be explained by the kinematics of the quasi-elastic kaon scattering in the first collision. Now let us take into account $\bar{K} N N$ dynamics and the transition $\bar{K} N N \to \Lambda p$.
Because all three particles, the $\bar{K}$ and the two nucleons, are slow in the present reaction mechanism, the multiple $\bar{K}$ scattering as in Fig.~\ref{fig:diag}(b) should be essential in general. Actually, if we truncate the scattering in Fig.~\ref{fig:diag}(b) at the first term on the right-hand side [uncorrelated $\Lambda (1405) p$ scenario], we cannot reproduce the lower tail around $2.3 \text{ GeV}$ of the experimental $\Lambda p$ invariant-mass spectrum~\cite{Sekihara:2016vyd}. In this study, we calculate the multiple $\bar{K}$ scattering in the so-called fixed center approximation~\cite{Bayar:2011qj, Bayar:2012hn}. By including the two-nucleon absorption width for $\bar{K}$ in a phenomenological way, we obtain a $\bar{K} N N$ bound state with its pole position at $2354 - 36 i \text{ MeV}$~\cite{Sekihara:2016vyd}. This multiple $\bar{K}$ scattering creates the peak structure in the $\Lambda p$ invariant-mass spectrum shown as the thick red line in Fig.~\ref{fig:dsdM}. Our mass spectrum is consistent with the experimental one within the present errors. An interesting finding is that we predict two peaks across the $\bar{K} N N$ threshold in the spectrum. The lower peak comes from the $\bar{K} N N$ bound state, which reproduces the tail at the lower energy $\sim 2.3 \text{ GeV}$ qualitatively well. This means that our spectrum supports the explanation that the E15 signal in the ${}^{3} \text{He} ( K^{-} , \, \Lambda p) n$ reaction is indeed a signal of the $\bar{K} N N$ bound state. On the other hand, the higher peak originates from the kinematics, \textit{i.e.}, from the almost on-shell $\bar{K}$ shown in Fig.~\ref{fig:kin}. Because an almost on-shell $\bar{K}$ is essential to form a $\bar{K} N N$ bound state in this reaction, this inevitably brings a kinematic peak above the $\bar{K} N N$ threshold in the physical mass spectrum.
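As a back-of-the-envelope cross-check, the quoted threshold of $2.37$ GeV and the pole position $2354 - 36i$ MeV can be combined into a binding energy and width. This is only an illustrative sketch: the rounded PDG masses and the identification $\Gamma = -2\,\mathrm{Im}\,E_{\rm pole}$ are our own inputs, not part of the original analysis.

```python
# Back-of-the-envelope cross-check of the numbers quoted in the text
# (rounded PDG masses in MeV; the binding energy is our derived value,
# not a number given in the original analysis).
M_KMINUS = 493.7  # K^- mass
M_P = 938.3       # proton mass

# "K^- pp" threshold, quoted in the text as 2.37 GeV
threshold = M_KMINUS + 2.0 * M_P

# Pole position of the Kbar-NN bound state from the text: 2354 - 36i MeV
pole = complex(2354.0, -36.0)

binding = threshold - pole.real  # binding energy below threshold
width = -2.0 * pole.imag         # width Gamma = -2 Im(E_pole)

print(f"threshold = {threshold / 1000.0:.2f} GeV")  # 2.37 GeV
print(f"binding   = {binding:.1f} MeV")             # ~16 MeV
print(f"width     = {width:.0f} MeV")               # 72 MeV
```

The resulting binding of roughly 16 MeV and width of about 72 MeV are consistent with a bound-state peak just below, and a kinematic peak just above, the threshold.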
\section{Summary} We expect that kaonic nuclei should exist owing to the strongly attractive interaction between the antikaon and the nucleon. Although the existence of kaonic nuclei is still controversial, a peak structure which could be a signal of the simplest kaonic nucleus, the $\bar{K} N N$ bound state, was recently found in the in-flight ${}^{3} \text{He} ( K^{-} , \, \Lambda p ) n$ reaction in the J-PARC E15 experiment. In order to understand the mechanism of the reaction, we theoretically analyzed the reaction observed in the J-PARC E15 experiment. We found that detecting a fast forward neutron in the final state guarantees an almost on-shell $\bar{K}$, which is essential for forming a bound state with the two nucleons from ${}^{3} \text{He}$. This almost on-shell $\bar{K}$ can bring a signal of the $\bar{K} N N$ bound state into the $\Lambda p$ invariant-mass spectrum, although it inevitably brings a kinematic peak above the $\bar{K} N N$ threshold as well. As a consequence, we predicted two peaks across the $\bar{K} N N$ threshold in the spectrum: the lower peak coming from the $\bar{K} N N$ bound state, and the higher one originating from the kinematics. We finally note that the predicted two-peak structure is indeed implied by the data of the E15 second run, in which 30 times higher statistics were accumulated for the same reaction~\cite{Iwasaki:2017}. This would further support the interpretation that the E15 signal in the ${}^{3} \text{He} ( K^{-} , \, \Lambda p) n$ reaction is indeed a signal of the $\bar{K} N N$ bound state.
\section{INTRODUCTION} \label{sec:intro} \setcounter{footnote}{30} The standard cosmological model with cold dark matter predicts that structure forms hierarchically over a wide range of size scales. The two most prominent satellites of the Milky Way, the Large and Small Magellanic Clouds (LMC and SMC), are both sufficiently massive to expect that they hosted their own populations of luminous satellites prior to their arrival at the Milky Way~\citep{D'Onghia:2008a,Sales:2011a,Dooley2017}. Indeed, the spatial distribution of the newly discovered ultra-faint dwarf galaxies in the Dark Energy Survey \citep[DES;][]{Abbott:2005bi} is heavily biased toward the direction of the LMC and SMC, providing strong but indirect observational evidence for the existence of ``satellites of satellites'' around our Milky Way~\citep{koposov15,bechtol15,dw15b,deason15,jethwa16,Sales2017}. Motivated by this distinct anisotropy in the southern satellite distribution, the Magellanic Satellites Survey (MagLiteS) is imaging the unexplored area on the other side of the Magellanic Clouds with the Dark Energy Camera~\citep[DECam;][]{flaugher_2015_decam} on the Blanco 4m telescope at Cerro Tololo Inter-American Observatory. MagLiteS is described in more detail by \citet{dw16}. Recently, a pair of dwarf galaxy candidates located on the outskirts of the LMC were discovered using photometric data from MagLiteS: Carina~II (Car~II\xspace) and Carina~III (Car~III\xspace) \citep[][hereafter Paper I]{torrealba:18}. These two systems are both extremely faint and close to us, with absolute magnitudes of $M_V \sim -4.5$ and $M_{V} \sim -2.4$, and heliocentric distances of $d \sim 37$~kpc and $d\sim 28$~kpc, respectively. Car~II\xspace ($r_{1/2} \sim 90$~pc) is significantly more extended than Car~III\xspace ($r_{1/2}\sim 30$~pc). 
Remarkably, these two objects form a close pair both on the sky (where they have a projected separation of $18\arcmin$ or $\sim 150$~pc) and along the line of sight (where they are $\sim$10~kpc apart), raising the question of whether Car~II\xspace and Car~III\xspace are gravitationally bound. Furthermore, due to the proximity of both systems to the LMC ($\sim$20~kpc), it seems likely that one or both are (or were) physically associated with the Magellanic Clouds. Kinematic information, such as line-of-sight velocities, is necessary to address these hypotheses, and confirm the nature of the two systems. Soon after the initial discovery in 2016 December, we began a spectroscopic follow-up program with the Magellan Baade Telescope, the Anglo-Australian Telescope (AAT), and the Very Large Telescope (VLT). Rapid follow-up with the AAT and VLT was possible thanks to short turn-around time for approval of service observations and Director's Discretionary time. Our multi-pronged observational strategy enabled both deep observations of a smaller number of faint targets with the 8-m-class Magellan and VLT, and wider-field observations of a large number of brighter targets (in both Car~II\xspace and Car~III\xspace together) with the AAT. Here, we report the first spectroscopic analysis of the Car~II\xspace and Car~III\xspace dwarf galaxy candidates discovered in MagLiteS. In \S\ref{sec:observations}, we describe the observations with all three telescopes, and the data reduction procedures. In \S\ref{sec:results}, we detail the results from our spectroscopic program, including the set of spectroscopic members, and measurements of the radial velocity, velocity dispersion, mean metallicity, and metallicity dispersion for each dwarf galaxy candidate. 
In \S\ref{sec:discussion}, we discuss the implications of these derived parameters as they relate to the classification of Car~II\xspace and Car~III\xspace, along with other unique features of this pair -- specifically, the possible tidal interaction between the two systems, and the association of Car~II\xspace and Car~III\xspace with the Magellanic Clouds. We also briefly discuss the search for dark matter annihilation within the Carina systems. We conclude in \S\ref{sec:summary}. The photometry in this work has been dereddened using the \citet{1998ApJ...500..525S} extinction map around Car~II\xspace and Car~III\xspace. Because of the relatively low Galactic latitude, the average reddening in this region is $E(B-V)\sim0.19$. \section{OBSERVATIONS AND DATA REDUCTION} \label{sec:observations} \input{obstable.tex} \begin{figure*}[th!] \epsscale{1} \plotone{Car_CMD} \caption{\emph{upper-left}: Color-magnitude diagram for stars observed with Magellan/IMACS, AAT/2dF+AAOmega, and VLT/GIRAFFE+FLAMES. Overplotted are the PARSEC isochrones of a metal-poor population with age $= 12.0$~Gyr and $\feh = -2.2$ at the distance of Car~II\xspace ($m-M = 17.86$, green) and Car~III\xspace ($m-M = 17.22$, magenta). Green squares indicate 18 members of Car~II\xspace, magenta circles indicate 4 members of Car~III\xspace, blue triangles indicate 7 non-members with velocities $v_{\rm hel} > 220~\kms$, and small black dots are the remaining non-members with velocities $v_{\rm hel} < 220~\kms$; black cross markers indicate candidate members that were observed but for which we were unable to obtain velocity measurements (mainly due to low S/N). \emph{upper-right}: Spatial distribution of the targets. Gold ellipses show the half-light radius of Car~II\xspace (larger ellipse) and Car~III\xspace (smaller ellipse). \emph{lower-left}: Heliocentric velocity versus distance from the center of Car~III\xspace. The separation between the Car~II\xspace members, Car~III\xspace members, and the non-members is clear.
A few non-members have velocities similar to that of Car~III\xspace, but are far away from the Car~III\xspace center. The black dashed line indicates the half-light radius of Car~III\xspace ($r_h=3.75\arcmin$; Paper~I). \emph{lower-right}: Velocity distribution of 283 stars with successful velocity measurements. Car~II\xspace members are indicated as the peak around 480~\kms (in green) and Car~III\xspace members are indicated as the peak around 280~\kms (in magenta). A few stars have velocities close to Car~III\xspace and are shown as a blue histogram. From their position on the sky (upper-right panel) and their distance from Car~III\xspace (lower-left panel) we conclude that they are not members of Car~III\xspace. } \label{cmd} \end{figure*} \subsection{Magellan/IMACS Spectra} \label{sec:imacs} We obtained multi-slit spectroscopy of Car~II\xspace and Car~III\xspace with the IMACS spectrograph~\citep{dressler06} on the Magellan Baade telescope on 2017 January 24--25. The observing setup was the same as for the spectroscopy of the Tucana~III~\citep{tuc3} and Eridanus~II~\citep{eri2} dwarf galaxies. We used the f/4 camera on IMACS, which provides a full field-of-view of $15.4\arcmin \times 15.4\arcmin$. The spectrograph was configured with the 1200~$\ell$/mm grating and a tilt angle of 32.4\degr, producing a spectral resolution of $R\sim11,000$ for a 0\farcs7 slit width and a wavelength range of at least $7550-8750$~\AA\ for each slit. This wavelength range covers the calcium triplet (CaT) lines around 8500~\AA, used for measuring radial velocities and metallicities of candidate member stars, as well as the telluric absorption lines (Fraunhofer A-band) around 7600~\AA\, used for the correction of velocity errors caused by mis-centering of the stars within the slits (see \citealt{sohn07} for details). The target selection and mask design for Magellan/IMACS (hereafter IMACS) were performed using the photometry from the original MagLiteS catalog. 
Based on the knowledge of confirmed members of the DES-discovered dwarf galaxies Reticulum~II from~\citet{simon15b} and Tucana~III from~\citet{tuc3}, we used selection criteria similar to those described in \citet{tuc3}. Target selection and mask design used a preliminary estimate for the distance modulus for Car~II\xspace (Car~III\xspace) of $m-M = 17.5$ ($m-M = 17.1$). For Car~II\xspace, the red giant branch (RGB) candidate members were selected to be redder than the fiducial sequence of the metal-poor globular cluster M92 from~\citet{an08}, bluer than a 12~Gyr, $\feh = -2.2$ theoretical PARSEC isochrone\footnote{Note that an earlier version of the PARSEC isochrone was used. See the Appendix for more details regarding the updated PARSEC isochrones.} from~\citet{bressan12}, and brighter than $g=20.8$. Several candidate blue horizontal branch (BHB) stars were selected at $17.7 < g < 18.8$ and $g-r < 0.1$; a handful of candidate red horizontal branch (RHB) stars were selected at $17.9 < g < 18.6$ and $0.1 < g-r < 0.3$. Potential main sequence turnoff (MSTO) stars were selected using a 0.1 mag wide window in $g - r$ around the PARSEC isochrone for $20.8 < g < 22$ (although because of the observing conditions, we did not get any useful spectra for MSTO candidates). For Car~III\xspace, we used the same selection criteria as for Car~II\xspace, with the exception of shifting all sequences 0.4~mag brighter according to the difference in distance moduli. Based on these targets, we designed two slitmasks (\code{IMACS-Car2Mask1} and \code{IMACS-Car2Mask2}) near the center of Car~II\xspace and one slitmask (\code{IMACS-Car3Mask1}) near the center of Car~III\xspace, using the \code{maskgen} program\footnote{\url{http://code.obs.carnegiescience.edu/maskgen}}. Stars were placed on the slit masks in descending priority by category: BHB, RGB, RHB, and MSTO.
Within each category, priorities were based on brightness and distance from the center of Car~II\xspace or Car~III\xspace. Finally, any remaining mask space was filled by stars with photometry that made them unlikely to be members. \code{IMACS-Car2Mask1} contains 72 slits, \code{IMACS-Car2Mask2} contains 48 slits, and \code{IMACS-Car3Mask1} contains 67 slits. All the targets observed with IMACS and the other instruments described in later sections are presented in Figure~\ref{cmd}. We obtained a 1.7-hr exposure with \code{IMACS-Car2Mask2} and a 3.7-hr exposure with \code{IMACS-Car3Mask1} on 2017 January 24, and a 1.25-hr exposure with \code{IMACS-Car2Mask1} on 2017 January 25. One of the BHB candidates, MAGLITES\,J073834.84$-$575211.2, on \code{IMACS-Car3Mask1} happened to fall in a gap between CCDs, so we also obtained a 30-min exposure on this star with a 0\farcs7-wide long slit (\code{IMACS-Car3LongSlit}) on 2017 January 25. The observing conditions on both nights were relatively poor, with high humidity and $\sim1\arcsec-3\arcsec$ seeing. We reduced the IMACS spectra following the procedures described by~\citet{tuc3} for Tucana~III. We first performed the bias subtraction and removal of read-out pattern noise, then we used the Cosmos pipeline~\citep{dressler11,oemler17} to derive an initial wavelength solution and perform the slit mapping, followed by a refined wavelength calibration and spectral extraction using an IMACS pipeline derived from the DEEP2 data reduction pipeline for Keck/DEIMOS \citep{cooper12}. For each mask, the extracted spectra from multiple exposures were combined using inverse-variance weighting. The combined spectra reach a signal-to-noise ratio (S/N) $\sim5$~pixel$^{-1}$ at $r = 19.0$ for Car~II\xspace and at $r = 19.3$ for Car~III\xspace. The details of the instrument setup, observing information, mask information, etc., for IMACS and other instruments described in later sections, are summarized in Table~\ref{tab:obstable}.
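The horizontal-branch selection windows quoted above amount to simple magnitude and color cuts. A minimal sketch (the function names are ours; the numbers are the Car~II\xspace windows from the text, and for Car~III\xspace all sequences would be shifted 0.4~mag brighter):

```python
def is_bhb_candidate(g, r):
    """Car II blue horizontal branch window: 17.7 < g < 18.8 and g - r < 0.1."""
    return 17.7 < g < 18.8 and (g - r) < 0.1


def is_rhb_candidate(g, r):
    """Car II red horizontal branch window: 17.9 < g < 18.6 and 0.1 < g - r < 0.3."""
    return 17.9 < g < 18.6 and 0.1 < (g - r) < 0.3
```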
\subsection{AAT/AAOmega+2dF Spectra} \label{sec:aaomega} We observed Car~II\xspace and Car~III\xspace with the AAOmega Spectrograph~\citep{Sharp2006}, a fiber-fed multi-object spectrograph on the 3.9~m Anglo-Australian Telescope (AAT) at the Australian Astronomical Observatory (AAO). The AAOmega Spectrograph is fed by the Two Degree Field (``2dF'') multi-object system, allowing acquisition of up to 392 simultaneous spectra of objects within a 2\degr\ field on the sky. AAOmega is a dual-beam spectrograph, which feeds a blue arm and a red arm with a beam splitter at 5700~\AA. For the red arm, we utilized the 1700D grating, providing a spectral resolution of $R = 10,000$ and wavelength coverage of $8400-8810$~\AA, which enables us to target the spectral region of the CaT absorption lines for velocity and metallicity measurements. For the blue arm, we chose the 580V grating with a resolution of $R = 1,300$ and wavelength coverage of $3750-5750$~\AA, which allows us to study additional elements (e.g., carbon) in the blue. This paper focuses on the kinematics and metallicities of the Carina systems, and therefore the spectra from the blue arm will be discussed in a future paper. Observations with AAT/AAOmega+2dF (hereafter AAT) were taken on 2017 January 23 and May 29 through the service observing program, and on 2017 January 25 through classical observing time. We obtained three 40-min exposures on January 23, one 40-min exposure and one 60-min exposure on January 25, and two 40-min exposures on May 29. To ensure accurate velocity determination, the arc frames were taken immediately before the science exposures at the same position. During the January run, the seeing was around 1\arcsec--1\farcs5 with intermittent clouds. During the May run, the weather was clear with seeing around 1\farcs6--2\farcs2.
Among the 392 fibers, 25 were assigned to sky positions, 8 were assigned to guide stars selected from the UCAC4 catalog~\citep{Zacharias:2013}, and the remaining fibers were assigned to target stars. The targets for the AAT run were mostly selected using the photometry from the original MagLiteS catalog. The RGB and RHB candidates were selected using the best-fit PARSEC isochrone for Car~II\xspace ($\mathrm{log~age} = 10.0$, $\feh=-1.7$, $m-M=17.5$) and Car~III\xspace ($\mathrm{log~age} = 9.75$, $\feh=-0.9$, $m-M=17.1$) at the time of the observations\footnote{Note that the final isochrone parameter values reported in Paper~I are different from those used for spectroscopic target selection because the photometry and the fits continued to be refined after the spectroscopic observations were obtained.}. The BHB candidates were selected using a fiducial M92 BHB isochrone placed at the distance modulus of Car~II\xspace and Car~III\xspace. In addition to MagLiteS photometry, we also used photometry from time-series follow-up observations (to search for RR Lyrae stars) acquired with DECam during Blanco 4-m Director's Discretionary and engineering time. The exposure times for these follow-up studies were shorter than the original MagLiteS exposures and therefore brighter stars could be observed. In addition, $u$-band measurements were performed in the follow-up observations, and a handful of K/M giant candidates were selected based on the $u-g$ and $g-i$ colors~\citep[e.g., see Figure 2 of][]{Yanny2009}. Furthermore, we included some RR Lyrae candidates from the preliminary analysis of time-series follow-up studies. Thanks to the proximity of Car~II\xspace and Car~III\xspace on the sky, as well as the large field of view of AAT+2dF, both systems were targeted in a single pointing. We assigned RR Lyrae candidates the highest priority, followed by the BHB and K/M giant candidates.
Stars in the two remaining categories (RGB, RHB) were prioritized by their $r$-band brightness. Note that the target spacing of 2dF is typically 30\arcsec -- 40\arcsec due to fiber collisions, and therefore some targets located close to the centers of Car~II\xspace and Car~III\xspace were missed where the target density is high. The candidate stars were then allocated according to the priorities described above using the fiber configuration program \code{configure}\footnote{\url{https://www.aao.gov.au/science/software/configure}} provided by the AAO. Flexibility in target allocation with 2dF allowed us to identify bright (S/N $>$ 20 per pixel) non-member stars in the January 23 data, leading to re-allocation of those fibers to alternate targets during the January 25 observations. In total, 388 candidates were targeted over the two nights of the January run; 309 of them were targeted again in the May run. The data reduction was performed using the \code{2dfdr}\footnote{\url{https://www.aao.gov.au/science/software/2dfdr}} v6.28 data reduction program of the AAO. The reduction includes bias subtraction, scattered light subtraction, flat-fielding, optimal spectral extraction, wavelength calibration, sky subtraction, and frame combination with cosmic ray rejection. Wavelength calibration was first performed using the arc frames taken immediately before or after each science exposure, followed by a recalibration with a second-order polynomial fit using sky emission lines. As the observations were taken on different nights, the reduced spectra were corrected for the heliocentric motion of the Sun at each exposure, before the spectra from multiple exposures were combined using inverse-variance weighting. To detect possible binary stars, we combined the spectra from the January run (\code{AAT-Jan}) and May run (\code{AAT-May}) separately for the velocity measurements.
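The inverse-variance weighting used to combine repeat exposures can be sketched as follows (an illustration of the standard technique, not the \code{2dfdr} or IMACS pipeline code; a common wavelength grid is assumed):

```python
import numpy as np


def combine_spectra(fluxes, errors):
    """Inverse-variance weighted combination of repeat exposures.

    fluxes, errors: arrays of shape (n_exposures, n_pixels), assumed to
    share a common wavelength grid.
    """
    fluxes = np.asarray(fluxes, dtype=float)
    weights = 1.0 / np.asarray(errors, dtype=float) ** 2
    wsum = weights.sum(axis=0)
    combined_flux = (weights * fluxes).sum(axis=0) / wsum
    combined_err = 1.0 / np.sqrt(wsum)  # error of the weighted mean
    return combined_flux, combined_err
```

Note that pixels measured with smaller errors dominate the combined flux, and the combined error shrinks as exposures accumulate.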
The combined spectra have S/N $\sim5$~pixel$^{-1}$ at $r = 18.7$ for \code{AAT-Jan} and at $r = 18.0$ for \code{AAT-May}. \subsection{VLT/GIRAFFE+FLAMES Spectra} \label{sec:giraffe} After the observations with the AAT and Magellan, we also observed Car~II\xspace and Car~III\xspace with the GIRAFFE+FLAMES spectrograph~\citep{pasquini00} on the 8.2-m Kueyen telescope (UT2) at the ESO-VLT through Director's Discretionary Time. Observations were taken in MEDUSA mode, which allows the simultaneous observation of up to 132 objects, with a minimum target separation of 11\arcsec~due to fiber collisions. On the night of 2017 February 26, one 2775-second exposure was taken under excellent seeing conditions ($\sim 0\farcs3$). The LR8 grating was used for this observation, which covers the wavelength range 8206--9400~\AA~at a resolution of $R\sim6,000$. The calibration frames, including biases, flats and ThAr arcs, were taken at the end of the night. Target selection for VLT/GIRAFFE+FLAMES (hereafter VLT) was done in a similar way to the AAT, with the exception that we manually shifted the best-fit PARSEC isochrone $\sim 0.07$~mag bluer in $g-r$, based on the confirmed Car~II\xspace members identified by IMACS and AAT\footnote{The original PARSEC synthetic isochrones used an out-of-date DECam system response. See details about this shift in the Appendix.}. As the field of view of FLAMES is about 25\arcmin\ in diameter, we centered the exposure field in between Car~II\xspace and Car~III\xspace and we therefore missed some of the BHB members found in the AAT data (see \S\ref{sec:results}). A total of 116 targets were selected to feed FLAMES, with 13 fibers assigned to blank sky positions. As only a single exposure was obtained with the VLT, we first removed cosmic rays using L.A.Cosmic~\citep{vandokkum01}.
We then reduced the data with the GIRAFFE \code{Gasgano} pipeline (v2.4.8) provided by ESO for bias subtraction, flat-fielding, wavelength calibration and spectral extraction of individual objects. We performed a wavelength re-calibration using sky emission lines and a sky subtraction with our own code. For details, we refer to the spectroscopic analysis of the Horologium~I dwarf galaxy (Li et al., in prep.). In summary, a first-order wavelength correction derived from sky lines was applied to every spectrum to compensate for the wavelength shift likely caused by the temperature changes between the science observations during the night and the calibration frames taken at the end of the night. We then combined the 13 sky fibers into a master sky spectrum. To compensate for fiber-to-fiber throughput and resolution variations, for each target spectrum we degraded the resolution of either the target spectrum or the master sky spectrum (whichever had the higher resolution) and then scaled the master sky spectrum to match the intensity of the sky lines in the target spectrum before the subtraction. The final reduced spectra (referred to as \code{VLT-Feb}) have S/N $\sim7$~pixel$^{-1}$ at $r \sim 19.8$. \section{RESULTS} \label{sec:results} In this section, we present the results derived from the observations taken with the three telescopes. We first determine the radial velocity of each individual candidate star. We then identify member stars based on the velocity, spatial location, and location on the color-magnitude diagram. After identifying the member stars, we also compute the systemic velocity, velocity dispersion, mean metallicity and metallicity dispersion for Car~II\xspace and Car~III\xspace. We use the distance moduli (dereddened) and structural parameters from Paper~I for the analysis in this work, unless otherwise stated. These parameters, together with the derived quantities in this section, are summarized in Table~\ref{tab:car2_table}.
We note that for Car~II\xspace, the distance modulus derived from RR Lyrae stars in Paper~I has smaller uncertainties and is used in this work. \input{car2_table.tex} \subsection{Radial Velocity Measurements} \label{sec:RV} The reduced spectra from IMACS, AAT, and VLT were used for radial velocity measurements following the method described in~\citet{eri2}. We measured the heliocentric radial velocities ($v_\mathrm{hel}$) by fitting the reduced spectra with velocity templates using a Markov chain Monte Carlo (MCMC) sampler and finding the best-fit velocity that maximizes the likelihood defined by Eq. 1 in~\citet{eri2}. Instead of using only one velocity template per spectrum, we defined a set of templates for each instrument and used the template that gave the largest likelihood at the best-fit velocity as the best template for each star. The template set for each instrument includes at least one metal-rich RGB, one metal-poor RGB, and one BHB star. The velocity templates for AAT and IMACS were observed using the same instrument setting as the science observations and were constructed following the description in~\citet{tuc3}. We were not able to obtain any velocity template spectra during the VLT run. Instead, we used the Keck/DEIMOS templates from \citet{kirby15b}, as the Keck/DEIMOS spectra have a much wider wavelength coverage and a similar resolution ($R\sim6,000$) to our VLT spectra. For the IMACS spectra, we also applied a telluric correction derived using a telluric template to correct for the mis-centering of spectroscopic targets within each slit (see \citealt{eri2} for more details). The statistical uncertainty on each velocity measurement is calculated as the standard deviation of the posterior velocity distribution from the MCMC sampler. This error is related primarily to the S/N of the spectra, with stellar temperature and metallicity also playing a role.
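The template-fitting idea behind these measurements (Doppler-shift a template and find the velocity that best matches the observed spectrum) can be illustrated with a toy chi-square grid search. This is our simplified stand-in for the paper's MCMC sampler and template sets, not the actual pipeline:

```python
import numpy as np

C_KMS = 299792.458  # speed of light, km/s


def doppler_shift(wavelength, v_kms):
    """Non-relativistic Doppler shift of a rest-frame wavelength grid."""
    return wavelength * (1.0 + v_kms / C_KMS)


def chi2(v_kms, wave, flux, err, tmpl_wave, tmpl_flux):
    """Chi-square between the observed spectrum and a velocity-shifted template."""
    model = np.interp(wave, doppler_shift(tmpl_wave, v_kms), tmpl_flux)
    return float(np.sum(((flux - model) / err) ** 2))


def best_velocity(wave, flux, err, tmpl_wave, tmpl_flux, v_grid):
    """Brute-force grid minimization (a stand-in for an MCMC sampler)."""
    chis = [chi2(v, wave, flux, err, tmpl_wave, tmpl_flux) for v in v_grid]
    return float(v_grid[int(np.argmin(chis))])
```

Trying several templates and keeping the one with the best likelihood, as described above, amounts to repeating this minimization per template and comparing the minima.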
Other systematic effects, such as instrument flexure, uncertainties in the wavelength calibration, uncertainties in the template velocity and template mismatching, should also be considered in the final velocity uncertainty budget. We estimated the systematic uncertainty as the quadrature difference between repeat measurements and the statistical uncertainty~\citep[cf.][]{sg07,tuc3,eri2}. We adopted a systematic floor of 1.0~\kms for IMACS from~\citet{tuc3}. For AAT, we determined the systematic floor to be 0.5~\kms using repeat measurements of 18 bright stars (S/N $>8$) from the January run and the May run. Since only one exposure was taken with the VLT, we were not able to derive a systematic floor with this dataset. We adopted a systematic floor of 0.9~\kms from the VLT observations of Horologium~I (Li et al., in prep.), which has the same instrument setup as this data set. We added these systematic uncertainties in quadrature with the statistical uncertainties to obtain the final reported velocity uncertainties $\delta_v$. In order to combine the velocities derived from the three different spectrographs to produce the final dataset for the velocity dispersion determination in \S\ref{sec:vdisp}, we need to verify that there is no systematic offset between these three data sets. We compare the repeated measurements from different instruments as shown in the top panels of Figure~\ref{rv_compare} and find no obvious systematic offset between any given pair of instruments. In order to confirm that our error estimation is reasonable for each instrument, we again use these repeated measurements from each pair of instruments and compute the distribution of velocity differences between the two independent measurements ($v_1$, $v_2$), divided by the quadrature sum of their uncertainties ($\sqrt{\delta_1^2+\delta_2^2}$).
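The error bookkeeping used here (quadrature combination of statistical and systematic terms, inverse-variance weighted averages, and the normalized pair statistic, which should follow a unit normal distribution if the uncertainties are correct) can be summarized in a short sketch (function names are ours, for illustration only):

```python
import math


def total_uncertainty(stat_err, sys_floor):
    """Quadrature sum of the statistical error and the systematic floor."""
    return math.hypot(stat_err, sys_floor)


def weighted_mean_velocity(velocities, errors):
    """Inverse-variance (w = 1/err^2) weighted mean and its uncertainty."""
    weights = [1.0 / e ** 2 for e in errors]
    wsum = sum(weights)
    mean = sum(w * v for w, v in zip(weights, velocities)) / wsum
    return mean, 1.0 / math.sqrt(wsum)


def normalized_difference(v1, e1, v2, e2):
    """Pair statistic (v1 - v2)/sqrt(e1^2 + e2^2); ~N(0, 1) if errors are right."""
    return (v1 - v2) / math.hypot(e1, e2)
```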
The resulting distributions shown in the bottom panels of Figure~\ref{rv_compare} are well-described by normal distributions with zero mean and unit variance, as shown by the red dashed curves in the same plots. From this comparison, we conclude that there is no significant zero-point shift between the various spectrographs, and that combining the three datasets will not introduce additional velocity uncertainties. In order to study the kinematics of Car~II\xspace and Car~III\xspace, as well as the spectroscopic membership in each system, we combined the velocity measurements from the three different instruments into a single data set. With this combined sample, we successfully determined the velocities of 283 stars. The heliocentric velocities and the associated uncertainties are reported in Table~\ref{tab:car2_spec}. We note that although the results reported in the table are given per observing run or mask, for the remainder of this paper we use the weighted average ($w = 1/\delta^2$) for stars with more than one measurement. \begin{figure*}[th!] \epsscale{1} \plotone{RV_compare} \caption{\emph{Top row}: We compare velocity measurements from different instruments using repeated measurements. There are no obvious zero-point shifts between the three instruments. \emph{Bottom row}: Radial velocity uncertainty estimation tests using the repeated observations from different instruments. The histograms show the distributions of the velocity difference normalized by the quadrature sum of their uncertainties. The red dashed curves show a normal distribution with zero mean and unit variance scaled by the total number of pairs. The good agreement with the blue histograms indicates that our estimation of the velocity uncertainties is reasonable.
} \label{rv_compare} \end{figure*} \subsection{Spectroscopic Membership Determination} \label{sec:membership} Figure~\ref{cmd} shows the color-magnitude diagram (CMD), spatial distribution, and velocity distribution of the observed stars in both systems. We identified a total of 18 members in Car~II\xspace and 4 members in Car~III\xspace from the combined sample (see below). The 18 members in Car~II\xspace form a coherent velocity peak near 480~\kms in the heliocentric velocity distribution (lower right panel), including 6 BHB members, 2 RR Lyrae members, and 10 RGB members. Since the heliocentric velocity of Car~II\xspace is quite high relative to the mean velocity and velocity dispersion of the Milky Way halo, there are no foreground contaminants anywhere near the velocity of Car~II\xspace. Furthermore, all stars within this velocity peak fall on the Car~II\xspace isochrone, making the membership of this system unambiguous. The BHB members are all far from the center of Car~II\xspace ($r > 10\arcmin$) and therefore 4 of them were observed only with the AAT. The velocity uncertainties of these BHB members are relatively large as a result of their broad lines and the low S/N of their spectra. The brightest RGB member, MAGLITES\,J073621.25$-$575800.3, was observed both in January (AAT \& IMACS) and in May (AAT). The difference between the January and May observations is $\sim$30~\kms. Another RGB member, MAGLITES\,J073646.47$-$575910.2, was observed both in January (AAT \& IMACS) and February (VLT), and the difference is $\sim$25~\kms. We therefore conclude that these two stars are binaries. The 2 RR Lyrae members of Car~II\xspace also show velocity variability, and are discussed in more detail in \S\ref{sec:rrl}. We find a second narrow peak in the velocity distribution, containing 4 stars, centered at 280~\kms. All four stars are located within the Car~III\xspace half-light radius and we therefore identify them as Car~III\xspace members.
While it is difficult to confirm an association based on only four stars, two of the four are BHB stars at the distance of Car~III\xspace ($m-M$ = 17.22, see upper-left panel in Figure~\ref{cmd}). The brighter of the two RGB stars lies exactly on the expected Car~III\xspace isochrone, while the fainter one is slightly redder than expected. Nevertheless, the combination of the spatial coincidence between these stars and their position in the CMD strongly suggests that this group is related to Car~III\xspace. Finally, seven candidate stars have velocities in the range of 260--400~\kms and are displayed in blue in Figure~\ref{cmd}. Given their velocities, these stars are clearly not members of Car~II\xspace. Their CMD positions and large distance away from Car~III\xspace also indicate that they are not Car~III\xspace members. We used the Besan\c{c}on Galactic stellar model \citep{besancon} to estimate the expected number of foreground Milky Way stars in our spectroscopic sample. We selected simulated stars within 0.2 mag of the PARSEC isochrone and with $r < 19.5$. We found that in an area of 1~deg$^2$ centered on Car~II\xspace there are $\sim$15 simulated stars that have a velocity larger than 260\kms, with a majority at $g-r < 0.4$ (i.e., foreground main-sequence stars). The surface density of non-Car~II\xspace and Car~III\xspace members in our spectroscopic sample is similar to this value. The small number of contaminants from the Besan\c{c}on model further supports the conclusion that the two peaks are associated with Car~II\xspace and Car~III\xspace members. Given that Car~II\xspace is relatively close to the LMC on the sky, some of these non-member stars might also belong to the LMC, which is discussed further in \S\ref{sec:lmcstar}. \subsubsection{LMC Contamination} \label{sec:lmcstar} The field of Car~II\xspace and Car~III\xspace is located 18\degr~($\sim$20 kpc) from the center of the LMC. 
While the visible body of the LMC is contained within the central $\sim 10\degr$~\citep[e.g.,][]{Besla2016}, stars associated with the LMC have been detected as far out as $\sim 20\degr$~\citep[e.g.,][]{Nidever2017}. Recently, \citet{Belokurov2016} also reported the detection of a small number of BHB candidates likely associated with the Magellanic Clouds, at a wide range of angular distances, extending out to $\sim$30\degr\ or perhaps even $\sim$50\degr\ from the LMC. Interestingly, the diffuse cloud of BHB-like stars appears rather clumpy; the authors identify at least four individual stream-like structures. The most significant of these, the so-called S1 stream, can be traced securely to $20-25$\degr\ from the LMC. Further support for the picture in which the LMC is enshrouded in a thin veil of stellar debris comes from the studies of \citet{Mackey2016} and \citet{Belokurov2017}, who use main sequence and RR Lyrae stars, respectively, to trace a halo-like component around the LMC out to $\sim20$\degr\ from its center. Finally, as \citet{Boubert2017} demonstrate using a combination of a stellar evolution code and N-body simulations of the LMC in-fall, the Cloud ought to be surrounded by an envelope of runaway stars. These high-velocity escapees are kicked out of the dwarf's disk during stellar binary disruption as a result of core-collapse supernova explosions, and can travel many tens of kpc away from the LMC in all directions. It is therefore possible that some stars in our spectroscopic sample belong to the LMC. To calculate what the velocities of such stars might be, we used the rotating disk models of~\citet{vanderMarel2016}. These imply that the line-of-sight velocity of the LMC disk at the sky position of Car~II\xspace and Car~III\xspace is 380~\kms. 
This is $118$~\kms higher than the systemic velocity of the LMC center of mass, because far from the center of the galaxy a significant component of its large transverse velocity vector projects along the line of sight. The location of Car~II\xspace and Car~III\xspace is on the near side of the inclined LMC disk, so the distance to the disk there is only $43.5$ kpc. Since the positions of Car~II\xspace and Car~III\xspace are near the kinematic minor axis, a possible non-rotating LMC halo population would have more-or-less the same velocity (namely, 380~\kms) as the rotating disk. Old populations in the visible part of the LMC have velocity dispersions in the range 20--30~\kms~\citep{vanderMarel2009}. \citet{vdm02} also shows that the velocity dispersion is almost constant at $\sim20~\kms$ between 2~\unit{kpc} and 9~\unit{kpc} from the LMC center. We expect the dispersion at the position of Car~II\xspace and Car~III\xspace ($\sim$20 kpc from LMC) to be similar, though this depends largely on the mass and extent of the LMC's dark halo. Moreover, the tidal radius of the LMC is $24.0\degr\pm 5.6\degr$~\citep{vanderMarel2014}. Therefore, tidal perturbations could affect both the mean velocity and velocity dispersion of LMC stars at the position of Car~II\xspace and Car~III\xspace. The mean velocities inferred here for Car~II\xspace and Car~III\xspace are offset by approximately $+100$ and $-100$~\kms, respectively, from the predicted velocities of LMC members. Therefore, contamination by LMC members at the Car~II\xspace and Car~III\xspace velocities is expected to be negligible. We do detect 7 non-member stars (triangles in Figure~\ref{cmd}; see also Table~\ref{tab:car2_spec}) with $v_{\rm hel}$ in the range 260--400~\kms. These velocities correspond to much smaller velocities in the Galactocentric frame ($\sim 40$--180~\kms), since Car~II\xspace and Car~III\xspace are located almost opposite to the direction of solar motion.
Among these 7 stars, 6 have $0.2 < g-r < 0.4$ and are very likely to be foreground halo stars at much closer distances. The seventh star, MAGLITES\,J073634.86$-$580340.6, is a BHB star with $g-r\sim -0.2$. From the CMD (see upper-left panel in Figure~\ref{cmd}), it appears slightly more distant than Car~II\xspace. Comparing its $r-$band magnitude with the BHB members in Car~II\xspace and Car~III\xspace, we estimate the distance modulus of this star to be $m-M = 18.1\pm0.1$, corresponding to a heliocentric distance of $42\pm 2~$kpc, in good agreement with the model prediction of $\sim 43.5~$kpc for the near side of the LMC mentioned above. This BHB star was observed independently with AAT, IMACS, and VLT. The weighted average velocity is $v_\mathrm{hel} =331.7\pm2.0~\kms$, showing no evidence of binary motion. For comparison, \citet{Munoz2006} detect a group of LMC stars in the field of the Carina dwarf spheroidal galaxy ($\sim22$\degr\ from LMC center and $\sim10$\degr\ from Car~II\xspace and Car~III\xspace) with an average radial velocity around $v_\mathrm{hel} = 332~\kms$. Therefore, the distance and velocity of this BHB star both suggest an association with the LMC. It lies about 18\degr\ from the center of the LMC, making it one of the LMC's most distant spectroscopically confirmed BHB members. Clearly, finding additional LMC stars with similar radial velocities at other positions on the sky would help our understanding of the structure and dynamics of the LMC's outer regions. \subsubsection{RR Lyrae Stars} \label{sec:rrl} Car~II\xspace contains 3 RR Lyrae stars (Paper~I). Our spectroscopic runs targeted 2 of those stars, namely MAGLITES\,J073637.00$-$580114.5 and MAGLITES\,J073645.86$-$575154.1 (or V1 and V2 in Paper~I), which are the two innermost RR Lyrae stars, with distances from the center of Car~II\xspace of 2\farcm1 and 7\farcm7.
The derivation of the center-of-mass velocity, or systemic velocity, of RR Lyrae stars requires special treatment because their radial velocities change significantly (up to $\sim 100~\kms$ for RRab stars) during the pulsation cycle. A model of a radial velocity curve must be fitted to the spectroscopic data. To do this, we followed the procedure developed in \citet{vivas08}. The observational data and the fitted model for each star are shown in Figure~\ref{fig-velRR}. Because RR Lyrae periods are usually less than a day, we measured the velocities from the January 23 and January 25 AAT observations separately. For the other runs, we used the velocity measured from the 1--3 combined exposures of each night, as reported in Table~\ref{tab:car2_spec}. We therefore have 5 independent velocity measurements for V1 and 3 for V2. \begin{figure}[htb!] \plotone{rrl.pdf} \caption{Radial velocity curve fits for the observations of the RR Lyrae stars MAGLITES\,J073637.00$-$580114.5 (V1, top) and MAGLITES\,J073645.86$-$575154.1 (V2, bottom). Circles represent the measurements from each individual spectrum, which were obtained at different phases during the pulsation cycle. The rightmost symbol (grey) for V1 (at phase 0.93) is shown only for reference; it was not used in the fit because it lies near a large discontinuity in the radial velocity curve. Error bars on the horizontal axis do not represent measurement errors but rather the time span (in units of phase) over which each spectrum was obtained. Solid lines represent the models fitted to both stars, X~Ari in the case of the RRab star V1, and a template based on T~Sex and DH~Peg in the case of the RRc star V2. } \label{fig-velRR} \end{figure} For V1, an RRab star, we used the radial velocity model of the star X Arietis, which was parametrized by \citet{layden94} based on observations by \citet{oke66}.
The radial velocity curve of the ab-type RR Lyrae stars (the type of our star V1) has a large discontinuity near maximum light. Thus, it is advisable not to measure radial velocities near that phase in the pulsation cycle ($\phi<0.05$ or $\phi>0.9$). However, at the time of the spectroscopic observations we had not obtained final light curves of the RR Lyrae stars and thus spectra were taken at random phases. Phases were calculated later once the light curves were characterized by Paper~I. One of our observations of V1 was indeed unusable because the spectrum was acquired at phase $\phi=0.93$. Thus, the radial velocity curve was fitted using 4 observations with phases ranging from 0.08 to 0.58. As seen in Figure~\ref{fig-velRR}, all the individual observations follow the model of X Arietis very closely, which was shifted in velocity to match the observations. The best match was obtained for a systemic velocity of 491~\kms. The rms of the fit is 2.8~\kms. However, to obtain a more realistic error we followed \citet{vivas05} and included uncertainties due to star-to-star variations in the amplitude of the radial velocity curve as well as possible differences in the exact phase of the systemic velocity. We determine a final systemic (heliocentric) velocity for V1 of $491\pm7$~\kms. For V2, which is an RRc star, we used a template constructed by \citet{duffau06} based on observations of T~Sex and DH~Peg. The amplitude of the radial velocity curve of RRc stars is not as large as that of RRab variables, nor is there a discontinuity at maximum light. Thus, all 3 spectra available for this star can be used to determine its velocity. We measure a heliocentric velocity for V2 of $474 \pm 5$~\kms. The systemic radial velocities obtained for these two RR Lyrae stars confirm that they are members of Car~II\xspace.
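The template-fitting step reduces to a one-parameter problem: shift a fixed radial velocity template in velocity until it best matches the phased measurements. A minimal sketch with synthetic data, where a sinusoid stands in for the empirical X~Ari / T~Sex + DH~Peg templates (the amplitude, phases, and errors below are illustrative, not the actual measurements):

```python
import numpy as np

def template(phase):
    # Stand-in zero-mean RV template; the real analysis uses
    # empirical templates (e.g., X Ari for RRab stars).
    return 40.0 * np.sin(2.0 * np.pi * phase)

def fit_systemic_velocity(phase, v_obs, v_err, grid):
    """Grid search for the constant velocity shift minimizing chi^2."""
    chi2 = [np.sum(((v_obs - (vsys + template(phase))) / v_err) ** 2)
            for vsys in grid]
    return grid[int(np.argmin(chi2))]

# Synthetic observations at 4 phases around a "true" systemic velocity
rng = np.random.default_rng(0)
phase = np.array([0.08, 0.25, 0.40, 0.58])
v_err = np.full(phase.size, 3.0)                     # km/s
v_obs = 491.0 + template(phase) + rng.normal(0, 3.0, phase.size)

grid = np.arange(450.0, 530.0, 0.1)                  # km/s
vsys = fit_systemic_velocity(phase, v_obs, v_err, grid)
```

With only a handful of epochs, the fit is well constrained as long as the template shape is fixed; the quoted $\pm7~\kms$ error additionally folds in template amplitude and phase systematics.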
\subsection{Velocity Dispersion} \label{sec:vdisp} We used 8 RGB stars (excluding the 2 binaries mentioned in \S~\ref{sec:membership}) and 6 BHB stars (hereafter the 14 star sample) to calculate the systemic velocity and the velocity dispersion of Car~II\xspace using the 2-parameter Gaussian likelihood function defined in~\citet{Walker06} and an MCMC to sample the distributions of the systemic velocity $v_\mathrm{hel}$ and the velocity dispersion $\sigma_v$. We used a flat prior for the systemic velocity with range (455, 495)~\kms and a non-informative Jeffreys prior for the velocity dispersion with range (0.01, 100)~\kms (equivalent to a flat prior in $\log(\sigma_v)$ space with range $(-2, 2)$). The probability distribution from the MCMC is shown in Figure~\ref{mcmc}. We find a systemic velocity of $v_{\rm hel} = \ensuremath{477.2 \pm 1.2}\xspace$~\kms and a velocity dispersion of $\sigma_{v} = \ensuremath{3.4^{+1.2}_{-0.8} }\xspace$~\kms, where we report the median of the posterior and the uncertainty calculated from the 16th and 84th percentiles. \input{vdisp_table.tex} \begin{figure*}[th] \plottwo{car2_vhel_mcmc}{car3_vhel_mcmc} \caption{Two-dimensional and marginalized posterior probability distributions from an MCMC sampler using a likelihood model for the systemic velocity and velocity dispersion of Car~II\xspace \emph{(left)} and those of Car~III\xspace \emph{(right)}. The 16th, 50th, and 84th percentiles are indicated by dashed lines in the 1-D histograms.} \label{mcmc} \end{figure*} In order to test the effects of our input assumptions, we also calculated the systemic velocity and velocity dispersion with different priors and different datasets. A summary of these comparisons is presented in Table~\ref{tab:vdisp_table}. With a flat prior for velocity dispersion and the same 14 star sample, the velocity dispersion is $3.8_{-0.9}^{+1.3}~\kms$, which is slightly higher than the value determined using the Jeffreys prior with the same dataset.
This result is similar to what has been seen in~\citet{kim15_peg3}. If we expand our sample to 16 stars by including the two RR Lyrae stars using the velocities derived in \S\ref{sec:rrl}, both the systemic velocity and the velocity dispersion are very similar to what we determined with the 14 star sample (default sample). We then ran similar calculations with only RGB members and only BHB members. The result with 8 RGB members is again very similar to our default sample, suggesting that the results are mostly constrained by the RGB members (which have smaller velocity uncertainties). The 6 BHB members give a smaller dispersion, but the large velocity uncertainties ($\delta_v\gtrsim4$~\kms) of the BHB stars make the results statistically consistent. We also calculated the systemic velocity and velocity dispersion using the results from each instrument to check for instrumental bias. We obtain very consistent results using the VLT data or IMACS data alone, while the AAT data show a much smaller dispersion because the members observed with AAT are mostly BHBs (plus RR Lyraes and binaries). We additionally calculated the velocity dispersion using the velocity measurements from only one epoch. We included the 14 star sample plus the two binaries. For each star the measurement with the highest S/N was chosen. The derived velocity dispersion is more than double that derived from the 14 star sample. This exercise mimics a case in which only single-epoch velocity measurements are made for each star and therefore no binary information is available. Because of the large velocity amplitudes of the two binary stars, observations made near the velocity extrema of the binary orbits can substantially inflate the apparent velocity dispersion of Car~II\xspace.
Finally, we performed a jackknife test~\citep{macqueen1967} to assess the robustness of the measured velocity dispersion with the 14 star sample, in particular to check whether the results are driven by any single star. We removed one star at a time from the 14 star sample and recomputed the systemic velocity and velocity dispersion. In the jackknife runs, the average velocity had a median difference of 0.0~\kms, a standard deviation of 0.3~\kms, and a minimum and maximum difference of $-$0.5~\kms and 0.6~\kms. For the velocity dispersion the median difference was 0.1~\kms, the standard deviation 0.2~\kms, and the minimum and maximum $-$0.5~\kms and 0.3~\kms. We conclude that, apart from the binaries, there are no individual stars whose inclusion or exclusion from the sample significantly affects the kinematics of Car~II\xspace. We also checked whether Car~II\xspace exhibits a velocity gradient, following the method of \citet{eri2}. We calculated a best-fit velocity gradient of $0.0\pm0.3\kms~{\rm arcmin}^{-1}$, consistent with the null model. We computed the Bayes Factor comparing the velocity gradient and constant velocity dispersion models and found $\ln{\rm B}=-2.2$, which favors the constant velocity dispersion model (we follow \citealt{2017MNRAS.465.2420W} in interpreting the Bayes Factor). We conclude that there is no evidence for a velocity gradient in Car~II\xspace. For Car~III\xspace, we determined a systemic velocity of \ensuremath{284.6^{+3.4}_{-3.1}}\xspace~\kms and velocity dispersion of \ensuremath{5.6^{+4.3}_{-2.1} }\xspace~\kms using the 4 identified members. We caution that the small number of stars may not yield a reliable estimate of the velocity dispersion of Car~III\xspace. Furthermore, a single binary star can easily inflate the velocity dispersion. We give a more detailed discussion of the implications of the measurements and the nature of Car~III\xspace in \S\ref{sec:properties}.
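The two-parameter likelihood used throughout this section treats each measured velocity as drawn from a Gaussian whose variance is the intrinsic dispersion added in quadrature to the per-star measurement error. A minimal sketch with synthetic velocities and a simple Metropolis sampler in place of the MCMC machinery actually used (the priors are simplified to the flat ranges quoted above; the data below are simulated, not the real sample):

```python
import numpy as np

def log_likelihood(vsys, sigma, v, dv):
    """Walker et al. (2006)-style Gaussian likelihood: intrinsic
    dispersion sigma added in quadrature to measurement errors dv."""
    var = sigma ** 2 + dv ** 2
    return -0.5 * np.sum(np.log(2.0 * np.pi * var) + (v - vsys) ** 2 / var)

def log_posterior(theta, v, dv):
    vsys, sigma = theta
    if not (455.0 < vsys < 495.0 and 0.01 < sigma < 100.0):
        return -np.inf          # outside the (flat) prior ranges
    return log_likelihood(vsys, sigma, v, dv)

# Synthetic 14-star sample: true v_sys = 477 km/s, true sigma = 3.4 km/s
rng = np.random.default_rng(1)
dv = rng.uniform(1.0, 4.0, 14)                       # per-star errors
v = rng.normal(477.0, np.sqrt(3.4 ** 2 + dv ** 2))   # observed velocities

# Plain Metropolis sampler over (v_sys, sigma)
theta = np.array([470.0, 5.0])
lp = log_posterior(theta, v, dv)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0.0, [0.5, 0.3])
    lp_prop = log_posterior(prop, v, dv)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])                       # discard burn-in
vsys_med, sigma_med = np.median(chain, axis=0)
```

Posterior medians and 16th/84th percentiles of such a chain correspond to the values and uncertainties quoted in the text.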
\subsection{Metallicity and Metallicity Dispersion} \label{sec:feh} We measured the metallicity of the red giant member stars in both systems using the equivalent widths (EWs) of the CaT lines. Following the procedure described by~\citet{simon15b} and \citet{eri2}, we fitted all three of the CaT lines with a Gaussian plus Lorentzian function and then converted the summed EWs of the three CaT lines to metallicity using the calibration relation from~\citet{carrera13} with absolute $V$ magnitude. We first performed the color transformation from DES-$g$ and DES-$r$ to apparent $V$ magnitude using Equation (5) in~\citet{bechtol15} and then adopted distance moduli of $(m-M) = 17.86$ for Car~II\xspace members and $(m-M) = 17.22$ for Car~III\xspace members to calculate absolute magnitudes. The statistical uncertainties on the EWs are calculated from the Gaussian plus Lorentzian fit. We added a systematic uncertainty of 0.2~\AA\ (as determined in \citealt{eri2}) in quadrature with the statistical uncertainties to obtain the final EW uncertainties. The metallicity uncertainties shown in Table~\ref{tab:car2_spec} are dominated by the uncertainties on the CaT EWs, with small contributions from the uncertainties on the distances, the stellar photometry, and the uncertainties on the calibration parameters from~\citet{carrera13}. Among the 10 confirmed spectroscopic RGB members in Car~II\xspace, we successfully measured metallicities for 9 stars. The metallicities of the Car~II\xspace members range from $\feh = -2.7$ to $\feh = -1.9$. We used a Gaussian likelihood model as described above for the velocities to calculate the mean metallicity and metallicity dispersion of Car~II\xspace. We find a mean metallicity of $\feh = \ensuremath{-2.44 \pm 0.09}\xspace$, with a dispersion of $\sigma_{\feh} = \ensuremath{0.22 ^{+0.10}_{-0.07}}\xspace$. The probability distribution from the MCMC is shown in Figure~\ref{mcmc_feh}.
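The EW measurement step can be sketched as fitting a line with a Gaussian plus Lorentzian profile and summing the analytic areas of the two components. A minimal sketch on a synthetic CaT-like line (the profile parameters, wavelength grid, and noise level are illustrative; the real analysis fits the observed spectra and then applies the \citealt{carrera13} calibration):

```python
import numpy as np
from scipy.optimize import curve_fit

def line_profile(wave, depth_g, sigma, depth_l, gamma, center):
    """Normalized flux: continuum at 1 minus Gaussian plus Lorentzian."""
    gauss = depth_g * np.exp(-0.5 * ((wave - center) / sigma) ** 2)
    lorentz = depth_l * gamma ** 2 / ((wave - center) ** 2 + gamma ** 2)
    return 1.0 - gauss - lorentz

def equivalent_width(depth_g, sigma, depth_l, gamma):
    """EW = integral of (1 - F/F_c): analytic areas of both components."""
    return depth_g * sigma * np.sqrt(2.0 * np.pi) + depth_l * np.pi * gamma

# Synthetic CaT 8542-like line with known EW plus noise
rng = np.random.default_rng(2)
wave = np.linspace(8527.0, 8557.0, 600)              # Angstrom
true = (0.4, 1.0, 0.15, 1.5, 8542.1)
flux = line_profile(wave, *true) + rng.normal(0.0, 0.005, wave.size)

popt, _ = curve_fit(line_profile, wave, flux,
                    p0=(0.3, 1.2, 0.1, 1.0, 8542.0),
                    bounds=([0.0, 0.2, 0.0, 0.2, 8538.0],
                            [1.0, 5.0, 1.0, 5.0, 8546.0]))
ew_fit = equivalent_width(*popt[:4])                 # Angstrom
ew_true = equivalent_width(*true[:4])
```

The statistical EW uncertainty then follows from the fit covariance, with the 0.2~\AA\ systematic term added in quadrature as described above.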
\begin{figure}[th] \plotone{car2_feh_mcmc} \caption{Two-dimensional and marginalized posterior probability distributions from an MCMC sampler using a likelihood model for the mean metallicity and metallicity dispersion of Car~II\xspace. The 16th, 50th, and 84th percentiles are indicated by dashed lines in the 1-D histograms.} \label{mcmc_feh} \end{figure} For Car~III\xspace, we measured the metallicity of the brightest RGB member, MAGLITES\,J073834.94$-$575705.4, and obtained $\feh = -1.97\pm0.12$ for this star. If this RGB member represents the mean metallicity of Car~III\xspace, then the metallicities of Car~II\xspace and Car~III\xspace differ at the 3-$\sigma$ level. Note that in Paper~I we obtained $\feh = -1.8\pm0.1$ for Car~II\xspace and $\feh = -1.8\pm0.2$ for Car~III\xspace from the isochrone fitting using photometry alone. While this metallicity estimate for Car~III\xspace is consistent with that of the brightest RGB member from the spectroscopic measurements, for Car~II\xspace the metallicity derived from isochrone fitting is higher than its spectroscopic mean metallicity. \section{Discussion} \label{sec:discussion} \subsection{Properties of Car~II\xspace and Car~III\xspace and Their Possible Association} \label{sec:properties} We calculated the mass of Car~II\xspace contained within the half-light radius according to the mass estimator from \citet{wolf10} \citep[see also][]{Walker2009}, using the velocity dispersion determined in \S\ref{sec:vdisp} and the half-light radius of Car~II\xspace from Paper~I. We found a dynamical mass of $M_{\rm 1/2} = \ensuremath{1.0^{+0.8}_{-0.4} \times 10^{6}}\xspace$~\unit{M_\odot} and a mass-to-light ratio of \ensuremath{369^{+309}_{-161}}\xspace~\unit{M_\odot}/\unit{L_\odot} for Car~II\xspace. The reported uncertainties on the dynamical mass and mass-to-light ratio include the uncertainties on the velocity dispersion from this paper, and the uncertainties on the half-light radius and luminosity from Paper~I.
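The \citet{wolf10} estimator is $M_{1/2} \approx 4\,\sigma_{\rm los}^2\,R_e/G$, with $R_e$ the projected half-light radius. A minimal numerical sketch (the half-light radius of Car~II\xspace is taken as $\approx 92$~pc here for illustration; the actual analysis propagates the Paper~I value and its uncertainty):

```python
# Wolf et al. (2010) half-light mass estimator:
#   M_1/2 ~ 4 * sigma_los^2 * R_e / G
G = 4.30091e-3      # gravitational constant in pc (km/s)^2 / Msun
sigma_los = 3.4     # km/s, Car II velocity dispersion (this work)
R_e = 92.0          # pc, projected half-light radius (illustrative value)

M_half = 4.0 * sigma_los ** 2 * R_e / G   # Msun, ~1e6
```

Propagating the asymmetric dispersion uncertainty through this formula reproduces the order of the quoted $M_{\rm 1/2}$ error bars.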
The mass of Car~II\xspace is much larger than its stellar mass, and the mass-to-light ratio is similar to those of other dwarf galaxies with comparable luminosities. The low average metallicity (\ensuremath{-2.44 \pm 0.09}\xspace) and large metallicity dispersion (\ensuremath{0.22 ^{+0.10}_{-0.07}}\xspace) also match observations of other dwarf galaxies with similar luminosities~\citep{kirby13b}. We therefore conclude that Car~II\xspace is a dark matter-dominated dwarf galaxy. Because we have only identified 4 members of Car~III\xspace, neither its mass nor its metallicity distribution is significantly constrained. We therefore cannot determine whether Car~III\xspace is a dwarf galaxy. If the metallicity of the brightest confirmed member star ($\feh = -1.97$) represents the average metallicity of the system, then Car~III\xspace is more metal-rich than most of the dwarf galaxies with similar luminosities, but still much more metal-poor than any known star cluster of similar luminosity. If the velocity dispersion calculated from the 4 confirmed members is close to the true dispersion of the system, then Car~III\xspace is likely to be a dark matter-dominated dwarf galaxy. Given the small sample, though, a single binary star could easily inflate the velocity dispersion, and therefore this dispersion should be treated with caution. Interestingly, among the 4 member stars, MAGLITES\,J073834.94$-$575705.4, the brightest RGB member, was observed in both January (IMACS) and February (VLT), and MAGLITES\,J073835.54$-$575622.3, a BHB member, was observed in January (IMACS), February (VLT), and May (AAT). The differences in velocities are consistent with the measurement uncertainties (see Table~\ref{tab:car2_spec}) and therefore we do not see any strong evidence for binarity of these two stars from our observations across 1--3 month baselines. However, the velocities of these two stars are about 8~\kms apart (and are the source of the large velocity dispersion of Car~III\xspace).
This large difference, if indeed not caused by binary motion, could provide a hint that Car~III\xspace should be classified as a dwarf galaxy. Identifying more members with deeper observations will be necessary to confirm the nature of Car~III\xspace. Observing these bright members at one or two additional epochs will also help determine whether or not they are in binary systems. We note that the mass estimator from \citet{wolf10} is only valid for dispersion-supported stellar systems in dynamical equilibrium. It is possible that Car~II\xspace has had a tidal interaction with the Milky Way ($d\sim36$~kpc), the LMC ($d\sim20$~kpc), or Car~III\xspace ($d\sim10$~kpc) due to their close proximity. Either a velocity gradient or an increasing velocity dispersion at large radii could be potential signs of tidal disruption. In \S\ref{sec:vdisp}, we concluded that we cannot detect a velocity gradient with the current data. In Figure~\ref{Car2_radial}, we show the velocity as a function of distance from the center of Car~II\xspace. Interestingly, the six BHB stars are also the outermost members. As shown in Table~\ref{tab:vdisp_table}, the velocity dispersion from the BHB sample alone is small due to the large velocity uncertainties, and therefore we also do not see a larger velocity dispersion at large radii. However, we note that both null results could stem from the large velocity uncertainties of these BHB stars. Further studies that either identify more RGB stars at large radii or improve the velocity precision for the known BHB members will be necessary to completely rule out a velocity gradient or other tidal effects. \begin{figure}[th!] \plotone{Car2_radial} \caption{Velocity as a function of distance from the center of Car~II\xspace, for Car~II\xspace members only. The two binary stars are not shown. For the two RR Lyrae stars, the systemic velocities calculated in \S\ref{sec:rrl} are used. The six outermost member stars are BHB members at $r > 10\arcmin$.
The shaded region shows the systemic velocity of Car~II\xspace with 1-$\sigma$ uncertainty. The dashed line shows the half-light radius of Car~II\xspace from Paper~I. } \label{Car2_radial} \end{figure} The fact that all 6 BHB members are the outermost stars (Figure~\ref{Car2_radial}) implies a possible non-uniform spatial distribution of the stellar populations in Car~II\xspace. However, we also caution that this distribution is likely caused by an observational selection bias. Note that the four outermost BHB members were identified only with the AAT, which has the largest FOV of the three instruments. However, the target selection for AAT was performed with an older version of the PARSEC isochrones and therefore may have missed a few RGB members with $r>r_h$, as explained in the Appendix. Further observations, including a more comprehensive search for bright RGB members outside of the half-light radius, may be necessary to fully understand the possible population-dependent spatial distribution in Car~II\xspace. To check whether the observed kinematics of Car~II\xspace and Car~III\xspace are consistent with their being gravitationally bound systems, we have computed their tidal radii ($r_t$) with the Milky Way as the host via Equation 18 of \citet{2015MNRAS.453..849B}, using the Milky Way mass model presented in \citet{2016ApJ...829..108E}. To set a lower limit on $r_t$, we used the Car~II\xspace $M_{\rm 1/2}$ value for the total mass and find $r_t\sim500$~pc. This lower limit on $r_t$ is already significantly larger than the observed size of the system. Assuming a \citet*{nfw96} mass profile with $r_s=0.1-0.5$~kpc, the total mass of Car~II\xspace within $r = 300$~pc is estimated to be $5-10$ times larger than $M_{\rm 1/2}$, implying $r_t\sim 0.9-1.1$~kpc. The mass profile of Car~III\xspace is more uncertain due to the small number of stars and the unknown nature of the object.
If we assume conservatively that Car~III\xspace is a dwarf galaxy with dispersion $\sigma=1~\kms$ and $M(r_\mathrm{max}) = 10\times M_{\rm 1/2}$, then we find $r_t\sim250$~pc, again much larger than the observed size of Car~III\xspace. If the true dispersion of Car~III\xspace is close to the measurement from a sample of 4 members, then the tidal radius will be even larger. We additionally computed $r_t$ assuming that the LMC is the host instead of the Milky Way. For Car~II\xspace, the LMC host $r_t$ values are 5-10\% larger than the Milky Way host values, while for Car~III\xspace they are $\sim50\%$ larger. Although a complete analysis of the Car~II\xspace tidal radius should include the LMC+Milky Way system, we still expect the tidal radius to be larger than the observed size of Car~II\xspace. Therefore, we conclude that Car~II\xspace is likely to be a bound system based on its current location in the Milky Way, though it is still possible that it had a smaller tidal radius if it approached closer to the Milky Way or LMC in the past. The small projected separation ($\sim18\arcmin$) of Car~II\xspace and Car~III\xspace and their similar distances naturally lead to the question of whether the two are (or were) a bound pair of satellites. Similar speculation has occurred for the satellite pairs Leo~IV--Leo~V \citep[$\Delta d_{\rm 3D} \sim 20.6~\unit{kpc}$ and $\Delta v \sim 47~\kms$;][]{2008ApJ...686L..83B, 2009ApJ...694L.144W, 2010ApJ...710.1664D} and Pisces~II--Pegasus~III \citep[$\Delta d_{\rm 3D} \sim 43~\unit{kpc}$ and $\Delta v \sim 10~\kms$;][]{kim15_peg3, 2016ApJ...833...16K}. While Car~II\xspace and Car~III\xspace have the smallest known physical separation to date, $\Delta d_{\rm 3D} \sim 10~\unit{kpc}$, their separation in velocity is quite large, $\Delta v \sim 193~\kms$.
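Whether such a pair could be bound can be checked to order of magnitude by requiring the relative velocity to be below the mutual escape velocity at the current separation, $\Delta v^2 < 2GM/d$, i.e. $M \gtrsim \Delta v^2\, d / (2G)$. A minimal sketch of this crude two-point-mass bound (not the \citealt{2014MNRAS.440.1225E} method itself):

```python
# Crude lower bound on the halo mass needed for the Car II-Car III pair
# to be bound, treating the two as point masses at their 3D separation.
G = 4.30091e-3      # pc (km/s)^2 / Msun
dv = 193.0          # km/s, Car II - Car III velocity difference
d = 10.0e3          # pc, 3D separation

M_min = dv ** 2 * d / (2.0 * G)   # Msun; a few x 10^10
```

This back-of-envelope value is of the same order as the $\sim10^{11}~\unit{M_\odot}$ obtained from the full analysis quoted below, illustrating why a bound configuration is implausible.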
We applied the method presented in \citet{2014MNRAS.440.1225E} to estimate the minimum halo mass for the Car~II\xspace--Car~III\xspace system to be bound, and find an unrealistically large halo mass of ${\sim}10^{11}~\unit{M_\odot}$ (similar to the halo mass of the LMC; \citealt{vanderMarel2014}). Based on the observed kinematics of Car~II\xspace, the escape velocity at the distance of Car~III\xspace is between $15$ and $25~\kms$, significantly smaller than the observed velocity difference. Car~II\xspace and Car~III\xspace are therefore highly unlikely to be a bound pair of satellites. Assuming the pair have similar proper motions, the two satellites would have had a close encounter and sailed past one another ${\sim}53~{\rm Myr}$ ago. Based on this trajectory and the observed separation, they would have passed within $200$~pc of one another ($\sim2\times(r_{1/2, \, {\rm Car~II\xspace}}+r_{1/2, {\rm Car~III\xspace}})$). At the point of closest encounter, the Car~III\xspace tidal radius would have been no more than a few tens of parsecs. Regardless of the nature of Car~III\xspace, a close encounter between the satellites could have disrupted Car~III\xspace. While there is no reason to expect Car~II\xspace and Car~III\xspace to have similar proper motions given the large difference in their radial velocities, it will be interesting to explore this scenario further when proper motions are available. We note that the brightest spectroscopic members in both Car~II\xspace and Car~III\xspace are brighter than the faint limit for Gaia proper motion measurements. The properties of Car~II\xspace and Car~III\xspace derived in this paper are summarized in Table~\ref{tab:car2_table}. \subsection{Association with the Magellanic Clouds} \label{sec:lmcconnect} As discussed in \S\ref{sec:intro}, the MagLiteS survey was designed to search for satellites of the Magellanic Clouds.
Since the survey searches the vicinity of the LMC and SMC, it is unsurprising that Car~II\xspace and Car~III\xspace are physically close to the Magellanic Clouds. The newly measured velocities of the Carina pair can now be used to test whether a physical association with the Clouds is likely. \begin{figure*}[th!] \plottwo{jethwa_lmc_sims}{ms_data} \caption{\emph{Upper panels:} On-sky positions of the newly discovered satellites in DES (red outline) and MagLiteS (yellow outline), together with the LMC (white square) and SMC (white circle), shown in Magellanic Stream coordinates~\citep{Nidever2008}. Car~II\xspace and Car~III\xspace, the Magellanic Clouds, and several other dwarf galaxies form a tight sequence on the sky. The black dashed line is a fit to this sequence as described in Paper~I. \emph{Lower panels:} Line-of-sight velocities for ultra-faint satellites near the Magellanic Clouds as a function of Magellanic Stream longitude. Objects with velocities from the literature are plotted as orange diamonds \citep{kirby15, simon15b, koposov15, 2016ApJ...819...53W, tuc3}, while measurements of Car~II\xspace and Car~III\xspace from this work are displayed in green and magenta, respectively. The gray contours show the probability distribution of the LMC satellites from \citet{jethwa16} (left) and the neutral hydrogen column density from~\citet{Nidever2010} (right). The dash-dotted (dashed) curves show the leading (trailing) orbit of the LMC.
} \label{mc_dist} \end{figure*} To aid comparison with models, we first transform the line-of-sight velocities of Car~II\xspace and Car~III\xspace from the heliocentric frame to the Galactic Standard of Rest frame\footnote{We adopted the circular orbital velocity of the Milky Way at the Sun's radius $\Theta_{0} = 239~\kms$~\citep{McMillan2011} and solar motion of $(U_\odot,~V_\odot,~W_\odot) = (11.1,~12.24,~7.25)~\kms$~\citep{Schonrich2010} for the velocity transformation from heliocentric to Galactic Standard of Rest to match the values used in~\citet{jethwa16}.} (GSR) and obtain $v_{\rm GSR,~Car~II\xspace} = \ensuremath{235}\xspace$~\kms and $v_{\rm GSR,~Car~III\xspace} = \ensuremath{42}\xspace$~\kms. Next, we compare these measurements with the dynamical model of Magellanic satellites presented in~\citet{jethwa16}. Assuming an association with the LMC, this model predicts a velocity of $v_{\rm GSR} = 118^{+142}_{-80}$~\kms ($149^{+142}_{-114}$~\kms) at the position of Car~II\xspace (Car~III\xspace). For an association with the SMC, the predicted velocities are higher, at $v_{\rm GSR} = 350^{+50}_{-70}$~\kms for both Car~II\xspace and Car~III\xspace. According to this model, both Carinas therefore have velocities consistent with an LMC association. In Figure~\ref{mc_dist}, we show the comparison between the observed phase-space distribution of dwarf galaxies/dwarf galaxy candidates and the simulated probability distribution of LMC satellites from the \citet{jethwa16} model, and the neutral hydrogen gas column density from \citet{Nidever2010}. According to the \citeauthor{jethwa16} model, both Car~II\xspace and Car~III\xspace are consistent with having originated with the LMC. As pointed out in \S\ref{sec:lmcstar}, the non-rotating LMC halo population and the rotating LMC disk both have $v_\mathrm{hel}\sim380~\kms$ at the location of Car~II\xspace and Car~III\xspace, which differs by $\sim$100~\kms from the heliocentric velocities of Car~II\xspace and Car~III\xspace.
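The heliocentric-to-GSR conversion described above simply adds the projection of the solar motion (with the circular velocity $\Theta_0$ folded into the $V$ component) along the line of sight. A minimal sketch, with the Galactic coordinates of the Carina pair approximated as $(l, b) \approx (270\degr, -17\degr)$ for illustration:

```python
import numpy as np

def v_gsr(v_hel, l_deg, b_deg,
          solar_uvw=(11.1, 12.24, 7.25), theta0=239.0):
    """Heliocentric to Galactic-Standard-of-Rest line-of-sight velocity
    (solar motion from Schoenrich et al. 2010, Theta_0 from McMillan 2011)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    u, v, w = solar_uvw
    return (v_hel
            + u * np.cos(l) * np.cos(b)
            + (v + theta0) * np.sin(l) * np.cos(b)
            + w * np.sin(b))

# Approximate Galactic coordinates of the Carina pair (illustrative)
v_car2 = v_gsr(477.2, 270.0, -17.0)   # ~235 km/s
v_car3 = v_gsr(284.6, 270.0, -17.0)   # ~42 km/s
```

Because the pair lies near $l \approx 270\degr$, the correction is dominated by the $(V_\odot + \Theta_0)\sin l \cos b$ term, which is why the GSR velocities are so much smaller than the heliocentric ones.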
According to \citet{vanderMarel2014}, the enclosed LMC mass out to a radius of 8.7 kpc is $M(<8.7~{\rm kpc}) = 1.7 \times 10^{10}~\unit{M_\odot}$. The corresponding escape velocity is $\sim90~\kms$ at the distance of Car~II\xspace ($\sim 18\unit{kpc}$ from LMC) and $\sim75~\kms$ at the distance of Car~III\xspace ($\sim 25\unit{kpc}$ from LMC). Because these values are based on a lower limit to the enclosed mass of the LMC, the actual escape velocities are likely to be somewhat larger. Therefore, we tentatively suggest that one or both of Car~II\xspace and Car~III\xspace may be bound satellites of the LMC, although proper motion measurements will be needed to confirm this hypothesis. We also note that all three newly discovered MagLiteS satellite candidates (Car~II\xspace, Car~III\xspace, and Pictor II) fall along a linear sequence on the sky as defined by the positions of the LMC, the SMC, and 7 of the DES satellite candidates. This configuration is shown by the dashed black line in Figure~\ref{mc_dist}. This linear sequence was first pointed out in~\citet{jethwa16}, prior to any MagLiteS discoveries. As discussed in Paper~I, it is unclear whether this linear sequence corresponds to a planar distribution of satellites around the Magellanic Clouds, or simply a satellite distribution that is elongated along the LMC--SMC separation vector. Once dynamical models of both scenarios are available, the velocities we have measured may provide a useful discriminant. \subsection{J and D-factors} \label{sec:jfactor} Milky Way satellite galaxies are among the most promising targets for indirect dark matter searches due to their substantial dark matter content, proximity, and dearth of conventional non-thermal emission \citep[e.g.,][]{Baltz:2008wd,Winter2016}.
In particular, analyses of {$\gamma$-ray}\xspace data from the {\it Fermi}\xspace Large Area Telescope (LAT) around previously known Milky Way satellites are now sensitive to dark matter annihilating at the canonical thermal relic cross section for particle masses up to 100 \unit{GeV} \citep[e.g.,][]{Ackermann2015,GeringerSameth2015}. The discovery of additional Milky Way satellites, especially nearby objects such as Car~II\xspace and Car~III\xspace, can improve the sensitivity of such searches \citep{He2015,Charles2016}, as demonstrated by \citet{dw15a} and \citet{Albert2017}. In this subsection, we compute the astrophysical component of the dark matter annihilation and decay fluxes, the so-called J and D-Factors, for both Car~II\xspace and Car~III\xspace. The J-factor is the line-of-sight integral of the dark matter density squared: $J(\theta) = \int \rho_{\rm DM}^2 \mathrm{d}\Omega \mathrm{d}l$. The D-Factor is the linear analog: $D(\theta) = \int \rho_{\rm DM} \mathrm{d}\Omega \mathrm{d}l$. Here, $\rho_{\rm DM}$ is the dark matter density and the integral is performed over a solid angle $\Delta \Omega$ with radius $\theta$. The standard approach for computing $\rho_{\rm DM}$ in dwarf spheroidal galaxies uses the spherical Jeans equation \citep[e.g.,][]{strigari2008, 2015MNRAS.446.3002B}. The three main ingredients of a spherical Jeans analysis are: the stellar density profile, which we modeled as a Plummer profile \citep{1911MNRAS..71..460P}; the gravitational potential, assumed to be dark matter-dominated and modeled with a Navarro-Frenk-White profile \citep{nfw96}; and the stellar anisotropy, modeled with a constant profile\footnote{Analysis with generalized stellar, dark matter, and anisotropy profiles would produce larger confidence intervals \citep{2015MNRAS.453..849B}.}. 
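The line-of-sight integral defining the J-factor above can be evaluated by direct numerical integration; the sketch below does this for a spherical NFW halo with a simple midpoint rule. The halo parameters, distance, and the approximate unit-conversion constant are hypothetical placeholders, not the fitted values for Car~II or Car~III:

```python
import math

KPC_CM = 3.086e21                 # cm per kpc
MSUN_KPC3_TO_GEV_CM3 = 3.8e-8     # approximate: 1 Msun/kpc^3 in GeV/cm^3

def rho_nfw(r, rho_s, r_s):
    """NFW density profile: rho_s / ((r/r_s) * (1 + r/r_s)^2)."""
    x = r / r_s
    return rho_s / (x * (1.0 + x) ** 2)

def j_factor(theta_deg, d_kpc, rho_s, r_s, n=120):
    """J(theta) = int rho^2 dOmega dl over a cone of radius theta,
    for a halo at distance d_kpc; returns GeV^2 cm^-5 (midpoint rule)."""
    theta = math.radians(theta_deg)
    dpsi = theta / n
    s_max = 2.0 * d_kpc          # truncate the line of sight at 2*d
    ds = s_max / n
    total = 0.0
    for i in range(n):
        psi = (i + 0.5) * dpsi   # angle from the halo center
        los = 0.0
        for j in range(n):
            s = (j + 0.5) * ds   # distance along the line of sight
            r = math.sqrt(d_kpc**2 + s**2 - 2.0 * d_kpc * s * math.cos(psi))
            los += rho_nfw(r, rho_s, r_s) ** 2 * ds
        total += 2.0 * math.pi * math.sin(psi) * los * dpsi
    # (Msun/kpc^3)^2 * kpc  ->  GeV^2 cm^-5
    return total * MSUN_KPC3_TO_GEV_CM3**2 * KPC_CM
```

In the actual analysis the posterior over $(\rho_s, r_s)$, the tidal truncation, and the measured distance set the quoted values; this sketch only shows the geometry of the integral.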
Jeffreys priors are assumed for the dark matter halo parameters: $-2 < \log_{10}{\left( r_s/{\rm kpc}\right)} < 1$ and $4 < \log_{10}{\left( \rho_s/{\rm \unit{M_\odot} \, kpc^{-3}}\right)} < 14$ for the scale radius, $r_s$, and scale density, $\rho_s$, respectively. Additionally, a prior of $r_s > r_{1/2}$ is imposed, where $r_{1/2}$ is the azimuthally averaged stellar half-light radius. We adopted the $r_s>r_{1/2}$ prior for several reasons: in our posterior distributions there are no trends between $J$ and $r_s$ except for $r_s<r_{1/2}$, where $J$ is systematically higher; the J-factor tends to be overestimated in mock data sets without this cut \citep[see Section 4.1 of][]{2015MNRAS.446.3002B}; and small $r_s$ values are disfavored in $\Lambda$CDM N-body simulations \citep[based on][ a halo with $V_{\rm max}\sim5-10\kms$ has $r_s\sim100-300$~pc]{2014MNRAS.444..222G}. For the anisotropy prior, we assumed a flat symmetrized anisotropy parameter: $-0.95 < \tilde{\beta} < 1.0$ \citep[see Eq. 8 in][]{2006MNRAS.367..387R}. A flat prior was used for the average velocity ($470 < \overline{v} < 490 \kms$) and Gaussian priors were assumed for the distance and structural parameters\footnote{We used the azimuthally averaged half-light radius to account for the axisymmetry of the systems ($r_{1/2} = r_{\rm azimuthal} = r_{\rm major}\sqrt{1 -\epsilon}$). For non-spherical analysis of dwarf galaxy J-factors see \citet{2016MNRAS.461.2914H} and \citet{2016PhRvD..94f3521S}.}. We used an unbinned likelihood function \citep{2008ApJ...678..614S, Martinez2009JCAP...06..014M, Geringer-Sameth2015ApJ...801...74G} and determined posterior distributions with the MultiNest sampling routine \citep{2008MNRAS.384..449F, 2009MNRAS.398.1601F}. We estimated the dark matter $r_t$ (required to compute the J and D-Factors) at each point in the posterior distribution by iteratively computing the enclosed mass and solving for $r_t$ as described in \S\ref{sec:properties}.
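The prior volume described above can be sketched as a simple rejection sampler; this only illustrates the priors (log-uniform in $r_s$ and $\rho_s$, flat in $\tilde{\beta}$, with the $r_s > r_{1/2}$ cut), not the full MultiNest posterior sampling:

```python
import random

def sample_halo_prior(r_half_kpc, rng):
    """Draw (r_s, rho_s, beta_tilde) from the priors quoted in the text,
    rejecting draws that violate the r_s > r_half cut."""
    while True:
        r_s = 10.0 ** rng.uniform(-2.0, 1.0)    # -2 < log10(r_s/kpc) < 1
        if r_s <= r_half_kpc:
            continue                            # enforce r_s > r_1/2
        rho_s = 10.0 ** rng.uniform(4.0, 14.0)  # 4 < log10(rho_s) < 14
        beta = rng.uniform(-0.95, 1.0)          # flat symmetrized anisotropy
        return r_s, rho_s, beta

rng = random.Random(42)
draws = [sample_halo_prior(0.1, rng) for _ in range(1000)]
```

The half-light radius argument (0.1 kpc here) is a placeholder; the analysis uses the measured, azimuthally averaged $r_{1/2}$ of each system.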
We find the Car~II\xspace $r_t$ posterior to be roughly Gaussian, centered at $1~\unit{kpc}$ but containing a substantial tail to larger values. We calculated the Car~II\xspace integrated J-factor enclosed within solid angles of radii $\theta=\alpha_c, 0.1, 0.2, 0.5^\circ$ to be $\log_{10}{\left(J/{\rm GeV^2 \, cm^{-5}} \right)} = \ensuremath{18.1_{-0.5}^{+0.5}}\xspace, \ensuremath{17.9_{-0.5}^{+0.6}}\xspace, \ensuremath{18.0_{-0.5}^{+0.5}}\xspace, \ensuremath{18.2_{-0.5}^{+0.5}}\xspace$, respectively, using the 14 star sample. $\alpha_c$ is the angle within which the J-factor errors are minimized \citep{Walker2011ApJ...733L..46W}; $\alpha_c=2 r_{1/2} /d\approx 0.23^\circ$ for Car~II\xspace. The equivalent radius for the D-Factor occurs at $\alpha_c/2$. We determined the D-Factor within $\theta=\alpha_c/2, 0.1, 0.2, 0.5^\circ$ to be $\log_{10}{\left(D/{\rm GeV \, cm^{-2}} \right)} = \ensuremath{17.1_{-0.3}^{+0.3}}\xspace, \ensuremath{16.9_{-0.3}^{+0.3}}\xspace, \ensuremath{17.4_{-0.3}^{+0.3}}\xspace, \ensuremath{18.0_{-0.4}^{+0.4}}\xspace$. These values agree with the simple J-factor estimator \citep[Eq. 13 of][]{2016PhRvD..93j3512E}, but are an order of magnitude smaller than predicted by simple empirical J-distance scaling relations \citep{dw15a, Albert2017}. The J-factor scales steeply with the velocity dispersion ($J\propto M^2\propto\sigma^4$), and an increase of only $\Delta \sigma \sim 1.5 \kms$ would move Car~II\xspace onto the J-distance scaling relation (Pace \& Strigari in prep). There are multiple ultra-faint satellites with larger J-factors \citep[$6-8$ are larger depending on the J-factor compilation;][]{Geringer-Sameth:2014yza, 2015MNRAS.453..849B}. The D-Factor at $0.1^\circ$ is smaller than that of most of the other dSphs \citep{2015MNRAS.453..849B}. Though Car~II\xspace has similar distance and velocity dispersion to Ret~II, its J-factor is smaller because it has a larger $r_{1/2}$ (Pace \& Strigari in prep).
Car~II\xspace is therefore not the most promising individual target for a dark matter annihilation signal but will be a useful addition in stacked analyses. We applied the same methodology to the 4 star sample of Car~III\xspace. We find the integrated J-factor within solid angles of radii $\theta=\alpha_c, 0.1, 0.2, 0.5^\circ$ to be $\log_{10}{\left(J/{\rm GeV^2 \, cm^{-5}} \right)} = 19.8_{-0.9}^{+1.0}, \ensuremath{19.9_{-0.9}^{+1.0}}\xspace, \ensuremath{20.1_{-0.9}^{+1.0}}\xspace, \ensuremath{20.2_{-0.9}^{+1.0}}\xspace$, respectively. $\alpha_c=0.08^\circ$ for Car~III\xspace. The D-Factor for Car~III\xspace within $\theta=\alpha_c/2, 0.1, 0.2, 0.5^\circ$ is $\log_{10}{\left(D/{\rm GeV \, cm^{-2}} \right)} = 17.2_{-0.4}^{+0.5}, \ensuremath{17.8_{-0.5}^{+0.5}}\xspace, \ensuremath{18.3_{-0.5}^{+0.6}}\xspace, \ensuremath{18.8_{-0.7}^{+0.6}}\xspace$. The J-factor estimate for Car~III\xspace is larger than that of Car~II\xspace due to its proximity, smaller size, and larger (but uncertain) velocity dispersion. From our analysis, Car~III\xspace potentially has one of the largest J-factors. However, given the very small stellar kinematic sample and the uncertain classification of Car~III\xspace, it is premature to draw strong conclusions about the suitability of Car~III\xspace as a dark matter annihilation target. As discussed in \S\ref{sec:properties}, the large velocity dispersion could result from binary star motions, small number statistics, or possible tidal effects. As a cautionary case in point, the Triangulum~II ultra-faint dwarf galaxy has recently had its J-Factor values revised downward due to the identification of previously unresolved binary stars \citep{2016MNRAS.463.3630G, 2017ApJ...838...83K}. In addition, the velocity dispersion of Bo\"{o}tes~II is likely overestimated in past determinations due to the presence of one binary star~\citep{2009ApJ...690..453K, 2016ApJ...817...41J}. \begin{figure}[t!]
\epsscale{1.35} \plotone{carina_skymap} \caption{ {\it Fermi}\xspace LAT {$\gamma$-ray}\xspace counts map ($E > 1 \unit{GeV}$) in the vicinity of Car~II and Car~III (Galactic coordinates). White plus signs indicate the positions of known gamma-ray sources from the 3FGL. Open diamonds indicate the positions of new gamma-ray point-source candidates found in this analysis. \label{fermi_counts} } \end{figure} \subsection{Gamma-Ray Observations} \label{sec:gamma} We searched for excess {$\gamma$-ray}\xspace emission coincident with Car~II\xspace and Car~III\xspace using eight years of LAT data (2008 August 4 to 2016 August 5) passing the \irf{P8R2\_SOURCE} event class selection from $500\unit{MeV}$ to $500\unit{GeV}$. The low-energy bound of 500 \unit{MeV} was selected to mitigate the impact of leakage from the bright limb of the Earth because the LAT point spread function broadens considerably below that energy. The high-energy bound of 500 \unit{GeV} is chosen to mitigate the effect of the increasing residual charged-particle background at higher energies~\citep{Ackermann:2014usa}. To remove {$\gamma$ rays}\xspace produced by cosmic-ray interactions in the Earth's limb, we rejected events with zenith angles greater than $100^{\circ}$. To analyze data coincident with Car~II\xspace and Car~III\xspace, we used $10^{\circ}\times 10^{\circ}$ regions of interest (ROIs) centered on each object. Data reduction was performed using \emph{ScienceTools}\xspace version 11-05-03.\footnote{\url{http://fermi.gsfc.nasa.gov/ssc/data/analysis/software/}} We used the maximum-likelihood analysis pipeline described by \citet{Ackermann:2013yva} to test for {$\gamma$-ray}\xspace emission coincident with Car~II and III in excess of the known astrophysical backgrounds. 
The background model for the ROI includes Galactic interstellar emission \citep{2016ApJS..223...26A}, isotropic emission\footnote{\url{http://fermi.gsfc.nasa.gov/ssc/data/access/lat/BackgroundModels.html}}, and point sources from a catalog derived from four years of data \citep[3FGL; ][]{3fgl}. Car~II\xspace and Car~III\xspace reside in a region of the sky where the diffuse $\gamma$-ray background is relatively smooth (Galactic latitude of $\sim\,15\degr$), and the nearest 3FGL catalog source is $\sim2\degr$ away. We first created a detection significance map for the entire $10^{\circ}\times 10^{\circ}$ ROI by rastering a putative point source with fixed power-law spectrum ($dN/dE \sim E^{-2}$) across the ROI in $0.1\degr$ steps and computing the improvement in the delta log-likelihood test statistic \citep[TS;][]{Mattox:1996zz}. This procedure led to the identification of three additional point-like source candidates in the region, none of which are within $3\degr$ of Car~II\xspace or III (Figure~\ref{fermi_counts}). The TS obtained at the locations of Car~II\xspace and Car~III\xspace is 0.16 and 4.2, respectively, both consistent with the background-only hypothesis. We note that Car~II\xspace and Car~III\xspace would not be resolvable as independent sources given the resolution of the LAT instrument, which is $\sim1\deg$ at 1\unit{GeV} and asymptotes to $\sim0.1\deg$ above 10\unit{GeV}. The TSs associated with Car~II\xspace and Car~III\xspace are thus correlated, and can be attributed to a single excess of counts located $\sim\,0.7\degr$ from Car~III\xspace at ($\alpha, \delta$) = (114.5\degr, $-$57.9\degr). To search for {$\gamma$-ray}\xspace emission consistent with the annihilation of a dark matter particle into standard model products, we fit the {$\gamma$-ray}\xspace data coincident with Car~II\xspace and Car~III\xspace using a variety of spectral models generated by DMFit \citep{Jeltema2008,Ackermann:2013yva}. 
We scan a range of dark matter masses spanning from $2\unit{GeV}$ to $10\unit{TeV}$ and annihilating through the \ensuremath{b \bar b}\xspace and \ensuremath{\tau^{+}\tau^{-}}\xspace channels. The most significant excess has TS = 6.2 and occurs for a dark matter particle with mass 35.4 \unit{GeV} annihilating through the \ensuremath{\tau^{+}\tau^{-}}\xspace channel. Given that the statistical significance of this excess is well below the typical point-source detection criteria of the LAT (TS = 25), we calculate limits on the dark matter annihilation cross section, \ensuremath{\langle \sigma v \rangle}\xspace, using the J-factors derived in \S\ref{sec:jfactor}. We find that Car~II\xspace (Car~III\xspace) can be used to constrain $\ensuremath{\langle \sigma v \rangle}\xspace < 2.2 \times 10^{-24} \unit{cm}^3 \unit{s}^{-1}$ ($3.3 \times 10^{-25} \unit{cm}^3 \unit{s}^{-1}$) for $100 \unit{GeV}$ dark matter particles annihilating through the \ensuremath{b \bar b}\xspace channel. These constraints are $\sim 100$ ($\sim 10$) times larger than the thermal-relic cross section \citep[i.e.,][]{Steigman:2012nb}. We again caution against overinterpreting the constraints derived from Car~III\xspace because the J-factor value of Car~III\xspace is derived from a very small stellar kinematic sample. \section{SUMMARY} \label{sec:summary} In this paper, we presented the first spectroscopic analysis of the Carina~II and Carina~III dwarf galaxy candidates recently discovered in MagLiteS. Based on the kinematic and chemical properties of 18 confirmed spectroscopic member stars in Car~II\xspace, we conclude that it is a dark-matter-dominated dwarf galaxy. On the other hand, only 4 members were identified in Car~III\xspace. With this small spectroscopic sample we cannot yet determine whether Car~III\xspace is a compact dwarf galaxy or an extended star cluster. 
While Car~II\xspace and Car~III\xspace are located very close to each other both in sky projection ($\sim18'$) and in three dimensions ($\sim10\unit{kpc}$), their systemic velocities differ by $\sim200\kms$. We therefore conclude that these two systems are unlikely to be a pair of bound satellites. Both Car~II\xspace and Car~III\xspace have line-of-sight velocities consistent with the hypothesis that they formed as members of a group of satellites around the Large Magellanic Cloud (LMC). Furthermore, one or both systems might remain bound to the LMC, given the small difference in line-of-sight velocity. The brightest RGB members of Car~II\xspace and Car~III\xspace are bright enough for Gaia proper motion measurements to test this hypothesis. We further identify one blue horizontal branch (BHB) star as a likely LMC member in the region of Car~II\xspace. Located about 18\degr\ from the center of the LMC, this star is one of the LMC's most distant spectroscopically confirmed BHB members, and may provide insight into the structure and dynamics of the LMC's outer regions. No statistically significant excess of {$\gamma$-ray}\xspace emission is found at the locations of Car~II\xspace and Car~III\xspace in eight years of \textit{Fermi}-LAT data. Using the J-factors derived from the kinematic data, Car~II\xspace and Car~III\xspace can be used to constrain the dark matter annihilation cross section. \acknowledgements{ TSL thanks Leo Girardi for helpful discussions regarding the PARSEC isochrones. The authors thank Louis Strigari for helpful conversations regarding the J-factor calculation. We are grateful for the service observations and Director's Discretionary time on AAT and VLT. We thank the service observers, Jeffrey Simpson and Chris Lidman, for collecting the AAT data during the service runs. We acknowledge the traditional owners of the land on which the AAT stands, the Gamilaraay people, and pay our respects to elders past and present.
This project is partially supported by the NASA Fermi Guest Investigator Program Cycle 9 No. 91201. JDS acknowledges support from the National Science Foundation under grant AST-1714873. ABP acknowledges generous support from the George P. and Cynthia Woods Institute for Fundamental Physics and Astronomy at Texas A\&M University. MASC is supported by the {\it Atracci\'on de Talento} contract no. 2016-T1/TIC-1542 granted by the Comunidad de Madrid in Spain. APJ is supported by NASA through Hubble Fellowship grant HST-HF2-51393.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. BCC acknowledges the support of the Australian Research Council through Discovery project DP150100862. This research has made use of NASA's Astrophysics Data System Bibliographic Services. This research made use of \code{Astropy}, a community-developed core Python package for Astronomy~\citep{Astropy2013}. Contour plots were generated using \code{corner.py} \citep{corner}. This project used data obtained with the Dark Energy Camera (DECam), which was constructed by the Dark Energy Survey (DES) collaboration. Funding for the DES Projects has been provided by the U.S. Department of Energy, the U.S. 
National Science Foundation, the Ministry of Science and Education of Spain, the Science and Technology Facilities Council of the United Kingdom, the Higher Education Funding Council for England, the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign, the Kavli Institute of Cosmological Physics at the University of Chicago, the Center for Cosmology and Astro-Particle Physics at the Ohio State University, the Mitchell Institute for Fundamental Physics and Astronomy at Texas A\&M University, Financiadora de Estudos e Projetos, Funda{\c c}{\~a}o Carlos Chagas Filho de Amparo {\`a} Pesquisa do Estado do Rio de Janeiro, Conselho Nacional de Desenvolvimento Cient{\'i}fico e Tecnol{\'o}gico and the Minist{\'e}rio da Ci{\^e}ncia, Tecnologia e Inovac{\~a}o, the Deutsche Forschungsgemeinschaft, and the Collaborating Institutions in the Dark Energy Survey. The Collaborating Institutions are Argonne National Laboratory, the University of California at Santa Cruz, the University of Cambridge, Centro de Investigaciones En{\'e}rgeticas, Medioambientales y Tecnol{\'o}gicas-Madrid, the University of Chicago, University College London, the DES-Brazil Consortium, the University of Edinburgh, the Eidgen{\"o}ssische Technische Hoch\-schule (ETH) Z{\"u}rich, Fermi National Accelerator Laboratory, the University of Illinois at Urbana-Champaign, the Institut de Ci{\`e}ncies de l'Espai (IEEC/CSIC), the Institut de F{\'i}sica d'Altes Energies, Lawrence Berkeley National Laboratory, the Ludwig-Maximilians Universit{\"a}t M{\"u}nchen and the associated Excellence Cluster Universe, the University of Michigan, the National Optical Astronomy Observatory, the University of Nottingham, the Ohio State University, the OzDES Membership Consortium, the University of Pennsylvania, the University of Portsmouth, SLAC National Accelerator Laboratory, Stanford University, the University of Sussex, and Texas A\&M University.
Based on observations at Cerro Tololo Inter-American Observatory, National Optical Astronomy Observatory (NOAO Prop. ID 2016A-0366; PI: Keith Bechtol), which is operated by the Association of Universities for Research in Astronomy (AURA) under a cooperative agreement with the National Science Foundation. } {\it Facilities:} \facility{Magellan/Baade (IMACS); Very Large Telescope (GIRAFFE+FLAMES); Anglo-Australian Telescope (AAOmega+2dF)} \bibliographystyle{apj}
\subsubsection{SGX} \label{sgx-back} The Intel SGX architecture offers a set of x86-64 ISA extensions that enable applications to instantiate a secure software container, called an~\emph{enclave}: an area in the virtual address space of the application that is protected by the processor from accesses by any software that does not reside in it (e.g., other applications, the OS, the BIOS). The enclave data is stored in a reserved memory cache, called the Enclave Page Cache (EPC). The Memory Encryption Engine (MEE) encrypts the enclave data in the EPC to prevent memory attacks (e.g., memory snooping). To access enclave data in the EPC, the processor enters a new CPU mode, called \emph{enclave mode}, which applies additional hardware verifications to each memory access. Specifically, the data in the EPC is decrypted only when entering the CPU package (in enclave mode) and is encrypted again and stored back to the EPC when leaving the CPU package. Untrusted code can make incoming calls (ECALLs) to trusted enclave functions defined and exposed by developers, while enclave code can make outgoing calls (OCALLs) to untrusted code. When enclave execution is interrupted by asynchronous events, such as interrupts and exceptions, the processor state is securely saved inside the enclave to prevent any leakage of secrets. After the event is serviced, the processor state can be restored and enclave execution resumes from the point of interruption. An enclave can prove that it has been properly instantiated on a platform through \emph{CPU-based attestation}. There are two attestation categories: \emph{local} and \emph{remote}. Local attestation enables two enclaves instantiated on the same platform to authenticate each other, while remote attestation enables an enclave instantiated on a remote platform to prove to a remote attestation provider that it is ``trusted'', so that secrets can be provisioned to it~\cite{johnson2016intel}.
\subsubsection {DPDK} DPDK consists of a set of libraries and optimized Network Interface Card (NIC) drivers for highly scalable and fast packet processing, designed to run on any processor. It avoids the overhead imposed by Linux kernel processing (e.g., system calls, context switching on blocking I/O, copying data from kernel to user space, interrupts) and achieves high performance by: 1) leveraging processor affinity, 2) allocating huge memory pages to avoid swaps and reduce TLB misses, 3) placing device drivers in user space to achieve zero-copy packet processing, 4) accessing all devices by polling, 5) achieving synchronization without locks, and 6) handling large batches of packets and distributing them to processing threads for unified processing. Each DPDK process (application) occupies one CPU core in full, but can actually use one or more of its logical cores. To exchange data among logical cores, lock-less First-In-First-Out (FIFO) ring structures are used. Each application can make use of DPDK libraries that provide network packet buffer management and packet forwarding mechanisms, and implement the TCP/IP protocol stack. \subsection {Related Work} \label{related-work} In this section, we discuss software and hardware-based approaches that protect applications against unauthorized access. We also present some related work based on SGX and a few approaches that study the application of NFs directly to encrypted data. \subsubsection {Software-Based Protection} One of the very first works towards protecting applications and their sensitive data from unauthorized access by privileged software is~\emph{NGSCB}~\cite{peinado2004ngscb}. NGSCB made use of virtualization to run trusted and untrusted OSs simultaneously on the same machine, enabling critical applications to use the trusted OS.
A similar approach was also taken by~\emph{Proxos}~\cite{ta2006splitting}, which requires application developers to specify which system calls are sensitive, so that they are forwarded to a trusted private OS, protecting applications against an untrusted OS. Approaches, such as~\emph{Overshadow}~\cite{chen2008overshadow},~\emph{Virtual Ghost}~\cite{criswell2014virtual} and~\emph{InkTag}~\cite{hofmann2013inktag}, assumed a trusted virtualization layer to protect sensitive application data and aimed to reduce the size of the TCB. Specifically, Overshadow offers different views of physical memory for each memory access: an application has a normal view of its resources, while the OS sees an encrypted one. While Overshadow focuses on ensuring that applications are isolated from the OS, InkTag allows applications to use the services of an untrusted OS and define their own access control policies on secure files. Virtual Ghost utilizes compiler support to secure applications from an untrusted OS and creates secure memory, which can be neither read nor written by the OS.~\emph{MiniBox}~\cite{li2014minibox} is a two-way sandbox that protects critical applications from a malicious OS, as well as an OS from malicious applications. All these approaches are purely based on software and do not require any special hardware support. Therefore, they can be used in cases where hardware-based solutions, such as SGX, cannot be deployed. \subsubsection {Hardware-Based Protection} \label{nets} Several systems have utilized trusted hardware to secure applications running on them along with sensitive data from unauthorized access. \emph{Trusted Platform Modules (TPMs)}~\cite{tpm} provide a dedicated micro-controller for the secure generation of cryptographic keys and for restricting their accessibility. TPMs also support remote attestation and data sealing functions.
However, there are privacy concerns associated with the \emph{Direct Anonymous Attestation} (DAA) scheme used by TPM when a small number of keys is used for the entire platform lifetime~\cite{leung2008possible}. To address those concerns, SGX extends DAA by using an Enhanced Privacy ID (EPID) key during remote attestation~\cite{johnson2016intel}. \emph{Secure co-processors}~\cite{smith1999building, dyer2001building} offer hardware that can be trusted even in cases of physical attacks, so that trusted computations can be performed on untrusted remote devices. However, they are expensive and their performance is limited due to thermal throttling issues. \emph{ARM TrustZone}~\cite{arm2009security} is a security technology for System-on-a-Chip (SoC) and CPU systems based on the concept of physically separated trusted and untrusted worlds. It has been used to build an embedded virtualization system on commodity hardware~\cite{pinto2014towards} and a multi-layer security architecture for mobile devices~\cite{lengyel2014multi}. TrustZone is mainly used by embedded systems and does not offer memory encryption; therefore, attacks are possible given physical DRAM access. \emph{AMD's memory encryption technology}~\cite{kaplan2016amd} is integrated into the x86 CPU architecture and offers a security subsystem for key generation, platform boot, off-chip storage for sensitive data, protection against physical memory attacks and support for encrypted virtual machines. However, it specializes in memory encryption and does not provide a framework to run applications in trusted mode. \subsubsection{SGX-Based Approaches} \emph{SGX} offers CPU features that enable applications to instantiate secure and trusted enclaves. Research areas, to which SGX has been applied, include networked and distributed systems, cloud systems and applications.
In Network Function Virtualization (NFV) environments, designs for an enclavized NAT, a policy control and intrusion detection application, and an HTTP and web caching proxy~\cite{shih2016s}, as well as an extension to enclavize the Click modular router~\cite{coughlin2017trusted}, have been presented. These approaches do not study the performance and overhead of applying SGX to frame processing and trusted encryption/decryption for IPsec and MACsec traffic, and the experimental results they provide are limited. Designs that explore how SGX could strengthen the security and privacy of peer-to-peer anonymity networks, such as Tor~\cite{kim2015first,kim2017enhancing}, network protocols, such as TLS~\cite{aublintalos}, and distributed services, such as the Apache ZooKeeper~\cite{brenner2016securekeeper}, have been studied. These protocols and services operate on a higher layer of the TCP/IP protocol stack, and the performance of pure network forwarding and switching is not evaluated. Slick~\cite{trach2017slick} proposes a trusted middlebox framework to deploy network functions on untrusted servers. This work tackles a problem related to ours, and some of its design decisions and optimizations can be used to further enhance our approach and vice versa. Haven~\cite{baumann2015shielding} protects the confidentiality and integrity of applications and their associated data from the untrusted cloud on which they run, while VC3~\cite{schuster2015vc3} allows users to keep their data and secrets safe during the execution of distributed MapReduce computations in the cloud. The idea of an inverted cloud infrastructure has been discussed~\cite{strackx2015idea}, where mini providers use SGX to secure confidential information, so that they can join forces to provide cloud services instead of receiving services from a single major provider.
SGX has also been used to secure content-based routing mechanisms~\cite{pires2016secure} and Database Management Systems (DBMS)~\cite{arasu2014querying} operating on the cloud. These pieces of work focus on trusted cloud applications and services running on top of today's networks, rather than the performance of the underlying network infrastructure itself. \subsubsection{Network Functions Over Encrypted Data} Starting with APLOMB~\cite{sherry2012making}, the idea of out-sourcing NF processing to the cloud emerged, though without taking its security considerations into account. BlindBox~\cite{sherry2015blindbox}, extending the approach taken by APLOMB, performs deep packet inspection directly over encrypted network traffic. Later on, Embark~\cite{lan2016embark} added support for a wider set of NFs over encrypted network data. This line of work focuses exclusively on protecting network traffic through encryption, rather than investigating how the execution of the NF software itself can be secured. Moreover, the studied middleboxes operate on the network layer and above, without considering any link layer devices or server-side applications. \subsection{Threat Model} We focus on cases where systems and applications may process confidential information and therefore leverage trusted execution to secure crucial data and perform critical processing operations. Such cases can occur either in an untrusted cloud setup, due to an accidental leak of confidential information or malicious tenants that try to compromise the execution integrity of others, or in setups where resources are distributed at the edge of the network. Similar to the use-cases mentioned in section~\ref{threat-models}, attackers may attempt to learn routing and forwarding policies, compromise encryption and authentication keys, monitor encrypted traffic, etc.
According to the SGX threat model~\cite{mckeen2013innovative}, we assume that an attacker can compromise software components, including privileged code (e.g., the OS and BIOS), and launch physical attacks. Following the SGX threat model~\cite{mckeen2013innovative} and prior related work~\cite{kim2017enhancing, kim2015first}, DoS and DDoS attacks are out of the scope of our work, since compromised software or hardware can deny service at any point of the execution (e.g., restart or crash the system or flush all the unprotected DPDK memory). The same assumption applies to side channel attacks against SGX (e.g., page fault and cache-based side channel attacks). Software techniques to protect SGX-capable applications against attacks aiming to exploit bugs (e.g., buffer overflows, synchronization bugs, etc.)~\cite{weichbrodt2016asyncshock, seo2017sgx} are also out of the scope of our work. \subsection{Combining DPDK and SGX} We used DPDK version 17.02 and implemented our applications as enclaves using SGX version 1.8. To make the enclave ECALL/OCALL API accessible to the DPDK context, we had to first compile the enclavized application and then compile the DPDK context, which uses the enclave API as a shared library. To be able to use more of the DPDK context in enclave mode, we had to modify the DPDK codebase to reduce its coupling with the standard C libraries (e.g., by modifying functions for log collection and printing, and converting inline functions to macros), since SGX currently supports only the memory allocation and deallocation parts of the standard C library in enclave mode. However, in some cases, the coupling was so tight (e.g., DPDK libraries for hash and flow table implementation and lookup) that a specific OCALL had to be executed. We discuss further details and our optimization to alleviate this additional overhead for such cases in section~\ref{application-scenarios}.
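Since only one ECALL is used and all subsequent exchange between the untrusted DPDK part and the enclaves goes through ring structures, the communication pattern can be sketched with a minimal single-producer/single-consumer ring. This Python stand-in only illustrates the head/tail index discipline; the real \texttt{rte\_ring} is a lock-less C structure that also supports multiple producers and consumers:

```python
class SpscRing:
    """Minimal single-producer/single-consumer ring buffer sketching the
    batch exchange between the untrusted part (producer on the Rx ring)
    and a processing enclave (consumer)."""

    def __init__(self, size=1024):
        assert size & (size - 1) == 0, "size must be a power of two"
        self.buf = [None] * size
        self.mask = size - 1
        self.head = 0  # next slot to write (touched only by the producer)
        self.tail = 0  # next slot to read (touched only by the consumer)

    def enqueue_burst(self, items):
        """Enqueue as many items as fit; return the number enqueued."""
        free = len(self.buf) - (self.head - self.tail)
        n = min(free, len(items))
        for item in items[:n]:
            self.buf[self.head & self.mask] = item
            self.head += 1
        return n

    def dequeue_burst(self, max_items):
        """Dequeue up to max_items; return the list of dequeued items."""
        n = min(self.head - self.tail, max_items)
        out = []
        for _ in range(n):
            out.append(self.buf[self.tail & self.mask])
            self.tail += 1
        return out
```

In this sketch the untrusted part would call `enqueue_burst` with received frames and the enclave would poll `dequeue_burst`, mirroring the Rx/Tx ring workflow of the application scenarios below.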
The API of our application enclaves exposes a single ECALL, which needs to be called only once during enclave instantiation. No further ECALLs are required during execution, since the communication between DPDK and the enclave is achieved through DPDK rings, a structure optimized for performance. Most of the developed codebase, including all the performance-sensitive enclave components, is written in C instead of C++ for compatibility with the DPDK codebase and for performance reasons. \subsection{Application Scenarios} \label{application-scenarios} We implemented a number of DPDK application scenarios to experiment with. The enclavized components of each application scenario are summarized in Table~\ref{Table:encl-comps}. \textbf{Layer 2 (L2) forwarding}: A frame processing application implementing the operation of a network switch. We use a single processing enclave for the trusted applications following the design of section~\ref{baseline}. The untrusted application part receives frames, which it enqueues to the Rx ring. The processing enclave dequeues frames from this ring and performs a number of frame sanity checks, destination MAC address lookup and source MAC address rewriting. Once processing is done, the enclave enqueues the frames to the Tx ring. \textbf{Layer 3 (L3) forwarding}: A packet processing application implementing the operation of a network router. The enclave operations include a few packet header sanity checks and the longest prefix match lookup of the destination address of a packet. \textbf{Encrypted L2 forwarding}: A frame processing application for encrypted Ethernet traffic implementing the operation of a MACsec-capable switch. We use the frame format of MACsec~\cite{romanow2006media}, where the Integrity Check Value (ICV) field consists of a 128-bit CMAC hash, and multiple processing enclaves following the design of section~\ref{scaling}.
The untrusted application part receives encrypted frames, which are enqueued to a processing enclave. A processing enclave dequeues and decrypts them, while it also generates the ICV of the raw frame data and compares it with the ICV of the received frames to verify their integrity. If the integrity verification is successful, the raw frames are processed (MAC table lookup and MAC address rewriting) and their new ICV is generated. Finally, the enclave encrypts the processed frames and enqueues them to the Tx ring to be forwarded. \textbf{Encrypted L3 forwarding}: A packet processing application for encrypted network level traffic implementing the operation of a VPN endpoint (router). We use multiple processing enclaves following the design of section~\ref{scaling} and the packet format provided by the ``Encapsulating Security Payload'' function of IPsec~\cite{kent2005ip}, where the Integrity Check Value (ICV) field consists of a 128-bit CMAC hash. The workflow is similar to the one explained for the encrypted L2 forwarding application. \textbf{Load-balancing \& backend server processing}: An application implementing the operation of a load balancer distributing traffic to multiple backend server processes (either VMs or containers) running on the same physical machine. The load-balancing process maintains a flow table (with 1 million flow entries) and classifies the received packets into flows based on their destination IP address. The backend server processes filter and forward the distributed packets based on their destination IP address (hash-based forwarding). The load balancer forwards traffic to the backend processes through a number of DPDK rings (one ring per backend process). An enclave of a load balancing or server process has to make an explicit OCALL for every batch of packets dequeued from the Rx ring.
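As a rough illustration of the classification step just described, the dispatch decision of the load balancer can be sketched as below. All names, the backend count, and the hash function are our own illustrative choices; the actual application uses the DPDK hash and flow-table libraries rather than this stand-in.

```c
#include <assert.h>
#include <stdint.h>

#define N_BACKENDS 4           /* one DPDK ring per backend process */

/* Hypothetical sketch: classify a packet by its destination IPv4
 * address and pick the index of the backend ring it should be
 * enqueued to.  A simple multiplicative hash stands in for the
 * DPDK hash/flow-table lookup used by the real application. */
static unsigned pick_backend(uint32_t dst_ip)
{
    uint32_t h = dst_ip * 2654435761u;   /* Knuth multiplicative hash */
    return h % N_BACKENDS;
}

/* Dispatch a burst of packets: counts[i] accumulates the number of
 * packets routed to backend ring i (stand-in for ring enqueues). */
static void dispatch_burst(const uint32_t *dst_ips, unsigned n,
                           unsigned counts[N_BACKENDS])
{
    for (unsigned i = 0; i < n; i++)
        counts[pick_backend(dst_ips[i])]++;
}
```

Because the mapping is a pure function of the destination address, all packets of a flow land on the same backend ring, which is what makes the per-backend hash-based forwarding on the server side possible.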
The vanilla DPDK application uses DPDK libraries for the flow table and the hash table lookups that leverage standard C libraries, which are not trusted and cannot be used in enclave mode. To allow the DPDK libraries to access the enclave buffer containing the keys for the lookups, this buffer has to be copied during the OCALL transition from the enclave's EPC to untrusted memory. To return the lookup results back to the enclave, a separate buffer has to be copied from the untrusted memory to the enclave's EPC when the OCALL returns. In addition to the copy operation itself, additional checks have to be performed by SGX to ensure that the full memory range of the buffers passed to the untrusted application part is within the enclave. To enhance system performance and alleviate overheads, we performed the following optimizations: \begin{itemize}[leftmargin=*] \item Increased the number of packets dequeued as a batch by a server or load balancing enclave from its corresponding ring to reduce the total number of performed OCALLs. \item Used untrusted memory for the buffers of the lookup keys and results to avoid memory copies from/to the EPC and the additional checks. Essentially, we traded security for performance, as explained in section~\ref{lessons}. \item According to the design presented in section~\ref{scaling}, we enabled each backend process to instantiate two enclaves implementing the same processing logic. \end{itemize} \begin{figure}[h] \centering \includegraphics[scale=0.5]{figures/setup.pdf} \caption{Experimental Setup} \label{Figure:setup} \end{figure} \begin{table*}[h!]
\centering \caption{Enclavized Components For Each Application Scenario} \label{Table:encl-comps} \begin{tabular}{|c|c|} \hline \textbf{Application} & \textbf{Enclavized Components} \\ \hline L2 Forwarding (Plain Traffic) & \begin{tabular}[c]{@{}c@{}}Sanity check, MAC address lookup, \\ MAC address rewriting\end{tabular} \\ \hline L3 Forwarding (Plain Traffic) & \begin{tabular}[c]{@{}c@{}}Sanity Check, longest prefix match lookup \end{tabular} \\ \hline L2 Forwarding (Encrypted Traffic) & \begin{tabular}[c]{@{}c@{}}Frame decryption, sanity check, frame integrity check, \\ MAC address lookup, MAC address rewriting, \\ ICV generation, frame encryption\end{tabular} \\ \hline L3 Forwarding (Encrypted Traffic) & \begin{tabular}[c]{@{}c@{}}Packet decryption, sanity check, packet integrity check, \\ longest prefix match lookup, ICV generation, packet encryption\end{tabular} \\ \hline Load Balancer & \begin{tabular}[c]{@{}c@{}}Packet processing before and after \\ flow table lookup\end{tabular} \\ \hline Back-end Server & \begin{tabular}[c]{@{}c@{}}Packet processing before and after \\ hash table lookup for hash-based forwarding\end{tabular} \\ \hline \end{tabular} \end{table*} \subsection*{Abstract} Nowadays, enterprises widely deploy Network Functions (NFs) and server applications in the cloud. However, processing of sensitive data and trusted execution cannot be securely deployed in the untrusted cloud. Cloud providers themselves could accidentally leak private information (e.g., due to misconfigurations) or rogue users could exploit vulnerabilities of the providers' systems to compromise execution integrity, posing a threat to the confidentiality of internal enterprise and customer data. 
In this paper, we (i) identify a number of NF and server application use-cases that trusted execution can be applied to, (ii) analyse the assets and the impact of compromising the private data and execution integrity of each use-case, and (iii) leverage Intel's Software Guard Extensions (SGX) architecture to design Trusted Execution Environments (TEEs) for cloud-based NFs and server applications. We combine SGX with the Data Plane Development Kit (DPDK) to prototype and evaluate our TEEs for a number of application scenarios (Layer 2 frame and Layer 3 packet processing for plain and encrypted traffic, traffic load-balancing and backend server processing). Our results indicate that NFs involving plain traffic can achieve almost native performance (e.g., $\sim 22$ Million Packets Per Second for Layer 3 forwarding for 64-byte frames), while NFs involving encrypted traffic and server processing can still achieve competitive performance (e.g., $\sim 12$ Million Packets Per Second for server processing for 64-byte frames). \input{introduction} \input{background-related} \input{threat-models} \input{design} \input{implementation} \input{evaluation} \input{discussion} \input{conclusion} \bibliographystyle{plain}
\section{Introduction} \label{sec:intro} An isolated astrophysical black hole is characterized by two fundamental quantities: its mass, $M$, and its angular momentum, $J$; this assumes charge neutrality, and thus the space-time is described by the Kerr metric \citep[][]{Kerr63}. The mass can be considered as a scaling factor for distances, timescales and luminosities. In other words, accreting black holes with masses ranging from stellar mass, as seen in X-ray binaries, to supermassive black holes (SMBHs), as seen in active galactic nuclei (AGN), have a similar spectro-temporal behaviour once the luminosities and timescales are scaled properly \citep{McHardy06}. However, the angular momentum (usually described in terms of the dimensionless spin parameter, $a^\ast = Jc/GM^2$), which a black hole acquires from its growth history, is arguably the most interesting parameter, as it affects the Kerr metric, leading to various properties of astrophysical importance. Theoretically, the spin values range in the $[-0.998, 0.998]$ interval \citep{Thorne74}. These limits are found without considering magnetohydrodynamic (MHD) accretion. In fact, the magnetic fields of the plunging regions should give rise to torques that tend to reduce the maximum spin ($\sim 0.9-0.95$) that can be achieved by a black hole \citep[e.g.][]{Gammie04, McKinney04}. Measurements of SMBH spins are a key ingredient for understanding the physical processes on scales ranging from the accretion disc out to the host galaxy. In fact, the spin determines the position of the innermost stable circular orbit (ISCO) of the accretion disc and of the event horizon, which are 1.24 and 1.06\,$r_{\rm g}$ for a maximally rotating black hole, and 6 and 2\,$r_{\rm g}$ for a non-spinning black hole, respectively, where $r_{\rm g} = GM/c^2$ is the gravitational radius.
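The horizon and ISCO radii quoted above follow from standard results for the Kerr metric; as a reference sketch (the ISCO expression is that of Bardeen, Press \& Teukolsky 1972, with the upper sign for prograde orbits):

```latex
\begin{align*}
  r_{\rm H} &= r_{\rm g}\left(1 + \sqrt{1 - a^{\ast 2}}\right), \\
  r_{\rm ISCO} &= r_{\rm g}\left[3 + Z_2 \mp \sqrt{(3 - Z_1)(3 + Z_1 + 2 Z_2)}\right], \\
  Z_1 &= 1 + \left(1 - a^{\ast 2}\right)^{1/3}
        \left[\left(1 + a^\ast\right)^{1/3} + \left(1 - a^\ast\right)^{1/3}\right], \\
  Z_2 &= \sqrt{3 a^{\ast 2} + Z_1^2}.
\end{align*}
```

For $a^\ast = 0$ these give $r_{\rm H} = 2\,r_{\rm g}$ and $r_{\rm ISCO} = 6\,r_{\rm g}$, while for $a^\ast = 0.998$ (prograde) they give $r_{\rm H} \simeq 1.06\,r_{\rm g}$ and $r_{\rm ISCO} \simeq 1.24\,r_{\rm g}$, matching the values quoted above.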
Hence, it has been shown that for a Schwarzschild black hole ($a^\ast = 0$) half of the energy is radiated within $\sim 30\,r_{\rm g}$, while half of the radiation emerges from within $\sim 5\,r_{\rm g}$ for a rapidly spinning black hole \citep[e.g.][]{Thorne74, Agol00}. This can be translated into an increase of the radiative accretion efficiency ($\eta$) from $\eta = 0.057$ for $a^\ast = 0$, to 0.32 for $a^\ast = 0.998$. \cite{Vasudevan16} assumed a toy model with a bimodal spin distribution and showed that a SMBH population where only 15\% of the sources are maximally rotating can produce 50\% of the cosmic X-ray background (CXB) owing to their high radiative efficiency. Moreover, these authors showed that the spin bias is even larger in flux-limited surveys, since half of the CXB can be accounted for if only 7\% of the sources have a spin of 0.998 \citep[see also][]{Brenneman11}. The SMBH spin distribution is also fundamental for understanding the SMBH-host galaxy co-evolution. In fact, the angular momentum of a black hole matures over cosmic time and its final value is determined by the accretion and merger history of the galaxy. For instance, mergers tend to spin down the black hole \citep{Volonteri13}, while the SMBH spins up through prograde accretion of material through the galactic disc \citep{King08}. It has also been shown that highly energetic outflows in the form of relativistic winds \citep{Gofford15} or jets \citep[e.g.][]{Blandford77,King15} are affected by the accretion flow, hence their strengths are somehow related to the SMBH spin. These forms of mechanical AGN feedback, in addition to the high radiative efficiency, seem to play a crucial role in the evolution of the host galaxy and its star formation history. Hence, understanding the growth of SMBHs and their spin distribution is a key point for our understanding of the larger scale structure of the Universe \citep[see][for a review about AGN feedback]{Fabian12}.
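The quoted efficiencies follow from the specific energy of a test particle on a circular orbit at the ISCO; a sketch consistent with the numbers above:

```latex
\begin{align*}
  \eta = 1 - E_{\rm ISCO}
       = 1 - \sqrt{1 - \frac{2}{3}\,\frac{r_{\rm g}}{r_{\rm ISCO}}},
\end{align*}
```

which gives $\eta \simeq 0.057$ for $r_{\rm ISCO} = 6\,r_{\rm g}$ ($a^\ast = 0$) and $\eta \simeq 0.32$ for $r_{\rm ISCO} \simeq 1.24\,r_{\rm g}$ ($a^\ast = 0.998$).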
In addition to the importance of SMBH spin in cosmology and galaxy evolution, the nuclear regions in AGN can be considered as unique laboratories to directly test the effects of general relativity, which manifest themselves as extreme physical phenomena such as light bending \citep[e.g.][]{Miniutti04} and reverberation lags \citep[e.g.][]{Fabian09, Emmanoulopoulos11,Kara16}. This requires high-quality X-ray observations. In fact, AGN are strong X-ray emitters and it is widely accepted that the X-rays arise from the innermost regions of the accretion disc, where the primary continuum is due to the Comptonization of ultraviolet (UV) disc photons by a hot ($\sim 10^9$\,K) transrelativistic medium, usually referred to as the X-ray corona \citep[e.g.][]{Shap76,Haa93,Pet01b,Pet01a}. As the measure of the spin is strongly dependent on the irradiation and subsequent emissivity of the disc, its success is tightly connected to the study of the X-ray corona itself, whose nature and properties are still largely unknown. \section{Spin measurement methods and their limitations} \label{sec:spinmethod} The increase in number and quality of AGN spectra, at various wavelengths, has allowed astronomers to attempt a determination of the SMBH spin parameters with relatively high confidence. A variety of observed features are considered as good indicators of the black hole spin, such as the continuum shape \citep[e.g.][]{Done13}, the broad iron K$\alpha$ line \citep{Fab00}, and quasi-periodic oscillations \citep[QPOs; e.g.][]{Mohan14}. Recently, the detection of gravitational waves from the coalescence of black hole pairs has provided a new technique to constrain the spins of non-accreting black holes \citep[][]{LIGO1, LIGO2}. There are several advantages and caveats relative to each method. Continuum fitting, for instance, can be applied to any AGN for which continuum emission is detected and has been applied to sources out to redshift $\sim 1.5$ \citep[e.g.][]{Capellupo15, Capellupo16}.
However, one of the main drawbacks of this method is that it requires a broad and simultaneous wavelength coverage, which usually exceeds the capabilities of a single observatory, in order to determine properly the shape of the relevant part of the spectral energy distribution (i.e. optical to X-rays). This method requires accurate estimates for the black hole mass, disc inclination, and distance, which are typically derived from optical data. Furthermore, the continuum fitting method can be applied effectively only when the peak of the emission from the accretion disc can be reasonably probed. Since most AGN spectra peak in the extreme UV, this range is only accessible by current detectors in high-redshift objects, at the expense of a rather modest quality for the corresponding X-ray spectra \citep[e.g.][]{Collinson17}. As for the QPOs, they are common in Galactic binaries while few examples exist in AGN light curves, most of which are statistically marginal and/or controversial \citep[apart from the notable case of RE J1034+396;][]{Gierlinski08}. Their detection requires long monitoring, high signal-to-noise ratio (S/N), and a proper modelling of the continuum power spectrum \citep{Vaughan05, Vaughan06}. The most direct and robust measurements to date are those obtained through the detection of a strong relativistic reflection feature in the X-ray spectra. This method can be applied to a wider black hole (BH) mass range. X-ray spectra of AGN can be expressed as a sum of several components, in particular a primary continuum that is well approximated by a power law with a high-energy exponential cut-off and ionized and/or neutral reflection that is detected in most of the sources, arising either from the accretion disc within a few gravitational radii from the BH or from distant Compton-thick material (the broad line region or the molecular torus), respectively \citep[e.g.][]{Light88, Geo91, Ghisellini94,Bia09}. 
The resulting reflection spectrum is characterized mainly by the iron K$\alpha$ emission line at $\sim 6.4-7.0$\,keV and a broad component peaking at around 20--30\,keV, known as the Compton hump. Special and general relativistic effects result in blurring the ionized reflection spectrum and asymmetrically broadening the Fe K$\alpha$ emission line owing to the gravitational redshift and the motion of the emitting particles in the disc \citep[see][for reviews]{Fab00, Reynolds03}. This method consists of fitting the X-ray spectrum of a given source with a reflection model accounting for the relativistic distortions that affect these features on their way to the observer. Among the models that predict the relativistic line profile for a narrow line emitted in the rest frame of the accretion disc are {\tt diskline}, {\tt laor}, {\tt kyrline}, {\tt kerrdisk}, and {\tt relline}, as published in \cite{Fabian89, Laor91, Dovciak04, Brenneman06, Dauser10}, respectively. The resulting shape of the reflection spectrum strongly depends on the parameters of the system. Hence, this method can be used not only to determine black hole spins but also to probe the innermost regions of the accretion discs, providing information about their inclination, ionization state, elemental abundance, and emissivity \citep[see][for a review]{Reynolds14}. However, there are known difficulties in determining the spins via X-ray reflection, which are mainly due to the complexity of (and some subjectivity in) modelling the AGN spectra, considering the various emission and absorption components that are known to be present, hence requiring high-quality data \citep[e.g.][]{Guainazzi06, Mantovani16}. An alternative absorption-based interpretation has been proposed to explain the apparent, broad red wing of the Fe line and the spectral curvature below 10\,keV \citep[e.g.][]{Miller08, Miller09}.
According to this scenario, partial-covering absorbers in the line of sight (having column densities in the $10^{22-24}\,\rm cm^{-2}$ range) plus distant (i.e. non-relativistic) low-ionization reflection can produce an apparent broadening of the Fe K$\alpha$ line similar to that caused by relativistic effects. Variability in the covering fraction of these absorbers would also provide a complete description of the observed spectral variability. Contrary to stellar-mass BHs, whose spectra are much brighter and typically less complex, both blurred reflection and partial covering are relevant to the X-ray spectra of AGN. In fact, while the former process is able, in principle, to explain the spectral and timing properties of any accreting system, the rapid Compton-thin to Compton-thick (and vice versa) transitions seen in changing-look AGN \citep[after][]{Matt03} imply that partial covering must also be taken into account. In a single-epoch AGN spectrum, the effects of disc reflection and partial covering are often hard to separate or distinguish from each other, thus leading to a long-standing debate. However, thanks to the high-quality spectra provided by {\it XMM-Newton} \citep{Jans01} and {\it NuSTAR} \citep{Har13} observations, which jointly cover a wide energy range from 0.3\,keV up to 80\,keV, it has now become possible to disentangle the two scenarios, as shown in the case of NGC\,1365 \citep{Risaliti13}. Three different variable absorbers with column densities in the range $5\times 10^{22}-6.5\times 10^{24}\rm\,cm^{-2}$ and variable covering factors would be needed to explain the spectrum of the source below 10\,keV. However, this absorption-only model fails to explain the hard X-ray spectrum in the 10--80\,keV band, where the inclusion of relativistic reflection provides a statistically better description of the data \citep{Risaliti13, Walton14}.
Furthermore, the latter model is also preferred on physical grounds, as the inferred bolometric luminosity in absorption-only scenarios is significantly higher compared to other indicators such as the [\ion{O}{iii}]\,$\lambda$5007 line. The case of NGC~1365 lends weight to the idea that X-ray reflection is indeed an effective means of measuring the spin of SMBHs, even when absorption is present. \section{Motivation and methods} \label{sec:motivation} Our main aim is to test the reliability of spin measurements when the spectra include additional components with respect to the simple primary continuum plus disc reflection configuration (i.e. always, in principle). In fact, the innermost emission components are generally subject to absorption by gas with column densities from $N_{\rm H} < 10^{21}\,{\rm cm^{-2}}$ to $N_{\rm H} > 10^{24}\,{\rm cm^{-2}}$ and ionization states from neutral to almost completely ionized. As demonstrated by the case of NGC\,1365, the presence of a given absorption component can be better identified thanks to its variability. NGC\,1365 is one of the few sources showing frequent changes in its obscuration state. Recently, \cite{Risaliti16} summarized the various observational aspects of this source and proposed a multi-layer structure of the circumnuclear medium to explain all the observed absorption states and their variability. {NGC\,1365 has been observed several times in reflection-dominated states, suggesting the presence of a layer of neutral Compton-thick ($N_{\rm H}>10^{24}\rm\,cm^{-2}$) absorber located at a distance of the order of or larger than that of the broad line region \citep{Risaliti07}. The source is usually caught in a Compton-thin state, $N_{\rm H} \sim 10^{23}\,\rm cm^{-2}$, but the column can even occasionally drop down to $N_{\rm H} \sim 10^{22}\,\rm cm^{-2}$ \citep{Braito14}.
Furthermore, absorption lines have been detected when the source is not heavily obscured, indicating a stratification of absorbers with ionization states ranging from highly ionized ($\log \xi > 3$)\footnote{$\xi = L/n r^2$ is the ionization parameter (units $\rm erg\,cm\,s^{-1}$), where $L$ is the luminosity of the X-ray source, and $n$ is the volume density of the absorber situated at a distance $r$ from the source.} to mildly ionized ($\log \xi \sim 1-2$) down to neutral ($\log \xi < 1$)}. {\it All} these components and absorption states can be present in {\it all} AGN. However, their detection in a single source, even if at different times, is highly dependent, on the one hand, on our line of sight and, on the other hand, on the chance to observe any given component when variability is present. The repetition of measurements (via X-ray reflection) can result in inconsistent values of the spin parameter for a given source. The discrepancy is mainly due to the use of different components in modelling the spectra, such as dual reflectors, partial covering, and/or warm absorbers. For example, \cite{Patrick11} analysed the {\it Suzaku} spectra of NGC\,3783, among others, and found that the spin parameter in this source is $a^{\ast}<-0.04$. This result contradicts the high spin parameter ($a^\ast \geq 0.88$) found by \cite{Brenneman11} and \cite{Reynolds12}, who analysed the same observations. The work presented in this paper is a preliminary study aiming to test, through the simulation of high-quality {\it XMM-Newton} and {\it NuSTAR} spectra (in the 0.3--79\,keV range), the reliability of reflection-based SMBH spin measurements that can currently be achieved. A similar approach has been adopted recently by \cite{Bonson16} and \cite{Choudhury17}, who simulated AGN spectra by assuming only two components: primary emission and relativistic reflection. Instead, we assume a more complex spectral configuration, closer to the real general case.
This is presented below, along with a detailed description of how we simulated and fitted the data. We note that both of the aforementioned studies neglected the soft X-ray band, in which the soft excess can be a crucial driver of reflection-based spin determinations \citep[e.g.][]{Walton13}. \subsection{Simulation set-up} \label{sec:sim} \begin{figure} \centering {\includegraphics[width = 0.49\textwidth]{plots/sketch3.pdf}} \caption{Schematic (not to scale) of the proposed configuration, presenting the various emission and absorption features that we consider in this work (see \S\,\ref{sec:sim} for details).} \label{fig:configuration} \end{figure} As mentioned above, various emission/absorption components can be present in the X-ray spectrum of any AGN. However, depending on the state in which the source is caught, we may be able to observe all or only some of these components. Generally, Ockham's razor is (or should be) applied during the spectral fitting, thereby avoiding the inclusion of unnecessary components. While this is the correct practice, we might miss a component that is actually present in the case of a single-epoch observation, either if the spectra do not cover a broad band or if they do not have enough S/N. We know from the literature the expected ranges of the parameters for the various components that are observed and are potentially present in any AGN spectrum. Hence, we can simulate the most general spectrum and then examine how well the model parameters are recovered using the common fitting techniques. \begin{figure} \centering \includegraphics[width = 0.49\textwidth]{plots/model-chi2-G8_formain.pdf} \caption{Top panel: Example of the simulated {\it XMM-Newton} (red) and {\it NuSTAR} (blue) spectra (corresponding to simulation G8) together with the various components of the theoretical model assumed.
The primary emission plus ionized reflection (dashed lines), neutral reflection (dash-dotted lines), and thermal emission (dotted lines) are shown. Middle and bottom panels: The $\chi^2$ residuals obtained by the two separate fits are indicated (see \S\,\ref{sec:fitting} for details).} \label{fig:spectra} \end{figure} We simulated AGN spectra in the 0.3--79\,keV band via the XSPEC\,v12.9.0s\,\citep{Arnaud96} command {\tt FAKEIT} and the {\it XMM-Newton} EPIC-pn \citep{Struder01} and the {\it NuSTAR} response matrices in the 0.3--10\,keV (with an exposure time of 90\,ks)\footnote{This is approximately the maximum effective exposure per {\it XMM-Newton} orbit in Small Window mode (needed to avoid pile-up in bright sources).} and 3--79\,keV (with an exposure time of 100\,ks, i.e. 50\,ks per focal plane module) ranges, respectively. The spectra were binned so as not to oversample the FWHM resolution by a factor larger than 3 and 2.5 for {\it XMM-Newton} and {\it NuSTAR}, respectively. Then, we grouped the spectra, for both instruments, to ensure a minimum S/N of 5 in each energy channel. The simulations are intended to represent single-epoch observations of bright low-redshift AGN, similar to the observed sources, using {\it XMM-Newton} and {\it NuSTAR} simultaneously. Hence, we defined a generic parent model that contains the various expected emission and absorption components. The former are described below through their XSPEC spectral counterparts: \begin{itemize} \item {\tt APEC}: thermal diffuse emission at soft X-rays arising from the host galaxy in the cases when the star formation rate is enhanced and/or from gas photoionized by the AGN in the narrow line region \citep[see][for the reference case of NGC 1365, and references therein for other notable sources]{Nardini15}. The parameters of this model that were varied during the simulations are the temperature of the gas ($kT$) and its abundance in solar units \citep[from][]{Grevesse98}.
\item {\tt RELXILLLP\,v0.4a}: primary emission plus blurred relativistic reflection from ionized material, assuming a lamp-post geometry for the emitting source \citep{Dauser13, Garcia14}. The free parameters of this model are the photon index ($\Gamma$) of the incident continuum, the height of the lamp post ($h$, in units of $r_{\rm g}$), the spin parameter ($a^\ast$), and the inclination ($i$), ionization ($\xi_{\rm d}$), and iron abundance ($A_{\rm Fe}$ in solar units) of the accretion disc. We kept the high-energy cut-off fixed to 300\,keV. The reflection fraction is computed self-consistently within the model and fixed to the lamp-post value ({\tt fixReflFrac = 1}), as defined in \cite{Dauser16}. This also implies that the disc emissivity as a function of radius is fully determined by the height of the X-ray source. \item {\tt XILLVER}: neutral reflection arising from distant material by fixing the ionization parameter, inclination angle and iron abundance of the reflector to $\log \xi = 0$, $i =45^\circ$ and $A_{\rm Fe} = 1$, respectively. For simplicity, we tied the values of the photon index and the high-energy cut-off of this component to those of the primary continuum. \end{itemize} \noindent As for the absorption components, we modelled the Galactic column using the {\tt PHABS} model and assuming $N_{\rm H} = 5\times 10^{20}\,\rm cm^{-2}$. We assumed that the intrinsic absorption can affect only the innermost emission components (primary continuum and relativistic reflection), and we considered the following configuration: \begin{figure*} \centering \includegraphics[width = 0.64\textwidth]{plots/corners/Corner_G8_f1_main.pdf} \\ \includegraphics[width = 0.64\textwidth]{plots/corners/Corner_G8_f2_main.pdf} \caption{Results of the MCMC analysis for the relevant best-fit reflection parameters corresponding to the two different spectral fits shown in the middle and bottom panels of Fig.\,\ref{fig:spectra}. 
The red lines correspond to the input values assumed in order to create the simulations. We show the $\chi^2$ values obtained from the corresponding best fit, whose accuracy as a whole is excellent in both cases. The individual parameters, however, are not all correctly retrieved.} \label{fig:contours} \end{figure*} \begin{itemize} \item {\tt WARMABS}: fully covering warm absorption modelled through an XSTAR table \citep{Kallman01} having an input continuum with a photon index of 2. Although it is usually seen in outflow \citep[e.g.][and references therein]{Braito14}, we assume for simplicity that this component is at rest in the local frame. The free parameters of this model are the column density $N_{\rm H,\, wa}$ and the ionization parameter of the absorber $\xi_{\rm wa}$. \item {\tt ZPCFABS}: two layers of partially covering neutral absorbers that can represent the various neutral-absorption states (from Compton-thin to Compton-thick regimes). We left the column densities $N_{\rm H,\, 1/2}$ and the covering fractions $\rm CF_{1/2}$ of both absorbers free to vary. \end{itemize} \noindent The final model can be written in XSPEC terminology, neglecting Galactic absorption, as follows: \begin{align} {\tt model = } & {\tt \,WARMABS \times ZPCFABS \times ZPCFABS \times RELXILLLP} \notag \\ & \tt + XILLVER + APEC. \notag \end{align} \noindent A scheme of the proposed configuration is shown in Fig.\,\ref{fig:configuration}. We report in Table\,\ref{table:paramInput} the input range chosen for each of the free parameters considered in the simulations. The redshift of the simulated source is fixed at 0.02. We kept the normalizations of the various emission components free, and the only limitation is that the observed flux is between 1 and 3\,mCrab in the 0.3--10\,keV range (resulting in $\sim 3\times 10^5 - 10^6$\,counts, for the {\it XMM-Newton} spectra).
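As an aside on the spectral preparation described in \S\,\ref{sec:sim}, the minimum-S/N grouping of channels can be sketched as follows. For Poisson counts the S/N of a bin with $N$ total counts is $\sqrt{N}$, so a minimum S/N of 5 corresponds to at least 25 counts per bin; all names here are illustrative, and the actual grouping follows standard spectral-grouping tools rather than this sketch.

```c
#include <assert.h>

/* Illustrative sketch of minimum-S/N channel grouping: accumulate
 * adjacent channels until the running bin reaches min_snr.  For
 * Poisson data the S/N of a bin with N counts is sqrt(N), so the
 * test below compares accumulated counts against min_snr^2
 * (25 counts for S/N = 5).  Returns the number of output bins. */
static int group_min_snr(const double *counts, int nchan,
                         double min_snr, double *binned)
{
    int nbins = 0;
    double acc = 0.0;

    for (int i = 0; i < nchan; i++) {
        acc += counts[i];
        if (acc >= min_snr * min_snr) {  /* i.e. sqrt(acc) >= min_snr */
            binned[nbins++] = acc;
            acc = 0.0;
        }
    }
    if (acc > 0.0 && nbins > 0)
        binned[nbins - 1] += acc;        /* fold leftover into last bin */
    return nbins;
}
```

The oversampling constraint (at most 3 channels per FWHM for EPIC-pn, 2.5 for {\it NuSTAR}) would add a second condition to the same loop; it is omitted here for brevity.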
\begin{table} \centering \caption{Key parameters used to perform the simulations with the corresponding input range, for the various emission and absorption components. All the key parameters were allowed to vary freely during the spectral fitting.} \begin{tabular}{lc} \hline\hline \\[-0.2cm] Parameter & Input range \\[0.2cm] \hline \\[-0.2cm] \multicolumn{2}{c}{Warm absorption } \\[0.2cm] $N_{\rm H,\, wa}\,({\rm cm^{-2}})$ & $10^{18} -3\times10^{24} $ \\[0.2cm] $\log [\xi_{\rm wa}\,({\rm erg\,cm\,s^{-1}})]$ & $0-5$ \\[0.2cm] \hline \\[-0.2cm] \multicolumn{2}{c}{ Reflection} \\[0.2cm] $h\,(r_{\rm g})$ & $2-100$ \\[0.2cm] $a^\ast$ & $0-0.998$ \\[0.2cm] $i(^\circ)$ & $3-89$ \\[0.2cm] $\Gamma$ & $1.5-2.5$ \\[0.2cm] $\log [\xi_{\rm d}\,({\rm erg\,cm\,s^{-1}})]$ & $0-4.7$ \\[0.2cm] $A_{\rm Fe}$\,(solar) & $0.5-10$ \\[0.2cm] \hline \\[-0.2cm] \multicolumn{2}{c}{PC neutral absorption} \\[0.2cm] $N_{H,\,1}\, (10^{22}\,{\rm cm^{-2}})$ & $0.01-20$ \\[0.2cm] CF$_1$ & $0-1$ \\[0.2cm] $N_{H,\,2}\,( 10^{22}\,{\rm cm^{-2}})$ & $0.01-500$ \\[0.2cm] CF$_2$ & $0-1$ \\[0.2cm] \hline \\[-0.2cm] \multicolumn{2}{c}{ Thermal emission} \\[0.2cm] $kT$\,(keV) & $0.1-1.5$ \\[0.2cm] Abundance (solar) & $0-5$ \\[0.2cm] \hline \hline \end{tabular} \label{table:paramInput} \end{table} We note that the configuration we adopted may have some caveats on physical grounds. For instance, we neglected Compton scattering out of and into the line of sight for partial coverers with column densities $N_{\rm H}> 10^{24}\,{\rm cm^{-2}}$. In fact, this would make the structure of the model much more complicated. On the one hand, accounting for the scattering into the line of sight would require arbitrary geometrical assumptions. On the other hand, the combination of partial covering and scattering out of the line of sight is not trivial in terms of model definition and handling. 
Even if these effects were treated properly, the actual physics of a real system would still likely be much more complex than the model we adopted. For example, the distant reflection is assumed to arise from a plane-parallel slab with an intermediate inclination of 45$^\circ$, but this is just a coarse representation of the expected geometry of the reflector \citep[see e.g.][]{Yaqoob12}. Moreover, this component might not be completely neutral; its ionization is low but not negligible, which leads to some heating near the surface of the reflector \citep{Garcia13}. This mainly shifts the narrow iron K feature to higher energy. Additional complexity could also arise from the possible presence of multi-temperature thermal emission, a more complex structure of the warm absorption, or other forms of scattering into the line of sight. Finally, it should be kept in mind that the actual geometry of the X-ray corona is largely unknown. The point-like, lamp-post corona is a convenient approximation, but it also has some clear physical limitations, for instance a compactness problem similar to gamma-ray bursts \citep[e.g.][]{Fabian15, Dovciak16}. It is worth stressing, however, that none of our assumptions affect our results as long as the simulations and spectral fitting are performed in a self-consistent way. \subsection{Fitting procedure} \label{sec:fitting} In order to reduce the observer-expectancy effect, each simulation was created by one member of the group and fitted blindly by the two other members separately. The various spectral components mentioned above were allowed to be present or absent in any simulation (and fit), except for the primary continuum plus ionized reflection component, which was always included by construction. We first produced a general set of 15 simulations (5 per person; hereafter Set\,G, with the individual simulations labelled G1--G15).
The simulated parameters were allowed to vary within the input range, while in the fits they were free to vary without any restriction, apart from neglecting negative spins and heights below 2\,$r_{\rm g}$. Ockham's razor was followed in the spectral analysis. A fit was considered satisfactory, at each analyst's discretion, provided that 1) the overall fit statistic was good, 2) the $\chi^2$ value represented a stable minimum, and 3) no obvious residuals were present. This does not ensure that the accepted fit is strictly the best possible. Indeed, in some cases only a local $\chi^2$ minimum is found, revealing how critical this kind of analysis can be in practice (see Section\,\ref{sec:discussion} for a more detailed discussion). An example of the simulated data with the theoretical model and the corresponding residuals from the two different fits is shown in Fig.\,\ref{fig:spectra}. Errors on the parameters are calculated via Markov chain Monte Carlo (MCMC) sampling\footnote{We use the {\tt XSPEC\_EMCEE} implementation of the {\tt PYTHON EMCEE} package for X-ray spectral fitting in XSPEC by Jeremy Sanders (\url{http://github.com/jeremysanders/xspec_emcee}).}, using the Goodman--Weare algorithm \citep{Goodman10} with a chain of $510,000$ elements (170 walkers and 3000 iterations), and discarding the first $51,000$ elements as part of the `burn-in' period. We show in Fig.\,\ref{fig:contours} the results of the MCMC analysis for the best-fit reflection parameters found for the simulated spectrum presented in Fig.\,\ref{fig:spectra}. \section{Results} \label{sec:results} \begin{table*} \centering \caption{Summary of the success/failure in measuring the relevant parameters for the three simulation sets. Simulations performed assuming a spin parameter $< 0.8$ are indicated in italics, while underlining corresponds to a height of the lamp post that is $\leq \rm 5\,r_{\rm g}$.
Symbols for the individual parameters are as follows: full success (\cm), fair success ({\ding{72}}), undetermined ({\bf?}), added component (+), missing component ($-$), failure (\xm), while blank corresponds to the cases when a given component is neither present in the simulated model nor in the fit. The qualitative classification of the fit as a whole is represented as follows: excellent fit ({\LARGE \color{green}$\bullet$}), good fit ({\LARGE \color{yellow}$\bullet$}), and inaccurate fit ({\LARGE \color{red}$\bullet$}). See text for details.} \begin{adjustbox}{max width=0.9\textwidth} \begin{tabular}{lccccccccccccccc} \hline\hline\\[-0.3cm] Parameter & {\it G1} & {\it \underline{G2}} & \underline{G3} & \underline{G4} & {\it G5} & {\it G6} & {\it G7} & G8 & \underline{G9} & {\it G10} & {\it G11} & {\it\underline{G12}} & \underline{G13} & {\it G14} & \underline{G15} \\[0.1cm] \hline \\[-0.3cm] $a^\ast$ & \xm{\ding{72}} & {\ding{72}}\upd & \cm{\ding{72}} & {\ding{72}}\upd & {\ding{72}}{\bf?} & {\bf?}\qm & \cm{\ding{72}} & \xm{\bf?} & {\ding{72}}\upd & {\bf?}\cm & \xm\xm & \xm\xm & \cm\cm & {\bf?}\qm & \cm\cm \\[0.1cm] $h$ & \xm\xm & \cm\xm & \cm\cm & \cm\cm & \xm\xm & \cm\cm & \xm\xm & \xm\xm & \cm\xm & \xm\xm & \xm\xm & \xm\xm & \cm{\ding{72}} & \xm\xm & {\ding{72}}\cm \\[0.1cm] $i$ & {\ding{72}}\upd & \cm\xm & \cm\cm & \cm\xm & \cm{\ding{72}} & {\ding{72}}\upd & \cm\cm & {\ding{72}}\cm & \xm\cm & \xm\xm & \cm\cm & \cm\cm & \cm\cm & \xm\cm & \cm\cm \\[0.1cm] $\Gamma$ & \cm{\ding{72}} & \cm\cm & \cm\cm & {\ding{72}}\upd & {\ding{72}}\upd & \cm\cm & {\ding{72}}\cm & \cm\cm & \cm\cm & {\ding{72}}\cm & \cm\cm & \cm{\ding{72}} & \cm\cm & {\ding{72}}\cm & {\ding{72}}\cm \\[0.1cm] $\xi_{\rm d}$ & \cm\xm & \xm\xm & \cm\cm & \cm\cm & \cm\xm & \xm\xm & \cm\cm & \xm\cm & \xm\cm & \xm\cm & \xm\cm & \xm\cm & {\ding{72}}\upd & \xm\cm & \cm\cm \\[0.1cm] $A_{\rm Fe}$ & \cm\xm & \xm\cm & \cm\cm & \cm\cm & \cm\xm & \cm\cm & \xm\cm & \xm\cm & \xm\cm & \xm\xm & \cm\cm & \xm\cm & 
\cm\xm & \xm\cm & \cm\cm \\[0.1cm] $N_{\rm H,\, wa}$ & \cm{\ding{72}} & \xm\xm & \cm\cm & {\ding{72}}\xm & \cm\cm & \cm\cm & \cm\cm & {\ding{72}}\cm & \cm\cm & \xm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm \\[0.1cm] $\xi_{\rm wa}$ & \xm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\xm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & {\ding{72}}\cm & \cm\cm & \cm\cm & \xm\xm \\[0.1cm] $N_{H,\,1}$ & $-\,$\xm & \xm\xm & \xm\xm & \cm\xm & \hspace{8pt}+ & \cm\cm & \cm{\ding{72}} & & \xm\cm & \cm\cm & \cm\cm & $-\,$\cm & \cm\cm & \cm\cm & \cm\cm \\[0.1cm] CF$_1$ & $-\,$\xm & \cm\cm & \cm\cm & \xm\xm & \hspace{8pt}+ & \cm\cm & \cm\cm & & \cm\cm & \cm\cm & \cm\cm & $-\,$\cm & \cm\cm & \cm\cm & \cm\cm \\[0.1cm] $N_{H,\,2}$ & \cm{\ding{72}} & \hspace{8pt}+ & \cm\cm & \hspace{8pt}+ & \hspace{8pt}+ & \cm\cm & \cm\cm & \xm\cm & \cm\cm & $-\,${\ding{72}} & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm \\[0.1cm] CF$_2$ & \cm{\ding{72}} & \hspace{8pt}+ & \cm\cm & \hspace{8pt}+ & \hspace{8pt}+ & \cm\cm & \cm\cm & \cm\cm & \cm\cm & $-\,$\cm & \cm\cm & {\ding{72}}\cm & \cm\cm & \cm\cm & \cm\cm \\[0.1cm] $kT$ & \cm{\ding{72}} & \cm\cm & \cm\cm & $+\,+$ & \cm\cm & \cm\cm & \hspace{8pt}+ & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm \\[0.1cm] $N_{\rm xillver}$ & \cm\xm & \cm\cm & {\ding{72}}\upd & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & {\ding{72}}\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm \\[0.1cm] \hline & & & & & & & & & & & & & & & \\[-0.3cm] Fit & {\LARGE \color{yellow}$\bullet$}{\LARGE \color{red}$\bullet$} & {\LARGE \color{green}$\bullet$}{\LARGE \color{red}$\bullet$} & {\LARGE \color{green}$\bullet$}\gbul & {\LARGE \color{red}$\bullet$}\rbul & {\LARGE \color{green}$\bullet$}{\LARGE \color{red}$\bullet$} & {\LARGE \color{green}$\bullet$}\gbul & {\LARGE \color{green}$\bullet$}{\LARGE \color{red}$\bullet$} & {\LARGE \color{green}$\bullet$}\gbul & {\LARGE \color{red}$\bullet$}{\LARGE \color{green}$\bullet$} & {\LARGE 
\color{red}$\bullet$}{\LARGE \color{yellow}$\bullet$} & {\LARGE \color{green}$\bullet$}\gbul & {\LARGE \color{yellow}$\bullet$}{\LARGE \color{green}$\bullet$} & {\LARGE \color{green}$\bullet$}\gbul & {\LARGE \color{yellow}$\bullet$}{\LARGE \color{green}$\bullet$} & {\LARGE \color{green}$\bullet$}\gbul \end{tabular} \end{adjustbox} \begin{adjustbox}{max width = 0.9\textwidth} \centering \begin{tabular}{lccccccccc|cccccc} \hline\hline & & & & & & & & & & & & & & & \\[-0.3cm] Parameter & K1 & \underline{K2} & K3 & K4 & \underline{K5} & \underline{K6} & \underline{K7} & \underline{K8} & K9 & B1 & {\it \underline{B2}} & {\it B3} & \underline{B4} & B5 & \underline{B6} \\[0.1cm] \hline & & & & & & & & & & & & & & & \\[-0.3cm] $a^\ast$ & \xm\xm & \xm\cm & \xm\xm & \xm\xm & {\ding{72}}\upd & {\ding{72}}\xm & {\ding{72}}\upd & \cm\cm & \xm\xm & \xm\xm & \cm{\ding{72}} & \xm\xm & {\ding{72}}\upd & \xm\xm & {\ding{72}}\cm \\[0.1cm] $h$ & \xm\xm & \xm\xm & \xm\xm & \xm\xm & \cm\xm & \cm\cm & {\ding{72}}\cm & \cm\cm & \cm\cm & \cm\cm & \cm\xm & \xm\xm & \xm\cm & \xm\xm & {\ding{72}}\upd \\[0.1cm] $i$ & \cm\xm & \cm\cm & \cm\cm & \xm\xm & {\ding{72}}\cm & \cm\xm & {\ding{72}}\upd & \cm\cm & \cm\cm & {\ding{72}}\upd & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm \\[0.1cm] $\Gamma$ & \cm{\ding{72}} & {\ding{72}}\cm & \cm\cm & {\ding{72}}\upd & {\ding{72}}\upd & {\ding{72}}\upd & \cm\cm & {\ding{72}}\upd & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm \\[0.1cm] $\xi_{\rm d}$ & \cm\xm & \cm\xm & \cm\cm & \xm\xm & \xm{\ding{72}} & \cm\xm & \xm\xm & \cm\cm & \xm\xm & {\ding{72}}\xm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm \\[0.1cm] $A_{\rm Fe}$ & \cm\xm & \xm\xm & \cm\cm & \cm\xm & \xm\cm & {\ding{72}}\xm & \cm\xm & \xm\xm & \xm\xm & \cm\cm & \xm\xm & \cm\cm & \cm\cm & \cm\cm & \cm\cm \\[0.1cm] $N_{\rm H,\, wa}$ & & \xm\xm & \cm\cm & \xm\xm & \xm{\ding{72}} & \cm\cm & \cm\xm & \cm\cm & \xm\xm & & & & & & \\[0.1cm] $\xi_{\rm wa}$ & & \xm\cm & \cm\cm & {\ding{72}}\cm & 
\xm\cm & \xm\cm & \cm\cm & \cm\cm & \cm\xm & & & & & & \\[0.1cm] $N_{H,\,1}$ & \cm\cm & \xm\xm & \cm\cm & \xm\xm & \xm\cm & \hspace{8pt}+ & \cm\cm & \cm\cm & $- \,-$ & & & & & & \\[0.1cm] CF$_1$ & \cm\xm & \xm\cm & \xm\xm & \xm\xm & \cm{\ding{72}} & \hspace{8pt}+ & \cm\cm & \cm\cm & $-\, -$ & & & & & & \\[0.1cm] $N_{H,\,2}$ & \hspace{8pt}+ & {\ding{72}}\upd & {\ding{72}}\upd & {\ding{72}}\upd & \xm{\ding{72}} & \cm{\ding{72}} & {\ding{72}}\upd & & \cm\cm & & & & & & \\[0.1cm] CF$_2$ & \hspace{8pt}+ & {\ding{72}}\upd & {\ding{72}}\upd & {\ding{72}}\cm & \xm{\ding{72}} & \cm\cm & {\ding{72}}\upd & & {\ding{72}}\upd & & & & & & \\[0.1cm] $kT$ & \cm\xm & \xm\cm & \cm\cm & $-\,$\cm & {\ding{72}}\cm & \cm\xm & \cm\cm & \cm\cm & \cm\cm & $+\, +$ & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm \\[0.1cm] $N_{\rm xillver}$ & \cm\xm & \cm\cm & \xm\xm & \cm\xm & $+\,+$ & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & \cm\cm & $-\,$\xm & {\ding{72}}\upd & \cm\cm \\[0.1cm] \hline & & & & & & & & & & & & & & & \\[-0.3cm] Fit & {\LARGE \color{green}$\bullet$}{\LARGE \color{red}$\bullet$} & {\LARGE \color{red}$\bullet$}{\LARGE \color{green}$\bullet$} & {\LARGE \color{green}$\bullet$}\gbul & {\LARGE \color{red}$\bullet$}\rbul & {\LARGE \color{red}$\bullet$}\rbul & {\LARGE \color{yellow}$\bullet$}{\LARGE \color{red}$\bullet$} & {\LARGE \color{green}$\bullet$}\gbul & {\LARGE \color{green}$\bullet$}\gbul & {\LARGE \color{yellow}$\bullet$}\ybul & {\LARGE \color{red}$\bullet$}\rbul & {\LARGE \color{green}$\bullet$}\gbul & {\LARGE \color{green}$\bullet$}{\LARGE \color{yellow}$\bullet$} & {\LARGE \color{red}$\bullet$}{\LARGE \color{green}$\bullet$} & {\LARGE \color{green}$\bullet$}\gbul & {\LARGE \color{green}$\bullet$}\gbul \\[0.1cm] \hline \\[-0.3cm] \end{tabular} \end{adjustbox} \label{table:summary} \end{table*} We present in Table\,\ref{table:summary} a qualitative summary of the results we obtained from the blind spectral analysis, based on the classification criteria defined 
below. While these criteria are to a certain extent (but unavoidably) arbitrary, none of the conclusions of the paper depend substantially on their exact definition. \noindent $\bullet$ {\bf Individual parameters:} For all the parameters, except for the spin and, to a lesser extent, the iron abundance, the measurements are generally very well constrained. Thus, we define both a \textit{full} and a \textit{fair} success criterion. The former (denoted by the \cm\,sign) is met when a measurement is consistent with the input value within a confidence level of 90\%, while the latter (denoted by the {\ding{72}}\,sign) is met when the fitted and input values are formally inconsistent, but agree with each other within a 10\% uncertainty\footnote{We note that we considered the ratios $\xi_{\rm fit}/\xi_{\rm input}$ for the ionization parameters of the accretion disc ($\rm \xi_d$) and of the warm absorber ($\rm \xi_{wa}$) rather than the ratios of the logarithms. }. All the other cases are classified as \textit{failures} (denoted by the \xm\,sign). \noindent $\bullet$ {\bf Spin classification:} Since the measurement of the spin is the main aim of our study and the spin is the parameter that shows the most complex behaviour in the fits, we adopted a different approach to classify the goodness of our constraints on the spin parameter. We divided the 0--0.998 spin range into three bands: low spin ($a^\ast \in [0,0.4[$), intermediate spin ($a^\ast \in [0.4,0.8[$), and high spin ($a^\ast \in [0.8,0.998]$).
Hence, we classified the measurements based on the following criteria: (a) full success if the measured value is consistent with the input one within the 90\% confidence level and the uncertainty range is within a single spin band; (b) fair success if either the measured value is consistent with the input value within the 90\% confidence level but the uncertainty range covers two spin bands, or the measured value is not consistent with the input value but the uncertainty range is within the same single band that contains the input value; (c) {\it undetermined} (denoted by the {\bf?}\,sign) if the measured value is consistent with the input one but the uncertainty range covers three spin bands; and (d) {\it failure} for the other cases. \noindent $\bullet$ {\bf Fit accuracy:} Irrespective of the values of the individual parameters and their degree of adherence to the input values, we also defined the following quality criteria for the accuracy of the whole fit: a) {\it excellent} if the adopted model is correct and the fit statistic is within a $\Delta \chi^2$ of $2.3$ from the putative absolute minimum (see below); b) {\it good} if either the model is correct and the distance from the absolute minimum is $\Delta \chi^2 < 9.2$, or the model misses a component that turns out to be significant at less than 99\%; c) {\it inaccurate} in all the other cases, including overfitting. The absolute minimum is evaluated as $\min\{\chi^2_0, \chi^2_{\rm a}, \chi^2_{\rm b}\}$, where $\chi^2_0$ is obtained by applying the input model a posteriori and $\chi^2_{\rm a,b}$ are the results from the blind spectral analysis (provided that the correct model is used). We found that 18 out of 30 fits were excellent, 4 out of 30 were good, and 8 out of 30 were inaccurate. We note that the application of the correct model (i.e. corresponding to the input one), as becomes evident below, does not imply that the input parameters are individually recovered with success (see Fig.\,\ref{fig:contours}).
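The classification rules above can be made concrete with a short helper. This is our own sketch of the stated criteria (the function and variable names are ours), using the convention that the 90\% confidence interval is $[{\rm lo}, {\rm hi}]$.

```python
def classify_parameter(input_val, fit_val, lo, hi):
    """Full/fair/failure criterion for an individual parameter:
    full    -- input value inside the 90% confidence interval [lo, hi]
    fair    -- formally inconsistent, but fitted and input values agree
               to within 10% of each other
    failure -- everything else
    """
    if lo <= input_val <= hi:
        return "full"
    if abs(fit_val - input_val) <= 0.1 * abs(input_val):
        return "fair"
    return "failure"

def spin_band(a):
    """Index of the band containing spin a: [0,0.4), [0.4,0.8), [0.8,0.998]."""
    return 0 if a < 0.4 else (1 if a < 0.8 else 2)

def classify_spin(input_spin, lo, hi):
    """Criteria (a)-(d) for the spin: count how many bands the 90%
    uncertainty range [lo, hi] covers, and check input consistency."""
    consistent = lo <= input_spin <= hi
    n_bands = spin_band(hi) - spin_band(lo) + 1
    if consistent:
        return {1: "full", 2: "fair", 3: "undetermined"}[n_bands]
    # inconsistent: fair only if the range sits inside the single band
    # that also contains the input value
    if n_bands == 1 and spin_band(lo) == spin_band(input_spin):
        return "fair"
    return "failure"
```

For instance, a spin simulated at $a^\ast=0.9$ and fitted with a 90\% range of 0.85--0.95 counts as a full success, while an input of 0.99 just outside that range would still be a fair success, since the range stays within the high-spin band.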
We stress again, however, that even inaccurate fits are fully acceptable on statistical grounds and meet the three conditions listed in \S\,\ref{sec:fitting}. \begin{figure*} \centering {\includegraphics[width = \textwidth]{plots/Reflection_par_SetG_colorcode.pdf}} \caption{Input values (black dots) of the reflection parameters assumed for creating the various simulations (for Set\,G). The best-fit values obtained for the various parameters are shown as squares and diamonds for the two different realizations. The colour code refers to the quality of the fit as a whole: green for excellent, yellow for good, and red for inaccurate (see text for details). The error bars represent the 90\,\% confidence levels obtained from the MCMC analysis.} \label{fig:refpar} \end{figure*} We summarize below the constraints that we obtained on the relevant parameters that are always present in the model by construction, distinguishing between accurate (either good or excellent) and inaccurate fits as follows: \begin{itemize} \item The measure of the {\it spin parameter} was a full/fair success in 7+4 cases, while it was undetermined/failed in 5+6 cases out of the 22 accurate fits. In the 8 inaccurate fits, 6 spins were fairly retrieved and 2 were undetermined. Low and intermediate spins (i.e. lower than 0.8) were present in 9 out of 15 simulations (i.e. 18 spectral fits, 13 of which were accurate). A low spin value was determined correctly and well constrained in only 1 out of 18 cases. The measure of the spin was fairly successful in 5 cases and it was undetermined in 6. However, for the 6 high-spin simulations (9/12 accurate fits), the measurements were successful in 10 cases (5 fully, 5 fairly), undetermined once, and the only failure was in a fit classified as excellent. 
In summary, the two different fits corresponding to the same simulated spectrum might result in different best-fit values of the spin parameter if one of the fits hits a secondary minimum or makes use of a {\it wrong} model, and is thus classified as inaccurate. However, even fully successful fits are not always able to recover the correct value of the spin, with a clear preference for high values. \item The {\it height of the lamp post}, which is the other key parameter that determines the strength of the reflection component in the observed spectra, was measured with success in 7 (full) plus 2 (fair) cases, while it failed in the remaining 13 of the 22 accurate fits. However, 3 more full successes were found among the 8 other fits. The role of the source height is further discussed later on. \item The {\it disc inclination} was measured with success in 16 plus 4 of the accurate fits (2 failures) and 2 plus 2 of the inaccurate fits (4 failures). \item The {\it photon index} was measured successfully in all cases, of which 20/30 were a full success. We found a maximum difference between the measured and input photon index of $\Delta \Gamma = 0.12$. \item The {\it disc ionization parameter} was retrieved with success in 13 plus 2 of the accurate fits (with 7 failures) and 3 of the inaccurate fits (5 failures). \item The measurement of the {\it iron abundance} was fully successful in 15/22 accurate fits and in 4/8 inaccurate fits, and unsuccessful in all the other cases. \end{itemize} \noindent As already noted, failures in measuring the individual parameters might also occur in cases in which the fit is highly accurate for both analysts (e.g. G8, G11), which is a possible indication of complex degeneracies. The various simulated spectra for Set G together with their corresponding residuals are presented in Fig.\,\ref{fig:appendix-spectra}.
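The 90\% confidence intervals quoted throughout come from the MCMC procedure of \S\,\ref{sec:fitting}. As an aside, the Goodman--Weare stretch move underlying {\tt EMCEE} is compact enough to sketch in pure NumPy; the toy Gaussian target and the chain sizes below are illustrative assumptions, not the actual fit.

```python
import numpy as np

def stretch_move(log_prob, p0, n_steps, a=2.0, seed=0):
    """Minimal Goodman--Weare affine-invariant ensemble sampler (the
    algorithm behind EMCEE), updating one walker at a time.

    log_prob : callable returning the log posterior density
    p0       : (n_walkers, n_dim) initial ensemble
    Returns the chain, with shape (n_steps, n_walkers, n_dim).
    """
    rng = np.random.default_rng(seed)
    walkers = np.array(p0, dtype=float)
    n_walkers, n_dim = walkers.shape
    logp = np.array([log_prob(w) for w in walkers])
    chain = np.empty((n_steps, n_walkers, n_dim))
    for step in range(n_steps):
        for k in range(n_walkers):
            j = rng.integers(n_walkers - 1)   # complementary walker,
            j += j >= k                       # skipping k itself
            # stretch factor z drawn from g(z) ~ 1/sqrt(z) on [1/a, a]
            z = (1.0 + (a - 1.0) * rng.random()) ** 2 / a
            prop = walkers[j] + z * (walkers[k] - walkers[j])
            lp = log_prob(prop)
            # acceptance includes the z^(n_dim - 1) volume factor
            if np.log(rng.random()) < (n_dim - 1) * np.log(z) + lp - logp[k]:
                walkers[k], logp[k] = prop, lp
        chain[step] = walkers
    return chain

# Toy target: standard 2D Gaussian, with the paper's 170 walkers but far
# fewer iterations; the first 10% of the chain is discarded as burn-in.
rng0 = np.random.default_rng(1)
chain = stretch_move(lambda x: -0.5 * np.dot(x, x), rng0.normal(size=(170, 2)), 200)
flat = chain[20:].reshape(-1, 2)
```

With the settings of \S\,\ref{sec:fitting} (170 walkers and 3000 iterations) the chain holds $170 \times 3000 = 510{,}000$ samples, of which the first 51,000 (10\%) are discarded as burn-in, exactly as quoted there.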
The best-fit results for the reflection components are presented in Fig.\,\ref{fig:refpar}, while those for the absorption and thermal components are presented in Fig.\,\ref{fig:AbsApecpar}. Table\,\ref{table:App} shows the input and best-fit values of the lamp-post height and spin parameters. We report the best-fit $\chi^2/{\rm dof}$ values in the same table. \begin{figure*} \centering {\includegraphics[width = 0.99\textwidth]{plots/Abs_apec_par_SetGK_colorcode.pdf}} \caption{Similar to Fig.\,\ref{fig:refpar} but for the absorption parameters (top panels) and the gas temperature of the thermal emission component (last panel), presenting the results for Sets\,G and K. The column densities are in units of $10^{22}\,\rm cm^{-2}$.} \label{fig:AbsApecpar} \end{figure*} \section{Discussion} \label{sec:discussion} In this work, we simulated high-quality single-epoch spectra of AGN at low redshift in the 0.3--79\,keV band using the responses of both {\it XMM-Newton} and {\it NuSTAR}. We assumed a general spectrum that includes, in addition to the primary emission, both ionized and neutral reflection, thermal emission, a warm absorber, and two layers of neutral partially covering absorbers. While in most cases the blind fitting procedure can be considered successful, it often fails to retrieve all of the individual input parameters. Below we examine the possible causes. \subsection{The Kerr BH case} \label{sec:highspin} As noted in \S\,\ref{sec:results}, spectral analysis tends to recover high input spins better than low/intermediate spins: 10 out of 12 high-spin measurements were at least fairly successful, while only 7 out of 18 low/intermediate spins were reasonably retrieved. Furthermore, the measured spin distribution, as reported in the literature, shows a clear tendency towards high spins \citep[e.g.][]{Walton13, Reynolds14, Vasudevan16}. This evidence was the main motivation for us to simulate a set of high-spin spectra (hereafter Set\,K).
Thus, we generated a set of 9 simulations (3 per person) by fixing the spin parameter to its maximum allowed value; we refer to these simulations as K1--K9. The spin parameter was nonetheless free to vary within the 0--0.998 range during the spectral fitting. The constraints on the best-fit parameters of the reflection and absorption components are presented in Table\,\ref{table:summary}, and plotted together with the corresponding input values in Figs.\,\ref{fig:refparKB} and \ref{fig:AbsApecpar}, respectively. The spin was retrieved successfully in 6 (3 fully and 3 fairly) of the 11 accurate fits, with 5 failures, while the 7 inaccurate fits returned 2 fairly constrained spins (with 5 failures). In total, only 18 out of 30 high-spin cases (in sets G and K) were a success: 13 (8 plus 5) are obtained in the 20 accurate fits, while 5 more fair successes emerge from the 10 inaccurate fits. This suggests that, even though it plays an important role, a high spin value is not the only factor determining whether the input value is recovered. \subsection{Effects of absorption} \label{sec:absorption} We summarize below the constraints we obtained on the absorption components in Sets G and K. We note that these components can be freely included in or excluded from the fits. 1) The fully covering warm absorber is included in 23 simulations (equivalent to 46 fits). Its column density and ionization parameter were successfully recovered in 34 (30 full plus 4 fair successes) and 38 (36 plus 2) cases, respectively. Both rates are higher than the incidence of accurate fits (32/46). 2) The partial-covering low-column absorber is present in 21 simulations. Its column density and covering fraction were measured successfully in 25 plus 1 and in 28 plus 1 cases, respectively, against 25 out of 38 accurate fits (in 4 cases this component is missed in the spectral analysis). 3) The partial-covering high-column absorber is present in 19 simulations.
Its column density and covering fraction were measured successfully in 23 plus 12 and 24 plus 12 cases, respectively, while the accurate fits number 24 out of 37 (this component is missed only once). Summarizing, the properties of the absorbers are correctly estimated in the majority of the blind fits. Even though the number of simulations that we performed is statistically small, we can still gain a general idea of the degeneracy between the reflection-based models and the complex absorption model. The inclusion of an absorber that is not intrinsically present may mimic some of the relativistic effects on the spectrum, thus resulting in a wrong measurement of the spin parameter. However, it seems that absorption plays only a marginal role in the ability to measure spins, as the overall absorption configuration in the fits was correct in 37 out of 48 cases. These issues are further investigated in Sections\,\ref{sec:modeldependence} and \ref{sec:PCabs}. \begin{figure*} \centering {\includegraphics[width = \textwidth]{plots/Reflection_par_SetBK_colorcode.pdf}} \caption{Similar to Fig.\,\ref{fig:refpar} but for Sets K and B.} \label{fig:refparKB} \end{figure*} \subsection{The bare sources case} \label{sec:baresource} In order to completely remove the uncertainties associated with absorption effects, we performed an additional set of 6 simulations (two per person) of bare sources, without including any intrinsic absorption (hereafter Set\,B, with the individual simulations labelled B1--B6). The input model of this set can be written in XSPEC terminology, neglecting Galactic absorption, as follows: {\tt model = RELXILLLP + XILLVER + APEC}. The simulated spectra and the $\chi^2$ residuals are presented in Fig.\,\ref{fig:appendix-spectra}. The input parameters together with the best-fit values are plotted in Fig.\,\ref{fig:refparKB}.
We summarize in Table\,\ref{table:summary} the qualitative constraints on the parameters that we obtained for this set. Four of these simulations involved a high spin value; of these, two have a lamp-post height $\leq 5\,r_{\rm g}$ and two $> 5\,r_{\rm g}$. All the successful spin measurements (1 full and 3 fair, with 3 out of 4 accurate fits) occurred for a low height of the source. Interestingly, at low height even a small spin is correctly retrieved (1 full and 1 fair success in 2 out of 2 accurate fits). Conversely, when the height of the lamp post is larger than 5\,$r_{\rm g}$, the measurement of the spin always fails, irrespective of its value and despite the good rate (4/6) of accurate fits. The other disc reflection parameters were all well constrained in all cases, except for the Fe abundance in a single instance. As for the thermal and distant reflection components, they were both correctly assessed in 10 out of 12 fits. In total, 19 out of the 30 simulations (i.e. 38 out of 60 fits) were performed assuming maximally rotating black holes. The spectral analysis resulted in 22 successes in the measurement of the spin (9 full and 13 fair), with 1 undetermined and 15 failed cases, for 25/38 accurate fits. Remarkably, when we consider only the simulations performed with a low lamp-post height, the total count features all the 22 successes and only 2 failures, for the same fraction (16/24) of accurate fits. This strongly suggests that the height of the primary X-ray source is the most critical ingredient for an accurate measurement of the black hole spin. \subsection{Effects of the lamp-post height} In the light of these findings, we further explored the dependence of our results on the input lamp-post height. Half of the simulations were performed assuming an input lamp-post height lower than or equal to 5\,$r_{\rm g}$. In general, these heights were measured successfully in 21 (16 plus 5) fits, with 5 full successes coming from the 9/30 inaccurate fits.
Three of these 15 simulations were performed assuming a low input spin parameter. The fit was accurate in 5 out of 6 cases, returning 2 successes in the measurement of the height and 4 (1 plus 3) in that of the spin. By considering the second half of the simulations, with a lamp-post height larger than 5\,$r_{\rm g}$, we find that the height was correctly estimated in only 6/30 fits, despite 21 out of 30 fits being accurate. In 5 out of 30 spectral fits we were able to recover the spin (2 full and 3 fair successes), while in 6 out of 30 cases the spin was undetermined. Eight of these 15 simulations were performed assuming a low spin. As already noted, the high spin and large height case gave 1 undetermined and 13 failures, despite 9 out of 14 fits being accurate. This apparently suggests that, at large heights, the value of the spin has little weight, and the chance of success in its measurement is not only small but also essentially unpredictable (i.e. it depends on several other factors, such as the iron abundance). In this sense, the preference for low spins is most likely a bias, so any measurement at large heights should be taken with caution \citep[see also][]{Fabian14}. The full summary is given in Table\,\ref{table:spinvsheight}. \begin{table} \caption{Summary of the constraints on determining the spin, showing their dependence on the lamp-post height.
The values/fractions between parentheses refer to the accurate fits only.} \centering \begin{adjustbox}{max width=0.98\linewidth} \begin{tabular}{l|l} \hline \hline & \\[-0.3cm] $a^\ast < 0.8$: 11 models & $a^\ast \geq 0.8$: 19 models \\ & \\ $h \leq 5 \,r_{\rm g}$: 6 (5) fits & $h \leq 5 \,r_{\rm g}$: 24 (16) fits \\ ~~~~\llap{\quad - }~~ Full success: 1/6 ( 1/5 ) & ~~~~\llap{\quad - }~~ Full success: 9/24 ( 9/16 ) \\ ~~~~\llap{\quad - }~~ Fair success: 3/6 ( 2/5 ) & ~~~~\llap{\quad - }~~ Fair success: 13/24 ( 7/16 ) \\ ~~~~\llap{\quad - }~~ Undetermined: 0/6 ( 0/5 ) & ~~~~\llap{\quad - }~~ Undetermined: 0/24 ( 0/16 ) \\ ~~~~\llap{\quad - }~~ Failure: 2/6 ( 2/5 ) & ~~~~\llap{\quad - }~~ Failure: 2/24 ( 0/16 ) \\ & \\ $h > 5 \,r_{\rm g}$: 16 (12) fits & $h > 5 \,r_{\rm g}$: 14 (9) fits \\ ~~~~\llap{\quad - }~~ Full success: 2/16 ( 2/12 ) & ~~~~\llap{\quad - }~~ Full success: 0/14 ( 0/9 ) \\ ~~~~\llap{\quad - }~~ Fair success: 3/16 ( 1/12 ) & ~~~~\llap{\quad - }~~ Fair success: 0/14 ( 0/9 ) \\ ~~~~\llap{\quad - }~~ Undetermined: 6/16 ( 4/12 ) & ~~~~\llap{\quad - }~~ Undetermined: 1/14 ( 1/9 ) \\ ~~~~\llap{\quad - }~~ Failure: 5/16 ( 5/12 ) & ~~~~\llap{\quad - }~~ Failure: 13/14 ( 8/9 ) \\ \hline \end{tabular} \end{adjustbox} \label{table:spinvsheight} \end{table} \subsection{Model dependence} \label{sec:modeldependence} By construction, the disc reflection component is always included in both the simulated models and the fitted models, while the presence of all the other components, either additive (distant reflection, thermal emission) or multiplicative (warm, cold absorption) is arbitrary. This allows us also to investigate the impact of model dependence on the ability to accurately recover the reflection parameters. Considering all the simulations from sets G, K, and B, soft X-ray thermal emission was present in 27 out of 30 cases, and was missed in only one out of the 54 relevant fits (K4a). 
Of the 3 out of 30 cases where it was not required, it was included in 5 out of 6 fits (G4a,b, G7b, B1a,b). Whether or not this component is correctly accounted for seems to have little effect on the spin determination. Interestingly, its inclusion does not necessarily undermine the measurement of the spin, but the accuracy is lower (compare the results of G7b versus G7a in Table\,\ref{table:summary}; we note that model G7 is a low-spin, moderate-height case). In total, this component was measured successfully in 50 out of 53 spectral fits. While the soft thermal component is easier to distinguish from the smooth, blurred reflection, the contribution from the distant reflector can significantly modify the shape of the Compton hump above 10 keV. This is present in 29 out of 30 models. It is missed once, and added instead in both fits of the single model (K5) where it was not originally included. This is a maximum-spin, low-height case, which, as we have seen, should have a higher chance of success. Indeed, both fits meet the fair success criterion for the spin. However, we could argue that the inclusion of distant reflection prevents us from obtaining more stringent constraints. In total, the normalization of this component was measured successfully in 51 out of 57 spectral fits. Absorption is allowed only in the 24 models of the G and K sets. The fully covering, warm layer is present in 23 out of 24 simulations. Remarkably, it is never missed and never added. This suggests that the features imprinted on the spectrum from mildly (or even highly) ionized gas in the line of sight are relatively easy to identify, at least at the X-ray brightness level of the simulated spectra. Hence, we expect this component to have no significant effect on the measurement of the spin. In reality, however, the different treatment of warm absorption might lead to incompatible spin measurements for the same data set of the same source, as in NGC\,3783 (\citealt{Brenneman11} versus \citealt{Patrick11}).
The micro-calorimeters on board {\it ATHENA} and, possibly, earlier X-ray missions such as Arcus \citep{Smith16} and XARM will conclusively remove this source of ambiguity. The lower column partial-covering absorber is included in 21 out of 24 models, while the higher column partial-covering absorber in 19 out of 24. The former is missed 4 times and added in 2 out of 6 fits, while the latter is missed once and added in 4 out of 10 fits. Without distinguishing between the relative column densities, the configuration of the partial-covering, cold absorber consists of a single layer in 6 out of 24 cases and of a double layer in 17 out of 24 cases. No cold layers are included in the remaining case (G5), but they are both used in G5b, leading to an undetermined spin measurement (as opposed to a fair success for G5a, where the correct model is applied). Two layers, instead of the single one required, are adopted in 5 out of 12 fits, while one of the two layers is missed in 5 out of 34 fits. Surprisingly, the addition of a layer does not always preclude a decent (or even good) measurement of the spin (e.g. G2b), although the statistical significance of the second layer in these cases is most likely marginal. Conversely, all the 5 cases in which a layer is missed correspond to failed or undetermined spin measurements. We conclude that the effects of partial covering and of relativistic reflection, when high-quality broadband spectra are available, can generally be well distinguished from each other. We further discuss this point in the next section. \subsection{Reflection versus partial covering absorption} \label{sec:PCabs} According to some interpretations \citep[e.g.][]{Miller08}, no relativistic signature is needed to explain the spectra (and variability) of most AGN.
This is a natural consequence of the substantial statistical equivalence between absorption- and reflection-based models, especially when the spectra are complex and require some combination of both ingredients. The ensuing dispute initially concentrated on the nature of the Fe K line broadening, since partial covering absorption could reproduce both the smooth, extended red wing of the putative relativistic line as a gentle continuum curvature and its blue horn as an absorption edge. The appearance of tentative hard X-ray excesses in the \textit{Suzaku} era added a further controversial element, which could be explained either as a Compton reflection hump \citep[e.g.][]{Walton13} or as a signature of Compton-thick absorption \citep[e.g.][]{Tatum13}, thus reinforcing the polarity between the two mainstream scenarios. The advent of \textit{NuSTAR}, providing high-quality spectra also above 10 keV, accurate background subtraction, and substantial overlap with the band covered by the other X-ray observatories, can greatly reduce this persistent ambiguity. While, in principle, the ambiguity works in both ways, in our simulations we did not assume any pure partial-covering configuration (i.e. with no disc reflection), so we cannot verify whether an absorption layer could be missed by overestimating the amount of reflection. This is beyond the scope of our work. In fact, in this context it is more interesting to explore the possibility for the simulated models to be adequately reproduced without considering any relativistic component. We therefore checked the consequences of replacing the \texttt{relxilllp} component in our fits with a simple power law, in which the cut-off is fixed at 300 keV for consistency with the primary continuum in the parent model. This is equivalent to fixing the reflection fraction in \texttt{relxilllp} to zero. In order to compensate for the lack of disc reflection, we allow for a larger complexity in the absorption configuration. 
It turns out that only 1 of the 30 simulated spectra could also be perfectly described by a pure partial-covering model, in the sense that the corresponding fits meet all of our acceptance criteria, namely good statistics and lack of residuals. This is G10 ($\chi^2/\rm dof = 397/394$), where three cold layers are required: $N_{\rm H,1} = 6 \times 10^{21}$ cm$^{-2}$ (CF$_1 = 1$), $N_{\rm H,2} = 5.5 \times 10^{23}$ cm$^{-2}$ (CF$_2 \sim 0.1$), and $N_{\rm H,3} = 3.6 \times 10^{24}$ cm$^{-2}$ (CF$_3 \sim 0.25$). The fully covering, thinner layer is perfectly matched to a component of the input model, whose second layer has $N_{\rm H} = 1.9 \times 10^{24}$ cm$^{-2}$ and CF = 0.26. Although G10 is a low-spin ($a^\ast = 0.12$), large-height ($h = 32\,r_{\rm g}$) case, a much more complex (and rather extreme) absorption pattern is required to compensate for the lack of disc reflection in the fit. In three other cases (G14, K4, and K9) the reduced $\chi^2$ is fair (1.02, 1.10, and 1.04, respectively), and two layers have similar (even if not strictly consistent) properties to the input components. There are, however, some clear residual structures that make the absorption-only models unsatisfactory. A third partial-covering layer is not statistically required. At low S/N, these spectra (with maximal spin but $h > 5 \,r_{\rm g}$) could be easily misinterpreted. A peculiar case is that of G5, where no cold absorbers are included in the simulation. This spectrum can be well fitted by a power-law continuum ($\chi^2/\rm dof = 479/477$) that is subject to warm absorption only, with the exact input parameters. Even if the disc reflection component in this case is very smooth and featureless, a model where this is correctly accounted for (G5a; $\chi^2/\rm dof = 449/473$) is still statistically preferred \citep[at the $>4\sigma$ level based on the corrected Akaike Information Criterion usually adopted for non-nested models;][]{Akaike74}. 
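As an aside, the information-criterion comparison for G5 can be reproduced to order of magnitude from the fit statistics alone. Below is a minimal sketch, assuming a likelihood $\propto e^{-\chi^2/2}$ (so $\mathrm{AIC} = \chi^2 + 2k$, plus the small-sample AICc correction) and hypothetical free-parameter counts of 4 and 8 for the two competing fits; the actual parameter counts are not listed in the text:

```python
def aic_c(chi2, k, n):
    """Akaike Information Criterion for a chi^2 fit statistic,
    with the small-sample correction (AICc)."""
    return chi2 + 2 * k + 2 * k * (k + 1) / (n - k - 1)

# G5 values quoted in the text; number of spectral bins n = dof + k
k_pl, k_refl = 4, 8            # hypothetical free-parameter counts
n = 477 + k_pl                 # power-law fit: chi^2/dof = 479/477
aicc_pl = aic_c(479.0, k_pl, n)
aicc_refl = aic_c(449.0, k_refl, n)   # reflection fit: chi^2/dof = 449/473

delta = aicc_refl - aicc_pl    # negative => reflection model preferred
```

With these assumed counts, the resulting $\Delta\mathrm{AICc}$ is of order $-20$, consistent with the strong statistical preference for the reflection model quoted above.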
It is likely that the contribution from disc reflection would be missed in lower quality data. A handful ($\leq 5$) of other cases, in principle, could become acceptable after the inclusion of a third partial-covering layer if the spectra were fainter by at least an order of magnitude with respect to the simulated spectra, which could mask the residuals within the photon noise. We note, however, that such a complex absorption configuration (even if real) should be rejected as a form of data overfitting at low S/N. We conclude that in most cases (26 out of 30), including all of the set~B bare spectra, disc reflection cannot be missed or mimicked by absorption effects in high-quality broadband X-ray spectra. The problems with its identification arise when the reflected spectrum is extremely smooth, so this could become a non-negligible issue if the X-ray corona is radially extended and thus responsible for the Comptonization of the relativistic signatures from the inner disc \citep[see][]{Steiner17}. An interesting outcome of our analysis is that the failed spin measurements have a nearly flat distribution, which is clearly different from the distributions reported in the literature. We plot in Fig.\,\ref{fig:histogram} the distribution of spin measurements listed in \cite{Vasudevan16} together with that of wrong and undetermined measurements from our blind spectral analysis. As the uncertainties on the individual entries are rather large for both samples, we do not attempt any statistical test to compare quantitatively the two distributions. We note, however, that, while the selection effects leading to an observed spin distribution peaked towards higher values are well known \citep[e.g.][]{Brenneman11}, our results seem to discard any systematics or biases associated with possible reflection versus absorption spectral degeneracies. 
\begin{figure} \includegraphics[width = 0.48\textwidth]{plots/spin_distribution.pdf} \caption{Distribution of spin measurements from \cite{Vasudevan16} (blue histograms) and the distribution of wrong and undetermined measurements (red histograms) obtained from the simulations performed in the current work.} \label{fig:histogram} \end{figure} \subsection{Simulations with ATHENA} \label{sec:ATHENA} \begin{figure}[!] \centering \includegraphics[width = 0.33\textwidth]{plots/corners/Athena_corner_N01.pdf} \\ \includegraphics[width = 0.33\textwidth]{plots/corners/Athena_corner_R01.pdf} \\ \caption{Black hole spin vs. lamp-post height contour plots obtained from the MCMC analysis of the best spectral fits of G6 (top panel) and G11 (bottom panel), performed using the {\it ATHENA}-WFI response files. The input spin and height values (red lines) are listed in the top right corner of each panel for the corresponding simulation.} \label{fig:ATHENA-corner} \end{figure} To verify whether the failures require even higher data quality (hence inaccessible to the current X-ray observatories), we chose two models for which the measured values of the spin were either undetermined (G6) or wrong (G11) despite the excellent accuracy of both fits. Based on our results, both cases are expected to be rather challenging, as they involve intermediate black-hole spin and large lamp-post height. We then simulated the same input models using the response files (with an exposure time of 100\,ks) of the Wide Field Imager \citep[WFI;][]{WFI}, one of the two scientific instruments proposed for the {\it ATHENA} X-ray observatory \citep{ATHENA}. We performed an MCMC analysis as described in Section\,\ref{sec:fitting}. For model G6, which has $h = 17 \,r_{\rm g}$, the results are similar to those obtained by the spectral analysis of the joint {\it XMM-Newton} and {\it NuSTAR} spectra. 
Even with WFI, the measured value of the spin remains undetermined, as shown by the $a^\ast-{\rm vs}-h$ contour plots in Fig.\,\ref{fig:ATHENA-corner}. For model G11, which has almost the same input spin but lower height ($\rm 8\,r_{\rm g}$), the measured spin remains inconsistent with the actual value of 0.7, but it is now much closer to this value ($\sim 0.9$ against 0.2). This test, although unsuccessful, confirms the main indication of our study, i.e. the importance of a small lamp-post height (hence an effective illumination of the innermost disc) for an accurate measure of the BH spin. However, one of the limitations of fitting the spectra with {\it ATHENA} is the inability to probe the Compton hump at high energies. We will further investigate this issue in a future work, where we will also consider the potential of high-resolution spectroscopy. \section{Conclusions and future work} \label{sec:conclusion} The measure of black-hole spin in AGN has many important implications, and the modelling of X-ray reflection features from the inner accretion disc provides a powerful method in this sense. The reliability of the available reflection-based SMBH spin measurements, however, is not fully established yet. In this work, we have investigated this issue through the simulation of high-quality broadband spectra, representative of the best possible data that can be achieved with a single simultaneous \textit{XMM--Newton} and \textit{NuSTAR} observation of a local, bright AGN. A similar attempt has been carried out recently by \cite{Bonson16} and by \cite{Choudhury17}. Both studies, however, only considered the spectra in the \textit{NuSTAR} energy range (2.5--79 keV), thereby neglecting the statistically dominant soft X-ray excess component. Moreover, the ideal scenario of pure reflection was assumed in both cases. 
We allowed, instead, for the general spectral complexity observed in real AGN spectra, including absorption, thermal, and distant reflection components in our parent model. The spectra were simulated by one member of the team and blindly fitted by the other two, the only constraint being to use no more components than employed in the parent model. We have shown that the analysis of single-epoch AGN spectra can be genuinely challenging. In fact, our simulations suggest that a correct determination of the BH spin parameter is not straightforward, and that the height of the X-ray source (in a lamp-post geometry) plays a major role in the spin measurement. This is not surprising and agrees with the conclusions reached by \cite{Fabian14}, \cite{Bonson16}, and by \cite{Choudhury17}: the closer the source to the black hole, the stronger the relativistic distortions that allow an accurate spin measurement. However, we also demonstrated that complex (i.e. partial-covering, multi-layer) absorption does not seem to have a critical impact on the ability to measure the spin correctly, at least at very high S/N. In summary, 42 out of 60 blind fits (from the 30 simulations) turned out to be accurate, in that the model employed in the analysis corresponds to the input model and the overall $\chi^2$ is equal or very close to the expected absolute minimum (Section\,\ref{sec:results}). The spin was retrieved perfectly or reasonably well in 12 out of 42 and 10 out of 42 cases, respectively. It was unconstrained or wrong in 5 out of 42 and 15 out of 42 fits. The remaining 18 fits, formally inaccurate, actually return 9 additional acceptable spin measures (with 2 undetermined and 7 failed). 
By dividing the simulations over the four quadrants of the spin/height plane identified by the $a^\ast = 0.8$ and $h = 5\,r_{\rm g}$ values (Table \ref{table:spinvsheight}), we obtain a remarkable 16 out of 16 success rate for the accurate fits in recovering the correct spin in the high-spin/low-height quadrant. We note that the fraction of accurate fits is virtually constant over the whole parameter space. Several lines of evidence suggest a compact primary source that is located at a few gravitational radii from the BH. Spectral-timing and reverberation studies, for example, are suggestive of a physically small corona that lies within 3--10 $\rm r_g$ above the central BH \citep[e.g.][]{Fabian09, Demarco13, Emma14, Gallo15}. Moreover, X-ray microlensing analyses of some bright lensed quasars suggest that the hard X-rays are emitted from compact regions with half-light radii less than $6\,\rm r_g$ \citep{Char09, Mosquera13, Reis13}. Our findings imply that X-ray reflection is indeed an effective method to measure the BH spin, provided that reflection dominates the broadband intrinsic (i.e. before foreground absorption) spectrum, which might not always be the case \citep[e.g.][]{Parker17Ton}. Furthermore, our analysis implies that the actual nature of the X-ray source (as yet unknown) should heavily affect any reflection-based spin measure. Several other factors may lead to a wrong determination of the spectral parameters in single-epoch observations, such as the choice of a \textit{wrong} model. In fact, at low spectral quality, some absorption configurations can indeed mimic the relativistic effects. Moreover, for large lamp-post heights, the chances of reliably assessing the spin are small, and apparently independent of the value of the spin itself. Simply increasing the total number of counts or effective area does not bring any substantial improvement. 
For single-epoch, low-resolution spectra, indirect or complementary arguments, such as energy conservation, fractional variability, or model-independent techniques \citep[e.g.][]{Kammoun17}, are still recommended to support the conclusions of the spectral analysis. Spectral variability, however, could greatly help in constraining the constant parameters in the reflection models, such as the spin, inclination, and iron abundance, as already proved by the NGC~1365 campaign, while high resolution can remove the ambiguities associated with the introduction of ad hoc absorption components. The importance of variability in measuring the spins and the impact of future X-ray missions, carrying calorimeters and polarimeters, will be investigated in a future work. \begin{acknowledgements} We would like to thank the anonymous referee for his/her constructive comments that helped clarify our manuscript. We would like to thank Prof. Andrew Fabian for valuable suggestions and comments. EN received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sk\l odowska-Curie grant agreement no. 664931. The figures were generated using {\tt matplotlib} \citep{Hunter07}, a {\tt PYTHON} library for publication-quality graphics. The MCMC results were presented using the open source code {\tt corner.py} \citep{corner}. \end{acknowledgements} \bibliographystyle{aa}
\section{Introduction} The ability for an agent to localize itself within an environment is a crucial prerequisite for many real-world applications, such as household robots \cite{thrun2005probabilistic}, autonomous drones \cite{forster2014svo}, augmented and virtual reality applications, and video game AI \cite{parisotto2017neural}. In most cases, the main challenge for an agent localizing itself is that it is not provided with a map of the environment, and must therefore simultaneously map the environment and localize itself within the incomplete map it has produced. A wide variety of algorithms to solve this Simultaneous Localization and Mapping (SLAM) task have been developed over a long history \cite{thrun2005probabilistic,cadena2016past}, with modern methods achieving impressive accuracy and real-time performance \cite{mourikis2007multi,kummerle2011g,mur2015orb,engel2014lsd}. These methods still have several shortcomings, owing mainly to the hand-engineered features, dense matching, and heuristics used in the design of these algorithms. For example, most methods are brittle in certain scenarios, such as varying lighting conditions (e.g.\ changing time of day), different weather conditions or seasons \cite{sattler2017benchmarking}, repetitive structures, textureless objects, extremely large viewpoint changes, dynamic elements within the environment, and faulty sensor calibration \cite{cadena2016past}. Because these situations are common in real-world scenarios, robust applications of those systems are difficult. In this paper, we develop a method which can be made more robust to the common situations where previous SLAM algorithms typically degrade. To do this, we formulate a novel neural network architecture called ``\methodname''. 
\methodname~consists of differentiable analogues of the common types of subsystems used in modern SLAM algorithms, such as a local pose estimation model, a pose selection module (key frame selection, essential graph), and a graph optimization process. Because each component in the system is differentiable, the entire architecture can be trained in an end-to-end fashion, enabling the network to learn invariances to the types of scenarios observed during training. To demonstrate the ability of our method to learn pose estimation, we use trajectories sampled from several simulated environments. The first environment is a 2D maze where the agent has a single-pixel row-scan as input. We then scale the model up to 3D mazes based on the ViZDoom environment \cite{kempka2016vizdoom}, where the agent receives an image of the first-person view of the world as input. \section{Related Work} SLAM is a process in which an agent needs to localize itself in an unknown environment and build a map of this environment at the same time, with uncertainties in both its motions and observations. SLAM has evolved from filter-based to graph-based (optimization-based) approaches. Some EKF-based systems have demonstrated state-of-the-art performance, such as the Multi-State Constraint Kalman Filter \cite{mourikis2007multi}, the VIN \cite{kottas2013consistency}, and the system of Hesch et al. \cite{hesch2014camera}. Those methods, even though efficient, heavily depend on linearization and Gaussian assumptions, and thus under-perform their optimization-based counterparts, such as OK-VIS \cite{leutenegger2015keyframe}, ORB-SLAM \cite{mur2015orb}, and LSD-SLAM \cite{engel2014lsd}. Graph-based SLAM typically includes two main components: the front-end and the back-end. 
The front-end extracts relevant information (e.g.\ salient features) from the sensor data and associates each measurement to a specific map feature, while the back-end performs graph optimization on a graph of abstracted data produced by the front-end. Graph-based SLAM can be categorized either as feature-based or direct methods depending on the type of front-end. Feature-based methods rely on local features (e.g.\ SIFT, SURF, FAST, ORB, etc.) for pose estimation. For example, ORB-SLAM \cite{mur2015orb} performs data association and camera relocalization with ORB features and DBoW2 \cite{galvez2012bags}. RANSAC~\cite{fischler1987random} is commonly used for geometric verification and outlier rejection, and there are also prioritized feature matching approaches \cite{sattler2016efficient}. However, hand-engineered feature detectors and descriptors are not robust to motion blur, illumination changes, or strong viewpoint changes, any of which can cause localization to fail. To avoid some of the aforementioned drawbacks of feature-based approaches, direct methods, such as LSD-SLAM \cite{engel2014lsd}, utilize extensive photometric information from the images to determine the pose, by minimizing the photometric error between corresponding pixels. This approach is in contrast to feature-based methods, which minimize the reprojection error. However, such methods are usually not applicable to wide baseline settings \cite{cadena2016past} during large viewpoint changes. Recent work \cite{forster2014svo,forster2017svo} combines feature-based and direct methods by minimizing the photometric error of features lying on intensity corners and edges. 
Some methods focus on dense reconstruction of the scene; for instance, \cite{whelan2016elasticfusion} builds dense globally consistent surfel-based maps of room scale environments explored using an RGB-D camera, without pose graph optimisation, while KinectFusion \cite{newcombe2011kinectfusion} obtains depth measurements directly using active sensors and fuses them over time to recover high-quality surface maps. These approaches still suffer from strict calibration and synchronization requirements, and the data association modules require extensive parameter tuning in order to work correctly for a given scenario. In light of the limitations of feature-based and direct approaches, deep networks have been proposed to learn suitable feature representations that are robust against motion blur, occlusions, dynamic scenes, illumination, texture, and viewpoint changes. They have been successfully applied to several related multiview vision problems, including learning optical flow \cite{dosovitskiy2015flownet}, depth \cite{liu2015deep}, homography between frame pairs \cite{detone2016deep}, and localization \cite{chaplot2018active} and re-localization problems. Recent work includes re-formulating the localization problem as a classification task \cite{weyand2016planet}, a regression task \cite{kendall2015posenet,hazirbasimage2017}, end-to-end trainable filtering \cite{haarnoja2016backprop}, and differentiable RANSAC \cite{brachmann2016dsac}. More specifically, PlaNet \cite{weyand2016planet} formulates localization as a classification problem, predicting the corresponding tile from a set of tiles subdividing the Earth's surface for a given image, thus providing the approximate position from which a photo was taken. PoseNet \cite{kendall2015posenet} formulates 6-DoF pose estimation as a regression problem. One drawback of the PoseNet approach is its relative inaccuracy, compared to state-of-the-art SIFT methods. 
Similarly, \cite{melekhov2017relative} fine-tunes a pretrained classification network to estimate the relative pose between two cameras. To improve its performance, \cite{hazirbasimage2017} added Long-Short Term Memory (LSTM) units to the output of the fully-connected layers, to perform structured dimensionality reduction, choosing the most useful feature correlations for the task of pose estimation. From a different angle, DSAC \cite{brachmann2016dsac} proposes a differentiable RANSAC so that a matching function that optimizes pose quality can be learned. These approaches are not robust to repeated structure or similar-looking scenes, as they ignore the sequential and graphical nature of the problem. Addressing this limitation, work in \cite{clark2017vinet} fuses additional sequential inertial measurements with visual odometry. SemanticFusion \cite{mccormac2017semanticfusion} combines convolutional neural networks (CNNs) and the dense ElasticFusion \cite{whelan2016elasticfusion}. However, classic feature-based methods still outperform the CNN-based methods published to date in terms of accuracy. Recently, there has been an increasing interest in combining navigation and planning in an end-to-end deep reinforcement learning (DRL) framework. The efforts to date can be divided into two categories depending on the presence or absence of external memory in the architecture. Target-driven visual navigation takes a visual observation and an image of the target \cite{zhu2017target} or range findings \cite{tai2017virtual} as input, and plans goal-seeking actions in a 3D indoor simulated environment as the output. In simulated environments, \cite{mirowski2016learning} uses stacked LSTMs in a goal-driven RL problem with auxiliary tasks of depth prediction and loop-closure classification, while \cite{zhang2016deep} added successor features to ease transfer from previously mastered navigation tasks to new ones. 
Work in \cite{bhatti2016playing} augmented DRL with Faster-RCNN for object detection and SLAM (ORB-SLAM2) for pose estimation; observing images and depth from VizDoom, they built semantic maps with 3D reconstruction and bounding boxes as input to an RL policy. To deal with the limited memory of standard recurrent architectures (such as LSTM), more structured external memories have been developed to take the spatial relations of memories into account. \cite{gupta2017cognitive} assumes known ego-motion and constructs a metric egocentric multi-scale belief map (top-down-view latent representation of free space) of the world with a 2D spatial memory, upon which RL plans a sequence of actions towards goals in the environment with a value iteration network. Neural Map in \cite{parisotto2017neural} is a writable structured 2D external memory map for an agent to learn to navigate within 2D and 3D maze environments. These works all assume precise egomotion and thus perfect localization, a prerequisite that can rarely be met in real-world scenarios. Relaxing this assumption and resembling traditional occupancy grid SLAM, Neural SLAM \cite{zhang2017neural} uses an occupancy-grid-like memory map, assuming only an initial pose is provided, and updates the pose beliefs and grid map using end-to-end DRL. One of the key ingredients for the success of graph-based SLAM is the back-end optimization. The back-end builds the pose graph, in which two pose nodes share an edge if an odometry measurement is available between them, while a landmark and a robot-pose node share an edge if the landmark was observed from the corresponding robot pose. In pose graph optimization, the variables to be estimated are poses sampled along the trajectory of the robot, and each factor imposes a constraint on a pair of poses. 
Modern SLAM solvers exploit the sparse nature of the underlying factor graph and apply iterative linearization and optimization methods (e.g.\ nonlinear least squares via the Gauss-Newton or Levenberg-Marquardt algorithm). Several such solvers achieve excellent performance, for example, g2o \cite{kummerle2011g}, TSAM \cite{dellaert2012factor}, Ceres, iSAM \cite{kaess2012isam2}, SLAM++ \cite{salas2013slam}, and recently \cite{bowman2017probabilistic} for optimization with semantic data association. The SLAM back-end offers a natural defense against data association and perceptual aliasing errors from the front-end, where similar-looking scenes, corresponding to distinct locations in the environment, would deceive place recognition. However, these solvers depend heavily on linearization of the sensing and motion models, and require good initial guesses. Current systems can be easily induced to fail when either the motion of the robot or the environment is too challenging (e.g.\ fast robot dynamics or highly dynamic environments) \cite{cadena2016past}. In this work we formulate a complete end-to-end trainable solution to the graph-based SLAM problem. We present a novel architecture that combines a CNN-based local front-end and an attention-based differentiable back-end. We learn effective features automatically and perform implicit loop closure by designing an additional differentiable Neural Graph Optimizer to perform global optimization over entire pose trajectories and correct errors accumulated by the local estimation model. \begin{figure*} \vspace{-0.1in} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{fig2} \caption{\small{The architecture of the proposed model, showing the Local Pose Estimation, the Pose Aggregation, and the Neural Graph Optimization modules.}} \label{fig:global_arch} \vspace{-0.1in} \end{figure*} \section{Method} The \methodname~architecture is split into distinct differentiable components. 
Similar to many of the previous methods, we split the process into local adjustments between temporally adjacent frames combined with a global optimization procedure which distributes error over the entire observed trajectory. As will be shown in the experiments, the global graph optimization procedure is critical to removing drift (the accumulation of small errors over long trajectories). The graph optimization procedure does this by learning to do loop closures, recognizing when the agent has revisited the same location, and enforcing a constraint that those poses should be nearly equal. The local model is crucial for providing a good starting point for the global optimization. It does this by estimating relative transformations between two temporally adjacent frames. By accumulating transformations from the start of the trajectory to the end, we can use this model to get the initial pose estimate within the global frame. The complete model architecture is shown in Fig.\ \ref{fig:global_arch}. We will describe relative poses as $\Delta P = (\Delta p_1,\hdots,\Delta p_T)$ with the first pose set as the origin, i.e.\ $\Delta p_1$ is the transformation from origin to pose 1, $\Delta p_2$ is the transformation from pose~1 to pose 2, and so on. These relative poses can be transformed into a global frame of reference by accumulating the relative pose changes along the trajectory, i.e.\ $p_1 = \Delta p_1 \bf{I}$, $p_2 = \Delta p_2 \Delta p_1 \bf{I}$, and so on. These global poses will be referred to as $P = (p_1,\hdots,p_T)$. There exists a differentiable function $r2g = g2r^{-1}$ such that $P = r2g(\Delta P)$ and $\Delta P = g2r(P)$. Each component is described in more detail in the next sections. \subsection{Local Pose Estimation Network} The Local Pose Estimation network learns to predict the relative pose change between two consecutive frames. 
From two consecutive observations, where each observation is, for example, an RGB frame, this component predicts the x-coordinate, y-coordinate, and orientation ($\Delta x$, $\Delta y$ and $\Delta \theta$) of the second frame with respect to the first frame. It can also optionally take in side information, such as the action taken by the agent between the two frames. The architecture of the Local Pose Estimation network is shown in Fig.~\ref{fig:local_arch_actions}. Both frames are stacked and passed through a series of convolutional layers. The output of the convolutional layers is flattened and passed to two fully-connected layers that predict the translational and rotational pose change respectively. Some of the recent work showed that optical flow is useful in predicting frame-to-frame ego-motion \cite{costante2016exploring}. The architecture of the Local Pose Estimation network is inspired by the architecture of Flownet \cite{dosovitskiy2015flownet} which predicts the optical flow between two frames. The convolutional layers in the Local Pose Estimation network are identical to the convolutional layers in Flownet. Prior work on visual odometry and visual inertial odometry has also used the convolutional layer architecture of Flownet \cite{clark2017vinet,wang2017deepvo}. \subsection{Pose Aggregation} The next step of the architecture is a Pose Aggregation network which takes in a large number of low-level poses and pose features (up to 2000 for 2D, 1000 for 3D VizDoom environment) and reduces them into a smaller number of more temporally distant ``meta-poses'' and ``meta-pose features'' (around 250 for 2D, 125 for 3D VizDoom). These resulting meta-poses and meta-pose features are then passed to the Neural Graph Optimization procedure. 
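The input/output interface of this network can be sketched in a few lines. This is a minimal sketch, not the actual implementation: the Flownet-style convolutional trunk is replaced by mean-pooled channel statistics purely to show the stacked-frame input and the two output heads, and the weights `w_trans` and `w_rot` are hypothetical placeholders for the learned fully-connected layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def local_pose_estimate(frame1, frame2, w_trans, w_rot):
    """Two-headed output of the local pose estimator.
    NOTE: the convolutional trunk is replaced here by mean-pooled
    channel statistics, purely to illustrate the interface."""
    stacked = np.concatenate([frame1, frame2], axis=-1)  # stack along channels
    feats = stacked.mean(axis=(0, 1))                    # stand-in for conv features
    d_xy = feats @ w_trans                               # translation head: (dx, dy)
    d_theta = float(feats @ w_rot)                       # rotation head: dtheta
    return d_xy, d_theta

h, w, c = 32, 32, 3
f1, f2 = rng.random((h, w, c)), rng.random((h, w, c))
w_trans = rng.standard_normal((2 * c, 2))   # hypothetical learned weights
w_rot = rng.standard_normal(2 * c)
d_xy, d_theta = local_pose_estimate(f1, f2, w_trans, w_rot)
```

The key point is only the shape of the mapping: two stacked observations in, a translational pair and a rotational scalar out.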
For pose feature aggregation, we utilize a deep temporal convolutional network with several alternating layers of (kernel size 3, stride 1, padding 1) dimension-preserving convolutions and (kernel size 2, stride 2, padding~0) dimension-reducing max pooling (where each max pooling operation halves the sequence size). The number of times we halve the sequence length is a hyperparameter. Instead of temporal convolutions, we could have utilized recurrent networks, but we decided to focus on convolutions for computational and memory-efficiency reasons. In addition to the pose features being aggregated into meta-pose features by the temporal convolution, we also compose all the local pose transformations that were predicted by the Local Pose Estimation model. This composition gives us an initial global pose estimate for each of the meta-poses. The combined meta-features and meta-poses are then passed onto the Neural Graph Optimization layer for the final global pose adjustments, as shown in Fig.~\ref{fig:global_arch}. \begin{figure*} \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{local_arch} \caption{\small{The architecture of the Local Pose Estimation network. The architecture of the convolutional layers is adapted from that of Flownet \cite{dosovitskiy2015flownet}.}} \label{fig:local_arch_actions} \vspace{-0.1in} \end{figure*} \subsection{Neural Graph Optimization} The final component of our system is the ``Neural Graph Optimizer''. This submodule aggregates information over the entire pose trajectory with the goal of redistributing error to minimize drift. The Neural Graph Optimizer model is a neural analogue of the global optimization procedures commonly used in traditional state-of-the-art SLAM packages, such as the g2o framework~\cite{kummerle2011g}. We define the Neural Graph Optimizer as a recurrent network submodule which takes as input sequential pose features and outputs a refined estimate of these poses. 
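The two aggregation operations described above (sequence-length reduction by pooling, and composition of the local transformations into initial global pose estimates) can be sketched as follows. This is a minimal sketch, not the authors' implementation: the dimension-preserving convolutions are omitted since they do not change the sequence length, and 2D poses (x, y, theta) are assumed:

```python
import math
import numpy as np

def max_pool_1d(x):
    """Halve the sequence length with (kernel 2, stride 2) max pooling.
    x has shape (T, F): T time steps, F features per step."""
    T = x.shape[0] - x.shape[0] % 2              # drop a trailing odd element
    return x[:T].reshape(T // 2, 2, -1).max(axis=1)

def aggregate_features(x, n_halvings):
    """Stand-in for the temporal conv net: only the pooling layers
    matter for the sequence-length bookkeeping."""
    for _ in range(n_halvings):
        x = max_pool_1d(x)
    return x

def compose(p, dp):
    """Compose a global 2D pose p with a relative pose dp, both (x, y, theta):
    the relative translation is rotated into the global frame."""
    x, y, th = p
    dx, dy, dth = dp
    return (x + math.cos(th) * dx - math.sin(th) * dy,
            y + math.sin(th) * dx + math.cos(th) * dy,
            th + dth)

def initial_global_poses(rel_poses):
    """Accumulate local transformations into initial global pose estimates."""
    p, out = (0.0, 0.0, 0.0), []
    for dp in rel_poses:
        p = compose(p, dp)
        out.append(p)
    return out

features = np.random.rand(2000, 16)              # up to 2000 low-level features (2D case)
meta_features = aggregate_features(features, 3)  # 2000 -> 1000 -> 500 -> 250

g = initial_global_poses([(1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)])
# after moving 1 forward and turning 90 degrees, the next forward step moves along +y
```

Three halvings reproduce the 2000-to-250 reduction quoted for the 2D case (and 1000 to 125 for the 3D case); in the actual model the halving count is a hyperparameter.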
In more detail, the Neural Graph Optimizer takes as input some initial $T$ relative pose estimates (i.e. the aggregated output of the local pose estimation network) $\Delta {\bf P}^{(0)} = \left(\Delta p_1^{(0)},\hdots,\Delta p_T^{(0)}\right)$ and produces two outputs for each pose: \vspace{-0.1in} \begin{align*} \nabla {\bf P}^{(1)} &= \left(\nabla p_1^{(1)},\hdots,\nabla p_T^{(1)}\right), \hspace{0.05in} \textrm{and} \nonumber \\ \pmb{\beta}^{(1)} &= \left(\beta_1^{(1)},\hdots,\beta_T^{(1)}\right). \end{align*} New pose estimates are then constructed by performing an iterative update: \vspace{-0.1in} \begin{align*} \Delta p_i^{(1)} = \Delta p_i^{(0)} + \beta_i^{(1)} \nabla p_i^{(1)}. \end{align*} The Neural Graph Optimizer procedure can then be rerun on the new pose estimates $\Delta {\bf P}^{(1)} = (\Delta p_1^{(1)},\hdots,\Delta p_T^{(1)})$ to produce $\Delta {\bf P}^{(2)} = (\Delta p_1^{(2)},\hdots,\Delta p_T^{(2)})$, and so on. The process is repeated until some pre-specified number of iterations $M$ has taken place. We then transform the refined relative pose estimates into the final global output: ${\bf P}^{(M)} = r2g(\Delta {\bf P}^{(M)})$. The specific architecture of the Neural Graph Optimizer is based on two priors that are intuitively useful for pose optimization. The first prior is the notion that poses that are temporally adjacent should have similar outputs, while the second prior is that visually similar but temporally disparate poses should also have similar outputs since this provides a hint that a place has been revisited, thereby potentially enabling a loop closure-like correction of drift. We express these priors by using two architectural systems in the Neural Graph Optimizer. The first is a Transformer-like \cite{vaswani2017attention} attention phase where information is propagated over the entire sequence, and the second is a convolutional phase where local temporal information is aggregated. 
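The iterative update above can be sketched as follows. The `toward_mean` predictor is a purely illustrative stand-in for the learned network (which actually produces the correction direction and step size from pose features); only the outer additive-update loop mirrors the text.

```python
def neural_graph_optimize(deltas, predict, iters=5):
    """Sketch of the refinement loop: each iteration, a predictor returns
    a correction direction grad_p and a step size beta for every relative
    pose estimate, and the estimates are updated additively."""
    deltas = list(deltas)
    for _ in range(iters):
        grads, betas = predict(deltas)
        deltas = [d + b * g for d, g, b in zip(deltas, grads, betas)]
    return deltas

# Illustrative stand-in "network": nudge each scalar estimate toward
# the trajectory mean with a fixed step size of 0.5.
def toward_mean(deltas):
    mean = sum(deltas) / len(deltas)
    return [mean - d for d in deltas], [0.5] * len(deltas)

refined = neural_graph_optimize([0.0, 4.0, 2.0, 2.0], toward_mean, iters=10)
```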
\subsubsection{Attention Phase} Suppose there is a meta-pose sequence of $T$ steps, processed by the pose aggregation network into an initial set of features at each time step: ${\bf F}^{(0)} = (f_1^{(0)},\ldots,f_T^{(0)})$. The attention phase computes, for each pose, a soft-attention operation over the entire trajectory. This attention operation allows each pose to query information over long time spans. The attention phase takes as input the pose feature sequence $(f_1^{(i-1)},\ldots,f_T^{(i-1)})$ and produces for each time step a query vector: $(q_1^{(i-1)},\ldots,q_T^{(i-1)})$ using a fully-connected layer. Then, for each query vector $q_t^{(i-1)}$, a soft-attention operation is carried out to produce an attention vector $a_t^{(i-1)}$ as follows: \begin{align*} C_{tu} &= \langle q_t, f_u \rangle, \\ \alpha_{tu} &= \frac{C_{tu}}{\sum_{v=1}^T C_{tv}}, \\ a_t &= \sum_{u=1}^T \alpha_{tu} \odot f_u, \end{align*} where the superscripts $(i-1)$ were omitted for clarity of notation. This produces a sequence of attention vectors $(a_1^{(i-1)},\ldots,a_T^{(i-1)})$, which are passed along with $(f_1^{(i-1)},\ldots,f_T^{(i-1)})$ to the next ``Optimization'' phase. \subsubsection{Optimization} The optimization phase aggregates local temporal information by passing the pose features through several temporal convolutions and is responsible for producing the iterative adjustments: $\{\nabla p_1^{(i)},\ldots,\nabla p_T^{(i)}\}$ and $\{\beta_1^{(i)},\ldots,\beta_T^{(i)}\}$. The optimization phase proceeds as follows: First, the attention and feature vectors are concatenated into a new sequence of features: \vspace{-0.1in} \begin{align*} \left(\begin{bmatrix} f_1^{(i-1)} \\ a_1^{(i-1)} \end{bmatrix},\ldots,\begin{bmatrix} f_T^{(i-1)} \\ a_T^{(i-1)} \end{bmatrix}\right).
\end{align*} These features are then passed through several layers of 1D convolutions $h_l$ and activations $\sigma_l$: \begin{align*} \begin{bmatrix} {\bf F}^{(i)} \\ \nabla {\bf P}^{(i)} \\ \pmb{\beta}^{(i)} \end{bmatrix} = \sigma_L\left(h_L\left(...~h_1\left(\begin{bmatrix} f_1^{(i-1)} \\ a_1^{(i-1)} \end{bmatrix}...\begin{bmatrix} f_T^{(i-1)} \\ a_T^{(i-1)} \end{bmatrix}\right) ... \right)\right) \end{align*} to produce the current iteration's adjustments ($\nabla {\bf P}^{(i)}$ and $\pmb{\beta}^{(i)}$) as well as the feature layer for the next iteration of the process (${\bf F}^{(i)}$). For our experiments, we use 9 layers of convolutions with filter size 3 and ReLU activations. While temporal convolutions have a limited receptive field, which places a hard upper limit on how far they can transmit information across time, we found that in practice they worked better than using bidirectional LSTMs. \subsubsection{Induced Attention Graph} We now provide some intuition on why the attention phase enables higher performance than only using the optimization phase, or running all pose features through bidirectional LSTMs. We can see that during the attention phase, some similarity graph $C$ is constructed such that each element $C_{tu}$ is the inner product between the query vector~$q_t$ and the pose feature vector $f_u$. Therefore $C$ represents a similarity matrix between the queries and pose features, and those with very similar features will thus have high information bandwidth through the attention operator because the attention weight $\alpha_{tu}$ will be near $1$ for highly similar query and pose features, and near $0$ otherwise. The attention operation is thus inducing a connectivity graph between poses with highly similar features. This therefore resembles a soft, differentiable analogue of the pose graph constructed in SLAM algorithms such as ORB-SLAM \cite{mur2015orb}.
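The attention pass can be sketched directly from the equations above. Note that the weights use the plain normalization $\alpha_{tu} = C_{tu}/\sum_v C_{tv}$ stated in the text rather than a softmax; the toy feature vectors here are illustrative.

```python
def attention_step(queries, features):
    """One soft-attention pass over the trajectory: each query q_t is
    scored against every pose feature f_u (C_tu = <q_t, f_u>), the scores
    are normalized (alpha_tu = C_tu / sum_v C_tv), and the attention
    vector a_t is the alpha-weighted sum of the pose features."""
    out = []
    for q in queries:
        scores = [sum(qi * fi for qi, fi in zip(q, f)) for f in features]
        total = sum(scores)
        alphas = [s / total for s in scores]
        out.append([sum(a * f[k] for a, f in zip(alphas, features))
                    for k in range(len(features[0]))])
    return out

# Toy trajectory of three pose features; poses 0 and 2 are "visually
# similar", so each attends mostly to the other, mimicking a revisit.
feats = [[1.0, 0.0], [0.0, 1.0], [1.0, 0.1]]
attn = attention_step(feats, feats)
```

In this toy example, the first pose's attention vector is dominated by the features of poses 0 and 2, which is exactly the induced-connectivity behavior described above.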
\begin{figure} \centering \minipage{0.48\linewidth} \centering \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{env2d} \endminipage \hfill \minipage{0.48\linewidth} \centering \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{env3d} \endminipage \vspace{0.05in} \caption{\small{\textbf{Left:} A screenshot of the 2D environment based on Box2D. \textbf{Right:} A bird's eye view of the 3D environment based on the Doom game engine.}} \label{fig:env} \end{figure} \begin{table} \small \centering \begin{tabular}{|c|c|} \hline \multicolumn{2}{|c|}{\bf{Results on the 2D Environment}} \\ \hline {\bf Model} & {\bf RMSE} \\ \hline Only Local Estimation & 17.80 \\ \hline Global Estimation - 1 Attend-Opt iteration & 10.21 \\ Global Estimation - 5 Attend-Opt iterations & 3.16 \\ \hline \end{tabular} \vspace{0.1in} \caption{\label{tab:2dngo} Results for different Neural Graph Optimizer architectures and hyperparameters, in terms of test set Global RMSE. We can see that the addition of the global optimization procedure reduces the loss by more than 80\% as compared to solely using the local pose model. } \end{table} \section{Experiments} We use two simulation environments for our experiments, a 2D environment based on Box2D and a 3D environment based on the Doom game engine. To train the system, we pretrained the local pose estimation model and then trained the global optimizer with the local pose model held fixed. This staged approach was mainly due to the large sequence lengths we were required to process (on the order of 1000 time steps), which limited the number of sequences we could fit in memory at once. Training the system in stages enabled us to preprocess the sequence images into a far more memory-efficient compressed representation.
\begin{figure} \centering \includegraphics[trim = 7mm 7mm 7mm 7mm,clip, width=0.49\linewidth]{b0_valid_traj.eps} % \includegraphics[trim = 7mm 7mm 7mm 7mm,clip, width=0.49\linewidth]{b1_valid_traj.eps} \\\vspace{0.1in} \centering \includegraphics[trim = 7mm 7mm 7mm 7mm,clip, width=0.49\linewidth]{b2_valid_traj.eps} % \includegraphics[trim = 7mm 7mm 7mm 7mm,clip, width=0.49\linewidth]{b3_valid_traj.eps} \vspace{0.05in} \caption{\label{fig:2dtraj} Images visually demonstrating the effect on pose estimates of adding the Neural Graph Optimizer module on top of the local pose estimation model in the 2D environment. We can see that the global optimization procedure greatly reduces drift. These figures were generated with the 5 iteration Neural Graph Optimization model.} \vspace{-0.05in} \end{figure} \setlength{\tabcolsep}{12.4pt} \begin{table*} \vspace{-0.00in} \small \centering \begin{tabular}{|c|cc|cc|} \hline \multicolumn{5}{|c|}{\bf{Results on the 3D Doom Environment }} \\ \hline {\bf Model} & \multicolumn{2}{|c|}{\bf Seen} & \multicolumn{2}{|c|}{\bf Unseen} \\ & \% Err. Trans. & \% Err. Rot. & \% Err. Trans. & \% Err. Rot. \\ \hline Only Local Estimation & 1.65 & 0.117 & 1.62 & 0.122 \\ \hline Global Estimation - 1 Attend-Opt iteration & 1.42 & 0.071 & 1.16 & 0.071 \\ Global Estimation - 5 Attend-Opt iterations & 1.25 & 0.057 & 1.04 & 0.056 \\ \hline DeepVO~\cite{wang2017deepvo} & 1.78 & 0.079 & 2.39 & 0.091 \\ \hline \end{tabular} \vspace{0.1in} \caption{\label{tab:doom} Results for different Neural Graph Optimizer architectures and hyperparameters, in terms of \% translation and rotation error on maps either seen or unseen during training time. We can see that the addition of the global optimization procedure reduces error significantly compared to using only the local pose model. In addition, increasing the number of attention iterations provides an increase in performance. 
} \vspace{-0.1in} \end{table*} \subsection{2D Environment} For the 2D Environment, random maze designs are generated using Prim's algorithm \cite{prim1957shortest}, and the environment is created using Box2D (box2d.org). The agent projects 241 rays uniformly in front of itself with an effective field of view of 300$^{\circ}$. The observation of the agent includes the RGB values as well as the depth of the points where these rays hit a wall. An example of the 2D environment is shown in Fig.~\ref{fig:env}. Each cell in the maze has a random color. The agent can take one of three discrete actions at every time step: move-forward, turn-left, or turn-right. These actions result in translational acceleration if the action is move-forward or angular acceleration if the action is turn-left or turn-right. Data is collected by visiting four different corners of the maze using Dijkstra's algorithm \cite{dijkstra1959note}. For this environment, the training data is generated by worker threads in parallel with the model training and each training datapoint is used only once. A test set is fixed and common for all experiments. Each epoch of training consists of $200,000$ datapoints. The error metric is Root Mean Squared Error (RMSE) in pose estimation. To improve upon the results produced by the local pose estimation model, we train a Neural Graph Optimizer on the pose outputs of a pretrained Local Pose Estimation model. For the 2D environment, as shown in Table~\ref{tab:2dngo}, we observed over 80\% improvement in the correction of drift compared to using only the local pose estimation model, as measured by the root mean squared error loss. We can see that increasing the number of iterations (each iteration applying the attention operator and then the temporal aggregation operator) from 1 to 5 improves results. We show some sample trajectories in Fig.\ \ref{fig:2dtraj} before and after the Neural Graph Optimizer procedure.
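Maze generation with randomized Prim's algorithm, as cited above, can be sketched as follows. This is a generic grid-maze version; the paper's exact generator parameters are not given, so the grid size and random seed here are illustrative.

```python
import random

def neighbors(x, y, width, height):
    # 4-connected grid neighbors inside the maze bounds.
    cand = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(a, b) for a, b in cand if 0 <= a < width and 0 <= b < height]

def prim_maze(width, height, seed=0):
    """Randomized Prim's algorithm on a cell grid: grow a spanning tree
    from one cell by repeatedly picking a frontier wall at random and
    removing it if it leads to an unvisited cell."""
    rng = random.Random(seed)
    visited = {(0, 0)}
    frontier = [((0, 0), n) for n in neighbors(0, 0, width, height)]
    passages = set()
    while frontier:
        cell, nxt = frontier.pop(rng.randrange(len(frontier)))
        if nxt not in visited:
            visited.add(nxt)
            passages.add(frozenset((cell, nxt)))
            frontier.extend((nxt, n) for n in neighbors(*nxt, width, height))
    return passages

# A width x height maze is a spanning tree of the grid, so it always
# contains exactly width * height - 1 open passages.
maze = prim_maze(5, 5)
```

Because the result is a spanning tree, every cell is reachable from every other cell by exactly one path, which is what makes Dijkstra-based corner-to-corner data collection well defined.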
\subsection{3D Environment} For the 3D Environment, random maze designs are generated using Kruskal's algorithm \cite{kruskal1956shortest}, and the environment is created using the ViZDoom API \cite{kempka2016vizdoom}. The agent observes the environment in a first-person view with a field-of-view of 108$^{\circ}$. An example of the 3D environment design is shown in Fig.\ \ref{fig:env}. Similar to the 2D environment, the pose predictions are 3-dimensional tuples (x, y, angle) and the agent can take one of three discrete actions at every time step: move-forward, turn-left, or turn-right, which results in translational or angular acceleration. For collecting data in this environment, a navigation network \cite{lample2017playing} is trained to maximize the distance travelled by the agent using the Asynchronous Advantage Actor-Critic algorithm \cite{mnih2016asynchronous}. The data is collected by using the policy learned by the navigation network. Like the 2D environment, the training data is generated by worker threads in parallel with the model training, and each training datapoint is used only once. We additionally sample two test sets, one containing 39 trajectories sampled from maze geometries that were seen during training and one containing 39 trajectories sampled from novel maze geometries that the agent had not encountered during training. \subsubsection{Results} Results are shown in Table~\ref{tab:doom}. Here we report~\%~Error in Translation and Rotation for seen/unseen mazes, where the accumulated drift error is divided by the entire distance traveled in each trajectory. Observe that the local model is significantly improved by using global optimization and performance of the global model improves as we increase the number of Attend-Opt iterations from 1 to 5. The global model outperforms the DeepVO~\cite{wang2017deepvo} baseline on both test sets.
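The error metric can be sketched under one plausible reading of the definition above: drift measured at the trajectory endpoint, divided by the total distance traveled along the ground-truth path. The paper's exact accumulation scheme may differ, so treat this as illustrative.

```python
import math

def percent_drift_error(pred, truth):
    """Endpoint drift between predicted and ground-truth trajectories,
    expressed as a percentage of the total distance traveled along the
    ground-truth path."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    traveled = sum(dist(truth[i], truth[i + 1]) for i in range(len(truth) - 1))
    drift = dist(pred[-1], truth[-1])
    return 100.0 * drift / traveled

truth = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0)]   # 3 units traveled
pred = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.1), (2.1, 1.1)]    # drifting estimate
err = percent_drift_error(pred, truth)
```

Normalizing by path length is what allows trajectories of different lengths, and seen versus unseen mazes, to be compared on the same scale.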
Additionally, we can clearly see that the model itself does not overfit to the training environments it experienced, and gets similar or even lower error on unseen test mazes. Learning curves are shown in Fig.~\ref{fig:doomcurves}. We can see that error decreases significantly early on, after which progress slows considerably beyond around 2000 updates. \begin{figure}[t] \vspace{-0.1in} \centering \includegraphics[width=0.5\linewidth]{test_fig_tr.eps}% \includegraphics[width=0.5\linewidth]{test_fig_r.eps} \caption{\label{fig:doomcurves} Training curves for Doom over $13,000$ updates for the~5 iteration Attend-Opt model. We show the performance on both seen and unseen test sets as training progresses. The dotted line represents the estimate provided by using only the local model. We can see there is a large reduction in error when making use of the global optimizer.} \vspace{-0.05in} \end{figure} The baseline DeepVO~\cite{wang2017deepvo} is one of the state-of-the-art methods using deep neural nets for monocular visual odometry. It stacks 2 consecutive frames and passes them through 9 convolutional layers followed by 2 LSTM layers to estimate the pose changes. As compared to the proposed Local Pose Estimation model, which observes only the last 2 frames at a time, the DeepVO model can potentially utilize information from all the prior frames using the LSTM layer. However, the DeepVO model does not correct its previous predictions as it observes new information. The Neural Graph Optimizer has the ability to correct its predictions using the Attention operation and consequently leads to improved performance. \subsubsection{Analysis} We next plot the total rotational and translational errors as a function of number of steps in the trajectory in Figures~\ref{fig:plot_unseen} (for unseen mazes) and~\ref{fig:plot_seen} (for seen mazes).
The global model reduces the rate of increase of both translational and rotational errors as compared to the local estimates. Figures~\ref{fig:plot_ratio_unseen} and~\ref{fig:plot_ratio_seen} display the ratio of the translational (left) and rotational (right) drift error over distance traveled. We can see from these plots that the trend is negative, meaning that drift accumulates much more slowly than the distance being traveled. This indicates that the model is likely to generalize well to arbitrarily long trajectories. Additionally, in all plots, we can see a clear ordering of the performance of the models, where the local model performs worst, one iteration of Attend-Opt increases model performance significantly, and increasing the number of Attend-Opt iterations to 5 further increases model performance. The plots in Figures~\ref{fig:plot_unseen} and~\ref{fig:plot_seen} as well as the numbers in Table~\ref{tab:doom} show that the improvement in rotational errors due to the neural optimization is higher than the improvement in translational errors. Fig.~\ref{fig:3dtraj} shows sample trajectories with both global and local pose estimates. As seen in the figure, the neural graph optimizer considerably improves the rotation estimates, consequently leading to significant improvements in drift reduction.
\begin{figure}[t] \centering \minipage{0.5\linewidth} \centering \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{plot_tr_unseen} \endminipage \hfill \minipage{0.5\linewidth} \centering \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{plot_r_unseen} \endminipage \caption{\small{Translational (Left) and Rotational (Right) RMSE as a function of number of images in the trajectory in {\bf unseen mazes}.}} \label{fig:plot_unseen} \end{figure} \begin{figure}[t] \centering \minipage{0.5\linewidth} \centering \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{plot_tr_seen} \endminipage \hfill \minipage{0.5\linewidth} \centering \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{plot_r_seen} \endminipage \caption{\small{Translational (Left) and Rotational (Right) RMSE as a function of number of images in the trajectory in {\bf seen mazes}.}} \label{fig:plot_seen} \end{figure} \section{Conclusion} In this paper, we designed a novel attention-based architecture to perform end-to-end trainable global pose estimation. Compared to previous work using deep networks for pose estimation, our method uses an attention operation to re-estimate its trajectory at each time step and therefore enables iterative refinement of the quality of its estimates as more data is available. We demonstrate the benefit of the model on two simulators: the first a top-down 2D maze world and the second a 3D random maze environment running the Doom engine. Our results show that our method improves performance compared to models that use only temporally local information. The proposed method can be further extended to a complete end-to-end graph-based SLAM system by adding a relocalization module which uses pose features to relocalize in a known map~\cite{chaplot2018active}.
It can also be extended to an Active SLAM system in which the agent additionally decides the actions, in order to map the environment as fast as possible. \begin{figure}[t!] \centering \minipage{0.5\linewidth} \centering \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{plot_tr_ratio_unseen} \endminipage \hfill \minipage{0.5\linewidth} \centering \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{plot_r_ratio_unseen} \endminipage \caption{\small{Ratio of the Translational (Left) and Rotational (Right) RMSE to the distance travelled as a function of number of images in the trajectory in {\bf unseen mazes}.}} \label{fig:plot_ratio_unseen} \end{figure} \begin{figure} \centering \minipage{0.5\linewidth} \centering \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{plot_tr_ratio_seen} \endminipage \hfill \minipage{0.5\linewidth} \centering \includegraphics[width=\linewidth,height=\textheight,keepaspectratio]{plot_r_ratio_seen} \endminipage \caption{\small{Ratio of the Translational (Left) and Rotational (Right) RMSE to the distance travelled as a function of number of images in the trajectory in {\bf seen mazes}.}} \label{fig:plot_ratio_seen} \end{figure} \begin{figure} \centering \includegraphics[trim = 7mm 7mm 7mm 7mm,clip, width=0.49\linewidth]{b6_valid_traj.eps} % \includegraphics[trim = 7mm 7mm 7mm 7mm,clip, width=0.49\linewidth]{b13_valid_traj.eps} \\\vspace{0.1in} \centering \includegraphics[trim = 7mm 7mm 7mm 7mm,clip, width=0.49\linewidth]{b21_valid_traj.eps} % \includegraphics[trim = 7mm 7mm 7mm 7mm,clip, width=0.49\linewidth]{b24_valid_traj.eps} \vspace{0.05in} \caption{\label{fig:3dtraj} Images visually demonstrating the effect on pose estimates of adding the Neural Graph Optimizer module on top of the local pose estimation model in the 3D environment. We can see that the global optimization procedure greatly reduces drift. These figures were generated with the 5 iteration Neural Graph Optimization model.
The agent always starts at the origin (0, 0).} \end{figure} \subsubsection*{Acknowledgments} We thank Tim Barfoot and Russ Webb for helpful comments and discussions. We would also like to thank Barry Theobald and Megan Maher for helpful feedback on the manuscript. \bibliographystyle{ieee}
\section{Introduction}\label{sec:intro} One of the challenges faced by game designers is predicting how different players will interact with the systems and content that they are crafting. Most games are complex emergent systems that allow for a variety of interaction patterns, depending on the player's preference(s) and the interaction between player, game, and any other players. Game designers employ a variety of processes to imagine and observe how different types of players might respond to their content. The processes can be thought of as existing on a spectrum, ranging from the designer imagining what different types of players might do, to analyzing play data of beta testers or a portion of the player base in the case of continuously updated ``live'' games. Each approach has different strengths, weaknesses and costs, making it relevant for different game makers or different stages of the game-making process \cite{fullerton2004game,el2013game}. In this paper we suggest and demonstrate a method taking a new position on this spectrum (illustrated in Figure~\ref{fig:testing_spectrum}): using archetypal generative player models as critics for game content, enabling automated playtesting and evaluation of game content; here specifically levels. We identify this approach as the use of \emph{Procedural Personas for Playtesting}. \begin{figure}[hbt] \centering \includegraphics[width=0.95\columnwidth]{./graphics/testing_spectrum} \caption{Spectrum from simple to complex design testing methods in game development.} \label{fig:testing_spectrum} \end{figure} We evolve artificially intelligent game playing agents for the game \emph{MiniDungeons 2} \cite{holmgard2015minidungeons2}. The agents, or \emph{personas}, are characterized by different utility functions for their decision-making.
These utility functions capture various archetypal goals that players might hold in relation to the affordances and potential interactions of the particular game. To control the personas, we use a variant of Monte Carlo Tree Search (MCTS) which is well-suited to building biased search trees in large search spaces. However, rather than applying the Upper Confidence Bound 1 applied to Trees (UCB1) formula typically used for MCTS, we use genetic programming to evolve persona-specific evaluation formulas. This allows us to find mappings between persona utility functions and state evaluation algorithms. We evolve well-performing game-playing agents for all defined personas through this variant of MCTS. Using the evolved personas, we show how different levels can be automatically evaluated in terms of their playability for players holding different preferences. This approach can be useful to game creators as it provides an insight into dynamic properties \cite{hunicke2004mda} of their content as they are crafting the mechanics. For instance, such agents could eventually be added directly to a game engine's editor to allow for almost real-time feedback during content creation. The approach can also be used as an automated evaluation mechanism for procedural content generation of game content, where procedural personas can function as stand-ins for the game designer or different players when large amounts of content have to be evaluated. This paper builds on our previous work on MCTS agents with hand-crafted utility scores for the MiniDungeons 2 game \cite{holmgard2015mcts}, expanding on those concepts by broadening the number and utility of personas (through human design) as well as discovering UCB-like criteria (through evolution) which outperform UCB1. 
More broadly, this paper enhances our earlier definitions of procedural personas \cite{holmgard2014generativeagents,holmgard2014evolvingpersonas,holmgard2015evolving} which were applied for simulation-based level generation \cite{liapis2015personacritics}. However, the MCTS agents used in this paper are more modular in their utility definitions and afford far faster runtimes when performing automated playtesting. Moreover, the MCTS agents in this paper are tested on MiniDungeons 2, a far more complex game than its predecessor MiniDungeons introduced in \cite{holmgard2014generativeagents}. \section{Related Work}\label{sec:related_work} The approach taken in this paper draws on psychological decision theory, persona theory from design research, and player modeling and agent control from computational intelligence. The procedural persona approach draws all of these four strands of work together in one framework for automatic playtesting in order to create generative player models that to some extent decide and play like human players. This section briefly covers some of the foundational areas before describing prior work in bringing these approaches together. \subsection{Personas for Game Design}\label{sec:play_personas} The use of computational methods to imbue computer game characters with personality has been a focus of game AI programming since the very beginning of the medium. As one instance, Short provides an overview of how non-player characters can be provided with human-like personalities under the heading of procedural personalities \cite{short2016procedural}. The use of personas has a long history within design in general and design for information technology in particular. The approach was pioneered for software development in the early 1990s \cite{cooper2004inmates} as a method for structuring and operationalizing qualitative data gathered from design research, chiefly in the form of interviews.
Based on interview data, a number of personas would be defined. Each of these would serve as a specific instantiation of groups of user concerns that tended to co-occur, expressed as an archetypal example user, fully fleshed out with names, back stories, concerns and preferences. Canossa and Drachen transported this approach into the realm of game design \cite{canossa2009patterns}, defining personas less in terms of general life concerns and more in terms of player interaction preferences within the space of the game. They called this new conceptualization \emph{play personas} and operationalized their definition through data mining, suggesting how the persona design process could be supported by analyzing quantitative game data gathered via telemetrics \cite{tychsen2008defining}. While play personas are archetypal models of player behavior inferred from experience or observed data, the re-projection into the game itself is something that is done imaginatively by the designer(s) of the game: i.e. play personas let us understand what players \emph{have done}, but do not enact what players \emph{might do}. Procedural personas~\cite{holmgard2015evolving,holmgard2014generativeagents,holmgaard2014personas} extend the play persona idea by adding a game-playing, generative aspect. By capturing persona characteristics from designer specification or from observed data, and formalizing these as utility functions, procedural personas are implemented as agents that can act in the game, enabling automatic playtesting. Other work in the literature has investigated game testing without natural player data, notably \cite{nelson2011game}. The approach taken here differs from the approach taken in \cite{nelson2011game} as it focuses on generating data from simulated players, rather than taking into account a larger number of potential metrics where some are not centered on player actions. As such, the procedural persona concept is a specialized case of \emph{player modeling}.
The line of work leading to this paper has been inspired by the category of ``Generative Action Models'' in the survey on player modeling by \cite{smith2011inclusive}. Until recently, this particular category has been underpopulated. \subsection{Player Modeling}\label{sec:player_modeling} Player modeling is the learning and use of computational models of player preference, experience and/or behavior~\cite{yannakakis2013playermodeling}. Procedural personas, as generative player models, cover some of these aspects: behavior and preferences. Other work in player modeling takes different approaches to modeling play behavior and preference generatively. Perhaps the most obvious approach is to use some form of supervised learning to derive a model from play traces~\cite{ortega2013imitating,togelius2007towards}. Cowley \emph{et al.} developed the concept of \emph{behavlets}, features of play derived from observed action sequences, structured through a top-down application of psychological temperament theory combined with machine learning \cite{cowley2016behavlets}. While behavlets can be used as generative models, they do not allow for the specification of player motivations without observations. In contrast, procedural personas are driven by utility functions that can be either specified in a top-down manner by game or persona designers or formulated from play data through methods like inverse reinforcement learning \cite{tastan2011learning} or evolution \cite{holmgaard2014personas}. The particular agent control method that is used to formulate a policy for procedural personas is technically arbitrary, as long as it can accept a utility function as a method of evaluation. The most appropriate method may depend on the game for which the personas are being implemented. Prior work has shown that evolutionary methods and MCTS have potential for defining personas for turn-based roguelike games \cite{holmgard2014evolvingpersonas,holmgaard2014personas,holmgard2015mcts}.
Devlin \emph{et al.} showed how observations of human play data can be used to bias MCTS to play the card game Spades \cite{devlin2016combining}. They use a relative entropy measure to assess the similarity of playing styles to traces of human players. Zook \emph{et al.} limited the computational resources of MCTS to simulate player skill for a number of games \cite{zook2015monte} and similar findings were reported by Nelson \emph{et al.} \cite{nelson2016investigating}. Another approach to biasing the MCTS search process to be more similar to human players is described by Khalifa \emph{et al.}~\cite{khalifa2016modifying}. In the study described here, we take a similar approach and implement a variation of MCTS. We bias the search using evolution, applying designer-defined utility outcomes as the fitness function. \section{Monte Carlo Tree Search} MCTS has shown considerable potential and flexibility as a game-playing algorithm \cite{browne2012mcts}. For our purposes, MCTS has several desirable properties which approximate how decision making occurs in humans: it evaluates the next best action based on a utility score for a predicted future state and operates under uncertainty of future outcomes. It also seems that by giving an MCTS algorithm more or less resources, one can simulate strategic depth in human decision-making~\cite{zook2015monte,nelson2016investigating}. \subsection{Traditional MCTS}\label{sec:mcts} As discussed in Section \ref{sec:intro}, MCTS is a tree search algorithm which creates biased search trees for decision processes. Unlike other tree search algorithms like Minimax, breadth-first, or depth-first, MCTS focuses on \emph{exploiting} the most promising moves to expand next, while balancing that by \emph{exploring} more neglected branches of the tree. The balance of exploitation versus exploration is traditionally handled through the evaluation of the Upper Confidence Bound for Trees equation, which applies UCB1 to the tree \cite{browne2012mcts}.
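As a minimal sketch of this selection rule (assuming hypothetical \texttt{wins} and \texttt{visits} fields on child records; this is not the MiniDungeons 2 implementation), UCB1-based child selection amounts to:

```python
import math

def ucb1(wins, visits, parent_visits, c=math.sqrt(2)):
    """UCB1 score of a child node: exploitation term plus exploration bonus."""
    if visits == 0:
        return float("inf")  # unvisited children are expanded first
    return wins / visits + c * math.sqrt(math.log(parent_visits) / visits)

def select_child(children, parent_visits):
    """The tree policy: pick the child maximizing the UCB1 score."""
    return max(children, key=lambda ch: ucb1(ch["wins"], ch["visits"], parent_visits))
```

An unvisited child receives an infinite score, so every move is tried at least once before exploitation of promising branches begins.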
The tree is built incrementally, with each iteration following four steps: 1. \textbf{Selection}: MCTS chooses the next node to expand via the \textit{tree policy}, starting at the root node and recursively picking the highest scoring child ``until the most urgent expandable node is reached'' \cite{browne2012mcts} or a terminal state (i.e. the game is won or lost). In traditional MCTS approaches, the tree policy score is given by the Upper Confidence Bound (UCB1) equation: \begin{equation} UCB = \frac{w_i}{n_i} + c{\cdot}\sqrt{\frac{\ln(t)}{n_i}} \label{eq:ucb} \end{equation} where $w_i$ is the number of wins originating from taking move $i$, $n_i$ is the number of times move $i$ was visited, and $c$ is the exploration constant. Typically $c=\sqrt{2}$ is chosen, since this value has been shown to guarantee convergence to a value function within finite time for single-player games with terminal states and rewards bounded to the range $[0,1]$~\cite{browne2012mcts}. $t$ is the total number of simulations for the node considered and is equivalent to the sum of $n_i$ over all possible moves. The UCB1 equation attempts to balance exploration (looking into paths not yet simulated) and exploitation (looking into paths previously simulated that show good results). 2. \textbf{Expansion}: unless the selected node is a terminal state (i.e. the game is over), a child node ($W$) is created for the next action. Typically, this next action is randomly selected from all possible future actions. 3. \textbf{Simulation}: the \emph{default policy} is used to simulate a random rollout from $W$. The rollout consists of performing actions at random until the game reaches a terminal state, or up to a fixed number of moves. 4. \textbf{Backpropagation}: the result (i.e. utility score) of the simulation is backpropagated to every node, from the expanded $W$ to the root node. This affects future policy decisions, i.e.
future selection steps. These four steps are applied sequentially until the computational resources allocated for the agent's move are depleted. The agent then chooses the next move (i.e. the child of the root node) with the highest utility score. \subsection{Evolutionary Tree Policy}\label{sec:mcts_evolution} As discussed in Section \ref{sec:mcts}, selection in MCTS must balance between exploitation and exploration; UCB1 is traditionally used to maintain this balance. Changes to the UCB1 formula of eq.~\ref{eq:ucb} are typically made in order to optimize it for a certain kind of game or to weigh certain kinds of game-play differently~\cite{khalifa2016modifying}. Cazenave's work \cite{cazenave2007evolving} on evolving UCB1 alternatives for Go MCTS agents demonstrated that the resulting agent significantly outperformed peers that used traditional UCB equations, or even agents that used UCB1 alternatives specifically created for Go. In the General Video Game AI (GVGAI) framework, Bravi \emph{et al.} explored the possibility of evolving UCB1 replacements that were not specialized for one particular game \cite{bravievolving}. In this work, we use the approach of Bravi \emph{et al.} not to specialize the agent for particular games, but to bias its playstyle. \section{The MiniDungeons 2 Game} MiniDungeons 2 is a deterministic, turn-based roguelike game, first described in \cite{holmgard2015minidungeons2}, in which the player takes on the role of a hero traversing a dungeon level, with the end goal of reaching the exit. The game stage is set on a 10 by 20 tile grid. Each tile is either an impassable \textit{wall} or a passable \textit{floor}. Floor tiles may contain objects that the hero can interact with, game characters such as the hero or non-player characters (NPCs), or nothing at all. Gameplay objects come in several varieties such as treasures, potions, portals, traps, and the exit of the level. To win, the player must reach the exit.
All game characters have Hit Points (HP) and may deal damage. The player starts with 10 HP; the player loses when they run out of HP and die. Movement within the game is fairly simple. The player gets the first move every turn, and all NPCs move after. The NPCs move in turn according to their original position on the map, starting from the top left corner and moving row-wise left-to-right until the bottom right corner is reached. This initial move sequence is retained even if NPCs later move to other locations. On their turn, a game character may move in one of the four cardinal directions (North, South, East, West) so long as the tile in that direction is not a wall. The player is given one re-usable javelin at the start of every level. The player may choose to throw this javelin and do 1 damage to any monster within their unbroken line of sight. After using the javelin, the hero must traverse to the tile to which it was thrown in order to pick it up and use it again. A map contains many different objects the player can collide with. Different objects have different effects: \begin{itemize} \item\textbf{Potions} are used to increase the HP of the hero by 1, up to the maximum of 10. When collided with by either the hero or blobs, they are consumed and may not be re-used. \item\textbf{Treasures} are used to increase the treasure score of the hero. When collided with by either the hero or ogres, they are consumed and may not be re-used. \item\textbf{Portals} come in pairs. If the hero collides with a portal, they are immediately (on the same turn) transported to the other paired portal. \item\textbf{Traps} deal 1 damage to any game character moving through them, every time. \end{itemize} While exploring a map, the hero may come across a variety of monsters, some of which have secondary goals in addition to attacking the player. 
\begin{itemize} \item\textbf{Goblins} (or Melee Goblins) move 1 tile every turn towards the player if they have an unbroken line of sight to the player. They have 1 HP and deal 1 damage upon collision. Goblins avoid colliding with other goblins and goblin wizards. \item\textbf{Goblin Wizards} (or Ranged Goblins) cast a spell that does 1 damage at the hero if they have an unbroken line of sight within 5 tiles of the player. If they are over 5 tiles from the player but have line of sight, they will move 1 tile towards the player. Wizards have 1 HP and deal no damage on collision. \item\textbf{Blobs} do not move unless they have unbroken line of sight with either a potion or the hero. If they see either one, they will move 1 tile towards the closest one per turn, preferring potions over the hero in case of a tie. A blob colliding with a potion consumes it. Blobs colliding with other blobs merge into a more powerful blob. The lowest level blob has 1 HP and does 1 damage upon collision. The 2\textsuperscript{nd} level blob has 2 HP and does 2 damage. The most powerful blob has 3 HP and does 3 damage. \item\textbf{Ogres} also do not move unless they have unbroken line of sight with either a treasure or the hero. If they see either one, they will move 1 tile towards the closest one per turn, preferring treasures over the hero in case of a tie. When an ogre collides with a treasure, they consume it, and their sprite becomes fancier to look at. Ogres have 2 HP and deal 2 damage to anything they collide with, including other ogres. \item\textbf{Minitaurs} always move 1 step along the shortest path to the hero as determined by A* search, regardless of line of sight. Collision with the minitaur will deal 1 damage. A minitaur has no HP and is incapable of dying. If damage is done to it, the minitaur will be knocked out for 3 rounds (and can be passed through).
\end{itemize} The game is technically infinite on all current maps, as they all contain areas where the player can move back and forth indefinitely, dealing with the Minitaur using the javelin. However, in practice most maps are finished in 20-30 moves with goal directed play. The branching factor is estimated at 3.41 across the included maps \cite{holmgard2015minidungeons2}, but depends on the map. \section{Procedural Personas in MiniDungeons 2}\label{sec:method} We identified four player archetypes which will become our procedural personas. These personas prioritize different interactions with the game and were defined from the game's four primary object types. The following four archetypes were defined based on our design experience and intuition: \begin{itemize} \item \emph{Runner} aims to reach the exit. \item \emph{Monster Killer} wants to kill monsters. \item \emph{Treasure Collector} desires to collect treasure. \item \emph{Completionist} attempts to consume any game object that can be collected or killed (monsters, potions, treasures). \end{itemize} Apart from the Completionist, these personas have also been featured in previous attempts at modeling personas via MCTS \cite{holmgard2015minidungeons2} or in the simpler {MiniDungeons} game \cite{holmgard2014evolvingpersonas,holmgard2014generativeagents}. Since the types of interactions with the game world are limited, these four single-minded personas capture a large part of the potential play space in MiniDungeons 2. The personas all use MCTS to formulate a sequence of actions for play. Because MiniDungeons 2 is fully deterministic, each persona only builds one tree per map. It immediately ceases construction once a winning terminal state is discovered or the timeout is reached, whereupon it takes the best sequence of actions discovered. On average, trees will contain between two and five million nodes.
\subsection{Utility Formation}\label{sec:utility} \begin{table} [t] \caption{Gameplay metrics used as variables combined in the evolving equation trees, and their notation.}\label{table:equationSymbols} \centering \begin{tabular}{|l||l|} \hline Steps Taken (\texttt{ST}) & Proximity to Exit (\texttt{PE}) \\ Potions Drunk (\texttt{PD}) & Treasures Opened (\texttt{TO})\\ Minitaur Knockouts (\texttt{MTK}) & Monsters Slain (\texttt{MS})\\ Javelins Thrown (\texttt{JT}) & Health Left (\texttt{HL})\\ Teleports Used (\texttt{TU}) & Traps Sprung (\texttt{TS}) \\ Average MCTS reward (\texttt{\={R}}) & Interactive Objects Consumed (\texttt{IC}) \\ \hline \end{tabular} \end{table} The four personas of MiniDungeons 2 are defined by their primary and secondary objectives, which determine the utility score that is calculated at the end of a simulation phase and back-propagated to the rest of the tree in the next phase (see Section \ref{sec:mcts}). From preliminary experiments, it seems that MiniDungeons 2 maps are frequently too complex and long for MCTS to simulate rollouts to a terminal state, as games can become infinite if the hero moves back and forth in place. Therefore, in the rollout stage, our agents simulate 10 random moves before back-propagating the utility score. The different personas use metrics collected from the game's state after these 10 random moves: Table \ref{table:equationSymbols} describes the variable metrics used in this paper. Note that for $PD$, $MS$, $TO$, and $IC$ the values represent the ratio out of all potions, monsters, treasures, and all non-monster game objects in the map, respectively. \textbf{Runner (R)} has the primary objective of finding the exit in the fewest moves possible.
\begin{equation} U_R = \begin{cases} PE - 0.01 \cdot ST &\text{if hero is alive}\\ PE - 0.01 \cdot ST - 5 &\text{if hero is dead}\\ \end{cases} \label{eq:baseline_r} \end{equation} \textbf{Monster Killer (MK)} has the primary objective of killing as many monsters as possible with the secondary objective of finding the exit. \begin{equation} U_{MK} = \begin{cases} 0.7\cdot MS + 0.3\cdot PE &\text{if hero is alive}\\ 0.7\cdot MS + 0.3\cdot PE - 5 &\text{if hero is dead}\\ \end{cases} \label{eq:baseline_mk} \end{equation} \textbf{Treasure Collector (TC)} has the primary objective of consuming as much treasure as possible with the secondary objective of finding the exit. \begin{equation} U_{TC} = \begin{cases} 0.7\cdot TO + 0.3\cdot PE &\text{if hero is alive}\\ 0.7\cdot TO + 0.3\cdot PE - 5 &\text{if hero is dead}\\ \end{cases} \label{eq:baseline_tc} \end{equation} \textbf{Completionist (C)} has the primary objective of consuming as many potions and treasures, and killing as many monsters as possible (thus ``completing'' a map), along with the secondary objective of reaching the exit. \begin{equation} U_C = \begin{cases} 0.7\cdot IC + 0.3\cdot PE &\text{if hero is alive}\\ 0.7\cdot IC + 0.3\cdot PE- 5 &\text{if hero is dead}\\ \end{cases} \label{eq:baseline_c} \end{equation} By studying how these personas traverse the game's maps, we can better evaluate how players will interact with the game. \subsection{Evolving the Policy of Personas}\label{sec:evolving_method} Genetic programming is used to evolve the mathematical formula that replaces UCB1. The Evolute C\# source code\footnote{\url{http://evolute-csharp.sourceforge.net}} was modified to work as follows: The chromosome representation is a syntax tree where all nodes contain a \textit{binary} operation and all leaves contain a \textit{variable} or a \textit{constant}. The four binary functions are addition, subtraction, multiplication, and division. 
Constant values are floats generated uniformly at random within $[-1,1]$. Variable values are derived from the game-play metrics described in Table \ref{table:equationSymbols}. The generator takes these variables and constant numbers and initializes equation trees with them, with an initial minimum depth of 2 and maximum of 5. During evolution, the maximum depth of a tree is set to 8 to avoid extremely long equations. This means that an equation can have as many as $2^8$ (i.e.~256) elements. Each persona has its own utility function, as per Eqs.~(\ref{eq:baseline_r}--\ref{eq:baseline_c}), which evaluates the result to be back-propagated after the simulation step of MCTS. To test the candidate function, the UCB equation is completely replaced by the candidate as the tree policy. The agents are evaluated based on a fitness function calculated at the end of each playthrough, i.e. when the hero has reached the exit, when the hero is killed, or after a maximum allocated time has passed. Each persona uses the same fitness as the utility score (e.g. $f_{MK}=U_{MK}$) calculated at the end of the playthrough. For the Monster Killer, e.g., the fitness $f_{MK}$ evaluates how many monsters it killed in total in this map, how close it ended to the exit ($PE=0$ if the exit was reached), and whether the playthrough ended because the hero was killed (which applies a penalty to the fitness). Since MiniDungeons 2 maps can combine interactive tiles and monsters in many different ways, the fitness of each individual is based on their utility in different maps. Evolving agents are tested on maps 1, 2, 3, 4, 7, and 10 of Fig.~\ref{fig:maps}: these maps capture many different playstyle patterns of MiniDungeons 2. The overall fitness of a chromosome is calculated by averaging its fitness across all playthroughs in these six maps. This averaged fitness score was then used to select genes (i.e. UCB1 replacement functions) for recombination, mutation and replacement.
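To make the representation concrete, random initialization and evaluation of such equation trees could be sketched as below; the 50/50 leaf choice and the protected division are our own assumptions rather than details of the Evolute implementation:

```python
import operator, random

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul,
       "/": lambda a, b: a / b if b else 0.0}  # protected division (an assumption)
VARS = ["ST", "PE", "PD", "TO", "MTK", "MS", "JT", "HL", "TU", "TS", "R", "IC"]

def random_tree(depth, max_depth=5):
    """Grow a random equation tree: internal nodes hold a binary operator,
    leaves hold a metric variable or a constant in [-1, 1]; leaves can
    appear from depth 2 onward, matching the initial min/max depths."""
    if depth >= max_depth or (depth >= 2 and random.random() < 0.5):
        if random.random() < 0.5:
            return random.choice(VARS)
        return random.uniform(-1.0, 1.0)
    op = random.choice(list(OPS))
    return (op, random_tree(depth + 1, max_depth), random_tree(depth + 1, max_depth))

def evaluate(tree, metrics):
    """Evaluate a tree given a dict of gameplay metric values."""
    if isinstance(tree, tuple):
        op, left, right = tree
        return OPS[op](evaluate(left, metrics), evaluate(right, metrics))
    if isinstance(tree, str):
        return metrics[tree]
    return tree  # constant leaf
```

A candidate tree policy is then simply `evaluate(tree, metrics)` computed per child during selection, in place of eq.~\eqref{eq:ucb}.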
The initial population of 100 individuals is created via the initialization process described above. Evolution uses an islands model \cite{whitley1999island} with 5 islands. Migration occurs in every generation. After migration, the five fittest individuals of each island are selected and placed into that island's \emph{mating pool}. Based on preliminary experiments, elitism was set to 15\% of the population. Before crossover, all individuals in the mating pool have a 10\% chance of mutating. Mutation replaces the chromosome with a random chromosome via the same initialization process described above. After mutation, the mating pool undergoes \emph{crossover}: two random chromosomes from the mating pool are crossed-over to create two offspring, formed by exchanging randomly selected sub-trees between the parents. During crossover, every individual in the mating pool has an equal chance to be selected, regardless of fitness. After this process is repeated for every island, a new population is generated and evaluated for fitness. 
\section{Experiments}\label{sec:experiments} \begin{figure*}[t] \centering \subfloat[Map 1]{\includegraphics[width=40px]{./graphics/MapScreenshots/Map1.png}}~ \subfloat[Map 2]{\includegraphics[width=40px]{./graphics/MapScreenshots/Map2.png}}~ \subfloat[Map 3]{\includegraphics[width=40px]{./graphics/MapScreenshots/Map3.png}}~ \subfloat[Map 4]{\includegraphics[width=40px]{./graphics/MapScreenshots/Map4.png}}~ \subfloat[Map 5]{\includegraphics[width=40px]{./graphics/MapScreenshots/Map5.png}}~ \subfloat[Map 6]{\includegraphics[width=40px]{./graphics/MapScreenshots/Map6.png}}~ \subfloat[Map 7]{\includegraphics[width=40px]{./graphics/MapScreenshots/Map7.png}}~ \subfloat[Map 8]{\includegraphics[width=40px]{./graphics/MapScreenshots/Map8.png}}~ \subfloat[Map 9]{\includegraphics[width=40px]{./graphics/MapScreenshots/Map9.png}}~ \subfloat[Map 10]{\includegraphics[width=40px]{./graphics/MapScreenshots/Map10.png}}~ \subfloat[Map 11]{\includegraphics[width=40px]{./graphics/MapScreenshots/Map11.png}} \caption{All 11 maps in the MiniDungeons 2 game.} \label{fig:maps} \end{figure*} The purpose of evolving UCB1 replacement functions is to optimize the agents' behavior relative to the persona-defining utility function. Below, we describe the results of evolving the agents, comparing them with the standard UCB1 function and using the evolved agents to playtest maps. \subsection{Experimental Protocol}\label{sec:protocol} For the purposes of evolving the four agents' tree policy equations, six maps are played by the agent and the fitness score is calculated as described in Section \ref{sec:evolving_method}. However, for evaluating the agents' performance a broader set of maps is used: all 11 maps of Fig.~\ref{fig:maps} are tested in Sections \ref{sec:persona_analysis} and \ref{sec:level_personas}. To cater for the stochastic nature of MCTS, each map is simulated in 50 trials by each agent. 
To cater for the stochastic nature of evolutionary algorithms, 3 independent evolutionary runs of 100 generations are performed with a population of 100 individuals, following the process described in Section \ref{sec:evolving_method}. The best performing run (based on the persona's core priority, e.g. monsters killed for the Monster Killer) is chosen among the three evolutionary runs and reported here. For the purposes of assessing the performance of evolved personas, \emph{baseline MCTS agents} using the UCB1 tree policy of Eq.~\ref{eq:ucb} are used to simulate the 11 maps in 50 trials each. Each baseline persona uses UCB1 for its tree policy but then backpropagates the persona-specific utility of Eqs.~(\ref{eq:baseline_r}--\ref{eq:baseline_c}) after each simulation. All reported significance testing is performed through Student's two-tailed $t$-tests, assuming unequal variance, at a $5\%$ significance level; when comparing between maps, the 50 playthroughs of each persona are tested for significance. Otherwise, 95\% confidence intervals are calculated via the standard deviation of all playthroughs in all maps. \subsection{Persona Evolution}\label{sec:evolution} For all personas, fitness starts converging after approximately 20 generations and from then on improves only marginally. The best evolved tree policy equation for each persona in its raw form often has duplicate variables and can be simplified. We simplified each of the fittest equations at the end of 100 generations and list them as Eqs.~(\ref{eq:ucb_r}--\ref{eq:ucb_c}). Some of these equations are quite convoluted but some interesting patterns can be gleaned. The tree policy for the Runner in eq.~\eqref{eq:ucb_r} strongly prioritizes the proximity to the exit variable but also has a negative factor for health left (i.e. it actively prefers reaching the exit with low health).
The tree policy for the Monster Killer in eq.~\eqref{eq:ucb_mk} aggressively prioritizes monsters slain and proximity to the exit, as these variables are multiplied with every other metric; interestingly, this tree policy is the only one among Eqs.~(\ref{eq:ucb_r}--\ref{eq:ucb_c}) which does not include $R$ (the average reward). The tree policy for the Treasure Collector in eq.~\eqref{eq:ucb_tc} is the only linear equation (a weighted sum); it includes an additive constant of 0.19, which does not affect the tree policy as it is added to all possible moves. More interestingly, the Treasure Collector policy puts more weight on potions drunk and monsters slain (a weight of 2 each) than on treasures opened (a weight of 1). The Completionist has the most complex tree policy in eq.~\eqref{eq:ucb_c}. It puts a surprising emphasis on steps taken (multiplied with most components), monsters slain and proximity to exit (despite the fact that it also subtracts $PE$); most surprisingly, it only includes treasures opened ($TO$) once and with a negative weight, meaning that it actively tries to reduce the number of treasures opened despite the fact that the utility (and fitness) of eq.~\eqref{eq:baseline_c} actively rewards treasures as a member of the interactive objects set. \begin{align} t_R =& 6.235{\cdot}ST{\cdot}PE^2{\cdot}(PE+1)+R{\cdot}(1-HL) \label{eq:ucb_r}\\ t_{MK} =& 4{\cdot}MS{\cdot}PE{\cdot}(MS+2{\cdot}HL{\cdot}(PE-IC)) \label{eq:ucb_mk}\\ t_{TC} =& 2{\cdot}PD+2{\cdot}MS+TO\nonumber \\ &+3{\cdot}R+ST+PE+0.19 \label{eq:ucb_tc}\\ t_C =& ST{\cdot}MS{\cdot}(ST^2{\cdot}MS + IC) +R-TO+IC \nonumber \\ &-PE +2{\cdot}ST{\cdot}PE{\cdot}(ST{\cdot}MS + 1) \label{eq:ucb_c} \end{align} \subsection{Comparing Personas}\label{sec:persona_analysis} It is not sufficient to assess the evolved personas based on their fitness scores alone. This section tests the final best personas based on what types of content they interact with in all the maps of MiniDungeons 2.
The focus is on comparisons with UCB1 MCTS personas (as baselines) regarding agents' efficiency in achieving their core priorities, but also on comparisons between different personas' play behavior. \subsubsection{Overall Performance}\label{sec:averages} \begin{table} \centering \caption{Average scores in several game metrics for evolved and baseline personas. Results are averaged from 50 independent playthroughs of the best personas in each of the 11 maps. Results include the 95\% confidence interval.} \label{table:averages} \begin{tabular}{|l| r@{}l@{\hspace{1em}} r@{}l@{\hspace{1em}} r@{}l@{\hspace{1em}} r@{}l|} \hline Metric & \multicolumn{2}{c}{R} & \multicolumn{2}{c}{MK} & \multicolumn{2}{c}{TC} & \multicolumn{2}{c|}{C} \\ \hline \multicolumn{9}{|l|}{Evolved}\\ \hline Monsters & $46\%$&${\pm}2\%$ & $67\%$&${\pm}2\%$ & $59\%$&${\pm}2\%$ & $66\%$&${\pm}3\%$ \\ Potions & $8\%$&${\pm}1\%$ & $4\%$&${\pm}1\%$ & $25\%$&${\pm}2\%$ & $8\%$&${\pm}1\%$ \\ Treasures & $10\%$&${\pm}1\%$ & $7\%$&${\pm}1\%$ & $35\%$&${\pm}2\%$ & $10\%$&${\pm}1\%$ \\ Interactive Objects & $25\%$&${\pm}1\%$ & $33\%$&${\pm}1\%$ & $41\%$&${\pm}1\%$ & $34\%$&${\pm}1\%$ \\ \hline Win Rate & $100\%$&${\pm}0\%$ & $73\%$&${\pm}4\%$ & $54\%$&${\pm}4\%$ & $100\%$&${\pm}0\%$ \\ Time (sec) & $3.2$&${\pm}0.3$ & $83$&${\pm}11$ & $151$&${\pm}12$ & $8.1$&${\pm}1.2$ \\ \hline \multicolumn{9}{|l|}{Baseline}\\ \hline Monsters & $25\%$&${\pm}1\%$ & $29\%$&${\pm}1\%$ & $25\%$&${\pm}1\%$ & $28\%$&${\pm}0\%$ \\ Potions & $5\%$&${\pm}1\%$ & $5\%$&${\pm}1\%$ & $6\%$&${\pm}1\%$ & $5\%$&${\pm}0\%$ \\ Treasures & $5\%$&${\pm}1\%$ & $5\%$&${\pm}1\%$ & $17\%$&${\pm}2\%$ & $6\%$&${\pm}0\%$ \\ Interactive Objects & $13\%$&${\pm}1\%$ & $16\%$&${\pm}1\%$ & $19\%$&${\pm}1\%$ & $15\%$&${\pm}0\%$ \\ \hline Win Rate & $10\%$&${\pm}3\%$ & $12\%$&${\pm}3\%$ & $9\%$&${\pm}2\%$ & $13\%$&${\pm}0\%$ \\ Time (sec) & $277$&${\pm}6$ & $285$&${\pm}4$ & $279$&${\pm}5$ & $278$&${\pm}0$ \\ \hline \end{tabular} \end{table} Table 
\ref{table:averages} shows the ratio of game objects each agent has interacted with (i.e. monsters, potions, treasures) on average in the 11 testbed maps of MiniDungeons 2. At a high level, the Monster Killer kills more monsters on average than the other personas, while the Treasure Collector collects more treasure and drinks more potions. A more detailed analysis in Section \ref{sec:ttest_personas} will shed more light on the differences between personas, as there are substantial deviations between maps. Table \ref{table:averages} includes the win rate of different personas (i.e. instances where the agent reached the exit), as well as computation time to find a path (up to a maximum of 300 seconds). The evolved Runner persona is consistently able to reach the exit in all maps, and does so in far fewer steps and with far less computational time than all other personas, both evolved and baseline MCTS. Surprisingly, the evolved Completionist persona also completes all maps in all trials, despite the fact that it prioritizes interacting with as many game objects as possible; perhaps due to the latter strategy, its computational time is double that of the evolved Runner. Since the Treasure Collector does not receive a large reward for finishing the level, it tends to roam around the map attempting to collect all treasures, and does not finish before a maximum allocated time in 46\% of all trials. The Monster Killer has a similarly low reward for finishing the level; however, it only roams around certain maps until the allocated time runs out (27\% of all trials). It should be noted that the computation time is averaged from all playthroughs regardless of whether the agent reached the exit: if only won playthroughs are considered, the average computation time of the evolved Monster Killer (2.06 sec) is the lowest of all evolved personas, while that of the evolved Treasure Collector remains high (at 23 sec).
That said, considering only the computation time of won games, these values are still better than those of the baseline MCTS Runner (71 sec), Monster Killer (167 sec) and Completionist (133 sec); only the baseline Treasure Collector is relatively close (72 sec) to its evolved counterpart, but it only wins in one map (map 8). Comparisons between baseline and evolved agents in the remaining game metrics of Table \ref{table:averages} will be detailed in Section \ref{sec:ttest_baselines}. \subsubsection{Differences from the Baseline}\label{sec:ttest_baselines} For the purposes of comparisons between evolved and baseline personas, the results of Table \ref{table:averages} are too noisy due to the sensitivity of persona behavior in different maps of MiniDungeons 2. For a more thorough comparison, Table \ref{table:ttest_baselines} compares the number of maps (out of 11) in which the different metrics are significantly higher for the evolved persona than the baseline persona of the same type (E), or significantly lower (B). Significance is tested via two-tailed Student's $t$-tests ($p<5\%$) comparing 50 playthroughs of each map per persona (evolved or baseline). Table \ref{table:ttest_baselines} shows that evolved personas score significantly higher (or lower, for computation time) in the different metrics in more maps than their baseline counterparts. A notable exception is the treasure ratio for the Runner, Monster Killer, and Completionist, as the MCTS personas collect more treasure in a comparable number of maps; however, these baseline MCTS agents fail to complete the level in far more cases. Especially regarding win rates, the evolved personas are always superior (or at least not inferior) in all maps and for all personas. In comparison, baseline personas need more computational time and fail to finish a level far more often, as shown by their win rates in Table \ref{table:averages} (less than 15\% for all personas).
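The per-map significance tests used throughout this section (two-tailed $t$-tests assuming unequal variance) are Welch's tests; a sketch of the statistic and its degrees of freedom follows, with the $p$-value lookup against the $t$-distribution omitted for brevity:

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two samples with
    (possibly) unequal variances, e.g. 50 playthrough scores per persona."""
    va, vb = variance(a) / len(a), variance(b) / len(b)  # sample variances / n
    t = (mean(a) - mean(b)) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df
```

The resulting statistic is compared against the two-tailed critical value of the $t$-distribution with $df$ degrees of freedom at the $5\%$ level.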
\begin{table} \centering \caption{Maps in which the shown metrics are significantly higher for the evolved persona (E) than the baseline persona of the same type, and maps in which the reverse is true (B).} \label{table:ttest_baselines} \begin{tabular}{|l|c@{\hspace{1em}}c || c@{\hspace{1em}}c || c@{\hspace{1em}}c || c@{\hspace{1em}}c |} \hline & \multicolumn{2}{|c||}{R} & \multicolumn{2}{c||}{MK} & \multicolumn{2}{c||}{TC} & \multicolumn{2}{c|}{C} \\ \cline{2-9} Metric & E & B & E & B & E & B & E & B \\ \hline Monster Ratio & 8 & 1 & 10 & 0 & 10 & 1 & 10 & 0 \\ Potion Ratio & 2 & 0 & 2 & 1 & 8 & 1 & 2 & 0 \\ Treasure Ratio & 3 & 2 & 2 & 5 & 7 & 1 & 3 & 3 \\ Interactive Objects Ratio & 9 & 1 & 10 & 0 & 10 & 1 & 10 & 0 \\ \hline Time (sec) & 0 & 11 & 0 & 11 & 0 & 10 & 0 & 11 \\ Win Rate & 10 & 0 & 7 & 0 & 7 & 0 & 10 & 0 \\ \hline \end{tabular} \end{table} \subsubsection{Differences among Personas}\label{sec:ttest_personas} Due to the large differences between MiniDungeons 2 maps in the different metrics of Table \ref{table:averages}, to compare whether (and how) the procedural personas play the game differently, we evaluate the number of maps in which one persona has a significantly higher value for one metric than another persona. This comparison is summarized in Table \ref{table:ttest_personas}; significant differences are established from a $t$-test ($p<5\%$) between 50 playthroughs of each persona in one map. We are interested in seeing whether the evolved personas, which have been shown to be more efficient and robust at gameplaying, still maintain differentiation in those game metrics that make them unique (e.g. a Monster Killer persona should kill more monsters than other personas). Analyzing the general differences between the evolved Monster Killer and other evolved personas in Table \ref{table:ttest_personas}, we see that the killed monsters ratio is highest for this persona; no other persona has a higher ratio in any map.
The evolved Treasure Collector collects significantly more treasure in most maps (8 or 9 out of 11); the baseline Treasure Collector is close but is not superior to other baseline personas in as many maps. Interestingly, the evolved Completionist is underperforming in all relevant metrics compared to the Treasure Collector: it interacts with more game objects only in 1 map (the Treasure Collector has more interactive objects in 8 maps) and generally drinks fewer potions and collects less treasure. It is therefore obvious from Table \ref{table:ttest_personas} that the Completionist is inferior to the Treasure Collector apart from the fact that it kills more monsters in two maps. This is surprising due to the fact that this persona explicitly rewarded a high interactive objects ratio in the fitness for deciding its tree policy, and when scoring the default policy. On the other hand, the evolved Completionist persona is the only persona apart from the Runner which wins all 50 playthroughs in all 11 maps while still interacting with more game objects (primarily monsters) than the Runner. Even the baseline Completionist persona has a high win rate compared to the baseline Runner (see Table \ref{table:averages}), while other baseline personas interact with more objects. It is our assumption that using the interactive objects ratio for the utility score (and one would assume as a fitness) creates an imbalance between interacting with an object and approaching the exit. For instance, most maps have 5 to 8 treasure tiles and thus a Treasure Collector would have a higher utility gain by collecting a couple during a playthrough rather than by getting a few steps closer to the exit; in contrast, when maps have around 20 interactive objects, a Completionist interacting with a couple of them will have a lower utility gain than approaching the exit.
Completionist agents thus favor reaching the exit, although not as aggressively as the Runner, as there is some reward (however slight) for deviating from the path. \begin{table} \centering \caption{Maps in which the shown metrics are significantly higher for the persona on the row than for the persona in the column. } \label{table:ttest_personas} \begin{tabular}{|l| c c c c || l | c c c c|} \hline & \multicolumn{4}{c||}{Evolved} & & \multicolumn{4}{c|}{Baseline}\\ \cline{2-10} & R & MK & TC & C & & R & MK & TC & C\\ \hline \multicolumn{10}{|l|}{Monster Ratio}\\ \hline R & --- & 0 & 3 & 1 & R & --- & 0 & 1 & 0 \\ MK & \textbf{8} & --- & \textbf{6} & \textbf{3} & MK & \textbf{5} & --- & \textbf{5} & \textbf{2} \\ TC & 7 & 3 & --- & 3 & TC & 2 & 1 & --- & 1 \\ C & 8 & 2 & 6 & --- & C & 3 & 0 & 2 & --- \\ \hline \multicolumn{10}{|l|}{Potion Ratio}\\ \hline R & --- & 2 & 1 & 0 & R & --- & 0 & 1 & 0 \\ MK & 1 & --- & 0 & 1 & MK & 0 & --- & 1 & 0 \\ TC & 8 & 8 & --- & 8 & TC & 3 & 3 & --- & 3 \\ C & 0 & 2 & 1 & --- & C & 0 & 0 & 1 & --- \\ \hline \multicolumn{10}{|l|}{Treasure Ratio}\\ \hline R & --- & 3 & 0 & 0 & R & --- & 1 & 0 & 1 \\ MK & 0 & --- & 0 & 0 & MK & 2 & --- & 0 & 0 \\ TC & \textbf{9} & \textbf{9} & --- & \textbf{8} & TC & \textbf{6} & \textbf{7} & --- & \textbf{6} \\ C & 2 & 3 & 0 & --- & C & 1 & 0 & 0 & --- \\ \hline \multicolumn{10}{|l|}{Interactive Object Ratio}\\ \hline R & --- & 3 & 1 & 1 & R & --- & 0 & 0 & 0 \\ MK & 6 & --- & 1 & 2 & MK & 4 & --- & 0 & 1 \\ TC & 9 & 8 & --- & 8 & TC & 6 & 5 & --- & 6 \\ C & \textbf{8} & \textbf{4} & \textbf{1} & --- & C & \textbf{2} & \textbf{0} & \textbf{0} & --- \\ \hline \multicolumn{10}{|l|}{Time}\\ \hline R & --- & 2 & 0 & 1 & R & --- & 0 & 0 & 2 \\ MK & 3 & --- & 1 & 3 & MK & 1 & --- & 1 & 3 \\ TC & 11 & 10 & --- & 10 & TC & 2 & 1 & --- & 2 \\ C & 6 & 4 & 0 & --- & C & 1 & 0 & 1 & --- \\ \hline \multicolumn{10}{|l|}{Win Rate}\\ \hline R & --- & 3 & 7 & 0 & R & --- & 0 & 1 & 0 \\ MK & 0 & --- & 4 & 0 & MK & 1 
& --- & 1 & 0 \\ TC & 0 & 1 & --- & 0 & TC & 0 & 0 & --- & 0 \\ C & 0 & 3 & 7 & --- & C & 1 & 1 & 1 & --- \\ \hline \end{tabular} \end{table} \subsection{Evaluating Levels with Personas}\label{sec:level_personas} Procedural personas can be used for many different purposes, such as modeling players based on the similarity of players' actions with a persona's actions \cite{holmgard2015evolving}. However, personas can also be used to evaluate game levels by creating artificial playtraces; this can be used as feedback to a human designer when procedural personas test an authored level, but also as a way to improve computer generated levels in a search-based approach driven by the artificial playtraces of personas in simulations \cite{liapis2015personacritics}. In this paper, the latter approach is followed and we evaluate which patterns of game levels affect the performance of different personas. We will only use the evolved personas, as they are overall superior. \begin{figure}[t] \centering \includegraphics[width=0.95\columnwidth]{./graphics/stacked_interactables} \caption{Number of interactive objects in MD2 maps.} \label{fig:stacked} \end{figure} In order to first identify what the differences are between levels, Fig.~\ref{fig:stacked} shows the number of interactive objects (i.e. potions, treasure, and the different types of monsters) contained in each map used in this analysis. Obviously, some maps have fewer interactive objects (map 1, map 9), and some maps have more potions and fewer treasures (e.g. map 5) or vice versa (e.g. map 9). There are also many differences in the types of monsters favored; while all maps include at least one minitaur (map 2 has two of them), some maps do not include ogres (map 1, map 9) and some maps have more ranged goblin enemies than melee goblin enemies (e.g. map 4, map 10) or vice versa (e.g. map 1). It should be noted that besides interactive objects, these maps differ in terms of other types of tiles, e.g. 
seven maps contain a set of portals allowing shortcuts through the level, while six maps contain one or more traps which deal damage when the tile is visited. A broad range of metrics on the levels' structure alone (before simulation) was collected from the 11 maps of MiniDungeons 2. These include the number of interactive objects of Fig.~\ref{fig:stacked}, the number of portals and traps, the number of wall tiles, choke points and dead ends (tiles with only two or one connected passable tiles, respectively), the length of the shortest path between entrance and exit, and many others. The metrics of all maps were then analyzed in terms of their correlations with the performance of each persona in the same map. For the sake of brevity, only correlations with each persona's win rate and \emph{core priority} will be discussed: i.e. computation time for the Runner, treasure ratio for the Treasure Collector, monster ratio for the Monster Killer, and interactive object ratio for the Completionist. While many of the level metrics were found to be correlated with these persona metrics, due to the small sample size (11 maps) only a handful of significant correlations were found ($p<0.05$ for Pearson's correlation coefficient, reported as $r$). For the Runner, computation time was significantly correlated with the length of the shortest path between entrance and exit ($r=0.71$). This is not surprising, since the Runner is efficient at finding a short route to the exit and thus requires less computation time if the exit is nearby. For the Treasure Collector, the ratio of collected treasures has a significant negative correlation with the number of walls ($r=-0.69$) and a significant positive correlation with the number of open areas ($r=0.63$); open areas are tiles where all adjacent tiles are non-walls, thus it is not surprising that walls and open areas have opposite effects. It seems that the Treasure Collector is less able to handle winding corridors. 
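The correlation analysis described above can be sketched as follows; the per-map numbers here are hypothetical illustrations (not the actual level metrics), with one value per map for the 11 maps.

```python
import math

def pearson_r(xs, ys):
    """Pearson's correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values per map: shortest entrance-exit path length and the
# Runner's average computation time (seconds) on that map
path_length = [10, 14, 9, 20, 12, 25, 8, 30, 11, 18, 22]
runner_time = [0.5, 0.7, 0.4, 1.1, 0.6, 1.3, 0.4, 1.6, 0.5, 0.9, 1.2]

r = pearson_r(path_length, runner_time)  # strongly positive for this toy data
```

In the actual analysis, each such $r$ would additionally be tested for significance at $p<0.05$ before being reported.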
The win ratio of the Treasure Collector persona has a significant negative correlation with the shortest path length to the exit ($r=-0.62$), again pointing at this persona's poor performance in winding maze-like corridors. While no significant correlations were found for the Monster Killer, the Completionist's interactive object ratio is negatively correlated with the number of treasures in the map ($r=-0.65$). This is not surprising as the Completionist's evolved tree policy in eq.~\eqref{eq:ucb_c} actively discourages opening treasure chests, so the more of those there are in the map the fewer the Completionist's interactions with game objects. \begin{figure}[t] \centering \subfloat[Map 6]{\includegraphics[width=0.45\columnwidth]{./graphics/radial_map6}} \subfloat[Map 8]{\includegraphics[width=0.45\columnwidth]{./graphics/radial_map8}}\\ \subfloat{\includegraphics[width=\columnwidth]{./graphics/legend_fullhoriz.png}} \caption{Metrics of different personas in the same map.} \label{fig:radars} \end{figure} In order to see how the maps' layout can affect the diversity of playthroughs among personas, we choose two indicative maps to analyze; map 6 which has the most (significant) differences in all possible pairs of personas and for all metrics of Table \ref{table:ttest_personas}, and map 8 which has the fewest differences. The values of these metrics for different personas in each map are shown in Fig.~\ref{fig:radars}, averaged from 50 playthroughs. In map 6, the Runner and Completionist reach the exit in all playthroughs while the Monster Killer and Treasure Collector never reach the exit as their computation time consistently reaches the timeout limit of 300 sec. Surprisingly, the Treasure Collector kills more monsters (66\%), collects more treasure (38\%), drinks more potions (36\%) and generally interacts with more game objects (49\%) than all other personas. 
The Runner persona manages to collect more treasure (15\%) than both the Monster Killer (0\%) and the Completionist (14\%). The Completionist has the second highest interactive object ratio (32\%). Based on the heatmaps of Fig.~\ref{fig:heatmap_map6}, the Monster Killer and Treasure Collector are shown to roam around the map and then stall without reaching a decision until the 300 sec timeout. The Runner and Completionist, on the other hand, follow a similar path to the exit (top right), which is actually the shortest path. Only the Treasure Collector gets the two unguarded treasures next to the entrance (bottom left), while the wounded Monster Killer (due to combat with the ogre and two blobs) ignores both unguarded potions along its path in Fig.~\ref{fig:heatmap_map6_mk}. Interestingly, no persona uses the portal. In map 8, all personas reach the exit in all playthroughs with minimal computational time, and generally their other metrics are also similar. Again, the monsters killed across 50 playthroughs are lower for the Monster Killer (11\%) than for the Treasure Collector (19\%), and slightly lower than for the Completionist (12\%). All personas drink no potions and collect one treasure (which is mandatory in order to reach the exit as shown in Fig.~\ref{fig:heatmap_map8}); therefore the difference in interactive object ratio is solely due to more monsters killed by the Treasure Collector. It is worth noting that Fig.~\ref{fig:heatmap_map8_tc} shows how the Treasure Collector may spend more time roaming around the map. On average, the Treasure Collector needs more computation time (2.8 sec) than the Monster Killer and Runner (0.8 sec for both). Indeed, the Treasure Collector takes on average 11.6 actions (the Runner takes 8, the Monster Killer 9 and the Completionist 9.2). 
This persona's behavior differs from playthrough to playthrough: in the one shown in Fig.~\ref{fig:heatmap_map8_tc}, the Treasure Collector took 23 actions and killed 5 monsters, which simply walked towards the agent (without the agent needing to explore the map). \begin{figure}[t] \centering \subfloat[R]{\includegraphics[trim={285px 50px 285px 0},clip,width=0.23\columnwidth]{./graphics/GeneratedHeatMaps/Runner_Map6.png}}~ \subfloat[MK]{\includegraphics[trim={285px 50px 285px 0},clip,width=0.23\columnwidth]{./graphics/GeneratedHeatMaps/MonsterKiller_Map6.png}\label{fig:heatmap_map6_mk}}~ \subfloat[TC]{\includegraphics[trim={285px 50px 285px 0},clip,width=0.23\columnwidth]{./graphics/GeneratedHeatMaps/TreasureCollector_Map6.png}}~ \subfloat[C]{\includegraphics[trim={285px 50px 285px 0},clip,width=0.23\columnwidth]{./graphics/GeneratedHeatMaps/Completionist_Map6.png}} \caption{Heatmaps of persona behavior in map 6.} \label{fig:heatmap_map6} \end{figure} \begin{figure}[t] \centering \subfloat[R]{\includegraphics[trim={285px 50px 285px 0},clip,width=0.23\columnwidth]{./graphics/GeneratedHeatMaps/Runner_Map8.png}}~ \subfloat[MK]{\includegraphics[trim={285px 50px 285px 0},clip,width=0.23\columnwidth]{./graphics/GeneratedHeatMaps/MonsterKiller_Map8.png}}~ \subfloat[TC]{\includegraphics[trim={285px 50px 285px 0},clip,width=0.23\columnwidth]{./graphics/GeneratedHeatMaps/TreasureCollector_Map8.png}\label{fig:heatmap_map8_tc}}~ \subfloat[C]{\includegraphics[trim={285px 50px 285px 0},clip,width=0.23\columnwidth]{./graphics/GeneratedHeatMaps/Completionist_Map8.png}} \caption{Heatmaps of persona behavior in map 8.} \label{fig:heatmap_map8} \end{figure} \begin{comment} \begin{figure}[t] \centering \subfloat[R]{\includegraphics[trim={285px 50px 285px 0},clip,width=0.23\columnwidth]{./graphics/GeneratedHeatMaps/Runner_Map11.png}}~ \subfloat[MK]{\includegraphics[trim={285px 50px 285px 0},clip,width=0.23\columnwidth]{./graphics/GeneratedHeatMaps/MonsterKiller_Map11.png}}~ 
\subfloat[TC]{\includegraphics[trim={285px 50px 285px 0},clip,width=0.23\columnwidth]{./graphics/GeneratedHeatMaps/TreasureCollector_Map11.png}}~ \subfloat[C]{\includegraphics[trim={285px 50px 285px 0},clip,width=0.23\columnwidth]{./graphics/GeneratedHeatMaps/Completionist_Map11.png}} \caption{Heatmaps of persona behavior in map 11.} \label{fig:heatmap_map11} \end{figure} \end{comment} Finally, it would be interesting to see if there are maps which are ``preferred'' by all personas. Using the priorities mentioned above (lowest computation time for Runner, highest treasure ratio for Treasure Collector, highest monster ratio for Monster Killer and highest interactive object ratio for Completionist), we find that the best map for the Monster Killer is map 9, in which it kills all monsters in all playthroughs, and the worst is map 8. Map 8 is also deemed the worst by the Completionist, while map 1 is deemed the best. In contrast, map 8 is deemed the best for the Runner, and map 1 the worst. For the Treasure Collector, map 5 is the best and map 10 is the worst. It is telling that often the worst map for one persona is the best for another, pointing to the fact that different priorities, combined with different behaviors to achieve those priorities, can shape how each persona assesses the maps. \section{Discussion}\label{sec:discussion} The experiments of Section \ref{sec:experiments} demonstrated that the evolved personas were able to play the game more efficiently ---requiring less computational time--- than the baseline UCB1 agents, and were more robust in completing each map by reaching the exit. Despite being efficient in completing most levels (or all levels in the case of the Runner and Completionist), the evolved personas still differentiate their playstyle and in most maps perform better than other personas with regard to their core priority. 
The exception is the Completionist, which seems to be an inferior form of the Treasure Collector; however, this persona has an important benefit in that it completes all the levels consistently while performing more interactions than the Runner. Looking at the effect that each map of MiniDungeons 2 had on each persona, we identify that different personas are sensitive to different level patterns. Such findings could influence the level design or game design of MiniDungeons 2 (e.g. creating more open areas and fewer winding corridors), or lead to a re-design of the fitness or utility functions, e.g. since we find that treasures are not favored by Completionists. Finally, in most cases results differ in terms of which map from the set is best or worst for each persona according to their priorities and playstyles. Since procedural personas are intended to be a design tool, they are inherently subjective in the sense that the utility functions should be constructed by a game or level designer interested in testing their content for the game. The experiments provided here support the persona concept as useful for fulfilling specific core priorities for a game of the scope and size of MiniDungeons 2. How the method scales to games of higher complexity is an open question; any game could in principle be tested using procedural personas, as long as it includes agent control methods that can be optimized towards a particular utility function. The specification of the utility function is a complex issue for the procedural persona method: the concept is useful from a design perspective only to the extent that game designers are capable of defining appropriate utility sources and ways of weighting them. One approach to solving this problem could be learning utility functions from demonstration: e.g. from groups of observed players or from designers playing in different styles to demonstrate what particular personas should play like. 
This could be enabled via methods such as inverse reinforcement learning, driven by evolution or other methods. Regardless, the proposed method is supported in general by the fact that the personas exhibit significantly different behaviors in the same environments, driven by simple utility functions; it is thus likely that game creators would be able to use the personas to inform their content creation process. It also suggests that personas could successfully be integrated as critics in a procedural level generation system; this was previously done for MiniDungeons, a simpler game with simpler persona implementations~\cite{liapis2015personacritics}. Another direction for future work is to validate \emph{a posteriori} the ability of the defined personas and their behavior to map to real human playtraces. This could allow players to be mapped to one of the four personas based on the similarity of persona and player gameplay traces, either on the action-by-action level or on a more macro-strategy level, as done in \cite{holmgard2015evolving}. However, the current experiments do not include human players as they test how our method can allow game/level designers to define archetypal personas \emph{a priori}, before even showing the game or level to players. The experiments have demonstrated that different behaviors can be encoded in such a way, and that the personas' behaviors (e.g. in terms of monsters killed) largely match the designers' stated intentions. 
The experiments reported in this paper show that personas are capable of showing different interaction patterns in response to game content and can help map out the playspace afforded by game levels as those are being designed. By combining evolution and MCTS, we produce a set of personas that show what different play styles might look like in MiniDungeons 2. Evaluations can be run in a short amount of time, making it a feasible method in an iterative design process. Future research will investigate how procedural personas can be used as interactive inspirational tools in the content creation process and as automated critics in procedural content generation. Future work should also focus on ways to scale the procedural persona framework to games of larger complexity and on ways in which personas can learn from demonstration instead of having their utility functions specified directly. \section*{Acknowledgment} We thank Abdallah Saffidine and Ahmed Khalifa for advice on tree search and evolving MCTS. Michael Green acknowledges financial support from the GAANN Program. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} Since the debut of Bitcoin in 2009, its underlying technology, blockchain, has shown promising application prospects and attracted significant attention from academia and industry. Being the first cryptocurrency, Bitcoin was rated as the top performing currency in 2015~\cite{2} and the best performing commodity in 2016~\cite{3}, and had more than 300K confirmed transactions daily in May 2017~\cite{4}. At the same time, blockchain technology has been applied to many fields, including medicine~\cite{ekblaw2016case,azaria2016medrec,yue2016healthcare}, economics~\cite{huckle2016internet,bylica2015probabilistic,hurich2016virtual}, Internet of things~\cite{dorri2017blockchain,zhang2016iot,sun2016blockchain}, software engineering~\cite{xu2016blockchain,nordstrom2015personal,czepluch2015use} and so on. The introduction of Turing-complete programming languages to enable users to develop smart contracts running on the blockchain marks the start of the blockchain 2.0 era. With the decentralized consensus mechanism of blockchain, smart contracts allow mutually distrusting users to complete data exchange or transactions without the need of any trusted third-party authority. Ethereum is now (May of 2017) the most widely used blockchain supporting smart contracts, hosting 317,506 smart contracts and seeing more than 75,000 transactions daily~\cite{6}. Since blockchain is one of the core technologies in the FinTech (Financial Technology) industry, users are very concerned about its security. Some security vulnerabilities and attacks have been recently reported. Luu et al. discovered that 8,833 out of 19,366 existing Ethereum contracts are vulnerable~\cite{luu2016making}. Note that smart contracts with security vulnerabilities may lead to financial losses. 
For instance, in June 2016, criminals attacked the smart contract \texttt{DAO}~\cite{7} by exploiting a recursive calling vulnerability, and stole around 60 million dollars. As another example, in March 2014, criminals exploited transaction malleability in Bitcoin to attack \texttt{MtGox}, then the largest Bitcoin trading platform. The attack caused the collapse of \texttt{MtGox}, with Bitcoin worth 450 million dollars stolen~\cite{8}. Although there are some recent studies on the security of blockchain, none of them performs a systematic examination of the risks to blockchain systems, the corresponding real attacks, and the security enhancements. The closest research work to ours is~\cite{atzeisurvey}, which focuses only on Ethereum smart contracts rather than on popular blockchain systems in general. From a security programming perspective, their work analyzes the security vulnerabilities of Ethereum smart contracts, and provides a taxonomy of common programming pitfalls that may lead to vulnerabilities~\cite{atzeisurvey}. Although a series of related attacks on smart contracts are listed in~\cite{atzeisurvey}, it lacks a discussion of security enhancements. This paper focuses on the security of blockchain from more comprehensive perspectives. The main contributions of this paper are as follows: (1). To the best of our knowledge, we conduct the \textit{first} systematic examination of security risks to popular blockchain systems. (2). We survey the real attacks on popular blockchain systems from 2009 to the present (May of 2017) and analyze the vulnerabilities exploited in these cases. (3). We summarize practical academic achievements for enhancing the security of blockchain, and suggest a few future directions in this area. The remainder of this paper is organized as follows. Section~\ref{sec:background} introduces the main technologies used in blockchain systems. 
Section~\ref{sec:risks} systematically examines the security risks to blockchain, and Section~\ref{sec:attackcases} surveys real attacks on blockchain systems. After summarizing the security enhancements to blockchain in Section~\ref{sec:enhancement}, we suggest a few future directions in Section~\ref{sec:future}. Finally, Section~\ref{sec:conclusion} concludes the paper. \section{Overview of Blockchain Technologies} \label{sec:background} This section introduces the main technologies employed in blockchain. We first present the fundamental trust mechanism (i.e., the consensus mechanism) used in blockchain, and then explain the synchronization process between nodes. After that, we introduce the two development stages of blockchain. \subsection{Consensus Mechanism} Being decentralized systems, blockchains do not need a trusted third-party authority. Instead, to guarantee the reliability and consistency of the data and transactions, blockchain adopts a decentralized consensus mechanism. In the existing blockchain systems, there are four major consensus mechanisms~\cite{zheng2016blockchain}: PoW (Proof of Work), PoS (Proof of Stake), PBFT (Practical Byzantine Fault Tolerance), and DPoS (Delegated Proof of Stake). Other consensus mechanisms, such as PoB (Proof of Bandwidth)~\cite{11}, PoET (Proof of Elapsed Time)~\cite{12}, PoA (Proof of Authority)~\cite{poa} and so on, are also used in some blockchain systems. The two most popular blockchain systems (i.e., Bitcoin and Ethereum) use the PoW mechanism. Ethereum also incorporates the PoA mechanism (i.e., the Kovan public test chain~\cite{Kovan}), and some other cryptocurrencies use the PoS mechanism, such as PeerCoin, ShadowCash and so on. \begin{figure}[ht] \centering \includegraphics[width=3in]{PoW} \caption{PoW consensus mechanism} \label{PoW} \end{figure} The PoW mechanism uses the solution of puzzles to prove the credibility of the data. 
The puzzle is usually a computationally hard but easily verifiable problem. When a node creates a block, it must resolve a PoW puzzle. After the PoW puzzle is resolved, the block will be broadcast to other nodes, so as to achieve consensus, as shown in Fig.\ref{PoW}. In different blockchain systems, the block structure may vary in detail. Typically, in Bitcoin, each block contains \texttt{PrevHash}, \texttt{nonce}, and \texttt{Tx}~\cite{luu2017smart}. In particular, \texttt{PrevHash} indicates the hash value of the last generated block, and the \texttt{Tx}s denote the transactions included in this block. The value of \texttt{nonce} is obtained by solving the PoW puzzle. A correct \texttt{nonce} should satisfy that the hash value shown in Equation~\ref{PoWequation} is less than a target value, which can be adjusted to tune the difficulty of the PoW puzzle. \begin{equation} SHA256(PrevHash\,||\,Tx1\,||\,Tx2\,||\,\ldots\,||\,nonce) < Target \label{PoWequation} \end{equation} The PoS mechanism uses the proof of ownership of cryptocurrency to prove the credibility of the data. In PoS-based blockchains, during the process of creating a block or transaction, users are required to pay a certain amount of cryptocurrency. If the block or transaction created can eventually be validated, the cryptocurrency will be returned to the original node as a bonus. Otherwise, it will be fined. The PoW mechanism requires a large amount of computation, resulting in wasted computing power. On the contrary, the PoS mechanism can greatly reduce the amount of computation, thereby increasing the throughput of the entire blockchain system. \subsection{Block Propagation and Synchronization} \begin{figure}[ht] \centering \includegraphics[width=4in]{synchronisation} \caption{Block synchronization process between nodes} \label{synchronisation} \end{figure} In the blockchain, each full node stores the information of all blocks. 
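The nonce search implied by Equation~\ref{PoWequation} can be sketched as follows in Python. The block contents and the toy target are assumptions for illustration only: real Bitcoin hashes a binary block header with double SHA-256 against a vastly smaller target.

```python
import hashlib

def solve_pow(prev_hash: str, txs: str, target: int) -> int:
    """Search for a nonce such that SHA256(PrevHash || Txs || nonce) < Target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{prev_hash}||{txs}||{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce
        nonce += 1

# Toy difficulty: require roughly the top 8 bits of the 256-bit hash to be zero
target = 1 << 248
nonce = solve_pow("0000abcd", "Tx1||Tx2", target)

# Verification is cheap: a single hash suffices, which is what makes the
# puzzle "computationally hard but easily verifiable"
block_hash = hashlib.sha256(f"0000abcd||Tx1||Tx2||{nonce}".encode()).hexdigest()
```

Lowering `target` raises the difficulty, which is how the network tunes the block creation rate.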
Being the foundation for building consensus and trust in blockchain, block propagation mechanisms can be divided into the following categories~\cite{wa14st2016security,wa14st2016ethereum,gervaissecurity}: (1). Advertisement-based propagation. This propagation mechanism originates from Bitcoin. When node \texttt{A} receives the information of a block, \texttt{A} will send an \texttt{inv} message (a message type in Bitcoin) to its connected peers. When node \texttt{B} receives the \texttt{inv} message from \texttt{A}, it will do as follows. If node \texttt{B} already has the information of this block, it will do nothing. If node \texttt{B} does not have the information, it will reply to node \texttt{A}. When node \texttt{A} receives the reply message from node \texttt{B}, node \texttt{A} will send the complete information of this block to node \texttt{B}. (2). Sendheaders propagation. This propagation mechanism is an improvement on the advertisement-based propagation mechanism. In the sendheaders propagation mechanism, node \texttt{B} will send a \texttt{sendheaders} message (a message type in Bitcoin) to node \texttt{A}. When node \texttt{A} receives the information of a block, it will send the block header information directly to node \texttt{B}. Compared with the advertisement-based propagation mechanism, node \texttt{A} does not need to send \texttt{inv} messages, and hence it speeds up the block propagation. (3). Unsolicited push propagation. In the unsolicited push mechanism, after one block is mined, the miner will directly broadcast the block to other nodes. In this propagation mechanism, there are no \texttt{inv} or \texttt{sendheaders} messages. Compared with the previous two propagation mechanisms, the unsolicited push mechanism can further improve the speed of block propagation. (4). Relay network propagation. This propagation mechanism is an improvement on the unsolicited push mechanism. 
In this mechanism, all the miners share a transaction pool. Each transaction is replaced by a global ID, which will greatly reduce the broadcasted block size, thereby further reducing the network load and improving the propagation speed. (5). Push/Advertisement hybrid propagation. This hybrid propagation mechanism is used in Ethereum. We assume that node \texttt{A} has $n$ connected peers. In this mechanism, node \texttt{A} will push the block to $\sqrt{n}$ peers directly. For the other $n-\sqrt{n}$ connected peers, node \texttt{A} will advertise the block hash to them. Different blockchain systems may use diverse block synchronization processes. In Ethereum, node \texttt{A} can request block synchronization from a node \texttt{B} with higher total difficulty. The specific process is as follows (shown in Fig.\ref{synchronisation})~\cite{wa14st2016security,wa14st2016ethereum,gervaissecurity}: (1). Node \texttt{A} requests the header of the latest block from node \texttt{B}. This action is implemented by sending a \texttt{GetBlockHeaders} message. Node \texttt{B} will reply to node \texttt{A} with a \texttt{BlockHeaders} message that contains the block header requested by \texttt{A}. (2). Node \texttt{A} requests \texttt{MaxHeaderFetch} block headers from node \texttt{B} to find a common ancestor. The default value of \texttt{MaxHeaderFetch} is 256, but the number of block headers sent by node \texttt{B} to \texttt{A} can be less than this value. (3). If \texttt{A} has not found a common ancestor after the above two steps, node \texttt{A} will continue to send \texttt{GetBlockHeaders} messages, requesting one block header each time. Moreover, \texttt{A} performs a binary search to find the common ancestor in its local blockchain. (4). After node \texttt{A} discovers a common ancestor, \texttt{A} will request block synchronization from the common ancestor. 
In this process, \texttt{A} requests \texttt{MaxHeaderFetch} blocks per request, but the actual number of block headers sent from \texttt{B} to \texttt{A} can be less than this value. \begin{figure}[ht] \centering \vspace*{-1ex} \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=1.4in]{Wallet1} \caption{Query Bitcoin transaction history} \label{Wallet1} \end{minipage}% \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=1.4in]{Wallet2} \caption{Pay with Bitcoin} \label{Wallet2} \end{minipage} \begin{minipage}[t]{0.3\linewidth} \centering \includegraphics[width=1.4in]{Wallet3} \caption{Collect payments with Bitcoin} \label{Wallet3} \end{minipage} \end{figure} \subsection{Technology Development} From the birth of the first blockchain system, Bitcoin, blockchain technology has experienced two stages of development: blockchain 1.0 and blockchain 2.0. In the blockchain 1.0 stage, blockchain technology is mainly used for cryptocurrency. In addition to Bitcoin, there are many other types of cryptocurrencies, such as Litecoin, Dogecoin and so on. There are currently over 700 types of cryptocurrencies, and their total market capitalization is over 26 billion US\$~\cite{19}. The technology stack of cryptocurrency can be divided into two layers: the underlying decentralized ledger layer and the protocol layer~\cite{swan2015blockchain}. Cryptocurrency clients, such as Bitcoin Wallet~\cite{BitcoinWallet}, run in the protocol layer to conduct transactions, as shown in Fig.\ref{Wallet1} to Fig.\ref{Wallet3}. Compared with traditional currency, cryptocurrency has the following characteristics and advantages~\cite{21}: (1). Irreversible and traceable. Transfer and payment operations are irreversible using cryptocurrency. Once completed, they cannot be withdrawn. In addition, all user behaviors are traceable, and these behaviors are permanently saved in the blockchain. (2). Decentralized and anonymous. 
There is no third-party organization involved in the entire structure of cryptocurrency, nor does it have central management like banks. In addition, all user behaviors are anonymous. Hence, from the transaction information alone, we cannot obtain the user's real identity. (3). Secure and permissionless. The security of the cryptocurrency is ensured by public key cryptography and the blockchain consensus mechanism, which are hard for criminals to break. Moreover, there is no need to apply for any authority or permission to use cryptocurrency. Users can simply use the cryptocurrency through the relevant clients. (4). Fast and global. Transactions can be completed in only a few minutes using cryptocurrency. Since cryptocurrencies are mostly based on public chains, anyone in the world can use them. Therefore, the user's geographical location has little impact on the transaction speed. \begin{figure}[ht] \centering \includegraphics[width=4.6in]{dapp} \caption{The process of smart contract's development, deployment, and interaction} \label{dapp} \end{figure} \begin{table}[ht!] \centering \vspace*{-1ex} \scriptsize \caption{Statistics of blockchain systems supporting smart contracts (until May of 2017)} \vspace{1ex} \label{smart} \begin{tabular}{|c|c|c|c|} \hline \textbf{System}&\textbf{Contract language} & \textbf{Total TXs}& \textbf{Market Capitalization /M US\$}\\ \hline \textsc{Ethereum}&EVM bytecode&23,102,544&8,468 \\ \hline \textsc{RSK}&Solidity&Unknown&N/A \\ \hline \textsc{Counterparty}&EVM bytecode&12,170,386&15\\ \hline \textsc{Stellar}&Transaction chains&Unknown&139 \\ \hline \textsc{Monax}&EVM bytecode&Unknown&N/A \\ \hline \textsc{Lisk}&JavaScript&Unknown&71 \\ \hline \end{tabular} \vspace{-2ex} \end{table} In the blockchain 2.0 stage, smart contracts are introduced so that developers can create various applications through smart contracts. A smart contract can be considered as a lightweight dAPP (decentralized application). 
Ethereum is a typical blockchain 2.0 system. Each Ethereum node runs an EVM (Ethereum Virtual Machine) that executes smart contracts. Besides Ethereum, several other blockchain systems also support smart contracts; their information is listed in Table~\ref{smart}~\cite{bartoletti2017empirical}. In Ethereum, developers can use a variety of programming languages to develop smart contracts, such as Solidity (the recommended language), Serpent, and LLL. Since these languages are Turing-complete, smart contracts can implement rich functionality. Fig.\ref{dapp} shows the process of smart contracts' development, deployment, and interaction. Each deployed smart contract corresponds to a unique address, through which users can interact with it via transactions issued from different clients (e.g., Parity, Geth, etc.). Since smart contracts can call each other through messages, developers can build more feature-rich dAPPs on top of available smart contracts. Compared with a traditional application, a dAPP has the following characteristics and advantages~\cite{22}: (1). Autonomous. dAPPs are developed on the basis of smart contracts, which are deployed and run on the blockchain. Hence, dAPPs can run autonomously without any third party's assistance or participation. (2). Stable. The bytecodes of smart contracts are stored in the state tree of the blockchain. Each full node saves the information of all blocks and the stateDB, including the bytecodes of smart contracts. Hence, the failure of some nodes will not affect a dAPP's operation; this mechanism ensures that dAPPs run stably. (3). Traceable. Since the invocation information of smart contracts is stored in the blockchain as transactions, all behaviors of dAPPs are recorded and traceable. (4). Secure. Public-key cryptography and the blockchain consensus mechanism ensure the security and correct operation of smart contracts, so as to maximize the security of dAPPs.
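The address-based interaction described above can be illustrated with a minimal, purely illustrative model (plain Python, not actual EVM code; the \texttt{Ledger}, \texttt{Greeter}, and \texttt{Proxy} names are hypothetical): contracts are registered under unique addresses in a toy ledger, and one contract reaches another through a message carrying the target address and method name.

```python
# Toy model of address-based contract interaction; all names are hypothetical
# and this is a sketch, not how the EVM actually dispatches calls.

class Ledger:
    """Registry mapping unique addresses to deployed contract objects."""
    def __init__(self):
        self.contracts = {}
        self.next_addr = 0

    def deploy(self, contract):
        addr = f"0x{self.next_addr:040x}"   # each deployment gets a unique address
        self.next_addr += 1
        self.contracts[addr] = contract
        contract.ledger = self              # let the contract send messages back
        return addr

    def call(self, addr, method, *args):
        """Interact with a deployed contract through its address."""
        return getattr(self.contracts[addr], method)(*args)

class Greeter:
    def greet(self):
        return "hello"

class Proxy:
    """A contract that calls another contract via a message (address + method)."""
    def __init__(self, target_addr):
        self.target_addr = target_addr
    def forward(self):
        return self.ledger.call(self.target_addr, "greet")

ledger = Ledger()
greeter_addr = ledger.deploy(Greeter())
proxy_addr = ledger.deploy(Proxy(greeter_addr))
print(ledger.call(proxy_addr, "forward"))   # prints "hello"
```

The point of the sketch is only the dispatch pattern: users and contracts never hold object references, only addresses, and every interaction is a message routed through the ledger.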
\section{Risks to Blockchain} \label{sec:risks} \begin{table}[ht!] \centering \vspace*{-1ex} \scriptsize \caption{Taxonomy of blockchain's risks} \vspace{1ex} \label{tab_risks} \begin{tabular}{|c|c|c|c|} \hline \textbf{Number}&\textbf{Risk}&\textbf{Cause}&\textbf{Range of Influence}\\ \hline \textsc 3.1.1&51\% vulnerability&Consensus mechanism&\multirow{5}{*}{Blockchain 1.0, 2.0}\\ \cline{1-3} \textsc 3.1.2&Private key security&Public-key encryption scheme&\\ \cline{1-3} \textsc 3.1.3&Criminal activity&Cryptocurrency application&\\ \cline{1-3} \textsc 3.1.4&Double spending&Transaction verification mechanism&\\ \cline{1-3} \textsc 3.1.5&Transaction privacy leakage&Transaction design flaw&\\ \hline \textsc 3.2.1&Criminal smart contracts&Smart contract application&\multirow{4}{*}{Blockchain 2.0}\\ \cline{1-3} \textsc 3.2.2&Vulnerabilities in smart contract&Program design flaw&\\ \cline{1-3} \textsc 3.2.3&Under-optimized smart contract&Program writing flaw&\\ \cline{1-3} \textsc 3.2.4&Under-priced operations&EVM design flaw&\\ \hline \end{tabular} \vspace{-2ex} \end{table} We divide the common blockchain risks into nine categories, as shown in Table~\ref{tab_risks}, and detail the causes and possible consequences of each risk. The risks described in Section 3.1 exist in both blockchain 1.0 and 2.0, and their causes are mostly related to the blockchain operation mechanism. By contrast, the risks introduced in Section 3.2 are unique to blockchain 2.0 and usually result from the development, deployment, and execution of smart contracts. \subsection{Common Risks to Blockchain 1.0 and 2.0} \subsubsection{51\% Vulnerability} \label{sec:51} The blockchain relies on the distributed consensus mechanism to establish mutual trust. However, the consensus mechanism itself has a 51\% vulnerability, which attackers can exploit to control the entire blockchain.
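The significance of the 50\% threshold can be seen from the attacker catch-up probability derived in the Bitcoin whitepaper's gambler's-ruin analysis: an attacker with hashing-power fraction $q < 0.5$ who is $z$ blocks behind catches up with probability $(q/(1-q))^{z}$, while with $q \geq 0.5$ success is certain. The sketch below (illustrative Python; the parameter values are arbitrary examples, not measurements) computes this.

```python
# Catch-up probability from the gambler's-ruin analysis in the Bitcoin
# whitepaper: the chance an attacker with hashing-power fraction q ever
# overtakes the honest chain from z blocks behind.

def catch_up_probability(q: float, z: int) -> float:
    p = 1.0 - q                      # honest miners' fraction
    if q >= p:                       # attacker holds >= 50%: success is certain
        return 1.0
    return (q / p) ** z              # otherwise decays exponentially in z

# With 30% of hashing power, overtaking 6 confirmations is very unlikely;
# at 51%, it is guaranteed -- which is why the 51% vulnerability is critical.
print(catch_up_probability(0.30, 6))
print(catch_up_probability(0.51, 6))   # 1.0
```

This is why confirmation depth protects against a minority attacker but is useless once a single miner or pool crosses the 50\% line.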
More precisely, in PoW-based blockchains, if a single miner's hashing power accounts for more than 50\% of the total hashing power of the entire blockchain, the 51\% attack may be launched. Hence, the concentration of mining power in a few mining pools raises fears of an inadvertent situation in which a single pool controls more than half of all computing power. In Jan. 2014, after the mining pool \texttt{ghash.io} reached 42\% of the total Bitcoin computing power, a number of miners voluntarily dropped out of the pool, and \texttt{ghash.io} issued a press statement to reassure the Bitcoin community that it would avoid reaching the 51\% threshold~\cite{18}. In PoS-based blockchains, the 51\% attack may also occur if a single miner owns more than 50\% of all coins in the blockchain. By launching the 51\% attack, an attacker can arbitrarily manipulate and modify blockchain information. Specifically, an attacker can exploit this vulnerability to conduct the following attacks~\cite{9}: (1). Reverse transactions and initiate a double spending attack (the same coins are spent multiple times). (2). Exclude and modify the ordering of transactions. (3). Hamper the normal mining operations of other miners. (4). Impede the confirmation of normal transactions. \subsubsection{Private Key Security} When using blockchain, the user's private key is regarded as the identity and security credential, and it is generated and maintained by the user instead of third-party agencies. For example, when creating a cold storage wallet in the Bitcoin blockchain, the user must import his/her private key. Hartwig et al.~\cite{mayerecdsa} discover a vulnerability in the ECDSA (Elliptic Curve Digital Signature Algorithm) scheme, through which an attacker can recover the user's private key, because the scheme does not generate enough randomness during the signature process. Once the user's private key is lost, it cannot be recovered.
If the private key is stolen by criminals, the user's blockchain account faces the risk of being tampered with by others. Since the blockchain does not depend on any centralized third-party trusted institutions, if the user's private key is stolen, it is difficult to track the criminal's behaviors and recover the modified blockchain information. \subsubsection{Criminal Activity} \label{sec:criminal} \begin{table}[ht!] \centering \vspace*{-1ex} \scriptsize \caption{Top 10 categories of items available in \texttt{Silk Road}} \vspace{1ex} \label{top20} \begin{tabular}{|c|c|c|c|} \hline \textbf{Number}&\textbf{Category}&\textbf{Items} & \textbf{Percentage}\\ \hline \textsc 1&Weed&3338&13.7\% \\ \hline \textsc 2&Drugs&2194&9.0\%\\ \hline \textsc 3&Prescription&1784&7.3\% \\ \hline \textsc 4&Benzos&1193&4.9\% \\ \hline \textsc 5&Books&955&3.9\% \\ \hline \textsc 6&Cannabis&877&3.6\% \\ \hline \textsc 7&Hash&820&3.4\% \\ \hline \textsc 8&Cocaine&630&2.6\% \\ \hline \textsc 9&Pills&473&1.9\% \\ \hline \textsc 10&Blotter (LSD)&440&1.8\% \\ \hline \end{tabular} \vspace{-2ex} \end{table} Bitcoin users can have multiple Bitcoin addresses, and an address has no relationship with the user's real-life identity. Therefore, Bitcoin has been used in illegal activities. Through some third-party trading platforms that support Bitcoin, users can buy or sell any product. Since this process is anonymous, it is hard to track user behaviors, let alone subject them to legal sanctions. Some frequent criminal activities with Bitcoin include: (1). Ransomware. Criminals often use ransomware for money extortion, employing Bitcoin as the trading currency. In July 2014, a ransomware named \texttt{CTB-Locker}~\cite{27} spread around the world by disguising itself as mail attachments. If the user clicks the attachment, the ransomware runs in the background of the system and encrypts about 114 types of files~\cite{ctblocker}.
The victim has to pay the attacker a certain amount of Bitcoin within 96 hours; otherwise, the encrypted files will not be restored. In May 2017, another ransomware, \texttt{WannaCry} (also named \texttt{WannaCrypt})~\cite{28}, infected about 230,000 victims across 150 countries in two days. It exploited a vulnerability in the Windows system to spread, and encrypted users' files to demand a Bitcoin ransom. (2). Underground market. Bitcoin is often used as the currency in underground markets. For example, \texttt{Silk Road} is an anonymous, international online marketplace that operates as a Tor hidden service and uses Bitcoin as its exchange currency~\cite{christin2013traveling}. The top 10 categories of items available in \texttt{Silk Road} are listed in Table~\ref{top20}~\cite{christin2013traveling}. Most of the items sold in \texttt{Silk Road} are drugs or other controlled items in the real world. Since international transactions account for a significant proportion of \texttt{Silk Road}'s trade, Bitcoin makes transactions in the underground market more convenient, which harms social security. (3). Money laundering. Since Bitcoin features anonymity and virtual network payment and has been adopted in many countries, it carries the lowest risk among currencies of being used for money laundering~\cite{treasury2015uk}. Cody et al. propose \texttt{Dark Wallet}~\cite{29}, a Bitcoin application that can make Bitcoin transactions completely stealthy and private. \texttt{Dark Wallet} can encrypt transaction information and mix the user's valid coins with chaff coins, and hence it can make money laundering much easier.
\subsubsection{Double Spending} \begin{figure}[ht] \centering \includegraphics[width=4in]{double} \caption{Double spending attack model against fast payment in Bitcoin} \label{double} \end{figure} Although the consensus mechanism of blockchain can validate transactions, it is still impossible to avoid double spending~\cite{karame2015misbehavior}. Double spending means that a consumer spends the same cryptocurrency multiple times in different transactions. For example, an attacker could leverage a race attack for double spending. This kind of attack is relatively easy to implement in PoW-based blockchains, because the attacker can exploit the intermediate time between two transactions' initiation and confirmation to quickly launch an attack. Before the second transaction is mined as invalid, the attacker has already obtained the first transaction's output, resulting in double spending. Ghassan et al.~\cite{karame2012double} conduct an analysis of double spending against fast payment in Bitcoin, and propose an attack model, as shown in Fig.\ref{double}. Assuming that the attacker knows the vendor's address before the attack, to perform double spending, the attacker sends two transactions, $TX_{v}$ and $TX_{a}$, choosing the same BTCs (the cryptocurrency in Bitcoin) as inputs for both. $TX_{v}$'s recipient address is set to the targeted vendor's address, and $TX_{a}$'s recipient address is set to a colluding address controlled by the attacker. Double spending succeeds if the following three conditions are met: (1) $TX_{v}$ is added to the wallet of the targeted vendor; (2) $TX_{a}$ is mined as valid into the blockchain; (3) the attacker gets $TX_{v}$'s output before the vendor detects the misbehavior. If the attack is successful, $TX_{v}$ will eventually be verified as an invalid transaction, and the BTCs are actually spent by $TX_{a}$. The attacker has received $TX_{v}$'s output, which is the vendor's normal service.
Since $TX_{a}$'s recipient address is controlled by the attacker, these BTCs are still owned by the attacker. In this double spending model, the attacker enjoys the service without paying any BTC. \subsubsection{Transaction Privacy Leakage} \label{sec:privacy} Since users' behaviors in the blockchain are traceable, blockchain systems take measures to protect the transaction privacy of users. Bitcoin and Zcash use one-time accounts to store the received cryptocurrency, and the user needs to assign a private key to each transaction. In this way, an attacker cannot infer whether the cryptocurrency in different transactions is received by the same user. In Monero, users can include some chaff coins (called ``mixins'') when they initiate a transaction, so that an attacker cannot infer the linkage of the actual coins spent by the transaction. \begin{table}[ht!] \centering \vspace*{-1ex} \scriptsize \caption{Linkability analysis of Monero transaction inputs with mixins} \vspace{1ex} \label{leak} \begin{tabular}{|c|c|c|c|} \hline \textbf{}&\textbf{Not deducible} & \textbf{Deducible}& \textbf{In total}\\ \hline \textsc Using newest TXO&15.07\%&4.60\%&19.67\% \\ \hline \textsc Not using newest TXO&22.61\%&57.72\%&80.33\%\\ \hline \textsc In total&37.68\%&62.32\%&100\% \\ \hline \end{tabular} \vspace{-2ex} \end{table} Unfortunately, the privacy protection measures in blockchain are not very robust. Andrew et al.~\cite{miller2017empirical} empirically evaluate two linkability weaknesses in Monero's mixin sampling strategy, and discover that 66.09\% of all transactions do not contain any mixins. A 0-mixin transaction leads to the privacy leakage of its sender, and since users may use the outputs of 0-mixin transactions as mixins, these mixins become deducible. Moreover, they study the sampling method of mixins and find that the selection of mixins is not truly random: newer TXOs (transaction outputs) tend to be used more frequently.
They further discover that 62.32\% of transaction inputs with mixins are deducible, as shown in Table~\ref{leak}~\cite{miller2017empirical}. By exploiting these weaknesses in Monero, they can infer the actual transaction inputs with 80\% accuracy. \subsection{Specific Risks to Blockchain 2.0} \subsubsection{Criminal Smart Contracts} \begin{figure}[ht] \centering \includegraphics[width=4in]{PwdTheft} \caption{Execution procedure of \texttt{PwdTheft} using SGX-enabled platform} \label{PwdTheft} \end{figure} Criminals can leverage smart contracts for a variety of malicious activities, which may pose a threat to our daily life. CSCs (Criminal Smart Contracts) can facilitate the leakage of confidential information, theft of cryptographic keys, and various real-world crimes (e.g., murder, arson, terrorism, etc.)~\cite{juels2015ring}. Juels et al. propose a password-theft CSC named \texttt{PwdTheft}, whose process is shown in Fig.\ref{PwdTheft}~\cite{juels2015ring}. \texttt{PwdTheft} enables a fair exchange between contractor \texttt{C} and perpetrator \texttt{P}: \texttt{C} pays a reward to \texttt{P} if and only if \texttt{P} gives a valid password to \texttt{C}. The entire transaction process can be done without any trusted third-party agencies involved. Since a smart contract deployed in the blockchain cannot access the network directly~\cite{zhang2016town}, in the actual work process of \texttt{PwdTheft}, it is combined with trusted hardware technology, such as Intel SGX (Software Guard eXtensions), to prove the validity of the password through HTTPS (Hypertext Transfer Protocol Secure). SGX creates a trusted execution environment named \texttt{enclave}, which protects the application from being attacked by others: no privileged or unprivileged software can access the runtime environment of the \texttt{enclave}. Furthermore, SGX can produce a \texttt{quote}, a digitally signed attestation.
A \texttt{quote} contains the hash value of the application running in the \texttt{enclave} environment and can attest to the relevant data during the application's runtime. The whole password exchange process is divided into three steps: (1). \texttt{PwdTheft} provides $(pk_{C}, A)$, where $pk_{C}$ is the public key of \texttt{C} and \texttt{A} is the target account for stealing. (2). The application that runs in SGX, using the password \texttt{PW} provided by \texttt{P}, logs on to the server account \texttt{A} by establishing an HTTPS connection. (3). If the preceding steps are successful, the data \texttt{ct}, $\sigma$, and $\alpha$ are transmitted to \texttt{PwdTheft}, where \texttt{ct} = $enc_{pk_{C}}[PW]$, $\sigma$ = $Sig_{sk_{app}}[ct]$, $sk_{app}$ is the signature private key of the application, and $\alpha$ is a \texttt{quote} produced on \texttt{P}'s SGX-enabled host. After \texttt{PwdTheft} receives \texttt{ct}, $\sigma$, and $\alpha$, \texttt{C} can decrypt and verify the data, and then decide whether the reward should be paid to \texttt{P}. In this process, in order to prevent \texttt{P} from maliciously changing the password after the data transmission to \texttt{PwdTheft}, a timestamp can be added to the data. In addition, \texttt{PwdTheft} can be easily extended for conducting other malicious activities. For example, criminals can leverage CSCs to trade 0-day vulnerabilities, which are critical cyber-weapons~\cite{juels2015ring}. \subsubsection{Vulnerabilities in Smart Contract} \label{sec:vul} \begin{table}[ht!]
\centering \vspace*{-1ex} \scriptsize \caption{Taxonomy of vulnerabilities in smart contract} \vspace{1ex} \label{vul} \begin{tabular}{|c|c|c|c|} \hline \textbf{Number}&\textbf{Vulnerability} & \textbf{Cause}& \textbf{Level}\\ \hline \textsc 1&Call to the unknown&The called function does not exist&\multirow{6}{*}{Contract source code} \\ \cline{1-3} \textsc 2&Out-of-gas send&Fallback of the callee is executed& \\ \cline{1-3} \textsc 3&Exception disorder&Irregularity in exception handling& \\ \cline{1-3} \textsc 4&Type casts&Type-check error in contract execution& \\ \cline{1-3} \textsc 5&Reentrancy vulnerability&Function is re-entered before termination& \\ \cline{1-3} \textsc 6&Field disclosure&Private value is published by the miner& \\ \hline \textsc 7&Immutable bug&Alter a contract after deployment&\multirow{3}{*}{EVM bytecode} \\ \cline{1-3} \textsc 8&Ether lost&Send Ether to an orphan address& \\ \cline{1-3} \textsc 9&Stack overflow&The number of values in stack exceeds 1024& \\ \hline \textsc 10&Unpredictable state&State of the contract is changed before invoking&\multirow{3}{*}{Blockchain mechanism} \\ \cline{1-3} \textsc 11&Randomness bug&Seed is biased by malicious miner& \\ \cline{1-3} \textsc 12&Timestamp dependence&Timestamp of block is changed by malicious miner& \\ \hline \end{tabular} \vspace{-2ex} \end{table} As programs running on the blockchain, smart contracts may have security vulnerabilities caused by program defects. Nicola et al.~\cite{atzeisurvey} conduct a systematic investigation of 12 types of vulnerabilities in smart contracts, as shown in Table~\ref{vul}. Loi et al.~\cite{luu2016making} propose a symbolic execution tool called \textsc{Oyente} to find 4 kinds of potential security bugs, and discover that 8,833 out of 19,366 Ethereum smart contracts are vulnerable. The details of these 4 bugs are as follows: (1). Transaction-ordering dependence.
A valid transaction changes the state of the Ethereum blockchain from $\sigma$ to $\sigma'$: $\sigma \overset{T}{\rightarrow} \sigma'$. In every epoch, each miner proposes its own block to update the blockchain. Since a block may contain multiple transactions, the blockchain state $\sigma$ may change multiple times within an epoch. When a new block contains two transactions $T_{i}$ and $T_{j}$ that invoke the same smart contract, this vulnerability may be triggered. Because the execution of the smart contract depends on the state $\sigma$, the execution order of $T_{i}$ and $T_{j}$ affects the ultimate state, and this order depends entirely on the miners. In this case, TOD (Transaction-Ordering Dependent) contracts are vulnerable. (2). Timestamp dependence. In the blockchain, every block has a \texttt{timestamp}, which is set by the miner according to its local system time. Some smart contracts' trigger conditions depend on the \texttt{timestamp}; if an attacker can modify it, timestamp-dependent contracts are vulnerable. (3). Mishandled exceptions. This category of vulnerability may occur when smart contracts call each other. When contract \texttt{A} calls contract \texttt{B}, if \texttt{B} runs abnormally, \texttt{B} stops running and returns \texttt{false}. In some invocations, contract \texttt{A} must explicitly check the return value to verify whether the call has been executed properly; if \texttt{A} does not correctly check the exception information, it may be vulnerable. (4). Reentrancy vulnerability. During the invocation of a smart contract, the actual state of the contract account is changed only after the call is completed. An attacker can use the intermediate state to conduct repeated calls to the smart contract. If the invoked contract involves an Ether transaction, this may result in illegal Ether stealing. \subsubsection{Under-Optimized Smart Contract} \begin{table}[ht!]
\centering \vspace*{-1ex} \scriptsize \caption{Taxonomy of under-optimized patterns in smart contract} \vspace{1ex} \label{under} \begin{tabular}{|c|c|c|} \hline \textbf{Number}&\textbf{Under-optimized pattern} & \textbf{Category}\\ \hline \textsc 1&Dead code&\multirow{2}{*}{Useless-code related patterns} \\ \cline{1-2} \textsc 2&Opaque predicate& \\ \hline \textsc 3&Expensive operations&\multirow{5}{*}{Loop-related patterns} \\ \cline{1-2} \textsc 4&Constant outcome& \\ \cline{1-2} \textsc 5&Loop fusion& \\ \cline{1-2} \textsc 6&Repeated computations& \\ \cline{1-2} \textsc 7&Comparison with unilateral outcome& \\ \hline \end{tabular} \vspace{-2ex} \end{table} When a user interacts with a smart contract deployed in Ethereum, a certain amount of gas is charged. Gas can be exchanged with Ether, the cryptocurrency in Ethereum. Unfortunately, some smart contracts' development and deployment are not adequately optimized. Chen et al.~\cite{chen2017under} identify 7 gas-costly patterns and group them into 2 categories (as shown in Table~\ref{under}): useless-code related patterns and loop-related patterns. They propose a tool named \textsc{Gasper}, which can automatically discover 3 gas-costly patterns in smart contracts: dead code, opaque predicates, and expensive operations in a loop. Leveraging \textsc{Gasper}, they find that more than 80\% of the smart contracts deployed in Ethereum (4,240 real smart contracts) exhibit at least one of these 3 patterns. The details are as follows: (1). Dead code. Some operations in a smart contract will never be executed, but they are still deployed into the blockchain. Since the gas consumed in the deployment process is related to the bytecode size, dead code causes additional gas consumption. (2). Opaque predicate. The execution results of some statements in a smart contract are always the same and do not affect other statements or the smart contract.
The presence of an opaque predicate causes the EVM to execute useless operations, thereby consuming additional gas. (3). Expensive operations in a loop. This refers to expensive operations within a loop that could be moved outside the loop to save gas. \subsubsection{Under-Priced Operations} As mentioned earlier, each operation is assigned a specific gas value in Ethereum, which can be queried in the yellow paper~\cite{26}. Ethereum sets the gas value based on execution time, bandwidth, memory occupancy, and other parameters. In general, the gas value is proportional to the computing resources consumed by the operation. However, it is difficult to accurately measure the computing resources consumed by an individual operation, and therefore some gas values are not set properly. For example, some IO-heavy operations' gas values are set too low, so these operations can be executed in large quantities within one transaction. In this way, an attacker can launch a DoS (Denial of Service) attack on Ethereum. \begin{table}[ht!] \centering \vspace*{-1ex} \scriptsize \caption{Gas table modification in EIP150} \vspace{1ex} \label{EIP150} \begin{tabular}{|c|c|c|c|} \hline \textbf{Number}&\textbf{Operation}&\textbf{Old value} & \textbf{EIP150 value}\\ \hline \textsc 1&EXTCODESIZE&20&700 \\ \hline \textsc 2&EXTCODECOPY&20&700\\ \hline \textsc 3&BALANCE&20&400 \\ \hline \textsc 4&SLOAD&50&200 \\ \hline \textsc 5&CALL&40&700 \\ \hline \textsc 6&SUICIDE (does not create account)&0&5,000 \\ \hline \textsc 7&SUICIDE (creates an account)&0&25,000 \\ \hline \end{tabular} \vspace{-2ex} \end{table} In fact, attackers have exploited the under-priced operation \texttt{EXTCODESIZE} to attack Ethereum~\cite{23}. When \texttt{EXTCODESIZE} is executed, the node needs to read state information from the hard disk. Since the gas value of \texttt{EXTCODESIZE} is only 20, an attacker can call it more than 50,000 times in one transaction.
This causes nodes to consume a lot of computing resources, and block synchronization becomes significantly slower than in the normal situation. As another example, some attackers exploited the under-priced operation \texttt{SUICIDE} to launch DoS attacks~\cite{24}. They exploited \texttt{SUICIDE} to create about 19 million empty accounts, which need to be stored in the state tree. This attack caused a waste of hard disk resources, and at the same time node information synchronization and transaction processing speed decreased significantly. In order to solve the security problem caused by under-priced operations, the gas values of 7 IO-heavy operations were modified in EIP (Ethereum Improvement Proposal) 150~\cite{25}, as shown in Table~\ref{EIP150}. Note that EIP150 has already been implemented in the Ethereum public chain by a hard fork, and the new gas table parameters are used after block No. 2,463,000. \section{Attack Cases} \label{sec:attackcases} In this section, we survey real attacks on blockchain systems and analyze the vulnerabilities exploited in these attacks. \subsection{Selfish Mining Attack} The selfish mining attack is conducted by attackers (i.e., selfish miners) for the purpose of obtaining undue rewards or wasting the computing power of honest miners~\cite{SolatP16}. The attacker holds discovered blocks privately and then attempts to fork a private chain~\cite{EyalS14}. Afterwards, the selfish miners mine on this private chain and try to maintain a longer private branch than the public branch, because they privately hold more newly discovered blocks. In the meanwhile, honest miners continue mining on the public chain. New blocks mined by the attacker are revealed when the public branch approaches the length of the private branch, so that the honest miners end up wasting computing power and gaining no reward, because the selfish miners publish their new blocks just before the honest miners.
As a result, the selfish miners gain a competitive advantage, and honest miners are incentivized to join the branch maintained by the selfish miners. Through a further consolidation of mining power into the attacker's favor, this attack undermines the decentralized nature of blockchain. Ittay et al.~\cite{EyalS14} propose an attack strategy named \textsc{Selfish-Mine}, which can force the honest miners to perform wasted computations on the stale public branch. In the initial circumstance of \textsc{Selfish-Mine}, the public chain and the private chain have the same length. \textsc{Selfish-Mine} involves the following three scenarios: (1). The public chain is longer than the private chain. Since the computing power of the selfish miners may be less than that of the honest miners, the selfish miners update the private chain according to the public chain, and in this scenario they cannot gain any reward. (2). Selfish miners and honest miners find the first new block almost simultaneously. In this scenario, the selfish miners publish the newly discovered block, and there are two concurrent forks of the same length. Honest miners mine on either of the two branches, while the selfish miners continue to mine on the private chain. If the selfish miners find the second new block first, they publish this block immediately; at this point, they gain two blocks' rewards at the same time, because the private chain is longer than the public chain and becomes the ultimate valid branch. If the honest miners find the second new block first and this block is written to the private branch, the selfish miners gain the first new block's reward, and the honest miners gain the second new block's reward. Otherwise, if this block is written to the public branch, the honest miners gain both new blocks' rewards, and the selfish miners do not gain any reward. (3).
After the selfish miners find the first new block, they also find the second new block. In this scenario, the selfish miners hold these two new blocks privately and continue to mine new blocks on the private chain. When the first new block is found by the honest miners, the selfish miners publish their own first new block; when the honest miners find the second new block, the selfish miners immediately publish their own second new block. The selfish miners follow this response in turn, until the private chain is only 1 block longer than the public chain, after which the selfish miners publish their last new block before the honest miners find it. At this point, the private chain is considered valid, and consequently the selfish miners gain the rewards of all new blocks. \subsection{DAO Attack} \begin{table}[ht!] \centering \vspace*{-1ex} \scriptsize \caption{Some other attacks that exploit smart contracts' vulnerabilities} \vspace{1ex} \label{vulcase} \begin{tabular}{|c|c|c|} \hline \textbf{Number}&\textbf{Attack case} & \textbf{Related vulnerabilities}\\ \hline \textsc 1&King of the Ether throne&\tabincell{c}{Out-of-gas send\\Exception disorder} \\ \hline \textsc 2&Multi-player games&Field disclosure \\ \hline \textsc 3&Rubixi attack&Immutable bug \\ \hline \textsc 4&GovernMental attack&\tabincell{c}{Immutable bug\\Stack overflow\\Unpredictable state\\Timestamp dependence} \\ \hline \textsc 5&Dynamic libraries attack&Unpredictable state \\ \hline \end{tabular} \vspace{-2ex} \end{table} The \texttt{DAO} is a smart contract deployed in Ethereum on 28 May 2016, which implements a crowd-funding platform. The \texttt{DAO} contract was attacked only 20 days after it had been deployed. Before the attack happened, \texttt{DAO} had already raised 150 million US\$, the biggest crowdfunding ever, and the attacker stole about 60 million US\$ by exploiting the reentrancy vulnerability.
Firstly, the attacker publishes a malicious smart contract, whose callback function includes a \texttt{call} to the \texttt{withdraw()} function of \texttt{DAO}. \texttt{withdraw()} sends Ether to the caller, also in the form of a \texttt{call}, which therefore invokes the callback function of the malicious smart contract again. In this way, the attacker is able to steal all the Ether from \texttt{DAO}. There are some other cases that exploit smart contracts' vulnerabilities (described in Section \ref{sec:vul}), which are listed in Table~\ref{vulcase}~\cite{atzeisurvey}. \subsection{BGP Hijacking Attack} BGP (Border Gateway Protocol) is the de-facto routing protocol that regulates how IP packets are forwarded to their destination. To intercept the network traffic of blockchain, attackers either leverage or manipulate BGP routing. BGP hijacking typically requires the control of network operators, which could potentially be exploited to delay network messages. Maria et al.~\cite{ApostolakiZV16} comprehensively analyze the impact of routing attacks on Bitcoin, including both node-level and network-level attacks, and show that the number of Internet prefixes that can be successfully hijacked depends on the distribution of mining power. Because of the high centralization of some Bitcoin mining pools, a BGP hijacking attack against them would have a significant effect: the attackers can effectively split the Bitcoin network or delay the speed of block propagation. Attackers have conducted BGP hijacking to intercept Bitcoin miners' connections to a mining pool server, as analyzed by Dell SecureWorks in 2014~\cite{BGPHijacking}. By rerouting traffic to a mining pool controlled by the attacker, it was possible to steal cryptocurrency from the victims; this attack collected an estimated 83,000 US\$ of cryptocurrency over a two-month period.
Since the BGP security extensions are not widely deployed, network operators have to rely on monitoring systems that report rogue announcements, such as BGPMon~\cite{4804446}. However, even if an attack is detected, resolving a hijacking can still take hours, as it is a human-driven process consisting of altering configurations or disconnecting the attacker. For example, it once took YouTube about three hours to resolve a hijacking of its prefixes by a Pakistani ISP (Internet Service Provider)~\cite{Youtube}. \subsection{Eclipse Attack} \begin{table}[ht!] \centering \vspace*{-1ex} \scriptsize \caption{Some other attacks that may be caused by the eclipse attack} \vspace{1ex} \label{eclipse} \begin{tabular}{|c|c|c|} \hline \textbf{Number}&\textbf{Attack} & \textbf{Harm}\\ \hline \textsc 1&Engineering block races&Wasting mining power on orphan blocks \\ \hline \textsc 2&Splitting mining power&51\% vulnerability may be triggered \\ \hline \textsc 3&Selfish mining&Attacker can gain more than normal mining rewards \\ \hline \textsc 4&0-confirmation double spend&\multirow{2}{*}{The vendor would not get rewards for its service} \\ \cline{1-2} \textsc 5&N-confirmation double spend& \\ \hline \end{tabular} \vspace{-2ex} \end{table} The eclipse attack allows an attacker to monopolize all of the victim's incoming and outgoing connections, which isolates the victim from the other peers in the network \cite{SinghNDW06}. The attacker can then filter the victim's view of the blockchain, or force the victim to waste computing power on obsolete views of the blockchain. Furthermore, the attacker is able to leverage the victim's computing power to conduct its own malicious acts. Heilman et al.~\cite{HeilmanKZG15} consider two types of eclipse attack on Bitcoin's peer-to-peer network, namely the botnet attack and the infrastructure attack. The botnet attack is launched by bots with diverse IP address ranges.
The infrastructure attack models the threat from an ISP, company, or nation-state that controls contiguous IP addresses. Due to the eclipse attack, the Bitcoin network might suffer from disruption, and a victim's view of the blockchain will be filtered. Additionally, the eclipse attack is a useful basis for other attacks, as shown in Table~\ref{eclipse}~\cite{HeilmanKZG15}. \subsection{Liveness Attack} \begin{figure}[ht] \centering \includegraphics[width=4in]{Livenessattack} \caption{Overview of the liveness attack process} \label{Livenessattack} \end{figure} Kiayias et al.~\cite{KiayiasP16} propose the liveness attack, which delays the confirmation of a target transaction for as long as possible. They also present two instantiations of such an attack, on Bitcoin and Ethereum. The liveness attack consists of three phases, namely the attack preparation phase, the transaction denial phase, and the blockchain retarder phase (shown in Fig.\ref{Livenessattack}): (1). Attack preparation phase. Just like in the selfish mining attack, the attacker builds an advantage over honest miners in some way before the target transaction TX is broadcast to the public chain: the attacker builds a private chain, which is longer than the public chain. (2). Transaction denial phase. The attacker privately holds the block that contains TX, in order to prevent TX from being written into the public chain. (3). Blockchain retarder phase. As the public chain grows, the block containing TX can no longer be withheld after a certain time, and the attacker publishes it. In some blockchain systems, TX is regarded as valid once the depth of the block that contains it exceeds a constant. Therefore, the attacker continues building the private chain in order to maintain an advantage over the public chain, and publishes her privately held blocks into the public chain at the proper time to slow down the growth rate of the public chain.
The liveness attack ends when TX is verified as valid in the public chain. \subsection{Balance Attack} Natoli et al.~\cite{NatoliG16a} propose the balance attack against PoW-based blockchains, which allows a low-mining-power attacker to temporarily disrupt communications between subgroups of similar mining power. They abstract the blockchain into a DAG (Directed Acyclic Graph) tree $\mathrm{DAG} = \langle B, P \rangle$, where $B$ is the set of nodes carrying the blocks' information, connected through the directed edges in $P$. After introducing a delay between correct subgroups of equivalent mining power, the attacker issues transactions in one subgroup (called the ``transaction subgroup'') and mines blocks in another subgroup (called the ``block subgroup''), to guarantee that the tree of the block subgroup outweighs the tree of the transaction subgroup. Even though the transactions are committed, the attacker is able to outweigh the tree containing these transactions and rewrite the blocks with high probability. The balance attack inherently violates the persistence of the main branch prefix and allows double spending. The attacker needs to identify the merchant-involved subgroup and create transactions to purchase goods from those merchants. Thereafter, the attacker issues transactions to this subgroup and propagates the mined blocks to the remaining nodes of the group. Once the merchant ships the goods, the attacker stops delaying messages. Since with high probability the DAG tree seen by the merchant is outweighed by another tree, the attacker can successfully reissue another transaction using exactly the same coins. The balance attack shows that PoW-based blockchains are block oblivious: when a transaction is written into the main chain, there is a certain probability that the attacker can override or delete the block containing it.
In the related experiment, the authors configure an Ethereum private chain with parameters equivalent to those of the R3 consortium~\cite{r3}, and show that they can successfully carry out the balance attack while controlling only about 5\% of the total computing power. \section{Future Directions} \label{sec:future} Based on the above systematic examination of the security of current blockchain systems, we list a few future directions to stir up research efforts in this area. First, nowadays the most popular consensus mechanism used in blockchains is PoW. However, a major disadvantage of PoW is the waste of computing resources. To solve this problem, Ethereum is trying to develop a hybrid consensus mechanism combining PoW and PoS. Research on and development of more efficient consensus mechanisms will make a significant contribution to the development of blockchains. Second, with the growth of the number of feature-rich dAPPs, the privacy leakage risk of blockchains will become more serious. A dAPP itself, as well as its communication with the Internet, faces privacy leakage risks. There are some interesting techniques that can be applied to this problem: code obfuscation, application hardening, trusted execution environments (e.g., Intel SGX), etc. Third, blockchains produce a lot of data, including block information, transaction data, contract bytecode, etc. However, not all of the data stored in a blockchain is valid. For example, a smart contract can erase its code by \texttt{SUICIDE} or \texttt{SELFDESTRUCT}, but the address of the contract will not be erased. In addition, many smart contracts in Ethereum contain no code or share exactly the same code, and many smart contracts are never executed after deployment. An efficient data cleanup and detection mechanism is desired to improve the execution efficiency of blockchain systems.
\section{Security Enhancements} \label{sec:enhancement} In this section, we summarize security enhancements to blockchain systems, which can be used in the development of blockchain systems. \subsection{SmartPool} \begin{figure}[ht] \centering \includegraphics[width=5in]{pool} \caption{Overview of \textsc{SmartPool}'s execution process} \label{pool} \end{figure} As described in Section \ref{sec:51}, there already exist mining pools with more than 40\% of the total computing power of the blockchain. This poses a serious threat to the decentralized nature of blockchains, making them vulnerable to several kinds of attacks. Luu et al.~\cite{luu2017smart} propose a novel mining pool system named \textsc{SmartPool}, whose workflow is shown in Fig.\ref{pool}. \textsc{SmartPool} gets the transactions from Ethereum node clients (i.e., parity~\cite{Parity} or geth~\cite{Geth}), which contain mining task information. Then, the miner conducts hashing computation based on the tasks and returns the completed shares to the \textsc{SmartPool} client. When the number of completed shares reaches a certain amount, they are committed to the \textsc{SmartPool} contract, which is deployed in Ethereum. The \textsc{SmartPool} contract verifies the shares and delivers rewards to the client. Compared with the traditional P2P pool, the \textsc{SmartPool} system has the following advantages. (1). Decentralization. The core of \textsc{SmartPool} is implemented in the form of a smart contract, which is deployed in the blockchain. Miners first connect to Ethereum through the client, and the mining pool relies on Ethereum's consensus mechanism to run. In this way, the decentralized nature of pool mining is ensured: the mining pool state is maintained by Ethereum and no longer requires a pool operator. (2). Efficiency. Miners can send the completed shares to the \textsc{SmartPool} contract in batches. Furthermore, miners only need to submit a part of their shares for verification, not all of them.
Hence, \textsc{SmartPool} is more efficient than the P2P pool. (3). Security. \textsc{SmartPool} leverages a novel data structure that prevents an attacker from resubmitting shares in different batches. Furthermore, the verification method of \textsc{SmartPool} guarantees that honest miners gain their expected rewards even if there exist malicious miners in the pool. \subsection{Quantitative Framework} \begin{figure}[ht] \centering \includegraphics[width=4in]{Quantitativeframework} \caption{Components of the quantitative framework} \label{Quantitativeframework} \end{figure} There exist tradeoffs between a blockchain's performance and security. Gervais et al.~\cite{gervaissecurity} propose a quantitative framework, which can be leveraged to analyze the execution performance and security provisions of PoW-based blockchains. As shown in Fig.\ref{Quantitativeframework}, the framework has two components: a blockchain simulator and a security model. The simulator mimics the blockchain's execution, and its inputs are the parameters of the consensus protocol and of the network. Through the simulator's analysis, one can obtain performance statistics of the target blockchain, including block propagation times, block sizes, network delays, stale block rate, throughput, etc. A stale block is a block that is mined but not written to the public chain; the throughput is the number of transactions that the blockchain can handle per second. The stale block rate is passed as a parameter to the security model component, which is based on MDPs (Markov Decision Processes) for analyzing double-spending and selfish-mining attacks. The framework eventually outputs the optimal adversarial strategy against these attacks, and facilitates building security provisions for the blockchain. \subsection{\textsc{Oyente}} Luu et al.~\cite{luu2016making} propose \textsc{Oyente} to detect bugs in Ethereum smart contracts.
\textsc{Oyente} leverages symbolic execution to analyze the bytecode of smart contracts, and it follows the execution model of the EVM. Since Ethereum stores the bytecode of smart contracts in its blockchain, \textsc{Oyente} can be used to detect bugs in deployed contracts. \begin{figure}[ht] \centering \includegraphics[width=4in]{oyente} \caption{Overview of \textsc{Oyente}'s architecture design and execution process} \label{oyente} \end{figure} Fig.\ref{oyente} shows \textsc{Oyente}'s architecture and execution process. It takes the smart contract's bytecode and the Ethereum global state as inputs. First, based on the bytecode, the \texttt{CFG BUILDER} statically builds the CFG (Control Flow Graph) of the smart contract. Then, according to the Ethereum state and the CFG information, the \texttt{EXPLORER} simulates the execution of the smart contract using static symbolic execution. In this process, the CFG is further enriched and refined, because some jump targets are not constants and must instead be computed during symbolic execution. The \texttt{CORE ANALYSIS} module uses the related analysis algorithms to detect four different vulnerabilities (described in Section \ref{sec:vul}). The \texttt{VALIDATOR} module validates the detected vulnerabilities and vulnerable paths. Confirmed vulnerabilities and CFG information are finally output to the \texttt{VISUALIZER} module, which can be employed by users to carry out debugging and program analysis. Currently, \textsc{Oyente} is open source for public use \cite{Oyente}. \subsection{Hawk} As described in Section \ref{sec:privacy}, privacy leakage is a serious threat to blockchains. In the era of blockchain 2.0, not only transactions but also contract-related information is public, such as the contract's bytecode, invoking parameters, etc. Kosba et al.~\cite{kosba2016hawk} propose \textsc{Hawk}, a novel framework for developing privacy-preserving smart contracts.
Leveraging \textsc{Hawk}, developers can write private smart contracts without having to use any code encryption or obfuscation techniques. Furthermore, the financial transaction's information will not be explicitly stored in the blockchain. A \textsc{Hawk} contract is divided into two parts: a private portion and a public portion. Private data and code related to financial functions are written into the private portion, while code that does not involve private information is written into the public portion. The \textsc{Hawk} contract is compiled into three pieces. (1). The program that will be executed in all virtual machines of the nodes, just like smart contracts in Ethereum. (2). The program that will only be executed by the users of the smart contract. (3). The program that will be executed by the manager, which is a special trustworthy party in \textsc{Hawk}. The \textsc{Hawk} manager is executed in an Intel SGX enclave (described in Section \ref{sec:criminal}); it can see the private information of the contract but will not disclose it. \textsc{Hawk} can not only protect privacy from the public, but also preserve privacy between different \textsc{Hawk} contracts. If the manager aborts the protocol of \textsc{Hawk}, it will be automatically financially penalized, and the users will gain compensation. Overall, \textsc{Hawk} can largely protect the privacy of users when they are using blockchains. \subsection{Town Crier} \begin{figure}[ht] \centering \includegraphics[width=4.5in]{tc} \caption{Basic architecture of the \textsc{Town Crier} system} \label{tc} \end{figure} Smart contracts often need to interact with off-chain (i.e., external) data sources. Zhang et al.~\cite{zhang2016town} propose \textsc{TC} (\textsc{Town Crier}), an authenticated data feed system for this data interaction process.
Since smart contracts deployed in the blockchain cannot access the network directly, they cannot get data through HTTPS. \textsc{TC} acts exactly as a bridge between HTTPS-enabled data sources and smart contracts. The basic architecture of \textsc{TC} is shown in Fig.\ref{tc}. The \textsc{TC} contract is the front end of the \textsc{TC} system, and acts as an API between users' contracts and the \textsc{TC} server. The core program of \textsc{TC} runs in an Intel SGX enclave (described in Section \ref{sec:criminal}). The main function of the \textsc{TC} server is to obtain the data requests from users' contracts, and to fetch the data from the target HTTPS-enabled websites. Finally, the \textsc{TC} server returns a datagram to the users' contracts in the form of digitally signed blockchain messages. \textsc{TC} can largely protect the security of the data-requesting process. The core modules of \textsc{TC} run, respectively, on the decentralized Ethereum network, in an SGX-enabled \texttt{enclave}, and on an HTTPS-enabled website. Furthermore, the \texttt{enclave} disables the network connection functionality to maximize its security. The \texttt{Relay} module is designed as a network communication hub between the smart contracts, the SGX \texttt{enclave} environment, and the data source websites. Therefore, it achieves isolation between network communication and the execution of \textsc{TC}'s core program. Even if the \texttt{Relay} module is attacked, or the network communication packets are tampered with, the normal functioning of \textsc{TC} is not affected. The \textsc{TC} system provides a robust security model for smart contracts' off-chain data interaction, and it has already been launched online as a public service~\cite{30}. \section{Conclusion} \label{sec:conclusion} In this paper, we focus on the security issues of blockchain technology. By studying popular blockchain systems (e.g., Ethereum, Bitcoin, Monero, etc.), we conduct a systematic examination of the security risks to blockchains.
For each risk or vulnerability, we analyze its causes and possible consequences. Furthermore, we survey real-world attacks on blockchain systems, and analyze the vulnerabilities exploited in these attacks. Finally, we summarize blockchain security enhancements and suggest a few future directions in this area. \section*{Acknowledgements} This work is supported in part by the National Science Foundation of China (No.61471228), the Key Project of Guangdong Province Science \& Technology Plan (No.2015B020233018), the National Natural Science Foundation of China (No.61402080, No.61572115, No.61502086, No.61572109), and the China Postdoctoral Science Foundation funded project (No.2014M562307). \bibliographystyle{elsarticle-num}
\section{Introduction} In 1986, W. Thurston \cite{Thurston1986} proved a remarkable theorem about the ways in which a compact oriented $3$-manifold $M$ can fibre over the circle. He introduced a semi-norm $x \colon H^1(M;\mathbb{R}) \to [0, \infty)$ (now called the Thurston norm and often denoted by $\| \cdot \|_T$), whose unit ball is a polytope $B_x$, such that if $\phi \colon \pi_1(M) \to \mathbb{Z}$ is a homomorphism induced by a fibration of $M$, then $\phi$ lies in the cone of an open maximal face of $B_x$, and every other integral character $\pi_1(M) \to \mathbb{Z}$ lying in the same cone also comes from a fibration (hence one can talk about fibred faces of $B_x$). A year later, Bieri--Neumann--Strebel \cite{Bierietal1987} introduced the geometric (or $\Sigma$, or BNS) invariant of finitely generated groups. They observed that if $G$ is the fundamental group of the manifold $M$ above, then the $\Sigma$-invariant $\Sigma(G) \subseteq H^1(G;\mathbb{R}) \s- \{0\}$ coincides with the union of the cones of the fibred faces (with the origin removed). Thus, Thurston's theorem can be reinterpreted as a result about the structure of $\Sigma(G)$ -- for $3$-manifold groups, the BNS invariant is determined by the integral polytope $B_{x^\ast} \subset H_1(G;\mathbb{R})$ (which is the dual polytope of $B_x$), where `integral' means that the vertices lie on the integer lattice, and `determined' means that if a character $\phi$ lies in $\Sigma(G)$, then any other character which attains its minimum on $B_{x^\ast}$ at the same face as $\phi$ also lies in $\Sigma(G)$. We can thus mark the vertices of $B_{x^\ast}$ dual to the fibred faces, and say that $\Sigma(G)$ is determined by a marked polytope.
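The phrase `determined by a marked polytope' can be made concrete with a small computational sketch (an illustration of the statement above, not code from any of the cited papers): a character belongs to the invariant precisely when it attains its minimum on the polytope at a unique vertex, and that vertex is marked.

```python
def in_invariant(phi, marked_vertices, tol=1e-9):
    """Decide membership in an invariant determined by a marked polytope.

    phi             -- a character, given as a tuple of real coordinates
    marked_vertices -- list of (vertex, is_marked) pairs; the polytope is
                       the convex hull of the vertices
    A linear functional attains its minimum on a polytope along the face
    spanned by the minimising vertices, so comparing vertex values suffices.
    """
    values = [(sum(a * b for a, b in zip(phi, v)), marked)
              for v, marked in marked_vertices]
    minimum = min(val for val, _ in values)
    minimisers = [marked for val, marked in values if val < minimum + tol]
    return len(minimisers) == 1 and minimisers[0]


# A square with the two bottom vertices marked (a hypothetical example):
square = [((0, 0), True), ((1, 0), True), ((0, 1), False), ((1, 1), False)]
print(in_invariant((1, 2), square))    # True:  unique minimum at the marked (0, 0)
print(in_invariant((-1, -2), square))  # False: unique minimum at the unmarked (1, 1)
print(in_invariant((0, 1), square))    # False: minimum attained along a whole edge
```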
The BNS invariants were subsequently computed for many finitely generated groups: finitely generated nilpotent or metabelian groups \cite[Theorem E]{BieriGroves1984}, $2$-generator $1$-relator groups and Houghton groups \cite[Theorem 4.2 and Proposition 8.3]{Brown1987}, right-angled Artin groups (RAAGs) \cite[Theorem 5.1]{MeierVanWyk1995}, groups with staggered presentations and graph products (with underlying graphs without a central vertex) \cite{GarilleMeier1998}, pure symmetric automorphism groups of finitely generated free groups \cite{Orlandi-Korner2000}, limit groups \cite[Corollary 30]{Kochloukova2010}, fundamental groups of compact K\"ahler manifolds \cite{Delzant2010}, pure symmetric automorphisms of RAAGs \cite[Theorem A]{KobanPiggott2014}, the braided Thompson group $F$ \cite[Theorem 3.4]{Zaremsky2018}, some Artin groups \cite{AlmeidaKochloukova2015a,AlmeidaKochloukova2015,Almeida2017}, pure braid groups \cite{Kobanetal2015}, \{finitely generated free\}-by-$\mathbb{Z}$ groups with polynomially growing monodromy \cite{CashenLevitt2016}, and Lodha--Moore groups \cite{Zaremsky2016}. In all of these examples, $\Sigma(G)$ is determined by an integral polytope. The invariant has also been computed in \cite[Theorem 8.1]{Bierietal1987} for finitely generated groups of piecewise linear automorphisms of the interval $[0,1]$ which are \emph{irreducible}, in the sense that there are no points in the interior of the interval fixed by the entire group, and which have \emph{independent left and right derivatives} in the following sense: let $\lambda$ and $\rho$ be the left and right derivatives taken at the endpoints of the interval $[0,1]$; `independence' means that $\lambda(\ker \rho) = \lambda(G)$ and vice versa. In this case, $\Sigma(G) = H^1(G;\mathbb{R}) \s- \{\kappa \log \lambda, \kappa \log \rho \mid \kappa \geqslant 0 \}$. Thus, $\Sigma(G)$ is determined by a polytope, but not necessarily an integral one -- there is a specific example giving a non-integral polytope.
The example is shown by Stein \cite{Stein1992} to be finitely presented and of type $\typeFP{\infty}$, but it is not of type $\typeF{}$ -- this is relatively easy to see, since the example contains Thompson's group $F$. It is worth noting that the vast majority of the papers above first compute $\Sigma(G)$, and only then deduce that it is determined by a polytope. It is of course very valuable to have a complete description of $\Sigma(G)$, but such detailed knowledge cannot be obtained even for all finitely presented groups -- it was shown by Cavallo--Delgado--Kahrobaei--Ventura~\cite[Theorem 6.4]{Cavalloetal2017} that computing $\Sigma(G)$ is undecidable. Thus, it is worthwhile to study the structure of $\Sigma(G)$ from a more abstract viewpoint. The original motivation behind this article was to study the BNS invariants of free-by-cyclic groups. In particular, the author set out to prove that such BNS invariants have only finitely many connected components, which was not known prior to this work, but was expected due to the analogies between free-by-cyclic groups and fundamental groups of $3$-manifolds. It turned out that the analogy goes much further, and the methods developed also found other applications. Let us summarise the results in this direction by the following statement. \begin{thm*} Let $G$ belong to one of the following classes: \begin{enumerate} \item descending HNN extensions of finitely generated free groups; or \item fundamental groups of compact connected oriented $3$-manifolds; or \item agrarian Poincar\'e duality groups of dimension $3$ and type $\typeF{}$; or \item agrarian groups of deficiency one. \end{enumerate} Then $\Sigma(G)$ is determined by an integral polytope. \end{thm*} Here, `agrarian' means that the integral group ring of the group in question embeds into a skew-field (division algebra). \smallskip We can also draw a number of conclusions related to the $L^2$-torsion polytope $P_{L^2}$ of Friedl--L\"uck~\cite{FriedlLueck2017}.
This polytope is associated to every finite $L^2$-acyclic (i.e. with vanishing $L^2$-homology) classifying space of a group $G$ satisfying the Atiyah conjecture -- the Atiyah conjecture is a statement about the integrality of the $L^2$-Betti numbers of CW-complexes on which $G$ acts freely, properly and cocompactly. (In fact, the polytope $P_{L^2}$ can also be defined for other CW-complexes, not only classifying spaces.) The original definition of $P_{L^2}$ takes as input a classifying space rather than its fundamental group, and if two different classifying spaces are taken, then the resulting polytopes differ by the Newton polytope of some element of the Whitehead group of $G$. The Whitehead group is a ($K$-theoretic) group consisting of obstructions to $G$-homotopy equivalences of $G$-CW-complexes being $G$-simple homotopy equivalences. \begin{thm*} Let $G$ be a finitely generated torsion-free group satisfying the Atiyah conjecture. The Newton polytope of any element of the Whitehead group $\mathrm{Wh}(G)$ vanishes. Hence, if $G$ is additionally $L^2$-acyclic and of type $\typeF{}$, then the $L^2$-torsion polytope $P_{L^2}(G)$ of $G$ is well-defined. Also, if $G$ is amenable and $G \not \cong \mathbb{Z}$, then $P_{L^2}(G)$ is a singleton. \end{thm*} \smallskip The methods used in the proofs of both theorems above are of an algebraic character: we use Sikorav's theorem (\cref{sikorav}) to relate BNS invariants to matrices over the group ring, and study these by extending scalars to a skew-field (given by the property of being agrarian -- usually, we will use the skew-field constructed by Linnell~\cite{Linnell1993} for torsion-free groups satisfying the Atiyah conjecture). Thus, the methods can be traced to the theory of $L^2$-invariants (already used for the computation of the BNS invariants of some $2$-generator $1$-relator groups by Friedl--Tillmann~\cite{FriedlTillmann2015}), but stripped of its analytic flavour.
Let us also mention the article of Friedl--L\"uck~\cite{FriedlLueck2017}, which is foundational for the point of view presented here. The paper is divided into four parts. Let us briefly discuss the contents of each of these. \subsection*{Twisted group rings, biorderable groups, and the Ore localisation} In \cref{sec prelims} we recall the three notions, which will be of central importance throughout the paper. \subsection*{Matrices and their polytopes} In \cref{sec dets} we prove the two main technical results of the paper. We let $H$ be a finitely generated free-abelian group, $\mathbb{K}$ be a skew-field, and denote by $\mathbb{K} H$ some twisted group ring of $H$ (this is a variation on the usual group ring, which appears naturally when one considers extensions of groups). The ring $\mathbb{K} H$ can be thought of as a group ring, but also as a ring of twisted Laurent polynomials in several commuting variables over a skew-field. We will embed $\mathbb{K} H$ into its skew-field of fractions $\mathcal{D}$ (formally, the Ore localisation), and study square matrices over $\mathbb{K} H$ which become invertible over $\mathcal{D}$. For any such matrix $A$, we have the Dieudonn\'e determinant $\det A$ at our disposal; since $\mathcal{D}$ is the skew-field of fractions, we can write $\det A$ as a fraction of two elements in $\mathbb{K} H$. Thinking of these elements as Laurent polynomials, we can take their Newton polytopes (convex hulls of their supports), and thus associate to $\det A$ a formal difference of two polytopes (one for the numerator, and one for the denominator). The first result of significance is \cref{single poly}, which shows that in fact we can represent $\det A$ by a single polytope, denoted by $P(A)$.
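In the commutative, untwisted case the objects involved can be computed directly, which may help fix the picture (a sketch only: in the paper $\mathbb{K} H$ is twisted and the Dieudonn\'e determinant replaces the classical one). Representing Laurent polynomials as dictionaries mapping exponent vectors to coefficients, the support of a $2 \times 2$ determinant -- whose convex hull is the Newton polytope -- can be computed as follows; the matrix used is a hypothetical example.

```python
from collections import defaultdict

def lmul(p, q):
    """Product of two Laurent polynomials {exponent_tuple: coefficient}."""
    out = defaultdict(int)
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            out[tuple(a + b for a, b in zip(e1, e2))] += c1 * c2
    return {e: c for e, c in out.items() if c}

def lsub(p, q):
    """Difference of two Laurent polynomials."""
    out = defaultdict(int, p)
    for e, c in q.items():
        out[e] -= c
    return {e: c for e, c in out.items() if c}

def det2(A):
    """Classical determinant of a 2x2 matrix over Z[t1^{+-1}, t2^{+-1}]."""
    return lsub(lmul(A[0][0], A[1][1]), lmul(A[0][1], A[1][0]))

# A = [[t1 + 1, t2], [t2^{-1}, t1]]; det A = t1^2 + t1 - 1
A = [[{(1, 0): 1, (0, 0): 1}, {(0, 1): 1}],
     [{(0, -1): 1},           {(1, 0): 1}]]
d = det2(A)
print(sorted(d))  # support [(0, 0), (1, 0), (2, 0)]; Newton polytope = its hull
```

Here the determinant is a genuine polynomial, so a single Newton polytope appears for free; the content of \cref{single poly} is that a single polytope survives in the noncommutative, twisted setting as well.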
This rather innocuous statement has interesting consequences (listed below). \cref{single poly} is also interesting in its own right: were our coefficients $\mathbb{K}$ a commutative field, and were the group ring $\mathbb{K} H$ untwisted, we would be dealing with matrices over a commutative ring of classical interest, the ring of Laurent polynomials. It is clear that in this case the determinant $\det A$ is a polynomial, and thus its Newton polytope is a single polytope. In our setting, the Dieudonn\'e determinant is not a polynomial (it is a rational function), but nevertheless we do obtain a single polytope, in analogy with the classical case. \smallskip The second theorem we prove in \cref{sec dets} is \cref{K-bns matrix}. It says that the shape of the Newton polytope $P(A)$ determines the existence of right-inverses of $A$ over the Novikov rings $\widehat {\mathbb{K} H}^\phi$. The superscript $\phi$ denotes a character $\phi \colon H \to \mathbb{R}$, and the Novikov ring $\widehat {\mathbb{K} H}^\phi$ is the ring of formal sums of elements of $H$ with coefficients in $\mathbb{K}$ (just like $\mathbb{K} H$), with the caveat that sums with infinite support are allowed, provided that the supports go to infinity `only in the direction of $\phi$'. \cref{K-bns matrix} says that $A$ admits a right-inverse over $\widehat {\mathbb{K} H}^\phi$ if and only if $\phi$ attains its minimum on $P(A)$ at a unique vertex. The Novikov rings play a central role in $\Sigma$-theory (the theory of the BNS invariants), since Sikorav~\cite{Sikorav1987} proved that a character $\phi$ lies in $\Sigma(G)$ if and only if $H_1(G;\widehat{ \mathbb{Z} G}^\phi)=0$ (see \cref{sikorav}). Motivated by this result, we introduce the notion of the $\Sigma$-invariant of the matrix $A$, denoted by $\Sigma(A)$; \cref{K-bns matrix} tells us that $\Sigma(A)$ is determined by $P(A)$, and this will be the source of our polytopes throughout the paper.
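The criterion of \cref{K-bns matrix} is easy to test in coordinates (a sketch, not taken from the paper): a linear functional on a polytope attains its minimum along the face spanned by the minimising vertices, so it suffices to count the vertices at which the minimum is attained.

```python
def unique_minimising_vertex(phi, vertices, tol=1e-9):
    """True iff the character phi attains its minimum on the polytope
    spanned by `vertices` at exactly one vertex (phi acts on exponent
    vectors via the standard inner product)."""
    values = [sum(a * b for a, b in zip(phi, v)) for v in vertices]
    minimum = min(values)
    return sum(1 for val in values if val < minimum + tol) == 1

# A segment from (0, 0) to (2, 0) as a hypothetical P(A):
segment = [(0, 0), (2, 0)]
print(unique_minimising_vertex((1, 0), segment))  # True:  by the criterion, a right-inverse exists
print(unique_minimising_vertex((0, 1), segment))  # False: phi is constant on P(A)
```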
\subsection*{Agrarian groups} The focal point of the remainder of the paper will be the class of \emph{agrarian groups}, that is, groups $G$ whose group ring $\mathbb{Z} G$ embeds in a skew-field. We introduce the class in \cref{sec agrarian}, where we also introduce the potentially smaller class of equivariantly agrarian groups. It is clear that agrarian groups are torsion-free and satisfy the zero-divisor conjecture of Kaplansky; no other obstructions to being agrarian are currently known -- there is not a single known example of a torsion-free non-agrarian group. On the other hand, there are many positive examples: torsion-free amenable groups whose integral group rings have no zero-divisors, biorderable groups, and torsion-free groups satisfying the Atiyah conjecture are all examples of agrarian groups. The class is also closed under subgroups, directed unions, and many extensions, and being agrarian is a fully-residual property. Noteworthy from our point of view is that the class is known to contain all descending HNN extensions of free groups, and the fundamental groups of $3$-manifolds which fibre over the circle. Note that \cref{sec agrarian,sec dets} are completely independent. \subsection*{Applications} In \cref{sec apps} we list the applications of \cref{single poly,K-bns matrix}. Funke~\cite{Funke2018} introduced the notion of a group of \emph{polytope class}. He defined it as a subclass of the class of torsion-free groups satisfying the Atiyah conjecture and with finitely generated abelianisations. We show (in \cref{polytope class}) that in fact every group in this (a priori) larger class is of polytope class. This allows us to use the work of Funke in greater generality: we prove that the Newton polytopes of elements of the Whitehead group $\mathrm{Wh}(G)$ of a finitely generated torsion-free group satisfying the Atiyah conjecture are trivial (\cref{Whitehead}).
The Whitehead group is a quotient of $K_1(\mathbb{Z} G)$, the first $K$-group, by the subgroup $\{ \pm g \mid g \in G \}$. It plays an important role in homotopy theory: $\mathrm{Wh}(G)$ is trivial if and only if every homotopy equivalence between two CW-complexes with fundamental group $G$ is a simple-homotopy equivalence. The elements of $\mathrm{Wh}(G)$ can be thought of as matrices over $\mathbb{Z} G$, and so it makes sense to talk about Newton polytopes of elements of $\mathrm{Wh}(G)$. Our vanishing result can be seen as evidence towards the conjecture on triviality of the Whitehead groups of torsion-free groups. The vanishing of the Newton polytopes of the elements of $\mathrm{Wh}(G)$ implies that the $L^2$-torsion polytope of Friedl--L\"uck \cite{FriedlLueck2017} is a homotopy invariant, and hence it can be defined as a group invariant of $L^2$-acyclic groups of type $\typeF{}$ satisfying the Atiyah conjecture. The $L^2$-torsion polytope is obtained as the Newton polytope of (the determinant of) the universal $L^2$-torsion. This latter invariant can be thought of as an $L^2$-version of the Whitehead torsion -- we refer the reader to \cite{FriedlLueck2017}, where it is defined and discussed in more detail. The next situation we look at occurs when the group $G$ admits a finite subnormal chain terminating in an amenable group $N$. We prove that if $N$ is not abelian, or $G$ has trivial centre, then the $L^2$-torsion polytope $P_{L^2}(G)$ is a singleton. We use this to prove a conjecture of Friedl--L\"uck--Tillmann~\cite{Friedletal2016}, which states that if $G$ as above is amenable but not cyclic, then $P_{L^2}(G)$ is a singleton. Afterwards, we look at an agrarian group $G$ of deficiency one. For such a group we show (\cref{defic 1}) that $\Sigma(G)$ is determined by an integral polytope. If $G$ satisfies the Atiyah conjecture then we obtain a slightly stronger result (\cref{defic 1 atiyah}). 
These results confirm a conjecture of Friedl (under the additional assumption of the group being agrarian). They are also interesting in view of a conjecture of Bieri, which states that any group of deficiency one with non-trivial $\Sigma(G)$ is a descending HNN extension of a free group. The BNS-invariants of descending HNN extensions will be discussed below; the structure we exhibit for the BNS-invariants of agrarian groups of deficiency one is of the same kind as in the case of descending HNN extensions. We proceed to a detailed discussion of descending HNN extensions of finitely generated free groups. Note that this class includes \{finitely generated free\}-by-$\mathbb{Z}$ groups (usually referred to as free-by-cyclic groups). The author's original motivation was to prove that for such a group $G$, the invariant $\Sigma(G)$ has finitely many connected components -- this was not known before, except for free-by-cyclic groups with polynomially growing monodromy, see the work of Cashen--Levitt~\cite{CashenLevitt2016}. In fact we show more; we prove a result fully analogous to the statement of Thurston for $3$-manifolds: the $L^2$-torsion polytope $P_{L^2}(G)$ admits a marking of its vertices, and a character $\phi$ lies in $\Sigma(G)$ if and only if it attains its minimum on $P_{L^2}(G)$ uniquely at a marked vertex (\cref{free by cyclic}). Note that the polytope $P_{L^2}(G)$ is (for descending HNN extensions of free groups) very much analogous to the Thurston polytope $B_{x^\ast}$ -- when the group is a $3$-manifold group (this happens for free-by-cyclic groups with geometric monodromy), the $L^2$-torsion polytope coincides with $B_{x^\ast}$ (see \cite{FriedlLueck2017}). Further analogies between $P_{L^2}(G)$ and $B_{x^\ast}$ for general descending HNN extensions of free groups were studied by Funke and the author in~\cite{FunkeKielak2018}. In particular, $B_{x^\ast}$ contains enough information to recover the Thurston norm itself.
Using the same technique one can use $P_{L^2}(G)$ to induce a function $H^1(G;\mathbb{R}) \to \mathbb{R}$. It was shown in~\cite{FunkeKielak2018} that in fact this function is also a semi-norm, and given an integral character it returns (up to a sign) the $L^2$-Euler characteristic of the kernel of the character. In particular, when the kernel is finitely generated, it returns its (usual) Euler characteristic, and so determines the rank of such a kernel (which is necessarily a free group). In the two final sections, we look at Poincar\'e duality groups of type $\typeF{}$ in dimension $3$, and then at $3$-manifolds themselves. In the first situation, again we prove that $\Sigma(G)$ is determined by an integral marked polytope (\cref{pd}). When $M$ is a $3$-manifold, we reprove Thurston's theorem. Note that the proof we give is completely independent, and algebraic in nature. \subsection*{Acknowledgements} The author would like to thank Kai-Uwe Bux for comments on an earlier version of the article, as well as Stefan Friedl, Fabian Henneke, Wolfgang L\"uck, and Stefan Witzel for helpful discussions. The author was supported by the Priority Programme 2026 \href{https://www.spp2026.de/}{`Geometry at infinity'} of the German Science Foundation (DFG). \section{Twisted group rings, biorderable groups, and the Ore localisation} \label{sec prelims} We start by introducing three concepts that will be of the utmost importance. \subsection{Twisted group rings} Throughout the article, rings are unital and not necessarily commutative, modules are by default right-modules, and groups are discrete. \begin{dfn} Let $R$ be a ring. We denote its group of units by $R^\times$. An element $r$ is a \emph{zero-divisor} if and only if $r$ is non-zero and there exists a non-zero $x \in R$ such that $xr = 0$ or $rx = 0$. \end{dfn} Let $R$ be a ring and $G$ a group. We denote the set of functions $G \to R$ by $R^G$. Clearly, pointwise addition turns $R^G$ into an abelian group.
Since $R$ has a distinguished $0$, we can talk about supports. \begin{dfn}[Support] For $x \in R^G$, we define its \emph{support} to be \[\operatorname{supp} x = \{ g \in G \mid x(g) \neq 0 \}\] \end{dfn} Note that for an element $x \in R^G$ we will use both the function notation, and the sum notation \[ x = \sum_{g \in G} x(g) g \] In the sum notation, we will often ignore the elements $g$ with $x(g) = 0$, that is we will focus on the values $x$ attains on its support. The sum notation is particularly common when talking about the subgroup $R G$ of $R^G$ consisting of functions with finite support. We now explain how to endow $R G$ with a ring structure. \begin{dfn}[Twisted group ring] Let $\phi \colon G \to \operatorname{Aut}(R)$ and $\mu \colon G \times G \to R^\times$ be two functions satisfying \begin{align*} \phi(g) \circ \phi(g') &= c\big(\mu(g,g')\big) \circ \phi(gg') \\ \mu(g,g') \cdot \mu(gg',g'') &= \phi(g)\big(\mu(g',g'')\big) \cdot \mu(g,g'g'') \end{align*} where $c \colon R^\times \to \operatorname{Aut}(R)$ takes $r$ to the left conjugation by $r$. The functions $\phi$ and $\mu$ are called \emph{structure functions}, and turn $RG$ into a \emph{twisted group ring} by setting \begin{equation} \tag{$\star$} \label{twisted conv} r g \cdot r' g' = r \phi(g)(r') \mu(g,g') gg' \end{equation} and extending to sums by linearity (here we are using the sum notation). When $\phi$ and $\mu$ are trivial, we say that $RG$ is \emph{untwisted} (in this case we obtain the usual group ring). We adopt the convention that the group ring with $\mathbb{Z}$-coefficients is always untwisted. \end{dfn} The following is the key example of a twisted group ring, and we will use it repeatedly throughout the article. \begin{ex} \label{key example} Let $\mathbb{Z} G$ be the (untwisted) integral group ring. Let $\alpha \colon G \to H$ be a quotient with kernel $K$, and let $s \colon H \to G$ be a set-theoretic section of $\alpha$. 
Then the map \[ g \mapsto \Big(g \cdot \big(s \circ \alpha(g)\big)^{-1}\Big) \alpha(g) \] induces an isomorphism of twisted group rings $\mathbb{Z} G \cong (\mathbb{Z} K)H$, where $\mathbb{Z} G$ and $\mathbb{Z} K$ are untwisted, and the structure functions of $(\mathbb{Z} K)H$ are as follows: $\phi(h)$ is the automorphism of $\mathbb{Z} K$ induced by the (left) conjugation by $s(h)$, and $\mu(h,h') = s(h)s(h') \big( s(hh')\big)^{-1}$. Note that the inverse isomorphism $(\mathbb{Z} K)H \cong \mathbb{Z} G$ is given by \[ \sum_{h \in H} x(h) h \mapsto \sum_{h \in H} x(h) s(h) \] where the coefficients $x(h)$ lie in $\mathbb{Z} K$, the sum on the left-hand side is formal, and the sum on the right-hand side is taken in $\mathbb{Z} G$. \end{ex} Note that in the generality that we introduced them, the rings $R G$ are sometimes called \emph{crossed products}. The structure functions do not appear in the notation, but they are (implicitly) assumed to be specified. Note that the structure functions can be used to define products on other subsets of $R^G$ as follows. Let $x,y \in R^G$ be two functions. For every $g \in G$ define $X_g$ and $Y_g$ to be the smallest subsets of $G$ such that if $g = a \cdot b$ and $(a,b) \in \operatorname{supp} x\times \operatorname{supp} y$, then $(a,b) \in X_g \times Y_g$. If $X_g$ and $Y_g$ are finite for every $g$, then \eqref{twisted conv} gives a method of calculating the product $x\cdot y$ by setting its value at $g$ to the value at $g$ of \[ \Big( \sum_{a \in X_g} x(a) a \Big) \cdot \Big( \sum_{b \in Y_g} y(b) b \Big) \] We will refer to this product as induced by the \emph{twisted convolution \eqref{twisted conv}}. \subsection{Biorderable groups} Biorderable groups form a class of groups whose (twisted) group rings are well-behaved. \begin{dfn} \label{biord} A group $G$ is \emph{biorderable} if and only if there exists a total ordering $\leqslant$ on $G$ which is invariant under right and left multiplication. 
\end{dfn} The class of biorderable groups is clearly closed under taking subgroups and arbitrary products -- for the latter, it is enough to well-order the factors of the product, and then use the lexicographic order on the resulting group. Together these two properties imply that residually biorderable groups are themselves biorderable. The class is also closed under central extensions, and contains finitely generated free-abelian groups. Putting these properties together yields that residually torsion-free nilpotent groups are biorderable: this is a rather large class, containing free groups and surface groups (see e.g. \cite{Baumslag2010}), as well as RAAGs (as shown by Droms \cite{Droms1983}). The class of biorderable groups is also closed under directed unions and free products -- see the book by Deroin--Navas--Rivas \cite[Section 1.1.1 and Theorem 2.1.9]{Deroinetal2016}. The key property for us is that twisted group rings of biorderable groups with skew-field coefficients embed in skew-fields. \begin{thm}[Malcev~{\cite{Malcev1948}}; Neumann~{\cite{Neumann1949}}] \label{malcev-neumann} Let $G$ be a biorderable group, let $\mathbb{K}$ be a skew-field, and let $\mathbb{K} G$ be a twisted group ring. The ring $\mathbb{K} G$ embeds into a skew-field $\mathcal F$, its \emph{Malcev--Neumann completion}. \end{thm} Let us briefly describe the construction: fix a biordering $\leqslant$ on $G$. We define $\mathcal F \subseteq \mathbb{K}^G$ to be the subset of those functions $G \to \mathbb{K}$ whose support is \emph{well-ordered} with respect to $\leqslant$, that is such that any non-empty subset of the support has a $\leqslant$-minimum.
It is clear that $\mathcal F$ is an abelian group; it can be turned into a ring using the twisted convolution \eqref{twisted conv} -- one has to argue that when taking products, it is always sufficient to take products of two finitely supported functions, but this follows from the fact that functions in $\mathcal F$ have well-ordered supports. It turns out that the ring $\mathcal F$ is actually a skew-field. \begin{lem} \label{units of biord} Let $G$ be a biorderable group, let $\mathbb{K}$ be a skew-field, and let $\mathbb{K} G$ denote some twisted group ring of $G$. The units of $\mathbb{K} G$ have supports of cardinality $1$. \end{lem} \begin{proof} Let $\leqslant$ be a biordering of $G$. Take $x \in \mathbb{K} G^\times$; by definition, there exists $y \in \mathbb{K} G$ such that $xy = 1$. Write \[ x = \sum_{i=0}^n \lambda_i g_i, \ y = \sum_{j=0}^m \nu_j h_j \] where the finite sequences $(g_i)$ and $(h_j)$ of elements of $G$ are strictly $\leqslant$-increasing, and the coefficients $\lambda_i$ and $\nu_j$ are never $0$. We are now going to carry out the twisted multiplication and compute $x\cdot y$. It is immediate that the support of $xy$ does not contain any group element $\leqslant$-smaller than $g_0 h_0$. The coefficient at $g_0h_0$ of $xy$ is \[ \lambda_0 \phi(g_0)(\nu_0)\mu(g_0,h_0) \] Crucially, none of the factors above is zero, and the multiplication is carried out in the skew-field $\mathbb{K}$. Therefore $g_0h_0$ is the strictly $\leqslant$-smallest element of the support of $xy$. Arguing analogously, we conclude that $g_nh_m$ is the strictly $\leqslant$-greatest element of the support of $xy$. But $xy = 1$, and therefore $1$ is the only element in $\operatorname{supp} xy$. We conclude that $g_0 h_0 = g_n h_m$, which is only possible when $n=0=m$, in which case $x$ has support of cardinality $1$, as claimed. \end{proof} \subsection{Ore localisation} In this section we review the notion of Ore localisation, and show how it comes into play when considering various twisted group rings of amenable groups. \begin{dfn} Let $R$ be a ring. A subset $S \subseteq R$ is said to satisfy the \emph{left Ore condition} if and only if for all $(p,r) \in R \times S$ there exists $(q,s) \in R \times S$ such that \[ sp = qr \] (this can be interpreted as the existence of left common multiples).
The \emph{right Ore condition} is defined analogously by the equation \[ ps = rq \] The ring $R$ is said to be an \emph{Ore ring} if and only if the subset $S$ of $R$ consisting of all non-zero non-zero-divisors satisfies both the left and the right Ore condition. If additionally $R$ has no zero-divisors, then we say that it is an \emph{Ore domain}. \end{dfn} \begin{dfn} Given a ring $R$ and a subset $S \subseteq R \s- \{0\}$ closed under multiplication and satisfying the left and right Ore conditions, we define the \emph{localisation} of $R$ at $S$ by \[ RS^{-1} = (R \times S) /\sim \] where $\sim$ is the transitive closure of the relation identifying $(p,r)$ with $(px,rx)$ for any $x \in S$. When $S$ is the subset of $R$ consisting of all non-zero non-zero-divisors, we call $R S^{-1}$ the \emph{Ore localisation}. \end{dfn} One should think of $(p,r)$ as the right fraction $p/r$. The Ore conditions allow us to change left fractions into right ones and vice versa; they also allow us to find common denominators. Therefore the localisation is itself a ring, and the map $p \mapsto p/1$ is a ring morphism. The construction is explained in Cohn's book \cite[Section 1.2]{Cohn1977}, and in more detail in Passman's book~\cite[Section 4.4]{Passman1985}. \begin{rmk} \label{Ore rmk} Clearly, the Ore condition can be used to find common denominators for any finite number of elements in the localisation $R S^{-1}$. \end{rmk} The following facts may also be found in \cite[Section 1.2]{Cohn1977}. \begin{prop} \label{Ore loc} Let $R$ be an Ore ring, and let $S$ denote the set of non-zero non-zero-divisors. \begin{enumerate} \item $R$ embeds into its Ore localisation $ R S^{-1}$. \item Every automorphism of $R$ extends to an automorphism of $R S^{-1}$. \label{Ore autos} \item \label{Ore funct} When $R$ is an Ore domain, then any ring monomorphism $R' \hookrightarrow R$ extends to a monomorphism $R' S'^{-1} \hookrightarrow R S^{-1}$, where $S'$ is the group of units of $R'$.
\item When $R$ is an Ore domain, then $R S^{-1}$ is a skew-field. \end{enumerate} \end{prop} Let us now introduce amenability. \begin{dfn}[Amenable groups] A countable group $G$ is \emph{amenable} if and only if there exists a sequence $(F_i)$ of non-empty finite subsets of $G$ (a \emph{F\o lner sequence}), such that for every $g \in G$ we have \[ \frac {\vert F_i \vartriangle g.F_i \vert} {\vert F_i \vert} \longrightarrow 0 \] as $i \longrightarrow \infty$ (here $\vartriangle$ denotes the symmetric difference). \end{dfn} Note that in the definition above one can easily replace the left translates $g . F_i$ by the right translates $F_i . g$. In the context of group rings, the Ore condition is (almost) equivalent to amenability of the underlying group, as shown by the author in an appendix to an article of Bartholdi~\cite{Bartholdi2016}. \begin{thm}[{\cite[Theorem A.1]{Bartholdi2016}}] Let $G$ be a group, and suppose that the group ring $\mathbb{Z} G$ has no zero-divisors. Then $\mathbb{Z} G$ is an Ore domain if and only if $G$ is amenable. \end{thm} One of the implications was shown by Tamari~\cite{Tamari1957}. We will go back to his proof, since we will require a version of his theorem for twisted group rings. \begin{thm}[Tamari~{\cite{Tamari1957}}] \label{tamari} Let $G$ be an amenable group, and let $\mathbb{K}$ be a skew-field.
If a twisted group ring $\mathbb{K} G$ does not contain zero-divisors, then $\mathbb{K} G$ is an Ore domain. \end{thm} \begin{proof} Since $\mathbb{K} G$ admits an anti-automorphism induced by $g \mapsto g^{-1}$, it is enough to check the Ore condition on one side. (Note that this anti-automorphism sends $\lambda g$ to $g^{-1} \lambda = \phi(g^{-1})(\lambda) g^{-1}$ for $\lambda \in \mathbb{K}$.) Let $p,r \in \mathbb{K} G $ with $r \neq 0$. Let $F$ be a F\o lner set such that \[ \frac {\vert F \vartriangle F.g \vert} {\vert F \vert} < \frac 1 { \vert U \vert} \] for every $g \in U = \operatorname{supp} p \cup \operatorname{supp} r$. Consider $q = \sum_{f \in F} \lambda_f f$ and $s = \sum_{f \in F} \lambda'_f f$; we treat the elements $\lambda_f, \lambda_f' \in \mathbb{K}$ as $2 \vert F \vert$ variables. We now try to solve the linear equation \[ qr(h) = sp(h) \] for every $h \in G$. Note that this equation is trivial except possibly for \[h \in F \cup \bigcup_{g \in U} F.g\] that is except for fewer than $2 \vert F \vert$ elements $h$. Hence we are solving a system of fewer than $2 \vert F \vert$ equations with $2 \vert F \vert$ variables. Since we are working over a skew-field, a non-zero solution exists. Let us pick one such -- this way we have defined $q$ and $s$ such that $qr = sp$. If $s=0$ then $r$ is a zero-divisor, which is a contradiction. Therefore the pair $(q,s)$ is as required. \end{proof} For the sake of completeness, let us also introduce the elementary amenable groups. \begin{dfn}[Elementary amenable groups] The class of \emph{elementary amenable} groups is the smallest class containing all finite groups and all abelian groups, and closed under subgroups, quotients, directed unions, and extensions. \end{dfn} It is classical that all elementary amenable groups are amenable -- the class of amenable groups contains finite groups and $\mathbb{Z}$, and is closed under subgroups, colimits, quotients, and extensions.
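To illustrate \cref{tamari} in a concrete (and genuinely twisted) case, consider the following example; it is purely illustrative and will not be used later. \begin{ex} Let $H = \langle t \rangle \cong \mathbb{Z}$ and $\mathbb{K} = \mathbb{Q}(x)$, and consider the twisted group ring $\mathbb{K} H$ with $\mu$ trivial and with $\phi(t)$ the $\mathbb{Q}$-automorphism of $\mathbb{Q}(x)$ sending $x$ to $x+1$. This is the skew Laurent polynomial ring $\mathbb{Q}(x)[t^{\pm 1}; \phi]$, in which $t x = (x+1) t$. Comparing lowest and highest powers of $t$ shows that $\mathbb{K} H$ has no zero-divisors (alternatively, this follows from \cref{malcev-neumann}, since $\mathbb{Z}$ is biorderable). As $\mathbb{Z}$ is amenable, \cref{tamari} tells us that $\mathbb{K} H$ is an Ore domain; its Ore localisation is a skew-field of twisted rational functions, even though $\mathbb{K} H$ is far from commutative. \end{ex}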
In \cref{sec: vanishing} we will need the following lemma. \begin{lem} \label{inheriting Ore} Let $G$ be any group, and let $N$ be a normal subgroup. Suppose that $S \subseteq \mathbb{Z} N \s- \{0\}$ is closed under multiplication and satisfies the left Ore condition in $\mathbb{Z} N$. Then $S^G$, the multiplicative closure of the $G$-conjugates of $S$, satisfies the left Ore condition in $\mathbb{Z} G$. \end{lem} \begin{proof} We first argue that $S^G$ satisfies the left Ore condition in $\mathbb{Z} N$, and then we argue that any $G$-invariant subset of $\mathbb{Z} N$ closed under multiplication and satisfying the left Ore condition in $\mathbb{Z} N$ actually also satisfies the left Ore condition in $\mathbb{Z} G$. To prove the first claim, take $p \in \mathbb{Z} N$ and $s = s_1 \dots s_n$ where each $s_i$ lies in some $G$-conjugate of $S$. We will find left common multiples of $p$ and $s$. We argue by induction on $n$. If $n=1$ then the claim follows from the Ore condition for $S$ (which clearly holds for all conjugates of $S$ as well). Otherwise, using the Ore condition, there exist $q \in \mathbb{Z} N$ and $t_n$ in the same conjugate of $S$ as $s_n$, such that \[ t_n p = q s_n \] Now by the inductive hypothesis there exist $r \in \mathbb{Z} N$ and $ u \in S^G$ such that \[ uq = r s_1\dots s_{n-1} \] and therefore \[ ut_n p = r s \] This proves the first claim. \smallskip To prove the second claim, take $s \in S$ and $p \in \mathbb{Z} G$, where we assume that $S$ is multiplicatively closed and $G$-invariant. We have $\mathbb{Z} G = (\mathbb{Z} N) \, G/N$ (as in \cref{key example}), and so we can write \[ p = \sum_{i=1}^n p_i \] where $p_i = \kappa_i g_i$, with $g_i \in G/N$ and $\kappa_i \in \mathbb{Z} N$. 
Using the Ore condition in $\mathbb{Z} N$ we conclude that there exist $\kappa_1' \in \mathbb{Z} N$ and $s_1 \in S$ such that $s_1 {\kappa_1} = \kappa_1' s^{{g_1}^{-1}}$, where exponentiation denotes right conjugation, and we are using the fact that $S$ is $G$-invariant. We now argue by induction on $n$: if $n=1$, then \[ {s_1} p = {s_1} \kappa_1 g_1 = \kappa_1' s^{{g_1}^{-1}} g_1 = \kappa_1' g_1 s \] and the Ore condition is verified. Otherwise, we have \[ s_1 p - \kappa_1' g_1 s = s_1 p - s_1 p_1 = \sum_{i=2}^n s_1\kappa_i g_i \] and $s_1 \kappa_i \in \mathbb{Z} N$ for every $i$. By the inductive hypothesis there exist $q \in \mathbb{Z} G$ and $t \in S$ such that \[ t \cdot (\sum_{i=2}^n s_1 \kappa_i g_i) = q s \] and so \[ t s_1 p = (t \kappa_1' g_1 + q)s \qedhere \] \end{proof} \section{Matrices and their polytopes} \label{sec dets} In this section, we consider a finitely generated free-abelian group $H$. With the applications in mind, one will not err by thinking of $H$ as the free part of the abelianisation of a finitely generated group. On the other hand, the twisted group ring $\mathbb{K} H$ (where $\mathbb{K}$ is a skew-field), which will be the main focal point of the section, can be interpreted as a ring of twisted Laurent polynomials in finitely many commuting variables, and the more algebraically-minded reader might prefer this point of view. \subsection{Polytopes} We start by looking at polytopes. Note that $H_1(H;\mathbb{R}) = H\otimes_\mathbb{Z} \mathbb{R}$ is a finite dimensional vector space over $\mathbb{R}$. \begin{dfn}[Polytopes] A \emph{polytope} in $H_1(H;\mathbb{R})$ is a compact subset which is the intersection of finitely many affine halfspaces. Note that, in particular, the empty set is a polytope.
Given a polytope $P$ and a character $\phi \in H^1(H;\mathbb{R})$, we define \[ F_\phi(P) = \big\{ p \in P \mid \phi(p) = \min_{q \in P} \phi(q) \big\} \] (clearly, this is precisely the set of points in $P$ on which $\phi$ attains its minimum). The image $F_\phi(P)$ is also a polytope. The collection $\{ F_\phi(P) \mid \phi \in H^1(H;\mathbb{R}) \}$ is the collection of \emph{faces} of $P$. A face is called a \emph{vertex} if and only if it has dimension $0$. Note that $P = F_0(P)$ is also a face of $P$. A polytope $P$ is \emph{integral} if and only if its vertices lie in $H \subseteq H_1(H;\mathbb{R})$; a polytope $P$ is \emph{$\phi$-flat} if and only if $P = F_\phi(P)$. The \emph{Minkowski sum} of two polytopes $P$ and $Q$ defined by \[ P + Q = \{p+q \mid p \in P, q \in Q \} \] turns the set of all non-empty polytopes in $H_1(H;\mathbb{R})$ into a cancellative abelian monoid. It is clear that the Minkowski sum restricts to an operation on the set of non-empty integral polytopes in $H_1(H;\mathbb{R})$, turning this set into an abelian cancellative monoid as well. We define $\P(H)$ to be the Grothendieck group of fractions of the monoid of non-empty integral polytopes in $H_1(H;\mathbb{R})$: the group $\P(H)$ is constructed from the free-abelian group with basis equal to the set of non-empty integral polytopes by factoring out relations given by the Minkowski sum. It is easy to see that every element in $\P(H)$ is represented by a formal difference $P - Q$ of polytopes. We say that an element is a \emph{single polytope} if and only if we can take $Q$ to be a singleton. In this case $P$ is uniquely determined up to translation. Note that $F_\phi$ induces a homomorphism $F_\phi \colon \P(H) \to \P(H)$ given by $F_\phi(P-Q) = F_\phi(P) - F_\phi(Q)$. We also define $\P_T(H)$ to be the quotient of $\P(H)$ by the subgroup consisting of formal differences of singletons. 
It is immediate that this is equivalent to considering pairs of polytopes $P-Q$ up to translation of the first polytope. Note that a single polytope in $\P(H)$ is represented by a unique single polytope in $\P_T(H)$. \end{dfn} Let us now introduce duals, which establish a correspondence between polytopes in $H_1(H;\mathbb{R})$ and subsets of $H^1(H;\mathbb{R})$. \begin{dfn}[Duals] Let $P \subset H_1(H;\mathbb{R})$ be a polytope. Given a face $Q$ of $P$, we define its \emph{duals} to be the connected components of \[ \{\phi \in H^1(H;\mathbb{R}) \s-\{0\} \mid F_\phi(P) = Q \} \] \end{dfn} \begin{rmk} The only case in which a face $Q$ admits more than one dual occurs when $Q = P$ and $P$ is of codimension $1$ in $ H_1(H;\mathbb{R})$. In this case, there are precisely two duals, each of which consists of a single ray. \end{rmk} \begin{rmk} Observe also that duals are convex, and if $P$ is not empty, then every dual of a vertex of $P$ is open, and the union of duals of vertices of $P$ is dense in $H^1(H;\mathbb{R})$. \end{rmk} \begin{lem} \label{faces decomp} Let $P = \sum_{i=1}^n P_i$ be a Minkowski sum of polytopes, and let $Q$ be a face of $P$. There exist unique faces $Q_i$ of $P_i$ such that $Q = \sum_{i=1}^n Q_i$. \end{lem} \begin{proof} Let $Q$ be a face of $P$. By definition, this means that there exists a character $\psi$ such that $Q = F_\psi(P)$. Let $Q_i = F_\psi(P_i)$; we clearly have $Q = \sum Q_i$. We now argue that the definition of $Q_i$ is in fact independent of the choice of $\psi$. Perturb $\psi$ slightly inside the dual of $Q$ containing it; this way we can potentially decrease the dimensions of some of the faces $F_\psi(P_i)$ without increasing the dimension of any other. Since $Q$ is well-defined, this proves that the faces $Q_i$ do not depend on the choice of $\psi$ locally. Now, either the dual of $Q$ is unique, or there are precisely two duals, each consisting of a single ray, the two rays being antipodal.
In the former case, the fact that $Q_i$ is well defined follows immediately from connectivity; in the latter case, it is clear that if $Q = F_\psi(P) = F_{-\psi}(P)$ then $F_\psi(P_i) = F_{-\psi}(P_i)$ for each $i$. \end{proof} We now state a result of Funke which gives us a method of recognising single polytopes among the elements of $\P(H)$. \begin{prop}[Funke~{\cite[Lemma 4.3]{Funke2018}}] \label{funke} Suppose that $H$ is of rank at least $2$. For every $P \in \P(H)$, the element $P$ is a single polytope if and only if $F_\phi(P)$ is a single polytope for each $\phi \in H^1(H;\mathbb{Z}) \s- \{0\}$. \end{prop} The following corollary occurs as \cite[Lemma 4.7]{Funke2018}. \begin{cor} \label{flat polys} Let $P \in \P(H)$. Suppose that for every $\phi \in H^1(H;\mathbb{Z}) \s- \{0\}$ there exists a $\phi$-flat polytope $X_\phi$ such that $P+X_\phi$ is a single polytope. Then $P$ is a single polytope. \end{cor} \begin{proof} We will argue by induction on the rank of $H$. When this rank is $0$, every element of $\P(H)$ is a single polytope. When the rank is $1$, then the only $\phi$-flat polytopes are singletons, and so $P$ is a single polytope. Now suppose that the result holds for $n-1$, and let $H$ be of rank $n$ (with $n\geqslant 2$). We will argue that for every $\psi \in H^1(H;\mathbb{Z}) \s- \{0\}$, the face $F_\psi(P)$ is a single polytope; in view of \cref{funke}, this suffices. Note that $F_\psi(X_\phi)$ is a $\phi$-flat polytope. Also, $F_\psi(P+X_\phi) = F_\psi(P) + F_\psi(X_\phi)$ is a single polytope, since $P + X_\phi$ is. But, up to translation, we have $F_\psi(P) = Q - Q'$ such that $Q,Q'$ and $F_\psi(X_\phi)$ lie in $\P(\ker \psi)$, and the rank of $\ker \psi$ is lower than that of $H$. Hence, by the inductive hypothesis, $F_\psi(P)$ is a single polytope. \end{proof} \subsection{Dieudonn\'e determinant} Let us start by introducing an important convention: whenever we talk about a matrix, we will explicitly state over which ring it lies. 
In particular, properties like invertibility will always be taken in the ring over which the matrix is defined. If we want to consider a matrix as a matrix over a larger ring (via extension of scalars), we will use tensor notation. Unless specified otherwise, we tensor over $\mathbb{Z} G$. We will use $M_n(R)$ to denote the ring of $n \times n$ matrices over a ring $R$. We are now going to introduce the Dieudonn\'e determinant, which can be computed for square matrices over a skew-field. We will later show how to associate a polytope to such determinants. \begin{dfn} Let $A=(a_{ij}) \in M_n(\mathcal{D})$, where $\mathcal{D}$ is a skew-field. The \emph{canonical representative} of the Dieudonn\'e determinant $\det^c A \in \mathcal{D}$ is defined inductively as follows: \begin{enumerate} \item If $n=1$ then $\det^c A = a_{11}$. \item If the last row of $A$ consists solely of zeros, then $\det^c A = 0$. \item If $a_{nn} \neq 0$ then we form an $(n-1) \times (n-1)$ matrix $A' = (a'_{ij})$ by setting $a'_{ij} = a_{ij} - a_{in} a_{nn}^{-1} a_{nj}$, and declare $\det^c A = \det^c A' \cdot a_{nn}$. \item If $a_{nn} =0$ and not every entry in the last row of $A$ is zero then let $j$ be maximal such that $a_{nj} \neq 0$. Let $B$ be the permutation matrix interchanging columns $j$ and $n$. We declare $\det^c A = -\det^c (AB)$. \end{enumerate} The \emph{Dieudonn\'e determinant} $\det A$ is defined to be the image of $\det^c A$ in \[\mathcal{D}^\times /[\mathcal{D}^\times,\mathcal{D}^\times] \sqcup \{0\}\] \end{dfn} The procedure in step $(3)$ corresponds to multiplying $A$ on the right by elementary matrices until the last row has a single non-zero element. Thus, computing the canonical representative of the Dieudonn\'e determinant consists of putting the matrix $A$ into an upper-triangular form, and then taking the product of the diagonal entries. Hence it is clear that over commutative fields the Dieudonn\'e determinant coincides with the usual determinant.
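To see the recursion in action in the smallest non-trivial case (the computation below is purely illustrative), take $A = \begin{pmatrix} a & b \\ c & d \end{pmatrix} \in M_2(\mathcal{D})$ with $d \neq 0$. Step $(3)$ produces the $1 \times 1$ matrix $A' = (a - bd^{-1}c)$, and hence
\[ {\det}^c A = (a - bd^{-1}c) \cdot d \]
which over a commutative field reduces to the familiar $ad - bc$. Over the quaternions, taking $a = i$, $b = j$, $c = k$ and $d = j$ yields ${\det}^c A = (i - jj^{-1}k)j = (i-k)j = i + k$, whereas the naive expression $ad - bc = ij - jk = k - i$; the order of the factors genuinely matters, which is why only the image of $\det^c A$ in $\mathcal{D}^\times /[\mathcal{D}^\times,\mathcal{D}^\times] \sqcup \{0\}$ is well-behaved.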
Note that when writing $\det A$ we do not need to specify the skew-field, since we have adopted the convention that every matrix comes with a specified ring over which it lies. \begin{thm}[Dieudonn\'e~{\cite{Dieudonne1943}}] For any $n$, the Dieudonn\'e determinant \[\det \colon M_n(\mathcal{D}) \to \mathcal{D}^\times /[\mathcal{D}^\times,\mathcal{D}^\times] \sqcup \{0\}\] is multiplicative. \end{thm} \subsection{The Newton polytope of a matrix} Recall that $\mathbb{K}$ is a skew-field, and $\mathbb{K} H$ is a twisted group ring of $H$, a finitely generated free-abelian group. We start by introducing Newton polytopes of elements in $\mathbb{K} H$. Note that $\mathbb{K} H$ has no zero-divisors -- this follows directly from \cref{malcev-neumann}, since $H$ is biorderable. To study the Newton polytopes, let us first introduce the minima $\mu_\phi$, which are algebraic counterparts to the face maps $F_\phi$. \begin{dfn}[$\mu_\phi$] \label{minima} For any character $\phi \in H^1(H;\mathbb{R})$ we define $\mu_\phi \colon { \mathbb{K} H} \to \mathbb{K} H$ by setting \[ \mu_\phi(x)(h) = \left\{ \begin{array}{cl} x(h) & \textrm{ if } \phi(h) \textrm{ is minimal in } \phi(\operatorname{supp} x) \\ 0 & \textrm{ otherwise} \end{array} \right. \] (we are using the function notation here). \end{dfn} Note that $\operatorname{supp} \mu_\phi(x)$ consists of elements in $H$ with the same value under $\phi$. Since $\mathbb{K} H$ has no zero-divisors, it is an easy exercise to see that $\mu_\phi$ is multiplicative. \begin{dfn} Let $p \in \mathbb{K} H$ be an element. The associated \emph{Newton polytope} $P(p)$ is the convex hull of $\operatorname{supp} p$ taken in the $\mathbb{R}$-vector space $H_1(H;\mathbb{R})$.
\end{dfn} Note that, for every $\phi \in H^1(H;\mathbb{R})$ and every $p \in \mathbb{K} H$, we have \[ F_\phi\big(P(p)\big) = P\big( \mu_\phi(p) \big) \] \begin{lem} \label{P is hom} The map $P \colon \mathbb{K} H \s- \{0\} \to \P(H)$ satisfies \[ P(pq) = P(p) + P(q) \] for every $p,q \in \mathbb{K} H \s- \{0\}$. \end{lem} \begin{proof} The proof is an induction on the rank $n$ of $H$. If $n=0$, then both sides of the desired equation are trivial. If $n=1$, then $\mathbb{K} H$ is a twisted Laurent polynomial ring in one variable, and the result is immediate. Now suppose that $n\geqslant 2$. Take any character $\phi \in H^1(H;\mathbb{R}) \s- \{0\}$. We have \[ F_\phi\big( P(pq) \big) = P\big( \mu_\phi(pq) \big) = P\big( \mu_\phi(p) \cdot \mu_\phi(q) \big) \] Now we use the inductive hypothesis (we might have to translate the polytopes $P\big(\mu_\phi(p)\big)$ and $P\big( \mu_\phi(q) \big)$ first, so that they both lie in the same hyperplane of $H_1(H;\mathbb{R})$). We obtain \[ P\big( \mu_\phi(p) \cdot \mu_\phi(q) \big) = P\big( \mu_\phi(p)\big) + P\big( \mu_\phi(q) \big) = F_\phi\big( P(p) \big) + F_\phi\big( P(q) \big) = F_\phi\big( P(p) + P(q) \big) \] Hence, for every non-trivial $\phi$, we have \[F_\phi\big( P(pq) - P(p) - P(q)\big)=0= F_\phi\big( -P(pq) + P(p) + P(q)\big)\] But then \cref{funke} tells us that $P(pq) - P(p) - P(q)$ and $-\big(P(pq) - P(p) - P(q)\big)$ are single polytopes, which is only true when $P(pq) - P(p) - P(q)$ is a singleton. Using any of the maps $F_\phi$ we immediately see that this singleton is precisely the origin of $H_1(H;\mathbb{R})$, and therefore $P(pq) = P(p)+P(q)$. \end{proof} Since $H$ is amenable and $\mathbb{K} H$ has no zero-divisors, $\mathbb{K} H$ is an Ore domain by \cref{tamari}; let $\mathcal{D}$ denote the Ore localisation of $\mathbb{K} H$ -- recall that $\mathcal{D}$ is a skew-field containing $\mathbb{K} H$, and elements of $\mathcal{D}^\times$ are fractions of the form $pq^{-1}$ with $p,q \in \mathbb{K} H \s- \{0\}$.
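Let us illustrate \cref{P is hom} in the simplest setting, with $\mathbb{K}$ a commutative field and the twisting trivial. Take $H = \mathbb{Z}^2$ with basis $x,y$, so that $\mathbb{K} H = \mathbb{K}[x^{\pm 1}, y^{\pm 1}]$. For $p = 1+x$ and $q = 1+y$ we have $pq = 1 + x + y + xy$, and \[ P(pq) = \operatorname{conv}\{0,\, e_x,\, e_y,\, e_x+e_y\} = [0,e_x] + [0,e_y] = P(p) + P(q) \] the unit square being the Minkowski sum of the two segments. Moreover, for the character $\phi$ sending $x \mapsto 1$ and $y \mapsto 0$ we have $\mu_\phi(pq) = 1 + y$, and $F_\phi\big(P(pq)\big) = [0,e_y] = P\big(\mu_\phi(pq)\big)$, in accordance with the compatibility of $F_\phi$ and $\mu_\phi$ noted above.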
Thanks to \cref{P is hom}, we immediately see that the map $P$ induces a homomorphism $P \colon \mathcal{D}^\times \to \P(H)$ defined by \[ P(pq^{-1}) = P(p) - P(q) \] Since $\P(H)$ is abelian, the homomorphism $P$ gives a well-defined group homomorphism \[\mathcal{D}^\times / [\mathcal{D}^\times, \mathcal{D}^\times] \to \P(H)\] Hence, for every square matrix $A$ over $\mathbb{K} H$, we may talk about $P(\det A \otimes \mathcal{D}) \in \P(H) \sqcup \{ \emptyset \}$ (setting $P(0) = \emptyset$). \begin{dfn}[Newton polytope] Given a square matrix $A$ over $\mathbb{K} H$, we define its \emph{Newton polytope} to be $P(A) = P(\det A\otimes \mathcal{D}) \in \P(H) \sqcup \{ \emptyset\}$. \end{dfn} We are now ready for the first result of the article. \begin{thm}[Single polytope] \label{single poly} Let $A$ be a square matrix over $\mathbb{K} H$. Then $P(A)$ is empty or a single polytope. \end{thm} \begin{proof} Take $\phi \in H^1(H;\mathbb{Z}) \s- \{0\}$, and let $L = \ker \phi$. Take $z\in H$ with $\phi(z) = 1$, the generator of $\operatorname{im} \phi = \mathbb{Z}$. Let $\mathbb{K} L$ denote the subring of $\mathbb{K} H$ consisting of elements of support lying in $L$. It is clear that $\mathbb{K} L$ is a twisted group ring of $L$ with coefficients $\mathbb{K}$, and hence an Ore domain; let us denote the Ore localisation of $\mathbb{K} L$ by $\mathbb{L}$. The localisation $\mathbb{L}$ embeds into the Ore localisation $\mathcal{D}$ of $\mathbb{K} H$ by \cref{Ore loc}(\ref{Ore funct}), and the action by conjugation of $z$ on $\mathbb{K} L$ extends to an action on $\mathbb{L}$ by \cref{Ore loc}(\ref{Ore autos}). Thus, we have an embedding of the twisted group ring $\mathbb{L} \mathbb{Z}$ (with $\mathbb{Z}$ generated by $z$) into $\mathcal{D}$. We will think of $\mathbb{L} \mathbb{Z}$ as a twisted Laurent polynomial ring with variable $z$.
We apply Euclid's algorithm over $z$ to $A \otimes \mathbb{L} \mathbb{Z}$: using only elementary matrices over $\mathbb{L} \mathbb{Z}$ we put $A \otimes \mathbb{L} \mathbb{Z}$ into an upper-triangular form. Thus $\det A \otimes \mathcal{D}$ can be represented by $\sum_{i=-n}^n \kappa_i z^i$, a Laurent polynomial in $z$ with coefficients in $\mathbb{L}$. Since $\mathbb{L}$ is the Ore localisation of $\mathbb{K} L$, there exist $\mu_i, \nu \in \mathbb{K} L$ such that $\kappa_i = \nu^{-1} \mu_i$ for every $i$ (see \cref{Ore rmk}). Hence $\det A \otimes \mathcal{D}$ is represented by $\nu^{-1} \sum_{i=-n}^n \mu_i z^i$. If $P(A) \neq \emptyset$, it immediately follows that \[ P(A) = P(\det A \otimes \mathcal{D} ) = P(\sum_{i=-n}^n \mu_i z^i) - P (\nu) \] Crucially, $P(\sum_{i=-n}^n \mu_i z^i)$ and $P (\nu)$ are single polytopes, and $P(\nu)$ is $\phi$-flat. Thus \cref{flat polys} tells us that $P(A)$ is a single polytope. \end{proof} \iffalse The result above will be used in \cref{polytope class} to show that every torsion-free group satisfying the Atiyah conjecture is of polytope-class, as defined by Funke. It will also be used to show that the $L^2$-torsion polytope of a descending HNN extension of a finitely generated free group is a single polytope, thus strengthening the analogy between such groups and $3$-manifold groups. Moreover, it occurs as a technical ingredient in the proof of \cref{K-bns matrix}. \cref{single poly} is also an interesting result in its own right: the group ring $\mathbb{K} H$ can be thought of as a twisted ring of Laurent polynomials in several commuting variables and coefficients in a skew-field. Were the polynomials not twisted, and were the skew-field a commutative field, we would be in the familiar situation of having a square matrix over an abelian ring. 
Taking the determinant in this context can be executed by using the standard formula, and the result will itself be a Laurent polynomial (rather than a rational function, as the Dieudonn\'e determinants end up being). In particular, the Newton polytope of the determinant is then clearly a single polytope. The theorem above can thus be thought of as a generalisation of this fact. \fi \subsection{Novikov rings} To see why one should be interested in Newton polytopes of matrices over group rings, we need to introduce the Novikov rings. \begin{dfn}[Truncated support] Let $R$ be a ring and $G$ a group. Let $x \in R^G$ be a function. Given a character $\phi \in H^1(G;\mathbb{R})$ and a constant $\kappa \in \mathbb{R}$ we define the \emph{truncated support} to be \[ \operatorname{supp}_{\phi,\kappa} x = \{ g \in G \mid x(g) \neq 0 \textrm{ and } \phi(g) \leqslant \kappa \} \] \end{dfn} \begin{dfn}[Novikov ring] Given a character $\phi \in H^1(G;\mathbb{R})$ we define \[ \widehat {R G}^\phi = \{ x \colon G \to R \mid \operatorname{supp}_{\phi,\kappa} x \textrm{ is finite for every } \kappa \in \mathbb{R} \} \] Pointwise addition turns $\widehat {R G}^\phi$ into an abelian group; to endow it with a ring structure we use the twisted convolution \eqref{twisted conv} -- this way the set-theoretic inclusion $R G \subseteq \widehat {R G}^\phi$ turns into an embedding of rings. For any $S \subseteq H^1(G;\mathbb{R})$ we define $\widehat {R G}^S = \bigcap_{\phi \in S} \widehat {R G}^\phi$. \end{dfn} We will treat $\widehat {R G}^\phi$ as a left $R G$-module, and a right $\widehat {R G}^\phi$-module, so we will tensor $R G$-modules with $\widehat {R G}^\phi$ on the right. The Novikov rings play a crucial role in the Bieri--Neumann--Strebel invariants via the theorem of Sikorav (\cref{sikorav}), which we will discuss later. The key technical tool of the current article is the notion of $\Sigma$-invariants of matrices, which we introduce next.
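The following basic example is worth keeping in mind. Take $G = \mathbb{Z}$ with generator $t$, and let $R$ be a commutative field, the twisting being trivial. For $\phi \in H^1(\mathbb{Z};\mathbb{R})$ with $\phi(t) = 1$, an element of $\widehat {R G}^\phi$ is precisely a formal sum $\sum_i \lambda_i t^i$ with only finitely many non-zero terms below any given exponent, that is a formal Laurent series; thus \[ \widehat {R G}^\phi = R(\!(t)\!) \] is a field. In particular $1-t$, which is not invertible in $R G$ itself (note that $\widehat {R G}^0 = R G$), becomes invertible in $\widehat {R G}^\phi$ with inverse $\sum_{i \geqslant 0} t^i$, and also in $\widehat {R G}^{-\phi} = R(\!(t^{-1})\!)$ with inverse $-\sum_{i \leqslant -1} t^i$. In the language of the next subsection, the $\Sigma$-invariant of the $1 \times 1$ matrix $(1-t)$ is therefore $H^1(\mathbb{Z};\mathbb{R}) \s- \{0\}$, an open set.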
\subsection{\textSigma-invariants of matrices} \begin{dfn}[$\Sigma(A)$] Let $R$ be a ring, $G$ a group, and $R G$ a twisted group ring. Let $A$ be a (not necessarily square) matrix over $R G$. We define $\Sigma(A) \subseteq H^1(G;\mathbb{R})$, the \emph{$\Sigma$-invariant} of $A$, by declaring $\phi \in \Sigma(A)$ if and only if $A \otimes \widehat {R G }^\phi$ admits a right inverse. \end{dfn} Recall our convention: $A \otimes \widehat {R G }^\phi$ admitting a right inverse means precisely that the inverse lies over $\widehat {R G }^\phi$ as well. \begin{dfn}[$\phi$-identity] Let $\phi \in H^1(G;\mathbb{R})$ be given. An $n \times m$ matrix $A$ over $\widehat {R G}^\phi$ is said to be a \emph{$\phi$-identity} if and only if for every entry $x$ of $A - \mathrm{I}$ we have $\phi(\operatorname{supp} x) \subseteq (0,\infty)$. (Here, $\mathrm{I}$ denotes the identity matrix extended by a zero matrix either on the right or at the bottom, so that $\mathrm{I}$ is an $n \times m$ matrix.) \end{dfn} For the purpose of the above definition, we treat elements in $\widehat {R G}^\phi$ as $1 \times 1$ matrices. \begin{lem} \label{phi id} Suppose that a square matrix $A$ over $R G$ is a $\phi$-identity for some ${\phi \in H^1(G;\mathbb{R})}$. There exists an open neighbourhood $U \subseteq H^1(G;\mathbb{R})$ of $\phi$ such that $A \otimes \widehat {R G}^U$ is right and left invertible. \end{lem} \begin{proof} Let $B$ be defined by the equation $A = \mathrm{I} - B$. Observe that the definition of $\phi$-identity tells us that $\phi$ is strictly positive on the supports of the entries of $-B$, and hence of $B$. Since the supports of the entries of $B$ are finite, there exist $\kappa>0$ and an open neighbourhood $U$ of $\phi$ in $H^1(G;\mathbb{R})$ such that for every $\psi \in U$ the supports of the entries of $B$ are sent by $\psi$ to $(\kappa,\infty)$. 
Thus \[ Y = \sum_{i=0}^\infty B^i \] defines a matrix over $\widehat {R G}^U$, as, for every $i \geqslant 1$, the supports of the entries of $B^i$ are mapped by every $\psi \in U$ to $(i \kappa, \infty)$. Clearly $AY = (\mathrm{I} - B)Y = \mathrm{I}$ and $YA = Y(\mathrm{I}-B) = \mathrm{I}$. \end{proof} \begin{lem} \label{entries far away} \label{bns open} Let $A$ be an $n \times m$ matrix over $R G$, and let $\phi \in \Sigma(A)$. There exists an $m \times n$ matrix $X$ over $R G$ such that $AX$ is a $\phi$-identity. Moreover, there exists an open neighbourhood $U$ of $\phi$ in $H^1(G;\mathbb{R})$ such that $A\otimes \widehat{R G}^U$ admits a right inverse. \end{lem} \begin{proof} We let $\operatorname{supp} A$ denote the union of the supports of the entries of $A$. Take $C \in \mathbb{N}$ such that \[ C > - \min \phi(\operatorname{supp} A) \] Since $A \otimes \widehat {R G}^\phi$ is right-invertible, there exists a matrix $B = (b_{ij})$ over $\widehat {R G}^\phi$ such that $AB = \mathrm{I}$. Let a pair $(i,j)$ be fixed for a moment. We have $b_{ij} = \sum \lambda_g g$ (where the sum is typically infinite). We set \[ b_{ij}^0 = \sum_{\phi(g) < C} \lambda_g g \in R G \] and define $b_{ij}^+ \in \widehat {R G}^\phi$ by the equation $b_{ij} = b_{ij}^0 + b_{ij}^+$. We now apply the procedure to every pair $(i,j)$, and define matrices $B^0 = (b_{ij}^0)$ and $B^+ = (b_{ij}^+)$; observe that we have $B =B^0 + B^+$. By the choice of $C$, the supports of the entries of $A B^+$ are mapped by $\phi$ to $(0,\infty)$. But $\mathrm{I} = AB = AB^0 + AB^+$, and so $AB^0$ is a $\phi$-identity. We set $X = B^0$. The second assertion follows from \cref{phi id}, observing that $AX$ is a square matrix. \end{proof} Note that, in particular, the above result implies that $\Sigma(A)$ is an open subset of $H^1(G;\mathbb{R})$, as one would expect. \smallskip As before, let $\mathbb{K}$ be a skew-field, $\mathbb{K} H$ a twisted group ring, and $\mathcal{D}$ the Ore localisation of $\mathbb{K} H$.
In order to study the Novikov rings $\widehat {\mathbb{K} H}^\phi$, we will use the minima $\mu_\phi$: recall that we introduced them (in \cref{minima}) as maps $\mathbb{K} H \to \mathbb{K} H$. It is however immediate that the definition extends and defines multiplicative maps $\mu_\phi \colon \widehat {\mathbb{K} H}^\phi \to \mathbb{K} H$. In what follows, all tensoring takes place over $\mathbb{K} H$. \begin{lem} \label{malcev-neumann H} For every $\phi \in H^1(H;\mathbb{R})$ there exists a skew-field $\mathcal F$ such that $\widehat {\mathbb{K} H}^\phi$ and $\mathcal{D}$ both embed into $\mathcal F$ in such a way that the embeddings agree on $\mathbb{K} H$. \end{lem} \begin{proof}[Sketch proof] The skew-field $\mathcal F$ will be the Malcev--Neumann skew-field, which we discussed in \cref{malcev-neumann}; a similar embedding was constructed in \cite[Lemma 5.5]{FunkeKielak2018}. Let $\leqslant$ be a biordering of $H$ which makes $\phi \colon H \to \mathbb{R}$ into an order-preserving homomorphism. Then $\mathcal F$, the subset of $\mathbb{K}^H$ consisting of functions with supports well-ordered with respect to $\leqslant$, forms a skew-field. Now $\widehat {\mathbb{K} H}^\phi$ is actually a subset of $\mathcal F$. This also implies that $\mathbb{K} H$ is a subset of $\mathcal F$; let us denote the resulting embedding by $\iota$. It is easy to see that the embedding $\iota \colon \mathbb{K} H \hookrightarrow \mathcal F$ induces an embedding of the Ore localisation of $\mathbb{K} H$, that is of $\mathcal{D}$ -- we simply set $\iota(pq^{-1}) = \iota(p) \iota(q)^{-1}$. The injectivity of this extended $\iota$ is immediate. \end{proof} We obtain the following immediate corollary. \begin{cor} \label{no zero divs} The ring $\widehat {\mathbb{K} H}^\phi$ has no zero-divisors. \end{cor} Recall that we have implicit structural functions in the definition of $\mathbb{K} H$.
The same functions (via formula \eqref{twisted conv}) are used to define multiplication in $\widehat {\mathbb{K} H}^\phi$; in fact they can be also used to define the multiplications \[\mathbb{K} H \times \mathbb{K}^H \to \mathbb{K}^H \textrm{ and } \mathbb{K}^H \times \mathbb{K} H \to \mathbb{K}^H\] \begin{dfn} We say that an element $x\in \mathcal{D}$ is \emph{represented} by $y \in \mathbb{K}^H$ if and only if $x=pq^{-1}$ with $p,q \in \mathbb{K} H$ and $p = yq$. \end{dfn} Note that $x$ can be represented by different elements in $\mathbb{K}^H$, since $\mathbb{K}^H$ contains zero-divisors with respect to the twisted convolution. The situation is however different if $x$ is represented by an element of $\widehat{\mathbb{K} H}^\phi$. \begin{lem} \label{bns unique} Let $x \in \mathcal{D}$ be an element represented by $y$ and $y'$, both lying in $\widehat{\mathbb{K} H}^\phi$ for some $\phi \in H^1(H;\mathbb{R})$. Then $y=y'$. \end{lem} \begin{proof} By definition, we have $x = pr^{-1} = p'r'^{-1}$ with $p,r,p',r' \in \mathbb{K} H$ and $r,r' \neq 0$. We also have $p = yr$ and $p'=y'r'$. Now, using the Ore condition, there exist $q,s \in \mathbb{K} H$ such that $s\neq 0$ and $sp= qr$. Now $pr^{-1} = p'r'^{-1}$ means precisely that $sp'=qr'$ as well. Thus \[ qr' = sp' = sy'r' \] and so $(q-sy')r' = 0$. As $r' \neq 0$ and $\widehat{\mathbb{K} H}^\phi$ contains no zero-divisors, we conclude that $q = sy'$. Therefore \[ syr = sp = qr = sy'r \] and arguing as above (using $s,r \neq 0$) we see that $y - y' = 0$. \end{proof} Because of the above lemma, we will say that $x\in \mathcal{D}$ is equal to $y \in \widehat{\mathbb{K} H}^\phi$ if and only if it is represented by $y$. \begin{lem} \label{unique RI} Let $A$ be a square matrix over $\mathbb{K} H$, such that $A \otimes \mathcal{D}$ is invertible. Let $C\subseteq H^1(H;\mathbb{R})$ be a non-empty subset. If $A\otimes \widehat{\mathbb{K} H}^{C}$ admits a right-inverse, then this inverse is unique. 
\end{lem} \begin{proof} Take $\phi \in C$. Let $\mathcal F$ denote the skew-field from \cref{malcev-neumann H}, which contains $\widehat {\mathbb{K} H}^\phi$ and $\mathcal{D}$ as subrings. We will carry all matrix multiplication out in $\mathcal F$. Suppose that we have two right-inverses, $X$ and $X'$. Then \[A \cdot (X - X') = 0\] But we have assumed that $A\otimes \mathcal{D}$ is invertible, and so there exists a matrix $Y$ over $\mathcal{F}$ such that $Y \cdot A = \mathrm{I}$. Hence \[ X - X' =Y\cdot A \cdot (X - X' ) = Y \cdot 0 = 0 \] and so $X - X' = 0$. \end{proof} \begin{cor} \label{bns union} Let $A$ be a square matrix over $\mathbb{K} H$, such that $A \otimes \mathcal{D}$ is invertible. Let $X$ be a right inverse of $A\otimes \widehat{\mathbb{K} H}^C$ and $X'$ be a right inverse of $A \otimes \widehat{\mathbb{K} H}^{C'}$ for some $C,C' \subseteq H^1(H;\mathbb{R})$ with $C \cap C' \neq \emptyset$. Then $X = X'$, and $X$ is a right inverse of $A\otimes \widehat{\mathbb{K} H}^{C \cup C'}$. \end{cor} \begin{proof} Let $\phi \in C \cap C'$. By assumption, there exist right inverses $X$ and $X'$ of, respectively, $A\otimes \widehat{\mathbb{K} H}^C$ and $A \otimes \widehat{\mathbb{K} H}^{C'}$. In particular, both $X$ and $X'$ can be viewed as matrices over $\widehat{\mathbb{K} H}^\phi$, and so $X = X'$ by \cref{unique RI}. Now $X$ is a matrix over $\widehat{\mathbb{K} H}^{C} \cap \widehat{\mathbb{K} H}^{C'} =\widehat{\mathbb{K} H}^{C \cup C'}$. \end{proof} \begin{lem} \label{bns convex} Let $S \subseteq H^1(H;\mathbb{R})$ be any subset, and let $C$ denote its convex hull. Then \[ \widehat{\mathbb{K} H}^S = \widehat{\mathbb{K} H}^C \] \end{lem} \begin{proof} Let $x \in \widehat{\mathbb{K} H}^S$ be any element. Take $\phi_1, \dots, \phi_k \in S$, and $t_1, \dots, t_k \in [0,1]$ with $\sum_{i=1}^k t_i = 1$. Let $\psi = \sum_{i=1}^k t_i \phi_i$. It is enough to show that $x \in \widehat{\mathbb{K} H}^\psi$. 
Pick $\kappa \in \mathbb{R}$ and for each $\rho \in H^1(H;\mathbb{R})$ consider the truncation $x_{\rho,\kappa}$ defined by \[ x_{\rho,\kappa}(h) = \left\{ \begin{array}{cl} x(h) & \textrm{if } \rho(h) \leqslant \kappa \\ 0 & \textrm{otherwise} \end{array} \right. \] We need to show that $\operatorname{supp} x_{\psi,\kappa}$ is finite for every $\kappa$. To this end, take $h \in \operatorname{supp} x_{\psi,\kappa}$. If $\phi_i(h) > \kappa$ for all $i$, then the same is true for $\psi(h)$, which is a contradiction. So there exists $i$ such that $\phi_i(h) \leqslant \kappa$. Thus $h$ lies in the support of $x_{\phi_i, \kappa}$. So $\operatorname{supp} x_{\psi, \kappa}$ is contained in the union of the supports of the elements $x_{\phi_i, \kappa}$. But this union is finite. \end{proof} Let us summarise the results obtained so far. \begin{prop} \label{bns summary} Let $A$ be a square matrix over $\mathbb{K} H$, such that $A \otimes \mathcal{D}$ is invertible. Every connected component $C$ of $\Sigma(A)$ is open and convex, and $A \otimes \widehat{\mathbb{K} H}^C$ is right-invertible. \end{prop} \begin{proof} For every $\phi \in C$, by \cref{bns open}, there exists an open neighbourhood $U_\phi$ of $\phi$ in $\Sigma(A)$ such that $A \otimes \widehat{\mathbb{K} H}^{U_\phi}$ admits a right-inverse $X_\phi$. We may assume $U_\phi$ to be connected. Since $C$ is a connected component, it is immediate that $U_\phi \subseteq C$ for every $\phi$. Consider the transitive closure $\sim$ of the relation on $\mathcal U = \{ U_\phi \mid \phi \in C\}$ given by having non-empty intersection. The union of all sets $U_\phi$ belonging to an equivalence class of $\sim$ is open, and two such unions for two distinct equivalence classes are disjoint. Since $C$ is connected, $\sim$ admits only one equivalence class.
Therefore for any two sets $U_\phi$ and $U_\psi$ in $\mathcal U$ there exists a finite sequence of sets $U_\phi = U_0, U_1, \dots, U_n = U_\psi$ in $\mathcal U$ such that $U_i \cap U_{i+1} \neq \emptyset$ for every $i$. Let $X_i$ denote the matrix corresponding to $U_i$. \cref{bns union} implies that $X_i = X_{i+1}$ for every $i$, and therefore $X_\phi = X_\psi$. Thus, letting $X = X_\phi$ for any $\phi \in C$, we see that $X$ lies over $\bigcap_\phi \widehat{\mathbb{K} H}^{U_\phi} = \widehat{\mathbb{K} H}^{\bigcup_\phi U_\phi} = \widehat{\mathbb{K} H}^C$. The connected component $C$ is convex by \cref{bns convex} and open by the discussion above (or by \cref{bns open}). \end{proof} Before stating the main theorem of this section, let us look at the following lemma, whose central purpose is to elucidate the statement of the theorem, as well as some arguments used in its proof. \begin{lem} \label{inv of novikov elements} Let $x \in \mathbb{K} H$ be any element, let $\phi \in H^1(H;\mathbb{R})$ be any character, and let $C$ be the unique dual of a face of $P(x)$ containing $\phi$. The following are equivalent. \begin{enumerate} \item The element $x$ is invertible in $\widehat {\mathbb{K} H}^C$. \item The element $x$ is invertible in $\widehat {\mathbb{K} H}^\phi$. \item The support $\operatorname{supp} \mu_\phi(x)$ is a singleton. \item The dual $C$ is a dual of a vertex of $P(x)$. \end{enumerate} \end{lem} \begin{proof} \noindent \textbf{(1)$\Rightarrow$(2)} This is immediate, since $\widehat {\mathbb{K} H}^C \leqslant \widehat {\mathbb{K} H}^\phi$ by definition. \medskip \noindent \textbf{(2)$\Rightarrow$(3)} There exists $z \in \widehat {\mathbb{K} H}^\phi$ such that $xz=1$. Using the minima we see that \[ \mu_\phi(x) \mu_\phi(z) = 1 \] as well. But the minima lie in $\mathbb{K} H$, and the units in $\mathbb{K} H$ have support of size $1$ by \cref{units of biord}. Thus $\mu_\phi(x)$ is supported on a singleton.
\medskip \noindent \textbf{(3)$\Rightarrow$(4)} We have $F_\phi\big( P(x) \big) = P\big( \mu_\phi(x) \big)$, which is a singleton. Thus, $\phi$ lies in a dual of a vertex of $P(x)$, by the very definition of dual. But this dual is $C$, and so $C$ is as claimed. \medskip \noindent \textbf{(4)$\Rightarrow$(1)} For every $\psi \in C$, we have $F_\psi\big( P(x) \big)$ equal to a singleton, say $h \in H$. Thus $\mu_\psi(x) = \lambda h$ with $\lambda \in \mathbb{K} \s- \{0\}$. Therefore $\mu_\psi(x)$ is invertible in $\mathbb{K} H$, and $y = \mu_\psi(x)^{-1} x$ is an element of $\mathbb{K} H$ which is a $\psi$-identity. Hence $y$, and therefore also $x$, is invertible over $\widehat {\mathbb{K} H}^{U_\psi}$, where $U_\psi$ is some open neighbourhood of $\psi$, by \cref{phi id}. Since the open sets $U_\psi$ cover $C$, we have $C\subseteq \Sigma(x)$. Therefore $x$ is invertible in $\widehat {\mathbb{K} H}^C$ by \cref{bns summary} (applied to the $1\times 1$ matrix $x$). \end{proof} We may view $x$ above as a $1 \times 1$ matrix; in this case we have $P(x) = P(\det x \otimes \mathcal{D})$, and the invertibility of $x$ over $\widehat {\mathbb{K} H}^\phi$ is equivalent to $\phi \in \Sigma(x)$. The following result generalises this to square matrices of arbitrary size, and constitutes the second main result of the article. \begin{thm}[$\Sigma$-invariants of matrices] \label{K-bns matrix} Let $\mathcal{D}$ denote the Ore localisation of $\mathbb{K} H$, and suppose that $A$ is a square matrix over $\mathbb{K} H$ with $A \otimes \mathcal{D}$ invertible. Let $\phi \in H^1(H;\mathbb{R})$ be a character, and let $C$ denote the unique dual of a face of $P(A)$ containing $\phi$. The following are equivalent. \begin{enumerate} \item The matrix $A\otimes \widehat {\mathbb{K} H}^C$ is right-invertible. \item We have $\phi \in \Sigma(A)$. \item The dual $C$ is a dual of a vertex of $P(A)$. 
\end{enumerate} \end{thm} \begin{proof} \noindent \textbf{(1)$\Rightarrow$(2)} This is immediate, since $\widehat {\mathbb{K} H}^C \leqslant \widehat {\mathbb{K} H}^\phi$ by definition, and $\phi \in \Sigma(A)$ means precisely that $A\otimes \widehat {\mathbb{K} H}^\phi$ is right-invertible. \medskip \noindent \textbf{(2)$\Rightarrow$(3)} Set $P = P(A) = P(\det A \otimes \mathcal{D})$. Recall that this is a single polytope by \cref{single poly} -- it cannot be empty, since $A\otimes \mathcal{D}$ is invertible and so its Dieudonn\'e determinant is not zero. We start by taking $\phi \in \Sigma(A)$, and claiming that $C$ is a dual of a vertex of $P$. By \cref{entries far away}, there exists a matrix $X$ over $\mathbb{K} H$ such that $AX$ is a $\phi$-identity. Set $AX = (x_{ij})$. Consider the last row of $AX$. Its last entry $x_{nn}$ is invertible over $\widehat {\mathbb{K} H}^\phi$ by \cref{phi id}, since it is itself a $\phi$-identity. Thus, we may multiply $AX$ on the right by a number of elementary matrices in such a way that the resulting matrix $A'$ has zeroes in the last row, with the exception of the last entry (which remains unchanged). Observe that we may use elementary matrices whose non-zero off-diagonal entries are precisely the elements $-x_{nn}^{-1}x_{ni}$ with $i \in \{1, \dots, n-1\}$. Each of these off-diagonal entries is a product of an element of $\mathbb{K} H$ and an inverse of such an element, the inverse taken in $\widehat{ \mathbb{K} H}^\phi$. Thus, we may see each of these entries as lying over $\mathcal{D}$, since $\mathcal{D}$ contains all inverses of elements in $\mathbb{K} H$. Moreover, the support of such an off-diagonal entry is taken by $\phi$ into $(0,\infty)$, since $AX$ is a $\phi$-identity, and therefore $\phi(\operatorname{supp} x_{nn}) \subset [0,\infty)$ and $\phi(\operatorname{supp} x_{ni}) \subset (0,\infty)$, implying that $\phi(\operatorname{supp} x_{nn}^{-1} x_{ni}) \subset (0,\infty)$ as well.
It is easy to see that multiplying a $\phi$-identity by elementary matrices whose non-zero off-diagonal entries are as above yields another $\phi$-identity. Therefore $A'$ is a $\phi$-identity. Repeating the process for other rows, we conclude the existence of a matrix $Y$ over $\mathcal{D}$, with $\det Y = 1$, and such that $AXY$ is upper triangular, and the diagonal entries are $\phi$-identities. Hence $\det AX\otimes \mathcal{D} = \det AXY\otimes \mathcal{D}$ can be represented by an element $x \in \widehat{ \mathbb{K} H}^\phi$ which is also a $\phi$-identity. We can also view $x$ as an element in $\mathcal{D}$, that is as a fraction of elements $p,q \in \mathbb{K} H$ (since $\mathcal{D}$ is the Ore localisation of $\mathbb{K} H$). Thus we have $x = p q^{-1}$, and so \[ xq = p \] Since $p$ and $q$ have finite support in $H$, we can carry out the multiplication above in $\widehat {\mathbb{K} H}^\phi$. Since $x$ is a $\phi$-identity, we have $\mu_\phi(x) = 1$, and therefore $\mu_\phi(q) = \mu_\phi(p)$. Thus we have \[ F_\phi(P(p)) = F_\phi(P(q)) \] But we know that $P(\det AX \otimes \mathcal{D})$ is a single polytope by \cref{single poly}, and so $F_\phi\big(P(\det AX \otimes \mathcal{D})\big) = F_\phi(P(p)) - F_\phi(P(q))$ is a singleton. We have \[P(AX) = P(A) + P(X)\]since $P\colon \mathcal{D}^\times \to \P(H)$ is a homomorphism. Both $P=P(A)$ and $P(X)$ are single polytopes, again using \cref{single poly}. Therefore so are $F_\phi\big( P(A) \big)$ and $F_\phi\big( P(X) \big)$, and their sum is a singleton. This is only possible when both $F_\phi\big( P(A) \big)$ and $F_\phi\big( P(X) \big)$ are singletons. In particular, $F_\phi\big( P(A) \big)$ is a singleton, which means precisely that $C$ is a dual of a vertex of $P(A)$ (as $\phi \in C$). This proves the claim. \medskip \noindent \textbf{(3)$\Rightarrow$(1)} Now let $C$ be a dual of a vertex of $P$. We aim at showing that $A\otimes \widehat{\mathbb{K} H}^C$ admits a right-inverse. 
Since $C$ is open, there exists $\phi \in C \cap H^1(H;\mathbb{Z})$. Let us take such a $\phi$. We will proceed in three steps. \step{1} We claim that there exists a $\phi$-flat polytope $Q$, such that for every intersection $C_0$ of a dual of a vertex of $Q$ with $C$, the matrix $A\otimes \widehat{ \mathbb{K} H}^{C_0}$ is right-invertible. To prove the claim we start by introducing some extra notation (which is exactly the same as the one used in the proof of \cref{single poly}). Let $L = \ker \phi$. The ring $\mathbb{K} H$ embeds into $\mathbb{L} \mathbb{Z}$, where $\mathbb{L}$ is the Ore localisation of $\mathbb{K} L$, and the embedding is induced by $\phi \colon H \to \mathbb{Z}$. We now use Euclid's algorithm (treating $\mathbb{L} \mathbb{Z}$ as a Laurent polynomial ring in one variable): it gives us a matrix $X$ over $\mathbb{L} \mathbb{Z}$ (a product of elementary matrices) such that $(A\otimes \mathbb{L} \mathbb{Z}) \cdot X = Y$ and $Y$ is an upper-triangular matrix over $\mathbb{L} \mathbb{Z}$. Also, $\det X \otimes \mathcal{D} = 1$. Let us focus first on $X$. It is a matrix over $\mathbb{L} \mathbb{Z}$, and $\mathbb{L}$ is the Ore localisation of $\mathbb{K} L$. Thus, there exists an element $p \in \mathbb{K} L$ such that for every entry $x$ of $X$ we have $px \in (\mathbb{K} L) \mathbb{Z}$ (see \cref{Ore rmk}). This implies that for every $\zeta \in H^1(H;\mathbb{R})$, if $p$ is invertible in $\widehat {\mathbb{K} H}^\zeta$ then every entry of $X$ can be represented by an element in $\widehat {\mathbb{K} H}^\zeta$ (in this case, we will simply say that $X$ lies over $\widehat {\mathbb{K} H}^\zeta$). In fact more is true: if $C'$ denotes a dual of some vertex of $P(p)$, then $X$ lies over $\widehat {\mathbb{K} H}^{C'}$, as \cref{inv of novikov elements} tells us that $p$ is invertible in $\widehat {\mathbb{K} H}^{C'}$. Now let us look at $Y=(y_{ij})$. Since $Y$ is upper-triangular, we have $\det Y\otimes \mathcal{D}$ represented by $\prod_i y_{ii}$.
Therefore \[ P=P(\det A \otimes \mathcal{D}) = P(\det Y \otimes \mathcal{D}) = \sum_i P(y_{ii}) \] Take $\psi \in C$. Since $\phi, \psi \in C$, we have $F_\phi(P) = F_\psi(P)$, and so \[ \sum_i F_\phi\big( P(y_{ii}) \big) = F_\phi(P) = F_\psi(P) = \sum_i F_\psi\big( P(y_{ii}) \big) \] Now \cref{faces decomp} implies that \[ F_\psi P(y_{ii}) = F_\phi P(y_{ii}) \] for every $i$, since the decomposition of a face of $P$ into a sum of faces of the polytopes $P(y_{ii})$ is unique. Recall that $y_{ii}$ lies in $\mathbb{L} \mathbb{Z}$ for every $i$. Let us now argue as for the matrix $X$ before: we see that for every $i$ there exists $q_i \in \mathbb{K} L$ such that \[q_i y_{ii} \in (\mathbb{K} L) \mathbb{Z}\] For notational convenience, let us fix $i$ for the moment. We then have \[q_i y_{ii} = (r_0 + r_1 z + \dots + r_m z^m)z^n\] for some $m \in \mathbb{N}$ and $n \in \mathbb{Z}$, where $z$ stands for a generator of $\operatorname{im} \phi = \mathbb{Z}$, the coefficients $r_0, \dots, r_m$ lie in $\mathbb{K} L$, and $r_0 \neq 0 \neq r_m$. We have \[ \mu_\phi (q_i y_{ii}) = r_0 z^n \] Since $q_i \in \mathbb{K} L$, we also have $\mu_\phi(q_i) = q_i$, and so \[ q_i \cdot \mu_\phi ( y_{ii}) = r_0 z^n \] Applying the polytope map and using that $F_\psi P(y_{ii}) = F_\phi P(y_{ii})$ we obtain \[ P(q_i) + F_\psi\big( P(y_{ii}) \big) = P(r_0 z^n) \] It is now clear that replacing $P(q_i)$ by $F_\psi\big( P(q_i) \big)$ on the left-hand side results in obtaining a polytope on the right-hand side which is a subset of $P(r_0 z^n)$. But we also have \[ F_\psi\big(P(q_i)\big) + F_\psi\big( P(y_{ii}) \big) = F_\psi\big( P((r_0 + r_1 z + \dots + r_m z^m)z^n) \big) \] We conclude that $F_\psi\big( P((r_0 + r_1 z + \dots + r_m z^m)z^n) \big)$ is a subset of $P(r_0 z^n)$, which implies that $\mu_\psi((r_0 + r_1 z + \dots + r_m z^m)z^n) = \mu_\psi(r_0 z^n)$. Let $C''$ denote the non-empty intersection of $C$ with a dual of some specified vertex of $P(r_0)$, and take $\rho \in C''$.
Since $\rho$ lies in a dual of a vertex of $P(r_0)$, and hence also in a dual of a vertex of its translate $P(r_0z^n) = P(r_0) + P(z^n)$, \cref{inv of novikov elements} tells us that $\mu_\rho(r_0 z^n)$ is supported on a singleton. Taking $\psi = \rho$ above, we see that \[ \mu_\rho\big((r_0 + r_1 z + \dots + r_m z^m)z^n\big) = \mu_\rho(r_0 z^n) \] is also supported on a singleton. Therefore, using \cref{inv of novikov elements} again, we see that $q_iy_{ii} = (r_0 + r_1 z + \dots + r_m z^m)z^n$ is invertible in $\widehat{ \mathbb{K} H}^{C'''}$, where $C'''$ is a dual of a vertex of $P(q_i y_{ii})$ containing $\rho$. Now it is clear that $C'' \subseteq C'''$ since for every $\xi \in C''$ we have \[ \mu_\xi(q_iy_{ii}) = \mu_\xi(r_0 z^n) = \mu_\rho(r_0 z^n) = \mu_\rho(q_iy_{ii}) \] Hence $q_iy_{ii}$, and therefore also $y_{ii}$, is invertible in $\widehat{ \mathbb{K} H}^{C''}$. We set $s_i = r_0$, and again allow $i$ to vary. We set $Q = P(p) + \sum P(s_i)$. Let $C_0$ be the intersection of $C$ with some dual of a vertex of $Q$. A dual of a vertex of $Q$ is contained in duals of vertices of the summands $P(p)$ and $P(s_i)$. Therefore, by the discussion above, the matrix $A\otimes \widehat {\mathbb{K} H}^{C_0}$ is right-invertible, as claimed. \step{2} We now claim that $C \s- \Sigma(A)$ has empty interior and is \emph{$\phi$-convex}, that is, for any $\psi \in C \s- \Sigma(A)$ we have $s \psi + t \phi \in C \s- \Sigma(A)$ for every $(s,t) \in (0,\infty) \times [0,\infty)$. The first property is immediate, as $C \cap \Sigma(A)$ contains the intersection of $C$ with the union of all duals of vertices of $Q$ (by Step $1$). The union of all duals of vertices of any polytope is dense in $H^1(H;\mathbb{R})$; this in particular applies to $Q$. Since $C$ is open, its intersection with a dense subset of $H^1(H;\mathbb{R})$ is dense in $C$. Now let us look at the second property. Since $Q$ is $\phi$-flat, every dual of a face of $Q$ is $\phi$-convex (this is easy to verify).
Also, the intersection of such a dual with $C$ is still $\phi$-convex, as $C$ is $\phi$-convex (since it is convex and contains $\phi$), and the intersection of two $\phi$-convex sets is $\phi$-convex (this is immediate). Therefore, it is enough to show that $C \cap \Sigma(A)$, and therefore also $C \s- \Sigma(A)$, is the intersection of $C$ with some union of duals of faces of $Q$. Let $\psi \in C\cap \Sigma(A)$ be any character, and let $F$ denote a face of $Q$ in whose dual $\psi$ lies. Let $C_1$ denote the connected component of $\psi$ in $C \cap \Sigma(A)$. By \cref{bns open}, there exists an open neighbourhood $U$ of $\psi$ in $\Sigma(A)$. It is easy to see that for every vertex $v$ of $F$, the neighbourhood $U$ intersects some dual $C_v$ of $v$ in a non-trivial manner. \cref{bns summary} implies that $C_1$ contains $C \cap C_v$ for every $v$, as the sets $C_v$ are contained in $C\cap\Sigma(A)$ by Step $1$, and they are convex and thus connected; moreover, \cref{bns summary} says that $C_1$ is convex. Thus, $C_1$ contains the entire intersection of $C$ with the dual of $F$ containing $\psi$. This shows that every $C_1$ is the intersection of $C$ with some union of duals of faces of $Q$, as claimed. \step{3} We now claim that $C \subseteq \Sigma(A)$, which will finish the proof thanks to \cref{bns summary}, since $C$ is connected, and therefore is a subset of a connected component of $\Sigma(A)$. Suppose for a contradiction that $C \not\subseteq \Sigma(A)$: let $\rho \in C \s- \Sigma(A)$ be a character. Since $C$ is a dual of a vertex, it contains an open neighbourhood $U$ of $\rho$; we find integral characters $\phi_1, \dots, \phi_k \in U$ which form a basis of the $\mathbb{R}$-vector space $H^1(H;\mathbb{R})$, and such that $\rho$ cannot be generated (over $\mathbb{R}$) using fewer than $k$ of these characters. We span a non-degenerate $k$-simplex in $H^1(H;\mathbb{R})$ with vertices $\rho, \phi_1, \dots, \phi_k$, which necessarily has non-empty interior.
Now we use the fact that $C \s- \Sigma(A)$ is $\phi_i$-convex for every $i$ (shown in Step $2$, as $\phi$ was an arbitrary element of $C \cap H^1(H;\mathbb{Z})$), and conclude that the interior of the simplex lies in $C \s- \Sigma(A)$. But this contradicts the fact that $C \s- \Sigma(A)$ has empty interior. \end{proof} \section{Agrarian groups} \label{sec agrarian} In this section we introduce agrarian groups, that is, groups whose (untwisted) integral group rings embed in skew-fields (division rings). Such groups will be of central importance for us -- the embedding into a skew-field allows us to take determinants of square matrices over the group ring; the determinants in turn lead to their Newton polytopes which in many cases control the Bieri--Neumann--Strebel invariants. To prove the inheritance properties and to list known classes of examples of agrarian groups we first need to cover the Atiyah conjecture. \subsection{The Atiyah conjecture} Recall our important convention: whenever we talk about a matrix, we will explicitly state over which ring it lies. Let $A$ be a matrix over the integral group ring $\mathbb{Z} G$ of a group $G$. Naturally, $A$ represents a linear map between two finitely generated free right $\mathbb{Z} G$-modules, say \[ A \colon \mathbb{Z} G^n \to \mathbb{Z} G^m \] Tensoring with $L^2(G)$ we obtain \[ A \otimes L^2(G) \colon L^2(G)^n \to L^2(G)^m \] Let $\mathcal N(G)$ denote the \emph{von Neumann algebra} of $G$, that is, the algebra of bounded $G$-equivariant operators on $L^2(G)$. The kernel of $A \otimes L^2(G)$ is a \emph{Hilbert $\mathcal{N}(G)$-module}, which essentially means that it is a topological space with a continuous module structure over $\mathcal N(G)$. The von Neumann algebra has two properties of interest here. Firstly, the non-zero-divisors in $\mathcal N(G)$ satisfy the Ore condition, and so $\mathcal N(G)$ embeds into its Ore localisation.
Secondly, Hilbert $\mathcal{N}(G)$-modules have a \emph{von Neumann dimension}, which is an element of $[0,\infty]$ -- for details see the book of L\"uck~\cite[Section 1.1]{Lueck2002}. \begin{dfn} Let $G$ be a torsion-free group. We say that $G$ satisfies the \emph{Atiyah conjecture} if and only if for every matrix $A$ over $\mathbb{Z} G$, the von Neumann dimension of $\ker\big( A \otimes L^2(G)\big)$ is an integer. \end{dfn} The Atiyah conjecture can also be formulated for groups with torsion -- for such a formulation, as well as a discussion of counterexamples, see \cite[Chapter 10]{Lueck2002}. There are however no torsion-free groups known which do not satisfy the Atiyah conjecture. Also, one can consider the Atiyah conjecture over group rings of $G$ with other coefficients. This is however not relevant to our discussion here. \begin{thm}[Linnell~{\cite{Linnell1993}}; Schick~{\cite{Schick2002}}; Linnell--Okun--Schick~{\cite{Linnelletal2012}}; Schreve~{\cite{Schreve2014}}] \label{atiyah groups} Let $\mathcal C_1$ be the smallest class of groups containing all free groups which is closed under directed unions and extension by elementary amenable groups. Let $\mathcal C_2$ be the smallest class of groups containing the trivial group, closed under subgroups, colimits and inverse limits over directed systems, and such that if we have a quotient $q \colon G \to H$ with $G$ torsion-free, $H$ elementary amenable and $q^{-1}(F) \in \mathcal C_2$ for every finite subgroup $F$ of $H$, then $G \in \mathcal C_2$. Let $\mathcal C_3$ denote the union of the class of \{right-angled Artin\}-by-\{elementary amenable\} groups and the class of \{right-angled Coxeter\}-by-\{elementary amenable\} groups, where in the latter case we additionally require the finite subgroups of the elementary amenable quotients to be $2$-groups. Let $\mathcal C_4$ denote the class of virtually cocompact special groups.
If $G$ is a torsion-free group lying in $\mathcal C_1 \cup \mathcal C_2\cup \mathcal C_3 \cup \mathcal C_4$, then $G$ satisfies the Atiyah conjecture. \end{thm} Note that $\mathcal C_2$ contains all residually \{torsion-free solvable\} groups, which is a very rich class. Also, let us elaborate on the class $\mathcal C_4$: it contains random groups (in the density model) with density less than $1/6$ -- this follows from the result of Ollivier--Wise~\cite{OlliverWise2011}, who have shown that such random groups are cocompactly cubulated, combined with the fact that they are hyperbolic, and cocompactly cubulated hyperbolic groups are virtually cocompact special by the work of Agol~\cite[Theorem 1.1]{Agol2013}. \begin{lem}[{\cite[Lemma 10.4]{Lueck2002}}] \label{atiyah inherit} The class of torsion-free groups satisfying the Atiyah conjecture is closed under taking subgroups and directed unions. \end{lem} In fact one can also give statements for groups with torsion, but they will not be relevant in our context. We did not give a proper definition of the von Neumann dimension appearing in the Atiyah conjecture, since we will be using a reformulation due to Linnell. To state it we need one more definition. \begin{dfn}[Division closure] Let $R$ be a subring of a ring $S$. The \emph{division closure} of $R$ in $S$ is the smallest subring of $S$ containing $R$ such that if an element of this subring is invertible in $S$, then it is also invertible in the subring. \end{dfn} \begin{thm}[Linnell~{\cite{Linnell1993}}] \label{atiyah field} Let $G$ be a torsion-free group, and let $\mathcal{D}(G)$ denote the division closure of $\mathbb{Z} G$ in the Ore localisation of the von Neumann algebra $\mathcal N(G)$. The group $G$ satisfies the Atiyah conjecture if and only if $\mathcal{D}(G)$ is a skew-field. \end{thm} Note that in Linnell's paper, $\mathcal{D}(G)$ denotes the division closure of $\mathbb{C} G$, rather than $\mathbb{Z} G$.
The proofs however work verbatim for $\mathbb{Z} G$. \begin{prop} \label{props of D} Let $G$ be a torsion-free group satisfying the Atiyah conjecture. Every automorphism of $G$ extends to an automorphism of $\mathcal{D}(G)$. Also, if $K$ is a subgroup of $G$ then the natural embedding $\mathbb{Z} K \hookrightarrow \mathbb{Z} G$ extends to an embedding $\mathcal{D}(K) \to \mathcal{D}(G)$. \end{prop} \begin{proof} Let $x$ be an automorphism of $G$. It is easy to see that $x$ extends to an automorphism of $L^2(G)$, hence of $\mathcal N(G)$, and therefore of the Ore localisation of $\mathcal N(G)$. The second statement follows immediately from observing that the von Neumann algebra of a subgroup embeds into the von Neumann algebra of a supergroup in such a way that non-zero-divisors stay non-zero-divisors. \end{proof} The above proposition allows us in particular to construct a twisted group ring $\mathcal{D}(K) G/K$, when $K$ is normal. \begin{prop}[{\cite[Lemma 10.69]{Lueck2002}}] \label{linnell ore} If $G$ is a torsion-free group satisfying the Atiyah conjecture and if $G$ fits into a short exact sequence \[ K \to G \to H \] where $H$ is finitely-generated and abelian, then the Linnell skew-field $\mathcal{D}(G)$ is isomorphic to the Ore localisation of the twisted group ring $\mathcal{D}(K) H$. \end{prop} Since we are talking about the Atiyah conjecture, let us introduce the $L^2$-homology. \begin{dfn} The $n$-th $L^2$-homology group of a group $G$ is defined to be \[H_n(G;L^2(G))\] with the caveat that one computes the homology topologically, that is, one divides the cycles by the closure of the boundaries. A group $G$ is \emph{$L^2$-acyclic} if and only if its $L^2$-homology vanishes for every $n$. In this context one may in fact take the usual notion of homology, without taking closures. \end{dfn} \begin{prop}[L\"uck~{\cite[Lemma 10.28(3)]{Lueck2002}}] \label{l2 acyclic via D} Let $G$ be a torsion-free group satisfying the Atiyah conjecture.
Then \[\dim_{\mathcal N (G)} H_n(G;L^2(G)) = \dim_{\mathcal{D}(G)}H_n(G;\mathcal{D}(G))\] for every $n$. \end{prop} Again, L\"uck deals with the division closure of $\mathbb{C} G$, but the proof carries over to the case of $\mathcal{D}(G)$ being the division closure of $\mathbb{Z} G$. \subsection{Agrarian and equivariantly agrarian groups} We now introduce the class of groups of main interest to us. \begin{dfn} A group $G$ is \emph{agrarian} if and only if the integral group ring $\mathbb{Z} G$ embeds into a skew-field. If the embedding can be made equivariant with respect to $\operatorname{Aut}(G)$ (which also acts on $\mathbb{Z} G$), then $G$ is said to be \emph{equivariantly agrarian}. \end{dfn} If $\mathbb{Z} G$ embeds in a skew-field $\mathcal{D}$, then we treat $\mathcal{D}$ as a left $\mathbb{Z} G$ module, and a right $\mathcal{D}$ module. Thus we will tensor $\mathbb{Z} G$ modules on the right with $\mathcal{D}$, and obtain right $\mathcal{D}$ modules. Note that $\mathbb{Z} G$ embeds in a skew-field $\mathcal{D}$ if and only if $\mathbb{Q} G$ embeds in $\mathcal{D}$. \smallskip Let us list some properties of agrarian groups. \begin{prop} \begin{enumerate} \item Every equivariantly agrarian group is agrarian. \item Subgroups of agrarian groups are agrarian. \item Countable directed unions of agrarian groups are agrarian. \label{agrarian unions} \item Agrarian groups are torsion free and satisfy the zero-divisor conjecture of Kaplansky, that is their integral group rings have no zero-divisors. \item Torsion-free groups satisfying the Atiyah conjecture are equivariantly agrarian. \item Amenable groups whose integral group rings have no zero-divisors are equivariantly agrarian. \item Biorderable groups are agrarian. \item \{Equivariantly agrarian\}-by-$H$ groups are agrarian, provided that $H$ is biorderable or $H$ is amenable and its twisted group rings with skew-field coefficients have no zero-divisors. 
\item Countable fully-residually agrarian groups are agrarian. \end{enumerate} \end{prop} \begin{proof} \begin{enumerate} \item Obvious. \item Obvious. \item Let $(G_i)$ be a sequence of agrarian groups, and suppose that $j_i \colon \mathbb{Z} G_i \to \mathcal{D}_i$ is an embedding into a skew-field for every $i$. Let $\mathcal{D}$ denote the ultraproduct of the skew-fields $\mathcal{D}_i$; it is a standard exercise to show that $\mathcal{D}$ is itself a skew-field. Now we define $j \colon \mathbb{Z} (\bigcup G_i) \to \mathcal{D}$ by putting $j(g) = (j_i(g))$ for every $g \in \bigcup G_i$ (where we put $j_i(g) = 0$ if $g \not \in G_i$), and then extending linearly. Take $x \in \mathbb{Z} (\bigcup G_i)$: it is clear that $j(x) = 0$ implies that $j_i(x) = 0$ for infinitely many values $i$, and hence that $x = 0$. Thus $j$ is injective. \item Skew-fields have no zero-divisors, and groups with torsion have zero-divisors in their integral group rings. \item Follows from \cref{atiyah field}. \item Note that if $\mathbb{Z} G$ has no zero divisors, then $\mathbb{Q} G$ also does not admit any. Therefore, we may apply \cref{tamari} and \cref{Ore loc}(2),(4) to embed $\mathbb{Q} G$, and hence $\mathbb{Z} G$, into a skew-field in an equivariant way. \item Follows from \cref{malcev-neumann}. \item Let $K \to G \to H$ be an extension, with $K$ equivariantly agrarian. The very definition tells us that the untwisted group ring $\mathbb{Z} K$ can be embedded into a skew-field $\mathbb{K}$ in an $\operatorname{Aut}(K)$-equivariant fashion. In particular, the embedding is equivariant with respect to the conjugation action of $G$ on $K$, and so we have \[ \mathbb{Z} G = (\mathbb{Z} K) H \hookrightarrow \mathbb{K} H \] Now the statement follows from \cref{malcev-neumann} when $H$ is biorderable and from \cref{tamari} when $H$ is amenable and $\mathbb{K} H$ has no zero-divisors. 
\item This is similar to \cref{agrarian unions}: if $G$ is fully-residually agrarian, then we obtain a sequence of epimorphisms $q_i \colon G \to H_i$ such that every $H_i$ is agrarian, and for every $x \in \mathbb{Z} G$ there exists $j_0$ such that $q_j(x) \neq 0$ for every $j \geqslant j_0$, where $q_j$ now denotes the induced map on group rings. We embed $\mathbb{Z} H_i$ into a skew-field $\mathcal{D}_i$, and then take $\mathcal{D}$ to be the ultraproduct of the skew-fields. The maps $q_j$ define an embedding $\mathbb{Z} G \hookrightarrow \mathcal{D}$. \qedhere \end{enumerate} \end{proof} We see that agrarian groups form a large class -- they contain torsion-free groups satisfying the Atiyah conjecture (including all torsion-free elementary amenable groups and all free groups), but they also contain extensions of such groups by free groups (as free groups are biorderable). Thus, for example, free-by-free groups are agrarian, but are not all known to satisfy the Atiyah conjecture; a related class of hyperbolic extensions of free groups has been studied by Dowdall--Taylor~\cite{DowdallTaylor2018}. Another example of agrarian groups which are not known to satisfy the Atiyah conjecture is the class of fundamental groups of $4$-manifolds which up to homotopy are surface-by-surface bundles (studied e.g. by Hillman~\cite{Hillman1991}) -- again, these are agrarian since they are Atiyah-by-biorderable. No torsion-free group is known to be non-agrarian. \begin{lem} \label{agrarian embedding} Let $G$ be an agrarian group, and suppose that $K \to G \to H$ is an exact sequence of groups. Then $\mathbb{Z} K$ embeds into a skew-field $\mathbb{K}$ in a $G$-equivariant way, and so we have an embedding \[ \mathbb{Z} G \cong (\mathbb{Z} K) H \hookrightarrow \mathbb{K} H \] \end{lem} \begin{proof} Let $\mathbb{K}$ be the skew-field provided by the definition of $G$ being agrarian. Since $G$ embeds therein, we have an action of $G$ on $\mathbb{K}$ by conjugation.
It is clear that this action preserves $\mathbb{Z} K$. \end{proof} Note that if we know that $G$ is torsion-free and satisfies the Atiyah conjecture, then we may take $\mathbb{K} = \mathcal{D}(K)$ above, and then the Ore localisation of $\mathcal{D}(K) H$ is isomorphic to $\mathcal{D}(G)$ by \cref{linnell ore}. \section{Applications} \label{sec apps} Throughout the applications $G$ is a group, and we set $H = \fab{G}$, the free part of the abelianisation of $G$, $K = \ker( G \to \fab{G})$, and identify the untwisted group ring $\mathbb{Z} G$ with the twisted group ring $(\mathbb{Z} K) H$, as in \cref{key example}. If we know that $G$ is agrarian (which we typically will), we will embed $\mathbb{Z} K$ into a skew-field $\mathbb{K}$, as explained in \cref{agrarian embedding}, and hence obtain an embedding \[ \mathbb{Z} G \cong (\mathbb{Z} K) H \hookrightarrow \mathbb{K} H \] We will use $\mathcal{D}$ to denote the Ore localisation of $\mathbb{K} H$. Tensoring in this section is always taken over $\mathbb{Z} G$. \subsection{Bieri--Neumann--Strebel invariants} In \cite{Bierietal1987} Bieri--Neumann--Strebel introduced the geometric invariant $\Sigma(G)$ (also known as the $\Sigma$-invariant or the BNS-invariant), where $G$ is any finitely-generated group. Later, Bieri--Renz~\cite{BieriRenz1988} extended the definition to a series of invariants $\Sigma^m(G;\mathbb{Z})$, with $\Sigma(G) = \Sigma^1(G;\mathbb{Z})$. We recall the definition below. (Note that for us $\mathbb{Z}$ is a right module, and so we avoid the minus sign which usually appears in the definition below.) \begin{dfn} For every $m \in \mathbb{N} \sqcup \{\infty\}$, we define \[\Sigma^m(G;\mathbb{Z})\subseteq H^1(G;\mathbb{R}) \s- \{0\}\] by declaring $\phi \in \Sigma^m(G;\mathbb{Z})$ if and only if the monoid $\{ g \in G \mid \phi(g) \geqslant 0 \}$ is of type $\typeFP{m}$. We will write $\Sigma^1(G)$ for $\Sigma^1(G;\mathbb{Z})$.
Recall that a monoid $M$ is of type $\typeFP{m}$ if and only if the trivial $\mathbb{Z} M$-module $\mathbb{Z}$ admits a resolution by projective $\mathbb{Z} M$-modules such that the first $m$ terms of the resolution are finitely generated (here $\mathbb{Z} M$ is the untwisted monoid ring). When $m=\infty$ we require all the terms of the resolution to be finitely generated. \end{dfn} Note that $M$ is of type $\typeFP{\infty}$ if and only if it is of type $\typeFP{m}$ for all $m \in \mathbb{N}$, and so $\Sigma^\infty(G;\mathbb{Z}) = \bigcap_{m\in \mathbb{N}} \Sigma^m(G;\mathbb{Z})$. The definition of type $\typeFP{m}$ applies of course also to groups. In the context of groups let us introduce one more notion. \begin{dfn} A group $G$ is \emph{of type $\typeF{}$} if and only if it admits a finite CW-model of its classifying space, and \emph{of type $\typeF{\infty}$} if and only if it admits a CW-model of its classifying space with finite skeleton in every dimension. \end{dfn} \begin{rmk} Note that Bieri--Renz talk about $\Sigma^m(G;\mathbb{Z})$ as a subset of the character sphere obtained by identifying classes in $H^1(G;\mathbb{R}) \s- \{0\}$ under positive homotheties. We prefer to define $\Sigma^m(G;\mathbb{Z})$ as a subset of the whole cohomology -- in our setup $\Sigma^m(G;\mathbb{Z})$ is of course stable under multiplication by positive scalars. \end{rmk} \begin{thm}[Bieri--Renz~{\cite[Theorem A]{BieriRenz1988}}] \label{bns open orig} Let $G$ be a finitely generated group, and let $m \in \mathbb{N}$. The sets $\Sigma^m(G;\mathbb{Z})$ are open subsets of $H^1(G;\mathbb{R})$. \end{thm} \iffalse \begin{rmk} One can also define the higher $\Sigma$-invariants, but they do not play any role in this paper. For the sake of completeness let us mention here that $\Sigma(G)$ is the first $\Sigma$-invariant, and is sometimes denoted by $\Sigma^1(G)$. 
\end{rmk} \fi \subsection{Sikorav's theorem} Fundamental to the way we will treat BNS-invariants is the following theorem of Sikorav~\cite{Sikorav1987}, which gives an interpretation of $\Sigma^m(G;\mathbb{Z})$ in terms of group homology of $G$ with coefficients in the Novikov ring $\widehat {\mathbb{Z} G}^\phi$. (We give a reference to an easily available paper of Bieri, rather than to the original thesis of Sikorav.) \begin{thm}[Sikorav {\cite[Theorem 2]{Bieri2007}}] \label{sikorav} Let $G$ be a group of type $\typeFP{m}$. For every $\phi \in H^1(G;\mathbb{R}) \s- \{0\}$ we have $\phi \in \Sigma^m(G;\mathbb{Z})$ if and only if $H_i(G;\widehat {\mathbb{Z} G}^\phi) = 0$ for every $i \leqslant m$. \end{thm} In practice, we will compute $H_i(G;\widehat {\mathbb{Z} G}^\phi)$ using the cellular chain complex of the universal covering of some CW-model for the classifying space of $G$; the chain complex is naturally a free resolution of $\mathbb{Z}$ by $\mathbb{Z} G$-modules. The group homology with coefficients in the Novikov completion $\widehat {\mathbb{Z} G}^\phi$ is obtained by tensoring the chain complex with $\widehat {\mathbb{Z} G}^\phi$ and then computing the homology. When the group $G$ is of type $\typeF{\infty}$, then we can arrange for the above complex to be a complex of free finitely-generated $\mathbb{Z} G$-modules; hence the boundary maps are naturally matrices over $\mathbb{Z} G$, and the homology computation is related to their invertibility upon extension of scalars. This is the reason for our interest in $\Sigma$-invariants of matrices. \subsection{\texorpdfstring{$\Sigma$}{Sigma}-invariants of matrices revisited} Let us record the following, which highlights some similarities between BNS-invariants for groups and for matrices over a group ring. \begin{lem} Let $G$ be a group, and let $A$ be a square matrix over $\mathbb{Z} G$. Then $\Sigma(A)$ is stable under positive homotheties, and is an open subset of $H^1(G;\mathbb{R})$. 
\end{lem} \begin{proof} The first assertion is obvious, since multiplying $\phi$ by a positive scalar does not change the ring $\widehat{\mathbb{Z} G}^\phi$. The latter assertion follows immediately from \cref{entries far away}. \end{proof} To discuss the $\Sigma$-invariants of matrices further, we introduce the notion of a marked polytope. \begin{dfn} A function $m \colon H^1(G;\mathbb{R}) \s- \{0\} \to \{0,1\}$ (where $\{0,1\}$ is endowed with the discrete topology) is a \emph{marking function} if and only if $m^{-1}(0)$ is closed. A character mapped to $1$ will be called \emph{marked}. Let $P$ be a polytope. We say that $m$ is a \emph{marking of $P$} if and only if $m$ is constant on duals of faces of $P$. A dual mapped to $1$ will be called \emph{marked}. A polytope endowed with a marking will be called a \emph{marked polytope}. When $m^{-1}(1)$ consists solely of duals of vertices, we say that $P$ is a polytope with \emph{marked vertices}. \end{dfn} Note that the definition above leads to statements which may seem a little awkward at first, since we will often assume the existence of a marked polytope, and then immediately treat being marked as a property of a character. \begin{rmk} When every face of $P$ admits a single dual, the definition of a marked polytope above gives the more natural notion of a marking, where it is faces of the polytope which are marked. The fact that $m^{-1}(0)$ is closed guarantees that faces of marked faces are also marked. \end{rmk} \smallskip Recall that when $G$ is agrarian, we have a skew-field $\mathbb{K}$ containing $\mathbb{Z} K$, where $K = \ker ( G \to \fab{G})$, and hence we also have the Ore localisation $\mathcal{D}$ of the twisted group ring $\mathbb{K} \fab{G}$. \begin{thm} \label{bns matrix} Suppose that $G$ is agrarian. Let $A$ be a square matrix over $\mathbb{Z} G$ with $A \otimes \mathcal{D}$ invertible. 
The polytope $P(A)$ admits a marking of its vertices, such that for every ${\phi \in H^1(G;\mathbb{R}) \s- \{0\}}$ we have $\phi \in \Sigma(A)$ if and only if $\phi$ is marked. \end{thm} \begin{proof} This is a fairly straightforward corollary of \cref{K-bns matrix}: we know that $A\otimes \widehat {\mathbb{K} H}^\phi$ admits a right-inverse if and only if $\phi$ lies in a dual of a vertex of $P(A)$. Now being right-invertible over $\widehat {\mathbb{K} H}^\phi$ is certainly a necessary condition for being right-invertible over $\widehat {\mathbb{Z} G}^\phi$, as $\widehat {\mathbb{Z} G}^\phi$ is a subring of $\widehat {\mathbb{K} H}^\phi$. Thus characters lying in duals of faces of dimension at least $1$ do not lie in $\Sigma(A)$. We now define the marking on $P(A)$. A dual $C$ of a vertex is marked if and only if $C$ intersects $\Sigma(A)$ non-trivially. We need only show that if $C$ intersects $\Sigma(A)$ non-trivially, then in fact $C \subseteq \Sigma(A)$. Take $\phi, \psi \in C$, with $\phi \in \Sigma(A)$. \cref{K-bns matrix} tells us that $A\otimes \widehat {\mathbb{K} H}^C$ is right-invertible; let $X$ denote its right-inverse. We also know that $A$ admits a right-inverse $Y$ over $\widehat {\mathbb{Z} G}^\phi$. Right-inverses over $\widehat {\mathbb{K} H}^\phi$ are unique by \cref{unique RI}, and so $X=Y$, which in turn implies that $X$ lies over $\widehat {\mathbb{K} H}^C \cap \widehat {\mathbb{Z} G}^\phi=\widehat {\mathbb{Z} G}^C$. Since $\widehat {\mathbb{Z} G}^C \subseteq \widehat {\mathbb{Z} G}^\psi$, the matrix $A$ is right-invertible over $\widehat {\mathbb{Z} G}^\psi$ as well, and we are done.
\end{dfn} It is clear that an atlas of markings gives a well defined marking function $m\colon H^1(G;\mathbb{R}) \s- \{0\} \to \{0,1\}$. Hence, we will often specify $m$ instead of the markings $m_i$. It is clear that $m^{-1}(1)$ is open, since it is open inside every chart $U_i$. Thus $m^{-1}(0)$ is closed. \begin{lem} \label{one marked polytope} Let $U_1, \dots, U_n$ and $P_1, \dots, P_n$ be an atlas of markings. The induced marking function $m$ defines a marking of $P = \sum_{i=1}^n P_i$. \end{lem} \begin{proof} Recall \cref{faces decomp}: for any face $Q$ of $P$, we have unique faces $Q_i$ of $P_i$ such that $Q = \sum_{i=1}^n Q_i$. Let $C$ denote a dual of a face $Q$ of $P$. We argue that either all characters in $C $ are marked, or none is. To this end, let $\phi \in C$. There exists $i$ such that $\phi \in U_i$, and $\phi$ is marked if and only if the dual of $F_\phi(P_i)$ containing $\phi$ is marked. Now, for every $\psi$ in the connected component of $C \cap U_i$ containing $\phi$ we have $F_\phi(P_i) = Q_i = F_\psi(P_i)$, and so $\psi$ is marked if and only if $\phi$ is. This shows that the marking map $m \colon \bigcup U_i \to \{0,1\}$ is continuous on $C$, which is connected. Since the target of the marking map is discrete, the map must be constant on $C$. \end{proof} \begin{lem} \label{one marked L2 polytope} Let $\S = \{s_1, \dots, s_n\}$ be a finite generating set of $G$, let $P$ be a polytope in $H_1(G;\mathbb{R})$, and let $k_i \in \mathbb{N} \cup \{0\}$ be given for every $i \in \{ 1, \dots, n\}$. Suppose that for every $i$ there exists a marking of vertices of $P_i = P + k_i \cdot P(1-s_i)$ such that the collections $U_1, \dots, U_n$ and $P_1, \dots, P_n$ form an atlas of markings, where \[U_i = \{ \phi \in H^1(G;\mathbb{R}) \mid \phi(s_i) \neq 0 \}\] Then the induced marking function $m$ defines a marking of vertices of $P$. 
\end{lem} \begin{proof} The proof is identical to the previous one, with one exception: we need a different argument showing that $F_\phi(P_i) = F_\psi(P_i)$ holds for every $\psi$ in the connected component of $C \cap U_i$ containing $\phi$, where $C$ is a dual. To show the desired statement, take such $C,U_i, \phi$ and $\psi$. Since $\phi$ and $\psi$ lie in a connected component of $U_i$, the real numbers $\phi(s_i)$ and $\psi(s_i)$ have the same sign. Therefore \[F_\phi\big(k_i P(1-s_i)\big) = F_\psi\big(k_i P(1-s_i)\big)\] Also, $\phi, \psi \in C$ implies that $F_\phi(P) = F_\psi(P)$. Thus $F_\phi(P_i) = F_\psi(P_i)$ as required. \end{proof} \subsection{Polytope class} In \cite{Funke2018} Funke introduced the following notion. \begin{dfn} Let $G$ be a torsion-free group satisfying the Atiyah conjecture and such that the abelianisation of $G$ is finitely generated. We say that $G$ is of \emph{polytope class} if and only if for every square matrix $A$ over $\mathbb{Z} G$ the following holds: if $A \otimes \mathcal{D}(G)$ is invertible then the Newton polytope $P(\det A \otimes \mathcal{D}(G))$ is a single polytope. \end{dfn} Recall that $\mathcal{D}(G)$ is the skew-field of Linnell -- see \cref{atiyah field}. Funke proved in~\cite[Theorem 4.1]{Funke2018} that torsion-free amenable groups $G$ satisfying the Atiyah conjecture and with finitely generated abelianisation are of polytope class. We show that one does not need to assume amenability. \begin{thm} \label{polytope class} Every torsion-free group $G$ satisfying the Atiyah conjecture and with finitely generated abelianisation is of polytope class. \end{thm} \begin{proof} Recall that in this context we have the Linnell skew-field $\mathcal{D}(G)$ containing $\mathbb{Z} G$. Moreover, $\mathcal{D}(G)$ is isomorphic to the Ore localisation of the twisted group ring $\mathcal{D}(K) \fab{G}$, where $K = \ker (G \to \fab{G})$, by \cref{linnell ore}.
Thus, we take $\mathbb{K} = \mathcal{D}(K)$ and $\mathcal{D} = \mathcal{D}(G)$, and we apply \cref{single poly}. \end{proof} This fact has interesting consequences for the Whitehead group. \begin{dfn} Let $\operatorname{GL}(\mathbb{Z} G)$ denote the directed union over $n$ of the groups of invertible $n \times n$ matrices over $\mathbb{Z} G$, where the embeddings are given by introducing the identity matrix in the bottom-right corner. The group $K_1(G)$ is the abelianisation of $\operatorname{GL}(\mathbb{Z} G)$. The \emph{Whitehead group} $\mathrm{Wh}(G)$ is the quotient of $K_1(G)$ by the subgroup generated by $1 \times 1$ matrices of the form $(\pm g)$, with $g \in G$. \end{dfn} Suppose now that $G$ is agrarian, and let $\mathcal{D}$ be a skew-field as discussed at the beginning of \cref{sec apps}. It is not hard to see that the map $A \mapsto \det A \otimes \mathcal{D}$ is well-defined on $\operatorname{GL}(\mathbb{Z} G)$. Hence, one can talk about the Newton polytopes of elements of $\operatorname{GL}(\mathbb{Z} G)$. It is even easier to see that if we quotient the polytope group $\P(\fab{G})$ by the translations (and obtain $\P_T(\fab{G})$), then in fact we obtain a well defined map $\mathrm{Wh}(G) \to \P_T(\fab{G})$. Thus, we may talk about the Newton polytopes of elements of the Whitehead group. It is an open question whether there exists a torsion-free group $G$ with non-trivial $\mathrm{Wh}(G)$. We show that in our setting at least the Newton polytopes vanish. \begin{cor} \label{Whitehead} Let $G$ be an agrarian group with finitely generated abelianisation. The Newton polytope of every element of $\mathrm{Wh}(G)$ is a singleton. \end{cor} \begin{proof} Since $G$ is agrarian, fix a skew-field $\mathcal{D}$ into which $\mathbb{Z} G$ embeds. Let $A \in \mathrm{Wh}(G)$ be given. We can think of $A$ as a square matrix over $\mathbb{Z} G$, which is invertible. Recall that $P(A)$ denotes $P(\det A \otimes \mathcal{D})$. 
We have \[ P(A) + P(A^{-1}) = P(AA^{-1}) = P (\det \mathrm{I} \otimes \mathcal{D}) = 0 \] and so $P(A) = -P(A^{-1})$. But both are single polytopes, and so $P(A) = P(A^{-1}) = 0$ in $\P_T(G)$. \end{proof} Note that a version of the above corollary for torsion-free groups satisfying the Atiyah conjecture is proven in \cite[Lemma 3.4]{Funke2018} under the extra assumption that $G$ is of $P\leq0$-class, which is implied by being of polytope class. The corollary has a direct application to the study of the $L^2$-torsion polytopes. These polytopes are defined for any free finite $G$-CW-complex $X$ which is $L^2$-acyclic, provided that $G$ satisfies the Atiyah conjecture. The construction is due to Friedl--L\"uck~\cite{FriedlLueck2017}, and is a little complicated -- roughly speaking, to $X$ one associates an $L^2$-version of the Whitehead torsion, called the universal $L^2$-torsion. This torsion is naturally an element of $\mathcal{D}(G)^\times / [\mathcal{D}(G)^\times, \mathcal{D}(G)^\times]$, and hence one can take its Newton polytope up to translation. This element of $\P_T(\fab{G})$ is precisely the $L^2$-torsion polytope $P_{L^2}(X,G)$. The universal $L^2$-torsion is a simple homotopy invariant, but not a homotopy invariant -- when one passes to a complex $G$-homotopy equivalent to $X$, the universal $L^2$-torsion gets multiplied by an element of $\mathrm{Wh}(G)$. When passing to the $L^2$-torsion polytope, the situation was a priori the same. However, the corollary above tells us that the Newton polytopes of elements in $\mathrm{Wh}(G)$ are trivial, and so the $L^2$-torsion polytope is actually a homotopy invariant. Let us summarise this discussion. \begin{thm} Let $G$ be an $L^2$-acyclic group of type $\typeF{}$, and suppose that $G$ satisfies the Atiyah conjecture. 
For any two finite models $X$ and $Y$ for the classifying space of $G$, we have \[ P_{L^2}(X,G) = P_{L^2}(Y,G) \] Therefore, we may drop $X$ from the notation and talk about $P_{L^2}(G)$, the $L^2$-torsion polytope of $G$. \end{thm} \subsection{Vanishing of the \texorpdfstring{$L^2$}{L\texttwosuperior}-torsion polytope in the presence of amenability} \label{sec: vanishing} We now use Funke's \cite[Lemma 5.2]{Funke2018} and the proof of \cite[Theorem 5.3]{Funke2018} to prove the following. \begin{thm} \label{amenable} Let $G$ be a group of type $\typeF{}$ satisfying the Atiyah conjecture. Suppose that $G$ admits a finite chain of subnormal subgroups with the last term being a non-trivial amenable group $N$. Suppose further that $N$ is not abelian or that $G$ has trivial centre. Then $G$ is $L^2$-acyclic and the $L^2$-torsion polytope of $G$ is a singleton. \end{thm} \begin{proof} Note that $N$ is amenable and infinite: since $G$ is of type $\typeF{}$, it is torsion-free, and hence the non-trivial group $N$ is infinite. Therefore the $L^2$-acyclicity of $G$ follows from \cite[Theorem 1.44]{Lueck2002} of L\"uck. Since $G$ is of type $\typeF{}$, it is finitely generated, and hence so is its abelianisation. Thus $G$ is of polytope class by \cref{polytope class}. Now, to fix notation, we denote the elements of the subnormal chain by $N_i$, with $N_0 = G$ and $N_k = N$. Let $M = N \cap K$ (recall that $K = \ker (G \to \fab{G})$). \smallskip \noindent \textbf{Case 1:} Suppose that $M$ is not trivial, and hence $M$ is an infinite amenable group. Let $S$ be the set of non-zero elements of $\mathbb{Z} M$. Note that the untwisted group ring $\mathbb{Z} G$ has no zero-divisors, since it embeds into the skew-field $\mathcal{D}(G)$. Thus $\mathbb{Z} M$ has no zero-divisors either. Therefore the subset $S$ of $\mathbb{Z} M$ satisfies the Ore condition inside of $\mathbb{Z} M$. Note that $M$ is a normal subgroup of $N_{k-1}$, and that $S$ is $N_{k-1}$-invariant.
Now \cref{inheriting Ore} tells us that $S$ satisfies the left Ore condition in $\mathbb{Z} N_{k-1}$. Set $S_{k-1} = S$. Replacing $S_{k-1}$ by $S_{k-2}$, the multiplicative closure of the $N_{k-2}$-conjugates of $S_{k-1}$, and using \cref{inheriting Ore} again, we see that $S_{k-2}$ satisfies the left Ore condition in $\mathbb{Z} N_{k-2}$. We repeat the procedure until we arrive at the closure $S_0$ which satisfies the left Ore condition in $\mathbb{Z} G$. The same argument shows that $S_0$ satisfies the right Ore condition, and hence the Ore conditions on both sides. Note that the Newton polytope of every element in $S_0$ is a singleton, since this is true for elements in $S$ by construction (as these lie in the group ring of the kernel $K$) and is clearly preserved under conjugation and multiplication. We are now going to use \cite[Lemma 5.2]{Funke2018} of Funke. Considering only a classifying space for the group in question, and using \cref{polytope class} to remove the assumption of $P \geq 0$-class (which is weaker than being of polytope class), we may state it as follows: \begin{quote} Let $G$ be a group of type $\typeF{}$ satisfying the Atiyah conjecture. Let $T \subseteq \mathbb{Z} G$ be a multiplicative subset satisfying the left and right Ore conditions, and such that $P(t)$ is a singleton for every $t \in T$. If $H_i(G;T^{-1} \mathbb{Z} G) = 0$ for every $i$, then $P_{L^2}(G)$ is a singleton. \end{quote} The assumptions of \cite[Lemma 5.2]{Funke2018} are satisfied (taking $T = S_0$) -- we need only that \[H_n(G; S_0^{-1} \mathbb{Z} G) = 0\] for all $n$. To prove it, note that extending scalars to a localisation is exact, and so the only potential problem occurs for $H_0(G; S_0^{-1} \mathbb{Z} G)$, since $H_i(G; \mathbb{Z} G) = 0$ for all $i > 0$, and $H_0(G; \mathbb{Z} G) = \mathbb{Z}$. 
But $S_0$ contains an element of the form $1-g$ with $g \in M \s- \{1\} \subseteq G \s- \{1\}$ by assumption, and so $H_0(G; S_0^{-1} \mathbb{Z} G) = 0$ (as we could compute it from a presentation complex coming from a presentation of $G$ containing $g$ as a generator, and then the first boundary map is clearly onto over $S_0^{-1} \mathbb{Z} G$). \smallskip \noindent \textbf{Case 2:} Suppose that $M$ is trivial. Then $N$ embeds into $H = \fab{G}$, and so is abelian. Thus $G$ has trivial centre by hypothesis. Since $N$ is normal in $N_{k-1}$, and embeds into $H = \fab{G}$, the action of $N_{k-1}$ on $N$ must be trivial, and so $N$ lies in the centre of $N_{k-1}$. We replace $N$ by the centre of $N_{k-1}$: if this new $N$ does not embed into $H$, we are back in Case 1. Otherwise, note that $N$ is in fact characteristic in $N_{k-1}$, and so normal in $N_{k-2}$. We repeat the argument, and replace $N$ by the centre of $N_{k-2}$. We continue doing so until either the new $N$ intersects $K$ non-trivially, or until we have produced a non-trivial centre of $G$, which is impossible. \end{proof} The theorem above allows us to confirm a conjecture of Friedl--L\"uck--Tillmann. \begin{conj}[Friedl--L\"uck--Tillmann~{\cite[Conjecture 6.4]{Friedletal2016}}] Let $G$ be an $L^2$-acyclic group of type $\typeF{}$ satisfying the Atiyah conjecture and with $\mathrm{Wh}(G) = 0$. If $G$ is amenable but not virtually $\mathbb{Z}$, then $P_{L^2}(G)$ is a singleton. \end{conj} We prove the following, stronger version. \begin{cor} \label{FLT conj} Let $G$ be an $L^2$-acyclic group of type $\typeF{}$ satisfying the Atiyah conjecture. If $G$ is amenable but not isomorphic to $\mathbb{Z}$, then $P_{L^2}(G)$ is a singleton. \end{cor} \begin{proof} If $G$ is non-abelian, then the result follows directly from \cref{amenable} by taking $N = G$. If $G$ is abelian, then type $\typeF{}$ tells us that $G$ is a finitely generated free abelian group, which is not $\mathbb{Z}$.
Therefore the universal $L^2$-torsion of $G$ vanishes by \cite[Theorem 2.5(5)]{FriedlLueck2017}. Hence $P_{L^2}(G) = 0$. \end{proof} Note also that $G=\mathbb{Z}$ fails the assertion of the conjecture: we have \[P_{L^2}(\mathbb{Z}) = - P(1-z)\] where $z$ is a generator of $\mathbb{Z}$. \begin{rmk} \cref{amenable} corresponds nicely with \cite[Theorem 4.10]{Wegner2009} of Wegner: we are assuming that $G$ satisfies the Atiyah conjecture, whereas Wegner assumed that $G$ has \emph{semi-integral determinant}, which is a related condition. Also, in his theorem one requires $G$ to possess an elementary amenable subnormal subgroup, whereas for our purposes amenable subnormal subgroups suffice. Wegner proved the vanishing of the $L^2$-torsion, whereas we prove the vanishing of the $L^2$-torsion polytope. Both invariants are determined by the universal $L^2$-torsion of Friedl--L\"uck \cite{FriedlLueck2017}. \end{rmk} \subsection{Agrarian groups of deficiency \texorpdfstring{$1$}{1}} Throughout this section, $G$ is going to be an agrarian group of deficiency $d \geqslant 1$, that is, $G$ will admit a presentation with the number of generators exceeding the number of relators by $d$ (formally speaking, it also means that there is no presentation with an even greater discrepancy). We will fix a presentation of $G$ with a finite generating set $\S = \{s_1, \dots, s_k\}$ realising the deficiency. We will also build a CW-classifying space for $G$ whose $2$-skeleton coincides with the presentation complex, and let $(C_\ast, \partial_\ast)$ denote the cellular chain complex of the universal covering of the classifying space, with $\partial_n \colon C_n \to C_{n-1}$. We will choose representatives for the $G$-orbits of cells, and hence identify $C_\ast$ with a chain complex of free $\mathbb{Z} G$-modules. Note that, by our convention, $C_\ast$ is a chain complex of right modules, and so the boundary maps are matrices acting on the left. We will look at two boundary maps more closely.
The first one, $\partial_1$, is the row vector $(1-s_i)_i$. The second one, $\partial_2$, is a matrix with $d$ more rows than columns, and we will denote it by $A$. We let $A_i$ denote $A$ with the $i^{th}$ row removed. \begin{thm}[Bieri--Neumann--Strebel~{\cite[Theorem 7.2]{Bierietal1987}}] For any finitely generated group $G$, if $d>1$ then $\Sigma^1(G) = \emptyset$. \end{thm} From now on we will assume that $d=1$. Crucially for us, in this case the matrices $A_i$ are square, since $C_2$ has rank precisely one less than $C_1$. Friedl conjectured that a group $G$ of deficiency one which admits an aspherical presentation complex realising the deficiency has $\Sigma^1(G)$ determined by a polytope. We prove this conjecture under the assumption of $G$ being agrarian, but we will not need the assumption of asphericity of the presentation complex. \begin{thm} \label{defic 1} Let $G$ be an agrarian group of deficiency $1$. Then there exists a marked integral polytope $P$ in $H_1(G;\mathbb{R})$, such that $\phi \in \Sigma^1(G)$ if and only if $\phi$ is marked. \end{thm} \begin{proof} In view of Sikorav's Theorem (\cref{sikorav}), for every character \[\phi \in H^1(G;\mathbb{R}) \s- \{0\}\] we have $\phi \in \Sigma^1(G)$ if and only if \[ H_1(C_\ast \otimes \widehat{\mathbb{Z} G}^\phi) = H_0(C_\ast \otimes \widehat{\mathbb{Z} G}^\phi) = 0 \] Let $s_i \in \S$. If $\phi(s_i) \neq 0$, we can modify the complex $C_\ast\otimes \widehat {\mathbb{Z} G}^\phi$ without changing its homology as follows: let $v$ denote the $i^{th}$ basis vector of $C_1$. Since $\phi(s_i) \neq 0$, the element $1-s_i$ is invertible in $\widehat {\mathbb{Z} G}^\phi$, and so the boundary map $\partial_1 \otimes \widehat {\mathbb{Z} G}^\phi$ restricts to an isomorphism from the $\widehat {\mathbb{Z} G}^\phi$-span of $v$ onto $C_0 \otimes \widehat {\mathbb{Z} G}^\phi$. This immediately implies that $H_0(C_\ast \otimes \widehat{\mathbb{Z} G}^\phi) = 0$. 
Also, we may split off the span of $v$ from $C_1 \otimes \widehat{\mathbb{Z} G}^\phi$ and write \[ C_1 \otimes \widehat{\mathbb{Z} G}^\phi = \langle v \rangle \oplus C_1' \] where $C_1'$ is spanned by the remaining basis vectors of $C_1$. We will use this procedure repeatedly, whenever we are working over a ring $R$ in which $1-s_i$ is invertible. As a shorthand, we will say that we `replace $C_1 \otimes R$ by the $R$-module $C_1'$'. It is immediate that $H_1(C_\ast \otimes \widehat{\mathbb{Z} G}^\phi) = 0$ if and only if the composition \[ C_2\otimes \widehat{\mathbb{Z} G}^\phi \to C_1 \otimes \widehat{\mathbb{Z} G}^\phi \to C_1' \] of $\partial_2$ and the projection is onto. But this map is precisely $A_i \otimes \widehat{\mathbb{Z} G}^\phi$, and it is onto if and only if $\phi \in \Sigma(A_i)$. This in turn is equivalent to $\phi$ being marked under $m_i$, where the marking $m_i$ comes from \cref{bns matrix} and is a marking of $P(A_i)$. Now the collections $U_1, \dots, U_k$ and $P(A_1), \dots, P(A_k)$ form an atlas of markings, where \[U_i = \{ \psi \in H^1(G;\mathbb{R}) \s- \{0\} \mid \psi(s_i) \neq 0 \}\] An application of \cref{one marked polytope} allows us to combine the marked polytopes \[P(A_1), \dots, P(A_k)\] into a single marked polytope $P$ which satisfies the assertion of our theorem. \end{proof} \begin{rmk} The bulk of the above proof will be used repeatedly in the sequel. \end{rmk} Under stronger hypotheses we can in fact prove a somewhat stronger theorem: we will assume now that $G$ satisfies the Atiyah conjecture, and use the $L^2$-torsion polytope $P_{L^2}(G)$. \begin{thm} \label{defic 1 atiyah} Let $G$ be a torsion-free group of deficiency $1$, which satisfies the Atiyah conjecture and has trivial first $L^2$-Betti number. We have $\Sigma^1(G) = \Sigma^\infty(G;\mathbb{Z})$, and there exists an integral polytope $P$ in $H_1(G;\mathbb{R})$ with marked vertices, such that $\phi \in \Sigma^1(G)$ if and only if $\phi$ is marked.
Moreover, the Cayley $2$-complex of any presentation of $G$ realising the deficiency is contractible. \end{thm} \begin{proof} We continue with the notation of the proof of \cref{defic 1}. We first claim that the boundary map $\partial_3 \colon C_3 \to C_2$ is trivial. Since $G$ satisfies the Atiyah conjecture and is torsion free, we have the skew-field $\mathcal{D}(G)$ of Linnell (introduced in \cref{atiyah field}) at our disposal. Since we also know that the first $L^2$-Betti number of $G$ vanishes, we have $ H_1(G;\mathcal{D}(G)) = 0 $ by \cref{l2 acyclic via D}. Take $v$ to be the first basis vector of $C_1$. Since $(1-s_1)$ is invertible over $\mathcal{D}(G)$, we argue precisely as in the proof of \cref{defic 1}, and replace $C_1 \otimes \mathcal{D}(G)$ by $C_1'$, a free module over $\mathcal{D}(G)$ of rank one less than $C_1$. The vanishing of $H_1(G;\mathcal{D}(G))$ tells us that the matrix \[A_1\otimes \mathcal{D}(G) \colon C_2\otimes \mathcal{D}(G) \to C_1'\] is onto. But the ranks of $C_2 \otimes \mathcal{D}(G)$ and $C_1'$ are equal, and so $A_1\otimes \mathcal{D}(G)$ is also injective. Thus $\partial_2$ is injective, and so $\partial_3 = 0$. This implies that the Cayley $2$-complex of $G$ is contractible, and so we may take $X$ to be the presentation complex (which is of dimension $2$). Now we argue precisely as in the proof of \cref{defic 1}: letting $A = \partial_2$, and setting $A_i$ as before to be a square matrix over $\mathbb{Z} G$, we conclude that for every $\phi \in H^1(G;\mathbb{R}) \s- \{0\}$ we have $\phi \in \Sigma^1(G)$ if and only if $\phi$ lies in the dual of a marked vertex of $P(A_i)$ for every $i$ with $\phi(s_i) \neq 0$. Now, instead of combining the polytopes $P(A_i)$ into a single polytope by taking the sum, we will use the fact that \[ P(A_i) - P(1-s_i) = P_{L^2}(G) \] for every $i$ -- this follows from the additivity of the universal $L^2$-torsion that underpins the $L^2$-torsion polytope, see \cite[Lemma 1.9]{FriedlLueck2017}. 
(For the special case of descending HNN extensions of free groups, a more direct reference is \cite[Theorem 3.2(1)]{FunkeKielak2018}; the computation given there applies to the current setting as well.) If the rank of $H= \fab{G}$ is $0$ then there is nothing to prove. Suppose that the rank of $H$ is equal to $1$. We take $P = P(1-s_i)$, where $s_i$ is not trivial in $H$. We mark the vertices of $P$ in the unique way making the assertion about $\Sigma^1(G)$ true. Now suppose that the rank of $H$ is at least $2$. We claim that $P_{L^2}(G)$ is a single polytope. To this end, let $\phi \in H^1(G;\mathbb{Z})$ be non-trivial. There exists $s_i \in \S$ such that $\phi(s_i) \neq 0$, and so \[ F_\phi\big( P_{L^2}(G) \big) = F_\phi\big( P(A_i) \big) \] up to translation, since $F_\phi\big( P(1-s_i) \big)$ is a singleton. But $P(A_i)$, and so $F_\phi\big( P(A_i) \big)$, is a single polytope. Now the claim follows from \cref{funke}. We conclude the existence of the desired marked polytope by applying \cref{one marked L2 polytope} with $P = P_{L^2}(G)$ and $P_i = P(A_i)$. \smallskip We are left with the claim about $\Sigma^i(G;\mathbb{Z})$ for $i>0$. Let $\phi \in H^1(G;\mathbb{R}) \s- \{0\}$, and take $i$ such that $\phi(s_i) \neq 0$. We have shown above that $\phi \in \Sigma^1(G)$ if and only if $A_i$ admits a right inverse over $\widehat { \mathbb{Z} G}^\phi$. But Bieri~\cite[Theorem 3]{Bieri2007} (building on the work of Kochloukova~\cite{Kochloukova2006}) showed that $\widehat { \mathbb{Z} G}^\phi$ is \emph{von Neumann finite}, which means that a square matrix over $\widehat { \mathbb{Z} G}^\phi$ admits a right-inverse if and only if it admits a two-sided inverse. Thus $\phi \in \Sigma^1(G)$ if and only if $A_i \otimes \widehat { \mathbb{Z} G}^\phi$ is an isomorphism. Also, $\phi \in \Sigma^2(G;\mathbb{Z})$ if and only if $\phi \in \Sigma^1(G)$ and $A_i \otimes \widehat { \mathbb{Z} G}^\phi$ is injective, by \cref{sikorav}. 
Thus we have established $\Sigma^1(G) = \Sigma^2(G;\mathbb{Z})$. For $i>2$ we have $H_i(G;\widehat {\mathbb{Z} G}^\phi) = 0$ since $X$ is of dimension $2$, and so another application of \cref{sikorav} completes the proof. \end{proof} \subsection{Descending HNN extensions of free groups} We are now going to focus on a special class of groups of deficiency $1$, namely descending HNN extensions of finitely generated free groups. \begin{dfn} Let $F_n = \langle a_1, \dots, a_n \rangle$ denote the free group of rank $n$, and let $g \colon F_n \to F_n$ be a monomorphism. The \emph{descending HNN extension} $F_n \ast_g$ of $F_n$ by $g$ is defined by the presentation \[ F_n \ast_g = \langle a_1, \dots, a_n, t \mid t^{-1} a_1 t g(a_1)^{-1}, \dots, t^{-1} a_n t g(a_n)^{-1} \rangle \] \end{dfn} In particular, \{finitely generated free\}-by-$\mathbb{Z}$ groups are descending HNN extensions of free groups, where $g$ is an automorphism. Note that changing the sign of the stable letter $t$ in the presentation gives the definition of an \emph{ascending} HNN extension. Let us mention the following conjecture of Bieri: \begin{conj}[Bieri~{\cite[Conjecture]{Bieri2007}}] \label{bieris conj} Let $G$ be a group of deficiency $1$ with $\Sigma^1(G) \neq \emptyset$. Then $G$ is a descending HNN extension of a finitely generated free group. \end{conj} The conjecture is known to hold (by \cite[Corollary B]{Bieri2007}) if there exists a character $\phi \in H^1(G;\mathbb{R}) \s- \{0\}$ such that $\{\phi,-\phi\} \subseteq \Sigma^1(G)$. Before proceeding to the main statement of this section, let us discuss semi-norms induced by polytopes, which will be important in the proof. \begin{dfn}[Semi-norms] Let $P$ be a polytope in $H_1(G;\mathbb{R})$, where $G$ is a group with finitely generated abelianisation. We define $ \| \cdot \|_P \colon H^1(G;\mathbb{R}) \to [0,\infty) $ by \[ \| \phi \|_P = \max_{x,y \in P} |\phi(x) - \phi(y)| \] It is easy to see that $ \| \cdot \|_P$ is a semi-norm.
Let $P-Q$ represent an element $R$ of $\P_T(G)$, where $P$ and $Q$ are single polytopes. We define \[\| \cdot \|_R = \| \cdot \|_P - \| \cdot \|_Q \colon H^1(G;\mathbb{R}) \to \mathbb{R}\] Again, one readily sees that this function is independent of the choice of $P$ and $Q$. \end{dfn} When $P = P_{L^2}(G)$ is defined, it is interesting to study $\| \cdot \|_P$. This has in particular been done for descending HNN extensions of non-trivial free groups in \cite{FunkeKielak2018}, where it was shown that the function is indeed a semi-norm (see \cite[Corollary 3.5]{FunkeKielak2018}); this semi-norm was christened the \emph{Thurston norm} (and denoted $\| \cdot \|_T$), due to analogies with the $3$-manifold case (which we will discuss later). For every character $\phi$ with $\operatorname{im} \phi = \mathbb{Z}$, the number $\| \phi \|_T$ is equal to minus the $L^2$-Euler characteristic of $\ker \phi$. When $\ker \phi$ is finitely generated (which happens precisely when $\{\phi, -\phi\} \subset \Sigma^1(G)$), then it is in fact a free group (as shown by Geoghegan--Mihalik--Sapir--Wise \cite[Theorem 2.6 and Remark 2.7]{Geogheganetal2001}). Thus, for such a $\phi$, the number $\| \phi \|_T$ is equal to minus the usual Euler characteristic of $\ker \phi$ (see \cite[Theorem 1.35(2)]{Lueck2002} for a proof of this fact), that is, we have \[ G \cong F_{1+\| \phi \|_T} \rtimes \mathbb{Z} \] with $\phi$ being equal to the projection map to $\mathbb{Z}$. \begin{thm} \label{free by cyclic} Let $G$ be a descending HNN extension of a finitely generated non-trivial free group $F_n$. We have $\Sigma^1(G) = \Sigma^\infty(G;\mathbb{Z})$, and there exists a marking of the vertices of the $L^2$-torsion polytope $P_{L^2}(G)$ such that $\phi \in \Sigma^1(G)$ if and only if $\phi$ is marked. \end{thm} \begin{proof} The group $G$ satisfies the Atiyah conjecture: it fits into a short exact sequence \[ \langle \! \langle F_n \rangle \! \rangle \to G \to \mathbb{Z} \] with a locally-free kernel.
Hence, $G$ belongs to the class $\mathcal C_1$ of \cref{atiyah groups}. It is clear that $G$ is also torsion-free. The group $G$ admits an obvious $2$-dimensional classifying space -- see \cite[Lemma 3.1]{FunkeKielak2018}. It is $L^2$-acyclic since it is a mapping torus, and mapping tori are $L^2$-acyclic by \cite[Theorem 1.39]{Lueck2002}. Hence the assumptions of \cref{defic 1 atiyah} are satisfied. We now argue as in the proof of \cref{defic 1 atiyah}. If the rank of $\fab{G}$ is at least $2$, then $P_{L^2}(G)$ is the polytope constructed in the proof of \cref{defic 1 atiyah}. Suppose that the rank of $\fab{G}$ is $1$ (which is the smallest possible value). \cite[Corollary 3.5]{FunkeKielak2018} tells us that $P_{L^2}(G)$ is a single polytope -- it is a difference of two segments, inducing a semi-norm, and so in particular a non-negative function. Thus we may argue as in the proof of \cref{defic 1 atiyah} even when the rank of $\fab{G}$ is $1$. \end{proof} Let us note that understanding the BNS invariants of free-by-cyclic groups is related to a large body of current research, see \cite{Dowdalletal2014,Dowdalletal2015,Dowdalletal2017} by Dowdall--I. Kapovich--Leininger and \cite{Algom-Kfiretal2015} by Algom-Kfir--Hironaka--Rafi. In particular, Dowdall--I. Kapovich--Leininger~\cite[Theorem 1.2]{Dowdalletal2017a} showed that if $G = F_n \rtimes_g \mathbb{Z}$ is hyperbolic, $\phi$ denotes the induced character, and $g$ is fully irreducible, then every other integral character in the same component of $\Sigma^1(G)$ as $\phi$ also comes from a free-by-cyclic decomposition of $G$ with a fully-irreducible monodromy. In our language, this implies that if $G$ is hyperbolic then we can talk about \emph{fully-irreducible vertices} of $P_{L^2}(G)$. \subsection{Poincar\'e duality groups} We are moving towards (aspherical) manifolds in dimension $3$. First, however, we will discuss their homological cousins. 
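To fix ideas, let us first recall, purely as an illustration, the duality which the definition below axiomatises, in the simplest orientable three-dimensional case: for $G = \mathbb{Z}^3$, the fundamental group of the $3$-torus, and the trivial module $\mathbb{Z}$, we have
\[ H^i(\mathbb{Z}^3;\mathbb{Z}) \cong \mathbb{Z}^{\binom{3}{i}} \cong H_{3-i}(\mathbb{Z}^3;\mathbb{Z}) \]
for every $i$, since $\binom{3}{i} = \binom{3}{3-i}$.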
\begin{dfn} A \emph{Poincar\'e duality group in dimension $n$} is a group $G$ such that there exists a $G$-module $\mathcal O$ (the \emph{orientation module}) isomorphic to $\mathbb{Z}$ as an abelian group, and a class $f \in H_n(G;\mathcal O)$ (the \emph{fundamental class}), such that for any $G$-module $M$ the cap product with $f$ gives an isomorphism \[ H^i(G;M) \cong H_{n-i}(G; M \otimes \mathcal O) \] When $\mathcal O$ is the trivial $G$-module, we say that $G$ is \emph{orientable}. \end{dfn} Note that the notation above requires a $G$-bimodule structure on $M$ and $\mathcal O$. This is readily available for any right $\mathbb{Z} G$-module -- the left multiplication by $g$ is equal to the right multiplication by $g^{-1}$. A natural example of a Poincar\'e duality group is the fundamental group of a closed aspherical manifold. Davis~\cite{Davis2000} constructed other examples, which however are not finitely presented. It is open whether there exist finitely presented Poincar\'e duality groups which are not fundamental groups of closed aspherical manifolds. Note that the fundamental class $f$ is not trivial, since $H_0(G;\mathbb{Z})$ is not trivial. The author suspects that the following is well-known, but was unable to find a reference in the literature. \begin{prop} \label{pd finite complex} Let $G$ be a Poincar\'e duality group in dimension $n$ (with $n\geqslant 3$) which is of type $\typeF{}$. There exists a free resolution of length $n$ of the trivial module $\mathbb{Z}$ by finitely generated $\mathbb{Z} G$-modules. Moreover, we can arrange for the $0^{th}$ term of the resolution to be of rank $1$, and the boundary map $\partial_1$ can be taken to be a row vector $(1-s_i)_i$, where $\S = \{s_1, \dots, s_k\}$ is some generating set of $G$.
We denote the cochain complex by $(C^\ast, \partial^\ast)$ with $\partial^i \colon C^{i-1} \to C^{i}$. By collapsing a maximal tree in the $1$-skeleton of $X$, we easily arrange for $C_0$ to be of rank $1$ as a $\mathbb{Z} G$-module, and for $\partial_1$ to be as desired. Let us represent the fundamental class $f$ by a cycle $c_f$ in $C_n$. Wall in the proof of \cite[Lemma 1.1]{Wall1967} explains how to make sense of taking the cap product with a cycle; thus the cap product with $c_f$ gives us a chain map $\xi \colon C^\ast \to C_{n-\ast}$. The map $\xi$ induces the isomorphism $H_i(C^\ast \otimes M) \cong H_{n-i}(C_\ast \otimes M \otimes \mathcal O)$ for every $\mathbb{Z} G$-module $M$. In particular, this holds for $M = \mathbb{Z} G$; observe that $H_{i}(C_\ast \otimes \mathcal O) = H_{i}(C^\ast)$ for every $i$, as both coincide with the homology of the universal cover of $X$. Since the modules $C_i$ and $C^i$ are free (and hence projective) and finitely generated for every $i$, Whitehead's theorem tells us that $\xi$ is a chain homotopy equivalence. This means that there exist chain maps $\xi' \colon C_{\ast} \to C^{n-\ast}$, $p \colon C_\ast \to C_{\ast+1}$ and $p' \colon C^\ast \to C^{\ast-1}$ such that $ \xi \xi' = \operatorname{id} - \partial p - p \partial \textrm{ and } \xi' \xi = \operatorname{id} - \partial p' - p' \partial $.
\[ \xymatrix{ \cdots \ar[r] & C_{i+1} \ar[r]^{\partial_{i+1}} & C_{i} \ar[r]^{\partial_{i}} & C_{i-1} \ar[r] & \cdots \\ \cdots \ar[r] & C^{n-i-1} \ar[r]^{\partial^{n-i}} \ar[u]^{\xi_{n-i-1}} & C^{n-i} \ar[r]^{\partial^{n-i+1}} \ar[u]^{\xi_{n-i}} & C^{n-i+1} \ar[r] \ar[u]_{\xi_{n-i+1}} & \cdots \\ \cdots \ar[r] & C_{i+1} \ar[r]^{\partial_{i+1}} \ar[u]^{\xi'_{i+1}} & C_{i} \ar[r]^{\partial_{i}} \ar[u]_{\xi'_{i}} \ar@{.>}[uul]|<<<<<<<<{p_i} & C_{i-1} \ar[r] \ar[u]_{\xi'_{i-1}} \ar@{.>}[uul]|<<<<<<<<{p_{i-1}} & \cdots \\ \cdots \ar[r] & C^{n-i-1} \ar[r]^{\partial^{n-i}} \ar[u]^{\xi_{n-i-1}} & C^{n-i} \ar[r]^{\partial^{n-i+1}} \ar[u]_{\xi_{n-i}} \ar@{.>}[uul]|<<<<<<<<{p'_{n-i}} & C^{n-i+1} \ar[r] \ar[u]_{\xi_{n-i+1}} \ar@{.>}[uul]|<<<<<<<<{p'_{n-i+1}} & \cdots } \] Let $m$ be maximal such that $C_m \neq 0$. Note that $m\geqslant n$, since $H_n(G; \mathcal O) \neq 0$, as it contains $f$. We claim that we can modify the chain complex $C$ so that $m=n$, and the new complex is still a free resolution of $\mathbb{Z}$ by finitely generated $\mathbb{Z} G$-modules. If $m=n$ then we are done. Suppose that $m>n$. We modify $C$ to form a new chain complex, $D$, as follows: $D$ is identical to $C$ except $D_m = 0$, and $D_{m-2} = C_{m-2} \oplus C_m$; the boundary map $D_{m-1} \to D_{m-2}$ is equal to the transpose $(\partial_{m-1}, p_{m-1})^T$, and the boundary map $D_{m-2} \to D_{m-3}$ is equal to $(\partial_{m-2}, 0)$. \[ \xymatrixcolsep{3pc} \xymatrix{ C = & C_{m} \ar[r]^{\partial_{m}} & C_{m-1} \ar[r]^{\partial_{m-1}} & C_{m-2} \ar[r]^{\partial_{m-2}} & C_{m-3} \ar[r] & \cdots \\ D = & 0 \ar[r] & C_{m-1} \ar[r]^-{(\partial_{m-1}, p_{m-1})^T } \ar@{.>}[lu]^{p_{m-1}} & C_{m-2}\oplus C_m \ar[r]^-{(\partial_{m-2},0)} & C_{m-3} \ar[r] & \cdots } \] It is immediate that $D$ is a chain complex of finitely generated free modules; it is also immediate that it is exact everywhere, except possibly at $D_{m-1}$ and $D_{m-2}$.
Crucially, since $C^{n-m} = 0$, we have $\xi_{n-m} = 0$ and so $p_{m-1} \partial_m = \operatorname{id}$. Let $x$ be a cycle in $D_{m-1} = C_{m-1}$. By exactness of $C$, we see that $x \in \operatorname{im} \partial_{m}$; let $y$ denote a preimage of $x$ in $C_m$. Since $x$ is a cycle, we have $p_{m-1}(x) = 0$. But $p_{m-1}(x) = p_{m-1} \partial_m(y) = y$, and so $y=0$ and thus $x=0$. Now let $x \in D_{m-2}$ be a cycle. We write $x = (x_0,x_1)^T \in C_{m-2} \oplus C_{m}$; being a cycle means that $x_0$ is a cycle in $C_{m-2}$, and so there exists $y \in C_{m-1}$ such that $x_0 = \partial_{m-1}(y)$. We can pick $y$ so that $p_{m-1}(y) = 0$, using exactness of $C$ and the fact that $p_{m-1} \partial_m = \operatorname{id}$. Now \[ (\partial_{m-1},p_{m-1})^T(y+\partial_m(x_1)) = (x_0, p_{m-1}(y) + p_{m-1} \partial_m(x_1) )^T = (x_0,x_1)^T = x \] since $p_{m-1}(y) = 0$ (by construction) and $p_{m-1} \partial_m = \operatorname{id}$. We replace $C$ by $D$; note that we can still define the chain homotopy equivalence $\xi$ between $D$ and its dual cochain complex. Alternatively, we can realise $D$ as the cellular chain complex of the universal cover of a finite CW-classifying space of $G$: we can remove each $m$-cell from $X$ and attach an $(m-2)$ sphere instead of it. Then we modify the attaching maps of the $(m-1)$ cells so that they coincide (in the universal cover) with the boundary map in $D$. The resulting space is aspherical, since $D$ is exact away from degree $0$. Since $m > n \geqslant 3$, the fundamental group did not change under these modifications. Note that when passing from $C$ to $D$ we did not alter $C_1$ nor the boundary map $\partial_1 \colon C_1 \to C_0$. We now repeat the argument until the length of the resolution is $n$. \end{proof} \begin{thm} \label{pd} Let $G$ be a Poincar\'e duality group in dimension $3$. Suppose further that $G$ is agrarian and of type $\typeF{}$. 
We have $\Sigma^\infty(G;\mathbb{Z})= \Sigma^1(G)$, and there exists a marked polytope $P$ such that for every $\phi \in H^1(G;\mathbb{R}) \s-\{0\}$ we have $\phi \in \Sigma^1(G)$ if and only if $\phi$ is marked. \end{thm} \begin{proof} \cref{pd finite complex} tells us that there exists a free resolution $C = (C_\ast, \partial_\ast)$ of length $3$ of the trivial $\mathbb{Z} G$-module $\mathbb{Z}$ by finitely generated $\mathbb{Z} G$-modules. We also know that $C_0$ has rank $1$, and $\partial_1 = (1-s_i)_{i}$, where $\S = \{s_1, \dots, s_k \}$ is a finite generating set of $G$. Take $\phi \in H^1(G;\mathbb{R}) \s- \{0\}$, and take $s \in \S$ with $\phi(s) \neq 0$. Let $U_\phi$ denote a closed neighbourhood of $\phi$ such that for every $\psi \in U_\phi$ we have $\psi(s) \neq 0$. Assume further that $U_\phi$ is the convex hull of finitely many characters. It is immediate that $\partial_1 \otimes \widehat{\mathbb{Z} G}^{U_\phi}$ is onto (since $1-s$ is invertible in $\widehat{\mathbb{Z} G}^{U_\phi}$), and so $H_0(C \otimes \widehat{\mathbb{Z} G}^{U_\phi}) = 0$. Poincar\'e duality tells us that the transpose of $\partial_3 \otimes \widehat{\mathbb{Z} G}^{U_\phi}$ is onto as well. Thus, since $C_3 \otimes \widehat{\mathbb{Z} G}^{U_\phi}$ is free, we see that it is a summand of $C_2 \otimes \widehat{\mathbb{Z} G}^{U_\phi}$. Therefore we have \[ C_2 \otimes \widehat{\mathbb{Z} G}^{U_\phi} = C_3 \otimes \widehat{\mathbb{Z} G}^{U_\phi} \oplus C_2' \] for some (stably free) $\widehat {\mathbb{Z} G}^{U_\phi}$-module $C_2'$. We will modify the complex $C$ by taking the direct sum with the complex \[ \xymatrix{ C_3 \ar[r]^\operatorname{id} & C_3 \ar[r] & 0 } \] (where the $0$ module lies in degree $0$). It is clear that the new chain complex $D = (D_\ast, \delta_\ast)$ is a free resolution of $\mathbb{Z}$ by finitely generated $\mathbb{Z} G$-modules, and thus it computes the homology of $G$ in the same way as $C$ did.
Let $k_i$ denote the rank of the free module $D_i$; we have already mentioned that $k_0=1$. Consider the homology of $D \otimes \mathcal{D}$. Since $\mathcal{D}$ is a skew-field, we see that the Euler characteristic of $D \otimes \mathcal{D}$ is $k_0 - k_1 + k_2 - k_3$. But, since $G$ is a Poincar\'e duality group of odd dimension, it is immediate that this Euler characteristic must be equal to $0$. So \[ k_1 + k_3 = k_2 + 1 \] We have \begin{align*} D_2 \otimes \widehat{\mathbb{Z} G}^{U_\phi} &= C_2 \otimes \widehat{\mathbb{Z} G}^{U_\phi} \oplus C_3 \otimes \widehat{\mathbb{Z} G}^{U_\phi} \\ &= \big( C_3 \otimes \widehat{\mathbb{Z} G}^{U_\phi} \oplus C_2' \big) \oplus C_3 \otimes \widehat{\mathbb{Z} G}^{U_\phi} \\ &\cong C_3 \otimes \widehat{\mathbb{Z} G}^{U_\phi} \oplus C_2 \otimes \widehat{\mathbb{Z} G}^{U_\phi} \end{align*} The last isomorphism implies the existence of an invertible matrix $M$ over $\widehat{\mathbb{Z} G}^{U_\phi}$ (the change of basis matrix of $D_2 \otimes \widehat{\mathbb{Z} G}^{U_\phi}$), such that $M \delta_3$ is the identity $k_3 \times k_3$ matrix extended at the bottom by the $(k_2-k_3) \times k_3$ zero matrix. Now we will replace the matrix $M$ by a matrix $M_0$ over $\mathbb{Z} G$ by truncating the entries of $M$ (the argument is very similar to the one used in \cref{entries far away}). Recall that $U_\phi$ is the convex hull of finitely many characters, say $\rho_1, \dots, \rho_l$. Pick $C \in \mathbb{R}$ such that $C + \rho_i(g) > 0$ for every $i$ and every $g \in G$ appearing in the support of some entry of $\delta_3$ (which is a matrix over $\mathbb{Z} G$). 
Now, let $M_0$ denote a matrix over $\mathbb{Z} G$ obtained from $M$ by truncating the entries of $M$ to the subset $\bigcup_i \rho_i^{-1}\big( (0,C) \big)$ of $G$ -- formally speaking, we treat the entries of $M$ as functions $G \to \mathbb{Z}$: we first restrict these functions to $\bigcup_i \rho_i^{-1}\big( (0,C) \big)$ and then extend back to functions from $G$ by setting their values outside of $\bigcup_i \rho_i^{-1}\big( (0,C) \big)$ to be $0$. By the choice of $C$, we see that $M_0 \delta_3$ is a $\rho_i$-identity for every $i$. Now, a simple convexity argument immediately shows that in fact $M_0 \delta_3$ is a $\psi$-identity for every $\psi \in U_\phi$. The fact that $M_0 \delta_3$ is a $\psi$-identity implies that the span of the last $(k_2-k_3)$ basis vectors of $D_2\otimes \widehat{\mathbb{Z} G}^{U_\phi}$ is a complement to the image of the (injective) map $M_0 \delta_3 \colon D_3\otimes\widehat{\mathbb{Z} G}^{U_\phi} \to D_2\otimes \widehat{\mathbb{Z} G}^{U_\phi}$. Therefore, when computing the homology of $D \otimes \widehat{\mathbb{Z} G}^{U_\phi}$, we can disregard $D_3$ and replace $D_2$ by this complement $D'_2$. This already tells us that $H_i(G;\widehat {\mathbb{Z} G}^\psi) = 0$ for every $i>2$ and every $\psi \in U_\phi$. Crucially, the differential $D'_2 \to D_1$ is a matrix defined over $\mathbb{Z} G$, say $A$. The matrix $A$ is a $k_1 \times (k_2-k_3)$ matrix. But $k_2 - k_3 = k_1 - 1$, and so $A$ has one more row than it has columns. We now form $A_\phi$ by removing the row of $A$ corresponding to $s$, that is, if $s=s_i$ then we remove the $i^{th}$ row. Again, $A_\phi$ is defined over $\mathbb{Z} G$, and $\phi \in \Sigma^1(G)$ if and only if $A_\phi$ is right-invertible. But, by definition, $A_\phi$ is right-invertible if and only if $\phi \in \Sigma(A_\phi)$. Moreover, $\phi \in \Sigma^2(G;\mathbb{Z})$ if and only if $\phi \in \Sigma^1(G)$ and $A_\phi$ is injective.
We have already seen that a result of Bieri~\cite[Theorem 3]{Bieri2007} tells us that if $A_\phi$ is right-invertible then it is also injective. Hence we obtain: $\phi \in \Sigma^1(G)$ if and only if $\phi \in \Sigma^2(G;\mathbb{Z})$ if and only if $\phi \in \Sigma(A_\phi)$. We have also already shown that higher homology groups of $G$ with $\widehat{\mathbb{Z} G}^{\phi}$ coefficients are trivial, and so $\Sigma^1(G) = \Sigma^\infty(G)$. By \cref{bns matrix}, $\phi \in \Sigma(A_\phi)$ is equivalent to $\phi$ being marked by a marking of $P(A_\phi)$. Now, we observe that there are finitely many characters $\phi_1, \dots, \phi_n$ such that every character in $H^1(G;\mathbb{R}) \s- \{0\}$ is contained in at least one of the neighbourhoods $U_{\phi_i}$ up to positive homothety -- this follows immediately from the compactness of the unit sphere in $H^1(G;\mathbb{R})$. Therefore, $U_{\phi_1}, \dots, U_{\phi_n}$ and $P(A_{\phi_1}), \dots, P(A_{\phi_n})$ form an atlas of markings, and we are done after an application of \cref{one marked polytope}, which combines the marked polytopes $P(A_{\phi_i})$ into a single marked polytope $P$. \end{proof} Note that if we were to strengthen the hypothesis by assuming $G$ to satisfy the Atiyah conjecture, it is not clear whether we would be able to extract a stronger result -- it is not even clear that the $L^2$-torsion polytope (even though it does exist) would be a single polytope; more importantly for us, it is not clear what the contribution to the polytope coming from the top-degree differential is. In the case of an honest $3$-manifold, it is the same as the contribution of the differential of the lowest degree, but this does not seem to follow from Poincar\'e duality alone. \subsection{\texorpdfstring{$3$}{3}-manifolds} \label{sec 3mfld} We will now give a new proof of the following theorem of Thurston. \begin{thm}[Thurston {\cite[Theorem 5]{Thurston1986}}] \label{thurstons thm} Let $M$ be a compact, oriented $3$-manifold. 
The set $F$ of cohomology classes in $H^1(M)$ representable by non-singular closed $1$-forms is some union of the cones on open faces of $B_x$, minus the origin. The set of elements in $H^1(M;\mathbb{Z})$ whose Lefschetz dual is represented by a fibre of a fibration consists of the set of all lattice points in $F$. \end{thm} The object $B_x$ appearing above is the unit ball of the \emph{Thurston semi-norm} $x \colon H^1(M;\mathbb{R}) \to [0,\infty)$. The semi-norm is first defined on the integral characters, then defined on positive multiples of such characters by the formula $x(\lambda \phi) = \lambda x(\phi)$, and lastly it is extended by continuity to the whole of $H^1(M;\mathbb{R})$. Given an integral character $\phi$ (that is a character with $\operatorname{im} \phi = \mathbb{Z}$, the standard copy of $\mathbb{Z}$ in $\mathbb{R}$), we define \[ x(\phi) = \min_\Sigma -\chi_-(\Sigma) \] where $\Sigma$ runs through all embedded (not necessarily connected) oriented surfaces in $M$ dual to $\phi$ via Poincar\'e--Lefschetz duality, and $\chi_-(\Sigma)$ denotes the sum of the Euler characteristics of the connected components of $\Sigma$ which are not spheres. Thurston also defined the polytope $B_{x^\ast} \subseteq H_1(M;\mathbb{R})$ dual to $B_x$; he proved in \cite[Theorem 2]{Thurston1986} that $B_{x^\ast}$ is an integral polytope. He also proved in \cite[Theorem 3]{Thurston1986} that the open faces of $B_{x}$ forming the union of cones $F$ are all faces of maximal dimension (called \emph{fibred faces}) -- on the level of the dual polytope $B_{x^\ast}$ this translates to the statement that $F$ is the union of some duals of vertices of $B_{x^\ast}$. 
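As a standard illustration of these notions (not taken from the discussion above): if $M$ is a closed, oriented $3$-manifold fibring over $\mathbb S^1$ with fibre a closed orientable surface $\Sigma_g$ of genus $g \geqslant 1$, and $\phi$ is the corresponding fibred class, then the fibre minimises the complexity among all surfaces dual to $\phi$, and so with the convention above \[ x(\phi) = -\chi(\Sigma_g) = 2g-2. \] In particular $x(\phi) = 0$ when the fibre is a torus, which illustrates why $x$ is in general only a semi-norm and why the unit ball $B_x$ need not be compact.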
The identification of the set of elements in $H^1(M;\mathbb{Z})$ whose Lefschetz dual is represented by a fibre of a fibration with the set of all (integral) lattice points in $F$ follows immediately from the work of Tischler~\cite{Tischler1970}: one has to observe that (topological) $3$-manifolds can always be taken to be smooth -- it was shown by Moise~\cite[Theorems 1 and 3]{Moise1952} that $3$-manifolds can be triangulated in such a way that closed stars of vertices are piecewise-linearly homeomorphic to a simplex in $\mathbb{R}^3$. This is enough to construct a smooth structure. Now let us argue that without loss of generality one may take $M$ to be connected in the statement of Thurston's theorem: suppose that we have shown the result for connected manifolds, and let $M = M_1 \sqcup \dots \sqcup M_m$ be a decomposition into connected components. Note that $H^1(M;\mathbb{R}) = \prod_i H^1(M_i;\mathbb{R})$, and the Thurston norm $x$ restricted to $H^1(M_i;\mathbb{R})$ is equal to the Thurston norm $x_i$ on $M_i$. Thus, $B_{x^\ast}$ is simply the product of the polytopes $B_{x_i^\ast}$, and duals of faces of $B_{x^\ast}$ are products of duals of corresponding faces of $B_{x_i^\ast}$. Now, a cohomology class in $H^1(M;\mathbb{R})$ is representable by a non-singular closed $1$-form if and only if its component in every $H^1(M_i;\mathbb{R})$ is. When $M$ is connected, Bieri--Neumann--Strebel observed in \cite[Corollary F]{Bierietal1987} that $F = \Sigma^1(\pi_1(M))$, thus connecting Thurston's result to the BNS invariants. Let us summarise this discussion in a statement that is equivalent to Thurston's theorem above. \begin{thm}[Reformulation of Thurston's theorem] Let $M$ be a compact, connected, oriented $3$-manifold, and let $G = \pi_1(M)$. There exists a marking of vertices of $B_{x^\ast}$ such that for every $\phi \in H^1(G;\mathbb{R}) \s- \{0\}$ we have $\phi \in \Sigma^1(G)$ if and only if $\phi$ is marked. 
\end{thm} \begin{proof} If $\Sigma^1(G) = \emptyset$, then the statement holds vacuously. Let us assume that $\Sigma^1(G) \neq \emptyset$. We will first argue that $G$ satisfies the Atiyah conjecture. Since $\Sigma^1(G)$ is open (by \cref{bns open orig}), it contains an integral surjective character $\phi \colon G \to \mathbb{Z}$, and so $M$ fibres smoothly over the circle (by \cite{Tischler1970}). The fibre must be a compact orientable surface. Irrespective of whether it is closed or not, its fundamental group is residually \{torsion-free nilpotent\} (see e.g. \cite[Corollary 2]{Baumslag2010}). Note that residual nilpotence is equivalent to saying that the intersection of all the terms in the lower central series is trivial. But the lower central series consists of characteristic subgroups, and so any extension of a residually nilpotent group by $\mathbb{Z}$ is residually solvable. Moreover, every extension of a residually \{torsion-free nilpotent\} group by $\mathbb{Z}$ is residually \{torsion-free solvable\}. Such groups satisfy the Atiyah conjecture -- they are torsion-free elements of the class $\mathcal C_2$ of \cref{atiyah groups}. In particular, our fundamental group $G$ satisfies the Atiyah conjecture. \smallskip By a result of L\"uck~\cite[Theorem 1.39]{Lueck2002}, $M$ is $L^2$-acyclic, as it is a mapping torus over a finite CW complex. Also, since $M$ fibres with fibre an oriented surface, it is immediate that either $M$ is homeomorphic to $\mathbb S^2 \times \mathbb S^1$ (as there are no non-trivial fibrations over $\mathbb S^1$ with fibre $\mathbb S^2$), or $M$ is aspherical, since the circle and any oriented surface which is not $\mathbb S^2$ are aspherical. In the latter case it is also clear that the boundary of $M$ is either empty or a collection of tori. When $M \cong \mathbb S^1 \times \mathbb S^2$, then $\Sigma^1(G) = H^1(G;\mathbb{R}) \s- \{0\}$, and we declare all faces of $B_{x^\ast}$ marked.
One can compute $B_{x^\ast}$ directly, and it turns out to be a single point, the origin of $H^1(G;\mathbb{R})$. Thus $B_{x^\ast}$ is a polytope with marked vertices. \smallskip We are now ready for the main case: $M$ is an $L^2$-acyclic aspherical manifold with empty or toroidal boundary whose fundamental group satisfies the Atiyah conjecture. At this point we can already use \cref{pd}, but since we have the Atiyah conjecture at our disposal and the manifold structure, we will actually learn more about the marked polytope occurring in \cref{pd}. Pick a finite CW-structure for $M$, and let $C$ denote the cellular chain complex of the universal cover of $M$. The complex $C$ is a free $\mathbb{Z} G$-resolution of the trivial module $\mathbb{Z}$, since $M$ is aspherical. Suppose first that $M$ is closed. McMullen in the proof of~\cite[Theorem 5.1]{McMullen2002} shows that we may take $C$ as follows: \[ \xymatrix{ C= & C_3 \ar[r]^{\partial_3} & C_2 \ar[r]^{\partial_2} & C_1 \ar[r]^{\partial_1} & C_0 } \] where $C_0$ and $C_3$ are of rank one, $C_1$ and $C_2$ are of the same rank, $\partial_3$ is the transpose of $\partial_1$, which in turn is the vector $(1-s)_{s \in \S}$ with $\S$ being a finite generating set of $G$ as usual. For any $\phi \in H^1(G;\mathbb{R}) \s- \{0\}$ there exists $s_i \in \S$ with $\phi(s_i) \neq 0$. Now let $A_i$ denote the matrix obtained from $\partial_2$ by removing the $i^{th}$ row and the $i^{th}$ column. The matrix $A_i$ is a square matrix over $\mathbb{Z} G$, and the structure of $\partial_1$ and $\partial_3$, together with the invertibility of $1-s_i$ over $\mathbb{Z} G^\phi$, immediately imply that $H_1(G;\widehat {\mathbb{Z} G}^\phi)=0$ if and only if $\phi \in \Sigma(A_i)$. Thus $\Sigma(A_i)$ induces a marking of the vertices of $P(A_i)$, using \cref{bns matrix} as before. 
Since $G$ is $L^2$-acyclic, the $L^2$-torsion polytope $P_{L^2}(G)$ is defined, and we have $P_{L^2}(G) = P(A_i) - 2 P(1-s_i)$ -- this follows from additivity of the universal $L^2$-torsion, see \cite[Lemma 1.9]{FriedlLueck2017}. In fact, $P_{L^2}(G)$ is a single polytope: if the rank of $\fab{G}$ is at least $2$, it follows from \cref{flat polys}; otherwise it follows from \cite[Theorem 2.39]{FriedlLueck2017}, since the induced function $\| \cdot \|_{P_{L^2}(G)}$ can be a semi-norm only when $P_{L^2}(G)$ is a single polytope (in the case of $\fab{G}$ of rank $1$). Now we need to define a marking on the vertices of $P_{L^2}(G)$, using the given markings on the polytopes $P(A_i)$. This is achieved by \cref{one marked L2 polytope}, taking $U_i = \{\psi \in H^1(G;\mathbb{R}) \s- \{0\} \mid \psi(s_i) \neq 0 \}$. \smallskip Now suppose that $M$ has boundary. Again, using the proof of~\cite[Theorem 5.1]{McMullen2002}, we see that \[ \xymatrix{ C= & C_2 \ar[r]^{\partial_2} & C_1 \ar[r]^{\partial_1} & C_0 } \] where $C_1$ is of rank one greater than $C_2$. We argue precisely as before, reducing the problem of whether $\phi \in \Sigma^1(G)$ to whether the matrix $A_i$ obtained by removing the column of $\partial_2$ corresponding to $s_i$ admits a right-inverse over $\widehat {\mathbb{Z} G}^\phi$ (as usual, we have $\phi(s_i) \neq 0$). In this case we have $P_{L^2}(G) = P(A_i) - P(1-s_i)$, and the argument proceeds exactly as before. It was shown by Friedl--L\"uck~\cite[Theorem 3.37]{FriedlLueck2017} that $P_{L^2}(G)$ coincides with $B_{x^\ast}$. This concludes the proof. \end{proof} It is immediate from Thurston's construction that the equality of semi-norms $x = \| \cdot \|_{B_{x^\ast}}$ is satisfied. Friedl--L\"uck proved that $B_{x^\ast} = P_{L^2}(G)$ when $M$ is aspherical, not homeomorphic to $\mathbb S^1 \times \mathbb S^2$, and $G$ satisfies the Atiyah conjecture. 
This equality yields a new point of view on the value $x(\phi)$ for $\phi$ with $\operatorname{im} \phi = \mathbb{Z}$: when $\ker \phi$ is finitely generated, it was known that $x(\phi)$ gives minus the Euler characteristic of the surface, which is the fibre of the corresponding fibration. When $\ker \phi$ is not finitely generated, we see now that $x(\phi)$ gives minus the $L^2$-Euler characteristic of the kernel. \bibliographystyle{math}
\section{Background} \label{sec:background} \subsection{Canonical Diagrams} Several approaches have been proposed to check an arithmetic circuit against its functional specification. Different variants of canonical, graph-based representations have been proposed, including Binary Decision Diagrams (BDDs) \cite{bryant:1986-bdd}, Binary Moment Diagrams (BMDs) \cite{bmd95} \cite{bryant:tr97}, Taylor Expansion Diagrams (TED) \cite{ted:tcomp06}, and other hybrid diagrams. While BDDs have been used extensively in logic synthesis, their application to verification of arithmetic circuits is limited by the prohibitively high memory requirement for complex arithmetic circuits, such as multipliers. BDDs are being used, along with many other methods, for local reasoning, but not as a monolithic data structure \cite{kaivola:2009-intel}. BMDs and TEDs offer a better space complexity but require word-level information of the design, which is often not available or is hard to extract from bit-level netlists. In general, while canonical diagrams have been used extensively in logic synthesis, high-level synthesis, and verification, their application to the verification of large arithmetic circuits remains limited by their prohibitively high memory requirements \cite{ciesielski2015verification}\cite{kalla:tcad13}. \subsection{SAT, ILP and SMT Solvers} Arithmetic verification problems have been typically modeled using Boolean satisfiability (SAT). Several SAT solvers have been developed to solve Boolean decision problems, including ABC \cite{mishchenko:abc-2007}, MiniSAT \cite{sorensson:2005-minisat}, and others. Some of them, such as CryptoMiniSAT \cite{soos:PoS-2010}, specifically target {\sc xor}-rich circuits, and are potentially useful for arithmetic circuit verification, but are all based on a computationally expensive DPLL (Davis, Putnam, Logemann, Loveland) decision procedure \cite{davis1962machine}.
Some techniques combine automatic test pattern generation (ATPG) and modular arithmetic constraint-solving techniques for the purpose of test generation and assertion checking \cite{Cheng:tcad01}. Others integrate linear arithmetic constraints with Boolean SAT in a unified algebraic domain \cite{hsat:dac98}, but their effectiveness is limited by constraint propagation across the Boolean and word-level boundary. To avoid this problem, methods based on ILP models of arithmetic operators have been proposed \cite{Brinkman:aspdac02} \cite{LPSAT:jsa05}, but in general ILP techniques are known to be computationally expensive and not scalable to large scale systems. \textit{SMT solvers} depart from treating the problem in a strictly Boolean domain and integrate different well-defined theories (Boolean logic, bit vectors, integer arithmetic, etc.) into a DPLL-style SAT decision procedure \cite{SMT-book:2008}. Some of the most effective SMT solvers, potentially applicable to our problem, are Boolector \cite{niemetz:2015boolector}, Z3 \cite{de:2008-z3}, and CVC \cite{barrett2011cvc4}. However, SMT solvers still model functional verification as a decision problem and, as demonstrated by extensive experimental results, neither SAT nor SMT solvers can efficiently solve the verification problem of large arithmetic circuits \cite{kalla:tcad13} \cite{yu:2016-tcad-verification}. \subsection{Theorem Provers} Another class of solvers include Theorem Provers, deductive systems for proving that an implementation satisfies the specification, using mathematical reasoning. The proof system is based on a large and strongly problem-specific database of axioms and inference rules, such as simplification, term rewriting, induction, etc. Some of the most popular theorem proving systems are: HOL \cite{gordon:1993-introduction}, PVS \cite{owre1992pvs}, ACL2 \cite{brock:1996-acl2}, and the term rewriting method described in \cite{Vasudevan:tcomp07}. 
These systems are characterized by high abstraction and powerful logic expressiveness, but they are highly interactive, require intimate domain knowledge, extensive user guidance, and expertise for efficient use. The success of verification using theorem provers depends on the set of available axioms and rewrite rules, and on the choice and order in which the rules are applied during the proof process, with no guarantee for a conclusive answer \cite{kapur:1998-rrl}. \subsection{Computer Algebra Approaches} \label{sec:computer-algebra} The most advanced techniques that have potential to solve the arithmetic verification problems are those based on Symbolic Computer Algebra. The verification problem is typically formulated as a proof that the implementation satisfies the specification \cite{ciesielski2015verification}\cite{kalla:tcad13}\cite{farahmandi2015groebner}\cite{STABLE:date11}\cite{sayedformal:date-2016}. This is accomplished by performing a series of divisions of the specification polynomial $F$ by a set of implementation polynomials $B$, representing circuit components, the process referred to as reduction of $F$ modulo $B$. The polynomials $f_1,...,f_s \in B$ are called the {\it generators} of the ideal $J = \langle f_1,...,f_s \rangle$; the set of all simultaneous solutions to the system of equations $f_1=0, \dots, f_s=0$ is called the {\it variety} $V(J)$. The verification problem is then formulated as a test whether the specification $F$ vanishes on $V(J)$; this is established by testing whether $F$ belongs to the ideal $J$, known in computer algebra as the {\it ideal membership} testing problem \cite{kalla:tcad13}. The standard procedure to test whether $F \in J$ is to divide the polynomial $F$ by the polynomials \{$f_1,...,f_s$\} of $B$, one by one. The goal is to cancel, at each iteration, the leading term of $F$ using one of the leading terms of $f_1,...,f_s$. If the remainder $r$ of the division is 0, then $F$ vanishes on $V(J)$, proving that the implementation satisfies the specification.
However, if $ r \ne 0 $, such a conclusion cannot be made; $B$ may not be sufficient to reduce $F$ to 0, and yet the circuit may be correct. To reliably check if $F$ is reducible to zero, a {\it canonical} set of generators, $G=\{g_1,...,g_t\}$, called a {\it Gr{\" o}bner basis}, is needed. It has been shown that for combinational circuits with no feedback, certain conditions automatically make the set $B$ a Gr{\" o}bner basis \cite{stoffel:tcad04}. Specifically, if the polynomials $f_1,...,f_s \in B$ are ordered in reverse topological order of logic gates, from primary outputs to primary inputs, and the leading term of each polynomial is the output of a logic gate, then the set $B$ is automatically a Gr{\" o}bner basis. Some authors use Gaussian elimination, rather than explicit polynomial division, to speed up the polynomial reduction process \cite{kalla:tcad13}\cite{farahmandi2015groebner}. The polynomials corresponding to fanout-free logic cones can be precomputed to reduce the size of the problem \cite{farahmandi2015groebner}. The polynomial reduction technique has been successfully applied to both integer arithmetic circuits \cite{sayedformal:date-2016} and Galois field arithmetic \cite{kalla:tcad13}. Verification of Galois field arithmetic has been presented in \cite{kalla:tcad13} \cite{kalla:dac2014}. Formulation of problems in GF arithmetic takes advantage of known properties of Galois fields during polynomial reductions. Specifically, the problem reduces to ideal membership testing over a larger ideal that includes the ideal $J_0 = \langle x^2-x \rangle $ in ${\mathbb{F}}_2$, for each internal signal $x$ of the circuit. Inclusion of this ideal ensures that each signal assumes a binary value. In this paper, we provide a comparison between this technique and our approach.
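The reduction of $F$ modulo $B$ described above can be experimented with using off-the-shelf Gr{\" o}bner machinery. The following is a minimal sketch using sympy on a toy two-generator ideal over the rationals (not the circuit ideals used in the cited works); the {\tt GroebnerBasis.reduce} call is assumed to follow the current sympy API.

```python
from sympy import symbols, groebner

x, y = symbols('x y')

# Generators of the toy ideal J = <x^2 - y, y^2 - 1>.
G = groebner([x**2 - y, y**2 - 1], x, y, order='lex')

# Ideal membership test: divide F by the Groebner basis, inspect the remainder.
# F = x^4 - 1 = (x^2 + y)*(x^2 - y) + (y^2 - 1), so F lies in J.
_, r = G.reduce(x**4 - 1)
print(r)   # 0  -> F vanishes on V(J)

# A polynomial outside J leaves a nonzero remainder.
_, r2 = G.reduce(x + 1)
print(r2)  # x + 1
```

Because $G$ is a Gr{\" o}bner basis, the remainder is canonical: it is zero exactly when the dividend belongs to the ideal, which is the reliability property that an arbitrary generating set $B$ lacks.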
\subsection{Function Extraction} \textit{Function extraction} is an arithmetic verification method originally proposed in \cite{ciesielski2015verification} for arithmetic circuits in modular integer arithmetic $\mathbb{Z}_{2^m}$. It extracts a unique bit-level polynomial function implemented by the circuit directly from its gate-level implementation. Instead of expensive polynomial division, extraction is done by \textit{backward rewriting}, i.e., transforming the polynomial representing the encoding of the primary outputs (called the \textit{output signature}) into a polynomial at the primary inputs (the \textit{input signature}) using algebraic models of the logic gates of the circuit. That is, the rewriting is performed in a reverse topological order. This technique has been successfully applied to large integer arithmetic circuits, such as 512-bit integer multipliers. However, it is not directly applicable to large Galois field multipliers because of a potentially exponential number of polynomial terms before the internal term cancellations take place during rewriting. Fortunately, arithmetic GF($2^{m}$) circuits offer an inherent parallelism which can be exploited in backward rewriting, without memory explosion. In the rest of the paper, we first describe how to apply such parallel rewriting in GF($2^{m}$) circuits while avoiding the memory explosion experienced in integer arithmetic circuits. Using this approach, we extract the function of each output bit in $\mathbb{F}_{2^m}$; the function is represented as a {\it pseudo-Boolean polynomial} expression, where all variables are Boolean. Finally, we propose a method to reverse engineer the GF($2^m$) designs by analyzing these expressions. \section{Parallel Extraction in Galois Field} \label{sec:parallel_GF} In this section, we introduce our method for extracting the unique algebraic expressions of the output bits (e.g. Figure \ref{fig:4-bit}) using a computer algebra method.
This can be used to verify the GF($2^m$) multipliers when the binary encoding of inputs and output and the irreducible polynomial are given. We introduce a parallel function extraction framework in GF($2^m$), which allows us to individually extract the algebraic expression of each output bit. This framework is used for reverse engineering, since our reverse engineering approach is based on analyzing the algebraic expression of output bits in GF(2), as introduced in Section \ref{sec:introduction}. \subsection{Computer Algebraic model} { The circuit is modeled as a network of logic elements of arbitrary complexity, including basic logic gates (AND, OR, XOR, INV) and complex standard cell gates (AOI, OAI, etc.) generated by logic synthesis and technology mapping. We extend the algebraic model of Boolean operators developed in \cite{yu:2016-tcad-verification} for integer arithmetic to finite field arithmetic in $GF(2)$, i.e., modulo 2.} For example, the pseudo-Boolean model of XOR($a,b$)=$a+b$ $- 2ab$ is reduced to $(a + b - 2ab)$ mod $2$ = $(a + b)$ mod $2$. The following algebraic equations are used to describe basic logic gates in $GF(2^{m})$ \cite{kalla:tcad13}: \vspace{-4mm} \begin{equation} \begin{aligned} \text{~~} &\\ & \neg a = 1 + a \\ & a \wedge b = a\cdot b \\ & a \vee b = a + b + a\cdot b \\ & a \oplus b = a + b \end{aligned} \label{eq:boolean-poly} \end{equation} \subsection{Outline of the Approach} { Similarly to the work of \cite{ciesielski2015verification} and \cite{yu:2016-tcad-verification}, the arithmetic function computed by the circuits is obtained by transforming (rewriting) the polynomial representing the encoding of the primary outputs (called \textit{output signature}) into the polynomial at the primary inputs, the \textit{input signature}. The output signature of a $GF(2^{m})$ multiplier is $Sig_{out} = \sum _{i=0} ^{m-1} z_i x^i$, with $z_i \in GF(2)$.
The input signature of a $GF(2^{m})$ multiplier is $Sig_{in}$ = $\sum _{i=0} ^{m-1} \mathbb{P}_i x^i$, with coefficients $\mathbb{P}_i \in GF(2)$ being product terms, and addition performed modulo 2. {If the irreducible polynomial $P(x)$ is provided, $Sig_{in}$ is known; otherwise, it will be computed by backward rewriting from $Sig_{out}$. The goal is to transform the output signature, $Sig_{out}$, using the polynomial representations of the internal logic elements (\ref{eq:boolean-poly}), into an input signature $Sig_{in}$ in $GF(2^m)$, which determines the arithmetic function (specification) computed by the circuit.} \textbf{Theorem 1:} \textit{Given a combinational arithmetic circuit in $GF(2^m)$, composed of logic gates, described by Eq. 1, the input signature $Sig_{in}$ computed by backward rewriting is unique and correctly represents the function implemented by the circuit in $GF(2^m)$.} \textbf{Proof:} The proof of correctness relies on the fact that each transformation step (rewriting iteration) is correct. That is, each internal signal is represented by an algebraic expression, which always evaluates to a {\it correct value} in $GF(2^{m})$. This is guaranteed by the correctness of the algebraic model in Eq. (\ref{eq:boolean-poly}), which can be proved easily by inspection. For example, the algebraic expression of \textit{XOR(a,b)} in $\mathbb{Z}_{2^m}$ is $a+b-2ab$. When implemented in $GF(2^{m})$, the coefficients in the expression must be in $GF(2)$, hence \textit{XOR(a,b)} in $GF(2^m)$ is represented by $a+b$. The proof of uniqueness is done by induction on $i$, the step of transforming polynomial $F_i$ into $F_{i+1}$. A detailed induction proof of this theorem is provided in \cite{ciesielski2015verification} for expressions in $\mathbb{Z}_{2^m}$.
\hfill $\square$ \begin{algorithm} \scriptsize \caption{Backward Rewriting in $GF(2^{m})$}\label{alg:commonlogic} \textbf{Input: Gate-level netlist of $GF(2^{m})$ multiplier}\\ \textbf{Input: Output signature $Sig_{out}$, and (optionally) input signature, $Sig_{in}$} \\ \textbf{Output: GF function of the design; return $Sig_{out}$==$Sig_{in}$} \begin{algorithmic}[1] \State $\mathcal{P}$=\{$p_{0},p_{1},...,p_{n}$\}: polynomials representing gate-level netlist \State $F_{0}$=$Sig_{out}$ \For{each polynomial $p_{i}$ $\in \mathcal{P}$} \For{output variable $v$ of $p_{i}$ in $F_{i}$} \State replace every variable $v$ in $F_{i}$ by algebraic expression of $p_{i}$ \State $F_{i}$ $\rightarrow$ $F_{i+1}$ \For{each monomial $M$ in $F_{i+1}$} \If {the coefficient of $M$\%2==0 \\~~~~~~~~~~~~~~~~~or $M$ is a constant, $M$\%2==0} \State remove $M$ from $F_{i+1}$ \EndIf \EndFor \EndFor \EndFor \\ \Return $F_{n}$ and $F_{n}=?Sig_{in}$ \end{algorithmic} \end{algorithm} Theorem 1, together with the algebraic model of Boolean gates (\ref{eq:boolean-poly}), provide the basis for polynomial reduction using backward rewriting. This is described by Algorithm 1. The method takes the gate-level netlist of a GF($2^{m}$) multiplier as input and first converts each logic gate into an algebraic expression using Eq. (1). The rewriting process starts with the output signature $F_{0}=Sig_{out}$ and performs rewriting in reverse topological order, from outputs to inputs. It ends when all the variables in $F_{i}$ are primary inputs, at which point it becomes the input signature $Sig_{in}$ \cite{ciesielski2015verification}. 
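The core loop of Algorithm 1 can be sketched in a few dozen lines of Python. A polynomial over GF(2) is a set of monomials, each monomial a frozenset of Boolean variable names ($x^2 = x$, so a product is just a set of variables); mod-2 addition is the symmetric difference of monomial sets, which implements the cancellations of lines 7--10 for free. The netlist below is an illustrative AND/XOR realisation of the $GF(2^2)$ multiplier with $P(x)=x^2+x+1$, with gate and wire names of our own choosing (it is not the mapped netlist of the figure).

```python
ONE = frozenset()                       # the constant-1 monomial

def var(name):
    return {frozenset([name])}

def mul(p, q):
    """Product of two GF(2) polynomials; toggling set membership reduces mod 2."""
    out = set()
    for m1 in p:
        for m2 in q:
            out ^= {m1 | m2}            # union of monomials implements x^2 = x
    return out

def gate_expr(op, ins):
    """Algebraic gate models of Eq. (1)."""
    ps = [var(w) for w in ins]
    if op == 'INV':
        return {ONE} ^ ps[0]            # 1 + a
    if op == 'AND':
        return mul(ps[0], ps[1])        # a*b
    if op == 'XOR':
        return ps[0] ^ ps[1]            # a + b
    if op == 'OR':
        return ps[0] ^ ps[1] ^ mul(ps[0], ps[1])  # a + b + a*b
    raise ValueError('unknown gate: ' + op)

def substitute(F, v, expr):
    """Replace every occurrence of variable v in F by expr, reducing mod 2."""
    out = set()
    for m in F:
        if v in m:
            rest = m - {v}
            for e in expr:
                out ^= {rest | e}
        else:
            out ^= {m}
    return out

def extract(netlist, out_bit):
    """Backward rewriting: netlist is in topological order (inputs first)."""
    F = var(out_bit)
    for wire, op, ins in reversed(netlist):
        F = substitute(F, wire, gate_expr(op, ins))
    return F

# Hypothetical netlist of a GF(2^2) multiplier, P(x) = x^2 + x + 1.
NETLIST = [
    ('p00', 'AND', ('a0', 'b0')),
    ('p01', 'AND', ('a0', 'b1')),
    ('p10', 'AND', ('a1', 'b0')),
    ('p11', 'AND', ('a1', 'b1')),
    ('z0',  'XOR', ('p00', 'p11')),
    ('t0',  'XOR', ('p01', 'p10')),
    ('z1',  'XOR', ('t0', 'p11')),
]

z0 = extract(NETLIST, 'z0')
z1 = extract(NETLIST, 'z1')
print(sorted(sorted(m) for m in z0))    # [['a0', 'b0'], ['a1', 'b1']]
print(sorted(sorted(m) for m in z1))    # [['a0', 'b1'], ['a1', 'b0'], ['a1', 'b1']]
```

The two extracted expressions match the coefficients of $x^0$ and $x^1$ in the expected input signature $Sig_{in}=(a_{0}b_{0}$+$a_{1}b_{1}$)+($a_{1}b_{1}$+$a_{1}b_{0}$+$a_{0}b_{1}$)$x$, and each output bit is extracted independently, which is the property exploited by the parallel framework.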
{Each iteration includes two basic steps: 1) substitute the variable of the gate output using the expression in the inputs of the gate (Eq.1), and name the new expression $F_{i+1}$ (lines 3 - 6); and 2) simplify the new expression in two ways: a) by eliminating terms that cancel each other (as in the integer arithmetic case \cite{ciesielski2015verification}), and b) by removing all the monomials (including constants) that reduce to 0 in GF($2$) (line 3 and lines 7 - 10). The algorithm outputs the arithmetic function of the design in GF($2^m$) after $n$ iterations, where $n$ is the number of gates in the netlist. The final expression $F_{n}=Sig_{in}$ can be used to verify if the circuit performs the desired arithmetic function by checking if the computed polynomial $Sig_{in}$ matches the expected specification, if known. This equivalence check can be readily performed using canonical word-level representations, such as BMD \cite{bmd95} or TED \cite{ted:tcomp06} which can efficiently check equivalence of two polynomials. Alternatively, if the specification is not known, the computed signature can serve as the specification extracted from the circuit.} \begin{figure}[t] \begin{center} \includegraphics[scale=0.35]{2-bit-gf-mapped.pdf} \caption{The gate-level netlist of post-synthesized and mapped 2-bit multiplier over GF($2^2$). The irreducible polynomial is $P(x)=x^{2}+x+1$.} \vspace{-2mm} \label{fig:netlist-2bit} \end{center} \end{figure} \input{2-bit-rewritting} {\bf Example 2} (Figure \ref{fig:netlist-2bit}): We illustrate our method using a post-synthesized 2-bit multiplier in $GF(2^2)$, shown in Figure \ref{fig:netlist-2bit}. The irreducible polynomial is $P(x)$ = $x^{2}+x+1$. The output signature is $Sig_{out} = z_{0}$+$z_{1}x$, and input signature is $Sig_{in}=(a_{0}b_{0}$+$a_{1}b_{1}$)+($a_{1}b_{1}$+$a_{1}b_{0}$+$a_{0}b_{1}$)$x$. 
First, $F_{init}=Sig_{out}$ is transformed into $F_{8}$ using the polynomial of gate $g_{8}$, $z_{1}$=$i_{5}+i_{6}$, and simplified to $F_{8}=z_{0}+i_{5}x+i_{6}x$. Then, the polynomials $F_{i}$ are successively derived from $F_{i+1}$ and checked for possible reductions. The first reduction happens when $F_{5}$ is transformed into $F_{4}$, where $i_{4}$ (at gate $g_4$) is replaced by ($1+a_{0}b_{0}$). After simplification, a monomial $2x$ is identified and removed modulo 2 from $F_{4}$. Similar reductions are applied during the transformations $F_{3} \rightarrow F_{2}$ and $F_{2} \rightarrow F_{1}$. Finally, the function of the design is extracted as expression $F_{1}$. The complete rewriting process is shown in Figure \ref{fig:rewriting}. We can see that $F_{1}=Sig_{in}$, which indicates that the circuit indeed implements the $GF(2^2)$ multiplication with $P(x)$=$x^{2}+x+1$. An important observation is that the potential reductions take place only among terms associated with the same degree of the signature polynomial ($Sig_{out}$). In other words, the reductions happen in the logic cone of every output bit \textit{independently} of other bits, regardless of logic sharing between the cones. For example, the reductions in $F_{4}$ and $F_{2}$ happen within the logic cone of output $z_{1}$ only. Similarly, in $F_{1}$, the reduction is within the logic cone of $z_{0}$. Details of the proof are provided in \cite{yu:aspdac17}. \subsection{Implementation} \label{sec:implementation} This section describes the implementation of our parallel verification method for Galois field multipliers. Our approach takes the gate-level netlist as input, and outputs the extracted function of the design. It includes four steps: {\bf Step1: Convert netlist to equations.} Parse the gate-level netlist into algebraic equations based on Equation 1. The equations are listed in topological order, to be rewritten by backward rewriting in the next step.
$m$ copies of this equation file are made for a GF($2^m$) multiplier. {\bf Step2: Generate signatures.} Split the output signature of the GF($2^{m}$) multiplier into $m$ polynomials, with $Sig_{out\_i}$=$z_{i}$. Insert the new \textit{signatures} into the $m$ copies of the equation file generated in Step1. Each signature represents a single output bit. {\bf Step3: Parallel extraction.} Apply Algorithm 1 to each equation file to extract the polynomial expression of each output in parallel. In contrast to integer arithmetic circuits \cite{ciesielski2015verification}, the internal expression of each output bit does not offer any polynomial reduction (\textit{monomial cancellation}) with other bits. Ideally, our approach can extract a GF($2^m$) multiplier using $m$ threads. However, due to limited computing resources, this is impractical when $m$ is very large. Hence, our approach puts a limit on the number of parallel threads $T$ ($T$ = 5, 10, 20 and 30 have been tested in this work). This process is illustrated in Figure \ref{fig:flow}. The $m$ extraction tasks are organized into several task sets, ordered from LSB to MSB. In each set, the extractions are performed in parallel. Since the runtime of each extraction within a set can differ, a task in the next set starts as soon as any previous task terminates. {\bf Step4: Finalization.} Compute the final function of the multiplier. Once the algebraic expression of each output bit in GF($2$) is computed, our method computes the final function by constructing $Sig_{out}$ using the rewriting process of Step3. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.50]{parallel_threads.pdf} \caption{Step3: parallel extraction of a GF($2^m$) multiplier with $T$ threads.} \label{fig:flow} \end{center} \end{figure} \noindent { Our algorithm uses a data structure that efficiently implements iterative substitution and elimination during backward rewriting.
It is similar to the data structure employed in function extraction for integer arithmetic circuits \cite{ciesielski2015verification}, suitably modified to support simplifications in finite field algebra. Specifically, in addition to cancellation of terms with opposite signs, it performs modulo 2 reduction of monomials and constants.} The data structure maintains a record of the terms (monomials) in the expression that contain the variable to be substituted. It reduces the cost of finding the terms whose coefficients change during substitution. Each element represents one monomial, consisting of the monomial's variables and its coefficient. The expression data structure is a C++ object that represents a pseudo-Boolean expression and contains all the elements in the data structure. It supports both fast addition and fast substitution with two C++ maps, implemented as binary search trees: a terms map and a substitution map. This data structure supports two cases of simplification: 1) after substitution, the coefficients of all the monomials are updated and the monomials with coefficient zero are eliminated; 2) the monomials whose coefficient modulo 2 evaluates to 0 are eliminated. The second case is applied after each substitution. \input{parallel_example.tex} {\bf Example 3} (Figure \ref{fig:parallel}): We illustrate our parallel extraction method using the 2-bit multiplier in GF($2^2$) from Figure \ref{fig:netlist-2bit}. The output signature $Sig_{out}$ = $z_0$+$z_{1}x$ is split into two signatures, $Sig_{out0}=z_0$ and $Sig_{out1}=z_1$. Then, the rewriting process is applied to $Sig_{out0}$ and $Sig_{out1}$ in parallel. When $Sig_{out0}$ and $Sig_{out1}$ have been successfully extracted, the two signatures are merged into $Sig_{out0}$ + $x \cdot $$Sig_{out1}$, resulting in the polynomial $Sig_{in}$. In Figure \ref{fig:rewriting}, we can see that elimination happens three times ($F_4$, $F_2$, and $F_1$).
As expected, each elimination takes place within a single element of GF($2^m$). In Figure \ref{fig:parallel}, one elimination in $Sig_{out0}$ and two eliminations in $Sig_{out1}$ have been performed independently, as shown earlier in Example 2. \section{Galois Field Multiplication} \label{GF-multiplication} A Galois field (GF) is a number system with a finite number of elements and two main arithmetic operations, addition and multiplication; other operations, such as division, can be derived from these two \cite{paar2009understanding}. A Galois field with $p$ elements is denoted GF($p$). The most widely used finite fields are \textit{Prime Fields} and \textit{Extension Fields}, particularly {\it Binary Extension Fields}. A prime field, denoted GF($p$), is a finite field consisting of the set of integers \{$0,1,...,p-1$\}, where $p$ is a prime number, with addition and multiplication performed \textit{modulo p}. A binary extension field, denoted GF($2^m$) (or $\mathbb{F}_{2^m}$), is a finite field with $2^m$ elements. Unlike in prime fields, however, the operations in extension fields are not computed \textit{modulo $2^{m}$}. Instead, in one possible representation (called \textit{polynomial basis}), each element of GF($2^m$) is a polynomial of degree less than $m$ with coefficients in GF(2), and arithmetic is performed modulo $P(x)$. Addition of field elements is the usual addition of polynomials, with coefficient arithmetic performed modulo 2. Multiplication of field elements is performed modulo an {\it irreducible polynomial} $P(x)$ of degree $m$ with coefficients in GF(2). The irreducible polynomial $P(x)$ is analogous to the prime number $p$ in prime fields GF($p$).
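A compact way to see these rules in action is the following Python sketch (an illustration only; `gf_mul` is a hypothetical helper, with field elements encoded as bitmasks whose bit $i$ is the coefficient of $x^i$): addition is plain XOR, and multiplication is a carry-less product followed by reduction modulo $P(x)$.

```python
# Sketch of GF(2^m) multiplication in polynomial basis (hypothetical
# helper, not from the paper's tool). Elements are bitmasks: bit i is
# the coefficient of x^i. Addition is XOR; multiplication is carry-less
# multiplication followed by reduction modulo P(x).

def gf_mul(a, b, m, px):
    """Multiply a*b in GF(2^m); px is the bitmask of P(x) (degree m)."""
    prod = 0
    while b:                     # carry-less (XOR) multiplication
        if b & 1:
            prod ^= a
        a <<= 1
        b >>= 1
    for i in range(2 * m - 2, m - 1, -1):   # fold degrees >= m
        if prod & (1 << i):
            prod ^= px << (i - m)           # add x^(i-m) * P(x) (mod 2)
    return prod

# GF(2^2) with P(x) = x^2 + x + 1 (bitmask 0b111):
# x * (x + 1) = x^2 + x = (x + 1) + x = 1   (mod P)
assert gf_mul(0b10, 0b11, 2, 0b111) == 1
```

As a further sanity check, in GF($2^4$) with $P(x)=x^4+x+1$ (bitmask `0b10011`), $x^3 \cdot x$ reduces to $x+1$, i.e. `gf_mul(0b1000, 0b10, 4, 0b10011)` yields `0b0011`.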
{ In this work, we focus on the verification problem of GF($2^m$) multipliers, which appear in many cryptography and some DSP applications.} \subsection{GF Multiplication Principle} Two different GF multiplication structures, constructed using different irreducible polynomials $P_{1}(x)$ and $P_{2}(x)$, are shown in Figure \ref{fig:4-bit-gf-multiplication}. An integer multiplication takes two $m$-bit operands as input and generates a $2m$-bit word, where the values computed at the lower significant bits ripple through the carry chain all the way to the most significant bit (MSB). In contrast, in GF($2^m$) implementations the number of outputs is reduced to $m$ using the irreducible polynomial $P(x)$. The product terms are added in each column (output bit position) modulo 2, hence there is no carry propagation. For example, to represent the result in GF($2^{4}$), with only four output bits, the four most significant bits in the result of the integer multiplication have to be reduced to GF($2^4$). The result of such a reduction is shown in Figure \ref{fig:4-bit-gf-multiplication}. In GF($2^4$), the input and output operands are represented using polynomials $A(x)$, $B(x)$ and $Z(x)$, where $A(x)$=$\sum_{i=0}^{3} a_{i}\cdot x^{i}$, $B(x)$=$\sum_{i=0}^{3} b_{i}\cdot x^{i}$, and $Z(x)$=$\sum_{i=0}^{3} z_{i}\cdot x^{i}$, respectively. \textbf{Example 1:} The function of each multiplication bit $s_{i}$ ($i$ $\in$ [0, 6]) is represented using polynomials in GF(2), namely: $s_{0}$=$a_{0}b_{0}$, $s_{1}$=$a_{1}b_{0}$+$a_{0}b_{1}$, etc., up to $s_{6}$=$a_{3}b_{3}$\footnote{For polynomials in GF(2), "+" is computed modulo 2.}. The output bits $z_{i}$ ($i \in$ [0, 3]) are computed modulo the irreducible polynomial $P(x)$. Using $P_{2}(x)$=$x^4$+$x$+1, we obtain: $z_{0}$=$s_{0}$+$s_{4}$, $z_{1}$=$s_{1}$+$s_{4}$+$s_{5}$, $z_{2}$=$a_0$$b_2$+$a_1$$b_1$+$a_2$$b_0$+$a_2$$b_3$+$a_3$$b_2$+$a_3b_3$, and $z_{3}$=$a_0$$b_3$+$a_1$$b_2$+$a_2$$b_1$+$a_3$$b_0$+$a_3b_3$.
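The output-bit expressions above can be checked exhaustively with a short script. The sketch below (the helper names and the bitmask encoding are ours, not part of the design flow) evaluates $z_0,\dots,z_3$ from the partial products $s_k$ and compares them against direct multiplication modulo $P_2(x)=x^4+x+1$ for all 256 operand pairs.

```python
# Exhaustive check of Example 1 (illustrative sketch): output bits of a
# GF(2^4) multiplication with P2(x) = x^4 + x + 1, computed from the
# partial products s_k and compared against direct modular reduction.

def bits(v, n=4):
    return [(v >> i) & 1 for i in range(n)]

def gf16_mul(a, b):
    """Carry-less multiply, then fold degrees 6..4 using x^4 = x + 1."""
    p = 0
    for i in range(4):
        if (b >> i) & 1:
            p ^= a << i
    for i in range(6, 3, -1):
        if (p >> i) & 1:
            p ^= 0b10011 << (i - 4)        # P2(x) = x^4 + x + 1
    return p & 0xF

for A in range(16):
    for B in range(16):
        a, b = bits(A), bits(B)
        s = [0] * 7                        # s_k = sum_{i+j=k} a_i*b_j (mod 2)
        for i in range(4):
            for j in range(4):
                s[i + j] ^= a[i] & b[j]
        z = [s[0] ^ s[4],                  # z0 = s0 + s4
             s[1] ^ s[4] ^ s[5],           # z1 = s1 + s4 + s5
             s[2] ^ s[5] ^ s[6],           # z2 = s2 + s5 + s6
             s[3] ^ s[6]]                  # z3 = s3 + s6
        assert sum(z[i] << i for i in range(4)) == gf16_mul(A, B)
```

Note that $z_2$ and $z_3$ in the text are simply the expanded forms of $s_2+s_5+s_6$ and $s_3+s_6$, obtained from $x^5 \equiv x^2+x$ and $x^6 \equiv x^3+x^2 \pmod{P_2(x)}$.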
The coefficients of the multiplication results are shown in Figure \ref{fig:4-bit}. In digital circuits, partial products are implemented using {\sc and} gates, and addition modulo 2 is done using {\sc xor} gates. Note that, unlike in integer multiplication, in GF($2^m$) circuits there is no carry out to the next bit. For this reason, as we can see in Figure \ref{fig:4-bit-gf-multiplication}, the function of each output bit can be computed independently of other bits. \input{date17-gf_example.tex} \input{4-bit-poly-exp.tex} \subsection{Irreducible Polynomials}\label{sec:irreducible_poly} In general, there are various irreducible polynomials that can be used for a given field size, each resulting in a different multiplication result. For constructing efficient arithmetic functions over GF($2^m$), the irreducible polynomial is typically chosen to be a trinomial, $x^m$+$x^a$+1, or a pentanomial, $x^m$+$x^a$+$x^b$+$x^c$+1 \cite{nist-recommend}. {For efficiency reasons, the exponents $m$ and $a$ are chosen such that $m-a \geq m/2$}. An example of constructing a GF($2^4$) multiplication using two different irreducible polynomials is shown in Figure \ref{fig:4-bit-gf-multiplication}. We can see that each polynomial produces a unique multiplication result. The size of the corresponding multiplier can be estimated by counting the number of XOR operations in each multiplication. Since the number of AND and XOR operations for generating the partial products (variables $s_{i}$ in Figure \ref{fig:4-bit-gf-multiplication}) is the same, the difference is caused only by the reduction of the corresponding polynomials modulo $P(x)$. The number of two-input XOR operations introduced by the reduction with $P(x)$ can be obtained as the number of terms in each column minus one. For example, the number of XORs using $P_1(x)$ is 3+1+2+3=9, while using $P_2(x)$ it is 1+2+2+1=6.
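This column-counting estimate is easy to mechanize. The sketch below (a hypothetical helper, assuming the reduction is derived by folding $x^k$ for $k = m,\dots,2m-2$ into degrees below $m$) reproduces the XOR counts of 9 for $P_1(x)$ and 6 for $P_2(x)$.

```python
# Sketch: estimate the XOR cost of the modulo-P(x) reduction by folding
# x^k (k = m..2m-2) into degrees < m and counting terms per output column.
# Hypothetical helper; column ordering may differ from the figure, but
# the total XOR count is the same.

def xor_cost(m, px_terms):
    """px_terms: exponents of P(x), e.g. {4, 1, 0} for x^4 + x + 1."""
    residue = {k: {k} for k in range(m)}    # residue[k] = exponents of x^k mod P(x)
    for k in range(m, 2 * m - 1):
        # x^k = x * x^(k-1); whenever x^m appears, replace it by the
        # lower-degree terms of P(x) (coefficient arithmetic mod 2).
        folded = set()
        for e in residue[k - 1]:
            terms = {e + 1} if e + 1 < m else {t for t in px_terms if t < m}
            folded ^= terms                 # symmetric difference = GF(2) addition
        residue[k] = folded
    # column j collects s_j plus every s_k (k >= m) whose residue hits x^j
    col = [1] * m
    for k in range(m, 2 * m - 1):
        for j in residue[k]:
            col[j] += 1
    return sum(c - 1 for c in col)

assert xor_cost(4, {4, 3, 0}) == 9   # P1(x) = x^4 + x^3 + 1
assert xor_cost(4, {4, 1, 0}) == 6   # P2(x) = x^4 + x  + 1
```

The same routine also makes the trinomial-versus-pentanomial cost gap discussed later easy to tabulate for larger $m$.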
As will be shown in the next section, given the structure of the GF($2^m$) multiplication, such as the one shown in Figure \ref{fig:4-bit-gf-multiplication}, one can readily identify the irreducible polynomial $P(x)$ used during the $GF$ reduction. This can be done by locating the entries of $s_m$ (here $s_4$) in the table and recovering the terms of the irreducible polynomial other than $x^m$. We know that $P(x)$ must contain $x^m$, and the remaining terms $x^k$ of $P(x)$ are obtained from the non-zero columns corresponding to the entry $s_m$. For example, for the irreducible polynomial $P_1(x)=x^4+x^3+x^0$, the terms $x^3$ and $x^0$ are obtained by noticing the placement of $s_4$ in columns $z_3$ and $z_0$. Similarly, for $P_2(x)=x^4+x^1+x^0$, the terms $x^1$ and $x^0$ are obtained by noticing that $s_4$ is placed in columns $z_1$ and $z_0$. The reason for this and the details of the procedure will be explained in the next section. \section{Introduction} \label{sec:introduction} \IEEEPARstart{D}{espite} considerable progress in verification of random and control logic, advances in formal verification of arithmetic circuits have been lagging. This can be attributed to the difficulty of efficiently modeling arithmetic circuits and datapaths without resorting to computationally expensive Boolean methods. Contemporary formal techniques, such as \textit{Binary Decision Diagrams} (BDDs), \textit{Boolean Satisfiability} (SAT), \textit{Satisfiability Modulo Theories} (SMT), etc., are not directly applicable to verification of integer and finite field arithmetic circuits \cite{kalla:tcad13}\cite{ciesielski2015verification}. This paper concentrates on formal verification and reverse engineering of finite (Galois) field arithmetic circuits. A Galois field (GF) is a number system with a finite number of elements and two main arithmetic operations, addition and multiplication; other operations can be derived from these two \cite{paar2009understanding}.
{ GF arithmetic plays an important role in coding theory, cryptography, and their numerous applications. Therefore, developing formal techniques for hardware implementations of GF arithmetic circuits, and particularly for finite field multiplication, is essential.} The elements of the field GF($2^m$) can be represented as polynomials. The field of size $2^m$ is constructed using an \textit{irreducible polynomial} $P(x)$, which includes terms of degree $d$ $\in$ [$0, m$] with coefficients in GF(2). The arithmetic operations in the field are then performed modulo $P(x)$. The choice of the irreducible polynomial has a significant impact on the hardware implementation of the GF circuit and its performance. Typically, the irreducible polynomial with a minimum number of terms gives the best performance \cite{ciet2002short}, but this is not always the case. Due to the rising number of threats in hardware security, analyzing finite field circuits is becoming increasingly important. Computer algebra techniques with polynomial representations seem to offer the best solution for analyzing arithmetic circuits. Several works address the verification and functional abstraction problems, both in Galois field arithmetic \cite{kalla:tcad13}\cite{kalla:dac2014}\cite{PrussKE16TCAD} and integer arithmetic implementations \cite{STABLE:date11}\cite{ciesielski2015verification}\cite{farahmandi2015groebner}\cite{sayedformal:date-2016}\cite{yu:2016-tcad-verification}. Symbolic computer algebra methods have also been used to reverse engineer the word-level operations of GF circuits and integer arithmetic circuits to improve verification performance \cite{yu:2016-abstraction}\cite{sayedequivalence}\cite{kalla:dac2014}. The verification problem is typically formulated as proving that the implementation satisfies the specification, and is accomplished by performing a series of divisions of the specification polynomial by the implementation polynomials.
In the work of Yu {\it et al.} \cite{yu:2016-abstraction}, the authors proposed an original spectral method based on analyzing the internal algebraic expressions during the rewriting procedure. Sayed-Ahmed {\it et al.} \cite{sayedequivalence} introduced a reverse engineering technique in the Algebraic Combinational Equivalence Checking (ACEC) process by converting the function into canonical polynomials and using \textit{Gr{\" o}bner Basis}. However, the above mentioned algebraic techniques have several limitations. Firstly, they are restricted to implementations with a known binary encoding of the inputs and outputs. This information is needed to generate the specification polynomial that describes the circuit functionality in terms of its inputs and outputs, necessary for the polynomial reduction process (described in Section \ref{sec:computer-algebra}). Secondly, these methods are unable to exploit parallelism (inherent in GF circuits), as they require that the polynomial division be applied iteratively in reverse topological order \cite{ciesielski2015verification}\cite{sayedformal:date-2016}\cite{PrussKE16TCAD}. Thirdly, the approaches applied specifically to GF($2^m$) arithmetic circuits \cite{kalla:dac2014}\cite{PrussKE16TCAD} require knowledge of the irreducible polynomial $P(x)$ of the circuit. In this work, we present a formal approach to reverse engineer gate-level finite field arithmetic circuits that exploits the inherent parallelism of GF circuits. The method is based on a parallel algebraic rewriting approach \cite{yu:aspdac17} and is applied specifically to multipliers. The objective of reverse engineering is as follows: given the netlist of a gate-level GF multiplier, extract the bit positions of the input and output bits and the irreducible polynomial used in constructing the GF multiplication; then extract the specification of the design using this information.
Bit position $i$ indicates the location of a bit in the binary word according to its significance (LSB vs. MSB). Our approach solves this problem by transforming the algebraic expressions of the output bits into an algebraic expression of the input bits (the specification); this is done in parallel for each output bit. Specifically, it includes the following steps\footnote{Our tool and benchmarks used in this journal paper are released publicly at our project website at\\ \url{https://ycunxi.github.io/Parallel_Formal_Analysis_GaloisField}}: \begin{itemize} \item Extract the algebraic expression of each output bit. \item Determine the bit position of the outputs. \item Determine the bit position of the inputs. \item Extract the irreducible polynomial $P(x)$. \item Extract the specification by algebraic rewriting. \end{itemize} We demonstrate the efficiency of our method using GF($2^m$) \textit{Mastrovito} and \textit{Montgomery} multipliers of up to 571-bit width in a bit-blasted format (i.e., flattened to the bit level), implemented using various irreducible polynomials. \section{Conclusion} This paper presents a parallel approach to verification and reverse engineering of gate-level Galois field multipliers using a computer algebra approach. It introduces a parallel rewriting method that can efficiently extract the functional specification of Galois field multipliers as polynomial expressions. We demonstrate that, compared to the best known algorithms, our approach tested with $T$=30 threads provides on average 44$\times$ and 17$\times$ speedup in verification of Montgomery and Mastrovito multipliers, respectively. We also presented a novel approach that reverse engineers gate-level Galois field multipliers in which the irreducible polynomial, as well as the bit positions of the inputs and outputs, are unknown. We demonstrated that our approach can efficiently reverse engineer Galois field multipliers implemented using different irreducible polynomials.
Future work will focus on formal verification of prime field arithmetic circuits and complex cryptography circuits. \ifCLASSOPTIONcompsoc \section*{Acknowledgments} \else \section*{Acknowledgment} \fi The authors would like to thank Prof. Kalla, University of Utah, for his valuable comments and the benchmarks; and Dr. Arnaud Tisserand, University Rennes 1 ENSSAT, for his valuable discussions. This work has been funded by NSF grants CCF-1319496 and CCF-1617708. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran} \subsection{Reverse Engineering of GF($2^m$) Multipliers} \label{results-reverse-engin} The reverse engineering technique presented in this paper was implemented in C++ in the framework described in Section \ref{sec:reverse-engin}. It reverse engineers bit-blasted GF($2^m$) multipliers by analyzing the algebraic expressions of each element using the approach presented in Section \ref{sec:parallel_GF}. The program was tested on a number of gate-level $GF(2^{m})$ multipliers with different irreducible polynomials, including Montgomery multipliers and Mastrovito multipliers. The multiplier generator, taken from \cite{kalla:tcad13}, takes the bit-width and the irreducible polynomial as inputs and generates the multipliers in the equation format. The experimental results show that our technique can successfully reverse engineer various GF($2^m$) multipliers, regardless of the GF($2^m$) algorithm and the irreducible polynomial. We set the number of threads to 16 for all the reverse engineering evaluations in this section, since $T$=16 gives the most promising performance (runtime) and scalability (memory usage) on our platform, based on the analysis presented in Section \ref{sec:parallel_analysis} (Figure \ref{fig:tradeoff}). \input{synthesis_RE.tex} Our program takes the netlist/equations of the GF($2^m$) implementation and the number of threads as input.
Hence, users can adjust the degree of parallelism to the limitations of their machines. In this work, all reverse engineering results are obtained with 16 threads. Typical designs that require reverse engineering are those that have been bit-optimized and mapped using a standard-cell library. Hence, we apply our technique to the bit-optimized Mastrovito and Montgomery multipliers (Table \ref{tbl:syn}). For the purpose of our experiments, the multipliers are optimized and mapped using ABC \cite{abc-link}. Compared to the verification runtime of synthesized multipliers (Table \ref{tbl:synth-verification}), the CPU time spent on analyzing the extracted expressions for reverse engineering is less than 10\% of the extraction time. This is because most of the computation in the reverse engineering approach is spent on extracting the algebraic expressions, as presented in Section \ref{sec:parallel_analysis}, Table \ref{tbl:synth-verification}. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.34]{other_px.pdf} \caption{\small Result of reverse engineering GF($2^{233}$) Mastrovito multipliers implemented with different $P(x)$.} \vspace{-5mm} \label{fig:other_px} \end{center} \end{figure} The reverse engineering approach has been further evaluated using four Mastrovito multipliers, each implemented with a different irreducible polynomial $P(x)$ in GF($2^{233}$). The polynomials are obtained from \cite{scott2007optimal} and the designs are optimized using the ABC synthesis tool. The results are shown in Figure \ref{fig:other_px}. We can see that the multipliers implemented with a trinomial $P(x)$ are much easier to reverse engineer than those based on a pentanomial $P(x)$. This is because the multipliers implemented with a pentanomial $P(x)$ contain more gates and have a longer critical path, since the reduction over a pentanomial requires more XOR operations. The CPU runtime for irreducible polynomials of the same class (trinomials or pentanomials) is almost the same.
As discussed in Section \ref{sec:irreducible_poly}, comparison of the two trinomials shows that an efficient trinomial irreducible polynomial, $x^m$+$x^a$+1, typically satisfies $m-a > m/2$. The results for designs synthesized with a 14nm technology library are shown in Figure \ref{fig:analysis_design_cost}. It shows that the area and delay of the Mastrovito multiplier implemented with $P(x)$=$x^{233}$+$x^{74}$+$1$ are 5.7\% and 7.4\% less than for $P(x)$=$x^{233}$+$x^{159}$+$1$, respectively. \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.34]{his-gf233.pdf} \caption{\small Evaluation of the design cost using GF($2^{233}$) Mastrovito multipliers with irreducible polynomials $x^{233}$+$x^{159}$+$1$ and $x^{233}$+$x^{74}$+$1$.} \vspace{-8mm} \label{fig:analysis_design_cost} \end{center} \end{figure} \section{Results} \label{sec:results} The experimental results of our method are presented in two subsections: 1) evaluation of parallel verification of GF($2^m$) multipliers; and 2) evaluation of reverse engineering of GF($2^m$) multipliers. The results given in this section include data (total time and maximum memory) for the entire verification or reverse engineering process, including translating the gate-level Verilog netlist into algebraic equations, performing backward rewriting, and other required functions. \subsection{Parallel Verification of GF($2^m$) Multipliers} The verification technique for GF($2^m$) multipliers presented in Section \ref{sec:parallel_GF} was implemented in C++. It performs backward rewriting with variable substitution and polynomial reductions in Galois field in a parallel fashion. The program was tested on a number of combinational gate-level $GF(2^{m})$ multipliers taken from \cite{PrussKE16TCAD}, including Montgomery multipliers \cite{koc1998montgomery} and Mastrovito multipliers \cite{sunar1999mastrovito}. The bit-width of the multipliers varies from 32 to 571 bits.
The verification results for various Galois field multipliers obtained using SAT, SMT, ABC \cite{abc-link}, and Singular \cite{singular} have already been presented in the works of \cite{kalla:tcad13} and \cite{PrussKE16TCAD}. They clearly demonstrate that techniques based on computer algebra perform significantly better than other known techniques. Hence, in this work, we only compare our approach to those two, and specifically to the tool described in \cite{PrussKE16TCAD}. However, in contrast to the previous work on Galois field verification, all the GF($2^m$) multipliers used in this paper are bit-blasted gate-level implementations. The bit-level multipliers are taken from \cite{PrussKE16TCAD} and mapped onto gate-level circuits using ABC \cite{abc-link}. Our experiments were conducted on a PC with an Intel(R) Xeon CPU E5-2420 v2 2.20 GHz $\times$12 with 32 GB memory. As described in the next section, our technique can verify Galois field multipliers in multiple threads by applying Algorithm 1 to each output bit in parallel. The number of threads is given as input to the tool. The experimental results of our approach and the comparison with \cite{PrussKE16TCAD} are shown in Table \ref{tbl:mas-verification} for gate-level Mastrovito multipliers with bit-width varying from 32 to 571 bits. These multipliers are directly mapped using ABC without any optimization. The largest circuit includes over 1.6 million gates; this is also the number of polynomial equations and the number of rewriting iterations (see Section \ref{sec:parallel_GF}). The results generated by the tool presented in \cite{PrussKE16TCAD} are shown in columns 3 and 4 of Table \ref{tbl:mas-verification}. We performed four different series of experiments, with the number of threads $T$ varying from 5 to 30. The table shows CPU runtime and memory usage for different values of $T$. The timeout limit (TO) was set to 12 hours and the memory limit (MO) to 32 GB.
The experimental results show that our approach provides on average 26.2$\times$, 37.8$\times$, 42.7$\times$, and 44.3$\times$ speedup for $T=$ 5, 10, 20, and 30 threads, respectively. Our approach can verify multipliers up to 571 bits wide in 1.5 hours, while that of \cite{PrussKE16TCAD} fails after 12 hours. The reported memory usage of our approach is the maximum memory usage {\it per thread}. This means that our tool experiences maximum memory usage with all $T$ threads running in the process; in this case, the memory usage is $T \cdot Mem$. This is why the 571-bit Mastrovito multipliers could be successfully verified with $T$ = 5 and 10, but failed with $T$ = 20 and 30 threads. For example, the peak memory usage of the 571-bit Mastrovito multiplier with $T=20$ is $2.6 \times 20=52$ GB, which exceeds the available memory limit. We also tested Montgomery multipliers with bit-width varying from 32 to 283 bits; the results are shown in Table \ref{tbl:mont-verification}. These experiments are different from those in \cite{PrussKE16TCAD}. In our work, we first flatten the Montgomery multipliers before applying our verification technique. That is, we assume that only the positions of the primary inputs and outputs are known, without any knowledge of the high-level structure. In contrast, \cite{PrussKE16TCAD} verifies Montgomery multipliers that are represented with four hierarchical blocks. For 32- to 163-bit Montgomery multipliers, our approach provides on average a 9.2$\times$, 15.9$\times$, 16.6$\times$, and 17.4$\times$ speedup for $T=$ 5, 10, 20, and 30, respectively. Notice that \cite{PrussKE16TCAD} cannot verify flattened Montgomery multipliers larger than 233 bits in 12 hours. Comparing Tables \ref{tbl:mas-verification} and \ref{tbl:mont-verification}, we observe that our rewriting technique requires significantly more time for Montgomery multipliers than for Mastrovito multipliers.
The main reason for this difference is the internal architecture of the two multiplier types. Mastrovito multipliers are obtained directly from the standard multiplication structure, with the partial product generator followed by an XOR-tree structure, as in modular arithmetic. Since the algebraic model of XOR in GF arithmetic is linear, the size of the polynomial expressions generated during rewriting of this architecture is relatively small. In contrast, in a Montgomery multiplier the two inputs are first transformed into Montgomery form; the products of these Montgomery forms are called \textit{Montgomery products}. Since the polynomial expressions in Montgomery form are much larger than partial products, an increase in the size of intermediate expressions is unavoidable. \subsubsection{\textbf{Dependence on $P(x)$}} In Table \ref{tbl:mont-verification}, we observe that the CPU runtime for verifying a 163-bit multiplier is greater than that of a 233-bit multiplier. This is because the computational complexity depends not only on the bit-width of the multiplier, but also on the irreducible polynomial $P(x)$ used in constructing the multiplier. We illustrate this fact using two GF($2^4$) multiplications implemented with two different irreducible polynomials (cf. Figure \ref{fig:4-bit-gf-multiplication}). We can see that for $P_1(x)$=$x^{4}+x^{3}+1$, the longest logic paths, for $z_{3}$ and $z_{0}$, include ten and seven products that need to be combined using XORs, respectively. However, when $P_2(x)$=$x^{4}+x+1$, the two longest paths, for $z_{1}$ and $z_{2}$, have only seven and six products. This means that the GF($2^4$) multiplication requires 9 XOR operations using $P_{1}(x)$ and 6 XOR operations using $P_{2}(x)$. In other words, the gate-level implementation of the multiplier using $P_{1}(x)$ has more gates than the one using $P_{2}(x)$.
In conclusion, we can see that the irreducible polynomial $P(x)$ has a significant impact on both the design cost and the verification time of GF($2^m$) multipliers. \subsubsection{\textbf{Runtime vs. Memory}}\label{sec:parallel_analysis} \begin{figure}[!hbt] \centering \includegraphics[width=0.42\textwidth]{Runtime_memory_r2.pdf} \caption{Runtime and memory usage of the parallel verification approach as a function of the number of threads $T$.} \label{fig:tradeoff} \end{figure} In this section, we discuss the tradeoff between runtime and memory usage of our parallel approach to Galois field multiplier verification. The plots in Figure \ref{fig:tradeoff} show the average runtime and memory usage for different numbers of threads, over the set of multipliers shown in Tables \ref{tbl:mas-verification} and \ref{tbl:mont-verification} (32 to 283 bits). The vertical axis on the left is CPU runtime (in seconds), and on the right is memory usage (MB). The horizontal axis represents the number of threads $T$, ranging from 1 to 30. The runtime is significantly improved for $T$ ranging from 5 to 15. However, there is not much speedup when $T$ is greater than 20, most likely due to the memory management synchronization overhead between the threads. Similarly to the results for Mastrovito multipliers (Table \ref{tbl:mas-verification}), our approach is limited here by the memory usage when the size of the multiplier and the number of threads $T$ are large. In our work, $T=20$ seems to be the best choice. Obviously, the best $T$ varies for different platforms, depending on the number of cores and the memory. We also analyzed the runtime complexity of our verification algorithm for a single-thread ($T$=1) computation; it is shown in Figure \ref{fig:analysis_single}. The y-axis shows the total runtime of rewriting the polynomial expressions, and the x-axis indicates the size of the Mastrovito multiplier. {The results show that the overall speedup is roughly the same for each value of $T$.
Montgomery multipliers exhibit similar behavior, regardless of the choice of the irreducible polynomial.} \begin{figure}[!htb] \begin{center} \includegraphics[scale=0.40]{single-thread.pdf} \caption{\small{Single thread runtime analysis for Mastrovito multipliers.}} \vspace{-4mm} \label{fig:analysis_single} \end{center} \end{figure} \subsubsection{\textbf{Effect of Synthesis on Verification}} In \cite{yu:2016-tcad-verification} the authors conclude that highly bit-optimized integer arithmetic circuits are harder to verify than their original, pre-synthesized netlists. This is because the efficiency of the rewriting technique relies on the amount of cancellations between the different terms of the polynomial, and such cancellations strongly depend on the order in which signals are rewritten. A good ordering of signals is difficult to achieve in highly bit-optimized synthesized circuits. To see the effect of synthesis on parallel verification of GF circuits, we applied our approach to {\it post-synthesized} Galois field multipliers with operands up to 409 bits (571-bit multipliers could not be synthesized in a reasonable time). We synthesized \textit{Mastrovito} and \textit{Montgomery} multipliers using the $ABC$ tool \cite{abc-link}. We repeatedly used the commands \textit{resyn2} and \textit{dch}\footnote{``dch'' is the most efficient bit-optimization function in ABC.} until the number of AIG levels or nodes could not be reduced anymore. The synthesized multipliers were mapped using a 14nm technology library. The verification experiments shown in Table \ref{tbl:synth-verification} are performed by our tool with $T=20$ threads. Our tool was able to verify both the 409-bit \textit{Mastrovito} and \textit{Montgomery} multipliers within just 13 minutes. We observed that in our parallel approach Galois field multipliers are easier to verify after optimization than in their original form.
For example, the verification of a 283-bit Montgomery multiplier takes 15,300 seconds for $T=20$. After optimization, the runtime dropped to just 169.2 seconds, which means that such a verification is 90x faster than that of the original implementation. The memory usage has also been reduced from 488 MB to 194 MB. In summary, in contrast to verification problems of integer multipliers \cite{yu:2016-tcad-verification}, the bit-level optimization actually reduces the complexity of the backward rewriting process. This is because extracting the function of an output bit of a GF multiplier depends only on the logic cone of that bit and does not require logic expressions from other bits to be simplified (c.f. Theorem 3). Hence, the complexity of function extraction is naturally reduced if the logic cone is minimized. \input{synthesis_Verify.tex} \section{Reverse Engineering} \label{sec:reverse-engin} In this section, we present our approach to perform reverse engineering of GF($2^m$) multipliers. Using the \textit{extraction} technique presented in the previous section, we can extract the algebraic expression of each output bit. In contrast to the algebraic techniques of \cite{PrussKE16TCAD}\cite{yu:2016-tcad-verification}, our extraction technique can extract the algebraic expression of each output bit independently. This means that the extraction can be done without the knowledge of the bit position of the inputs and outputs. Two theorems are provided and proved to support this claim. In a GF($2^m$) multiplication, let $s_{i}$ ($i$ $\in$ \{0,1,...,2$m$-2\}) be a set of partial products generated by AND gates and combined with XOR operations. For example, in Figure \ref{fig:4-bit-gf-multiplication}, there are seven product sets, $s_0$, $s_1$, ..., $s_6$, where $s_1$=$a_1$$b_0$+$a_0$$b_1$; or written as a set: $s_1$=\{$a_1$$b_0$, $a_0$$b_1$\}, etc.
These product sets are divided into two groups: those with index $i \le m-1$, called \textit{in-field} product sets; and those with index $i \ge m$, called \textit{out-of-field} product sets. The in-field product sets $s_i$, in this case $s_0, s_1, s_2, s_3$, correspond to the output bits $z_i$. The out-of-field product sets will be reduced into the field GF($2^m$) using the mod $P(x)$ operation, and assigned to the respective output bits, as determined by $P(x)$. In the case of Figure \ref{fig:4-bit-gf-multiplication}, the out-of-field sets are $s_4$, $s_5$, $s_6$. In general, for a GF($2^m$) multiplication, $m$ product sets are \textit{in-field}, and $m$-1 product sets are \textit{out-of-field} \cite{yu2017reverse}. \subsection{Output Encoding Determination} We will now demonstrate how to determine the encoding, and hence bit position, of the outputs. { \textbf{Theorem 2:} Given a GF($2^m$) multiplication, the in-field product sets ($s_0$, $s_1$, ..., $s_{m-1}$) appear in exactly one element of GF($2^m$) each, and the out-of-field product sets ($s_m$, $s_{m+1}$, ..., $s_{2m-2}$) appear in at least two elements (outputs) of GF($2^m$), as a result of reduction mod $P(x)$. \textbf{Proof:} An irreducible polynomial in GF($2^m$) has the standard form $P(x) = x^m + P'(x)$, where the tail polynomial $P'(x)$ contains at least two monomials $x^d$ with degree $d < m$. For example, there are two such monomials for a trinomial, four for a pentanomial, etc. Since $P(x) = 0$ we have $x^m = P'(x)$ in GF($2^m$). Hence the variable $x^m$, associated with the first out-of-field partial product set $s_m$, will appear in at least two outputs, determined by $P'(x)$. Other variables, $x^k$, associated with the out-of-field partial product sets $s_k$, for $k > m$, can be expressed as $x^k = x^{k-m} x^m = x^{k-m} P'(x)$ and will likewise appear in at least two outputs.
\hfill $\square$ } { In fact, the number of outputs in which the out-of-field set $s_k$ will appear is equal to the number of monomials in the above product $x^{k-m} P'(x)$, provided that every monomial $x^j$ with $j \ge m$ is recursively reduced, i.e., by using the relation $x^m = P'(x)$. We illustrate this fact with an example of multiplication in GF($2^4$) using irreducible polynomial $P_1(x)= x^4+x^3+1$ shown in the left side of Figure \ref{fig:4-bit-gf-multiplication}. The in-field sets, associated with outputs $z_0, z_1, z_2, z_3$, are $s_0, s_1, s_2, s_3$. Since $P_1(x) = x^4+x^3+1 = 0$, we obtain $ x^4 = x^3+1$. This means that set $s_4$ appears in two output columns, $z_3$ and $z_0$. Then \[ x^5 = x\cdot x^4 = x(x^3+1) = x^4 + x = x^3+x+1,\] which means that $s_5$ appears in three outputs: $z_3, z_1, z_0$. Finally, \[ x^6 = x\cdot x^5 = x(x^3+x+1) = x^4 + x^2+x = x^3+x^2+x+1,\] that is, $s_6$ will appear in four outputs: $z_3, z_2, z_1, z_0$. As expected, this matches the left table in Figure \ref{fig:4-bit-gf-multiplication}. Note the recursive derivation of $x^k$ for $k > m$, which increases the number of columns to which a given set $s_k$ is assigned.} Based on Theorem 2, we can find the in-field product sets, $s_0$, $s_1$, ..., $s_{m-1}$, by searching for the unique products in the resulting algebraic expressions of the output bits. In this context, \textit{unique products} are the products that exist in only one of the extracted algebraic expressions. Since the in-field product set indicates the bit position of the output, we can determine the bit positions of the output bits as soon as all the in-field product sets are identified. \textbf{Example 4 (Figure \ref{fig:4-bit}):} We illustrate the procedure of determining bit positions with an example of a GF($2^4$) multiplier implemented using irreducible polynomial $P_2(x)$=$x^4$+$x$+$1$ (see Figure \ref{fig:4-bit-gf-multiplication}).
Note that in this process the variable labels do not offer any knowledge of the bit positions of the inputs and outputs. The extracted algebraic expressions of the four output bits are shown in Figure \ref{fig:4-bit}. We first identify the unique products: set $s_0$=$a_0$$b_0$ in the algebraic expression of $z_{0}$; set $s_1$=($a_0$$b_1$+$a_1$$b_0$) in $z_{1}$; set $s_2$=($a_0$$b_2$+$a_1$$b_1$+$a_2$$b_0$) in $z_{2}$; and set $s_3$=($a_0$$b_3$+$a_1$$b_2$+$a_2$$b_1$+$a_3$$b_0$) in $z_{3}$. Note that the number of products in the in-field product set $s_{i}$ is $i+1$. Hence, we find all the in-field product sets and their relation to the extracted algebraic expressions to be as follows: \vspace{2mm} \noindent $s_0$ = $a_0$$b_0$, $z_0$ $\rightarrow$ Least significant bit (LSB) \noindent $s_1$ = $a_0$$b_1$+$a_1$$b_0$, $z_1$ $\rightarrow$ $2^{nd}$ output bit \noindent $s_2$ = $a_0$$b_2$+$a_1$$b_1$+$a_2$$b_0$, $z_2$ $\rightarrow$ $3^{rd}$ output bit \noindent $s_3$ = $a_0$$b_3$+$a_1$$b_2$+$a_2$$b_1$+$a_3$$b_0$, $z_3$ $\rightarrow$ Most significant bit (MSB) \subsection{Input Encoding Determination} \begin{algorithm} \scriptsize \caption{Input encoding determination for $GF(2^{m})$}\label{alg:commonlogic} \textbf{Input: a set of algebraic expressions representing the in-field product sets $S$}\\ \textbf{Output: bit position of input variables} \begin{algorithmic}[1] \State $S$=\{$s_0, s_1, ..., s_{m-1}$\} \State initialize a vector of variables $V$ $\leftarrow$ \{\} \For{i=0, i$\leq$m-1, i++} \For{each variable $v$ in algebraic expression of $s_{i}$} \If {$v$ does not exist in $V$} \State assign bit position value of $v$ = $i$ \State store $v$ in variable set $V$ \EndIf \EndFor \EndFor \\ \Return $V$ \end{algorithmic} \end{algorithm} We can now determine the bit position of the input variables using the procedure outlined in Algorithm 2.
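Algorithm 2 admits a direct transcription; the sketch below is our own illustration (not the paper's implementation), representing each product as a tuple of variable names.

```python
# Our sketch of Algorithm 2: scan the in-field sets s_0..s_{m-1} in
# order and assign bit position i to every variable seen for the first
# time in s_i.

def input_encoding(in_field_sets):
    positions = {}
    for i, s in enumerate(in_field_sets):
        for prod in s:
            for v in prod:
                positions.setdefault(v, i)  # keep the first assignment
    return positions

S = [{("a0", "b0")},
     {("a0", "b1"), ("a1", "b0")},
     {("a0", "b2"), ("a1", "b1"), ("a2", "b0")},
     {("a0", "b3"), ("a1", "b2"), ("a2", "b1"), ("a3", "b0")}]
V = input_encoding(S)
print(V["a1"], V["b1"])  # both first appear in s_1, so both get position 1
```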
The input bit positions can be determined by analyzing the in-field product sets, obtained in the previous step. Based on the GF multiplication algorithm, we know that $s_0$ is generated by an AND function with the two LSBs of the two inputs; and the two products in $s_1$ are generated by the AND and XOR operations using the two LSBs and the two $2^{nd}$ input bits, etc. For example, in a GF($2^4$) multiplication (Figure \ref{fig:4-bit-gf-multiplication}), $s_0$=$a_0$$b_0$, where $a_0$ and $b_0$ are LSBs; $s_1$=$a_1$$b_0$+$a_0$$b_1$, where $a_0$, $b_0$ are LSBs; $a_1$, $b_1$ are $2^{nd}$ LSBs. This allows us to determine the bit position of the input bits recursively by analyzing the algebraic expression of $s_{i}$. We illustrate this with the GF($2^4$) multiplier implemented using $P_2(x)$ = $x^4$+$x$+$1$ (Figure \ref{fig:4-bit}). \textbf{Example 5 (Algorithm 2):} The input of our algorithm is a set of algebraic expressions of the in-field product sets, $s_{0}$, $s_{1}$, $s_{2}$, $s_{3}$ (line 1). We initialize vector $V$ to store the variables whose bit positions have been assigned (line 2). The first algebraic expression is $s_0$. Since the two variables, $a_0$ and $b_0$, are not in $V$, the bit positions of these two variables are assigned index $i=0$ (lines 4-8). In the second iteration, $V$=\{$a_0$, $b_0$\}, and the input algebraic expression is $s_1$, including variables $a_0$, $b_0$, $a_1$ and $b_1$. Because $a_1$ and $b_1$ are not in $V$, their bit position is $i=1$. The loop ends when all the algebraic expressions in $S$ have been visited, and returns $V$=\{$(a_0, b_0)_{0}$, $(a_1, b_1)_{1}$, $(a_2, b_2)_{2}$, $(a_3, b_3)_{3}$\}. The subscripts are the bit position values of the variables returned by the algorithm. Note that this procedure only gives the bit position of the input bits; how the input words are constructed remains unknown. There are $2^{m-1}$ combinations from which the words can be constructed using the information returned in $V$.
For example, the two input words can be $W_{0}$=$a_0$+2$a_1$+4$b_2$+8$a_3$ and $W_{1}$=$b_0$+2$b_1$+4$a_2$+8$b_3$; or they can be $W'_{0}$=$a_0$+2$a_1$+4$b_2$+8$b_3$ and $W'_{1}$=$b_0$+2$b_1$+4$a_2$+8$a_3$. Although there may be many combinations for constructing the input words, the specification of the GF($2^m$) multiplication is unique. \subsection{Extraction of the Irreducible Polynomial} \textbf{Theorem 3:} \textit{Given a multiplication in GF($2^m$), let the first out-of-field product set be $s_{m}$. Then, the irreducible polynomial $P(x)$ includes the monomials $x^m$ and $\{x^i\}$ iff all products in the set $s_{m}$ appear in the algebraic expression of the $i^{th}$ output bit, for $i$ $ < $ $m$.} \textbf{Proof:} Based on the definition of field arithmetic for GF($2^m$), the polynomial basis representation of $s_{m}$ is $x^m s_m$. To reduce $s_{m}$ into elements with degrees in the range [0, $m-1$], the field reductions are performed modulo the irreducible polynomial $P(x)$ with highest degree $m$ (c.f. the proof of Theorem 2). As before, let $P(x)$ = $x^m + P'(x)$. Then, \[ x^m s_m ~mod~ (x^m+P'(x))~=~s_{m}P'(x) \] {Hence, if $x^i$ exists in $P'(x)$, it also exists in $P(x)$. Therefore, $x^i$ exists in $P(x)$ iff $x^i s_m $ exists in $x^m s_m$ mod $P(x)$.} \hfill $\square$ Even though the input bit positions have been determined in the previous step, we cannot directly generate $s_m$ since the combination of the input bits for constructing the input words is still unknown. In Example 5 ($m$=4), we can see that $s_m$=\{$a_1$$b_3$, $a_2$$b_2$, $a_3$$b_1$\} when the input words are $W_{0}$ and $W_{1}$; but $s_m$=\{$a_1$$a_3$, $a_2$$b_2$, $b_1$$b_3$\} when the input words are $W'_{0}$ and $W'_1$. To overcome this limitation, we create a set of products $s_m'$, which includes all the possible products that can be generated based on all input combinations.
The set $s_m'$ includes the $true$ products, i.e., those that exist in the first out-of-field product set; and it also includes some $dummy$ products. The dummy products are those that never appear in the resulting algebraic expressions. Hence, we first generate the set $s_m'$ and eliminate the dummy products by searching the algebraic expressions. After this, we obtain $s_m$. Then, we use $s_m$ to extract the irreducible polynomial $P(x)$ using Algorithm 3. {\bf Example 6: }We illustrate the method of reverse engineering the irreducible polynomial using the $GF(2^4)$ multiplier of Fig. 1. The procedure is outlined in Algorithm 3. The extracted algebraic expressions $S$ (line 1 of Algorithm 3) are shown in Figure \ref{fig:4-bit}. The bit positions of the input bits are determined by Algorithm 2 (line 2). Based on the result of Algorithm 2, we generate $s_m'$=\{$a_1$$a_3$, $b_1$$b_3$, $a_2$$b_2$, $a_3$$b_1$, $a_1$$b_3$\}. To eliminate the dummy products from $s_m'$, we search all algebraic expressions in $S$ and eliminate the products that do not appear in any of them. In this case, we find that $a_1$$a_3$ and $b_1$$b_3$ are the dummy products. Hence, we get $s_m$=\{$a_3$$b_1$, $a_2$$b_2$, $a_1$$b_3$\} (line 3). Based on the definition of an irreducible polynomial, $P(x)$ must include $x^m$; in this example $m=4$ (line 4). Looping over all the algebraic expressions, we find that the expressions for $z_{0}$ and $z_{1}$ contain all the products of $s_m$. Hence, $x^0$ and $x^1$ are included in $P(x)$, so that $P(x)$ = $x^4$+$x^1$+$x^0$. We can see that it is the same as $P_2(x)$ in Figure \ref{fig:4-bit-gf-multiplication}.
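The whole round trip can be sketched in code. The following is our own illustration (not the paper's tool): it builds the output expressions of a GF($2^m$) multiplication from the product sets via the reduction of Theorem 2, then recovers the exponents of $P(x)$ from them with the core test of Algorithm 3, with each product $a_j b_k$ represented as the index pair $(j,k)$.

```python
# Ours: Theorem 2's reduction to build output expressions, then the
# Algorithm 3 subset test to recover P(x).

def product_sets(m):
    s = [set() for _ in range(2 * m - 1)]
    for j in range(m):
        for k in range(m):
            s[j + k].add((j, k))  # (j, k) stands for a_j * b_k
    return s

def reduce_power(k, m, p_tail):
    # Output columns that x^k maps to, using x^m = P'(x) recursively;
    # p_tail is the set of exponents of the tail polynomial P'(x).
    terms = {k}
    while any(e >= m for e in terms):
        e = max(terms)
        terms.remove(e)
        terms ^= {e - m + d for d in p_tail}  # XOR = symmetric difference
    return terms

def output_exprs(m, p_tail):
    s = product_sets(m)
    exprs = [set(s[i]) for i in range(m)]  # in-field sets s_0..s_{m-1}
    for k in range(m, 2 * m - 1):          # out-of-field sets s_m..s_{2m-2}
        for col in reduce_power(k, m, p_tail):
            exprs[col] |= s[k]
    return exprs, s

def extract_poly(exprs, s_m, m):
    # Algorithm 3's test: x^i is in P(x) iff every product of s_m is in z_i.
    return {m} | {i for i, e in enumerate(exprs) if s_m <= e}

for name, p_tail in (("P2", {1, 0}), ("P1", {3, 0})):
    exprs, s = output_exprs(4, p_tail)
    print(name, sorted(extract_poly(exprs, s[4], 4)))
# P2 -> [0, 1, 4] (x^4+x+1), P1 -> [0, 3, 4] (x^4+x^3+1)
```

Both polynomials of Figure 1 are recovered from nothing but the expressions, matching Example 6.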
\begin{algorithm} \scriptsize \caption{Extracting irreducible polynomial in $GF(2^{m})$}\label{alg:commonlogic} \textbf{Input: the algebraic expressions of output bits $S$}\\ \textbf{Input: the first out-of-field product set $s_m$}\\ \textbf{Output: Irreducible polynomial $P(x)$} \begin{algorithmic}[1] \State $S$ = \{$exp_0, exp_1, ..., exp_{m-1}$\} \State $V$ $\leftarrow$ Algorithm 2($S$) \State $s_m$ $\leftarrow$ $eliminate$\_$dummy$($s_m'$ $\leftarrow$ $V$, $S$) \State $P(x)$=$x^m$: initialize irreducible polynomial \For{i=0, i$\leq$m-1, i++} \If {all products in $s_{m}$ exist in $exp_i$} \State $P(x)$~+=~$x^i$ \EndIf \EndFor \\ \Return $P(x)$ \end{algorithmic} \end{algorithm} \input{mas_mult_results_Verify} \input{mont_mult_results_Verify} In summary, using the framework presented in Section \ref{sec:implementation}, we first extract the algebraic expressions of all output bits. Then, we analyze the algebraic expressions to find the bit positions of the input bits and the output bits, and extract the irreducible polynomial $P(x)$. In the example of the GF($2^4$) multiplier implemented using $P(x)$ = $x^4$+$x$+$1$, shown in Figure \ref{fig:4-bit-gf-multiplication}, the final results returned by our approach give the following: 1) the input bits set $V$= \{$(a_0, b_0)_{0}$, $(a_1, b_1)_{1}$, $(a_2, b_2)_{2}$, $(a_3, b_3)_{3}$\}, where the subscripts represent the bit position; 2) $z_0$ is the least significant bit (LSB), $z_1$ is the $2^{nd}$ output bit, $z_2$ is the $3^{rd}$ output bit, and $z_3$ is the most significant bit (MSB); 3) the irreducible polynomial is $P(x)$ = $x^4$+$x$+$1$; 4) the specification can be verified using the approach presented in Section \ref{sec:parallel_GF} with the reverse engineered information.
\section{Introduction} Various distance sensors, such as mm-wave sensors, are installed in cars to prevent traffic accidents and improve driving comfort. Because some of these sensors have ranges larger than 100 meters, they can gather environment information. This environment information is used by the car itself and can be useful even for other cars or people. If such information is used by other people for other applications, this is called vehicular-based participatory sensing or crowd sensing. Ideally, vehicular-based participatory sensing should be implemented without using location information in order to protect location privacy. (Although location privacy has been widely researched \cite{survey_privacy}, \cite{survey_privacy2}, it is not within the scope of this paper.) This study attempts to implement an application that estimates object shape without using location information. That is, without the vehicles' location or moving direction information, we estimate the shape of a target object at an unknown location. Such an estimation intuitively seems impossible because there are too many unknown factors, and some theoretical results shown in the next section suggest it is impossible. However, by using mobile sensors that continuously measure the distance between individual sensors and the target object, this paper proposes a theoretical framework for successfully estimating the target-object shape. To the best of our knowledge, this is the first time a target-object shape has been estimated by using the data of mobile distance sensors without using their locations. This can be the first step toward widely expanding the possibility of software sensors implemented by participatory sensing under complete location privacy. In addition, this paper also suggests that the secondary use of IoT (internet of things) \cite{IoT,IoTsurvey,commag,newIoTsurvey} information can be wider than expected.
The contributions of this paper are: \begin{itemize} \item This paper proposes a theoretical framework for estimating the shape of a convex polygon target object at an unknown location by using distance sensors the locations of which are not given. Each sensor moves on an unknown line at a known speed and continuously measures the distance from it to the target object. The estimation framework does not require any positioning function, anchor-location information, or additional mechanisms to obtain side information such as the angle of arrival of a signal. \item The estimation problem includes a unique aspect: the sensing information includes unknown factors. That is, neither the sensor's location nor its moving direction is given. The proposed framework estimates each part of the target-object shape and the combinations of these parts. This is a new type of estimation algorithm. \end{itemize} \section{Related work} The fundamental questions related to the research topic of this paper are whether we can estimate the shape of a target object by using many simple sensors such as distance sensors or binary sensors without a positioning function or location information and, if it is possible, how to estimate it. Our prior studies suggested that we can estimate only a small number of parameters such as the size and perimeter length of a target object by using randomly deployed sensors such as binary sensors and distance sensors and cannot estimate other parameters \cite{infocom,ieice-invite,arXiv}. Thus, these studies introduced composite sensors that are composed of several simple sensors and are randomly deployed. By using them, additional parameters could be estimated \cite{signalProcess,mobileComp}. These studies used the sensing results at a certain sensing epoch and estimated parameters using them. Even when the studies used the sensing results at multiple sensing epochs, they did not take into account sensing epoch information.
Only one study \cite{time-variant} among these studies took account of sensing epochs and the temporary behavior of sensing results, but it focused on estimating the size and perimeter length of the target object. Recently, we have developed a framework for estimating the shape of a target object moving on an unknown trajectory at an unknown speed by using distance sensors at unknown locations \cite{fixed_sensor}. The structure of the estimation method, in which parts of the target object and their connectivities are estimated, is similar, but there are major differences between this paper and that paper. (i) The sensing area model in that paper is a special case of this paper. (ii) The estimation in that paper needs to estimate the target object's moving speed. As far as we know, no studies other than those mentioned above have directly tackled these questions. However, there has been a considerable number of studies on developing an estimation method that uses location-unknown sensors. These studies took a different approach. Most first estimated the sensor locations \cite{locating_nodes} because it is believed that ``the information gathered by such sensor nodes will generally be useless without determining the locations of these nodes'' \cite{flip_amb} or ``the measurement data are meaningless without knowing the location from where the data are obtained'' \cite{local_4}. Once sensors' locations are estimated, shape estimation is no longer difficult. However, estimating the sensor locations often requires additional mechanisms or side information, such as locations of anchor sensors, angle-of-arrival measurements, training data and period, and distance-related measurements \cite{locating_nodes,local_4,local_2,local_3}.
Concrete examples are intersensor distance information \cite{flip_amb}, location-known anchor sensors \cite{tsp2002}, a set of signals between sensors \cite{acm_sensor}, and the system dynamic model and location ambiguity of a small range \cite{bernoulli}. In addition, there has been research into capturing the shape of a target object by using cameras that cannot cover the whole shape of the target object \cite{camera}. \section{Model} A fixed target object $T$ lies in a bounded convex set $\Omega\subset\mathbb{R}^2$ and is a convex polygon. Its boundary $\partial T$ is closed and simple (no holes or double points) and consists of directional edges $\{L_j\}_j$ where $j =1,2,\cdots,n_e$ (Fig. \ref{model}). Here, $n_e$ is the number of edges. Let $\lambda_{j}$ be the length of $L_{j}$, and let $\xi_{j}$ be the angle formed by $L_{j}$ and the reference direction where $0\leq \xi_j<2\pi$. Note that the inner angle formed by $L_{j}$ and $L_{j+1}$ is $\gamma_j=\pi-\xi_{j+1}+\xi_j$. Here, $\{L_{j}\}_j$ are counted counterclockwise along $\partial T$, and the head of $L_{j}$ is the tail of $L_{j+1}$. We do not know any of $\{\lambda_{j}, \xi_{j}, \gamma_j\}_{j}$. That is, we do not know the target-object shape or size. \begin{figure}[tb] \begin{center} \includegraphics[width=11cm,clip]{model} \caption{Illustration of target object model} \label{model} \end{center} \end{figure} A vehicle is running at a speed $v$ on a randomly placed straight line the direction of which is $\phi$ from the reference direction where $\phi$ is an independent random variable uniformly distributed in $[0,2\pi)$. That is, the vehicle's location (that is, the sensor's location) $(x_s(t),y_s(t))$ is given by $(vt\cos\phi+x_s(0),vt\sin\phi+y_s(0))$. For simplicity, assume that $v$ is time-invariant. However, the extension to a time-variant $v$ is straightforward. The vehicle is equipped with a directional distance sensor and a speed meter measuring $v$. (That is, $v$ is known, but $\phi$ is unknown.)
The distance sensor measures the distance from the sensor to the nearest point of $T$ within the sensing area. (In practice, $(x_s(t),y_s(t))$ cannot be inside $T$, but for simplicity the vehicle is assumed to run on a straight line that may pass through $T$.) Its sensing area is a sector shape of radius $r_{max}$ and direction range $[-\theta_{max},\theta_{max}]$ from its moving direction where $0<\theta_{max}\leq \pi/2$. That is, the sensing area is $(x_s(t)+u\cos(\phi+\theta),y_s(t)+u\sin(\phi+\theta))$ for $0\leq \forall u\leq r_{max},-\theta_{max}\leq\forall\theta\leq\theta_{max}$. The sensor continuously measures the distance $r(t)$ at $t$ from the sensor to the target object and sends the sensing result with $v$ to a server collecting sensing results from individual sensors. Thus, $r(t)$ is given as follows: $ r(t)=\cases{\tilde r(t), &if $\tilde r(t)\leq r_{max}$,\cr \emptyset,&if $\tilde r(t)>r_{max}$.} $ Here, $\tilde r(t)\defeq\min_{(x_s(t)+u\cos(\phi+\theta),y_s(t)+u\sin(\phi+\theta))\in T, -\theta_{max}\leq\theta\leq\theta_{max}} u$. In particular, $r(t)=0$ if $(x_s(t),y_s(t))\in T$. Define the detecting direction $\theta^*$ and the detected point as follows. $\theta^*$ is the $\theta\in [-\theta_{max},\theta_{max}]$ minimizing $\{u|(x_s(t)+u\cos(\phi+\theta),y_s(t)+u\sin(\phi+\theta))\in T\}$, and the detected point is $(x_s(t)+r(t)\cos(\phi+\theta^*),y_s(t)+r(t)\sin(\phi+\theta^*))$ on $\partial T$ for $r(t)>0$. The sensor continuously sends a report of $r(t)$ and $v$ to an estimation server. (If $r(t)=\emptyset$, NO DETECTION is reported.) That is, we can use $r(t)$ and $v$ of each sensor. Neither the vehicle's location $(x_s(t),y_s(t))$ nor its moving direction $\phi$ is given, to protect location privacy. There are $n_s$ vehicles monitoring $\Omega$. $\phi,v,r(t)$ of the $i$-th vehicle or its sensor are described as $\phi_i,v_i,r_i(t)$. Table \ref{p_list} lists the variables and parameters used in the remainder of this paper for the reader's convenience.
\begin{table} \caption{List of variables and parameters} \begin{center}\label{p_list} \begin{tabular}{ll} \hline $T$&target object\\ $L_{j}$&$j$-th directional line segment of $\partial T$\\ $\lambda_{j}$&length of $L_{j}$\\ $\xi_{j}$&angle formed by $L_j$ and reference direction\\ $\gamma_j$&inner angle formed by $L_j$ and $L_{j+1}$\\ $n_e$&number of edges in $\partial T$\\ $n_s$& number of sensors\\ $r_{max}$&maximum sensing range\\ $\phi$&angle of vehicle's moving direction\\ $v$&moving speed of vehicle\\ $\theta_{max}$&sensing direction range from vehicle's moving direction\\ $\theta^*$&detecting direction\\ $r(t)$&measured distance to $T$ at $t$\\ $\zeta$&$\xi+\pi/2-\phi$\\ $\zeta_p$&$\xi+\pi/2+\theta_{max}$\\ $\zeta_m$&$\xi+\pi/2-\theta_{max}$\\ $p_d(L)$&period of $r(t)$ detecting a whole edge $L$\\ $l_d(L)$&length in time of $p_d(L)$\\ $s_d(L)$&slope of $r(t)$ during $p_d(L)$\\ $n_d(\lambda)$&number of sensors detecting the whole edge of length $\lambda$\\ $n_d(\gamma)$&number of sensors detecting a vertex of angle $\gamma$\\ $\estcan_set(x)$&set of candidate estimates derived from $x$\\ $k_{sub}$& ratio of total number of candidate estimates to number of sub-intervals\\ $c(\widehat{x})$&number of occurrences of estimates ($x=\lambda$ or $\gamma$)\\ $\est_set(x)$&set of estimates of edge length ($x=\lambda$) or angle ($x=\gamma$)\\ $n_\lambda$&number of whole edge detection samples\\ $n_\gamma$&number of vertex detection samples\\ $\widehat{N_\lambda}$&estimated number of edges of length $\lambda$\\ $\widehat{N_\gamma}$&estimated number of vertexes of angle $\gamma$\\ \hline \end{tabular} \end{center} \end{table} In the remainder of this paper, we use the following notations. For a set $X\subset \mathbb{R}^2$, $\partial X$ denotes its boundary, $\lengthx{X}$ denotes its perimeter length, and $\sizex{X}$ denotes its area size. 
$\sharp(S)$ is the number of elements in a discrete set $S$, $\bfone(z)\defeq\cases{1, &if $z$ is true,\cr 0, &otherwise,}$, $\bfone_\emptyset(z)\defeq\cases{1, &if $z$ is true,\cr \emptyset, &otherwise,}$,$[z]^+\defeq z\bfone(z>0)$, and $\widehat{z}$ is an estimator of $z$. In addition, $\arcsin(t)$, $\arccos(t)$, and $\arctan(t)$ take values in $[-\pi/2,\pi/2)$, $[0,\pi)$, and $[-\pi/2,\pi/2]$, respectively. \section{Basic properties}\label{basic_pro} This section discusses basic properties of $r(t)$. A sensor detecting $L_j$ with a fixed $\theta^*\in[-\theta_{max},\theta_{max}]$ needs to satisfy \bq \xi_j-\theta^*\leq\phi\leq\xi_j-\theta^*+\pi.\label{fix-theta-xi} \eq Note that the detecting direction $\theta^*$ is $\zeta\defeq\xi+\pi/2-\phi$ or $\pm\theta_{max}$ for an edge of direction $\xi$ when the detected point is not at an end of the edge (Fig. \ref{basic}). When \bq -\theta_{max}\leq \zeta\leq\theta_{max},\label{vert-d} \eq $\theta^*=\zeta$. When \bqn &&-\theta_{max}-\pi/2<\zeta<-\theta_{max} \cr &&(\theta_{max}<\zeta<\theta_{max}+\pi/2),\label{max-d} \eqn $\theta^*=-\theta_{max}$ ($\theta^*=\theta_{max}$). Equivalently, \bq \theta^*=\cases{ \theta_{max}, &for $\zeta_m-\pi/2<\phi<\zeta_m,$\cr \zeta, &for $\zeta_m\leq\phi\leq \zeta_p,$\cr -\theta_{max}, &for $\zeta_p<\phi<\zeta_p+\pi/2$. }\label{detecting_direction} \eq Eq. (\ref{vert-d}) means that the sensor can detect the distance to the edge at the vertical direction of the edge. Because the distance sensor normally detects the minimum distance to an object, this is a normal case. Eq. (\ref{max-d}) means that the sensor cannot detect the distance of the edge at the vertical direction. In this case, the detecting direction becomes $\pm\theta_{max}$, which is the closest direction to the vertical direction of the edge within the sensing direction range. 
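The sensing model can also be exercised numerically. The sketch below is our own (not from the paper); it approximates the sector minimum $\tilde r(t)$ by sampling a finite fan of rays and intersecting each ray with the polygon edges, so the returned value is approximate near the sector boundary.

```python
# Numeric sketch (ours) of r(t): nearest ray/edge intersection over a
# sampled fan of directions in [-theta_max, theta_max], capped at r_max.
import math

def ray_segment(p, d, a, b):
    # Distance u >= 0 along ray p + u*d to segment a-b, or None.
    qx, qy = a[0] - p[0], a[1] - p[1]
    ex, ey = b[0] - a[0], b[1] - a[1]
    den = d[0] * ey - d[1] * ex           # cross(d, e)
    if abs(den) < 1e-12:
        return None                       # ray parallel to segment
    u = (qx * ey - qy * ex) / den         # cross(q, e) / cross(d, e)
    s = (qx * d[1] - qy * d[0]) / den     # cross(q, d) / cross(d, e)
    return u if u >= 0 and 0 <= s <= 1 else None

def r_of_t(t, poly, phi, v, p0, r_max, th_max, n_rays=181):
    # Sensor position at time t, moving with speed v in direction phi.
    p = (p0[0] + v * t * math.cos(phi), p0[1] + v * t * math.sin(phi))
    best = None
    for k in range(n_rays):
        th = -th_max + 2 * th_max * k / (n_rays - 1)
        d = (math.cos(phi + th), math.sin(phi + th))
        for i in range(len(poly)):
            u = ray_segment(p, d, poly[i], poly[(i + 1) % len(poly)])
            if u is not None and (best is None or u < best):
                best = u
    return best if best is not None and best <= r_max else None

square = [(4, -1), (6, -1), (6, 1), (4, 1)]  # convex target T
print(r_of_t(0.0, square, 0.0, 1.0, (0.0, 0.0), 100.0, math.pi / 2))  # 4.0
```

A return value of `None` corresponds to the NO DETECTION report ($r(t)=\emptyset$).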
\begin{figure}[tbh] \begin{center} \includegraphics[width=8cm,clip]{basic} \caption{Illustration of detection} \label{basic} \end{center} \end{figure} Consider a line on which an edge of direction $\xi$ exists. Without loss of generality, we can assume that this line passes through the origin. Then, this line can be expressed as \bq y=(\tan\xi)x\label{line_xi} \eq in the $(x,y)$-coordinate system. \subsection{Relationship between $s_d, l_d$ and parameters of $T$} \subsubsection{For $\theta^*=\zeta$} When the detecting direction is $\theta^*=\zeta$, the line the direction of which is the same as the detecting direction and that passes through the sensor's location is \bq y=(\tan(\xi+\pi/2))(x-tv\cos\phi-x_s(0))+tv\sin\phi+y_s(0). \eq The intersection $(x^*,y^*)$ of this line and the line defined by Eq. (\ref{line_xi}) (on which the edge of direction $\xi$ lies) is \bqn x^*(t)&=&(tv\cos\phi+x_s(0))\cos^2\xi\cr &&+(tv\sin\phi+y_s(0))\sin\xi\cos\xi,\\ y^*(t)&=&(tv\sin\phi+y_s(0))\sin^2\xi\cr &&+(tv\cos\phi+x_s(0))\sin\xi\cos\xi. \eqn Thus, the relative location $(\triangle x(t), \triangle y(t))$ of this intersection from the sensor's location is $(x^*(t)-tv\cos\phi-x_s(0), y^*(t)-tv\sin\phi-y_s(0))$. Because $r(t)=|\triangle x(t)/\cos(\xi+\pi/2)|$ when the intersection is on the edge (that is, the intersection becomes a detected point), $r(t)$ is a linear function of $t$ when the sign of $\frac{\triangle x(t)}{\cos(\xi+\pi/2)}$ is fixed and its slope $s_d$ (the amount of increase/decrease of $r(t)$ per unit of time) is \bq s_d= \pm v\sin(\xi-\phi). \eq Because $\xi-\phi$ must satisfy Eq.
(\ref{vert-d}), \bqn &&\xi-\phi\cr &=&\cases{\pm\arcsin(s_d/v)\bfone_{\emptyset}(-\theta_{max}\leq \zeta\leq\theta_{max}),\cr (\pi\mp\arcsin(s_d/v))\bfone_{\emptyset}(-\theta_{max}\leq \zeta\leq\theta_{max}).}\label{s_d1} \eqn When the sensor observes the line on which the edge of direction $\xi$ exists from $t_s$ to $t_e$ with $\theta^*=\zeta$, the length on this line between the detected point at $t_s$ and that at $t_e$ is $|x^*(t_e)-x^*(t_s)|/|\cos\xi|=(t_e-t_s)|v\cos(\phi-\xi)|$ (Fig. \ref{basic}). When the detected points at $t_s$ and $t_e$ are two end points of an edge of length $\lambda$ and direction $\xi$, this length on this line is $\lambda$. Therefore, \bq \lambda=l_d|v\cos(\phi-\xi)|\label{lam1b} \eq where $l_d=t_e-t_s$ is the length in time taken to detect this edge. Thus, due to Eq. (\ref{s_d1}) and the fact that $\xi-\phi$ must satisfy Eq. (\ref{vert-d}), \bq \lambda=l_d v\sqrt{1-(s_d/v)^2}\bfone_{\emptyset}(-\theta_{max}\leq \zeta\leq\theta_{max}).\label{lam1} \eq \subsubsection{For $\theta^*=\pm\theta_{max}$} When the detecting direction is $\theta^*=\pm\theta_{max}$, the line the direction of which is the same as the detecting direction and that passes through the sensor's location is \bq y=(\tan(\phi\pm\theta_{max}))(x-tv\cos\phi-x_s(0))+tv\sin\phi+y_s(0). \eq The intersection $(x^*,y^*)$ of this line and the line defined by Eq. (\ref{line_xi}) is \bqn x^*(t)&=&\frac{1}{\tan(\phi\pm\theta_{max})-\tan\xi}\cr &&\{(\tan(\phi\pm\theta_{max}))(tv\cos\phi+x_s(0))\cr &&-tv\sin\phi-y_s(0)\},\cr y^*(t)&=&(\tan\xi)x^*(t). \eqn Because $r(t)=|\triangle x(t)/\cos(\phi\pm\theta_{max})|=|(x^*(t)-tv\cos\phi-x_s(0))/\cos(\phi\pm\theta_{max})|$ when the intersection is on the edge, $r(t)$ is a linear function of $t$ and its slope $s_d$ is \bq s_d= \pm\frac{v\sin(\xi-\phi)}{\sin(\phi\pm\theta_{max}-\xi)}. \eq Because $\xi-\phi$ must satisfy Eq. 
(\ref{max-d}), \bqn &&\xi-\phi\cr &=& (\arctan\frac{s_d\sin\theta_{max}}{s_d\cos\theta_{max}\pm v}+(0\,{\rm or}\, \pi))\bfone_\emptyset(\theta_{max}<\zeta),\cr &&\xi-\phi\cr &=& (-\arctan\frac{s_d\sin\theta_{max}}{s_d\cos\theta_{max}\pm v}+(0\,{\rm or}\, \pi))\bfone_\emptyset(\zeta<-\theta_{max}).\label{s_d2}\cr && \eqn Similarly to the derivation of Eq. (\ref{lam1b}), \bq \lambda=l_d|\frac{v\sin\theta_{max}}{\sin(\phi\pm\theta_{max}-\xi)}|. \eq Due to Eq. (\ref{s_d2}), $\lambda=\frac{l_d|s_d|\sin\theta_{max}}{|\sin(\xi-\phi)|}$ and $\sin(\xi-\phi)=\frac{s_d\sin\theta_{max}}{\sqrt{(s_d\sin\theta_{max})^2+(s_d\cos\theta_{max}\pm v)^2}}$. Thus, \bqn \lambda&=&l_d\sqrt{(s_d\sin\theta_{max})^2+(s_d\cos\theta_{max}\pm v)^2}\cr &&\bfone_\emptyset((\theta_{max}<\zeta)\cup(\zeta<-\theta_{max})).\label{lam2} \eqn \subsection{Shape of $r(t)$}\label{r-shape} While a sensor keeps detecting an edge $L$, $r(t)$ is continuous and forms a line segment. At a vertex, the detecting direction changes and $r(t)$ may become a curve. A curve appears between $t_1$ and $t_2$ when $\theta^*=\xi_j-\phi+\pi/2\in [-\theta_{max},\theta_{max}]$ just before a vertex (Fig. \ref{curve}-(a)). This is because the detected point stays at the vertex while the sensor is moving between $t_1$ and $t_2$ and because the distance between the vertex and the sensor is not a linear function of $t$ between $t_1$ and $t_2$. When $\theta^*=\pm\theta_{max}$ just before a vertex, no curve appears because the detected point moves as the sensor moves (Fig. \ref{curve}-(b)). \begin{figure}[tb] \begin{center} \includegraphics[width=10cm,clip]{curve} \caption{Illustration of $r(t)$} \label{curve} \end{center} \end{figure} In the remainder of this paper, we focus on the sensing results with $r(t)>0$.
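The slope and sweep-speed relations above can be checked numerically. The following Python sketch (a sanity check with illustrative values only; the sensor start position and the angles are arbitrary) verifies that, for $\theta^*=\zeta$ (detecting direction perpendicular to the edge), the distance from the moving sensor to the edge line changes at rate $v|\sin(\xi-\phi)|$ and the detected point sweeps the line at speed $v|\cos(\phi-\xi)|$, which underlies Eq. (\ref{lam1}).

```python
import math

def r_and_foot(t, v, phi, xi, x0=3.0, y0=-2.0):
    """Sensor position at time t and its relation to the edge line
    y = x*tan(xi) through the origin (theta* = zeta case, so the
    detected point is the foot of the perpendicular)."""
    xs = x0 + t * v * math.cos(phi)
    ys = y0 + t * v * math.sin(phi)
    r = abs(ys * math.cos(xi) - xs * math.sin(xi))   # distance to the edge line
    foot = xs * math.cos(xi) + ys * math.sin(xi)     # detected point, as arclength
    return r, foot

v, phi, xi = 0.1, 0.4, 1.1                           # illustrative values
(r0, a0), (r1, a1) = r_and_foot(0.0, v, phi, xi), r_and_foot(1.0, v, phi, xi)
s_d = r1 - r0                                        # slope of r(t) per unit time
# s_d = +/- v*sin(xi - phi), hence lambda = l_d * v * sqrt(1 - (s_d/v)^2):
assert abs(abs(s_d) - v * abs(math.sin(xi - phi))) < 1e-9
# the detected point sweeps the edge line at speed v*|cos(phi - xi)|:
assert abs(abs(a1 - a0) - v * abs(math.cos(phi - xi))) < 1e-9
```

Both assertions pass for any $\phi$, $\xi$ with a fixed sign of the signed distance, matching the linearity of $r(t)$ noted above.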
When $r(t)$ becomes a line segment during a period in which a sensor detects the {\it whole} $L_{j}$ with $r(t)>0$ and the period starts at $t_s$ and ends at $t_e$, the event corresponding to $t_s$ is (i) a change of slope at $r(t_s)>0$ (a curve may end at $t_s$) or (ii) $r(t_s)<r_{max}$ and $r(t_s-dt)=\emptyset$, and the event corresponding to $t_e$ is (i) a change of slope at $r(t_e)>0$ (a curve may start at $t_e$) or (ii) $r(t_e-dt)<r_{max}$ and $r(t_e)=\emptyset$. Here, $0<dt\ll 1$ and $t-dt$ means ``just before $t$.'' (Note that the period does not include $r(t)=0$.) Let $p_d(L)$ be a period of $r(t)$ starting and ending with the above-mentioned events of $L$ with $r(t)>0$ and let $l_d(L)$ be the length in time of $p_d(L)$. We also define $s_d(L)$ as the slope of $r(t)$ while detecting $L$. We can obtain $s_d$ even if a single sensor observes only a part of $L$ rather than the whole $L$. Therefore, to obtain $s_d$, the start epoch $t_s$ can also satisfy $r(t_s)=0$ or $r_{max}$ in addition to (i)-(ii) for $r(t_s)$ mentioned above, and the end epoch $t_e$ can also satisfy $r(t_e)=0$ or $r_{max}$ in addition to (i)-(ii) for $r(t_e)$ mentioned above. \begin{remark}\label{remark1} $s_d(L_j)$ for the period with $r(t_e)=0$ and $s_d(L_{j+1})$ for the period with $r(t_s)=0$ are not useful for the following reason. As proposed in a later section, to estimate an angle $\gamma_j$, we need $s_d(L_j)$ and $s_d(L_{j+1})$, i.e., $s_d$ for consecutive edges. This is because we derive the estimate of $\gamma_j$ from the estimate of $\xi_j-\phi$ and that of $\xi_{j+1}-\phi$. If there is a period ending at $t_e$ with $r(t_e-dt)>0$, another period of $r(t)=0$ for $\forall t\in [t_e, t_s']$, and a period starting at $t_s'$ with $r(t_s'+dt)>0$, we cannot estimate an angle from $s_d$ for the combination of the first and third periods. This is because these two pieces of $s_d$ may not be $s_d$ for consecutive edges.
\end{remark} \subsection{Probability that sensor detects whole $L$}\label{sec-p_d} According to Fig. \ref{omega2}, if a directional line $G$ on which the sensor moves is in the strip of width $r_{max}\sin\theta^*-\lambda|\sin(\xi-\phi)|$ and if $\phi$ satisfies Eq. (\ref{fix-theta-xi}), the sensor can detect the whole $L$. Here, $\theta^*$ is determined by Eq. (\ref{detecting_direction}). Because the strip width must be non-negative, $$r_{max}|\sin(\zeta)|>\lambda|\sin(\xi-\phi)|$$ for $\zeta\in[-\theta_{max},\theta_{max}]$ and $$r_{max}|\sin\theta_{max}|>\lambda|\sin(\xi-\phi)|$$ for $\zeta\in[\theta_{max},\theta_{max}+\pi/2]\cup[-\theta_{max}-\pi/2,-\theta_{max}]$. Thus, for $\zeta\in[-\theta_{max},\theta_{max}]$, \begin{eqnarray*} \phi-\xi&\in&[-\arctan\frac{r_{max}}{\lambda},\arctan\frac{r_{max}}{\lambda}]\cr &&\cup[\pi-\arctan\frac{r_{max}}{\lambda},\pi+\arctan\frac{r_{max}}{\lambda}], \end{eqnarray*} and for $\zeta\in[\theta_{max},\theta_{max}+\pi/2]\cup[-\theta_{max}-\pi/2,-\theta_{max}]$, $\phi-\xi\in[-\eta,\eta]\cup[\pi-\eta,\pi+\eta]$. Here, $\eta(\lambda)\defeq\cases{\pi/2, & for $r_{max}|\sin\theta_{max}|\geq\lambda$,\cr \arcsin\frac{r_{max}|\sin\theta_{max}|}{\lambda}\in [0,\pi/2], &otherwise.}$ \begin{figure}[tb] \begin{center} \includegraphics[width=11cm,clip]{omega2} \caption{Location of sensors detecting whole edge with $r(t)>0$} \label{omega2} \end{center} \end{figure} Note that the measure of the set of $G$ on which sensors monitor $\Omega$ (Fig. \ref{omega2}) is given by Eq. (5.2) in \cite{santalo} and is $\lengthx{\Omega}+\pi r_{max}\sin\theta_{max}$. Also note that the measure $\measure_1$ of the set of $G$ that is in this strip and has a direction satisfying Eq. 
(\ref{fix-theta-xi}) is as follows where $\Phi_{1,1}(\xi)\defeq[\zeta_m,\zeta_p]\cap([\xi-\arctan\frac{r_{max}}{\lambda},\xi+\arctan\frac{r_{max}}{\lambda}]\cup[\xi+\pi-\arctan\frac{r_{max}}{\lambda},\xi+\pi+\arctan\frac{r_{max}}{\lambda}])$ and $\Phi_{1,2}(\xi)\defeq([\zeta_m-\pi/2,\zeta_m]\cup[\zeta_p,\zeta_p+\pi/2])\cap([\xi-\eta,\xi+\eta]\cup[\xi+\pi-\eta,\xi+\pi+\eta])$. \bqn &&\measure_1(\lambda)\cr &=&\int_{\Phi_{1,1}(\xi)}r_{max}|\sin(\zeta)|-\lambda|\sin(\xi-\phi)|d\phi\cr &+&\int_{\Phi_{1,2}(\xi)}r_{max}|\sin\theta_{max}|-\lambda|\sin(\xi-\phi)|d\phi\cr &=&\bfone(\frac{\pi}{2}-\theta_{max}<\arctan\frac{r_{max}}{\lambda})2\int_{\pi/2-\theta_{max}}^{\arctan\frac{r_{max}}{\lambda}}r_{max}\cos x -\lambda|\sin x|dx\cr &+&\bfone(\eta\geq\max(\pi/2-\theta_{max},\theta_{max}))2\int_{-\theta_{max}}^{\pi/2-\theta_{max}}r_{max}\sin\theta_{max}-\lambda|\sin x|dx\cr &+&\bfone(\pi/2-\theta_{max}\leq\eta<\theta_{max})2\int_{-\eta}^{\pi/2-\theta_{max}}r_{max}\sin\theta_{max}-\lambda|\sin x|dx\cr &+&\bfone(\theta_{max}\leq\eta<\pi/2-\theta_{max})2\int_{-\theta_{max}}^{\eta}r_{max}\sin\theta_{max}-\lambda|\sin x|dx\cr &+&\bfone(\eta<\min(\pi/2-\theta_{max},\theta_{max}))2\int_{-\eta}^{\eta}r_{max}\sin\theta_{max}-\lambda|\sin x|dx\cr &=&\bfone(\frac{\pi}{2}-\theta_{max}<\arctan\frac{r_{max}}{\lambda})\cr &&\qquad 2\{r_{max}(\frac{r_{max}}{\sqrt{\lambda^2+r_{max}^2}}-\cos\theta_{max})+\lambda(\frac{\lambda}{\sqrt{\lambda^2+r_{max}^2}}-\sin\theta_{max})\}\cr &+&\bfone(\eta\geq\max(\pi/2-\theta_{max},\theta_{max}))\cr &&\qquad2\{(\pi/2) r_{max}\sin\theta_{max}-\lambda(2-\cos\theta_{max}-\sin\theta_{max})\}\cr &+&\bfone(\pi/2-\theta_{max}\leq\eta<\theta_{max})\cr &&\qquad2\{(\pi/2-\theta_{max}+\eta) r_{max}\sin\theta_{max}-\lambda(2-\cos\eta-\sin\theta_{max})\}\cr &+&\bfone(\theta_{max}\leq\eta<\pi/2-\theta_{max})\cr &&\qquad2\{(\eta+\theta_{max})r_{max}\sin\theta_{max}-\lambda(2-\cos\theta_{max}-\cos\eta)\}\cr &+&\bfone(\eta<\min(\pi/2-\theta_{max},\theta_{max}))4\{\eta 
r_{max}\sin\theta_{max}-\lambda(1-\cos\eta)\}.\label{measure_1} \eqn Because the probability $q_d(\lambda)$ that the sensor detects the whole $L$ of length $\lambda$ is given by the ratio of these measures in accordance with the definition of geometric probability \cite{santalo}, \bq q_d(\lambda)=\frac{\measure_1(\lambda)}{2(\lengthx{\Omega}+2\pi r_{max} \sin\theta_{max})}.\label{q_d_lam} \eq (The denominator is doubled because $G$ is directional.) Therefore, the expected number $E[n_d(\lambda)]$ of sensors detecting the whole $L$ of length $\lambda$ with $r(t)>0$ is given by \bq E[n_d(\lambda)]=n_sq_d(\lambda).\label{num_detects1} \eq \subsection{Probability that sensor detects a vertex} Here, we pay attention to the number of sensors whose sensing results cover a vertex of $T$. Such sensing results may not cover a whole edge. \begin{figure}[tb] \begin{center} \includegraphics[width=11cm,clip]{single-edge} \caption{Location of sensors detecting a part around a vertex with $r(t)>0$} \label{single-edge} \end{center} \end{figure} Focus on a vertex formed by $L_j,L_{j+1}$ and assume that the vertex is on the left-side of line $G$ on which a sensor is moving. Because the vertex on the left-side of line $G$ is detected, $0<\theta^{(k)}<\pi$ for $k=j,j+1$. Here, $\theta^{(k)}$ is the detecting direction for $L_k$. Thus, due to Eq. (\ref{detecting_direction}), $\{\xi_k-\theta_{max}<\phi<\xi_k-\theta_{max}+\pi/2\}\cup(\{0<\xi_k+\pi/2-\phi<\pi\}\cap\{\xi_k-\theta_{max}+\pi/2<\phi<\xi_k+\theta_{max}+\pi/2\})$. This means that $\xi_k-\theta_{max}<\phi<\xi_k+\pi/2$. To detect $L_j,L_{j+1}$ around the vertex, \bqn \phi\in\Phi_2 &\defeq&(\xi_j-\theta_{max},\xi_j+\pi/2)\cap(\xi_{j+1}-\theta_{max},\xi_{j+1}+\pi/2).\label{Phi_2} \eqn In addition, we need a condition under which the detected point on $L_j$ is not always at the vertex.
This condition is equivalent to (i) $\theta^{(j)}=\theta_{max}$ or (ii) $0<\theta^{(j)}=\xi_j+\pi/2-\phi<\pi, \xi_j-\theta_{max}+\pi/2<\phi<\xi_j+\theta_{max}+\pi/2$ and the detected point is on $L_j$ but not at the vertex formed by $L_j,L_{j+1}$. Otherwise, we do not obtain any information on $s_d(L_j)$. Condition (i) means $\theta_{max}-\pi/2\leq\xi_j-\phi<\theta_{max}$, and (ii) means $-\pi/2\leq\xi_j-\phi<-\pi/2+\theta_{max}$ (equivalently, $3\pi/2\leq\xi_j-\phi<3\pi/2+\theta_{max}$) and $h<h_{max}$ (Fig. \ref{single-edge}). Here, $h_{max}(\phi,\xi_j)\defeq r_{max}\sin(\xi_j-\phi+\pi/2)$. For simplicity, define $h_{max}=\infty$ for $\theta_{max}-\pi/2\leq\xi_j-\phi<\theta_{max}$ and 0 for $\theta_{max}\leq\xi_j-\phi<3\pi/2$. According to Fig. \ref{single-edge}, the measure $\measure_2$ of the set of $G$ on which a sensor detects a part around the vertex is \bqn \measure_2&=&2\int_{\Phi_2}\min(w_j,w_{j+1},h_{max}(\phi,\xi_j))d\phi.\label{measure_2} \eqn Here, the factor 2 on the right-hand side of the equation above appears due to the symmetry of the assumption ``the vertex is on the left-side of line $G$'', and $w_k=r_{max}\sin(\xi_k+\pi/2-\phi)$ for $\phi\in[\xi_k+\pi/2-\theta_{max},\xi_k+\pi/2+\theta_{max}]$ and $w_k=r_{max}\sin\theta_{max}$ otherwise. It is possible to calculate $\measure_2$ in the same manner as $\measure_1$, although the details are omitted. Alternatively, $\measure_2$ can be obtained by numerically integrating Eq. (\ref{measure_2}). \begin{remark} $\measure_2$ is a function of $\xi_j$ and $\xi_{j+1}$. However, it can be regarded as a function of $\gamma_j$ alone because any direction can be used as the reference direction. \end{remark} Similarly to Subsection \ref{sec-p_d}, the probability $q_d(\gamma_j)$ that the sensor can detect the vertex formed by $L_j,L_{j+1}$ with $r(t)>0$ is given by the following. \bq q_d(\gamma_j)=\frac{\measure_2}{2(\lengthx{\Omega}+2\pi r_{max} \sin\theta_{max})}.
\eq Therefore, the expected number $E[n_d(\gamma)]$ of sensors detecting a vertex of inner angle $\gamma$ with $r(t)>0$ is given by \bq E[n_d(\gamma)]=n_s q_d(\gamma). \label{num_vertex} \eq \begin{remark} Due to Eq. (\ref{Phi_2}), $\phi$ must be in $\Phi_2=[\max(\xi_{j+1},\xi_j)-\theta_{max},\min(\xi_{j+1},\xi_j)+\pi/2]$ to detect a vertex formed by $L_j,L_{j+1}$. Thus, the length of the range of $\phi$ satisfying $\Phi_2$ is $[\gamma_j-\pi/2+\theta_{max}]^+$. Therefore, we cannot detect a vertex of angle $\gamma<\pi/2-\theta_{max}$. \end{remark} \section{Estimation method} \subsection{Estimating edge lengths and angles of $T$} According to the definition of starting and ending events on $r(t)$ defined in Subsection \ref{r-shape}, we obtain $s_d$ or $l_d$ for each sensor detecting $T$. By using them, we estimate the shape of $T$. Edge length $\lambda$ is estimated through Eqs. (\ref{lam1}) and (\ref{lam2}) and angle $\gamma$ through Eqs. (\ref{s_d1}) and (\ref{s_d2}). For the $\lambda$ estimation, applying Eqs. (\ref{lam1}) and (\ref{lam2}) to $l_d$ (and $s_d$) can directly yield three estimates of $\lambda$. \bqn \widehat{\lambda}&=&l_d v\sqrt{1-(s_d/v)^2}\\ \widehat{\lambda}&=&l_d\sqrt{(s_d\sin\theta_{max})^2+(s_d\cos\theta_{max}\pm v)^2}\label{hat_lam2} \eqn On the other hand, Eqs. (\ref{s_d1}) and (\ref{s_d2}) directly estimate $\xi-\phi$ but not $\gamma$. On the basis of the estimates of $\xi-\phi$, we derive the estimate of $\gamma$. Assume that a sensor detects $L_j$ and $L_{j+1}$ and that the slopes of $r(t)$ for them are $s_d^{(j)}$ and $s_d^{(j+1)}$. Because $\gamma_j=\pi-\xi_{j+1}+\xi_j$, we obtain the estimates of $\gamma_j$. \bq \widehat{\gamma_j}=\pi-\widehat{\xi_{j+1}-\phi}+\widehat{\xi_{j}-\phi}\label{hat_gamma} \eq Here, $\widehat{\xi_{k}-\phi}$ is given by Eq. (\ref{s_d1}) or (\ref{s_d2}) and $s_d^{(k)}$. If we exactly obtain $s_d$ and $l_d$ (and its associated period determined by $t_s$ and $t_e$), the estimates mentioned above are exact. 
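As a concrete illustration of the length estimators, the following Python sketch (illustrative parameter values; `lambda_candidates` is a hypothetical helper, not part of the paper's notation) evaluates Eq. (\ref{lam1}) and both signs of Eq. (\ref{hat_lam2}) on a synthetic $(l_d,s_d)$ sample generated for the $\theta^*=\zeta$ case.

```python
import math

def lambda_candidates(l_d, s_d, v, theta_max):
    """All candidate edge-length estimates from one (l_d, s_d) sample:
    Eq. (lam1) for theta* = zeta, and Eq. (hat_lam2) with both signs
    for theta* = +/- theta_max (theta* itself is unknown)."""
    cands = []
    if abs(s_d) <= v:                                  # Eq. (lam1)
        cands.append(l_d * v * math.sqrt(1.0 - (s_d / v) ** 2))
    for sign in (1.0, -1.0):                           # Eq. (hat_lam2)
        cands.append(l_d * math.hypot(s_d * math.sin(theta_max),
                                      s_d * math.cos(theta_max) + sign * v))
    return cands

# an edge of length 50 observed with theta* = zeta and xi - phi = 0.3:
v, theta_max, lam, d = 0.1, math.pi / 2, 50.0, 0.3
s_d = v * math.sin(d)                 # slope, as in Eq. (s_d1)
l_d = lam / (v * math.cos(d))         # detection duration, from Eq. (lam1b)
cands = lambda_candidates(l_d, s_d, v, theta_max)
assert any(abs(c - lam) < 1e-9 for c in cands)   # the exact value is a candidate
assert len(cands) == 3                           # but it is not unique
```

The exact length always appears among the candidates, but so do spurious values; this non-uniqueness is addressed next.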
However, we cannot uniquely estimate $\lambda$ or $\gamma$. This is because $\theta^*$ is unknown and because Eqs. (\ref{s_d1}), (\ref{s_d2}), and (\ref{hat_gamma}) cannot uniquely determine $\widehat{\xi_{k}-\phi}$ or $\widehat{\lambda}$. We need to overcome this non-uniqueness. In our proposed method, we choose an estimate from multiple estimates as follows. For each pair $(l_d,s_d)$ or each pair $(s_d^{(j)},s_d^{(j+1)})$, we can obtain multiple $\widehat{\lambda}$ or $\widehat{\gamma_j}$. Call these multiple $\widehat{\lambda}$ or $\widehat{\gamma_j}$ candidate estimates of $\lambda$ or $\gamma_j$. Let $\estcan_set(l_d,s_d)$ ($\estcan_set(s_d^{(j)},s_d^{(j+1)})$) be the set of these candidate estimates derived from $(l_d,s_d)$ (from $(s_d^{(j)},s_d^{(j+1)})$). The set of these candidate estimates includes at least one exact estimate because the estimates mentioned above do not include errors. When the number of $(l_d,s_d)$-samples ($(s_d^{(j)},s_d^{(j+1)})$-samples) is large, the number of occurrences of exact estimates is also large. For example, if $k$ $(l_d,s_d)$-samples detect an edge of length $\lambda$, the number of occurrences of $\widehat{\lambda}=\lambda$ is larger than or equal to $k$. On the other hand, the number of occurrences of any other candidate estimate is smaller than that of the exact one because other candidate estimates depend on $(l_d,s_d)$ or $(s_d^{(j)},s_d^{(j+1)})$. Figure \ref{simple_default} illustrates this. Additional comments are provided in the Appendix. Thus, by counting the numbers of occurrences, we adopt the candidate estimates that occur far more often than others as estimates. Specifically, the proposed method chooses an estimate among $\cup_{(l_d,s_d)}\estcan_set(l_d,s_d)$ ($\cup_{(s_d^{(j)},s_d^{(j+1)})}\estcan_set(s_d^{(j)},s_d^{(j+1)})$) as follows.
Equally divide the interval between the smallest candidate estimate $\min\{\widehat{\lambda}\in\cup_{(l_d,s_d)}\estcan_set(l_d,s_d)\}$ and the largest candidate estimate $\max\{\widehat{\lambda}\in\cup_{(l_d,s_d)}\estcan_set(l_d,s_d)\}$ (the smallest $\min\{\widehat{\gamma}\in\cup_{(s_d^{(j)},s_d^{(j+1)})}\estcan_set(s_d^{(j)},s_d^{(j+1)})\}$ and the largest $\max\{\widehat{\gamma}\in\cup_{(s_d^{(j)},s_d^{(j+1)})}\estcan_set(s_d^{(j)},s_d^{(j+1)})\}$) into $n_{sub}$ sub-intervals. Determine $n_{sub}$ such that the peak of the number of occurrences of candidate estimates is clear (Figure \ref{simple_default}). Typically, $k_{sub}$ (the ratio of the total number of candidate estimates to the number of sub-intervals) is a small number such as five or ten. Formally, $k_{sub}$ is defined as $\sharp(\cup_{(l_d,s_d)}\estcan_set(l_d,s_d))/n_{sub}$ or $\sharp(\cup_{(s_d^{(j)},s_d^{(j+1)})}\estcan_set(s_d^{(j)},s_d^{(j+1)}))/n_{sub}$. Then, count the number $c(\widehat{\lambda})$ ($c(\widehat{\gamma})$) of occurrences of candidate estimates in a sub-interval where the average of candidate estimates in this sub-interval is $\widehat{\lambda}$ $(\widehat{\gamma})$. If the count in a certain sub-interval is larger than a threshold, calculate $\widehat{\lambda}$ $(\widehat{\gamma})$, which is the average of candidate estimates in that sub-interval, and adopt it as an estimate. In the remainder of this section, we focus on the adopted estimate. \begin{remark} If some errors, typically sensing errors, are likely to be contained, the number of sub-intervals should be decreased (equivalently, the sub-interval length should be made longer) to merge similar candidates. This is because there are many candidate estimates around the exact length or angle under such errors. To use the method to estimate the number of edges and vertexes described in the next subsection, these candidates should be merged.
\end{remark} \subsection{Estimating the number of edges and vertexes} We need to estimate an edge length $\lambda$, a vertex angle $\gamma$, the number of edges of length $\lambda$, and the number of vertexes of angle $\gamma$. The previous subsection covers the estimation method for $\lambda$ and $\gamma$. This subsection covers the latter two estimations on the basis of Eqs. (\ref{num_detects1}) and (\ref{num_vertex}). Let $n_\lambda$ be the number of whole edge detection samples and $n_\gamma$ be the number of vertex detection samples. $n_\lambda$ is equal to the number of samples of $l_d>0$, and $n_\gamma$ is the number of pairs of $s_d$ for consecutive edges. Note that $c(\widehat{x})/\sum_{\widehat{y}\in\est_set(x)} c(\widehat{y})$ for $x=\lambda,\gamma$ is the ratio of the number of occurrences of estimate $\widehat{x}$ to the total number of occurrences of estimates. Here, $\est_set(\lambda)$ ($\est_set(\gamma)$) means the set of estimates of edge length (angle). Thus, the mean number of whole edge detection samples for edge length $\widehat{\lambda}$ (or vertex detection samples for angle $\widehat{\gamma}$) is $n_xc(\widehat{x})/\sum_{\widehat{y}\in\est_set(x)} c(\widehat{y})$. Then, the estimated number $\widehat{N_{\lambda}}$ of edges of length $\lambda$ and the estimated number $\widehat{N_{\gamma}}$ of vertexes of angle $\gamma$ are \bqn \widehat{N_{\widehat{x}}}&=&\frac{n_xc(\widehat{x})}{E[n_d(\widehat{x})]\sum_{\widehat{y}\in\est_set(x)} c(\widehat{y})}.\label{N_x} \eqn After deriving edge length estimates $\widehat{\lambda}$ and angle estimates $\widehat{\gamma}$, evaluate Eqs. (\ref{num_detects1}) and (\ref{num_vertex}), obtain $E[n_d(\widehat{\lambda})]$ and $E[n_d(\widehat{\gamma})]$, and use them in Eq. (\ref{N_x}) to derive $\widehat{N_{\widehat{x}}}$. \subsection{Estimating the shape of $T$} Even when we estimate the length of each edge and the angle of each vertex, we cannot identify the shape of $T$.
We need to know the sequence of edges or vertexes. To identify the sequence, we use the following method. Assume that there exist sensing results $l_d(L_j),s_d^{(j)},s_d^{(j+1)}$ for a single sensor. That is, there is a sensor that detects the whole $L_j$ and a part of $L_{j+1}$ including the vertex formed by $L_j$ and $L_{j+1}$. (As mentioned in Remark \ref{remark1}, there should not be a period of $r(t)=0$ between the period $p_d(L_j)$ and that detecting $L_{j+1}$.) Through the sensing results $l_d(L_j),s_d^{(j)}$, we obtain the estimate $\widehat{\lambda_j}$. We can also obtain the estimate $\widehat{\gamma_j}$ through $s_d^{(j)},s_d^{(j+1)}$. This means that one vertex of an edge of length $\widehat{\lambda_j}$ has an angle $\widehat{\gamma_j}$. Note that we cannot identify $j$; we only know that there is an edge of length $\widehat{\lambda}$ connected at a vertex of angle $\widehat{\gamma}$. Let $\lambda[k]$ and $\gamma[k]$ be such a pair of an edge-length estimate and an estimate of an angle connected by that edge, derived by the $k$-th sensor. By using many sensing results, we obtain $\{\lambda[k],\gamma[k]\}_k$. Let $\sensor_id(a,b)$ be the set of sensors (sensor identifiers) $\{k|\lambda[k]=a\in\est_set(\lambda), \gamma[k]=b\in\est_set(\gamma) \}$. If an angle estimate and an edge length estimate are independent, the expected number $E_{ind}[\sharp\sensor_id(a,b)]$ of the elements in $\sensor_id(a,b)$ is \bqn E_{ind}[\sharp\sensor_id(a,b)] &=&\frac{E[N_d(a)]E[N_d(b)]\sum_{\widehat{\lambda}\in\est_set(\lambda),\widehat{\gamma}\in\est_set(\gamma)}\sharp\sensor_id(\widehat{\lambda},\widehat{\gamma})}{\sum_{\widehat{\lambda}\in\est_set(\lambda),\widehat{\gamma}\in\est_set(\gamma)}E[N_d(\widehat{\lambda})]E[N_d(\widehat{\gamma})]}.
\eqn This is because $E[N_d(\lambda)]\defeq \widehat{N_{\lambda}}E[n_d(\lambda)]$ is the expected number of whole edge detections for any edge of length $\lambda$ and $E[N_d(\gamma)]\defeq \widehat{N_{\gamma}}E[n_d(\gamma)]$ is the expected number of vertex detections for any vertex of angle $\gamma$. If, however, the observed number of elements in this set is much larger (smaller) than this theoretical value, an edge of length $a$ connecting at a vertex of angle $b$ is likely (unlikely) to exist. By finding the set of pairs of an edge length and a vertex angle connected by the edge, we can make a table such as Table \ref{comb_table1}. Such a table enables us to guess that an edge of a certain length connects two vertexes of certain angles or that a vertex of a certain angle is formed by two edges of certain lengths. Then, by sequentially connecting them, we can identify the shape of $T$. For example, if such a table suggests that there are a single vertex (A) formed by edges (a) and (b), a single vertex (B) formed by edges (b) and (c), and a single vertex (C) formed by edges (c) and (a), we can estimate that $T$ is a triangle of vertexes (A), (B), and (C) and that the edge between (A) and (B) ((B) and (C); (C) and (A)) is (b) ((c); (a)). \begin{remark}\label{shape_remark} More precisely, we may not be able to identify the shape of $T$ or sequentially connect edges or vertexes even when the two vertex angles of each edge are given or the two edge lengths forming each vertex are given. For example, when all the vertex angles are the same, the shape of $T$ is difficult to identify. For two long edges connecting vertexes of $\pi/2$ and two short edges connecting vertexes of $\pi/2$, $T$ may be a rectangle. However, the two short edges may be consecutive, and the two long edges may also be consecutive. If so, the shape of $T$ becomes unnatural because we cannot obtain a closed boundary of $T$.
However, we cannot conclude that a non-closed boundary is unnatural because even the rectangle $T$ may not have a closed boundary due to estimation errors. \end{remark} \section{Numerical examples} \subsection{Default conditions} Unless explicitly mentioned otherwise, numerical examples in this paper use the following conditions. $\Omega$ is a disk with radius 100. $T$ is placed near the center of $\Omega$ to remove the boundary effect. $r_{max}=100$, $\theta_{max}=\pi/2$, $v=0.1$, $n_s=1000$. Sensing areas of all the sensors intersect $\Omega$, but a sensor may not necessarily detect $T$. In the simulation, we move vehicles at discrete time steps rather than in continuous time. As a result, observed parameters such as $l_d$ cannot take continuous values, which causes some observation errors. This may result in estimation errors. To understand the proposed estimation framework, we use a simple figure as the default $T$. (Realistic $T$s are used later.) The simple $T$ is a right triangle of edge lengths 50, 25, and $25\sqrt{3}$. \subsection{Estimation under default conditions} Figure \ref{simple_default} plots the number of candidate estimates in each sub-interval for an edge length or an angle under the default conditions. Table \ref{result_table1} summarizes the estimated lengths, angles, and their estimated numbers. Here, ``Estimation error'' is defined by $100(\widehat{\gamma}/\gamma-1)$ or $100(\widehat{\lambda}/\lambda-1)$, where $\gamma$ ($\lambda$) is taken as the true angle (length) closest to $\widehat{\gamma}$ ($\widehat{\lambda}$). As shown clearly in Fig. \ref{simple_default}, the proposed method can estimate the edge lengths and angles. The angle estimates were more accurate than the length estimates (Table \ref{result_table1}). This seems to be because the discrete time sampling affects edge lengths more than angles.
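The effect of discrete-time sampling on the length estimates can be quantified with a small sketch (Python; parameter values are illustrative). With unit time steps, the observed $l_d$ is truncated to an integer number of steps, so the length estimate of Eq. (\ref{lam1}) inherits a relative error of order $1/l_d$, whereas an angle derived from slopes is not directly affected by this truncation.

```python
import math

# Unit-time sampling truncates l_d to an integer number of steps, so
# lambda_hat = l_d * v * sqrt(1 - (s_d/v)^2) inherits a relative error
# of at most ~1/l_d (illustrative values below).
v, lam, xi_phi = 0.1, 50.0, 0.3
s_d = v * math.sin(xi_phi)
l_true = lam / (v * math.cos(xi_phi))     # exact detection duration
l_obs = math.floor(l_true)                # discrete-time observation
lam_hat = l_obs * v * math.sqrt(1 - (s_d / v) ** 2)
rel_err = abs(lam_hat / lam - 1)
assert rel_err <= 1.0 / l_obs             # truncation bound on the error
```

With $v=0.1$ the detection of a length-50 edge spans several hundred time steps, so the truncation error stays well below one percent, consistent with Table \ref{result_table1}.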
\begin{figure}[tb] \begin{center} \includegraphics[width=11cm,clip]{simple_default} \caption{Estimated angles and lengths of simple target object under default conditions} \label{simple_default} \end{center} \end{figure} $\widehat{N_\lambda}$ and $\widehat{N_\gamma}$ were less accurate than $\widehat{\lambda}$ and $\widehat{\gamma}$. In particular, $\widehat{N_\gamma}$ for vertexes (B) and (C) in Table \ref{result_table1} was overestimated by more than 20\%. This is because the estimation of the lengths and angles in the proposed framework does not include errors if we can correctly sample data and judge whole edge detections and vertex detections. However, $\widehat{N_\lambda}$ and $\widehat{N_\gamma}$ use a comparison between the expected number of whole edge detections (vertex detections) and its sample number, and the sample number is a random variable and can include errors. Thus, $\widehat{N_\lambda}$ and $\widehat{N_\gamma}$ can become inaccurate, and their accuracy seems to depend on the number of samples or the number of sensors. \begin{table} \caption{Summary of estimated results for a triangle $T$ under default conditions} \begin{center}\label{result_table1} \begin{tabular}{rlll} \hline Edge ID&$\widehat{\lambda}$&Estimation error (\%)&$\widehat{N_\lambda}$\\ a&24.84&-0.6093&0.8766\\ b&43.21&-0.2064&1.001\\ c&50.28&0.5501&0.9042\\ Vertex ID&$\widehat{\gamma}$&Estimation error (\%)&$\widehat{N_\gamma}$\\ A&1.570&-0.02019&0.9840\\ B&1.046&-0.0996&1.202\\ C&0.5241&-0.08829&1.223\\ \hline \end{tabular} \end{center} \end{table} Table \ref{comb_table1} summarizes $\sharp\sensor_id(a,b)$ normalized by $E_{ind}[\sharp\sensor_id(a,b)]$. For example, Table \ref{comb_table1} strongly suggests that edge (a) connects vertexes (A) and (B), edge (b) connects vertexes (A) and (C), and edge (c) connects vertexes (B) and (C). Hence, we can identify the shape of the triangle $T$.
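Reading off the edge-vertex combinations can be automated. One plausible rule (a sketch, not the paper's formal criterion) is to keep, for each edge, the two vertexes with the largest normalized counts $\sharp\sensor_id(a,b)/E_{ind}[\sharp\sensor_id(a,b)]$; values well above 1 suggest the connection exists. The numbers below are those of Table \ref{comb_table1}.

```python
# Normalized counts (observed / expected-under-independence), Table comb_table1.
ratio = {
    "a": {"A": 1.535, "B": 1.282, "C": 0.5409},
    "b": {"A": 1.462, "B": 0.1504, "C": 1.249},
    "c": {"A": 0.1617, "B": 1.153, "C": 0.9825},
}
# Keep, for each edge, the two vertexes with the largest normalized counts.
adjacency = {e: sorted(r, key=r.get, reverse=True)[:2] for e, r in ratio.items()}
assert sorted(adjacency["a"]) == ["A", "B"]
assert sorted(adjacency["b"]) == ["A", "C"]
assert sorted(adjacency["c"]) == ["B", "C"]
```

The recovered adjacency is exactly the triangle (A)-(a)-(B)-(c)-(C)-(b)-(A) described above.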
\begin{table} \caption{Observed $\sharp\sensor_id(a,b)/E_{ind}[\sharp\sensor_id(a,b)]$ for a triangle $T$ under default conditions} \begin{center}\label{comb_table1} \begin{tabular}{rlll} \hline Edge ID/vertex ID&A&B&C\\ a&1.535&1.282&0.5409\\ b&1.462&0.1504&1.249\\ c&0.1617&1.153&0.9825\\ \hline \end{tabular} \end{center} \end{table} \subsection{Impact of number of sensors} Here, we discuss the impact of the number $n_s$ of sensors. Figure \ref{num_sensors} plots the number of estimates in each sub-interval for an edge length or an angle when 200 or 500 sensors are used. The accuracy was slightly worse than that for $n_s=1000$, but the estimates with 200 or 500 sensors were acceptable. However, $\widehat{N_\gamma}$ became inaccurate for $n_s=200$ (Table \ref{result_table_200}). $\widehat{N_\gamma}$ became nearly three for vertex (C), although it should be one. This is because the number of samples affects $\widehat{N_\gamma}$ and $\widehat{N_\lambda}$ but barely affects $\widehat{\gamma}$ and $\widehat{\lambda}$. ($\widehat{N_\lambda}$ was much more accurate than $\widehat{N_\gamma}$ because $n_\lambda \gg n_\gamma$.) \begin{figure}[tb] \begin{center} \includegraphics[width=11cm,clip]{num_sensors} \caption{Estimated angles and lengths for various numbers of sensors} \label{num_sensors} \end{center} \end{figure} \begin{table} \caption{Summary of estimated results for a triangle $T$ with $n_s=200$} \begin{center}\label{result_table_200} \begin{tabular}{rlll} \hline Edge ID&$\widehat{\lambda}$&Estimation error (\%)&$\widehat{N_\lambda}$\\ a&24.094&-3.624&0.9337\\ b&42.67&-1.461&1.036\\ c&48.86&-2.280&1.096\\ Vertex ID&$\widehat{\gamma}$&Estimation error (\%)&$\widehat{N_\gamma}$\\ A&1.567&-0.3183&1.407\\ B&1.057&0.9508&1.326\\ C&0.5281&0.8723&2.917\\ \hline \end{tabular} \end{center} \end{table} \subsection{Impact of $r_{max}$} As $r_{max}$ becomes shorter, the shape estimation becomes more difficult.
A problem appeared in the number of edge-vertex pairs because the number of observed edge-vertex pairs became small (Table \ref{comb_table_rmax50}). In particular, the number of observed edge-vertex pairs for the longest edge (c) was extremely small. This is because it becomes difficult to observe the whole of the longest edge with a short $r_{max}$. Thus, we can no longer estimate the shape of $T$. Therefore, $r_{max}$ should be much longer than any of the edge lengths of $T$. \begin{table} \caption{Observed $\sharp\sensor_id(a,b)/E_{ind}[\sharp\sensor_id(a,b)]$ for a triangle $T$ with $r_{max}=50$} \begin{center}\label{comb_table_rmax50} \begin{tabular}{rlll} \hline Edge ID/vertex ID&A&B&C\\ a&1.143&1.241&0.7756\\ b&2.566&0.2786&0.8706\\ c&0.1996&0.3033&0\\ \hline \end{tabular} \end{center} \end{table} \subsection{Impact of sensing errors} The proposed framework does not assume sensing errors, but sensing errors can exist in practice. Here, we assume two types of sensing errors: one for $s_d$ and one for $l_d$. Sensing errors $\epsilon_s$ for $s_d$ are normally distributed with mean 0 and standard deviation $\sigma$. Due to the sensing error, $s_d$ becomes $\tan(\arctan(s_d)+\epsilon_s)$. The other type of sensing error divides $p_d$ into pieces. This type of error typically occurs when sensing reports are lost or slope changes are misjudged. Assume that errors of this type independently occur with probability $\epsilon_l$ at each sensing report during $p_d$. As a result, $l_d$ is divided into short $l_d$s. Estimates under sensing errors are plotted in Fig. \ref{noise}. Sensing errors for $l_d$ ($s_d$) in Fig. \ref{noise}-(a) are more serious than those in Fig. \ref{noise}-(b) (Fig. \ref{noise}-(c)). For all cases except for the edge length estimates in (c), it is difficult to find the three estimates. We cannot find clear sharp peaks of the number of candidate estimates; instead, there are many peaks. Even (c) and (a) contain a peak in the angle estimates near $\pi$.
This peak is caused by the divided $l_d$, and each divided $l_d$ has a similar $s_d$. The proposed estimation method naturally judged there to be a vertex of angle near $\pi$ for consecutive edges providing similar $s_d$. Thus, this peak did not appear in (b) because of the small $\epsilon_l$. As a whole, (c) looks better than (b). That is, an accurate $s_d$ is needed to obtain good estimates for this example. \begin{figure}[tbh] \begin{center} \includegraphics[width=8cm,clip]{noise} \caption{Impact of sensing errors: (a) $\epsilon_l=10^{-4}, \epsilon_s=10^{-3}$, (b) $\epsilon_l=10^{-5}, \epsilon_s=10^{-3}$, and (c) $\epsilon_l=10^{-4}, \epsilon_s=10^{-4}$.} \label{noise} \end{center} \end{figure} \subsection{Non-straight line driving route} In this paper, a straight-line driving route is assumed. Real driving routes, however, are not straight. Here, a driving route has a $\pi/2$ right or left turn once in $\Omega$. The turn is randomly placed and goes right or left with probability 0.5. Such information is not available for estimation. The results are shown in Fig. \ref{non-straight}. We can find two peaks of angle estimates, but it becomes difficult to find an angle near $\pi/6$. If we pick out an angle near $\pi/6$, we should also pick out an angle near $\pi/2$. (There are two estimates near $\pi/2$ in this case.) $\widehat{N_\lambda}$ is 1.468, 1.223, and 1.044 for the short, middle-length, and long edges, and is barely acceptable. On the other hand, $\widehat{N_{\gamma}}$ is overestimated, particularly for $\widehat{\gamma}\approx\pi/2$, when we use two or four angle estimates. This is because the $\pi/2$ turn made the proposed method incorrectly estimate there to be many vertexes of angle $\pi/2$.
\begin{figure}[tb] \begin{center} \includegraphics[width=11cm,clip]{non-straight} \caption{Estimation under non-straight driving route} \label{non-straight} \end{center} \end{figure} \subsection{Realistic target object} Here, we estimate the shape of the building or cars shown in Fig. \ref{real_target}. \begin{figure}[tb] \begin{center} \includegraphics[width=11cm,clip]{real_target} \caption{Realistic target objects} \label{real_target} \end{center} \end{figure} \subsubsection{Estimating building} Table \ref{result_building} shows that the angle and edge length estimates were acceptable for the building in Fig. \ref{real_target}, that $\widehat{N_\lambda}$ was almost exact, and that $\widehat{N_\gamma}$ was barely acceptable. In addition, Table \ref{comb_building} suggests that edge (a) does not connect vertex (B) and that edge (d) does not connect vertex (A). Because $\widehat{N_\lambda}\approx 2$ for edge (a), $\widehat{N_\lambda}\approx 1$ for the other edges, $\widehat{N_\gamma} \approx 3$ for vertex (A), and $\widehat{N_\gamma}\approx 2$ for vertex (B), the shape of $T$ was obtained (Fig. \ref{shape_result}-(a)).
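The reading of the estimated multiplicities can be sketched as follows (values from Table \ref{result_building}; rounding to the nearest integer is one plausible rule, not the paper's formal procedure). For a closed polygon, the recovered numbers of edges and vertexes must agree.

```python
# Estimated multiplicities from Table result_building.
n_lambda = {"a": 1.797, "b": 1.291, "c": 0.8639, "d": 0.7770}
n_gamma = {"A": 3.420, "B": 1.654}
# Round to integers to recover how many edges/vertexes of each kind exist.
edges = {k: round(v) for k, v in n_lambda.items()}
verts = {k: round(v) for k, v in n_gamma.items()}
assert edges == {"a": 2, "b": 1, "c": 1, "d": 1}
assert verts == {"A": 3, "B": 2}
# Consistency check: a closed polygon has equally many edges and vertexes.
assert sum(edges.values()) == sum(verts.values())
```

Here both totals equal five, consistent with the five-sided building shape of Fig. \ref{shape_result}-(a).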
\begin{table} \caption{Summary of estimated results for target object (a) under default conditions} \begin{center}\label{result_building} \begin{tabular}{rlll} \hline Edge ID&$\widehat{\lambda}$&Estimation error (\%)&$\widehat{N_\lambda}$\\ a&19.82&-0.9186&1.797\\ b&4.736& -5.287&1.291\\ c&5.071&1.416&0.8639\\ d&21.16&-0.2660&0.7770\\ Vertex ID&$\widehat{\gamma}$&Estimation error (\%)&$\widehat{N_\gamma}$\\ A&1.571&-0.005921&3.420\\ B&2.356&-0.02124&1.654\\ \hline \end{tabular} \end{center} \end{table} \begin{table} \caption{Edge-vertex combination for target object (a) under default conditions} \begin{center}\label{comb_building} \begin{tabular}{rll} \hline Edge ID/vertex ID&A&B\\ a&1.444&0.2291\\ b&0.7870&1.254\\ c&0.8915&1.454\\ d&0.4132&1.852\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[tb] \begin{center} \includegraphics[width=11cm,clip]{shape_result} \caption{Estimated shape under default conditions} \label{shape_result} \end{center} \end{figure} \subsubsection{Estimating polygon car} We estimated car (b) in Fig. \ref{real_target}. The sum of the estimated number of edges was 7 for $n_s=1000$ and $\widehat{N_\lambda}\approx 1$ for $\widehat{\lambda}\approx 30$. Therefore, we failed to identify the shape of $T$. This result seems to be because car (b) has more angles and edges (eight angles and edges) than other target objects in this paper. Thus, we estimated car (b) with $n_s=10,000$. The results are shown in Table \ref{result_car_b}. The estimation accuracy of $\widehat{\lambda}$ was fair, and that of $\widehat{\gamma}$ was good. In particular, the small difference in $\gamma$ was accurately estimated. $\widehat{N_\lambda}\approx 2$ for $\widehat{\lambda}\approx 30$ (edges (d) and (f)) and $\widehat{N_\lambda}\approx 1$ for $\widehat{\lambda}\approx 3$ (edge (c)) were desirable results. 
Although $\lambda =4$ and $\lambda =5$ were not clearly distinguished, $\widehat{N_\lambda}\approx 5$ for $\widehat{\lambda}\approx 4$ or 5 was an acceptable result. Because all the $\widehat{\gamma}$ were almost the same, we cannot identify the shape of $T$ (Remark \ref{shape_remark}). Although we cannot formally identify the shape, we can illustrate a car under some assumptions: it has an almost symmetrical shape and a head slightly wider than its tail. Two short edges connect vertexes (A), and the four edges consecutive to these short edges are edges (a) or (b). One estimated shape is plotted in Fig. \ref{shape_result}-(b). Its shape is not uniquely identified even under this assumption, but the estimated shape becomes almost the same as the actual shape. \begin{table} \caption{Summary of estimated results for target object (b) with $n_s=10,000$} \begin{center}\label{result_car_b} \begin{tabular}{rlll} \hline Edge ID&$\widehat{\lambda}$&Estimation error (\%)&$\widehat{N_\lambda}$\\ a&5.593& -1.1312&2.832\\ b&5.465&-3.390&1.066\\ c&2.911&-2.978&0.8249\\ d&29.86&-0.5221&0.9389\\ e&4.954&-0.9157&0.7653\\ f&29.99&-0.09659&0.8947\\ Vertex ID&$\widehat{\gamma}$&Estimation error (\%)&$\widehat{N_\gamma}$\\ A&2.356&-0.01143&3.701\\ B&2.389&0.2820&1.882\\ C&2.323&-0.3115&1.836\\ \hline \end{tabular} \end{center} \end{table} \subsubsection{Estimating non-polygon car} Here, we show the estimated results for car (c) in Fig. \ref{real_target} with $n_s=1000$. Note that this target object is not a polygon. Figure \ref{car} plots the estimated angle and edge length. Table \ref{result_car} suggests that this target object is a quadrangle: two long edges (a), a single short edge (b), and a single short edge (c); two vertexes (A) and two vertexes (B). All the vertex angles are nearly $\pi/2$. These results arise because the round corners of this target produce curves in $r(t)$. Because a curve in $r(t)$ can appear at a vertex of a polygon $T$ (Fig.
\ref{curve}-(a)), the curve is not distinguishable from the curves at the round corners. Thus, the estimation method connected edges directly without round corners. Because the estimated angles of vertex (A) and vertex (B) are similar, the edge-vertex combination did not clearly determine which edge connects vertex (A) and which edge connects vertex (B) (Remark \ref{shape_remark}). In Fig. \ref{shape_result}-(b), the estimated shape of this target object is plotted under the assumption that edge (b) connects two vertexes (B). The estimated shape was almost identical to the shape with the four corners of the original $T$ removed. \begin{figure}[tb] \begin{center} \includegraphics[width=11cm,clip]{car} \caption{Estimated angles and lengths of target object (c) under default conditions} \label{car} \end{center} \end{figure} \begin{table} \caption{Summary of estimated results for target object (c) under default conditions} \begin{center}\label{result_car} \begin{tabular}{rlll} \hline Edge ID&$\widehat{\lambda}$&Estimation error (\%)&$\widehat{N_\lambda}$\\ a&30.10&0.2854&1.522\\ b&2.971&-0.9795&1.029\\ c&4.944&-1.123&1.030\\ Vertex ID&$\widehat{\gamma}$&Estimation error (\%)&$\widehat{N_\gamma}$\\ A&1.538&--& 2.227\\ B&1.605&--&1.67\\ \hline \end{tabular} \end{center} \end{table} \section{Conclusion} This paper proposed a theoretical framework for estimating target object shape by using distance sensors and speed meters mounted on vehicles. Here, the location and moving direction of these vehicles are unknown. Thus, the location privacy of vehicles is maintained. Several examples show that the proposed estimation framework is feasible. However, the proposed framework assumes some fairly strict conditions are met: the target object is a convex polygon, vehicles move on straight lines, and no sensing errors exist.
Numerical examples suggest that the proposed framework may work even when these conditions are not satisfied, for example, when vehicles move on non-straight lines or when some sensing errors are imposed. For a non-polygon target object, the estimated shape becomes similar to the original target object shape with the non-polygon parts removed. Of course, additional effort is needed to make the estimation method more robust under various conditions, such as more serious erroneous sensing results and more frequent turns in a driving route. In addition, the proposed method requires many sensors for a complicated $T$. This is because we need to estimate the number of edges of a certain length or vertexes of a certain angle, and this number is large for a complicated $T$. Therefore, although the current proposed method does not use all the sensing data, a more efficient way of using sensing data should be developed. A large remaining problem is the estimation of a non-convex target object. For a non-convex target object, $r(t)$ becomes non-continuous, and the analysis of $E[n_d(\widehat{\lambda})],E[n_d(\widehat{\gamma})]$ becomes complicated. However, it seems possible to extend the proposed framework to cover a non-convex target object. If so, we can also estimate the target object in an environment in which there are many obstacles. This is because we can estimate the total environment including the target object as a single target object, although we cannot identify which part is the original target object.
\section{Introduction} \label{sec:introduction} The Color Glass Condensate (CGC) formalism is an effective field theory approach to Quantum Chromodynamics (QCD) at small $x$, where gluon densities in the nucleus or proton are large. With $x$ the ratio of the hard scale $M^2$ of a certain hard process to the center-of-mass energy squared $s$, the limit $x \to 0$ at fixed $M^2$ corresponds to the perturbative Regge limit of QCD. In such a scenario, the smallness of the strong coupling $\alpha_s(M^2) \ll 1$ can be compensated by logarithms in $x$, $\alpha_s(M^2) \ln 1/x \sim 1$, which requires the resummation of terms $\left( \alpha_s(M^2) \ln 1/x \right)^n$ to all orders. For perturbative scattering amplitudes, such a resummation is achieved at leading \cite{Fadin:1975cb,Balitsky:1978ic} and next-to-leading order \cite{Fadin:1998py} by the Balitsky-Fadin-Kuraev-Lipatov (BFKL) evolution equation. Even though BFKL evolution is successfully applied to the description of collider data at currently accessible center-of-mass energies, see {\it e.g.} \cite{Bautista:2016xnp}, the power-like rise of the gluon distribution predicted by BFKL evolution will eventually drive cross-sections into a region of phase space where parton densities are no longer perturbative; BFKL evolution will therefore break down in such a regime. Instead, it is more appropriate to treat the hadron or nucleus as a coherent color field rather than as a collection of incoherent and individual partons. This is the region of phase space which is addressed by the CGC mentioned above, see~\cite{gijv} for a review. At the classical level, the CGC generalizes scattering via exchange of a single gluon to multiple gluon exchanges within high energy factorization. Including furthermore quantum effects, one arrives at a resummation of logarithms in $1/x$, generalizing BFKL evolution to the case of large gluon densities.
The resulting Balitsky-JIMWLK evolution~\cite{Balitsky:1995ub, jimwlk1,jimwlk2,jimwlk6,jimwlk8} finally provides an evolution equation for Wilson lines which sum up the strong gluonic field in the target. \\ In the present article we discuss Lipatov's high energy effective action~\cite{Lipatov:1995pn,Lipatov:1996ts} and its relation to the above-mentioned formulation of a CGC effective theory. One of the main advantages of Lipatov's high energy effective action is that it provides a gauge invariant factorization of QCD amplitudes in the high energy limit through introducing a new type of field, {\it i.e.} the reggeized gluon. Using this effective action it has been possible to both reproduce and derive a number of next-to-leading order (NLO) results, most notably the calculation of NLO corrections to forward jet production without \cite{quarkjet,gluonjet} and with a rapidity gap~\cite{Hentschinski:2014esa}, the gluon Regge trajectory up to two loops~\cite{traject}, and the NLO kernel of the Bartels-Kwiecinski-Praszalowicz evolution equation \cite{Bartels:2012sw}, see also the review \cite{review}; for the determination of NLO corrections for reggeized quarks see \cite{Nefedov:2017qzc}. The description of scattering amplitudes for multiple reggeized gluon exchange has also been studied by a number of authors, see {\it e.g.} \cite{Braun:2017qij,Hentschinski:2009zz,Hentschinski:2009ga}. At the same time the ability of Balitsky-JIMWLK evolution to reproduce scattering amplitudes with multiple reggeized gluon states has been demonstrated for various cases, see {\it e.g.} \cite{Bartels:2004ef,Ayala:2014nza}, hinting at a possible equivalence of both formalisms. Furthermore the Color Glass Condensate formalism and the high energy effective action have been compared directly at the level of the Lagrangian, see {\it e.g.} \cite{jimwlk2, Hatta:2005rn, Bondarenko:2018pvv}.
In particular \cite{Bondarenko:2018pvv} demonstrates that it is possible to reproduce the classical gluon fields of the CGC approach from Lipatov's high energy effective action. \\ Instead of comparing the two approaches on the level of the resulting effective Lagrangians, we take here a pragmatic approach and attempt to answer the question whether Lipatov's high energy effective action can be used to reproduce the quark and gluon propagators in the presence of a strong gluonic field. Such propagators are one of the core elements in calculations of scattering of dilute projectiles on dense targets within the Color-Glass-Condensate formalism. We find that this can indeed be achieved by choosing a special parametrization of the gluonic field already proposed in~\cite{Lipatov:1995pn}. Moreover, since Lipatov's high energy effective action provides a gauge invariant factorization of QCD amplitudes in the high energy limit, the resulting propagators are not restricted to a certain gauge, such as light-cone gauge. The obtained propagators furthermore allow one to rederive leading order Balitsky-JIMWLK evolution directly from Lipatov's high energy effective action. As an aside, our result confirms that the definition of the reggeized gluon as the logarithm of an adjoint Wilson line, proposed in \cite{Caron-Huot:2013fea}, is consistent with Lipatov's high energy effective action. \\ The outline of this paper is as follows. Sec.~\ref{sec:eff} provides a short summary of Lipatov's high energy effective action. Sec.~\ref{sec:parametrize} introduces the special parametrization of the gluonic field proposed in~\cite{Lipatov:1995pn} and demonstrates how it can be used to derive resummed partonic propagators in the presence of a strong reggeized gluon field. Sec.~\ref{sec:literature} contains a comparison of our result with the literature. Sec.~\ref{sec:balitsky-jimwlk-evol} presents a derivation of Balitsky-JIMWLK evolution from Lipatov's high energy effective action.
In Sec.~\ref{sec:conclusion-summary} we summarize our results and draw our conclusions. Some details of the calculations are summarized in two appendices. \section{The High-Energy Effective Action} \label{sec:eff} Within the framework provided by Lipatov's effective action \cite{Lipatov:1995pn,Lipatov:1996ts}, QCD amplitudes are in the high energy limit decomposed into gauge invariant sub-amplitudes which are localized in rapidity space. The effective Lagrangian then describes the coupling of quark ($\psi$) and gluon ($v_\mu$) fields to a new degree of freedom, the reggeized gluon field $A_\pm (x)$. The latter is introduced as a convenient tool to reconstruct the complete QCD amplitudes in the high energy limit out of the sub-amplitudes restricted to small rapidity intervals. Lipatov's effective action is obtained by adding an induced term $ S_{\text{ind.}}$ to the QCD action $S_{\text{QCD}}$, \begin{align} \label{eq:effac} S_{\text{eff}}& = S_{\text{QCD}} + S_{\text{ind.}} , \end{align} where the induced term $ S_{\text{ind.}}$ describes the coupling of the gluonic field $v_\mu = -it^a v_\mu^a(x)$ to the reggeized gluon field $A_\pm(x) = - i t^a A_\pm^a (x)$, with $t^a$ an SU$(N_c)$ generator in the fundamental representation, ${\rm tr}(t^at^b) = \delta^{ab}/2$. For the definition of light-cone directions we follow the conventions established in the original publication \cite{Lipatov:1995pn}, \begin{align} \label{eq:13} k^\pm & = n^\pm \cdot k = n_\mp \cdot k = k_\mp, \end{align} with $n^\pm \cdot n^\mp = 2$ and $(n^\pm)^2 = 0$. This implies the following Sudakov decomposition of a four-momentum \begin{align} \label{eq:14} k & = \frac{k^+}{2} n^- + \frac{k^-}{2} n^+ + {\bm k} = \frac{k_-}{2} n_+ + \frac{k_+}{2} n_- + {\bm k}. \end{align} Note that transverse momenta and coordinates will be denoted by bold letters. Furthermore \begin{align} \label{eq:9} \partial_\pm x^\pm & = 2, & \partial_\mp x^\pm & = 0\,.
\end{align} High energy factorized amplitudes reveal strong ordering in plus and minus components of momenta, which leads to the following kinematic constraint obeyed by the reggeized gluon field: \begin{align} \label{eq:kinematic} \partial_+ A_- (x)& = 0 = \partial_- A_+(x). \end{align} Even though the reggeized gluon field is charged under the QCD gauge group SU$(N_c)$, it is defined to be invariant under local gauge transformations, $\delta_L A_\pm = 0$. With the local gauge transformations of gluon and quark fields given by \begin{align} \label{eq:gauge_gluon} \delta_{\text{L}} v_\mu &= \frac{1}{g}[D_\mu, \chi_L], & \delta_{\text{L}} \psi &= -\chi_L \psi, & D_\mu & = \partial_\mu + g v_\mu, \end{align} where $D_\mu$ denotes the covariant derivative and $\chi_L$ the parameter of the local gauge transformations, which decreases for $x \to \infty$, the reggeized gluon fields are {\it invariant} under local gauge transformations, \begin{align} \label{eq:localgauge} \delta_\text{L} A_\pm = \frac{1}{g}[A_\pm, \chi_L] = 0 \, . \end{align} The kinetic term and the gauge invariant coupling of the reggeized gluon field to the QCD gluon field are provided by the induced term \begin{align} \label{eq:1efflagrangian} S_{\text{ind.}} & = \int \text{d}^4 x \, \bigg\{ \text{tr}\left[\left(T_-[v(x)] - A_-(x) \right)\partial^2_\perp A_+(x)\right] \notag \\ & \hspace{3cm} +\text{tr}\left[\left(T_+[v(x)] - A_+(x) \right)\partial^2_\perp A_-(x)\right] \bigg\}. \end{align} The functionals $T_\pm[v] $ can be obtained from the following operator definition \begin{align} \label{eq2:efflagrangian} T_\pm[v] = & -\frac{1}{g}\partial_\pm \frac{1}{1 + \frac{g}{\partial_\pm}v_\pm} = v_\pm - g v_\pm\frac{1}{\partial_\pm} v_\pm + g^2 v_\pm \frac{1}{\partial_\pm} v_\pm\frac{1}{\partial_\pm} v_\pm - \ldots \notag \\ \end{align} where the integral operator is implied to act on a unit constant matrix from the left.
Boundary conditions of the $1/\partial_\pm$ are fixed through \begin{align} \label{eq:6} \frac{1}{1 + \frac{g}{\partial_\pm}v_\pm} & = \mathcal{P}\exp\bigg(-\frac{g}{2} \int_{-\infty}^{x^\pm}dx'^\pm v_\pm(x')\bigg) \notag \\ &= 1 -\frac{g}{2} \int_{-\infty}^{x^\pm}dx'^\pm v_\pm(x') + \frac{g^2}{4} \int_{-\infty}^{x^\pm} dx^{'\pm} \int_{-\infty}^{x^{'\pm}} dx^{''\pm} v_\pm(x') v_\pm(x'') + \ldots \end{align} Due to the induced term in Eq.~(\ref{eq:effac}), the Feynman rules of the effective action comprise, apart from the usual QCD Feynman rules, the propagator of the reggeized gluon and an infinite number of so-called induced vertices. The leading order vertices and propagators are summarized in Fig.~\ref{fig:3}. \begin{figure}[th] \centering \parbox{.7cm}{\includegraphics[height = 1.8cm]{indu0.pdf}} $= \displaystyle \begin{array}[h]{ll} \\ \\ \frac{- i}{2}{\bm q}^2 \delta^{a c} (n^\pm)^\nu, \\ \\ \qquad k^\pm = 0. \end{array} $ \parbox{1.2cm}{ \includegraphics[height = 1.8cm]{propR0.pdf}} $= \displaystyle \begin{array}[h]{ll} \delta^{ab} \frac{ 2 i}{{\bm q}^2} \end{array}$, \parbox{1.7cm}{\includegraphics[height = 1.8cm]{indu1.pdf}} $ \displaystyle = \begin{array}[h]{ll} \\ \\ \frac{g}{2} f^{c_1 c_2 a} \frac{{\bm q}^2}{k_1^\pm} (n^\pm)^{\nu_1} (n^\pm)^{\nu_2}, \\ \\ \quad k_1^\pm + k_2^\pm = 0, \end{array}$ \\ \parbox{3cm}{\center (a)} \parbox{4cm}{\center (b)} \parbox{5cm}{\center (c)} \vspace{1cm} \parbox{2.4cm}{\includegraphics[height = 1.8cm]{indu2.pdf}} $ \displaystyle \begin{array}[h]{l} \displaystyle \\ \displaystyle= \frac{ig^2}{2} {\bm{q}}^2 \left(\frac{f^{a_3a_2 e} f^{a_1ea}}{k_3^\pm k_1^\pm} + \frac{f^{a_3a_1 e} f^{a_2ea}}{k_3^\pm k_2^\pm}\right) (n^\pm)^{\nu_1} (n^\pm)^{\nu_2} (n^\pm)^{\nu_3}, \\ \\ \qquad \qquad k_1^\pm + k_2^\pm + k_3^\pm = 0. \end{array} $ \\ \vspace{.3cm} \parbox{1cm}{(d)} \caption{\small Feynman rules for the lowest-order effective vertices of the effective action. Wavy lines denote reggeized fields and curly lines gluons. 
Note that in comparison with the Feynman rules used in \cite{quarkjet,gluonjet,Hentschinski:2014esa,traject} we absorb a factor $1/2$ into the vertices, which is compensated by changing the residue of the reggeized gluon propagator from $1/2$ to $2$.} \label{fig:3} \end{figure} These induced vertices are special in the sense that they contain only the anti-symmetric color-octet sector of the eikonal operator Eq.~\eqref{eq2:efflagrangian}. While the projection on the color octet sector arises automatically from the induced term due to the combination with the reggeized gluon field, the anti-symmetric color structure (written in terms of SU$(N_c)$ structure constants only) requires in general the use of a corresponding projector; for an explicit construction see \cite{Hentschinski:2011xg}. The original argument given by Lipatov for this projection is based on the observation that in generalized Multi-Regge Kinematics the value of the operator $\partial_\pm$ acting on a gluonic field is never zero for the vertices arising from Eq.~\eqref{eq:1efflagrangian}, since the resulting light-cone momenta are proportional to large center of mass energies of clusters of particles significantly separated in rapidity. In particular \begin{align} \label{eq:19} \frac{1}{\partial_\pm} \tilde{v}_\pm(p) & = \frac{i}{p_\pm} \tilde{v}_\pm(p) \end{align} with $p_\pm \neq 0$, where $\tilde{v}_\pm(p)$ denotes the Fourier transform of the gluonic field $v(x)$; this is especially true for the case of real particle production within the generalized Multi-Regge Kinematics, which initiated the discussion of the formulation of the high energy effective action in \cite{Lipatov:1995pn}. For a more detailed discussion we refer to \cite{Antonov:2004hh}. With $p_\pm \neq 0$, the anti-symmetric color structure as given in Fig.~\ref{fig:3} arises automatically from the high energy effective action; see also the discussion in \cite{Antonov:2004hh}.
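The boundary condition Eq.~\eqref{eq:6} identifies $U[v_\pm]$ with a path-ordered exponential. As a rough numerical sketch (a hypothetical discretization for illustration only, not part of the effective action formalism itself), such an object can be approximated on a grid by an ordered product of matrix exponentials, with later light-cone slices acting from the left:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical discretization of the path-ordered exponential
# U[v_+] = P exp(-(g/2) \int dx'^+ v_+(x'^+)):
# an ordered product of matrix exponentials, latest slice leftmost.
def wilson_line(v_slices, g, dx):
    """v_slices: matrices v_+(x_1^+), ..., v_+(x_n^+), earliest first."""
    dim = v_slices[0].shape[0]
    U = np.eye(dim, dtype=complex)
    for v in v_slices:
        U = expm(-(g / 2.0) * v * dx) @ U  # path ordering
    return U

# anti-Hermitian test fields v = -i t^a v^a, here built from Pauli matrices
sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
sz = np.array([[1.0, 0.0], [0.0, -1.0]], dtype=complex)
v1, v2 = -1j * sx, -1j * sz
U = wilson_line([v1, v2], g=1.0, dx=1.0)
```

For anti-Hermitian slices the result is unitary; for mutually commuting slices the ordered product collapses to an ordinary exponential, while for non-commuting slices the ordering matters, which is precisely the content of the symbol $\mathcal{P}$.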
The condition $p_\pm \neq 0$ is, however, at least at first sight violated in the evaluation of loop integrals, where the $p_\pm$ are integrated over all possible values. The projection of \cite{Hentschinski:2011xg} then implies the use of the boundary conditions of Eq.~\eqref{eq:6}, with an additional projection of the color structure of the vertices in Fig.~\ref{fig:3} on the desired anti-symmetric color octet sector. Corresponding symmetric counterparts are then taken into account by the exchange of multiple reggeized gluons and combinations of multiple reggeized gluons and induced vertices, see also the discussion in Appendix \ref{sec:multi-gluon-exchange}. In the following we always use the pole prescription for induced vertices proposed in \cite{Hentschinski:2011xg}. \section{Resummation of a strong reggeized gluon field} \label{sec:parametrize} In the following we provide a formulation of the high energy effective action which allows for a straightforward resummation of multiple reggeized gluon exchange in the case of quasi-elastic scattering, which is the relevant case for describing scattering of a dilute partonic projectile on a dense target nucleus or proton. \subsection{A special parametrization of the gluonic field} \label{sec:spec-param-gluon} The bulk of calculations performed within the framework set by the high energy effective action employs the vertex of Fig.~\ref{fig:3}.a), which provides a direct transition between a reggeized gluon field and a conventional QCD gluon. As noted in~\cite{Lipatov:1995pn, Lipatov:1996ts}, it is possible to avoid the use of such a direct transition vertex if one performs a shift $v_\pm \to V_\pm = v_\pm + A_\pm$ of the gluonic field in the effective action\footnote{Such a shift has been used for instance in \cite{Braun:2017qij, Hentschinski:2009zz}.}.
Such a shift has, however, the disadvantage that the gluonic field $v_\pm$ transforms like a gauge field under local gauge transformations, while the reggeized gluon field is invariant under such transformations. To avoid such differing transformation properties, the following parametrization of the gluonic field has been proposed in~\cite{Lipatov:1995pn}: \begin{align} \label{eq:para0} V^\mu(x) & = v^\mu(x) + \frac{n_+^\mu}{2} U[v_+(x)] A_-(x)U^{-1}[v_+(x)] + \frac{n_-^\mu}{2} U[v_-(x)] A_+(x)U^{-1}[v_-(x)] \notag \\ &= v^\mu(x) + \frac{n_+^\mu}{2} B_-(x) + \frac{n_-^\mu}{2} B_+(x) \, , \end{align} where \begin{align} \label{eq:1} B_\pm[v_\mp] = U[v_\mp] A_\pm U^{-1}[v_\mp] \,, \end{align} and the (inverse) Wilson line operators are defined as \begin{align} \label{eq:U} U[v_\pm] &= \frac{1}{1 + \frac{g}{\partial_\pm} v_\pm}, & U^{-1}[v_\pm] & = 1 + \frac{g}{\partial_\pm} v_\pm \, . \end{align} Here the integral operators $U$ and $U^{-1}$ act on a unit constant matrix from the left- and right-hand sides, respectively. For the above composite field $B_\pm[v_\mp]$, one finds the following gauge transformation properties: \begin{align} \label{eq:deltaterm} \delta_L B_\pm & = \delta_L U[v_\mp] A_\pm U^{-1}[v_\mp] + U[v_\mp] A_\pm\delta_L U^{-1}[v_\mp] = \left[g B_\pm, \chi_L \right]\,. \end{align} As a consequence the shifted gluonic field Eq.~\eqref{eq:para0} transforms as \begin{align} \label{eq:deltaterm2} \delta_L V_\pm & = \left[D_\pm, \chi_L \right] + [g B_\pm, \chi_L] = \left[D_\pm + g B_\pm, \chi_L \right], \end{align} {\it i.e.} the field $V_\mu$ has consistent gauge transformation properties corresponding to a gauge field. In the following we will use the above parametrization of the gluonic field to expand the high energy effective action for the quasi-elastic case around the reggeized gluon field $A_+$, which we treat as a strong classical background field $g A_+ \sim 1$.
\subsection{The effective Lagrangian quadratic in $v_\mu$} \label{sec:effect-lagr-quadr} In the following we limit ourselves to the quasi-elastic case, where the Lagrangian contains only the induced terms corresponding to the functional $T_-[v]$. The second set of induced terms is left aside for the moment. This is sufficient to describe the interaction of a dilute projectile with a target characterized by high parton densities in the high energy limit, where the $A_+$ field will couple through the reggeized gluon propagator to color charges in the target. To construct the effective action for quasi-elastic processes, we use the following parametrization of the gluonic field \begin{align} \label{eq:para1} V^\mu(x) & = v^\mu(x) + \frac{1}{2} (n_-)^\mu B_+[v_-] \end{align} and consider the following effective action for the quasi-elastic case \begin{align} \label{eq:2} S_{\text{eff}}^{\text{q.e.}} & = S_{\text{QCD}} + S_{\text{ind.}}^{\text{q.e.}} \end{align} with \begin{align} \label{eq:3} S_{\text{QCD}} & = \int d^4 x \left[ {\rm tr} \left( \frac{1}{2} G_{\mu\nu}G^{\mu\nu} \right) + \bar{\psi}(x) \left( i \slashed{D} \right) \psi(x) \right]\,, \end{align} where $ G_{\mu\nu} = \frac{1}{g} \left[D_\mu, D_\nu \right]$ and \begin{align} \label{eq:4} S_{\text{ind.}}^{\text{q.e.}} & = \int d^4 x\, {\rm tr} \left( \left\{ T_-[v] - A_- (x) \right\} \partial^2 A_+(x)\right)\,.
\end{align} Keeping fields $A_+$ to all orders and expanding in quantum fluctuations $v_\mu$ and $\psi$, $\bar{\psi}$ to quadratic order we obtain \begin{align} \label{eq:5} S_{\text{eff}}^{\text{q.e.}} & = \int d^4x \left[\mathcal{L}_0 + \mathcal{L}_1 - {\rm tr}\left( A_-\partial^2 A_+ \right)\right] + \mathcal{O}(v_\mu^3), \end{align} with the kinetic term of the gluonic and quark field \begin{align} \label{eq:l0} \mathcal{L}_0 & = {\rm tr} \left( -v^\mu [g_{\mu\nu} \partial^2 - \partial_\mu \partial_\nu] v^\nu\right) + \bar{\psi} i \slashed{\partial} \psi \end{align} and the quadratic terms which describe interaction with the reggeized gluon field, \begin{align} \label{eq:Lagrangian1X} \mathcal{L}_1 & = g\cdot \bigg\{ \frac{i}{2}\bar{\psi} \slashed{n}_- A_+ \psi + {\rm tr} \bigg[ \partial_- v_\mu [ A_+, v^\mu] + 2 \partial_\mu v_- [v^\mu, A_+] + \notag \\ & \hspace{5cm} + \partial^2 v_- \left[\left(\frac{1}{\partial_-}v_- \right), A_+ \right] - v_- \left(\frac{1}{\partial_-}v_- \right) \partial^2 A_+ \bigg] \bigg\}\,. \end{align} Since we assume that the reggeized gluon field couples to high partonic densities in the target, we have $g A_+ \sim 1 $; the term $\mathcal{L}_1$ is therefore of the same order as $\mathcal{L}_0$. The term ${\rm tr}(A_- \partial^2 A_+ )$ provides the kinetic term of the reggeized gluon field which is only needed to connect the $A_+$ field to {\it e.g.} the target. \subsection{Parton-parton-reggeized gluon vertices} \label{sec:part-part-regg} The above Lagrangian $\mathcal{L}_1$ allows now for the straightforward determination of the quark-quark-reggeized gluon (QQR) and gluon-gluon-reggeized gluon (GGR) vertex. 
Keeping an explicit dependence on the reggeized gluon field, we find for quarks, \begin{align} \label{eq:QQR} \parbox{2.8cm}{\includegraphics[width=2.8cm]{QQR_label.pdf}} & = -ig t^c_{ji} \Gamma_{\beta\alpha}(r,p) \int d^4z \, e^{-iz\cdot(p-r)} A^c_+(z), & \Gamma_{\beta\alpha}(r,p) & = -\frac{1}{2} \slashed{n}^+_{\alpha\beta}\,, \end{align} which coincides with the expression used {\it e.g.} in \cite{quarkjet}. For gluons one obtains instead \begin{align} \label{eq:GG} \parbox{2.8cm}{\includegraphics[width=2.8cm]{GGR_label.pdf}} & = -ig T^c_{ba} \Gamma^{\nu\mu}(r,p) \int d^4z \, e^{-iz\cdot(p-r)} A^c_+(z), \notag \\ \Gamma_+^{\nu\mu}(r, p) & = p^+ g^{\mu\nu} - (n^+)^\mu p^\nu - (n^+)^\nu r^\mu + \frac{r \cdot p}{p^+} (n^+)^\mu (n^+)^\nu \notag \\ & = p^+ g^{\mu\nu}_\perp - (n^+)^\mu {\bm p}^\nu - (n^+)^\nu {\bm r}^\mu - \frac{{\bm r} \cdot {\bm p}}{p^+} (n^+)^\mu (n^+)^\nu\,, \end{align} with $T^c_{ab} = -if^{abc}$. Since $\partial_-A_+ = 0$, the integral over $z$ yields for both vertices a $\delta(p^+ - r^+)$. We note that the above GGR-vertex was already obtained in \cite{Lipatov:1995pn}; it differs from the GGR-vertex obtained in {\it e.g.} \cite{gluonjet,Antonov:2004hh}, which is derived using the direct transition vertex of Fig.~\ref{fig:3}.a. The above GGR-vertex obeys the following important properties: first, one finds current conservation at the level of the vertex, even if the second gluon is not real and/or does not carry a physical polarization, \begin{align} \label{eq:9} r_\nu \cdot \Gamma^{\nu\mu}_+(r,p) & = 0 = \Gamma^{\nu\mu}_+(r, p) \cdot p_\mu\,. \end{align} A disadvantage of the above vertex, already noticed in \cite{Lipatov:1995pn}, is that the term $p\cdot r/p^+$ is in potential conflict with the Steinmann relations \cite{Steinmann}, since it may yield individual Feynman diagrams which contain singularities in overlapping channels, {\it e.g.} the $s$- and the $t$-channel.
Nevertheless, since this vertex is obtained from a shift in the gluonic field from an effective action which explicitly obeys the Steinmann relations, the terms which potentially violate the Steinmann relations should cancel for physical quantities. Application of this vertex to the calculation of physical observables should therefore be safe. Apart from the above relation, this GGR-vertex also obeys \begin{align} \label{eq:7} n^+_\nu \cdot \Gamma^{\nu\mu}_+(r,p) & = 0 = \Gamma^{\nu\mu}_+(r, p) \cdot n^+_\mu \,, \end{align} as well as \begin{align} \label{eq:8} \Gamma^{\nu\alpha}_+(r,k)\cdot (-g_{\alpha\alpha'} )\cdot \Gamma^{\alpha'\mu}_+(k,p) & = -p^+ \Gamma^{\nu\mu}_+(r,p)\,. \end{align} Identical properties hold for the QQR-vertex, \begin{align} \label{eq:10} \Gamma_{\beta\gamma'}(r,p) \slashed{n}_{\gamma'\gamma} &= 0 = \slashed{n}_{\beta\beta'} \Gamma_{\beta'\gamma}(r,p) \,, \notag \\ \Gamma_{\beta\gamma}(r,p) \slashed{k}_{\gamma\gamma'} \Gamma_{\gamma'\alpha}(r,p) & = -p^+ \Gamma_{\beta\alpha}(r,p)\,. \end{align} \subsection{Properties of the reggeized gluon field} \label{sec:prop-regg-gluon} The last two properties, Eq.~\eqref{eq:8} and Eq.~\eqref{eq:10}, are of high importance for arriving at a summation of the reggeized gluon field to all orders. Before addressing this task, we first recall the following property of the reggeized gluon field, \begin{align} \label{eq:7} \partial_- A_+ (x) & = 0, & A_+ (x)& = A_+ (x_0^-, {\bm x}, x^+) \notag \\ \partial_+ A_- (x) & = 0, & A_- (x)& = A_- (x_0^+, {\bm x}, x^-) \,, \end{align} with $x_0^\pm$ constants which are common to all reggeized gluon fields; since by Lorentz invariance the scattering amplitude does not depend on absolute space-time values, these constants can be conveniently set to $x_0^\pm = 0$. To keep the presentation as general as possible, we however keep in the following the dependence on $x_0^\pm$ and set it to zero only when comparing to other approaches.
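The algebraic identities for the GGR vertex quoted above are straightforward to verify by hand for momenta sharing the same plus component $r^+ = k^+ = p^+$; as an independent spot-check they can also be tested numerically. The following sketch is purely illustrative (the explicit choice $n^+ = (1,0,0,1)$ in the mostly-minus metric is our own assumption for the check, not fixed by the text):

```python
import numpy as np

# Hypothetical numerical spot-check of the GGR-vertex identities.
# Assumed conventions: metric g = diag(1,-1,-1,-1), n^+ = (1,0,0,1),
# so that n^- = (1,0,0,-1), n^+ . n^- = 2, and k^+ = n^+ . k.
g = np.diag([1.0, -1.0, -1.0, -1.0])
n_plus = np.array([1.0, 0.0, 0.0, 1.0])

def plus(k):
    return n_plus @ g @ k  # k^+ = n^+ . k

def Gamma(r, p):
    """GGR vertex Gamma_+^{nu mu}(r, p), stored as G[nu, mu]."""
    P = plus(p)
    rp = r @ g @ p
    return (P * g                                   # p^+ g^{mu nu}
            - np.outer(p, n_plus)                   # -(n^+)^mu p^nu
            - np.outer(n_plus, r)                   # -(n^+)^nu r^mu
            + (rp / P) * np.outer(n_plus, n_plus))  # +(r.p/p^+) n^mu n^nu

# generic off-shell momenta sharing the same plus component r^+ = k^+ = p^+
r = np.array([1.5, -0.2, 0.4, 0.5])
k = np.array([0.9, 1.1, 0.2, -0.1])
p = np.array([2.0, 0.3, -0.7, 1.0])
```

The checks confirm the transversality of the vertex in both momenta and in $n^+$, as well as the composition rule $\Gamma(r,k)(-g)\Gamma(k,p) = -p^+\,\Gamma(r,p)$; the transversality in the intermediate momentum is what later allows the gauge-dependent terms of a propagator numerator sandwiched between two GGR vertices to drop, leaving $-g_{\mu\nu}$.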
We further recall that the propagator of the reggeized gluon field, Fig.~\ref{fig:3}.b, which connects clusters significantly separated in rapidity, comes with a purely transverse denominator. The corresponding configuration space propagator is therefore in four dimensions given by \begin{align} \label{eq:11} \langle A_+(x)A_-(y) \rangle & = \int \frac{d^4 q}{(2 \pi)^4} e^{-iq\cdot(x - y)}\frac{2i}{{\bm q}^2} \notag \\ & = \frac{1}{2} \int \frac{d^2 {\bm q}}{(2 \pi)^2} \int \frac{d q^+}{2 \pi} e^{-iq^+(x_0^- - y^-)/2} \int \frac{d q^-}{2 \pi} e^{-iq^- (x^+ - x_0^+) /2} e^{i{\bm q}\cdot({\bm x} - {\bm y})}\frac{2i}{{\bm q}^2} \notag \\ & = 4\delta( y^- - x_0^-)\delta(x^+ - x^+_0) \cdot \int \frac{d^2 {\bm q}}{(2 \pi)^2} e^{i{\bm q}\cdot({\bm x} - {\bm y})}\frac{i}{{\bm q}^2} . \end{align} The four-dimensional reggeized gluon propagator can therefore be interpreted as the propagator of a two-dimensional reggeized gluon field $\alpha({\bm z})$, together with corresponding delta functions, \begin{align} \label{eq:13} \langle A_+(x)A_-(y)\rangle & = 4\delta(x^+-x^+_0) \delta(y^- - x^-_0) \cdot \langle \alpha({\bm x}) \alpha (\bm y)\rangle, \end{align} with \begin{align} \label{eq:21} \langle \alpha({\bm x}) \alpha (\bm 0)\rangle &= \int \frac{d^2 {\bm q}}{(2 \pi)^2} \frac{i e^{i{\bm q}\cdot({\bm x})}}{{\bm q}^2} . \end{align} This result then suggests parametrizing the reggeized gluon field as \begin{align} \label{eq:10} A_+ (x)& = 2 \cdot \alpha ({\bm x}) \delta(x^+ - x^+_0)\,, \end{align} where the factor of two appears due to the chosen convention for light-cone directions. We note that such a parametrization is commonly used in calculations within the CGC formalism, see {\it e.g.} \cite{Balitsky:1995ub, jimwlk1,jimwlk2,jimwlk6,jimwlk8}, with $x_0^+ = 0$.
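As a side remark (a standard two-dimensional Fourier transform, stated here for orientation and not derived in the text): the two-dimensional propagator $\langle \alpha({\bm x}) \alpha (\bm 0)\rangle$ above is infrared divergent, and with a mass regulator $m$ one has

```latex
\begin{align}
\langle \alpha({\bm x}) \alpha(\bm 0) \rangle_m
&= \int \frac{d^2 {\bm q}}{(2 \pi)^2}
   \frac{i\, e^{i {\bm q} \cdot {\bm x}}}{{\bm q}^2 + m^2}
 = \frac{i}{2\pi} K_0\!\left(m |{\bm x}|\right)
 \;\xrightarrow{\; m |{\bm x}| \to 0\;}\;
 -\frac{i}{2\pi} \left[ \ln \frac{m |{\bm x}|}{2} + \gamma_E \right],
\end{align}
```

so that for $m \to 0$ one recovers the logarithmic two-dimensional Coulomb propagator up to an $m$-dependent constant, which is expected to drop out in physical, color-singlet combinations.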
This treatment of the reggeized gluon field is possible since, within the effective action, the fields $A_\pm$ are treated as external classical fields for individual rapidity clusters and connect to other clusters only through the above reggeized gluon propagator. \subsection{All order summation of the reggeized gluon fields} \label{sec:all-order-summation} To sum up the interaction of partons with reggeized gluon fields to all orders in $\alpha_s$, it is necessary to determine the free gluon propagator of the quantum fluctuations $v^\mu$, which requires fixing a gauge following the usual Faddeev-Popov procedure. While the following discussion will be based on covariant gauge, we will also comment on the corresponding results obtained in axial light cone gauge with the free propagators given by the usual expressions \begin{align} \label{eq:free} \tilde{G}_{\text{cov.},\mu\nu}^{(0),ab}(k) & = \delta^{ab}\tilde{D}^{(0)}(k) \left[- g_{\mu\nu} + (1 - \xi) \frac{k_\mu k_\nu}{k^2}\right] = \delta^{ab}d_{\mu\nu}(k, \xi) \tilde{D}^{(0)}(k)\,, \notag \\ \tilde{G}^{(0),ab}_{\text{l.c.}, \mu\nu}(k) & =\delta^{ab} \tilde{D}^{(0)}(k) \left[- g_{\mu\nu} +\frac{k_\mu (n^+)_\nu + (n^+)_\mu k_\nu}{k \cdot n^+}\right] = \delta^{ab} d_{\text{l.c.},\mu\nu}(k, n^+) \tilde{D}^{(0)}(k)\,, \end{align} with \begin{align} \label{eq:G0} \tilde{D}^{(0)}(k) & = \frac{i}{k^2 + i\epsilon}. \end{align} Unless denoted otherwise, we will in the following always use covariant gauge. For the quark propagator one finds the usual expression \begin{align} \label{eq:12} \tilde{S}^{(0)}_{F}(k) & = \slashed{k} \tilde{D}^{(0)}(k)\,. \end{align} Due to the properties Eq.~\eqref{eq:9} and Eq.~\eqref{eq:7}, the polarization tensor of a gluon propagator connecting two GGR vertices always reduces to $-g_{\mu\nu}$, since all other terms are set to zero.
Using further the properties Eqs.~\eqref{eq:8} and \eqref{eq:10}, the interaction of $n$ reggeized gluons with a quark or gluon reduces essentially to \begin{align} \label{eq:15} & \prod_{i=1}^n \int d z_i^4 \prod_{j=1}^n \int \frac{d^4 k_j}{(2 \pi)^4} \, (-k_1^+) D_0(k_1) e^{i k_1\cdot (z_1 - z_{2})}\ldots (-k_{n-1}^+) D_0(k_{n-1}) e^{i k_{n-1}\cdot (z_{n-1} - z_{n})} \notag \\ & \hspace{5cm} e^{-ip\cdot z_1} \left(-igA_+(z_n)\right) \ldots \left(-ig A_+(z_1)\right) e^{ ir\cdot z_n} \notag \\ & = -2 \pi \delta( p^+ - r^+) e^{-i x_0^+(p^- - r^-)} \int d^2 {\bm z} e^{i {\bm z} \cdot ({\bm p} - {\bm r})} \notag \\ & \hspace{2.5cm} \bigg[ \theta(p^+) \mathrm{P} \left( \frac{-g}{2} \right)^n \int \prod_{i=1}^n dz^+_i \tilde{A}_+(z_i) -\theta(-p^+) \overline{\mathrm{P}} \left( \frac{g}{2} \right)^n \int \prod_{i=1}^n dz^+_i \tilde{A}_+(z_i) \bigg]\,. \end{align} To arrive at the above identity, we used the property Eq.~\eqref{eq:10}. For quarks, $A_+ = - it_{ji}^c A_+^c$ denotes the reggeized gluon field in the fundamental representation, while gluons require $A_+ \to \tilde{A}_+= - iT_{ba}^c A_+^c$, {\it i.e.} the reggeized gluon field in the adjoint representation. (Anti-)path-ordering of color matrices is defined as usual, \begin{align} \label{eq:pathordering} \mathrm{P} A_+(z_n^+, {\bm z} ) \cdots A_+(z^+_1, {\bm z}) & \equiv A_+(z^+_n, {\bm z}) \cdots A_+(z^+_1, {\bm z}) \theta(z^+_n - z_{n-1}^+) \ldots \theta(z_2^+ - z^+_1) \, \notag \\ \overline{ \mathrm{P}} A_+(z_n^+, {\bm z} ) \cdots A_+(z^+_1, {\bm z}) & \equiv A_+(z^+_1, {\bm z}) \cdots A_+(z^+_n, {\bm z}) \theta(z^+_n - z_{n-1}^+) \ldots \theta(z_2^+ - z^+_1) .
\end{align} Finally, summing over the number of reggeized gluons, one obtains for gluons the following effective vertex which sums up the interaction with an arbitrary number of reggeized gluon fields, \begin{align} \label{eq:finally} \parbox{4cm}{\includegraphics[width=4cm]{tau_G.pdf}} & = \tau_{G,\nu\mu}^{ab}(p, -r) = - 4 \pi \delta(p^+ - r^+) \Gamma_{\nu\mu}(r,p) e^{-i x_0^+(p^- - r^-)} \notag \\ & \hspace{-1cm} \cdot \int d^2 {\bm z} e^{i {\bm z} \cdot ({\bm p} - {\bm r})} \bigg[ \theta(p^+) \left[ U^{ba}({\bm z}) - \delta^{ab} \right]- \theta(-p^+) \left[ [U^{ba}({\bm z})]^\dagger - \delta^{ab} \right]\bigg]. \end{align} For quarks one finds, \begin{align} \label{eq:quark_vertex} \parbox{4cm}{\includegraphics[width=4cm]{tau_F.pdf}} & = \tau_{F}(p,-r)= 2 \pi \delta(p^+ - r^+) \slashed{n}^+ e^{-i x_0^+(p^- - r^-)} \notag \\ & \cdot \int d^2 {\bm z} e^{i {\bm z} \cdot ({\bm p} - {\bm r})} \bigg[ \theta(p^+) \left[ W({\bm z}) - 1 \right]- \theta(-p^+) \left[ [W({\bm z})]^\dagger - 1 \right]\bigg]\,. \end{align} To write down the above expressions, we introduced Wilson lines in the adjoint \begin{align} \label{eq:Uab} U^{ab}({\bm z}) & = \mathrm{P} \exp \left(-\frac{g}{2}\int_{-\infty}^\infty dz^+ \tilde{A}_+ \right), & \tilde{A}_+ & = -iT^c_{ab} A^c_+\,, \end{align} and the fundamental representation \begin{align} \label{eq:W} W({\bm z}) & = \mathrm{P} \exp \left(-\frac{g}{2}\int_{-\infty}^\infty dz^+ {A}_+ \right), & {A}_+ & = -it^c_{ij} A^c_+\,. \end{align} In contrast to the notation used in \cite{Hentschinski:2011xg,Ayala:2017rmh} and elsewhere, we use here the letter $W$ to denote the Wilson line in the fundamental representation to avoid confusion with the gluonic field in the effective action. The above expressions Eq.~\eqref{eq:finally} and Eq.~\eqref{eq:quark_vertex} are among the central results of this paper.
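The role of the (anti-)path-ordering in Eq.~\eqref{eq:pathordering} can be made concrete with a small numerical sanity check (not part of the derivation; SU(2) generators and all field values below are illustrative assumptions): fields at different light-cone times do not commute in general, while fields sharing a single delta-like support do.

```python
import numpy as np

# fundamental SU(2) generators t^a = sigma^a / 2, as a minimal non-abelian example
t = 0.5 * np.array([[[0, 1], [1, 0]],
                    [[0, -1j], [1j, 0]],
                    [[1, 0], [0, -1]]], dtype=complex)

def expm(X, terms=40):
    """Matrix exponential via truncated Taylor series (matrix norms here are O(1))."""
    out, term = np.eye(len(X), dtype=complex), np.eye(len(X), dtype=complex)
    for k in range(1, terms):
        term = term @ X / k
        out = out + term
    return out

# A_+ = -i t^c A^c_+, piecewise constant in z^+: value Xa on [0,1), value Xb on [1,2)
g_s = 0.8  # illustrative coupling
Xa = -1j * np.einsum('c,cij->ij', np.array([0.9, -0.4, 0.7]), t)
Xb = -1j * np.einsum('c,cij->ij', np.array([-0.2, 0.8, 0.3]), t)
Ma, Mb = -(g_s / 2) * Xa, -(g_s / 2) * Xb

# P exp(-g/2 \int dz^+ A_+): fields at later z^+ stand to the left
W_ordered = expm(Mb) @ expm(Ma)
W_naive = expm(Ma + Mb)  # unordered exponential

assert np.linalg.norm(W_ordered - W_naive) > 1e-6  # ordering matters: [Xa, Xb] != 0
assert np.allclose(W_ordered @ W_ordered.conj().T, np.eye(2), atol=1e-12)  # unitarity

# fields with delta-like support at a single z^+ commute with each other, so the
# path-ordered exponential collapses to a plain matrix exponential
assert np.allclose(expm(Ma / 2) @ expm(Ma / 2), expm(Ma), atol=1e-12)
```

The last assertion is the mechanism behind the collapse of the Wilson line to a plain matrix exponential for the delta-supported reggeized gluon field, discussed in Sec.~\ref{sec:wilsonlines}.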
\section{Comparison with expressions in the literature} \label{sec:literature} At this stage it is necessary to compare the result derived from Lipatov's high energy effective action with the conventional quark and gluon propagators in the presence of a background field used in the literature. \subsection{Comparison with propagators in the presence of a background field} \label{sec:comp-with-prop} Within the effective action, the corresponding resummed propagators are now easily obtained. Using Eqs.~\eqref{eq:finally} and \eqref{eq:quark_vertex} one finds for the resummed quark ($S_F$) and gluon ($G$) propagators: \begin{align} \label{eq:prop-action} S_{F} (p,q) &= S_{F}^{(0)} (p) (2 \pi)^4 \delta^{(4)}(p - q) \,+\, S_{F}^{(0)}(p) \, \cdot \, \tau_{F} (p,q) \, \cdot \, S_{F}^{(0)} (q)\, , \notag \\ G_{\mu\nu}^{ad} (p,q) &= G^{(0),ad}_{\;\;\mu\nu} (p) (2 \pi)^4 \delta^{(4)}(p - q) \, + \, G^{(0),ab}_{\;\;\mu\alpha} (p) \, \cdot \, \tau_{G}^{\alpha\beta,bc} (p,q) \, \cdot \, G^{(0), cd}_{\;\;\beta \nu} (q) \, , \end{align} where for the moment we do not specify the gauge of the free gluon propagators. These expressions are now to be compared with propagators obtained from treating the target as a background field in light-cone gauge $b \cdot n_- = 0$ with the only non-zero component \begin{align} \label{eq:bplus} b_+(x^+, {\bm z}) = \delta(x^+)\beta({\bm z}), \end{align} while $b^\mu_\perp = 0$.
Using the Fourier transform of corresponding counterparts in configuration space, see {\it e.g.} \cite{McLerran:1994vd}, one finds in momentum space (see {\it e.g.} \cite{Ayala:2017rmh} for expressions used in a recent calculation), \begin{align} \label{eq:prop-bg} S^{[b]}_{F} (p,q) &= S_{F}^{(0)} (p) (2 \pi)^4 \delta^{(4)}(p - q) + S_{F}^{(0)}(p) \, \cdot \, \tilde{\tau}_{F} (p,q) \, \cdot \, S_{F}^{(0)} (q)\, , \notag \\ G_{\mu\nu}^{[b],ad} (p,q) &= G^{(0),ad}_{\text{l.c.},\mu\nu} (p) (2 \pi)^4 \delta^{(4)}(p - q) + G^{(0),ab}_{\;\;\mu\alpha} (p) \, \cdot \, \tilde{\tau}_{G}^{\alpha\beta,bc} (p,q) \, \cdot \, G^{(0), cd}_{\text{l.c.},\beta \nu} (q) \, , \end{align} where the gluon propagator is now restricted to $v \cdot n_- = 0$ light-cone gauge. The superscript `$[b]$' indicates that these propagators have been derived using the background field in light-cone gauge and not the reggeized field $A_+$. One has \begin{align} \label{eq:quarkinteractionCGC} \tilde{\tau}_{F}(p,-q) & = 2 \pi \delta(p^+ - q^+) ~ \slashed{n}^+ \notag \\ & \times \int d^{2} {\bm z} e^{i{\bm z} \cdot ({ \bm p} - { \bm q})} \left\{\theta(p^+) \big[ W[b]({\bm z}) -1 \big] - \theta(-p^+) \big[ W[b]^\dagger({\bm z}) -1 \big] \right\} \\ \label{eq:gluoninteractionCGC} \tilde{\tau}_{G,\nu\mu}^{ab}(p,q)&= 2 \pi \delta(p^+ - q^+) ~ ( - 2 p^+ g_{\nu\mu}) \notag \\ & \times \int d^{2} {\bm z} e^{i{\bm z} \cdot ({\bm p} - {\bm q})} \left\{\theta(p^+) \big[ U^{ab}[b]({\bm z}) -\delta^{ab} \big] - \theta(-p^+) \big[ \left(U^{ab}[b]\right)^\dagger({\bm z}) -\delta^{ab} \big] \right\}, \end{align} with Wilson lines in fundamental ($W$) and adjoint ($U$) representation \begin{align} \label{eq:wilson} W[b](\bm z) & = \mathrm{P} \exp \left( -\frac{g}{2} \int\limits_{-\infty}^\infty d x^+ b^{-}(x^+, {\bm z}) \right), & b^{-}(x^+, {\bm z}) & = -i b^{-,c}(x^+, {\bm z})t^c \notag \\ U[b](\bm z) & = \mathrm{P} \exp \left(- \frac{g}{2} \int\limits_{-\infty}^\infty d x^+ \tilde{b}^{-}(x^+, {\bm z}) \right), & \tilde{b}^{-}(x^+, {\bm z}) & = -i b^{-,c}(x^+, {\bm z})T^c \,. \end{align} Leaving aside potential differences in the Wilson lines, to which we will turn in Sec.~\ref{sec:wilsonlines}, one observes that both quark propagators agree directly with each other (if one sets $x_0^+ = 0$). To carry out a similar comparison for the gluon, we consider first the case where the external free propagators in Eq.~\eqref{eq:prop-action} are taken in $v \cdot n_- = 0$ light-cone gauge. Since $d_{\text{l.c.}}^{\mu\nu}(p, n^+) n^+_\nu = 0 = d_{\text{l.c.}}^{\mu\nu}(r, n^+) n^+_\mu$, all terms in the vertex $\Gamma^{\nu\mu}(r,p)$ which contain a $n^+_\mu$ or $n^+_\nu$ vanish. One therefore remains with the $2 p^+ g_{\mu\nu}$ term only, which is precisely the term used in Eq.~\eqref{eq:gluoninteractionCGC}. Both expressions therefore agree for $x_0^+ = 0$. We further note that both the light-cone gauge polarization tensor and the GGR-vertex can be factorized into products of a `left' and a `right' tensor, \begin{align} \label{eq:lc_connect1} c_L^{\mu\alpha}(p, n^+) & = \left( g^{\mu\alpha} - \frac{(n^+)^\mu p^\alpha}{p \cdot n^+} \right) & c_R^{\alpha\nu}(r, n^+) & = \left( g^{\alpha\nu} - \frac{ r^\alpha (n^+)^\nu}{r \cdot n^+} \right)\,, \end{align} where \begin{align} \label{eq:moreconnect} \Gamma^{\mu\nu} & = p^+ c_L^{\mu\alpha} (p, n^+) c_R^{\alpha\nu}(r, n^+), \end{align} and \begin{align} \label{eq:16} d_{\text{l.c.}}^{\mu\nu}(p, n^+) & = c_R^{\mu\alpha}(p, n^+) (-g_{\alpha\beta}) c_L^{\beta\nu}(p, n^+). \end{align} This property allows one to establish on a diagrammatic level how the vertex $\Gamma^{\mu\nu}$ can be built up by properly factorizing the numerator of the light-cone gauge gluon propagator and absorbing the factors into the vertex; the information contained in Eq.~\eqref{eq:prop-action} and Eq.~\eqref{eq:prop-bg} is therefore in this sense identical. As an interesting aside, a similar mechanism has been used in the construction of a certain projector in \cite{Gituliar:2015agu}.
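The factorization properties quoted above are purely algebraic and can be verified numerically; the following sketch checks Eq.~\eqref{eq:16} and the transversality of the `left' and `right' tensors (the explicit components of $n^+$ and the numerical momentum are assumptions of the check; only $n^+ \cdot n^+ = 0$ matters):

```python
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])   # mostly-minus metric
n = np.array([1.0, 0.0, 0.0, 1.0])     # a light-like vector with n.n = 0 (components assumed)
p = np.array([0.7, 0.3, -0.5, 0.2])    # generic off-shell momentum, values arbitrary
pn = p @ g @ n                          # p . n^+

# 'left' and 'right' tensors of Eq. (lc_connect1); all indices upper, so the
# metric with upper indices is numerically the same diagonal matrix
cL = g - np.outer(n, p) / pn            # c_L^{mu alpha}(p, n^+)
cR = g - np.outer(p, n) / pn            # c_R^{alpha nu}(p, n^+)

# light-cone gauge numerator d_{l.c.}^{mu nu}(p, n^+) from Eq. (free)
d_lc = -g + (np.outer(p, n) + np.outer(n, p)) / pn

# Eq. (16): d_{l.c.} = c_R (-g) c_L
assert np.allclose(cR @ (-g) @ cL, d_lc, atol=1e-12)

# transversality of the factors, which also enforces the gauge condition d_{l.c.} . n^+ = 0
n_low = g @ n                           # n^+_mu
assert np.allclose(cL @ n_low, 0, atol=1e-12)   # c_L^{mu alpha} n^+_alpha = 0
assert np.allclose(n_low @ cR, 0, atol=1e-12)   # n^+_alpha c_R^{alpha nu} = 0
assert np.allclose(d_lc @ n_low, 0, atol=1e-12)
```

The check uses a fixed numerical momentum purely for illustration; the identities hold for any $p$ with $p \cdot n^+ \neq 0$.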
\subsection{Comparison of Wilson lines and the definition of the reggeized gluon} \label{sec:wilsonlines} In the following we attempt a somewhat detailed comparison between the Wilson lines in the reggeized gluon field $A_+$, arising from Lipatov's high energy effective action, and Wilson lines in the background field $b_+$, frequently encountered in CGC calculations in light-cone gauge. While we find that the interpretation of these Wilson lines differs, we would like to stress that for the calculation of correlators in the dilute quasi-elastic region, {\it i.e.} perturbative forward scattering in the presence of a strong background field (reggeized gluon field or light-cone gauge background field), both formalisms are equivalent; the only difference is that the effective action allows use of arbitrary gauges\footnote{Nevertheless we would like to stress that calculations based on the background field in light-cone gauge also allow, at least in principle, for the use of different gauges for the gluon fluctuations.}. The difference lies therefore mainly in the interpretation of the background field, {\it i.e.} the coupling to color sources in a different rapidity cluster. At first both Wilson lines appear to resum identical fields; Eq.~\eqref{eq:10} and Eq.~\eqref{eq:bplus} take identical forms. For a Wilson line of a generic gluonic field $V_+$ one obviously has \begin{align} \label{eq:Wperm} W[V](x) &= \mathrm{P} \exp\left(-\frac{g}{2} \int\limits_{-\infty}^\infty dx^+ V_+(x) \right) = \sum_{n=0}^\infty \frac{ \left({-g} \right)^n }{2^n n!} \int \prod_{i=1}^n d x_i^+ \notag \\ & \hspace{2cm} \bigg[ V_+(x_1)\dots V_+(x_n) \theta(x_1^+ - x_2^+) \ldots \theta(x^+_{n-1} - x_n^+) + \text{permutations} \bigg].
\end{align} If now $V_+(x) = A_+(x) = -2i\delta(x^+ - x^+_0) \alpha^a({\bm x})t^a$, all permutations of the fields $A_+(x_i)$, $i = 1, \ldots, n$, are identical (since their $x^+$ dependence is identical) and we arrive directly at \begin{align} \label{eq:WpermDelta} W[A](x) &= \sum_{n=0}^\infty \frac{1}{n!} \left(\frac{-g}{2} \right)^n \prod_{i=1}^n \int d x_i^+ A_+(x_1)\dots A_+(x_n) \notag \\ & \hspace{3cm} \left[\theta(x_1^+ - x_2^+) \ldots \theta(x^+_{n-1} - x_n^+) + \text{permutations} \right] \notag \\ & = \sum_{n=0}^\infty \frac{1}{n!} \left(\frac{-g}{2} \right)^n \prod_{i=1}^n \int d x_i^+ A_+(x_1)\dots A_+(x_n) = e^{{ig}\alpha^a({\bm x}) t^a }\,. \end{align} We therefore obtain a simple matrix exponential. Formally, the choice $V_+(x) = b_+(x) = -i\delta(x^+) \beta^a({\bm x})t^a$ obviously leads to the same result. In the literature such an interpretation is however usually avoided by treating the contraction of the $x^+$-dependence to delta-like support as an approximation which applies to the calculation of correlators in the background field, while the field $b_+$ itself remains ordered in the $x^+$ coordinate, see {\it e.g.} \cite{jimwlk8}. \\ While the precise interpretation used is irrelevant for the calculation of correlators in the presence of a background field, the difference becomes striking once correlators of the background field with {\it e.g.} color charges in a rapidity cluster significantly separated in rapidity are considered (``the dense target''). Vertices which describe the interaction of the Wilson line with $n$ reggeized gluon fields come with purely symmetric color tensors, since the precise ordering of fields is irrelevant. For the gluonic field $b_+(x)$ such a result is not acceptable, since one would miss the corresponding anti-symmetric and mixed symmetry correlators.
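Since all fields sit at the same light-cone time, the collapse of the path-ordered exponential to $e^{ig\alpha^a t^a}$ also fixes the adjoint Wilson line to a plain exponential of the adjoint generators. For SU(3) this, together with the standard conjugation relation $U^{ab} = 2\,{\rm tr}[t^a W t^b W^\dagger]$ of Eq.~\eqref{eq:4}, can be checked numerically (coupling and field values below are arbitrary illustrative choices):

```python
import numpy as np

# Gell-Mann matrices; fundamental generators t^a = lambda^a/2 with tr(t^a t^b) = delta^{ab}/2
lam = np.zeros((8, 3, 3), dtype=complex)
lam[0][0, 1] = lam[0][1, 0] = 1
lam[1][0, 1] = -1j; lam[1][1, 0] = 1j
lam[2][0, 0] = 1; lam[2][1, 1] = -1
lam[3][0, 2] = lam[3][2, 0] = 1
lam[4][0, 2] = -1j; lam[4][2, 0] = 1j
lam[5][1, 2] = lam[5][2, 1] = 1
lam[6][1, 2] = -1j; lam[6][2, 1] = 1j
lam[7] = np.diag([1, 1, -2]) / np.sqrt(3)
t = lam / 2

# structure constants f^{abc} = -2i tr([t^a, t^b] t^c); adjoint generators (T^c)_{ab} = -i f^{cab}
comm = np.einsum('aij,bjk->abik', t, t) - np.einsum('bij,ajk->abik', t, t)
f = (-2j * np.einsum('abij,cji->abc', comm, t)).real
T = -1j * f

def expm(X, terms=40):
    """Matrix exponential via truncated Taylor series (matrix norms here are O(1))."""
    out, term = np.eye(len(X), dtype=complex), np.eye(len(X), dtype=complex)
    for k in range(1, terms):
        term = term @ X / k
        out = out + term
    return out

g_s = 0.7                                                        # illustrative coupling
alpha = np.array([0.3, -0.1, 0.2, 0.4, -0.3, 0.1, -0.2, 0.25])   # illustrative alpha^a

W = expm(1j * g_s * np.einsum('a,aij->ij', alpha, t))        # fundamental Wilson line
U = expm(1j * g_s * np.einsum('a,aij->ij', alpha, T)).real   # adjoint Wilson line

# adjoint line from the fundamental one: U^{ab} = 2 tr[t^a W t^b W^dagger]
U_conj = 2 * np.einsum('aij,jk,bkl,li->ab', t, W, t, W.conj().T).real
assert np.allclose(U, U_conj, atol=1e-10)
assert np.allclose(U @ U.T, np.eye(8), atol=1e-10)   # adjoint line is real orthogonal
```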
Within the effective action, the interaction with these color charges does not occur directly through the reggeized gluon field, but through the induced vertices Fig.~\ref{fig:3} and corresponding higher order vertices. Following the treatment in \cite{Hentschinski:2011xg}, these vertices carry only anti-symmetric color tensors (corresponding to a combination of commutators of SU$(N_c)$ generators). Combining these induced vertices with the symmetric $m$ reggeized gluon state to construct a `Wilson-line-$n$ gluon' vertex ($n \geq m$), where the coupling to the Wilson line is always mediated by at least one reggeized gluon, one recovers the complete symmetry structure. For a pedagogical presentation of the case of up to three gluons we refer to Appendix \ref{sec:multi-gluon-exchange}; see also the discussion in \cite{Hentschinski:2009zz}. \\ At this point we would like to return to a proposal made in \cite{Caron-Huot:2013fea} for the definition of the reggeized gluon from Wilson lines in the Balitsky-JIMWLK formalism, namely to define the reggeized gluon $R^a({\bm z})$ as the logarithm of the adjoint Wilson line, \begin{align} \label{eq:17} R^a({\bm z}) &\equiv \frac{1}{g N_c} f^{abc} \log U^{bc}({\bm z})\,.
\end{align} Using the above results, one finds directly within Lipatov's high energy effective action, \begin{align} \label{eq:18} R^a({\bm z}) & = \frac{1}{g N_c} f^{abc} \left[{ig} \alpha^d({\bm z})T^d_{bc} \right] = \alpha^a({\bm z}) = \frac{1}{2} \int dx^+ A_+^a(x^+, {\bm z}), \end{align} {\it i.e.} the definition of the reggeized gluon of \cite{Caron-Huot:2013fea} coincides with the reggeized gluon field of Lipatov's effective action, once this field is integrated over the corresponding light-cone coordinate\footnote{At least within the high energy effective action, a definition based on the Wilson lines in the fundamental representation would be equally possible, {\it i.e.} $R^a({\bm z}) = \frac{2}{ig}{\rm tr}(t^a \log [W({\bm z})]) = \alpha^a({\bm z})$.}. \section{Balitsky-JIMWLK evolution} \label{sec:balitsky-jimwlk-evol} In the following we demonstrate that the high energy evolution of Wilson lines of reggeized gluon fields (obtained within the high energy effective action) leads directly to the leading order Balitsky-JIMWLK evolution equation. Even though this is expected, given the coincidence of the resummed gluon and quark propagators, this provides an important consistency check, in particular for future calculations of CGC observables. We will then investigate whether integrating out quantum fluctuations of a general ensemble of Wilson lines indeed gives rise to the Balitsky-JIMWLK evolution equation. \\ Within Lipatov's high energy effective action, the determination of high energy evolution requires in general the high energy effective action for `central-rapidity' processes, {\it i.e.} the effective action which contains both the $A_-$ and $A_+$ reggeized gluon fields and corresponding induced vertices.
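The reduction in Eq.~\eqref{eq:18} relies only on $(T^d)_{bc} = -i f^{dbc}$ and on the contraction $f^{abc} f^{dbc} = N_c \delta^{ad}$. As a quick numerical sanity check, here for SU(2) where $f^{abc} = \epsilon^{abc}$ and $N_c = 2$ (coupling and field values are arbitrary, chosen small enough to stay on the principal branch of the logarithm):

```python
import numpy as np

# SU(2): f^{abc} = epsilon^{abc}, N_c = 2
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

# contraction f^{abc} f^{dbc} = N_c delta^{ad}
assert np.allclose(np.einsum('abc,dbc->ad', eps, eps), 2.0 * np.eye(3))

# adjoint generators (T^d)_{bc} = -i f^{dbc}; adjoint Wilson line U = exp(i g alpha^d T^d)
T = -1j * eps
g_s = 0.6
alpha = np.array([0.3, -0.2, 0.5])  # small values keep log U on the principal branch
X = 1j * g_s * np.einsum('d,dbc->bc', alpha, T)  # real antisymmetric matrix
U, term = np.eye(3), np.eye(3)
for k in range(1, 30):              # U = exp(X) via Taylor series
    term = term @ X / k
    U = U + term

# matrix logarithm of the (orthogonal) adjoint Wilson line via eigendecomposition
w, V = np.linalg.eig(U)
logU = (V * np.log(w)) @ np.linalg.inv(V)

# Eq. (17)/(18): R^a = f^{abc} (log U)^{bc} / (g N_c) recovers alpha^a
R = np.einsum('abc,bc->a', eps, logU).real / (g_s * 2.0)
assert np.allclose(R, alpha, atol=1e-8)
```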
For the discussion of dense-dilute collisions the decomposition provided by the effective action for central rapidities is however not very efficient; the additional set of induced vertices provides a certain color decomposition of amplitudes which describe gluon production from a multi-reggeized gluon exchange. While it has been demonstrated at the level of the scattering amplitude for four-reggeized gluon exchange that, after a certain reshuffling of terms, the $2 \to 4$ reggeized gluon vertex (triple Pomeron vertex) arises from the high energy effective action \cite{Hentschinski:2009zz} (which at the same time can be shown to arise as well from Balitsky-JIMWLK evolution \cite{Bartels:2004ef}), the calculation is rather cumbersome. While the reformulation of the effective action in Sec.~\ref{sec:parametrize} already provides a first simplification, it is easier to recover the Balitsky-JIMWLK evolution equation from the quantum fluctuations of the quasi-elastic Lagrangian. For an ensemble of Wilson lines the latter are directly proportional to the high energy divergence, without the need to drop any finite terms. We hope to return to the description which uses the high energy effective action for central rapidity processes in a future publication. \\ For the following discussion it is sufficient to consider Wilson lines in the fundamental representation. Adjoint Wilson lines can be rewritten in terms of fundamental Wilson lines using the well-known relation \begin{align} \label{eq:4} U^{ab}({\bm z}) & = 2 {\rm tr} \left[ t^a W({\bm z})t^b W^\dagger({\bm z})\right]\,, \end{align} while the hermitian conjugate of a fundamental Wilson line follows trivially from the discussion of the fundamental Wilson line itself. We will therefore consider the quantum fluctuations of an ensemble of $n$ fundamental Wilson lines in the reggeized gluon fields, \begin{align} \label{eq:5} W[A_+]({\bm z}_1) \otimes \ldots \otimes W[A_+]({\bm z}_n).
\end{align} \subsection{Feynman rules for quantum fluctuations of a Wilson line} \label{sec:fr} \begin{figure}[th] \centering \parbox{3.5cm}{\includegraphics[height=1.5cm]{wilson.pdf}} $ = \displaystyle W[A_+]({\bm z}, x_0^-) $, $\qquad$ \parbox{2cm}{\includegraphics[width=2cm]{eikprop.pdf}} $ = \displaystyle \frac{i}{p^- + i\epsilon} $, \\ \parbox{7cm}{\center (a)} \parbox{5cm}{\center (b)} \\ \parbox{3.5cm}{\includegraphics[width=3.5cm]{wilson_vertex.pdf}} $ = \displaystyle \frac{g}{q^+} (n^+)^\mu e^{-iq^+ x_0^-/2 + i {\bm q} \cdot {\bm z}} \cdot \bigg[ W[A]({\bm z}, x_0^-), t^a \bigg] $, \\ \parbox{10cm}{\center (c)} \\ \parbox{4cm}{\includegraphics[width=3.5cm]{eikvertex.pdf}} $ = \displaystyle ig t^a (n^-)^\mu e^{-iq^+ x_0^-/2 + i {\bm q} \cdot {\bm z}} .$ \hspace{2.5cm} $\,$ \\ \parbox{10cm}{\center (d)} \caption{\small Feynman rules for the calculation of quadratic fluctuations of the Wilson lines in covariant or $v_-=0$ gauge. Note that the Wilson-line-gluon vertex (d) conserves momentum as usual, while four-momenta are not conserved at the vertices (a) and (c). Momenta which are not fixed by external momenta are understood to be integrated over with the measure $d^4 p/(2 \pi)^4$.} \label{fig:feynman_wilson} \end{figure} Integrating out the quantum fluctuations $v^\mu$ is most easily achieved if one supplements the effective action with an auxiliary complex one-dimensional scalar field $\varphi = \varphi(x^+, {\bm z}, x_0^-)$, where ${\bm z}$ and $x_0^-$ are constant for the dynamics of the scalar field. The field is charged in the fundamental representation of $SU(N_c)$ and transforms under gauge transformations as \begin{align} \label{eq:gauge_scalar} \delta_L \varphi & = - \chi_L \varphi.
\end{align} The one-dimensional gauge invariant action of this field, which describes its interaction with the gluonic field, is given by \begin{align} \label{eq:Lagrangian_scalar} S[\varphi, v] & = \int d x^+ \varphi^\dagger \left[ i \partial_+ + i g v_+ \right]\varphi \,, \end{align} where all fields are taken at fixed $({\bm x}, x_0^-)$. One obtains in a straightforward manner for the propagator of this scalar field \begin{align} \label{eq:propagator} \left \langle x^+\left | \frac{1}{1 + \frac{g}{\partial_+ + \epsilon} v_+} \frac{1}{\partial_+ + \epsilon} \right |y^+ \right \rangle & = \mathrm{P} \exp \left(\frac{-g}{2} \int_{y^+}^{x^+} d z^+ v_+ \right). \end{align} As a next step we use the parametrization Eq.~\eqref{eq:para1} of the gluonic field and limit ourselves to terms quadratic in the quantum fluctuations. Limiting ourselves further to covariant or $v_-=0$ gauges, the following simplified shift is sufficient\footnote{Covariant gauge requires correlators of $v_-$ and $v_+$ fields as well as of two $v_+$ fields, while the correlator of two $v_-$ fields vanishes; $v_-=0$ gauge requires only the correlator of two $v_+$ fields.}, \begin{align} \label{eq:shift_new} v^\mu & \to V^\mu = v^\mu + \frac{1}{2}(n_-)^\mu \left( A_+ + [A_+, \frac{g}{\partial_-}v_-] \right) + \mathcal{O}(v_-^2). \end{align} Expanding our expressions around the background field $gA_+\sim 1$, the shifted action is given by \begin{align} \label{eq:Lagrangian_scalar_shift} S[\varphi, A_+, v] & = \int d x^+ \varphi^\dagger \left[ i \partial_+ + i g\left( v_+ + A_+ + [A_+, \frac{g}{\partial_-}v_-] \right)\right] \varphi\, . \end{align} The resulting set of Feynman rules necessary for the calculation of $\mathcal{O}(g^2)$ corrections within covariant and/or $v_-=0$ gauge is then summarized in Fig.~\ref{fig:feynman_wilson}.
\subsection{Calculating quantum fluctuations} \label{sec:int2} Since we require only fluctuations up to quadratic order, it is sufficient to consider the correlator of two Wilson lines at one loop. The non-zero diagrams for self-energy type corrections to one Wilson line are given by \begin{align} \label{eq:30} \parbox{4cm}{\includegraphics[height=2cm]{self1.pdf}} + \parbox{4cm}{\includegraphics[height=2cm]{self2.pdf}} + \parbox{4cm}{\includegraphics[height=2cm]{self3.pdf}.} \end{align} For interactions between two Wilson lines, evaluation of the following diagrams is sufficient (the remaining diagrams can be deduced from symmetry), \begin{align} \label{eq:diag_2lines} \parbox{4cm}{\includegraphics[height=3cm]{4point2.pdf}} + \parbox{4cm}{\includegraphics[height=3cm]{4point4.pdf}} + \parbox{4cm}{\includegraphics[height=3cm]{4point3.pdf}\,.} \end{align} Note that correlators of Wilson lines are infra-red finite only if projected onto the color singlet. The general case of colored Wilson lines is nevertheless of interest; in particular it allows one to recover the gluon Regge trajectory, see \cite{Caron-Huot:2013fea} for a detailed discussion. We therefore work in $d = 4 + 2 \epsilon$ space-time dimensions, with the vertices Eq.~\eqref{eq:finally} and Eq.~\eqref{eq:quark_vertex} generalizing trivially. We obtain \begin{align} \label{eq:self2} & \parbox{4.5cm}{\center \includegraphics[height=2cm]{self2.pdf}} = (ig)^2 \! \int \frac{d^d p}{(2 \pi)^d} \!\int \frac{d^d r}{(2 \pi)^d} \frac{i}{-p^- - i \epsilon} \frac{i}{- r^- - i\epsilon} \frac{-i}{p^2 + i \epsilon} \frac{-i}{r^2 + i \epsilon} \notag \\ & 2 \pi \delta(p^+ - r^+) \int d^{2 + 2 \epsilon} {\bm z} e^{-i {\bm p }\cdot ({\bm x} - {\bm z})} e^{-i {\bm r }\cdot ({\bm z} - {\bm x})} t^b W({\bm x}) t^a \notag \\ & \hspace{4cm} \cdot \bigg[ \theta(p^+) \left[ U^{ab}({\bm z}) - \delta^{ab} \right]- \theta(-p^+) \left[ [U^{ab}({\bm z})]^\dagger - \delta^{ab} \right] \bigg] \notag \\ & = \frac{g^2}{ \pi} \int_0^\infty \frac{dp^+}{p^+} \int d^{2 + 2 \epsilon} {\bm z} \, t^b W({\bm x}) t^a \left[ U^{ab}({\bm z}) - \delta^{ab} \right] \frac{\Gamma^2(1+\epsilon)}{(4\pi^{2 + 2 \epsilon})} \frac{({\bm x}- {\bm z})\cdot({\bm x}- {\bm z})}{[({\bm x}- {\bm z})^2]^{1+\epsilon}[({\bm x}- {\bm z})^2]^{1+\epsilon}} \end{align} The divergent integral over the plus-momenta provides the high-energy singularity which defines the kernel of the high energy evolution. The precise choice of the regulator is irrelevant for leading order accuracy. In the following we choose $\Lambda_{a,b} \to \infty$ and a scale $s_0$ of the order of the transverse scale, also known as the reggeization scale, to regularize the integral as, \begin{align} \label{eq:reg2} \int \limits_{s_0/\Lambda_b}^{\Lambda_a} \frac{d p^+}{p^+} & = \ln \left(\frac{\Lambda_a \Lambda_b}{s_0} \right)\,. \end{align} To derive the high energy evolution of Wilson lines, $\Lambda_a$ will be the regulator of interest, since it limits the $p^+$ integral from above.
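The transverse factor in the last line of Eq.~\eqref{eq:self2} arises from the standard Fourier integral in $2+2\epsilon$ transverse dimensions, which we quote here for convenience (a textbook result; overall signs depend on the chosen phase conventions):
\begin{align}
  \int \frac{d^{2+2\epsilon} {\bm q}}{(2\pi)^{2+2\epsilon}} \, \frac{e^{i {\bm q} \cdot {\bm x}}}{{\bm q}^2} & = \frac{\Gamma(\epsilon)}{4 \pi^{1+\epsilon}} \, [{\bm x}^2]^{-\epsilon}\,,
  &
  \int \frac{d^{2+2\epsilon} {\bm q}}{(2\pi)^{2+2\epsilon}} \, e^{i {\bm q} \cdot {\bm x}} \, \frac{{\bm q}^j}{{\bm q}^2} & = \frac{i\, \Gamma(1+\epsilon)}{2 \pi^{1+\epsilon}} \, \frac{{\bm x}^j}{[{\bm x}^2]^{1+\epsilon}}\,,
\end{align}
where the second integral follows from the first by differentiation with respect to ${\bm x}^j$. Each gluon attached to the adjoint Wilson line contributes one such vector factor, and the product of two factors reproduces the coefficient $\Gamma^2(1+\epsilon)/(4 \pi^{2+2\epsilon})$ together with the dipole-like transverse structure in the last line of Eq.~\eqref{eq:self2}.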
With the $\overline{\text{MS}}$ strong coupling constant in $d=4 + 2 \epsilon$ dimensions \begin{align} \label{eq:MSbaralphaS} \alpha_s & = \frac{g^2 \mu^{2\epsilon} \Gamma(1-\epsilon)}{(4 \pi)^{1 + \epsilon}}, \end{align} we obtain \begin{align} \label{eq:self2_fertig} \parbox{4.5cm}{\center \includegraphics[height=2cm]{self2.pdf}} & = \ln \left(\frac{\Lambda_a \Lambda_b}{s_0} \right)\frac{\alpha_s}{\pi^2} \left(\frac{4}{\pi \mu^2} \right)^\epsilon \frac{\Gamma(1 + \epsilon)^2}{\Gamma(1-\epsilon)} \notag \\ & \hspace{-2cm} \cdot \int d^{2 + 2\epsilon} {\bm z} \frac{({\bm x} - {\bm z})\cdot ({\bm x} - {\bm z}) }{[({\bm x} - {\bm z})^2]^{1 + \epsilon}[({\bm x} - {\bm z})^2]^{1 + \epsilon}} t^b W({\bm x}) t^a \left[U^{ba}({\bm z}) - \delta^{ab} \right] \, . \end{align} We further have \begin{align} \label{eq:self13} & \parbox{4.5cm}{\center \includegraphics[height=2cm]{self1.pdf}} + \parbox{4.5cm}{\center \includegraphics[height=2cm]{self3.pdf}} = \notag \\ & = \ln \left(\frac{\Lambda_a \Lambda_b}{s_0} \right)\frac{\alpha_s \Gamma^2(1+\epsilon)}{2 \pi^2 \Gamma(1-\epsilon)} \left(\frac{4 }{\pi \mu^2} \right)^\epsilon \int d^{2 + 2 \epsilon} {\bm z} \frac{({\bm x} - {\bm z}) \cdot ({\bm x} - {\bm z})}{[({\bm x} - {\bm z})^2]^{1 + \epsilon}[({\bm x} - {\bm z})^2]^{1 + \epsilon}} \notag \\ & \hspace{6cm} \left[2 t^a W({\bm x}) t^a - t^a t^a W({\bm x}) - W({\bm x})t^a t^a \right] \, . \end{align} Combining both contributions one obtains \begin{align} \label{eq:combineSELF} & \ln \left( \frac{\Lambda_a \Lambda_b}{s_0} \right) \frac{\alpha_s \Gamma^2(1+\epsilon)}{2 \pi^2 \Gamma(1-\epsilon)} \left(\frac{4 }{\pi \mu^2} \right)^\epsilon \int d^{2 + 2 \epsilon} {\bm z} \frac{({\bm x} - {\bm z}) \cdot ({\bm x} - {\bm z})}{[({\bm x} - {\bm z})^2]^{1 + \epsilon}[({\bm x} - {\bm z})^2]^{1 + \epsilon}} \notag \\ & \hspace{6cm} \left[2 U^{ba}({\bm z}) t^b W({\bm x}) t^a - t^a t^a W({\bm x}) - W({\bm x})t^a t^a \right] \,. 
\end{align} The calculation for the interaction of two Wilson lines follows in complete analogy: \begin{align} \label{eq:4point_fertig} & \parbox{4.5cm}{\center \includegraphics[height=3cm]{4point4.pdf}} = \ln \left(\frac{\Lambda_a \Lambda_b}{s_0} \right)\frac{\alpha_s}{\pi^2} \left(\frac{4}{\pi \mu^2} \right)^\epsilon \frac{\Gamma(1 + \epsilon)^2}{\Gamma(1-\epsilon)} \notag \\ & \hspace{2cm} \cdot \int d^{2 + 2\epsilon} {\bm z} \frac{({\bm x} - {\bm z})\cdot ({\bm y} - {\bm z}) }{[({\bm x} - {\bm z})^2]^{1 + \epsilon}[({\bm y} - {\bm z})^2]^{1 + \epsilon}} \quad t^b W({\bm x}) \otimes W({\bm y}) t^a \left[U^{ba}({\bm z}) - \delta^{ab} \right]\,, \end{align} and \begin{align} \label{eq:4point13} & \parbox{4.5cm}{\center \includegraphics[height=3cm]{4point2.pdf}} + \parbox{4.5cm}{\center \includegraphics[height=3cm]{4point3.pdf}} = \notag \\ & = \ln \left(\frac{\Lambda_a \Lambda_b}{s_0} \right) \frac{\alpha_s \Gamma^2(1+\epsilon)}{2 \pi^2 \Gamma(1-\epsilon)} \left(\frac{4 }{\pi \mu^2} \right)^\epsilon \int d^{2 + 2 \epsilon} {\bm z} \frac{({\bm x} - {\bm z}) \cdot ({\bm y} - {\bm z})}{[({\bm x} - {\bm z})^2]^{1 + \epsilon}[({\bm y} - {\bm z})^2]^{1 + \epsilon}} \notag \\ & \hspace{0cm} \left[ t^a W({\bm x}) \otimes W({\bm y}) t^a + W({\bm x})t^a \otimes t^a W({\bm y}) - t^a W({\bm x}) \otimes t^a W({\bm y}) - W({\bm x})t^a \otimes W({\bm y})t^a \right] \,. \end{align} We then obtain for the complete correlator of two Wilson lines \begin{align} \label{eq:20} & \parbox{5cm}{\includegraphics[width=5cm]{gen_1loop.pdf}} = \ln \left(\frac{\Lambda_a \Lambda_b}{s_0} \right) \frac{\alpha_s \Gamma^2(1+\epsilon)}{2 \pi^2 \Gamma(1-\epsilon)} \left(\frac{4 }{\pi \mu^2} \right)^\epsilon \int d^{2 + 2 \epsilon} {\bm z} \notag \\ & \bigg\{ \frac{({\bm x} - {\bm z}) \cdot ({\bm x} - {\bm z})}{[({\bm x} - {\bm z})^2]^{1 + \epsilon}[({\bm x} - {\bm z})^2]^{1 + \epsilon}} \left[ 2 U^{ab}({\bm z}) t^b W({\bm x}) t^a - t^a t^a W({\bm x}) - W({\bm x})t^at^a \right]\otimes W({\bm y})\notag \\ &+
\frac{({\bm y} - {\bm z}) \cdot ({\bm y} - {\bm z})}{[({\bm y} - {\bm z})^2]^{1 + \epsilon}[({\bm y} - {\bm z})^2]^{1 + \epsilon}} \left[ 2 U^{ab}({\bm z}) t^b W({\bm y}) t^a - t^a t^a W({\bm y}) - W({\bm y})t^at^a \right]\otimes W({\bm x})\notag \\ &+ \frac{({\bm x} - {\bm z}) \cdot ({\bm y} - {\bm z})}{[({\bm x} - {\bm z})^2]^{1 + \epsilon}[({\bm y} - {\bm z})^2]^{1 + \epsilon}} \bigg[-2 t^a W({\bm x}) \otimes t^a W({\bm y}) -2 W({\bm x})t^a \otimes W({\bm y}) t^a \notag \\ & \hspace{4cm} + 2 U^{ab}({\bm z}) t^a W({\bm x}) \otimes W({\bm y})t^b + 2 U^{ab}({\bm z}) t^a W({\bm y}) \otimes W({\bm x})t^b \bigg] \bigg\}\,. \end{align} Using the above result it is straightforward to obtain the high energy evolution of an ensemble of $n$ Wilson lines as \begin{align} \label{eq:2.6} - \Lambda_a \frac{d}{d \Lambda_a} \left[W({\bm x}_1) \otimes \ldots \otimes W({\bm x}_n) \right] &= \sum_{i,j = 1}^{n} H_{ij} \left[W({\bm x}_1) \otimes \ldots \otimes W({\bm x}_n) \right]\,, \end{align} with the Balitsky-JIMWLK Hamiltonian \begin{align} \label{eq:2.7} H_{ij} & = \frac{\alpha_s \Gamma^2(1+\epsilon)}{2 \pi^2 \Gamma(1-\epsilon)} \left(\frac{4 }{\pi \mu^2} \right)^\epsilon \int d^{2+2\epsilon} {\bm z} \frac{({\bm x}_i - {\bm z}) \cdot ({\bm x}_j - {\bm z})} {[({\bm x}_i - {\bm z})^2]^{1+\epsilon} [({\bm x}_j - {\bm z})^2]^{1+\epsilon}} \notag \\ & \hspace{4cm} \left[ T^a_{L,i} T^a_{L,j} + T^a_{R,i} T^a_{R,j} - U^{ab}({\bm z}) \left(T^a_{L,i} T^b_{R,j} + T^a_{L,j} T^b_{R,i} \right) \right]. \end{align} In the presentation we closely follow \cite{Caron-Huot:2013fea} and define $T^a_{L,i}$ and $T^a_{R,i}$ as the group generators acting to the left (L) or to the right (R) on the Wilson line $W({\bm x}_i)$, \begin{align} \label{eq:2.5} T^a_{L,i} [W({\bm x}_i)]& \equiv t^a W({\bm x}_i), & T^a_{R,i} [W({\bm x}_i)]& \equiv W({\bm x}_i) t^a.
\end{align} \section{Conclusion and Outlook} \label{sec:conclusion-summary} We investigated to what extent it is possible to obtain, within Lipatov's high energy effective action, gluon and quark propagators which resum the interaction with a strong (reggeized) gluon background field, and whether the effective action allows one to rederive Balitsky-JIMWLK evolution. We found that both questions can be answered positively. To arrive at this result, we used a special parametrization of the gluonic field, already proposed in \cite{Lipatov:1995pn}. This parametrization both allows an expansion of the gluonic field around the reggeized gluon field, which is assumed to be strong, and provides consistent gauge transformation properties for the parametrized gluonic field. Expanding the resulting effective Lagrangian up to quadratic order in quantum fluctuations around the strong reggeized gluon field, we obtain a new kind of gluon-gluon-reggeized gluon vertex as well as the usual quark-quark-reggeized gluon vertex. Both vertices allow for a straightforward resummation of the reggeized gluon field to all orders into Wilson lines. The resulting resummed gluon and quark propagators agree in $v_-=0$ light-cone gauge with the corresponding propagators which include the all-order resummation of a gluonic background field in light-cone gauge. The latter are frequently employed in the calculation of perturbative observables in the presence of high parton densities, in particular within the Color Glass Condensate effective theory. Finally we demonstrated that these propagators allow one to recover the complete (leading order) Balitsky-JIMWLK evolution equation for Wilson lines from Lipatov's high energy effective action. \\ Our results demonstrate that high energy factorization as formulated within the Balitsky-JIMWLK evolution and high energy factorization as formulated within Lipatov's high energy effective action are equivalent.
At the same time, Lipatov's high energy effective action provides additional flexibility for actual calculations, since it allows one to adopt, in a straightforward manner, different gauges for determining the quantum fluctuations of the gluonic field. Moreover, a matching of results obtained within the BFKL formalism and Lipatov's high energy effective action on the one hand, and light-front perturbation theory and the Color Glass Condensate on the other, should now be facilitated. As an important side result, we confirm, within the context of Lipatov's high energy effective action, the determination of the reggeized gluon from Balitsky-JIMWLK evolution proposed in \cite{Caron-Huot:2013fea}. \\ Future lines of research need to address the mentioned matching of NLO results obtained within the two different frameworks, as well as the explicit calculation of new NLO observables. Even though a number of important NLO results have been obtained in the past for the scattering of a perturbative projectile on a dense target, see {\it e.g.} \cite{nlocgc}, there is still a need to refine the available tools for such calculations. Another direction of research needs to address the possible description of central production processes at high parton densities, as required {\it e.g.} for the analysis of nucleus-nucleus collisions and/or high multiplicity events. While the current study is limited to the quasi-elastic region, such a program requires the investigation of the corresponding effective action which contains induced terms for both plus and minus reggeized gluon fields. This is also related to the question of whether such central production terms can be formulated in a way which automatically gives rise to the Balitsky-JIMWLK hierarchy. Related to this question is the possible extension of Balitsky-JIMWLK evolution to exclusive observables, generalizing already existing results \cite{Hentschinski:2005er}.
\subsubsection*{Acknowledgments} Conversations with Lev~N.~Lipatov, Jochen Bartels, Agustin Sabio Vera, Grigorios Chachamis, Jose~D.~Madrigal Martinez and Krzysztof Kutak on the effective action and related topics, as well as collaboration with Alejandra Ayala, Jamal Jalilian-Marian and Maria-Elena Tejeda Yeomans at an early stage of this project, are gratefully acknowledged.
\section{Introduction} In classical graph problems, we are given a graph and the task is to \emph{find} a feasible solution. In \emph{reconfiguration problems}, we are given two feasible solutions -- an input configuration and a target configuration -- and the task is to find a sequence of moves that turns the input configuration into the target configuration. \subparagraph{Recoloring problems.} Perhaps the most natural example of a reconfiguration problem is \emph{recoloring}: we are given a graph $G$ and two proper $k$-colorings of $G$, let us call them $s$ and $t$, and the task is to find a way to turn $s$ into $t$ by changing the color of one node at a time, such that each intermediate step is a proper coloring. More formally, the task is to find a sequence of proper $k$-colorings $x_0, x_1, \dotsc, x_L$ such that $x_0 = s$ and $x_L = t$, and $x_{i-1}$ and $x_i$ differ only at one node. Such problems have been studied extensively from the perspective of graph theory and classical centralized algorithms, but the problems are typically inherently \emph{global} and solutions are long, i.e., $L$ is large in the worst case. In this work we introduce recoloring problems in a \emph{distributed} setting. We show that there are natural relaxations of the problem that are attractive from the perspective of distributed graph algorithms: they admit solutions that are short and that can be found \emph{locally} (e.g., in a sublinear number of rounds). Distributed recoloring problems are closely related to classical symmetry-breaking problems that have been extensively studied in the area of distributed graph algorithms, but as we will see, they also introduce new kinds of challenges.
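The sequence condition above is mechanical to verify; the following is a minimal sketch in Python (the graph encoding and function names are ours, purely for illustration):

```python
# Sketch: verify a classical recoloring sequence x_0, ..., x_L as defined
# above. Graphs are adjacency lists (dict: node -> list of neighbors),
# colorings are dicts (node -> color). Names are illustrative only.

def is_proper(graph, coloring):
    """No edge may be monochromatic."""
    return all(coloring[u] != coloring[v] for u in graph for v in graph[u])

def is_recoloring_sequence(graph, s, t, seq):
    """seq must start at s, end at t, keep every coloring proper,
    and change the color of exactly one node per step."""
    if seq[0] != s or seq[-1] != t:
        return False
    if not all(is_proper(graph, x) for x in seq):
        return False
    return all(sum(a[v] != b[v] for v in a) == 1
               for a, b in zip(seq, seq[1:]))

# A path on 3 nodes with 3 colors: recolor the middle node, then an endpoint.
path = {0: [1], 1: [0, 2], 2: [1]}
s = {0: 1, 1: 2, 2: 1}
t = {0: 1, 1: 3, 2: 2}
seq = [s, {0: 1, 1: 3, 2: 1}, t]
print(is_recoloring_sequence(path, s, t, seq))  # True
```

The distributed variant defined below relaxes the last condition: instead of a single node, any independent set of nodes may change color in one step.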
\begin{figure} \centering \includegraphics[scale=0.8]{3color0.pdf} \quad \includegraphics[scale=0.8]{3color1.pdf} \quad \includegraphics[scale=0.8]{3color2.pdf} \quad \includegraphics[scale=0.8]{3color3.pdf} \caption{Distributed recoloring: the input coloring $s$ can be seen on the left and the target coloring $t$ on the very right. The illustration shows one possible way to reach the target coloring in three steps by, in each step, changing the colors of an independent set of nodes.} \label{fig:recolor} \end{figure} \subparagraph{Distributed recoloring.} We will work in the usual LOCAL model of distributed computing: Each node $v \in V$ of the input graph $G = (V,E)$ is a computer, and each edge $e \in E$ represents a communication link between two computers. Computation proceeds in synchronous rounds: each node sends a message to each of its neighbors, receives a message from each of its neighbors, and updates its local state. Eventually, all nodes have to announce their local outputs and stop; the running time of the algorithm is the number of communication rounds until all nodes stop. We assume that the algorithm is deterministic, and each node is labeled with a unique identifier. In \emph{distributed recoloring}, each node $v \in V$ is given two colors, an \emph{input color} $s(v)$ and a \emph{target color} $t(v)$. It is guaranteed that both $s$ and $t$ form a proper coloring of $G$, that is, $s(u) \ne s(v)$ and $t(u) \ne t(v)$ for all $\{u,v\} \in E$. Each node $v \in V$ has to output a finite \emph{recoloring schedule} $x(v) = \bigl(x_0(v), x_1(v), \dotsc, x_\ell(v)\bigr)$ for some $\ell = \ell(v)$. For convenience, we define $x_i(v) = x_\ell(v)$ for $i > \ell(v)$. We say that the node \emph{changes its color at time $i > 0$} if $x_{i-1}(v) \ne x_i(v)$; let $C_i$ be the set of nodes that change their color at time $i$. Define $L = \max_v \ell(v)$; we call $L$ the \emph{length} of the solution. 
A solution is feasible if the following holds: \begin{enumerate} \item $x_0 = s$ and $x_L = t$, \item $x_i$ is a proper coloring of $G$ for all $i$, \item $C_i$ is an independent set of $G$ for all $i$. \end{enumerate} The key differences between distributed recoloring and classical recoloring are: \begin{enumerate} \item Input and output are given in a distributed manner: no node knows everything about $G$, $s$, and $t$, and no node needs to know everything about $x_i$ or the length of the solution $L$. \item We do not require that only one node changes its color; it is sufficient that adjacent nodes do not change their colors simultaneously. \end{enumerate} See Figure~\ref{fig:recolor} for a simple example of distributed recoloring steps. Note that a solution to distributed recoloring is locally checkable in the following sense: to check that a solution is feasible, it is enough to check independently for each edge $\{u,v\} \in E$ that the recoloring sequences $x(u)$ and $x(v)$ are compatible with each other, and for each node $v \in V$ that $x(v)$ agrees with $s(v)$ and $t(v)$. However, distributed recoloring is not necessarily an LCL problem \cite{naor1995can} in the formal sense, as the length of the output per node is not a priori bounded. We emphasize that we keep the following aspects well-separated: what is the complexity of \emph{finding} the schedule, and how \emph{long} the schedules are. Hence it makes sense to ask, e.g., if it is possible to find a schedule of length $O(1)$ in $O(\log n)$ rounds (note that the physical reconfiguration of the color of the node may be much slower than communication and computation). \subparagraph{Recoloring with extra colors.} Recoloring is computationally very hard, as solutions do not always exist, and deciding whether a solution exists is PSPACE-hard. 
It is in a sense analogous to problems such as finding an \emph{optimal} node coloring of a given graph; such problems are not particularly interesting in the LOCAL model, as the complexity is trivially global. To make the problem much more interesting, we relax it slightly. We define a \emph{$k+c$ recoloring problem} (a.k.a.\ \emph{$k$-recoloring with $c$ extra colors}) as follows: \begin{itemize} \item We are given colorings with $s(v), t(v) \in [k]$. \item All intermediate solutions must satisfy $x_i(v) \in [k+c]$. \end{itemize} Here we use the notation $[n] = \{1,2,\dotsc,n\}$. \begin{figure}[b] \centering \includegraphics{line.pdf} \caption{In the input graph, a bipartition is given. Hence the target coloring can be reached in three steps, using one extra color.} \label{fig:line} \end{figure} The problem of $k+c$ recoloring is meaningful also beyond the specific setting of distributed recoloring. For example, here is a very simple observation: \begin{lem}\label{lem:bipartite} Recoloring with $1$ extra color is always possible in any bipartite graph, with a distributed schedule of length $L = 3$. \end{lem} \begin{proof} Let the bipartition be $V = V_1 \cup V_2$. First each node $v \in V_1$ switches to color $k+1$, then each $v \in V_2$ switches to color $t(v)$, and finally each $v \in V_1$ switches to color $t(v)$. \end{proof} Incidentally, it is easy to extend this result to show that $k$-recoloring with $c = \chi-1$ extra colors is always possible with a schedule of length $O(c)$ in a graph with chromatic number $\chi$, and in particular $k$-recoloring with $c = k-1$ extra colors is trivial. Figure~\ref{fig:line} gives an illustration of recoloring a bipartite graph with one extra color. As a corollary, we can solve distributed $k+1$ recoloring in trees in $O(n)$ rounds, with a schedule of length $O(1)$: simply find a bipartition and apply the above lemma. However, is this optimal?
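The three-step schedule from Lemma~\ref{lem:bipartite} is easy to simulate; here is a minimal sketch in Python (names and graph encoding are ours, purely illustrative):

```python
# Sketch of the schedule in the bipartite-recoloring lemma above: on a
# bipartite graph with parts V1, V2 and k base colors, recolor s -> t
# using the extra color k+1. Names are illustrative only.

def bipartite_schedule(s, t, V1, V2, k):
    """Return the colorings x_0, x_1, x_2, x_3 of the 3-step schedule."""
    x0 = dict(s)
    x1 = {v: k + 1 if v in V1 else s[v] for v in s}   # V1 -> extra color
    x2 = {v: t[v] if v in V2 else x1[v] for v in s}   # V2 -> target color
    x3 = dict(t)                                      # V1 -> target color
    return [x0, x1, x2, x3]

# A 4-cycle (bipartite) with k = 2 base colors:
cycle = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
s = {0: 1, 1: 2, 2: 1, 3: 2}
t = {0: 2, 1: 1, 2: 2, 3: 1}
steps = bipartite_schedule(s, t, V1={0, 2}, V2={1, 3}, k=2)
# every intermediate coloring is proper:
print(all(x[u] != x[v] for x in steps for u in cycle for v in cycle[u]))  # True
```

Each step recolors only one side of the bipartition, so the set of nodes changing color at any time is an independent set, exactly as the distributed model requires.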
Clearly finding a bipartition in a tree requires $\Omega(n)$ rounds, but can we solve recoloring with $1$ extra color strictly faster? These are examples of problems that we study in this work. We initiate the study of the distributed complexity of recoloring, with the ultimate objective of finding a complete characterization of graph families and parameters $k$, $c$, and $L$ such that distributed $k+c$ recoloring with schedules of length $L$ can be solved efficiently in a distributed setting. As we will see, the problem turns out to be surprisingly rich already in very restricted settings such as grids or $3$-regular trees. Many of the standard lower bound techniques fail; in particular, known results on the hardness of graph coloring do not help here, as we are already given two proper colorings of the input graph. \subparagraph{Contributions.} Our main contribution is a comprehensive study of the complexity of distributed recoloring in various graph families; the results are summarized in Tables \ref{tab:cycles}--\ref{tab:3reg}. The highlights of this work are the following results: \begin{enumerate} \item \textbf{\boldmath An algorithm for $3+1$ recoloring on trees.} On trees, $3$-recoloring is inherently global: it is easy to see that the worst-case running time is $\Theta(n)$ and the worst-case schedule length is $\Theta(n)$. With one extra color, we can trivially find a schedule of length $O(1)$ in time $O(n)$. However, we show that we can do much better: it is possible to find a schedule of length $O(1)$ in time $O(\log n)$. Here the key component is a new algorithm that solves the following sub-problem in $O(\log n)$ rounds: given a tree, find an independent set $I$ such that the removal of $I$ splits the tree into components of size $1$ or $2$. This subroutine may find applications in other contexts as well. These results are presented in Section~\ref{sec:treepositive}.
\item \textbf{\boldmath An algorithm for $3+1$ recoloring for graphs of degree at most $3$.} In general graphs, $3+1$ recoloring is not necessarily possible; we can construct a small $4$-regular graph in which $3+1$ recoloring is not solvable. However, we will show that if the maximum degree of the graph is at most $3$ (i.e., we have a \emph{subcubic} graph), $3+1$ recoloring is always possible. Moreover, we can find a schedule of length $O(\log n)$ in time $\polylog(n)$. This result is presented in Section~\ref{sec:subcubicpositive}. \item \textbf{\boldmath Complexity of $3+1$ recoloring on toroidal grids.} We also give a complete characterization of $3+1$ recoloring in one particularly interesting family of $4$-regular graphs: $2$-dimensional toroidal grids (a.k.a.\ torus grid graphs, Cartesian graph products of two cycles). While the case of $1$-dimensional grids (cycles) is easy to characterize completely, the case of $2$-dimensional grids turns out to be much more interesting. Here our main contribution is the following graph-theoretic result: in an $h \times w$ toroidal grid, $3+1$ recoloring is possible for any input if and only if (i)~both $h$ and $w$ are even, or (ii)~$h = 4$, or (iii)~$w = 4$. In all other cases we can find $3$-colorings $s$ and $t$ such that $t$ is not reachable from $s$ even if we can use $1$ extra color. As a simple corollary, $3+1$ recoloring is inherently global from the perspective of distributed computing, and it takes $\Theta(n)$ rounds to solve even if we have the promise that e.g.\ $h$ and $w$ are even (and hence a schedule of length $\Theta(1)$ trivially exists). This result is presented in Section~\ref{sec:grids}. \end{enumerate} Additionally, several simple upper and lower bounds and corollaries are given in Sections \ref{sec:simpleub} and~\ref{sec:simplecor}. \subparagraph{Motivation.} As a simple application scenario, consider the task of reconfiguring a system of unmanned aerial vehicles. 
Here each node is an aircraft, the color corresponds to an altitude range, and an edge corresponds to a pair of aircraft whose paths might cross and hence need to be kept at different cruising altitudes to avoid collisions. For each aircraft there are designated areas in which they can safely change their altitude. To reconfigure the entire system, we could take all aircraft to these areas simultaneously. However, this may be a costly maneuver. Another possibility is to reserve a longer timespan during which a set $X$ of aircraft may change their altitudes, whenever they happen to be at convenient locations. Now if we let two aircraft $u, v \in X$ change their altitudes during the same timespan, we need to ensure that any intermediate configuration is safe, regardless of whether $u$ or $v$ happens to change its altitude first. Furthermore, we would like to complete reconfiguration in minimal time (short schedule), and we would like to waste precious airspace as little as possible and hence keep as few altitude levels as possible in reserve for reconfiguration (few extra colors). This scenario -- as well as many similar scenarios, such as the task of reconfiguring the frequency bands of radio transmitters in a manner that never causes interference, even if the clocks are not perfectly synchronized -- gives rise to the following variant of distributed recoloring that we call \emph{weak recoloring}: if two adjacent nodes $u$ and $v$ change their color simultaneously at time $i$, then $\bigl\{ x_{i-1}(u), x_i(u) \bigr\} \cap \bigl\{ x_{i-1}(v), x_i(v) \bigr\} = \emptyset$, that is, we have a proper coloring regardless of whether $u$ or $v$ changes its color first. Let us now contrast weak recoloring with \emph{strong recoloring}, in which adjacent nodes never change colors simultaneously. Trivially, strong recoloring solves weak recoloring.
But the converse is also true up to constant factors: if we have $k$ input colors and a solution to weak recoloring of length $L$, then we can also find a solution to strong recoloring of length $kL$. To see this, we can implement one weak recoloring step in $k$ strong recoloring substeps such that in substep $j$ nodes of input color $j$ change their colors. As our focus is on the case of a small number of input colors, we can equally well study strong or weak recoloring here; all of our results hold for either of them. While weak recoloring is closer to applications, we present our results using strong recoloring, as it has a more convenient definition. \section{Related work} \subparagraph{Reconfiguration and recoloring.} Recoloring, and more generally combinatorial reconfiguration, has received attention over the past few years. Combinatorial reconfiguration problems consist of finding step-by-step transformations between two feasible solutions such that all intermediate results are also feasible. They model dynamic situations where a given solution is in place and has to be modified, but no disruption can be afforded. We refer the reader to the nice survey~\cite{Jansurvey} for a full overview, and focus here on node coloring as a reference problem. As mentioned earlier, we introduce distributed recoloring here, but centralized recoloring has been studied extensively before. Two main models are considered: \begin{enumerate} \item \emph{Node recoloring:} at each step, we can recolor a node into a new color that does not appear in its neighborhood. \item \emph{Kempe recoloring:} at each step, we can switch the colors in a bichromatic component (we operate a Kempe change). \end{enumerate} The usual questions are of the form: Given a graph $G$ and an integer $k$, are all its $k$-colorings equivalent (up to node or Kempe recolorings)? What is the complexity of deciding that? What is the maximum number of operations needed to go from one to the other?
All of those questions can also be asked for two specific $k$-colorings $s$ and $t$ of $G$. Are they equivalent (up to node or Kempe recolorings)? What is the complexity of deciding that? What is the maximum number of operations needed to go from $s$ to $t$ in $G$? While the complexity of questions related to Kempe recoloring remains elusive, the problems related to node recoloring are typically PSPACE-hard~\cite{bonsma2009finding}. The related question of deciding equivalence when a bound on the length of an eligible recoloring sequence is given as part of the input has also been considered \cite{bonsma2014complexity}. We know that the maximum number of operations needed to go from one $3$-coloring to another in a tree is $\Theta(n)$~\cite{cereceda2011finding}. While $(\Delta+1)$-recoloring a graph with no node of degree more than $\Delta$ is not always possible, having $\Delta+2$ colors always suffices \cite{jerrum1995very}, and there are also meaningful results to obtain for the problem of $(\Delta+1)$-recoloring~\cite{feghali2016reconfigurations}. Two other settings have received special attention: characterizing fully when $3$-recoloring is possible \cite{cereceda2011finding,cereceda2009mixing}, and guaranteeing short reconfiguration sequences in the case of sparse graphs for various notions of sparse \cite{bonamy2013recoloring,bousquet2016fast}. Kempe changes were introduced in 1879 by Kempe in his attempted proof of the Four Color Theorem~\cite{kempe79}. Though this proof was fallacious, the Kempe change technique has proved useful in, for example, the proof of the Five Color Theorem and a short proof of Brooks' Theorem. Most works on the topic initially focused on planar graphs, but significant progress was recently obtained in more general settings. 
We know that all $k$-colorings of a graph with no node of degree more than $k$ are equivalent (w.r.t.\ Kempe changes), except in the case of one very specific graph: the $3$-prism~\cite{bonamy2015conjecture,feghali2017kempe,meyniel3}. Note that some other variants have also been studied, perhaps most notably the question of how many nodes to recolor at once so that the graph can be recolored \cite{mcdonald2015connectedness}. While we will not discuss Kempe recoloring in our work, we point out that recoloring with extra colors is closely connected to Kempe recoloring: Kempe recolorability implies recolorability with one extra color (while the converse is not true). Hence the negative results related to one extra color also hold for Kempe recoloring. \subparagraph{Distributed graph coloring.} Panconesi and Srinivasan~\cite{panconesi1995local} have used Kempe operations to design efficient distributed algorithms for graph coloring with $\Delta$ colors. Other than that we are not aware of prior work on distributed recoloring. On the other hand, the literature on standard distributed coloring is vast. The best overview on the topic is the book by Barenboim and Elkin \cite{barenboim2013distributed}; the most important recent developments include the following results. There is a randomized $O\bigl(\log^* n + 2^{\sqrt{\log \log n}}\bigr)$-time algorithm for $(\Delta + 1)$-coloring by Chang et al.~\cite{Chang2018}. In the case of trees, the number of colors can be reduced to $\Delta$ at the cost of increasing the runtime to $O(\log_{\Delta} \log n)$~\cite{Chang2016}. On the deterministic side, the best known $(\Delta + 1)$-coloring algorithm requires $O(\Delta^{3/4} \log \Delta + \log^* n)$ communication rounds~\cite{Barenboim2015}. In the case of trees, the \emph{rake-and-compress} method by Miller and Reif gives a $3$-coloring in time $O(\log n)$~\cite{MillerR89}.
However, there seems to be surprisingly little technology that one can directly transfer between the coloring domain and the recoloring domain. Toroidal grids are a good example: by prior work \cite{brandt2017lcl}, $3$-coloring is an inherently global problem, and by the present work, $3+1$ recoloring is an inherently global problem, but the arguments that are used in these proofs are very different (despite the fact that both of them are related to the idea that a ``parity'' is preserved). \section{Preliminaries} In this article, each graph $G=(V,E)$ is a simple undirected graph where $V$ represents its node set and $E$ its edge set. For a subset of nodes $S \subseteq V$, we denote by $G[S]$ the subgraph induced by $S$. For a node $u \in V$, we denote by $N(u)$ the \emph{open neighborhood} of $u$, that is, the set of all neighbors of $u$, and by $N[u]$ its \emph{closed neighborhood}, i.e.\ the set $N(u) \cup \{u\}$. For a subset $S \subseteq V$, its closed neighborhood corresponds to the set $\bigcup_{u \in S} N[u]$. The \emph{degree} of a node is its number of neighbors. A \emph{$k$-regular graph} is a graph in which all nodes have degree $k$, a \emph{cubic graph} is the same thing as a $3$-regular graph, and a \emph{subcubic graph} is a graph in which all nodes have degree at most $3$. A \emph{tree} is a connected acyclic graph, and a \emph{$k$-regular tree} is a tree in which each node has degree $1$ or $k$. A \emph{maximal independent set} (MIS) $S \subseteq V$ is an independent set (i.e.\ a set of pairwise non-adjacent nodes) such that for each non-MIS node $u \notin S$ we have $N(u) \cap S \neq \emptyset$. Given a graph $G=(V,E)$, a \emph{list-assignment} is a function which assigns to each node $v \in V$ a list of colors $L(v)$. An \emph{$L$-coloring} of $G$ is a function $c$ that assigns to each node $v \in V$ a color $c(v) \in L(v)$ such that for any two adjacent nodes $u, v \in V$, we have $c(u) \neq c(v)$.
A graph $G$ is \emph{$k$-list-colorable} if it admits an \emph{$L$-coloring} for every list-assignment where the list of each node is of size at least $k$. Note that list-coloring generalizes node-coloring: node-coloring is the special case where each node receives the same list. The notion of \emph{$L$-recoloring} is the natural generalization of $k$-recoloring: the same elementary steps are considered, and every intermediate coloring must be an $L$-coloring. In order to output a recoloring schedule, it is convenient to consider the question of recoloring a graph $G$ from a coloring $s$ to a coloring $t$, rather than the more symmetric question of whether the two colorings are equivalent in the given setting. We take this opportunity to note that we can reverse time, and hence a recoloring schedule from $s$ to $t$ also yields a recoloring schedule from $t$ to $s$. In the rest of the paper, we therefore address the two questions as one. \section{Warmup -- simple results}\label{sec:simpleub} We will start by presenting a number of simpler upper and lower bounds that also serve as an introduction to the topic of distributed recoloring. \subsection{Upper bounds} \begin{lem}\label{lem:minusone} In any graph, $k+c$ recoloring for $c = k-1$ is possible in $0$ communication rounds, with a schedule of length $O(k)$. \end{lem} \begin{proof} Generalize the idea of Lemma~\ref{lem:bipartite}; note that the schedule of node $v$ depends only on $s(v)$ and $t(v)$, and not on the colors of any other node around it. \end{proof} \begin{lem}\label{lem:3paths} In paths and trees, $3$-recoloring is possible in $O(n)$ rounds, with a schedule of length $O(n)$. \end{lem} \begin{proof} In $O(n)$ rounds, every node can gather full knowledge of the graph. The statement can be proved by induction on the size of the tree, but we delay a formal proof to Section \ref{sec:treepositive} and more precisely Lemma \ref{lem:recoltree}.
\end{proof} \begin{lem}\label{lem:3plus1paths} In cycles and paths, $3+1$ recoloring is possible in $O(1)$ rounds, with a schedule of length $O(1)$. \end{lem} \begin{proof} Use the input coloring to find a maximal independent set $I$. Nodes of $I$ switch to color $4$. Nodes of $V\setminus I$ induce paths of length $O(1)$, apply Lemma~\ref{lem:3paths} there to recolor each of the paths by brute force, without using the extra color $4$. Finally, nodes of $I$ switch to their target colors. \end{proof} \begin{lem}\label{lem:beyonddelta} Let $G$ be a graph of maximum degree at most $\Delta$, and let $k \ge \Delta+2$. Then $k$-recoloring with $c$ extra colors is at least as easy as $(k-1)$-recoloring with $c+1$ extra colors. \end{lem} \begin{proof} Given a $k$-coloring $x$, we can construct a $(k-1)$-coloring $x'$ as follows: all nodes of color $k$ pick a new color from $\{1,2,\dotsc,k-1\}$ that is not used by any of their neighbors. Note that $x \to x'$ is a valid step in distributed recoloring (nodes of color $k$ form an independent set), and by reversing the time, also $x' \to x$ is a valid step. Hence to recolor $s \to t$ with $c$ extra colors, it is sufficient to recolor $s' \to t'$ with $c+1$ extra colors (color $k$ no longer appears in the input and target colorings and can be used as an auxiliary color during recoloring). Then we can put everything together to form a recoloring schedule $s \to s' \to t' \to t$, with only constant overhead in the running time and schedule length. \end{proof} \begin{lem}\label{lem:4plus1subcubic} In subcubic graphs, $4+1$ recoloring is possible in $O(1)$ rounds, with a schedule of length $O(1)$. \end{lem} \begin{proof} Use the input coloring to find a maximal independent set $I$ in constant time. Nodes of $I$ switch to color $5$. Delete $I$; we are left with a graph $G'$ that consists of paths and isolated nodes. 
Apply Lemmas \ref{lem:beyonddelta} and \ref{lem:3plus1paths} to solve $4+0$ recoloring in each connected component of $G'$. Finally, nodes of $I$ can switch to their target colors. \end{proof} \begin{lem}\label{lem:4plus2grids} In toroidal grids, $4+2$ recoloring is possible in $O(1)$ rounds, with a schedule of length $O(1)$. \end{lem} \begin{proof} Pick a maximal independent set $I$, color it with color $6$, and delete; we have a graph of degree at most $3$ and $1$ extra color. Apply Lemma~\ref{lem:4plus1subcubic} to recolor it, and finally nodes of $I$ can switch to their target colors. \end{proof} \begin{lem}\label{lem:5plus1grids} In toroidal grids, $5+1$ recoloring is possible in $O(1)$ rounds, with a schedule of length $O(1)$. \end{lem} \begin{proof} Pick a maximal independent set $I$, color it with color $6$, and delete; we have a graph of degree at most $3$ and $5+0$ colors remaining. Apply Lemma~\ref{lem:beyonddelta} to reduce to the case of $4+1$ colors, and then use Lemma~\ref{lem:4plus1subcubic}. \end{proof} \begin{lem}\label{lem:MISplusforest} For any graph $G$ on $n$ nodes, for any two $k$-colorings $\alpha, \beta$ of $G$, if we can compute in $O(f(n))$ rounds an MIS $S$ such that $V\setminus S$ induces a forest of trees of depth at most $O(d(n))$, we can compute in $O(f(n)+d(n))$ rounds how to $(k+1)$-recolor $G$ from $\alpha$ to $\beta$ with a schedule of length $O(d(n))$. \end{lem} \begin{proof} The idea is quite simple: each node in $S$ switches to color $k+1$. We then use the algorithm described in the proof of Lemma \ref{lem:recoltree} to find a recoloring schedule of length $O(d(n))$ for each connected component after the removal of $S$. After that, each node of $S$ can switch to its final color.
\end{proof} \subsection{Lower bounds} \begin{lem}\label{lem:needsextra} Recoloring without any extra colors is not possible in the following settings for some pairs of input and target colorings: \begin{enumerate}[label=(\alph*),noitemsep] \item $2$-recoloring paths or trees. \item $2$-recoloring cycles. \item $3$-recoloring cycles. \item $2$-recoloring toroidal grids. \item $3$-recoloring toroidal grids. \item $4$-recoloring toroidal grids. \item $5$-recoloring toroidal grids. \item $2$-recoloring cubic graphs. \item $3$-recoloring cubic graphs. \item $4$-recoloring cubic graphs. \end{enumerate} \end{lem} \begin{proof} We can construct a source coloring in which no node can make a move, and a target coloring different from the input coloring. Here we show examples of the source coloring $s$; the target coloring can be constructed by $t(v) \equiv s(v) + 1 \bmod k$: \begin{enumerate}[label=(\alph*)] \item A path with $2$ nodes, $s = \begin{bmatrix}1 & 2\end{bmatrix}$. \item A $4$-cycle, $s = \begin{bmatrix}1 & 2 & 1 & 2\end{bmatrix}$. \item A $3$-cycle, $s = \begin{bmatrix}1 & 2 & 3\end{bmatrix}$. \item A $4 \times 4$ grid, $ s = \begin{bsmallmatrix} 1 & 2 & 1 & 2 \\ 2 & 1 & 2 & 1 \\ 1 & 2 & 1 & 2 \\ 2 & 1 & 2 & 1 \end{bsmallmatrix} $. \item A $3 \times 3$ grid, $ s = \begin{bsmallmatrix} 1 & 2 & 3 \\ 2 & 3 & 1 \\ 3 & 1 & 2 \end{bsmallmatrix} $. \item A $4 \times 4$ grid, $ s = \begin{bsmallmatrix} 1 & 2 & 3 & 4 \\ 3 & 4 & 1 & 2 \\ 1 & 2 & 3 & 4 \\ 3 & 4 & 1 & 2 \end{bsmallmatrix} $. \item A $5 \times 5$ grid, $ s = \begin{bsmallmatrix} 1 & 2 & 3 & 4 & 5 \\ 3 & 4 & 5 & 1 & 2 \\ 5 & 1 & 2 & 3 & 4 \\ 2 & 3 & 4 & 5 & 1 \\ 4 & 5 & 1 & 2 & 3 \end{bsmallmatrix} $. \item Complete bipartite graph $K_{3,3}$, with $s$ constructed from the bipartition. \item Prism graph: connect the nodes of a $3$-cycle colored with $\begin{bmatrix}1 & 2 & 3\end{bmatrix}$ to another $3$-cycle colored with $\begin{bmatrix}2 & 3 & 1\end{bmatrix}$, in this order. \item Complete graph $K_4$.
\qedhere \end{enumerate} \end{proof} \begin{lem}\label{lem:3pathslb} In paths and trees, $3$-recoloring without extra colors requires $\Omega(n)$ rounds and produces schedules of length $\Omega(n)$ in the worst case. This holds also in the case of $3$-regular trees. \end{lem} \begin{proof} Consider a long path with the input coloring $1,2,3,1,2,3,\dotsc,1,2,3$ and observe that a node of degree $2$ can change its color only after at least one of its neighbors has changed its color. We can embed such a path also in a $3$-regular tree. \end{proof} \begin{lem}\label{lem:4treelb} In trees, $4$-recoloring without extra colors requires $\Omega(\log n)$ time and produces schedules of length $\Omega(\log n)$ in the worst case. \end{lem} \begin{proof} It is sufficient to construct a $3$-regular tree in which each node is surrounded by nodes of all other colors: color the root with color $1$ and its neighbors with colors $2$, $3$, and $4$. Then recursively for each leaf node of color $x$ that is already adjacent to a node of color $y$, add two new neighbors with the colors $\{1,2,3,4\} \setminus \{x,y\}$, etc., and continue in a balanced manner such that the distance between the root and the nearest leaf is logarithmic. Now a non-leaf node can change its color only after one of its neighbors has changed its color. \end{proof} \section{Recoloring algorithm for trees}\label{sec:treepositive} In this section, we provide two efficient algorithms for recoloring and list-recoloring trees. Note that Theorem~\ref{thm:tree4} is tight; see the full version for more details. \begin{theorem}\label{thm:tree3-1} For any $k\in \mathbb{N}$, for every tree $T$ on $n$ nodes, for any two $k$-colorings $\alpha, \beta$ of $T$, we can compute in $O(\log n)$ rounds how to recolor $T$ from $\alpha$ to $\beta$ with $1$ extra color and a schedule of length $O(1)$.
\end{theorem} \begin{theorem}\label{thm:tree4} For every tree $T$ on $n$ nodes and any list assignment $L$ of at least $4$ colors to every node of $T$, for any two $L$-colorings $\alpha, \beta$ of $T$, we can compute in $O(\log n)$ rounds how to $L$-recolor $T$ from $\alpha$ to $\beta$ with a schedule of length $O(\log n)$. \end{theorem} We first discuss how to efficiently compute an independent set with some desirable properties. For this, we use a simple modification of the \emph{rake and compress} method by Reif and Miller~\cite{MillerR89}. More precisely, we iterate rake and compress operations, and label nodes based on the step at which they are reached. We then use the labels to compute an independent set satisfying the given properties. We finally explain how to make use of the special independent set to obtain an efficient recoloring algorithm, in each case. \begin{defn}\label{def:hlabel} A \emph{light $h$-labeling} is a labeling $V \to [h]$ such that for any $i \in [h]$: \begin{enumerate} \item Any node labeled $i$ has at most two neighbors with label $\geq i$, at most one of which with label $\geq i+1$. \item No two adjacent nodes labeled $i$ both have a neighbor with label $\geq i+1$. \end{enumerate} \end{defn} \begin{lem}\label{lem:treelabeling} There is an $O(\log n)$-round algorithm that finds a light $(2 \log n)$-labeling of a tree. \end{lem} \begin{proof} As discussed above, we merely use a small variant of the \emph{rake and compress} method. At step $i$, we remove all nodes of degree $1$ and all nodes of degree $2$ that belong to a chain of at least three nodes of degree $2$, and assign them label $i$. One can check that this yields a light labeling. It remains to discuss how many different labels are used, i.e., how many steps it takes to delete the whole tree. Let us argue that no node remains after $2 \log n$ steps.
Let $T$ be a tree, let $V_1$ (resp.\ $V_2$, $V_3$) be the set of nodes of degree $1$ (resp.\ $2$, $\geq 3$) in the tree, and let $T'$ be the tree obtained from $T$ by replacing any maximal path of nodes of degree $2$ with an edge. Note that $|V(T')|=|V_1|+|V_3|$. Let $W$ be the set of nodes in $T$ that have degree $2$ with both neighbors of degree $2$. Note that $|V_2 \setminus W|\leq 2 |E(T')| =2(|V_1|+|V_3|-1)$. Note also that $|V_1|\geq |V_3|$, simply by the fact that there are fewer edges than nodes in a tree. It follows that $|W|\geq |V_2|- 2(|V_1|+|V_3|-1)= |V(T)|-|V_1|-|V_3|-2(|V_1|+|V_3|-1) \geq |V(T)|-6|V_1|$. Consequently, we obtain $|W|+|V_1|\geq \frac{|V|}6$. In other words, at every step, we remove in particular $W \cup V_1$, hence at least a sixth of the nodes. It follows that after $k$ steps, the number of remaining nodes is at most $n \cdot \bigl(\frac56\bigr)^k$. Note that this is less than $1$ once $k \geq 2 \log n$. \end{proof} We now discuss how to make use of light $h$-labelings. \begin{lem}\label{lem:uselightlabels} For any graph $T$, any $3$-coloring $\alpha$ of $T$, and any integer $h$, let $L$ be a light $h$-labeling of $T$. There is an $O(h)$-round algorithm that finds a maximal independent set $S$ such that $T \setminus S$ only has connected components with $1$ or $2$ nodes. \end{lem} \begin{proof} In brief, we proceed as follows: at step $i = h, h-1, \dotsc, 1$, we first add all nodes of label $i$ which have a neighbor of label $\geq i+1$ that is not in $S$ (they form an independent set by definition of a light labeling), then use the $3$-coloring to obtain a fast greedy algorithm to make $S$ maximal on the nodes of label $\geq i$. The detailed algorithm is given as Algorithm~\ref{algo:stabletreedecomposition}. \begin{algorithm}[t] \caption{\label{algo:stabletreedecomposition}\textsc{Decomposing into an independent set and components of size $\le2$}} \begin{algorithmic}[1] \REQUIRE A tree $T$, a 3-coloring $\alpha$ and a light $h$-labeling of $T$.
\ENSURE A subset $S$ of $V(T)$ such that $T[S]$ is an independent set and every connected component of $T[V\setminus S]$ has size at most $2$. \FOR{$i$ from $h$ down to 1} \FOR{$u$ with label $i$ (in parallel)} \STATE If $u$ has a neighbor of higher label that is not in $S$, add $u$ to $S$ \ENDFOR \FOR{$j$ from $1$ to 3} \FOR{$u$ with label $i$ and color $j$ (in parallel)} \STATE If $N(u)\cap S=\emptyset$, add $u$ to $S$ \ENDFOR \ENDFOR \ENDFOR \end{algorithmic} \end{algorithm} The fact that the output $S$ is an independent set follows directly from the construction, as does the $O(h)$-round running time. We note that no connected component of $T \setminus S$ contains nodes of different labels, due to the first operation at step $i$. It remains to argue that for any $i$, the nodes of label $i$ that do not belong to $S$ only form connected components of size $1$ or $2$. Assume for a contradiction that there is a node $u$ of label $i$ which has two neighbors $v$ and $w$, also of label $i$, such that none of $\{u,v,w\}$ belongs to $S$. By definition of a light labeling, the node $u$ has no other neighbor of label $\geq i$, a contradiction to the fact that we build $S$ to be an MIS among the nodes of label $\geq i$. \end{proof} Combining Lemmas~\ref{lem:treelabeling} and~\ref{lem:uselightlabels}, and observing that a $3$-coloring of a tree can be obtained in $O(\log n)$ rounds, we immediately obtain the following. \begin{lem}\label{lem:treeMIS} There is an $O(\log n)$-round algorithm that finds an MIS in a tree, such that every component induced by non-MIS nodes is of size one or two. \end{lem} We are now ready to prove Theorem~\ref{thm:tree3-1}. \begin{proof}[Proof of Theorem~\ref{thm:tree3-1}] First, we use Lemma~\ref{lem:treeMIS} to obtain in $O(\log n)$ rounds an MIS $S$ such that $T \setminus S$ only has connected components of size $1$ or $2$. We recolor each node in $S$ with the extra color.
Remove $S$, and recolor each component from $\alpha$ to $\beta$ without using any extra colors; this can be done in $O(1)$ recoloring rounds. Each node in $S$ can then go directly to its color in~$\beta$. \end{proof} Moving on to the list setting, we have to use a more involved approach since there is no global extra color that we can use. Before discussing $4$-list-recoloring, we discuss $3$-list-recoloring. For the sake of intuition, we start by presenting an algorithm for $3$-recoloring trees, and explain afterwards how to adapt it for the list setting. \begin{lem}\label{lem:recoltree} For every tree $T$ with radius at most $p$ and for any two $3$-colorings $\alpha, \beta$ of $T$, we can compute in $O(p)$ rounds how to $3$-recolor $T$ from $\alpha$ to $\beta$ with a schedule of length $O(p)$. \end{lem} \begin{proof} Let $c\colon V \to [3]$ be a $3$-coloring of $T$. We introduce an identification operation: Given a leaf $u$ and a node $v$ such that $u$ and $v$ have a common neighbor $w$, we recolor $u$ with $c(v)$, and from then on we pretend that $u$ and $v$ are a single node. In other words, we delete $u$ from the tree we are considering, and reflect any recoloring of $v$ to the node $u$. Note that these operations can stack up: the recoloring of a single node might be reflected on an arbitrarily large independent set in the initial tree. We now briefly describe an algorithm to recolor a $3$-coloring into a $2$-coloring $c'$ in $O(p)$ rounds, with a schedule of length $O(p)$. First, root $T$ at a node $r$ which is at distance at most $p$ from any node of $T$. Any node of $T$ which is not adjacent to the root has a \emph{grandparent}, which is defined as its parent's parent. Then, at each step, we consider the set $A$ of leaves of $T$ which have a grandparent, if any. We identify each leaf in $A$ with its grandparent (note that the notion of grandparent guarantees that this operation is well-defined, and that the operation results in $A$ being deleted).
This process stops when $T$ consists only of the root $r$ and its children. We select one of the children arbitrarily and identify the others with it. This results in $T$ being a single edge. Note that the color partition of $c'$ is compatible with the identification operations, as we only ever identify nodes at even distance from each other. We then recolor $T$ into $c'$: this is straightforward in the realm of $3$-recoloring. We can now choose a $2$-coloring of $T$ (this can be done in $O(p)$ rounds), and apply the above algorithm to $3$-recolor both $\alpha$ and $\beta$ to that $2$-coloring. This results in a $3$-recoloring between $\alpha$ and $\beta$ with a schedule of length $O(p)$. \end{proof} The same idea can be adapted to list coloring: \begin{lem}\label{lem:recoltreelist} For every tree $T$ with radius at most $p$, for any list assignment $L$ of at least $3$ colors to each node, for any two $L$-colorings $\alpha, \beta$ of $T$, we can compute in $O(p)$ rounds how to $L$-recolor $T$ from $\alpha$ to $\beta$ with a schedule of length $O(p)$. \end{lem} \begin{proof} We adapt the identification operation introduced in the proof of Lemma~\ref{lem:recoltree}, merely by adapting the notion of having the same color. Let $u$ and $v$ be two nodes with a common neighbor $w$. We say $u$ has the same color as $v$ \emph{with respect to $w$} in the following cases: \begin{itemize} \item If $L(u)\neq L(w)$, then $u$ is colored with the smallest element of $L(u) \setminus L(w)$ \item If $L(u)=L(w)$ and the color of $v$ belongs to $L(u)$, then $u$ is colored the same as $v$ \item If $L(u)=L(w)$ and the color of $v$ does not belong to $L(u)$, then $u$ is colored with the smallest element of $L(w)$ that differs from the color of $w$ \end{itemize} Therefore, when we identify a leaf $u$ with a node $v$ that has a common neighbor $w$ with $u$, we first assign to $u$ the same color as $v$ with respect to $w$, and from then on we pretend that $u$ and $v$ are a single node.
In other words, any recoloring of $v$ is mirrored on $u$ so that at each step, the node $u$ has the same color as $v$ with respect to $w$. Note that in some cases it may be that the color of $u$ does not actually change when the color of $v$ does. When the operations stack up, i.e.\ a node $u$ is identified with a node $v$ which is identified with a node $x$, we do not claim transitivity of the relation. In particular, $u$ and $x$ have no common neighbor, hence them having the same color is not well-defined. We merely enforce that $u$ has the same color as $v$ with respect to their common neighbor, and that $v$ has the same color as $x$ with respect to their common neighbor. We emphasize that the definition of having the same color only depends on the list assignment. In particular, let us consider the situation once no more identification operations can be performed, i.e.\ the tree has been identified into an edge (see the proof of Lemma~\ref{lem:recoltree}). The coloring of the edge entirely determines the coloring of the whole tree, regardless of the initial coloring. Therefore, we can pick an arbitrary $L$-coloring of the edge, and recolor both $\alpha$ and $\beta$ into the corresponding $L$-coloring of the tree in $O(p)$ rounds with a schedule of length $O(p)$. This results in computing in $O(p)$ rounds an $L$-recoloring between $\alpha$ and $\beta$ with a schedule of length~$O(p)$. \end{proof} To prove Theorem~\ref{thm:tree4}, we first split the tree into small components. We slightly adapt the proof of Lemma~\ref{lem:uselightlabels}: \begin{lem}\label{lem:uselightlabelsforlists} For any tree $T$, any $3$-coloring $\alpha$ of $T$, and any integer $h$, let $L$ be a light $h$-labeling of $T$. There is an $O(h)$-round algorithm that finds a maximal independent set $S$ such that no node has two neighbors in $S$ and $T \setminus S$ only has connected components of radius $O(h)$.
\end{lem} \begin{proof} The algorithm is far simpler than Algorithm~\ref{algo:stabletreedecomposition}. We compute the set $R$ of nodes with no neighbor of higher label. We note that $G[R]$ is a collection of paths and cycles. We compute an independent set $S \subseteq R$ that is maximal subject to the property that no node in $R$ has two neighbors in $S$. Note that by definition of a light labeling, no node outside of $R$ may have two neighbors in $R$ (hence in $S$). It remains to argue that $T \setminus S$ only has connected components of radius $O(h)$. We point out that every connected component of $T[R]$ contains an element of $S$. Therefore, any connected subset of nodes of $T[R]$ has at most one neighbor of higher label, since $T$ is a tree. Together with the fact that any connected component of $T[R \setminus S]$ has at most $2$ nodes, we derive the conclusion. \end{proof} Now we are ready to prove Theorem~\ref{thm:tree4}. \begin{proof}[Proof of Theorem~\ref{thm:tree4}] Compute (in $O(\log n)$ rounds) an independent set $S$ such that any two elements of $S$ are at distance at least $3$ from each other and every connected component of $T \setminus S$ has radius $O(\log n)$. By Lemmas~\ref{lem:treelabeling} and~\ref{lem:uselightlabelsforlists} and the fact that a $3$-coloring of a tree can be computed in $O(\log n)$ rounds, we compute (in $O(\log n)$ rounds) an $L$-coloring $\gamma$ of $T \setminus S$ such that every node adjacent to an element $u \in S$ has a color different from $\alpha(u)$ and $\beta(u)$. Note that this coloring exists since any tree is $2$-list-colorable. Use Lemma~\ref{lem:recoltreelist} to recolor each connected component of $T \setminus S$ from $\alpha$ to $\gamma$. Recolor every element of $S$ with its color in $\beta$. Use Lemma~\ref{lem:recoltreelist} to recolor each connected component of $T \setminus S$ from $\gamma$ to $\beta$. Note that this yields an $L$-recoloring of $T$ from $\alpha$ to $\beta$ with a schedule of length $O(\log n)$.
\end{proof} Note that a direct corollary of Theorem \ref{thm:tree4} is that for any two $k$-colorings $\alpha, \beta$ of a tree with $k\ge4$, a schedule of length $\Theta(\log n)$ can be found in $\Theta(\log n)$ rounds. \section{Recoloring algorithm for subcubic graphs}\label{sec:subcubicpositive} In this section we study recoloring in subcubic graphs (graphs of maximum degree at most $3$); our main result is summarized in the following theorem: \begin{theorem}\label{thm:cubic3-1} For every subcubic graph $G$ on $n$ nodes, for any two $3$-colorings $\alpha, \beta$ of $G$, we can compute in $O(\log^2 n)$ rounds how to recolor $G$ from $\alpha$ to $\beta$ with $1$ extra color and a schedule of length $O(\log n)$. \end{theorem} A \emph{theta} is formed of three internally node-disjoint paths between two nodes. Note that in particular if a graph contains two cycles sharing at least one edge, then it contains a theta. We write $B^{k}(u)$ for the set of nodes at distance at most $k$ from $u$. We show here, roughly, that there is around every node a nice structure that we can use to design a valid greedy algorithm for the whole graph. This proof is loosely inspired by one in~\cite{ABBE}. \begin{lem}\label{lem:thereisthetaor2} For every subcubic graph $G$ on $n$ nodes, for every node $u \in V(G)$, there is a node of degree at most $2$ or a theta contained in $B^{2 \log n}(u)$. \end{lem} \begin{proof} Assume for a contradiction that there is a subcubic graph $G$ on $n$ nodes with a node $u$ such that $B^{2 \log n}(u)$ contains no node of degree at most $2$ nor any theta. Let $B$ be the set of nodes at distance at most $2 \log n$ from $u$, and $B^-$ the set of nodes at distance at most $2 \log n-1$ from $u$. Let $\mathcal{C}$ be the set of cycles of $G$ contained in $B$. Note that cycles in $\mathcal{C}$ are edge-disjoint by assumption on $u$ and thus node-disjoint since every node in $B$ has degree $3$.
We select a set $\mathcal{E}$ by picking for every $C \in \mathcal{C}$ an arbitrary edge in $E(C)$ among those with both endpoints farthest from $u$. Note that $|\mathcal{E}|=|\mathcal{C}|$, and that by choice of $\mathcal{E}$, every edge in $B$ with both endpoints at the same distance from $u$ is selected in~$\mathcal{E}$. Therefore, the distance to $u$ yields a natural orientation of the edges in $B \setminus \mathcal{E}$, where each edge is oriented from its endpoint closer to $u$ toward the farther one. We also note that by choice of $\mathcal{E}$, for any edge $wx$ in $\mathcal{E}$ such that $x$ is farther away from $u$ than $w$, the node $x$ has another neighbor $y$ at the same distance from $u$ as $w$. In that case, note that the edge $xy$ does not belong to $\mathcal{E}$. We claim as a consequence that the distance from $u$ is the same in $B$ as in $B \setminus \mathcal{E}$. For any node $w \in B$, we say an outgoing edge is \emph{useful} if it does not belong to $\mathcal{E}$. In addition to the above remarks, we make two observations: \begin{enumerate} \item Every node in $B^-$ has at least one useful edge. \item If a node $w$ in $B^-$ has only one useful edge $wx$, then $x$ has two outgoing useful edges. \end{enumerate} Let us consider the graph $H$ obtained from $G[B]$ by removing all edges in $\mathcal{E}$. We claim that every node in $B$ has degree at least $2$ in $H$, and that no two adjacent nodes in $H$ have degree $2$: this is immediate from the observations and remarks above. We also observe that $H$ is a tree. Let $H'$ be the graph obtained from $H$ by suppressing every node of degree two (replacing it and its two incident edges with a single edge). We note that $H'$ is a $3$-regular tree with root $u$ and with no leaf at distance less than $\log n$ from $u$. It follows that $H'$ contains at least $1+3(2^{\log n}-1) > n$ nodes, a contradiction.
\end{proof} \begin{lem}\label{lem:decompositionextension} Let $G$ be a subcubic graph, let $p$ be an integer, and let $\mathcal{A}$ be a collection of thetas and nodes of degree $\leq 2$ in $G$, pairwise at distance at least $2$ from each other. Let $r \geq 1$ be such that no element of $\mathcal{A}$ has diameter more than $\frac{r}2$. If the nodes of $G \setminus (\bigcup_{A \in \mathcal{A}} A)$ can be partitioned into $S$ and $F$ such that $G[S]$ is an independent set and $G[F]$ is a forest of radius at most $p$, then there is a partition $(S',F')$ of $\bigcup_{A \in \mathcal{A}} A$ such that $G[S \cup S']$ is an independent set and $G[F \cup F']$ is a forest of radius at most $p+r$. \end{lem} \begin{proof} Our construction ensures that any pair of nodes that are not connected in $G[F]$ are not connected in $G[F \cup F']$ either. Hence, it suffices to prove that the statement holds for a single element of $\mathcal{A}$, since the elements of $\mathcal{A}$ are by hypothesis non-adjacent. Let $A$ be an element of $\mathcal{A}$. We consider two cases depending on whether $A$ is a node of degree at most $2$ or is a theta. \begin{itemize} \item If $A$ consists of a node $v$ of degree $1$ or $2$, we put $v$ in $F'$ if it has a neighbor in $S$, and in $S'$ otherwise. Note that since $v$ has at most one neighbor in $F$, the radius of $F \cup F'$ is at most one more than that of $F$. \item If $A$ consists of a theta with endpoints $u$ and $v$ and three node-disjoint paths $P_1, P_2, P_3$, we prove independently that each $P_i$ admits a partition that is compatible with $u$ being set to $S'$ and $v$ to $F'$, in such a way that the connected component of $F \cup F'$ that contains $v$ is contained in $F'$. We do this by induction on the number of nodes in $P_i$. If $P_i$ has no internal node, the conclusion immediately follows. If all the neighbors of $P_i$ at distance $\geq 3$ from $u$ along $P_i$ are in $S$, we set all of $P_i$ to $F'$.
Otherwise, let $w$ be the neighbor of $P_i$ in $F$ with the smallest distance ($\geq 3$) to $u$ along $P_i$. Let $x$ be the neighbor of $w$ in $P_i$. We apply induction on $P_i \setminus \{\textrm{nodes closer to $u$ than $x$}\}$, with $x$ in the role of $u$. Note that $x$ is distinct from $v$ and not adjacent to $u$, by construction. The nodes between $u$ and $x$ on $P_i$ are added to $F'$. Note that these nodes are connected to at most one component of $G[F]$, through the first node of $P_i$. We extend the resulting decomposition to the rest of $P_i$ by setting all corresponding nodes to $F'$. \qedhere \end{itemize} \end{proof} \begin{lem}\label{lem:algostableforestdecompositionrunningtime} Let $G$ be a subcubic graph on $n$ nodes. We can compute in $O(\log^2 n)$ rounds a partition $(S,F)$ of the nodes of $G$ such that $G[S]$ is an independent set and $G[F]$ is a forest of radius $O(\log n)$. \end{lem} \begin{proof} To this end, we combine the previous lemmas in Algorithm~\ref{algo:stableforestdecomposition}. The algorithm computes a decomposition as desired and runs in $O(\log n)+RS(n)$ rounds, where $RS(n)$ is the number of rounds necessary to compute a $(4 \log n, 8 \log n)$-ruling set in a subcubic graph. We derive from~\cite{panconesi1992improved} that $RS(n) = O(\log^2 n)$, hence the conclusion. \end{proof} \begin{algorithm} \caption{\label{algo:stableforestdecomposition}\textsc{Decomposing into a small forest and an independent set}} \begin{algorithmic}[1] \REQUIRE A subcubic graph $G$. \ENSURE A decomposition $(F,S)$ of $V(G)$ such that $G[S]$ is an independent set and every connected component of $G[F]$ has radius at most $\log n$.
\FOR{$u$ in $V(G)$ (in parallel)} \STATE Acquire knowledge on $B^{2 \log n}(u)$ \STATE Select in the node set of $B^{2 \log n}(u)$ a configuration $C(u)$ that is a minimal theta or a node of degree $1$ or $2$ \ENDFOR \STATE Compute a $(4 \log n, 8 \log n)$-ruling set $X$ in $G$ \STATE Define $\mathcal{A}=\cup_{u \in X}\{C(u)\}$ \STATE Compute the distance of every node in $G$ to an element of $\mathcal{A}$ \STATE Let $F=S=\emptyset$ \FOR{$i=8 \log n$ downto $1$} \STATE Extend the partition $(F,S)$ to the nodes at distance $i$ from $\mathcal{A}$, more precisely: \STATE Each connected component is a path or cycle where no internal node has an already assigned neighbor; let $U_i$ be the set of the internal nodes \STATE Assuming a pre-computed MIS on each layer for the sets $U_i$, assign that MIS to $S$ \STATE Extend greedily on the remaining nodes (which form bounded-size components), assigning nodes to $S$ when possible, to $F$ when not \ENDFOR \STATE Extend the partition $(F,S)$ to the nodes belonging to an element of $\mathcal{A}$ using Lemma~\ref{lem:decompositionextension} \end{algorithmic} \end{algorithm} We are now ready to prove Theorem \ref{thm:cubic3-1}, which we do in a similar fashion to Theorem \ref{thm:tree3-1}. \begin{proof} Use Lemma \ref{lem:algostableforestdecompositionrunningtime}, and obtain a decomposition $(S,F)$ as stated. Recolor all of $S$ to the extra color, then use Lemma \ref{lem:recoltreelist} on each connected component of $G[F]$ so that all nodes of $F$ reach their target color (remember that each connected component of $G[F]$ has radius $O(\log n)$). Finally recolor each node of $S$ with its target color. \end{proof} \section{Recoloring in toroidal grids}\label{sec:grids} In this section we study toroidal grids (torus grid graphs). Throughout this section, an $h \times w$ toroidal grid is the Cartesian graph product of cycles of lengths $h$ and $w$; we assume $h \ge 3$ and $w \ge 3$.
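This product definition can be made concrete with a short Python sketch (the helper name \texttt{toroidal\_grid\_edges} is ours, not from the paper) that builds the edge set of the $h \times w$ toroidal grid; every node ends up with degree exactly $4$:

```python
from itertools import product

def toroidal_grid_edges(h, w):
    """Edge set of the h-by-w toroidal grid, i.e. the Cartesian
    product of a cycle of length h and a cycle of length w.

    Nodes are pairs (i, j); each node is adjacent to the four nodes
    obtained by shifting one coordinate by 1 modulo h or modulo w.
    """
    edges = set()
    for i, j in product(range(h), range(w)):
        edges.add(frozenset({(i, j), ((i + 1) % h, j)}))  # wrap vertically
        edges.add(frozenset({(i, j), (i, (j + 1) % w)}))  # wrap horizontally
    return edges
```

For $h, w \ge 3$ (the assumption above) this yields $2hw$ distinct edges and a $4$-regular graph.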
A toroidal grid can be constructed from an $h \times w$ grid by wrapping both boundaries around into a torus. In the full version, we show that e.g.\ $2+0$, $3+0$, and $4+0$ recoloring is not always possible, and by Lemma~\ref{lem:minusone} e.g.\ $2+1$, $3+2$, and $4+3$ recoloring is trivial. The first nontrivial case is $3+1$ recoloring; in this section we give a complete characterization of $3+1$ recolorability in toroidal grids: \begin{theorem}\label{thm:grids} Let $G$ be the $h \times w$ toroidal grid graph. Then $3+1$ recoloring is possible for any source and target coloring in the following cases: (i)~both $h$ and $w$ are even, or (ii)~$h = 4$, or (iii)~$w = 4$. For all other cases it is possible to construct $3$-colorings $s$ and $t$ such that $t$ is not reachable from $s$ by valid recoloring operations using $1$ extra color. \end{theorem} This also shows that $3+1$ recoloring is an inherently global problem in toroidal grids, even if we have a promise that recoloring is possible. For example, if there were a sublinear-time distributed recoloring algorithm $A$ for $6 \times w$ grids for an even $w$, we could apply the same algorithm in a $6 \times w$ grid with an odd $w$ (the algorithm cannot tell the difference between these two cases in time $o(w)$), and hence we could solve recoloring in $6 \times w$ grids for all $w$, which contradicts Theorem~\ref{thm:grids}. By a similar argument, distributed recoloring in non-toroidal grids is also an inherently global problem. \subparagraph{Existence.} To prove Theorem~\ref{thm:grids}, let us start with the positive results. If $h$ and $w$ are even, the graph is bipartite and recoloring is always possible by Lemma~\ref{lem:bipartite}. The remaining cases are covered by the following lemma. \begin{lem}\label{lem:grids4xw} Let $G$ be a $4 \times w$ toroidal grid for any $w \ge 3$, and let $s$ and $t$ be any $3$-colorings. Then there exists a recoloring from $s$ to $t$ with one extra color.
\end{lem} \begin{proof} We first take a maximal independent set $S$ of pairs of consecutive columns, i.e.\ a set of column indices grouped into pairs of the form $(i,i+1)$, such that for every column $j \notin S$ at least one of $j-1$ and $j+2$ belongs to $S$, and every column $i \in S$ is such that precisely one of $i-1$ and $i+1$ is in $S$. Note that indices are taken modulo $w$. For every pair in $S$, we select a maximal independent set of the corresponding columns. The resulting union yields an independent set $R$. We then greedily make $R$ maximal columnwise away from $S$. We recolor $R$ with the extra color. It remains to argue that $G\setminus R$ can reach its target coloring. We note that since leaves are not problematic, removing $R$ essentially boils down to removing the columns with index in $S$. Note that the remaining connected components are cycles of length 4. Cycles of length 4 can always be $3$-recolored. Note that the above proof in fact yields an $O(\log n)$-round algorithm that outputs an $O(1)$-length schedule. We can improve it to an $O(1)$-round algorithm, simply by pointing out that there is only a finite number of possible colorings for a column, and two adjacent columns cannot have the same coloring. This allows us to compute $S$ in constant time. \end{proof} \subparagraph{Non-existence.} Let us now prove the negative result. Our high-level plan is as follows. Let $G$ be an $h \times w$ toroidal grid. We will look at all \emph{tiles} of size $2 \times 2$. If $G$ is properly colored with $k$ colors, so is each tile. The following two tiles are of special importance to us; we call these tiles of \emph{type A}: \[ \begin{bmatrix} {\color{blue} 2} & {\color{red} 3} \\ {\color{red} 3} & {\color{black} 1} \end{bmatrix}, \quad \begin{bmatrix} {\color{black} 1} & {\color{red} 3} \\ {\color{red} 3} & {\color{blue} 2} \end{bmatrix}. \] We are interested in the \emph{number} of type-A tiles.
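Counting type-A tiles is easy to mechanize. The following minimal Python sketch (the helper name \texttt{count\_type\_a} is ours) slides a $2 \times 2$ window over a coloring, wrapping around both boundaries; the A-parity of a coloring is then this count modulo $2$:

```python
# The two type-A tiles, written as 2x2 arrays of colors.
TYPE_A = ([[2, 3], [3, 1]], [[1, 3], [3, 2]])

def count_type_a(c):
    """Number of type-A 2x2 tiles in a coloring of a toroidal grid.

    `c` is a list of rows; tile indices wrap around both boundaries,
    so an h-by-w grid has exactly h*w tiles.
    """
    h, w = len(c), len(c[0])
    count = 0
    for i in range(h):
        for j in range(w):
            tile = [[c[i][j], c[i][(j + 1) % w]],
                    [c[(i + 1) % h][j], c[(i + 1) % h][(j + 1) % w]]]
            count += tile in TYPE_A
    return count
```

On the $3 \times 3$ colorings $s$ and $t$ of the example that follows, this returns $3$ and $0$ respectively.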
For example, consider the following colorings of the $3 \times 3$ toroidal grid: \[ s = \begin{bmatrix} {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} \\ {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} \\ {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} \end{bmatrix}, \quad t = \begin{bmatrix} {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} \\ {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} \\ {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} \end{bmatrix}. \] Here $s$ contains $3$ tiles of type A (recall that we wrap around at the boundaries), while $t$ does not have any tiles of type A. In particular, $s$ has an odd number of type-A tiles and $t$ has an even number of type-A tiles. In brief, we say that the \emph{A-parity} of $s$ is odd and the A-parity of $t$ is even. It turns out that this is sufficient to show that recoloring from $s$ to $t$ with one extra color is not possible: \begin{lem}\label{lem:gridsAparity} Let $G$ be a toroidal grid, and let $s$ and $t$ be two $3$-colorings. If $s$ and $t$ have different A-parities, then it is not possible to recolor $G$ from $s$ to $t$ with $1$ extra color. \end{lem} \begin{proof} Intuitively, we would like to prove that recoloring operations preserve the A-parity. Unfortunately, this is not the case, as we can use the extra color. For example, if you take a proper $3$-coloring $s$ with an odd A-parity and replace all nodes of color $3$ with color $4$, you will have a proper $4$-coloring $t$ with an even A-parity. It turns out that recoloring operations do preserve a certain kind of parity of $2 \times 2$ tiles, but it is much more involved than merely preserving type-A parity.
We introduce a new set of $2 \times 2$ tiles that we call \emph{type-B} tiles; the parity of the number of type-B tiles is called \emph{B-parity}: \input{typeb.tex} The collection of type-B tiles looks indeed somewhat arbitrary, but the following property is easy to verify: there is exactly one node of color $4$ in each type-B tile. Therefore if $x$ is a proper $3$-coloring, then the B-parity of $x$ is even. In particular, in $3+1$ recoloring, the initial coloring $s$ and the target coloring $t$ both have even B-parities. The magic behind the choice of type-B tiles is that they happen to satisfy the following property. \begin{lem}\label{lem:ABparity} If you change the color of one node in a properly $4$-colored grid so that the result is also a proper $4$-coloring, A-parity changes if and only if B-parity changes. \end{lem} \begin{proof} Enumerate all $3\times 3$ neighborhoods and all possible ways to change the middle node, and check that the claim holds. We have made available online a computer program that verifies the claim, together with a human-readable list of all cases.\footnote{\url{https://github.com/suomela/recoloring}} \end{proof} Now any $3+1$ recoloring from $s$ to $t$ can be serialized so that we change the color of one node at a time. We start with an even B-parity (no type-B tiles in $s$), apply Lemma~\ref{lem:ABparity} at each step, and eventually we arrive at an even B-parity (no type-B tiles in $t$). As B-parity did not change between $s$ and $t$, also A-parity cannot change between $s$ and $t$. \end{proof} Hence the A-parity of a coloring partitions the space of colorings into two classes that are not connected by $3+1$ recoloring operations. To complete the proof of Theorem~\ref{thm:grids}, it now suffices to construct a pair of $3$-colorings with different A-parities for each relevant combination of $h$ and $w$. \subparagraph{\boldmath Odd $h$, odd $w$.} First assume that both $h$ and $w$ are odd.
The simplest case is $h = w$. In that case we can simply have $3$s on the anti-diagonal and color all remaining areas with colors $1$ and $2$; this gives a coloring $s$ with $h$ type-A tiles (odd type-A parity). If we put $3$s on the diagonal, we can construct a coloring $t$ with $0$ type-A tiles (even type-A parity). Here are examples for $h = w = 5$: \[ s = \begin{bmatrix} {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} \\ {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} \\ {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} \\ {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} \\ {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} \end{bmatrix}, \quad t = \begin{bmatrix} {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} \\ {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} \\ {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} \\ {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} \\ {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} \end{bmatrix}. \] If $h \ne w$, assume w.l.o.g.\ that $h < w$, and in particular, $w = h + 2\ell$ for some $\ell$. Then we can take the diagonal construction for $h \times h$ and add $\ell$ copies of the two rightmost columns. 
For example, for $h = 5$ and $w = 9$ (and hence $\ell = 2$) we get \[ s = \begin{bmatrix} {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} \\ {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{red} 3} & {\color{black} 1} & {\color{red} 3} & {\color{black} 1} \\ {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} \\ {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} \\ {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} \end{bmatrix}, \quad t = \begin{bmatrix} {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} \\ {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} \\ {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} \\ {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{red} 3} & {\color{black} 1} & {\color{red} 3} & {\color{black} 1} \\ {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} \end{bmatrix}. 
\] Note that each additional pair of columns results in one new type-A tile in both $s$ and $t$; hence overall we will have $h + \ell$ type-A tiles in $s$ and $\ell$ type-A tiles in $t$, and as $h$ is odd, the parities differ. \subparagraph{\boldmath Odd $h$, even $w \ne 4$.} The remaining case is that exactly one of $h$ and $w$ is odd; w.l.o.g.\ assume that $h$ is odd and $w$ is even. Recall also that $w \ne 4$; hence we can focus on the case $h \ge 3$ and $w \ge 6$. For the base case $h = 3$ and $w = 6$ we can use the following configuration: in $s$ there is a sequence of $3$s that \emph{wraps around vertically twice}, while in $t$ the sequence of $3$s does not wrap around vertically. Here $s$ has $6$ type-A tiles (even A-parity), while $t$ has $3$ type-A tiles (odd A-parity): \[ s = \begin{bmatrix} {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} \\ {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} \\ {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} \end{bmatrix}, \quad t = \begin{bmatrix} {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} \\ {\color{red} 3} & {\color{black} 1} & {\color{red} 3} & {\color{black} 1} & {\color{red} 3} & {\color{black} 1} \\ {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} \end{bmatrix}. \] If $h = 3$ and $w = 6 + 2\ell$, we can take the above construction and pad it by duplicating the two leftmost columns $\ell$ times. Each duplication results in one new type-A tile in both configurations, maintaining the difference in parities.
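The base grids and the left-padding step can likewise be checked programmatically. The following Python sketch (not part of the paper's code; type-A tile counts are again omitted, as the tile set is defined in an external file) hard-codes the two $3 \times 6$ base colorings from the text and verifies that padding preserves properness.

```python
# Hedged sketch: h = 3, w = 6 base colorings and the padding step that
# duplicates the two leftmost columns ell times to reach w = 6 + 2*ell.

S3 = [[1, 2, 3, 1, 2, 3],
      [2, 3, 1, 2, 3, 1],
      [3, 1, 2, 3, 1, 2]]   # the 3s wrap around vertically twice
T3 = [[1, 2, 1, 2, 1, 2],
      [3, 1, 3, 1, 3, 1],
      [2, 3, 2, 3, 2, 3]]   # the 3s do not wrap around vertically

def pad_left(grid, ell):
    """Duplicate the two leftmost columns ell times."""
    return [row[:2] * ell + row for row in grid]

def is_proper(grid):
    """Check that horizontally and vertically adjacent nodes differ."""
    h, w = len(grid), len(grid[0])
    return (all(grid[i][j] != grid[i][j + 1] for i in range(h) for j in range(w - 1))
            and all(grid[i][j] != grid[i + 1][j] for i in range(h - 1) for j in range(w)))

assert is_proper(S3) and is_proper(T3)
assert is_proper(pad_left(S3, 1)) and is_proper(pad_left(T3, 1))  # h = 3, w = 8
```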
For example, for $h = 3$ and $w = 8$ we get \[ s = \begin{bmatrix} {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} \\ {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} \\ {\color{red} 3} & {\color{black} 1} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} \end{bmatrix}, \quad t = \begin{bmatrix} {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} \\ {\color{red} 3} & {\color{black} 1} & {\color{red} 3} & {\color{black} 1} & {\color{red} 3} & {\color{black} 1} & {\color{red} 3} & {\color{black} 1} \\ {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} \end{bmatrix}. \] Finally, if $h = 3 + 2\ell$, we can take the above construction for $h = 3$ and add $2\ell$ copies of the top row, alternating between the row and a shifted copy of it so that the coloring remains proper.
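The row-duplication step can be sketched as follows. This is a hedged Python sketch (not part of the paper's code): the inserted rows alternate between the top row and its cyclic shift, which keeps vertically adjacent rows distinct; the shift is safe here because the first and last entries of the top row differ.

```python
# Hedged sketch: grow h = 3 to h = 3 + 2*ell by inserting 2*ell extra
# copies of the top row, alternating the row with its cyclic shift.

def duplicate_top(grid, ell):
    top = grid[0]
    shifted = top[1:] + top[:1]  # cyclic left shift of the top row
    return [top, shifted] * ell + grid

def is_proper(grid):
    """Check that horizontally and vertically adjacent nodes differ."""
    h, w = len(grid), len(grid[0])
    return (all(grid[i][j] != grid[i][j + 1] for i in range(h) for j in range(w - 1))
            and all(grid[i][j] != grid[i + 1][j] for i in range(h - 1) for j in range(w)))

# the coloring s for h = 3, w = 8
S = [[1, 2, 1, 2, 3, 1, 2, 3],
     [2, 3, 2, 3, 1, 2, 3, 1],
     [3, 1, 3, 1, 2, 3, 1, 2]]
assert len(duplicate_top(S, 2)) == 7   # h = 7, w = 8
assert is_proper(duplicate_top(S, 2))
```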
For example, for $h = 7$ and $w = 8$ we get \[ s = \begin{bmatrix} {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} \\ {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} \\ {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} \\ {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} \\ {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} \\ {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} \\ {\color{red} 3} & {\color{black} 1} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} & {\color{red} 3} & {\color{black} 1} & {\color{blue} 2} \end{bmatrix}, \quad t = \begin{bmatrix} {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} \\ {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} \\ {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} \\ {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} \\ {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} & {\color{black} 1} & {\color{blue} 2} \\ {\color{red} 3} & {\color{black} 
1} & {\color{red} 3} & {\color{black} 1} & {\color{red} 3} & {\color{black} 1} & {\color{red} 3} & {\color{black} 1} \\ {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} & {\color{blue} 2} & {\color{red} 3} \end{bmatrix}. \] This way we preserve the basic topological structure: a sequence of $3$s wraps around vertically twice in $s$ and zero times in $t$. Note that in $s$ we get $2\ell$ new type-A tiles (as there are $2$ nodes of color $3$ in the top row) and in $t$ we get $0$ new type-A tiles, again preserving the parity difference. This concludes the proof of Theorem~\ref{thm:grids}. \section{Simple corollaries}\label{sec:simplecor} \begin{lem}\label{lem:MISplusforest} Assume that we are given a graph $G$ and initial and target colorings with $k \ge 3$ colors. Assume further that in $O(f(n))$ rounds we can find an independent set $I$ of $G$ such that $V \setminus I$ induces a forest of trees of depth at most $O(d(n))$. Then in $O(f(n)+d(n))$ rounds we can solve $k+1$ recoloring, with a schedule of length $O(d(n))$. \end{lem} \begin{proof} Each node in $I$ first switches to color $k+1$. We then use the algorithm described in the proof of Lemma~\ref{lem:recoltree} to find a recoloring with a schedule of length $O(d(n))$ for each connected component that remains after the removal of $I$. After that, each node of $I$ switches to its final color. \end{proof} \subparagraph{Acknowledgments.} The authors would like to thank Nicolas Bousquet for helpful discussions regarding the proof of Lemma~\ref{lem:thereisthetaor2}. \input{results.tex} \clearpage \bibliographystyle{plainurl}